A Simple Estimation Method for Aggregate Government Outsourcing
ERIC Educational Resources Information Center
Minicucci, Stephen; Donahue, John D.
2004-01-01
The scholarly and popular debate on the delegation to the private sector of governmental tasks rests on an inadequate empirical foundation, as no systematic data are collected on direct versus indirect service delivery. We offer a simple method for approximating levels of service outsourcing, based on relatively straightforward combinations of and…
Improved Design of Tunnel Supports : Executive Summary
DOT National Transportation Integrated Search
1979-12-01
This report focuses on improvement of design methodologies related to the ground-structure interaction in tunneling. The design methods range from simple analytical and empirical methods to sophisticated finite element techniques as well as an evalua...
Review of Thawing Time Prediction Models Depending on Process Conditions and Product Characteristics
Kluza, Franciszek; Spiess, Walter E. L.; Kozłowicz, Katarzyna
2016-01-01
Determining thawing times of frozen foods is a challenging problem because the thermophysical properties of the product change during thawing. A number of calculation models and solutions have been developed. The proposed solutions range from relatively simple analytical equations based on a number of assumptions to a group of empirical approaches that sometimes require complex calculations. In this paper analytical, empirical and graphical models are presented and critically reviewed. The conditions of solution, limitations and possible applications of the models are discussed. The graphical and semi-graphical models are derived from numerical methods. Numerical methods are not always practical, because running the calculations takes time and the specialized software and equipment are not always affordable. For these reasons, analytical-empirical models are more useful for engineering applications. It is demonstrated that there is no simple, accurate and feasible analytical method for thawing time prediction. Consequently, simplified methods are needed for thawing time estimation of agricultural and food products. The review reveals the need for further improvement of the existing solutions, or development of new ones, that will enable accurate determination of thawing time within a wide range of practical heat transfer conditions during processing.
A Simple Computer-Aided Three-Dimensional Molecular Modeling for the Octant Rule
ERIC Educational Resources Information Center
Kang, Yinan; Kang, Fu-An
2011-01-01
The Moffitt-Woodward-Moscowitz-Klyne-Djerassi octant rule is one of the most successful empirical rules in organic chemistry. However, the lack of a simple effective modeling method for the octant rule in the past 50 years has posed constant difficulties for researchers, teachers, and students, particularly the young generations, to learn and…
A simple method for the extraction and identification of light density microplastics from soil.
Zhang, Shaoliang; Yang, Xiaomei; Gertsen, Hennie; Peters, Piet; Salánki, Tamás; Geissen, Violette
2018-03-01
This article introduces a simple and cost-saving method developed to extract, distinguish and quantify light density microplastics of polyethylene (PE) and polypropylene (PP) in soil. A floatation method using distilled water was used to extract the light density microplastics from soil samples. Microplastics and impurities were identified using a heating method (3-5 s at 130 °C). The number and size of particles were determined using a camera (Leica DFC 425) connected to a microscope (Leica wild M3C, Type S, simple light, 6.4×). Quantification of the microplastics was conducted using a developed model. Results showed that the floatation method was effective in extracting microplastics from soils, with recovery rates of approximately 90%. After being exposed to heat, the microplastics in the soil samples melted and were transformed into circular transparent particles, while other impurities, such as organic matter and silicates, were not changed by the heat. Regression analysis of microplastic weight and particle volume (a calculation based on ImageJ software analysis) after heating showed the best fit (y = 1.14x + 0.46, R² = 99%, p < 0.001). Recovery rates based on the empirical model method were >80%. Results from field samples collected from North-western China prove that our method of repetitive floatation and heating can be used to extract, distinguish and quantify light density polyethylene microplastics in soils. Microplastics mass can be evaluated using the empirical model. Copyright © 2017 Elsevier B.V. All rights reserved.
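As an illustration of the quantification step described above, the sketch below fits a linear calibration of microplastic mass against image-derived particle volume and applies it to a new sample. The numerical values are hypothetical placeholders, not the study's data; only the general form of the reported fit (y = 1.14x + 0.46) is taken from the abstract.

```python
import numpy as np

# Hypothetical calibration data: particle volume estimated from image analysis
# (e.g., ImageJ particle area x assumed thickness) vs. measured plastic mass.
volume_mm3 = np.array([0.5, 1.0, 2.0, 3.5, 5.0, 7.5, 10.0])   # illustrative values
mass_mg    = np.array([1.0, 1.6, 2.8, 4.4, 6.2, 9.0, 11.9])   # illustrative values

# Ordinary least-squares fit of mass = a * volume + b, analogous to the
# reported calibration y = 1.14x + 0.46.
a, b = np.polyfit(volume_mm3, mass_mg, deg=1)

# Coefficient of determination (R^2) for the fit.
pred = a * volume_mm3 + b
ss_res = np.sum((mass_mg - pred) ** 2)
ss_tot = np.sum((mass_mg - mass_mg.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"mass ~ {a:.2f} * volume + {b:.2f},  R^2 = {r2:.3f}")

# Applying the calibration to a new field sample's total particle volume:
field_volume = 4.2  # mm^3, hypothetical
print(f"estimated microplastic mass: {a * field_volume + b:.2f} mg")
```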
Bayesian Analysis of Longitudinal Data Using Growth Curve Models
ERIC Educational Resources Information Center
Zhang, Zhiyong; Hamagami, Fumiaki; Wang, Lijuan Lijuan; Nesselroade, John R.; Grimm, Kevin J.
2007-01-01
Bayesian methods for analyzing longitudinal data in social and behavioral research are recommended for their ability to incorporate prior information in estimating simple and complex models. We first summarize the basics of Bayesian methods before presenting an empirical example in which we fit a latent basis growth curve model to achievement data…
SOURCE PULSE ENHANCEMENT BY DECONVOLUTION OF AN EMPIRICAL GREEN'S FUNCTION.
Mueller, Charles S.
1985-01-01
Observations of the earthquake source-time function are enhanced if path, recording-site, and instrument complexities can be removed from seismograms. Assuming that a small earthquake has a simple source, its seismogram can be treated as an empirical Green's function and deconvolved from the seismogram of a larger and/or more complex earthquake by spectral division. When the deconvolution is well posed, the quotient spectrum represents the apparent source-time function of the larger event. This study shows that with high-quality locally recorded earthquake data it is feasible to Fourier transform the quotient and obtain a useful result in the time domain. In practice, the deconvolution can be stabilized by one of several simple techniques. Application of the method is given.
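A minimal sketch of the spectral-division step follows. The water-level stabilization used here is one common "simple technique" for regularizing the division; the abstract does not specify which stabilization was used, and the synthetic signals are purely illustrative.

```python
import numpy as np

def spectral_division(large_event, small_event, water_level=0.01):
    """Deconvolve an empirical Green's function (the small event's seismogram)
    from a larger event's seismogram by spectral division, stabilized with a
    simple water level on the EGF power spectrum. Returns an estimate of the
    apparent source-time function of the larger event."""
    n = len(large_event)
    L = np.fft.rfft(large_event, n)
    S = np.fft.rfft(small_event, n)
    power = np.abs(S) ** 2
    floor = water_level * power.max()          # water-level stabilization
    quotient = L * np.conj(S) / np.maximum(power, floor)
    return np.fft.irfft(quotient, n)

# Synthetic illustration (not real data): the larger event is the convolution
# of an apparent source-time function with a short, simple EGF pulse.
dt = 0.01
t = np.arange(0.0, 10.0, dt)
egf = np.exp(-((t - 1.0) / 0.05) ** 2)         # small event: near-impulsive source
stf = np.exp(-((t - 2.0) / 0.30) ** 2)         # apparent source-time function
big = np.convolve(stf, egf)[: len(t)] * dt     # seismogram of the larger event
recovered = spectral_division(big, egf)        # should resemble stf (time-shifted)
```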
An evaluation of rise time characterization and prediction methods
NASA Technical Reports Server (NTRS)
Robinson, Leick D.
1994-01-01
One common method of extrapolating sonic boom waveforms from aircraft to ground is to calculate the nonlinear distortion, and then add a rise time to each shock by a simple empirical rule. One common rule is the '3 over P' rule, which calculates the rise time in milliseconds as three divided by the shock amplitude in psf. This rule was compared with the results of ZEPHYRUS, a comprehensive algorithm which calculates sonic boom propagation and extrapolation with the combined effects of nonlinearity, attenuation, dispersion, geometric spreading, and refraction in a stratified atmosphere. It is shown that the simple empirical rule considerably overestimates the rise time. In addition, the empirical rule does not account for variations in the rise time due to humidity variation or propagation history. It is also demonstrated that the rise time is only an approximate indicator of perceived loudness. Three waveforms with identical characteristics (shock placement, amplitude, and rise time), but with different shock shapes, are shown to give different calculated loudness. This paper is based in part on work performed at the Applied Research Laboratories, the University of Texas at Austin, and supported by NASA Langley.
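The '3 over P' rule is simple enough to state in a few lines of code; the function below is a direct transcription of the rule as described in the abstract (rise time in milliseconds equals three divided by shock overpressure in psf), not a reimplementation of ZEPHYRUS.

```python
def rise_time_3_over_p(shock_amplitude_psf):
    """Empirical '3 over P' rule: shock rise time in milliseconds is
    approximately three divided by the shock overpressure in psf."""
    return 3.0 / shock_amplitude_psf

# Example: a 1.5 psf shock gives a 2 ms rise time under this rule; the paper
# argues that this considerably overestimates rise times computed with a full
# propagation model such as ZEPHYRUS.
print(rise_time_3_over_p(1.5))  # -> 2.0 (milliseconds)
```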
NASA Astrophysics Data System (ADS)
Meshgi, Ali; Schmitter, Petra; Babovic, Vladan; Chui, Ting Fong May
2014-11-01
Developing reliable methods to estimate stream baseflow has been a subject of interest due to its importance in catchment response and sustainable watershed management. However, to date, in the absence of complex numerical models, baseflow is most commonly estimated using statistically derived empirical approaches that do not directly incorporate physically meaningful information. On the other hand, Artificial Intelligence (AI) tools such as Genetic Programming (GP) offer unique capabilities to reduce the complexities of hydrological systems without losing relevant physical information. This study presents a simple-to-use empirical equation to estimate baseflow time series using GP so that minimal data is required and physical information is preserved. A groundwater numerical model was first adopted to simulate baseflow for a small semi-urban catchment (0.043 km²) located in Singapore. GP was then used to derive an empirical equation relating baseflow time series to time series of groundwater table fluctuations, which are relatively easily measured and are physically related to baseflow generation. The equation was then generalized for approximating baseflow in other catchments and validated for a larger vegetation-dominated basin located in the US (24 km²). Overall, this study used GP to propose a simple-to-use equation to predict baseflow time series based on only three parameters: minimum daily baseflow of the entire period, area of the catchment and groundwater table fluctuations. It serves as an alternative approach for baseflow estimation in ungauged systems when only groundwater table and soil information is available, and is thus complementary to other methods that require discharge measurements.
An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two observer, measurement error only problem.
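The sketch below shows one plausible way to form such an empirical covariance from weighted-least-squares residuals, propagating the observed residual outer product through the estimator gain. It is an illustration consistent with the abstract, not necessarily the author's exact formulation.

```python
import numpy as np

def wls_with_empirical_covariance(H, W, y):
    """Weighted least squares state estimate plus a sandwich-style empirical
    state error covariance built from the post-fit residuals. This is a
    plausible construction consistent with the abstract's description, not
    necessarily the author's exact formulation."""
    N = H.T @ W @ H                      # normal matrix
    N_inv = np.linalg.inv(N)
    x_hat = N_inv @ H.T @ W @ y          # WLS state estimate
    r = y - H @ x_hat                    # post-fit measurement residuals
    # Empirical covariance: propagate the observed residual outer product
    # through the WLS gain, so it reflects all error sources present in r.
    G = N_inv @ H.T @ W
    P_emp = G @ np.outer(r, r) @ G.T
    return x_hat, P_emp

# Simple two-observer, measurement-error-only illustration with synthetic data:
rng = np.random.default_rng(0)
x_true = np.array([1.0, -0.5])
H = rng.normal(size=(10, 2))             # stacked measurement partials
y = H @ x_true + rng.normal(scale=0.1, size=10)
W = np.eye(10) / 0.1**2                  # weights = inverse measurement variance
x_hat, P_emp = wls_with_empirical_covariance(H, W, y)
```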
Regression Analysis by Example. 5th Edition
ERIC Educational Resources Information Center
Chatterjee, Samprit; Hadi, Ali S.
2012-01-01
Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. "Regression Analysis by Example, Fifth Edition" has been expanded and thoroughly…
NASA Technical Reports Server (NTRS)
Campbell, John P; Mckinney, Marion O
1952-01-01
A summary of methods for making dynamic lateral stability and response calculations and for estimating the aerodynamic stability derivatives required for use in these calculations is presented. The processes of performing calculations of the time histories of lateral motions, of the period and damping of these motions, and of the lateral stability boundaries are presented as a series of simple straightforward steps. Existing methods for estimating the stability derivatives are summarized and, in some cases, simple new empirical formulas are presented. Detailed estimation methods are presented for low-subsonic-speed conditions but only a brief discussion and a list of references are given for transonic and supersonic speed conditions.
Empirical entropic contributions in computational docking: evaluation in APS reductase complexes.
Chang, Max W; Belew, Richard K; Carroll, Kate S; Olson, Arthur J; Goodsell, David S
2008-08-01
The results from reiterated docking experiments may be used to evaluate an empirical vibrational entropy of binding in ligand-protein complexes. We have tested several methods for evaluating the vibrational contribution to binding of 22 nucleotide analogues to the enzyme APS reductase. These include two cluster size methods that measure the probability of finding a particular conformation, a method that estimates the extent of the local energetic well by looking at the scatter of conformations within clustered results, and an RMSD-based method that uses the overall scatter and clustering of all conformations. We have also directly characterized the local energy landscape by randomly sampling around docked conformations. The simple cluster size method shows the best performance, improving the identification of correct conformations in multiple docking experiments. 2008 Wiley Periodicals, Inc.
Evaluation of an empirical monitor output estimation in carbon ion radiotherapy.
Matsumura, Akihiko; Yusa, Ken; Kanai, Tatsuaki; Mizota, Manabu; Ohno, Tatsuya; Nakano, Takashi
2015-09-01
A conventional broad beam method is applied to carbon ion radiotherapy at Gunma University Heavy Ion Medical Center. According to this method, accelerated carbon ions are scattered by various beam line devices to form a 3D dose distribution. The physical dose per monitor unit (d/MU) at the isocenter, therefore, depends on beam line parameters and should be calibrated by a measurement in clinical practice. This study aims to develop a calculation algorithm for d/MU using beam line parameters. Two major factors, the range shifter dependence and the field aperture effect, are measured with a PinPoint chamber in a water phantom, in a setup identical to that used for monitor calibration in clinical practice. An empirical monitor calibration method based on measurement results is developed using a simple algorithm utilizing a linear function and a double Gaussian pencil beam distribution to express the range shifter dependence and the field aperture effect. The range shifter dependence and the field aperture effect are evaluated to have errors of 0.2% and 0.5%, respectively. The proposed method has successfully estimated d/MU with a difference of less than 1% with respect to the measurement results. Taking the measurement deviation of about 0.3% into account, this result is sufficiently accurate for clinical applications. An empirical procedure to estimate d/MU with a simple algorithm is established in this research. This procedure frees beam time for more treatments, quality assurance, and other research endeavors.
ERIC Educational Resources Information Center
Haans, Antal
2018-01-01
Contrast analysis is a relatively simple but effective statistical method for testing theoretical predictions about differences between group means against the empirical data. Despite its advantages, contrast analysis is hardly used to date, perhaps because it is not implemented in a convenient manner in many statistical software packages. This…
Postmodeling Sensitivity Analysis to Detect the Effect of Missing Data Mechanisms
ERIC Educational Resources Information Center
Jamshidian, Mortaza; Mata, Matthew
2008-01-01
Incomplete or missing data is a common problem in almost all areas of empirical research. It is well known that simple and ad hoc methods such as complete case analysis or mean imputation can lead to biased and/or inefficient estimates. The method of maximum likelihood works well; however, when the missing data mechanism is not one of missing…
Measuring water and sediment discharge from a road plot with a settling basin and tipping bucket
Thomas A. Black; Charles H. Luce
2013-01-01
A simple empirical method quantifies water and sediment production from a forest road surface, and is well suited for calibration and validation of road sediment models. To apply this quantitative method, the hydrologic technician installs bordered plots on existing typical road segments and measures coarse sediment production in a settling tank. When a tipping bucket...
ERIC Educational Resources Information Center
Stevens, Olinger; Leigh, Erika
2012-01-01
Scope and Method of Study: The purpose of the study is to use an empirical approach to identify a simple, economical, efficient, and technically adequate performance measure that teachers can use to assess student growth in mathematics. The current study has been designed to expand the body of research for math CBM to further examine technical…
Robert R. Ziemer
1979-01-01
For years, the principal objective of evapotranspiration research has been to calculate the loss of water under varying conditions of climate, soil, and vegetation. The early simple empirical methods have generally been replaced by more detailed models which more closely represent the physical and biological processes involved. Monteith's modification of the...
DOT National Transportation Integrated Search
1975-01-01
It has been recognized for many years that fatigue is one of many mechanisms by which asphaltic concrete pavements fail. Experience and empirical design procedures such as those developed by Marshall and Hveem have enabled engineers to design-mixture...
Yasaitis, Laura C; Arcaya, Mariana C; Subramanian, S V
2015-09-01
Creating local population health measures from administrative data would be useful for health policy and public health monitoring purposes. While a wide range of options for estimating such rates exists, from simple spatial smoothers to model-based methods, there are relatively few side-by-side comparisons, especially with real-world data. In this paper, we compare methods for creating local estimates of acute myocardial infarction rates from Medicare claims data. A Bayesian Markov chain Monte Carlo estimator that incorporated spatial and local random effects performed best, followed by a method-of-moments spatial Empirical Bayes estimator. As the former is more complicated and time-consuming, spatial linear Empirical Bayes methods may represent a good alternative for non-specialist investigators. Copyright © 2015 Elsevier Ltd. All rights reserved.
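For readers unfamiliar with the simpler end of this spectrum, the sketch below implements a global method-of-moments empirical Bayes smoother for small-area event rates. The spatial variant evaluated in the paper shrinks toward local neighbourhood means rather than the global mean; the counts used here are hypothetical.

```python
import numpy as np

def empirical_bayes_rates(events, population):
    """Global method-of-moments empirical Bayes smoothing of small-area rates
    (Marshall-style shrinkage toward the overall mean). The spatial variant
    described in the paper shrinks each area toward a local neighbourhood
    mean instead; this is a simplified illustration."""
    events = np.asarray(events, float)
    population = np.asarray(population, float)
    raw = events / population
    m = events.sum() / population.sum()                    # prior mean rate
    # Method-of-moments prior variance estimate, truncated at zero.
    s2 = np.sum(population * (raw - m) ** 2) / population.sum() - m / population.mean()
    s2 = max(s2, 0.0)
    shrink = s2 / (s2 + m / population)                    # per-area weights
    return shrink * raw + (1.0 - shrink) * m

# Hypothetical event counts and enrollee counts for five small areas:
eb_rates = empirical_bayes_rates([3, 0, 12, 5, 1], [400, 250, 900, 600, 150])
print(eb_rates)   # small areas are pulled strongly toward the overall rate
```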
Model of Pressure Distribution in Vortex Flow Controls
NASA Astrophysics Data System (ADS)
Mielczarek, Szymon; Sawicki, Jerzy M.
2015-06-01
Vortex valves belong to the category of hydrodynamic flow controls. They are important and theoretically interesting devices, so complex from a hydraulic point of view that probably for this reason no rational concept of their operation has been proposed so far. In consequence, the functioning of vortex valves is described by CFD methods (computer-aided simulation of technical objects) or by means of simple empirical relations (using a discharge coefficient or a hydraulic loss coefficient). Such a rational model of the considered device is proposed in this paper. It has a simple algebraic form, but is well grounded physically. The basic quantitative relationship that describes the valve operation, i.e. the dependence between the flow discharge and the circumferential pressure head caused by the rotation, has been verified empirically. The agreement between calculated and measured parameters of the device supports acceptance of the proposed concept.
A Simple and Reliable Method of Design for Standalone Photovoltaic Systems
NASA Astrophysics Data System (ADS)
Srinivasarao, Mantri; Sudha, K. Rama; Bhanu, C. V. K.
2017-06-01
Standalone photovoltaic (SAPV) systems are seen as a promising means of electrifying areas of the developing world that lack power grid infrastructure. Proliferation of these systems requires a design procedure that is simple and reliable and that exhibits good performance over the system lifetime. The proposed methodology uses simple empirical formulae and easily available parameters to design SAPV systems, that is, array size with energy storage. After arriving at different array sizes (areas), performance curves are obtained for optimal design of the SAPV system with a high degree of reliability in terms of autonomy at a specified value of loss of load probability (LOLP). Based on the array to load ratio (ALR) and levelized energy cost (LEC) through life cycle cost (LCC) analysis, it is shown that the proposed methodology gives better performance, requires simple data and is more reliable when compared with a conventional design using monthly average daily load and insolation.
A Deterministic Annealing Approach to Clustering AIRS Data
NASA Technical Reports Server (NTRS)
Guillaume, Alexandre; Braverman, Amy; Ruzmaikin, Alexander
2012-01-01
We will examine the validity of means and standard deviations as a basis for climate data products. We will explore the conditions under which these two simple statistics are inadequate summaries of the underlying empirical probability distributions by contrasting them with a nonparametric method called Deterministic Annealing.
Language Switching in the Production of Phrases
ERIC Educational Resources Information Center
Tarlowski, Andrzej; Wodniecka, Zofia; Marzecova, Anna
2013-01-01
The language switching task has provided a useful insight into how bilinguals produce language. So far, however, the studies using this method have been limited to lexical access. The present study provides empirical evidence on language switching in the production of simple grammar structures. In the reported experiment, Polish-English unbalanced…
Measuring the Impact of Education on Productivity. Working Paper #261.
ERIC Educational Resources Information Center
Plant, Mark; Welch, Finis
A theoretical and conceptual analysis of techniques used to measure education's contribution to productivity is followed by a discussion of the empirical measures implemented by various researchers. Standard methods of growth accounting make sense for simple measurement of factor contributions where outputs are well measured and when factor growth…
An empirical Bayes approach to analyzing recurring animal surveys
Johnson, D.H.
1989-01-01
Recurring estimates of the size of animal populations are often required by biologists or wildlife managers. Because of cost or other constraints, estimates frequently lack the accuracy desired but cannot readily be improved by additional sampling. This report proposes a statistical method employing empirical Bayes (EB) estimators as alternatives to those customarily used to estimate population size, and evaluates them by a subsampling experiment on waterfowl surveys. EB estimates, especially a simple limited-translation version, were more accurate and provided shorter confidence intervals with greater coverage probabilities than customary estimates.
Simple, empirical approach to predict neutron capture cross sections from nuclear masses
NASA Astrophysics Data System (ADS)
Couture, A.; Casten, R. F.; Cakirli, R. B.
2017-12-01
Background: Neutron capture cross sections are essential to understanding the astrophysical s and r processes, the modeling of nuclear reactor design and performance, and for a wide variety of nuclear forensics applications. Often, cross sections are needed for nuclei where experimental measurements are difficult. Enormous effort, over many decades, has gone into attempting to develop sophisticated statistical reaction models to predict these cross sections. Such work has met with some success but is often unable to reproduce measured cross sections to better than 40%, and has limited predictive power, with predictions from different models rapidly differing by an order of magnitude a few nucleons from the last measurement. Purpose: To develop a new approach to predicting neutron capture cross sections over broad ranges of nuclei that accounts for their values where known and which has reliable predictive power with small uncertainties for many nuclei where they are unknown. Methods: Experimental neutron capture cross sections were compared to empirical mass observables in regions of similar structure. Results: We present an extremely simple method, based solely on empirical mass observables, that correlates neutron capture cross sections in the critical energy range from a few keV to a couple hundred keV. We show that regional cross sections are compactly correlated in medium and heavy mass nuclei with the two-neutron separation energy. These correlations can readily be used to predict unknown cross sections, often converting the usual extrapolations to more reliable interpolations. The method almost always reproduces existing data to within 25%, and estimated uncertainties are below about 40% up to 10 nucleons beyond known data. Conclusions: Neutron capture cross sections display a surprisingly strong connection to the two-neutron separation energy, a nuclear structure property. The simple, empirical correlations uncovered provide model-independent predictions of neutron capture cross sections, extending far from stability, including for nuclei of the highest sensitivity to r-process nucleosynthesis.
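A minimal sketch of the kind of regional correlation described above follows: a log-linear fit of capture cross sections against the two-neutron separation energy for neighbouring nuclei, used to interpolate to an unmeasured nucleus. All numerical values are illustrative placeholders, and the simple log-linear form is an assumption rather than the authors' published parameterization.

```python
import numpy as np

# Hypothetical regional data: two-neutron separation energies S2n (MeV) and
# Maxwellian-averaged capture cross sections (mb) near 30 keV for nearby nuclei.
s2n   = np.array([12.8, 12.1, 11.5, 10.9, 10.2])   # illustrative values
sigma = np.array([620., 450., 330., 240., 170.])   # illustrative values

# Fit a regional correlation log(sigma) = a * S2n + b, in the spirit of the
# mass-observable correlations described in the abstract.
a, b = np.polyfit(s2n, np.log(sigma), deg=1)

def predict_sigma(s2n_value):
    """Interpolate (or cautiously extrapolate) a capture cross section for a
    nucleus with no measurement, using only its two-neutron separation energy."""
    return np.exp(a * s2n_value + b)

print(predict_sigma(11.2))   # prediction for an unmeasured nucleus in the region
```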
Empirical Bayes Estimation of Coalescence Times from Nucleotide Sequence Data.
King, Leandra; Wakeley, John
2016-09-01
We demonstrate the advantages of using information at many unlinked loci to better calibrate estimates of the time to the most recent common ancestor (TMRCA) at a given locus. To this end, we apply a simple empirical Bayes method to estimate the TMRCA. This method is both asymptotically optimal, in the sense that the estimator converges to the true value when the number of unlinked loci for which we have information is large, and has the advantage of not making any assumptions about demographic history. The algorithm works as follows: we first split the sample at each locus into inferred left and right clades to obtain many estimates of the TMRCA, which we can average to obtain an initial estimate of the TMRCA. We then use nucleotide sequence data from other unlinked loci to form an empirical distribution that we can use to improve this initial estimate. Copyright © 2016 by the Genetics Society of America.
A simple method for estimating gross carbon budgets for vegetation in forest ecosystems.
Ryan, Michael G.
1991-01-01
Gross carbon budgets for vegetation in forest ecosystems are difficult to construct because of problems in scaling flux measurements made on small samples over short periods of time and in determining belowground carbon allocation. Recently, empirical relationships have been developed to estimate total belowground carbon allocation from litterfall, and maintenance respiration from tissue nitrogen content. I outline a method for estimating gross carbon budgets using these empirical relationships together with data readily available from ecosystem studies (aboveground wood and canopy production, aboveground wood and canopy biomass, litterfall, and tissue nitrogen contents). Estimates generated with this method are compared with annual carbon fixation estimates from the Forest-BGC model for a lodgepole pine (Pinus contorta Dougl.) and a Pacific silver fir (Abies amabilis Dougl.) chronosequence.
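The budget assembly can be sketched as a simple sum of terms estimated from readily available data, as below. The empirical coefficients (respiration per unit nitrogen, growth respiration fraction, and the litterfall-to-belowground-allocation relationship) are illustrative placeholders rather than the published values.

```python
def gross_carbon_budget(wood_production, canopy_production, litterfall_c,
                        tissue_n_pools, resp_per_n=27.0, growth_resp_frac=0.25,
                        tbca_slope=1.9, tbca_intercept=130.0):
    """Assemble a gross carbon budget (all terms in g C m-2 yr-1) from data
    readily available in ecosystem studies, in the spirit of the method
    outlined in the abstract. The empirical coefficients used here are
    illustrative placeholders, not the published relationships."""
    # Total belowground carbon allocation estimated from litterfall (empirical).
    tbca = tbca_slope * litterfall_c + tbca_intercept
    # Maintenance respiration estimated from tissue nitrogen content (empirical).
    maintenance_resp = resp_per_n * sum(tissue_n_pools.values())
    # Growth (construction) respiration proportional to new tissue production.
    aboveground_npp = wood_production + canopy_production
    growth_resp = growth_resp_frac * aboveground_npp
    return aboveground_npp + tbca + maintenance_resp + growth_resp

# Hypothetical stand-level inputs (g C m-2 yr-1 and g N m-2):
budget = gross_carbon_budget(
    wood_production=180.0, canopy_production=150.0, litterfall_c=160.0,
    tissue_n_pools={"foliage": 6.0, "sapwood": 3.5, "fine_roots": 2.0})
print(budget)
```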
Atlas of susceptibility to pollution in marinas. Application to the Spanish coast.
Gómez, Aina G; Ondiviela, Bárbara; Fernández, María; Juanes, José A
2017-01-15
An atlas of susceptibility to pollution of 320 Spanish marinas is provided. Susceptibility is assessed through a simple, fast and low-cost empirical method estimating the flushing capacity of marinas. The Complexity Tidal Range Index (CTRI) was selected among eleven empirical methods. The CTRI method was selected by means of statistical analyses because it helps to explain the system's variance, it is highly correlated with numerical model results, and it is sensitive to marinas' location and typology. Its implementation along the Spanish coast confirmed its usefulness, versatility and adaptability as a tool for the environmental management of marinas worldwide. The atlas of susceptibility, assessed through CTRI values, is an appropriate instrument to prioritize environmental and planning strategies at a regional scale. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Balthazar, Vincent; Vanacker, Veerle; Lambin, Eric F.
2012-08-01
A topographic correction of optical remote sensing data is necessary to improve the quality of quantitative forest cover change analyses in mountainous terrain. The implementation of semi-empirical correction methods requires the calibration of model parameters that are empirically defined. This study develops a method to improve the performance of topographic corrections for forest cover change detection in mountainous terrain through an iterative tuning method of model parameters based on a systematic evaluation of the performance of the correction. The latter was based on: (i) the general matching of reflectances between sunlit and shaded slopes and (ii) the occurrence of abnormal reflectance values, qualified as statistical outliers, in very low illuminated areas. The method was tested on Landsat ETM+ data for rough (Ecuadorian Andes) and very rough mountainous terrain (Bhutan Himalayas). Compared to a reference level (no topographic correction), the ATCOR3 semi-empirical correction method resulted in a considerable reduction of dissimilarities between reflectance values of forested sites in different topographic orientations. Our results indicate that optimal parameter combinations depend on the site, sun elevation and azimuth, and spectral conditions. We demonstrate that the results of relatively simple topographic correction methods can be greatly improved through a feedback loop between parameter tuning and evaluation of the performance of the correction model.
Distribution of the two-sample t-test statistic following blinded sample size re-estimation.
Lu, Kaifeng
2016-05-01
We consider the blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for the evaluation of the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
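A minimal sketch of the blinded re-estimation step follows: the interim variance is the simple one-sample variance of the pooled, treatment-blind data, and the per-group sample size is recomputed from the usual normal-approximation formula. This illustrates the idea only; the paper's exact simulation algorithm and adjusted significance levels are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def blinded_sample_size_reestimate(pooled_interim_data, delta, alpha=0.05, power=0.9):
    """Blinded sample size re-estimation sketch: estimate the variance from
    the pooled (treatment-blind) internal pilot data with the simple
    one-sample variance estimator, then recompute the per-group sample size
    for a two-sample comparison with the normal-approximation formula."""
    s2 = np.var(pooled_interim_data, ddof=1)          # blinded variance estimate
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    n_per_group = 2 * (z_a + z_b) ** 2 * s2 / delta ** 2
    return int(np.ceil(n_per_group))

# Hypothetical blinded internal pilot of 60 observations, target effect 0.5:
rng = np.random.default_rng(1)
interim = rng.normal(loc=0.2, scale=1.1, size=60)
print(blinded_sample_size_reestimate(interim, delta=0.5))
```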
Exploring Empirical Rank-Frequency Distributions Longitudinally through a Simple Stochastic Process
Finley, Benjamin J.; Kilkki, Kalevi
2014-01-01
The frequent appearance of empirical rank-frequency laws, such as Zipf’s law, in a wide range of domains reinforces the importance of understanding and modeling these laws and rank-frequency distributions in general. In this spirit, we utilize a simple stochastic cascade process to simulate several empirical rank-frequency distributions longitudinally. We focus especially on limiting the process’s complexity to increase accessibility for non-experts in mathematics. The process provides a good fit for many empirical distributions because the stochastic multiplicative nature of the process leads to an often observed concave rank-frequency distribution (on a log-log scale) and the finiteness of the cascade replicates real-world finite size effects. Furthermore, we show that repeated trials of the process can roughly simulate the longitudinal variation of empirical ranks. However, we find that the empirical variation is often less than the average simulated process variation, likely due to longitudinal dependencies in the empirical datasets. Finally, we discuss the process limitations and practical applications.
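A stripped-down multiplicative cascade of the general kind described above can be simulated in a few lines; the sketch below uses a binary split with uniform random proportions, which is an assumption for illustration rather than the authors' exact process.

```python
import numpy as np

def multiplicative_cascade(levels=12, seed=0):
    """Minimal finite multiplicative cascade: at each level every mass element
    is split in two with a random proportion. The multiplicative randomness
    tends to produce a concave rank-frequency curve on log-log axes, and the
    finite depth reproduces finite-size effects. A simplified stand-in for
    the cascade process described in the paper, not its exact specification."""
    rng = np.random.default_rng(seed)
    masses = np.array([1.0])
    for _ in range(levels):
        p = rng.uniform(0.1, 0.9, size=masses.size)
        masses = np.concatenate([masses * p, masses * (1 - p)])
    return np.sort(masses)[::-1]          # rank-ordered "frequencies"

ranked = multiplicative_cascade()
ranks = np.arange(1, ranked.size + 1)
# Plotting log(ranks) against log(ranked) would show the concave Zipf-like shape.
```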
Cho, Il Haeng; Park, Kyung S; Lim, Chang Joo
2010-02-01
In this study, we described the characteristics of five different biological age (BA) estimation algorithms, including (i) multiple linear regression, (ii) principal component analysis, and the somewhat unique methods developed by (iii) Hochschild, (iv) Klemera and Doubal, and (v) a variant of Klemera and Doubal's method. The objective of this study is to find the most appropriate method of BA estimation by examining the association between the Work Ability Index (WAI) and the difference of each algorithm's estimates from chronological age (CA). The WAI was found to be a measure that reflects an individual's current health status rather than deterioration that depends strongly on age. Experiments were conducted on 200 Korean male participants using a BA estimation system developed to be non-invasive, simple to operate, and based on human function. Using the empirical data, BA estimation as well as various analyses, including correlation analysis and discriminant function analysis, was performed. As a result, the empirical data confirmed that Klemera and Doubal's method with uncorrelated variables from principal component analysis produces relatively reliable and acceptable BA estimates. 2009 Elsevier Ireland Ltd. All rights reserved.
An alternative method for centrifugal compressor loading factor modelling
NASA Astrophysics Data System (ADS)
Galerkin, Y.; Drozdov, A.; Rekstin, A.; Soldatova, K.
2017-08-01
In classical design methods, the loading factor at the design point is calculated by one or another empirical formula; performance modelling as a whole is not considered. Test data from compressor stages demonstrate that the loading factor versus the flow coefficient at the impeller exit has a linear character independent of compressibility. The well-known Universal Modelling Method exploits this fact. Two points define the function: the loading factor at the design point and at zero flow rate. The corresponding formulae include empirical coefficients, and a good modelling result is possible if the choice of coefficients is based on experience and close analogs. Earlier, Y. Galerkin and K. Soldatova proposed defining the loading factor performance by the angle of its inclination to the ordinate axis and by the loading factor at zero flow rate. Simple and definite equations with four geometry parameters were proposed for the loading factor performance calculated for inviscid flow. The authors of this publication have studied the test performance of thirteen stages of different types. Equations with universal empirical coefficients are proposed. The calculation error lies in the range of ±1.5%. The alternative model for loading factor performance is included in new versions of the Universal Modelling Method.
Machine learning strategies for systems with invariance properties
NASA Astrophysics Data System (ADS)
Ling, Julia; Jones, Reese; Templeton, Jeremy
2016-08-01
In many scientific fields, empirical models are employed to facilitate computational simulations of engineering systems. For example, in fluid mechanics, empirical Reynolds stress closures enable computationally-efficient Reynolds Averaged Navier Stokes simulations. Likewise, in solid mechanics, constitutive relations between the stress and strain in a material are required in deformation analysis. Traditional methods for developing and tuning empirical models usually combine physical intuition with simple regression techniques on limited data sets. The rise of high performance computing has led to a growing availability of high fidelity simulation data. These data open up the possibility of using machine learning algorithms, such as random forests or neural networks, to develop more accurate and general empirical models. A key question when using data-driven algorithms to develop these empirical models is how domain knowledge should be incorporated into the machine learning process. This paper will specifically address physical systems that possess symmetry or invariance properties. Two different methods for teaching a machine learning model an invariance property are compared. In the first method, a basis of invariant inputs is constructed, and the machine learning model is trained upon this basis, thereby embedding the invariance into the model. In the second method, the algorithm is trained on multiple transformations of the raw input data until the model learns invariance to that transformation. Results are discussed for two case studies: one in turbulence modeling and one in crystal elasticity. It is shown that in both cases embedding the invariance property into the input features yields higher performance at significantly reduced computational training costs.
Towards Validation of an Adaptive Flight Control Simulation Using Statistical Emulation
NASA Technical Reports Server (NTRS)
He, Yuning; Lee, Herbert K. H.; Davies, Misty D.
2012-01-01
Traditional validation of flight control systems is based primarily upon empirical testing. Empirical testing is sufficient for simple systems in which (a) the behavior is approximately linear and (b) humans are in-the-loop and responsible for off-nominal flight regimes. A different possible concept of operation is to use adaptive flight control systems with online learning neural networks (OLNNs) in combination with a human pilot for off-nominal flight behavior (such as when a plane has been damaged). Validating these systems is difficult because the controller is changing during the flight in a nonlinear way, and because the pilot and the control system have the potential to co-adapt in adverse ways; traditional empirical methods are unlikely to provide any guarantees in this case. Additionally, the time it takes to find unsafe regions within the flight envelope using empirical testing means that the time between adaptive controller design iterations is large. This paper describes a new concept for validating adaptive control systems using methods based on Bayesian statistics. This validation framework allows the analyst to build nonlinear models with modal behavior, and to have an uncertainty estimate for the difference between the behaviors of the model and system under test.
Shulruf, Boaz; Turner, Rolf; Poole, Phillippa; Wilkinson, Tim
2013-05-01
The decision to pass or fail a medical student is a 'high stakes' one. The aim of this study is to introduce and demonstrate the feasibility and practicality of a new objective standard-setting method for determining the pass/fail cut-off score from borderline grades. Three methods for setting up pass/fail cut-off scores were compared: the Regression Method, the Borderline Group Method, and the new Objective Borderline Method (OBM). Using Year 5 students' OSCE results from one medical school we established the pass/fail cut-off scores by the abovementioned three methods. The comparison indicated that the pass/fail cut-off scores generated by the OBM were similar to those generated by the more established methods (0.840 ≤ r ≤ 0.998; p < .0001). Based on theoretical and empirical analysis, we suggest that the OBM has advantages over existing methods in that it combines objectivity, realism, robust empirical basis and, no less importantly, is simple to use.
NASA Astrophysics Data System (ADS)
Jiang, Fan; Zhu, Zhencai; Li, Wei; Zhou, Gongbo; Chen, Guoan
2014-07-01
Accurately identifying faults in rotor-bearing systems by analyzing vibration signals, which are nonlinear and nonstationary, is challenging. To address this issue, a new approach based on ensemble empirical mode decomposition (EEMD) and self-zero space projection analysis is proposed in this paper. This method seeks to identify faults appearing in a rotor-bearing system using simple algebraic calculations and projection analyses. First, EEMD is applied to decompose the collected vibration signals into a set of intrinsic mode functions (IMFs) for features. Second, these extracted features under various mechanical health conditions are used to design a self-zero space matrix according to space projection analysis. Finally, the so-called projection indicators are calculated to identify the rotor-bearing system's faults with simple decision logic. Experiments are implemented to test the reliability and effectiveness of the proposed approach. The results show that this approach can accurately identify faults in rotor-bearing systems.
Vexler, Albert; Tanajian, Hovig; Hutson, Alan D
In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.
Estimation of vegetation cover at subpixel resolution using LANDSAT data
NASA Technical Reports Server (NTRS)
Jasinski, Michael F.; Eagleson, Peter S.
1986-01-01
The present report summarizes the various approaches relevant to estimating canopy cover at subpixel resolution. The approaches are based on physical models of radiative transfer in non-homogeneous canopies and on empirical methods. The effects of vegetation shadows and topography are examined. Simple versions of the model are tested, using the Taos, New Mexico Study Area database. Emphasis has been placed on using relatively simple models requiring only one or two bands. Although most methods require some degree of ground truth, a two-band method is investigated whereby the percent cover can be estimated without ground truth by examining the limits of the data space. Future work is proposed which will incorporate additional surface parameters into the canopy cover algorithm, such as topography, leaf area, or shadows. The method involves deriving a probability density function for the percent canopy cover based on the joint probability density function of the observed radiances.
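The two-band idea of reading end-members off the limits of the data space can be sketched as below. The NDVI-like index and the percentile choice for the data-space limits are assumptions made for illustration; the report's actual formulation may differ.

```python
import numpy as np

def subpixel_cover_from_data_limits(red, nir, lo_pct=2, hi_pct=98):
    """Estimate fractional canopy cover per pixel without ground truth by
    taking bare-soil and full-canopy end-members from the limits of the
    two-band data space, then linearly unmixing each pixel between them.
    A simplified reading of the two-band approach in the abstract; the index
    and percentile choices are assumptions."""
    red = np.asarray(red, float)
    nir = np.asarray(nir, float)
    index = (nir - red) / (nir + red)                 # simple two-band index
    soil_value = np.percentile(index, lo_pct)         # lower data-space limit
    veg_value = np.percentile(index, hi_pct)          # upper data-space limit
    cover = (index - soil_value) / (veg_value - soil_value)
    return np.clip(cover, 0.0, 1.0)

# Hypothetical pixel reflectances in the red and near-infrared bands:
cover = subpixel_cover_from_data_limits(
    red=[0.20, 0.12, 0.05, 0.16], nir=[0.25, 0.30, 0.45, 0.28])
print(cover)
```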
A simple method to incorporate water vapor absorption in the 15 microns remote temperature sounding
NASA Technical Reports Server (NTRS)
Dalu, G.; Prabhakara, C.; Conrath, B. J.
1975-01-01
The water vapor absorption in the 15 micron CO2 band, which can affect the remotely sensed temperatures near the surface, is estimated with the help of an empirical method. This method is based on the differential absorption properties of water vapor in the 11-13 micron window region and does not require a detailed knowledge of the water vapor profile. With this approach, Nimbus 4 IRIS radiance measurements are inverted to obtain temperature profiles. These calculated profiles agree with radiosonde data within about 2 °C.
2016-12-01
chosen rather than complex ones , and responds to the criticism of the DTA approach. Chapter IV provides three separate case studies in defense R&D...defense R&D projects. To this end, the first section describes the case study method and the advantages of using simple models over more complex ones ...the analysis lacked empirical data and relied on subjective data, the analysis successfully combined the DTA approach with the case study method and
A framework for studying transient dynamics of population projection matrix models.
Stott, Iain; Townley, Stuart; Hodgson, David James
2011-09-01
Empirical models are central to effective conservation and population management, and should be predictive of real-world dynamics. Available modelling methods are diverse, but analysis usually focuses on long-term dynamics that are unable to describe the complicated short-term time series that can arise even from simple models following ecological disturbances or perturbations. Recent interest in such transient dynamics has led to diverse methodologies for their quantification in density-independent, time-invariant population projection matrix (PPM) models, but the fragmented nature of this literature has stifled the widespread analysis of transients. We review the literature on transient analyses of linear PPM models and synthesise a coherent framework. We promote the use of standardised indices, and categorise indices according to their focus on either convergence times or transient population density, and on either transient bounds or case-specific transient dynamics. We use a large database of empirical PPM models to explore relationships between indices of transient dynamics. This analysis promotes the use of population inertia as a simple, versatile and informative predictor of transient population density, but criticises the utility of established indices of convergence times. Our findings should guide further development of analyses of transient population dynamics using PPMs or other empirical modelling techniques. © 2011 Blackwell Publishing Ltd/CNRS.
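As an example of one such standardized index, the sketch below computes population inertia from a projection matrix and an initial stage structure using the standard dominant-eigenvector formula; the matrix shown is hypothetical and the function is a sketch rather than a full implementation of the reviewed index set.

```python
import numpy as np

def population_inertia(A, n0):
    """Population inertia of a population projection matrix model: the
    long-term ratio of the (growth-discounted) population size to that of a
    population initialized at the stable stage distribution. Uses the
    standard eigenvector formula; treat as a sketch rather than a definitive
    implementation of the paper's index set."""
    A = np.asarray(A, float)
    n0 = np.asarray(n0, float)
    n0 = n0 / n0.sum()                                   # normalize initial vector
    eigvals, right = np.linalg.eig(A)
    w = np.abs(right[:, np.argmax(eigvals.real)].real)   # stable stage structure
    left_vals, left = np.linalg.eig(A.T)
    v = np.abs(left[:, np.argmax(left_vals.real)].real)  # reproductive values
    return (v @ n0) * w.sum() / (v @ w)

# Example 3-stage matrix (hypothetical vital rates) and a biased initial structure:
A = np.array([[0.0, 1.5, 3.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.4, 0.8]])
print(population_inertia(A, n0=[1.0, 0.0, 0.0]))  # >1 amplification, <1 attenuation
```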
New Approaches in Force-Limited Vibration Testing of Flight Hardware
NASA Technical Reports Server (NTRS)
Kolaini, Ali R.; Kern, Dennis L.
2012-01-01
To qualify flight hardware for random vibration environments, the following methods are used in the aerospace industry to limit the loads: (1) response limiting and notching, (2) the simple TDOF model, (3) semi-empirical force limits, (4) apparent mass, etc., and (5) the impedance method. In all these methods, attempts are made to remove conservatism due to the mismatch in impedances between the test and flight configurations of the hardware being qualified. The assumption is that the hardware interfaces have correlated responses. A new method that takes into account uncorrelated hardware interface responses is described in this presentation.
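For context, the semi-empirical force limit mentioned in item (3) is commonly written as a scaled copy of the acceleration specification that rolls off above a break frequency; a sketch of that form follows, with placeholder values for the constant C, break frequency and rolloff exponent (these are program-dependent choices, not recommendations).

```python
import numpy as np

def semi_empirical_force_limit(freq, s_aa, m0, f0, c=1.4, n=2.0):
    """Semi-empirical force-limit spectrum, commonly written as
    S_FF = C^2 * M0^2 * S_AA below the break frequency f0, rolling off as
    (f/f0)^-n above it. Consistent units are assumed (e.g., M0 in kg with
    S_AA in (m/s^2)^2/Hz gives N^2/Hz). C, f0 and n are placeholders here."""
    freq = np.asarray(freq, float)
    s_aa = np.asarray(s_aa, float)
    s_ff = c**2 * m0**2 * s_aa
    above = freq >= f0
    s_ff[above] = s_ff[above] / (freq[above] / f0) ** n
    return s_ff

# Hypothetical acceleration specification over 20-2000 Hz for a 5 kg component:
f = np.array([20., 50., 100., 200., 500., 1000., 2000.])
s_aa = np.array([0.01, 0.04, 0.08, 0.08, 0.08, 0.04, 0.02])
print(semi_empirical_force_limit(f, s_aa, m0=5.0, f0=120.0))
```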
A Method for Automated Detection of Usability Problems from Client User Interface Events
Saadawi, Gilan M.; Legowski, Elizabeth; Medvedeva, Olga; Chavan, Girish; Crowley, Rebecca S.
2005-01-01
Think-aloud usability analysis provides extremely useful data but is very time-consuming and expensive to perform because of the extensive manual video analysis that is required. We describe a simple method for automated detection of usability problems from client user interface events for a developing medical intelligent tutoring system. The method incorporates (1) an agent-based method for communication that funnels all interface events and system responses to a centralized database, (2) a simple schema for representing interface events and higher order subgoals, and (3) an algorithm that reproduces the criteria used for manual coding of usability problems. A correction factor was empirically determined to account for the slower task performance of users when thinking aloud. We tested the validity of the method by simultaneously identifying usability problems using TAU and manually computing them from stored interface event data using the proposed algorithm. All usability problems that did not rely on verbal utterances were detectable with the proposed method.
On the predictability of land surface fluxes from meteorological variables
NASA Astrophysics Data System (ADS)
Haughton, Ned; Abramowitz, Gab; Pitman, Andy J.
2018-01-01
Previous research has shown that land surface models (LSMs) are performing poorly when compared with relatively simple empirical models over a wide range of metrics and environments. Atmospheric driving data appear to provide information about land surface fluxes that LSMs are not fully utilising. Here, we further quantify the information available in the meteorological forcing data that are used by LSMs for predicting land surface fluxes, by interrogating FLUXNET data, and extending the benchmarking methodology used in previous experiments. We show that substantial performance improvement is possible for empirical models using meteorological data alone, with no explicit vegetation or soil properties, thus setting lower bounds on a priori expectations on LSM performance. The process also identifies key meteorological variables that provide predictive power. We provide an ensemble of empirical benchmarks that are simple to reproduce and provide a range of behaviours and predictive performance, acting as a baseline benchmark set for future studies. We reanalyse previously published LSM simulations and show that there is more diversity between LSMs than previously indicated, although it remains unclear why LSMs are broadly performing so much worse than simple empirical models.
NASA Technical Reports Server (NTRS)
Lamers, H. J. G. L. M.; Gathier, R.; Snow, T. P.
1980-01-01
From a study of the UV lines in the spectra of 25 stars from O4 to B1, the empirical relations between the mean density in the wind and the ionization fractions of O VI, N V, Si IV, and the excited C III (2p 3P0) level were derived. Using these empirical relations, a simple relation was derived between the mass-loss rate and the column density of any of these four ions. This relation can be used for a simple determination of the mass-loss rate from O4 to B1 stars.
McLachlan, G J; Bean, R W; Jones, L Ben-Tovim
2006-07-01
An important problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. We provide a straightforward and easily implemented method for estimating the posterior probability that an individual gene is null. The problem can be expressed in a two-component mixture framework, using an empirical Bayes approach. Current methods of implementing this approach either have some limitations due to the minimal assumptions made or with more specific assumptions are computationally intensive. By converting to a z-score the value of the test statistic used to test the significance of each gene, we propose a simple two-component normal mixture that models adequately the distribution of this score. The usefulness of our approach is demonstrated on three real datasets.
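A minimal sketch of the two-component normal mixture step follows, using scikit-learn's GaussianMixture and taking the component whose mean is nearest zero as the null component. This illustrates the idea, not the authors' specific fitting procedure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def posterior_null_probabilities(z_scores):
    """Fit a simple two-component normal mixture to gene-wise z-scores and
    return the posterior probability that each gene belongs to the null
    component (taken here as the component whose mean is closest to zero).
    A sketch of the two-component empirical Bayes idea in the abstract."""
    z = np.asarray(z_scores, float).reshape(-1, 1)
    gm = GaussianMixture(n_components=2, random_state=0).fit(z)
    null_comp = np.argmin(np.abs(gm.means_.ravel()))
    return gm.predict_proba(z)[:, null_comp]

# Hypothetical z-scores: mostly null genes plus a shifted, differentially
# expressed group.
rng = np.random.default_rng(0)
z = np.concatenate([rng.normal(0, 1, 950), rng.normal(3, 1, 50)])
tau0 = posterior_null_probabilities(z)   # small values flag likely DE genes
```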
An Empirical Method for Determining 234U Percentage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miko, David K.
2015-11-02
When isotopic information for uranium is provided, the concentration of 234U is frequently neglected. Often the isotopic content is given as a percentage of 235U with the assumption that the remainder consists of 238U. In certain applications, such as heat output, the concentration of 234U can be a significant contributing factor. For situations where only the 235U and 238U values are given, a simple way to calculate the 234U component would be beneficial. The approach taken here is empirical. A series of uranium standards with varying enrichments were analyzed. The 234U and 235U data were fit using a second order polynomial.
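The empirical fit can be sketched as follows; both the standards data and the resulting polynomial coefficients below are hypothetical stand-ins, since the report's actual values are not given in the abstract.

```python
import numpy as np

# Hypothetical calibration: weight-percent 235U and measured 234U for a set of
# uranium standards of varying enrichment (illustrative only, not the report's data).
u235_pct = np.array([0.71, 1.5, 3.0, 5.0, 10.0, 20.0, 50.0, 93.0])
u234_pct = np.array([0.0054, 0.011, 0.026, 0.045, 0.095, 0.20, 0.55, 1.05])

# Second-order polynomial fit of 234U content as a function of 235U content,
# mirroring the empirical approach described in the abstract.
coeffs = np.polyfit(u235_pct, u234_pct, deg=2)

def estimate_u234(u235_value):
    """Estimate 234U weight percent when only 235U (and, by difference, 238U)
    is reported."""
    return np.polyval(coeffs, u235_value)

print(estimate_u234(4.5))   # e.g., for a typical LWR fuel enrichment
```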
Visual conspicuity: a new simple standard, its reliability, validity and applicability.
Wertheim, A H
2010-03-01
A general standard for quantifying conspicuity is described. It derives from a simple and easy method to quantitatively measure the visual conspicuity of an object. The method stems from the theoretical view that the conspicuity of an object is not a property of that object, but describes the degree to which the object is perceptually embedded in, i.e. laterally masked by, its visual environment. First, three variations of a simple method to measure the strength of such lateral masking are described and empirical evidence for its reliability and its validity is presented, as are several tests of predictions concerning the effects of viewing distance and ambient light. It is then shown how this method yields a conspicuity standard, expressed as a number, which can be made part of a rule of law, and which can be used to test whether or not, and to what extent, the conspicuity of a particular object, e.g. a traffic sign, meets a predetermined criterion. An additional feature is that, when used under different ambient light conditions, the method may also yield an index of the amount of visual clutter in the environment. Taken together the evidence illustrates the methods' applicability in both the laboratory and in real-life situations. STATEMENT OF RELEVANCE: This paper concerns a proposal for a new method to measure visual conspicuity, yielding a numerical index that can be used in a rule of law. It is of importance to ergonomists and human factor specialists who are asked to measure the conspicuity of an object, such as a traffic or rail-road sign, or any other object. The new method is simple and circumvents the need to perform elaborate (search) experiments and thus has great relevance as a simple tool for applied research.
A method of predicting the energy-absorption capability of composite subfloor beams
NASA Technical Reports Server (NTRS)
Farley, Gary L.
1987-01-01
A simple method of predicting the energy-absorption capability of composite subfloor beam structures was developed. The method is based upon the weighted sum of the energy-absorption capabilities of the constituent elements of a subfloor beam. An empirical database of energy-absorption results from circular and square cross-section tube specimens was used in the prediction capability. The procedure is applicable to a wide range of subfloor beam structures and was demonstrated on three subfloor beam concepts. Agreement between test and prediction was within seven percent for all three cases.
Defying Intuition: Demonstrating the Importance of the Empirical Technique.
ERIC Educational Resources Information Center
Kohn, Art
1992-01-01
Describes a classroom activity featuring a simple stay-switch probability game. Contends that the exercise helps students see the importance of empirically validating beliefs. Includes full instructions for conducting and discussing the exercise. (CFR)
Updates on Force Limiting Improvements
NASA Technical Reports Server (NTRS)
Kolaini, Ali R.; Scharton, Terry
2013-01-01
The following conventional force limiting methods, currently practiced in deriving force limiting specifications, assume one-dimensional translational source and load apparent masses: the simple TDOF model; semi-empirical force limits; apparent mass, etc.; and the impedance method. Uncorrelated motion of the mounting points for components mounted on panels, and correlated but out-of-phase motions of the support structures, are important and should be considered in deriving force limiting specifications. In this presentation, "rock-n-roll" motions of components supported by panels, which lead to more realistic force limiting specifications, are discussed.
An empirical approach to improving tidal predictions using recent real-time tide gauge data
NASA Astrophysics Data System (ADS)
Hibbert, Angela; Royston, Samantha; Horsburgh, Kevin J.; Leach, Harry
2014-05-01
Classical harmonic methods of tidal prediction are often problematic in estuarine environments due to the distortion of tidal fluctuations in shallow water, which results in a disparity between predicted and observed sea levels. This is of particular concern in the Bristol Channel, where the error associated with tidal predictions is potentially greater due to an unusually large tidal range of around 12m. As such predictions are fundamental to the short-term forecasting of High Water (HW) extremes, it is vital that alternative solutions are found. In a pilot study, using a year-long observational sea level record from the Port of Avonmouth in the Bristol Channel, the UK National Tidal and Sea Level Facility (NTSLF) tested the potential for reducing tidal prediction errors, using three alternatives to the Harmonic Method of tidal prediction. The three methods evaluated were (1) the use of Artificial Neural Network (ANN) models, (2) the Species Concordance technique and (3) a simple empirical procedure for correcting Harmonic Method High Water predictions based upon a few recent observations (referred to as the Empirical Correction Method). This latter method was then successfully applied to sea level records from an additional 42 of the 45 tide gauges that comprise the UK Tide Gauge Network. Consequently, it is to be incorporated into the operational systems of the UK Coastal Monitoring and Forecasting Partnership in order to improve short-term sea level predictions for the UK and in particular, the accurate estimation of HW extremes.
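A minimal sketch of an empirical correction of this kind is given below: the next harmonic high-water prediction is adjusted by the mean of the most recent observed-minus-predicted high-water errors. The operational Empirical Correction Method formulation is not specified in the abstract and may differ; all values shown are hypothetical.

```python
import numpy as np

def corrected_high_water(predicted_next_hw, recent_predicted_hw, recent_observed_hw,
                         n_recent=5):
    """Adjust the next harmonic-method high-water prediction by the average of
    the most recent observed-minus-predicted high-water errors from a
    real-time tide gauge. A minimal sketch of an 'empirical correction'
    approach; the operational NTSLF formulation may differ."""
    errors = np.asarray(recent_observed_hw, float) - np.asarray(recent_predicted_hw, float)
    correction = errors[-n_recent:].mean()
    return predicted_next_hw + correction

# Hypothetical recent high waters at a large-tidal-range port (metres):
pred = [12.9, 13.1, 12.7, 13.3, 12.8]
obs  = [13.1, 13.4, 12.9, 13.6, 13.0]
print(corrected_high_water(13.2, pred, obs))   # raises the forecast by the mean recent error (~0.24 m)
```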
Gibbs Sampler-Based λ-Dynamics and Rao-Blackwell Estimator for Alchemical Free Energy Calculation.
Ding, Xinqiang; Vilseck, Jonah Z; Hayes, Ryan L; Brooks, Charles L
2017-06-13
λ-dynamics is a generalized ensemble method for alchemical free energy calculations. In traditional λ-dynamics, the alchemical switch variable λ is treated as a continuous variable ranging from 0 to 1 and an empirical estimator is utilized to approximate the free energy. In the present article, we describe an alternative formulation of λ-dynamics that utilizes the Gibbs sampler framework, which we call Gibbs sampler-based λ-dynamics (GSLD). GSLD, like traditional λ-dynamics, can be readily extended to calculate free energy differences between multiple ligands in one simulation. We also introduce a new free energy estimator, the Rao-Blackwell estimator (RBE), for use in conjunction with GSLD. Compared with the current empirical estimator, the RBE is unbiased and usually has smaller variance. We also show that the multistate Bennett acceptance ratio equation or the unbinned weighted histogram analysis method equation can be derived using the RBE. We illustrate the use and performance of this new free energy computational framework by application to a simple harmonic system as well as relevant calculations of small molecule relative free energies of solvation and binding to a protein receptor. Our findings demonstrate consistent and improved performance compared with conventional alchemical free energy methods.
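A toy illustration of the idea (not the authors' CHARMM implementation): for two harmonic "end states" the free energy difference is known analytically, a Gibbs sampler alternates between x | λ and λ | x, and a Rao-Blackwell-style estimator averages the conditional probabilities p(λ | x) instead of the raw visit counts.

    import math, random

    beta, k0, k1 = 1.0, 1.0, 4.0                  # inverse temperature and force constants (toy values)
    exact_dF = 0.5 / beta * math.log(k1 / k0)     # F1 - F0 for harmonic wells

    def u(lam, x):                                # potential energy of state lam at configuration x
        return 0.5 * (k1 if lam else k0) * x * x

    def p1_given_x(x):                            # conditional probability p(lambda = 1 | x)
        w0, w1 = math.exp(-beta * u(0, x)), math.exp(-beta * u(1, x))
        return w1 / (w0 + w1)

    random.seed(0)
    lam, n_visits, p_sums = 0, [0, 0], [0.0, 0.0]
    for _ in range(200_000):
        # Gibbs step 1: sample x | lambda (Gaussian for a harmonic well)
        sigma = 1.0 / math.sqrt(beta * (k1 if lam else k0))
        x = random.gauss(0.0, sigma)
        # Gibbs step 2: sample lambda | x
        p1 = p1_given_x(x)
        lam = 1 if random.random() < p1 else 0
        n_visits[lam] += 1
        p_sums[0] += 1.0 - p1                     # Rao-Blackwell: accumulate conditional probabilities
        p_sums[1] += p1

    count_est = -math.log(n_visits[1] / n_visits[0]) / beta
    rb_est = -math.log(p_sums[1] / p_sums[0]) / beta
    print(f"exact {exact_dF:.3f}  visit counts {count_est:.3f}  Rao-Blackwell {rb_est:.3f}")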
Selecting a restoration technique to minimize OCR error.
Cannon, M; Fugate, M; Hush, D R; Scovel, C
2003-01-01
This paper introduces a learning problem related to the task of converting printed documents to ASCII text files. The goal of the learning procedure is to produce a function that maps documents to restoration techniques in such a way that on average the restored documents have minimum optical character recognition error. We derive a general form for the optimal function and use it to motivate the development of a nonparametric method based on nearest neighbors. We also develop a direct method of solution based on empirical error minimization for which we prove a finite sample bound on estimation error that is independent of distribution. We show that this empirical error minimization problem is an extension of the empirical optimization problem for traditional M-class classification with general loss function and prove computational hardness for this problem. We then derive a simple iterative algorithm called generalized multiclass ratchet (GMR) and prove that it produces an optimal function asymptotically (with probability 1). To obtain the GMR algorithm we introduce a new data map that extends Kesler's construction for the multiclass problem and then apply an algorithm called Ratchet to this mapped data, where Ratchet is a modification of the Pocket algorithm. Finally, we apply these methods to a collection of documents and report on the experimental results.
Thermophysical properties of simple liquid metals: A brief review of theory
NASA Technical Reports Server (NTRS)
Stroud, David
1993-01-01
In this paper, we review the current theory of the thermophysical properties of simple liquid metals. The emphasis is on thermodynamic properties, but we also briefly discuss the nonequilibrium properties of liquid metals. We begin by defining a 'simple liquid metal' as one in which the valence electrons interact only weakly with the ionic cores, so that the interaction can be treated by perturbation theory. We then write down the equilibrium Hamiltonian of a liquid metal as a sum of five terms: the bare ion-ion interaction, the electron-electron interaction, the bare electron-ion interaction, and the kinetic energies of electrons and ions. Since the electron-ion interaction can be treated by perturbation theory, the electronic part contributes in two ways to the Helmholtz free energy: it gives a density-dependent term which is independent of the arrangement of ions, and it acts to screen the ion-ion interaction, giving rise to effective ion-ion pair potentials which are density-dependent, in general. After sketching the form of a typical pair potential, we briefly enumerate some methods for calculating the ionic distribution function and hence the Helmholtz free energy of the liquid: Monte Carlo simulations, molecular dynamics simulations, and thermodynamic perturbation theory. The final result is a general expression for the Helmholtz free energy of the liquid metal. It can be used to calculate a wide range of thermodynamic properties of simple metal liquids, which we enumerate. They include not only a range of thermodynamic coefficients of both metals and alloys, but also many aspects of the phase diagram, including freezing curves of pure elements and phase diagrams of liquid alloys (including liquidus and solidus curves). We briefly mention some key discoveries resulting from previous applications of this method, and point out that the same methods work for other materials not normally considered to be liquid metals (such as colloidal suspensions, in which the suspended microspheres behave like ions screened by the salt solution in which they are suspended). We conclude with a brief discussion of some non-equilibrium (i.e., transport) properties which can be treated by an extension of these methods. These include electrical resistivity, thermal conductivity, viscosity, atomic self-diffusion coefficients, concentration diffusion coefficients in alloys, surface tension and thermal emissivity. Finally, we briefly mention two methods by which the theory might be extended to non-simple liquid metals: these are empirical techniques (i.e., empirical two- and three-body potentials), and numerical many-body approaches. Both may be potentially applicable to extremely complex systems, such as nonstoichiometric liquid semiconductor alloys.
Sepulveda, Adam; Lowe, Winsor H.; Marra, Peter P.
2012-01-01
Although we did not identify mechanisms that facilitate salamander and fish coexistence, our empirical data and use of novel approaches to describe the trophic niche did yield important insights on the role of predator–prey interactions and cannibalism as alternative coexistence mechanisms. In addition, we found that 95% kernel estimators are a simple and robust method to describe population-level measures of trophic structure.
DOT National Transportation Integrated Search
1979-01-01
Presented is a relatively simple empirical equation that reasonably approximates the relationship between mesoscale carbon monoxide (CO) concentrations, areal vehicular CO emission rates, and the meteorological factors of wind speed and mixing height...
Shear velocity criterion for incipient motion of sediment
Simoes, Francisco J.
2014-01-01
The prediction of incipient motion has had great importance to the theory of sediment transport. The most commonly used methods are based on the concept of critical shear stress and employ an approach similar, or identical, to the Shields diagram. An alternative method that uses the movability number, defined as the ratio of the shear velocity to the particle’s settling velocity, was employed in this study. A large amount of experimental data were used to develop an empirical incipient motion criterion based on the movability number. It is shown that this approach can provide a simple and accurate method of computing the threshold condition for sediment motion.
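A hedged sketch of how a movability-number check might look, using the Ferguson and Church (2004) settling-velocity formula and an assumed constant critical movability number (Simoes' empirical criterion is actually a function of a dimensionless particle parameter and is not reproduced here):

    import math

    g, nu, rho, rho_s = 9.81, 1.0e-6, 1000.0, 2650.0   # SI units; water near 20 C, quartz grains
    R = (rho_s - rho) / rho                            # submerged specific gravity

    def settling_velocity(d, c1=18.0, c2=1.0):
        """Ferguson & Church (2004) settling velocity for a grain of diameter d (m); c1, c2 assumed for natural sand."""
        return R * g * d * d / (c1 * nu + math.sqrt(0.75 * c2 * R * g * d ** 3))

    def shear_velocity(depth, slope):
        """Shear velocity u* = sqrt(g h S) for uniform open-channel flow."""
        return math.sqrt(g * depth * slope)

    def incipient_motion(d, depth, slope, critical_movability=0.2):
        """Compare the movability number u*/w_s with an assumed threshold value."""
        k = shear_velocity(depth, slope) / settling_velocity(d)
        return k, k > critical_movability

    print(incipient_motion(d=0.002, depth=0.5, slope=0.001))   # 2 mm gravel in a 0.5 m deep flow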
NASA Astrophysics Data System (ADS)
Youn, Younghan; Koo, Jeong-Seo
Complete evaluation of side vehicle structure and occupant protection is only possible by means of a full-scale side impact crash test. However, auto part manufacturers such as door trim makers cannot conduct that test, especially while the vehicle is still under development. The main objective of this study is to obtain design guidelines from a simple component-level impact test. The relationship between the target absorption energy and the impactor speed was examined using the energy absorbed by the door trim, since each vehicle type requires a different energy level from the door trim. A simple impact test method was developed to estimate abdominal injury by measuring the reaction force of the impactor; the reaction force is converted to an energy level by the proposed formula. The target absorption energy for the door trim alone and the impact speed of the simple impactor are derived theoretically from conservation of energy. With the calculated speed of the dummy and the effective mass of the abdomen, the energy allocated to the abdomen area of the door trim was computed, and the impactor speed was then obtained from the equivalent energy absorbed by the door trim during the full crash test. The proposed design procedure for evaluating abdominal injury of the door trim with a simple impact test was demonstrated. The paper also determines the sensitivity of several design factors for reducing abdominal injury values using an orthogonal array matrix. In conclusion, combining theoretical considerations and empirical test data, the main objective, standardization of door trim design using the simple impact test method, was achieved.
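A minimal sketch of the energy-equivalence step described above, with hypothetical numbers: the impactor speed is chosen so that its kinetic energy equals the energy the door trim must absorb in the full-scale test, E = ½ m v².

    import math

    def impactor_speed(target_energy_J, impactor_mass_kg):
        """Speed giving the impactor the same kinetic energy the door trim absorbs in the full test."""
        return math.sqrt(2.0 * target_energy_J / impactor_mass_kg)

    def trim_target_energy(abdomen_mass_kg, abdomen_speed_ms, absorbed_fraction):
        """Energy allocated to the door-trim abdomen area as a fraction of the dummy's kinetic energy (assumed split)."""
        return absorbed_fraction * 0.5 * abdomen_mass_kg * abdomen_speed_ms ** 2

    # Hypothetical values: 12 kg effective abdomen mass, 8 m/s dummy speed, 30 % absorbed by the trim
    E = trim_target_energy(12.0, 8.0, 0.30)
    print(f"target energy {E:.0f} J, impactor speed {impactor_speed(E, 15.0):.2f} m/s for a 15 kg impactor")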
Dynamic properties of composite cemented clay.
Cai, Yuan-Qiang; Liang, Xu
2004-03-01
In this work, the dynamic properties of composite cemented clay under a wide range of strains were studied, considering the effects of different mixing ratios and confining pressures, through dynamic triaxial tests. A simple and practical method to estimate the dynamic elastic modulus and damping ratio is proposed in this paper, and a related empirical normalized formula is also presented. The results provide useful guidelines for preliminary estimation of the cement required to improve the dynamic properties of clays.
NASA Astrophysics Data System (ADS)
Chen, Yue; Cunningham, Gregory; Henderson, Michael
2016-09-01
This study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Second, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ˜ 2°, than those from the three empirical models with averaged errors > ˜ 5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. Finally, as applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. This study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.
Chen, Yue; Cunningham, Gregory; Henderson, Michael
2016-09-21
Our study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Furthermore, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ~2°, than those from the three empirical models with averaged errors > ~5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. As applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. Finally, this study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yue; Cunningham, Gregory; Henderson, Michael
Our study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Furthermore, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ~2°, than those from the three empirical models with averaged errors > ~5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. As applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. Finally, this study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.
Machine learning strategies for systems with invariance properties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ling, Julia; Jones, Reese E.; Templeton, Jeremy Alan
Here, in many scientific fields, empirical models are employed to facilitate computational simulations of engineering systems. For example, in fluid mechanics, empirical Reynolds stress closures enable computationally-efficient Reynolds-Averaged Navier-Stokes simulations. Likewise, in solid mechanics, constitutive relations between the stress and strain in a material are required in deformation analysis. Traditional methods for developing and tuning empirical models usually combine physical intuition with simple regression techniques on limited data sets. The rise of high-performance computing has led to a growing availability of high-fidelity simulation data, which open up the possibility of using machine learning algorithms, such as random forests or neural networks, to develop more accurate and general empirical models. A key question when using data-driven algorithms to develop these models is how domain knowledge should be incorporated into the machine learning process. This paper will specifically address physical systems that possess symmetry or invariance properties. Two different methods for teaching a machine learning model an invariance property are compared. In the first, a basis of invariant inputs is constructed, and the machine learning model is trained upon this basis, thereby embedding the invariance into the model. In the second method, the algorithm is trained on multiple transformations of the raw input data until the model learns invariance to that transformation. Results are discussed for two case studies: one in turbulence modeling and one in crystal elasticity. It is shown that in both cases embedding the invariance property into the input features yields higher performance with significantly reduced computational training costs.
Machine learning strategies for systems with invariance properties
Ling, Julia; Jones, Reese E.; Templeton, Jeremy Alan
2016-05-06
Here, in many scientific fields, empirical models are employed to facilitate computational simulations of engineering systems. For example, in fluid mechanics, empirical Reynolds stress closures enable computationally-efficient Reynolds-Averaged Navier-Stokes simulations. Likewise, in solid mechanics, constitutive relations between the stress and strain in a material are required in deformation analysis. Traditional methods for developing and tuning empirical models usually combine physical intuition with simple regression techniques on limited data sets. The rise of high-performance computing has led to a growing availability of high-fidelity simulation data, which open up the possibility of using machine learning algorithms, such as random forests or neural networks, to develop more accurate and general empirical models. A key question when using data-driven algorithms to develop these models is how domain knowledge should be incorporated into the machine learning process. This paper will specifically address physical systems that possess symmetry or invariance properties. Two different methods for teaching a machine learning model an invariance property are compared. In the first, a basis of invariant inputs is constructed, and the machine learning model is trained upon this basis, thereby embedding the invariance into the model. In the second method, the algorithm is trained on multiple transformations of the raw input data until the model learns invariance to that transformation. Results are discussed for two case studies: one in turbulence modeling and one in crystal elasticity. It is shown that in both cases embedding the invariance property into the input features yields higher performance with significantly reduced computational training costs.
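A small sketch of the first strategy, embedding a known invariance (here, rotation invariance in 2D) by training on an invariant input feature rather than on raw coordinates. The toy target and features are assumptions for illustration; the paper's turbulence and elasticity case studies use tensor invariant bases, not this example.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(2000, 2))       # raw inputs: 2D points
    y = np.exp(-np.sum(X**2, axis=1))                # target depends only on the radius (rotation invariant)

    # Strategy 1: construct an invariant input basis (here just r^2) and train on it.
    X_inv = np.sum(X**2, axis=1, keepdims=True)
    model_inv = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_inv, y)

    # Baseline: train on raw coordinates; the model must learn the invariance from data.
    model_raw = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

    # Evaluate on rotated test points: the invariant-feature model is unaffected by the rotation.
    theta = np.pi / 3
    Rm = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    X_test = rng.uniform(-1.0, 1.0, size=(500, 2)) @ Rm.T
    y_test = np.exp(-np.sum(X_test**2, axis=1))
    print("invariant features R^2:", model_inv.score(np.sum(X_test**2, axis=1, keepdims=True), y_test))
    print("raw coordinates   R^2:", model_raw.score(X_test, y_test))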
NASA Astrophysics Data System (ADS)
Yin, Yip Chee; Hock-Eam, Lim
2012-09-01
Our empirical results show that GDP growth rates can be predicted more accurately for continents with fewer large economies than for smaller economies such as Malaysia. This difficulty is very likely positively correlated with subsidy or social security policies. The stage of economic development and the level of competitiveness also appear to have interactive effects on forecast stability. These results are largely independent of the forecasting procedure. For countries with highly stable economic growth, forecasting by model selection is better than model averaging. Overall, forecast weight averaging (FWA) is the better forecasting procedure in most countries; FWA also outperforms simple model averaging (SMA) and matches the forecasting ability of Bayesian model averaging (BMA) in almost all countries.
Quantitative genetic versions of Hamilton's rule with empirical applications
McGlothlin, Joel W.; Wolf, Jason B.; Brodie, Edmund D.; Moore, Allen J.
2014-01-01
Hamilton's theory of inclusive fitness revolutionized our understanding of the evolution of social interactions. Surprisingly, an incorporation of Hamilton's perspective into the quantitative genetic theory of phenotypic evolution has been slow, despite the popularity of quantitative genetics in evolutionary studies. Here, we discuss several versions of Hamilton's rule for social evolution from a quantitative genetic perspective, emphasizing its utility in empirical applications. Although evolutionary quantitative genetics offers methods to measure each of the critical parameters of Hamilton's rule, empirical work has lagged behind theory. In particular, we lack studies of selection on altruistic traits in the wild. Fitness costs and benefits of altruism can be estimated using a simple extension of phenotypic selection analysis that incorporates the traits of social interactants. We also discuss the importance of considering the genetic influence of the social environment, or indirect genetic effects (IGEs), in the context of Hamilton's rule. Research in social evolution has generated an extensive body of empirical work focusing—with good reason—almost solely on relatedness. We argue that quantifying the roles of social and non-social components of selection and IGEs, in addition to relatedness, is now timely and should provide unique additional insights into social evolution. PMID:24686930
Application of empirical and dynamical closure methods to simple climate models
NASA Astrophysics Data System (ADS)
Padilla, Lauren Elizabeth
This dissertation applies empirically- and physically-based methods for closure of uncertain parameters and processes to three model systems that lie on the simple end of climate model complexity. Each model isolates one of three sources of closure uncertainty: uncertain observational data, large dimension, and wide ranging length scales. They serve as efficient test systems toward extension of the methods to more realistic climate models. The empirical approach uses the Unscented Kalman Filter (UKF) to estimate the transient climate sensitivity (TCS) parameter in a globally-averaged energy balance model. Uncertainty in climate forcing and historical temperature make TCS difficult to determine. A range of probabilistic estimates of TCS computed for various assumptions about past forcing and natural variability corroborate ranges reported in the IPCC AR4 found by different means. Also computed are estimates of how quickly uncertainty in TCS may be expected to diminish in the future as additional observations become available. For higher system dimensions the UKF approach may become prohibitively expensive. A modified UKF algorithm is developed in which the error covariance is represented by a reduced-rank approximation, substantially reducing the number of model evaluations required to provide probability densities for unknown parameters. The method estimates the state and parameters of an abstract atmospheric model, known as Lorenz 96, with accuracy close to that of a full-order UKF for 30-60% rank reduction. The physical approach to closure uses the Multiscale Modeling Framework (MMF) to demonstrate closure of small-scale, nonlinear processes that would not be resolved directly in climate models. A one-dimensional, abstract test model with a broad spatial spectrum is developed. The test model couples the Kuramoto-Sivashinsky equation to a transport equation that includes cloud formation and precipitation-like processes. In the test model, three main sources of MMF error are evaluated independently. Loss of nonlinear multi-scale interactions and periodic boundary conditions in closure models were dominant sources of error. Using a reduced order modeling approach to maximize energy content allowed reduction of the closure model dimension by up to 75% without loss in accuracy. MMF and a comparable alternative model performed equally well compared to direct numerical simulation.
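For context, a globally averaged energy balance model of the kind used for TCS estimation is typically of the form C dT/dt = F(t) − λT, with the transient response set by the feedback parameter λ and effective heat capacity C. A minimal forward model of this form, with assumed parameter values, to which a filter such as the UKF could be applied:

    import numpy as np

    def energy_balance_model(forcing, lam=1.2, heat_capacity=8.0, dt_years=1.0):
        """Zero-dimensional energy balance model: C dT/dt = F(t) - lam * T.

        forcing       : array of annual radiative forcing values (W m-2)
        lam           : climate feedback parameter (W m-2 K-1), assumed value
        heat_capacity : effective heat capacity (W yr m-2 K-1), assumed value
        Returns the temperature anomaly time series (K).
        """
        T = np.zeros(len(forcing) + 1)
        for i, F in enumerate(forcing):
            T[i + 1] = T[i] + dt_years * (F - lam * T[i]) / heat_capacity
        return T[1:]

    years = np.arange(1850, 2011)
    forcing = 2.5 * (years - 1850) / 160.0        # crude linear ramp to ~2.5 W m-2 (illustrative only)
    print(f"{energy_balance_model(forcing)[-1]:.2f} K warming by 2010 under these toy assumptions")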
The limitations of simple gene set enrichment analysis assuming gene independence.
Tamayo, Pablo; Steinhardt, George; Liberzon, Arthur; Mesirov, Jill P
2016-02-01
Since its first publication in 2003, the Gene Set Enrichment Analysis method, based on the Kolmogorov-Smirnov statistic, has been heavily used, modified, and also questioned. Recently a simplified approach using a one-sample t-test score to assess enrichment and ignoring gene-gene correlations was proposed by Irizarry et al. 2009 as a serious contender. The argument criticizes Gene Set Enrichment Analysis's nonparametric nature and its use of an empirical null distribution as unnecessary and hard to compute. We refute these claims by careful consideration of the assumptions of the simplified method and its results, including a comparison with Gene Set Enrichment Analysis on a large benchmark set of 50 datasets. Our results provide strong empirical evidence that gene-gene correlations cannot be ignored due to the significant variance inflation they produce on the enrichment scores and should be taken into account when estimating gene set enrichment significance. In addition, we discuss the challenges that the complex correlation structure and multi-modality of gene sets pose more generally for gene set enrichment methods. © The Author(s) 2012.
DFLOWZ: A free program to evaluate the area potentially inundated by a debris flow
NASA Astrophysics Data System (ADS)
Berti, M.; Simoni, A.
2014-06-01
The transport and deposition mechanisms of debris flows are still poorly understood due to the complexity of the interactions governing the behavior of water-sediment mixtures. Empirical-statistical methods can therefore be used, instead of more sophisticated numerical methods, to predict the depositional behavior of these highly dangerous gravitational movements. We use widely accepted semi-empirical scaling relations and propose an automated procedure (DFLOWZ) to estimate the area potentially inundated by a debris flow event. Besides a digital elevation model (DEM), the procedure has only two input requirements: the debris flow volume and the possible flow path. The procedure is implemented in Matlab and a Graphical User Interface helps to visualize initial conditions, flow propagation and final results. Different hypotheses about the depositional behavior of an event can be tested, together with the possible effect of simple remedial measures. Uncertainties associated with the scaling relations can be treated and their impact on results evaluated. Our freeware application aims to facilitate and speed up the process of susceptibility mapping. We discuss limits and advantages of the method in order to inform inexperienced users.
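The semi-empirical scaling relations used by this class of methods link cross-sectional area A and planimetric area B to event volume V as A = k_A·V^(2/3) and B = k_B·V^(2/3). A sketch with illustrative coefficient values (the calibrated coefficients and the DEM-based spreading step in DFLOWZ are not reproduced here):

    def debris_flow_areas(volume_m3, k_cross=0.08, k_plan=20.0):
        """Semi-empirical mobility relations A = k_cross * V^(2/3), B = k_plan * V^(2/3).

        Coefficient values here are illustrative placeholders; published calibrations
        differ between settings and must be chosen for the study area.
        """
        v23 = volume_m3 ** (2.0 / 3.0)
        cross_section_area = k_cross * v23    # m^2, limits flow depth at a section
        planimetric_area = k_plan * v23       # m^2, limits the total inundated area
        return cross_section_area, planimetric_area

    for V in (1e3, 1e4, 1e5):                 # event volumes in m^3
        A, B = debris_flow_areas(V)
        print(f"V={V:8.0f} m^3  A={A:7.1f} m^2  B={B:9.0f} m^2")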
An Empirical Non-TNT Approach to Launch Vehicle Explosion Modeling
NASA Technical Reports Server (NTRS)
Blackwood, James M.; Skinner, Troy; Richardson, Erin H.; Bangham, Michal E.
2015-01-01
In an effort to increase crew survivability from catastrophic explosions of Launch Vehicles (LV), a study was conducted to determine the best method for predicting LV explosion environments in the near field. After reviewing such methods as TNT equivalence, Vapor Cloud Explosion (VCE) theory, and Computational Fluid Dynamics (CFD), it was determined that the best approach for this study was to assemble all available empirical data from full scale launch vehicle explosion tests and accidents. Approximately 25 accidents or full-scale tests were found that had some amount of measured blast wave, thermal, or fragment explosion environment characteristics. Blast wave overpressure was found to be much lower in the near field than predicted by most TNT equivalence methods. Additionally, fragments tended to be larger, fewer, and slower than expected if the driving force was from a high explosive type event. In light of these discoveries, a simple model for cryogenic rocket explosions is presented. Predictions from this model encompass all known applicable full scale launch vehicle explosion data. Finally, a brief description of on-going analysis and testing to further refine the launch vehicle explosion environment is discussed.
Estimating tuberculosis incidence from primary survey data: a mathematical modeling approach.
Pandey, S; Chadha, V K; Laxminarayan, R; Arinaminpathy, N
2017-04-01
There is an urgent need for improved estimations of the burden of tuberculosis (TB). Our objective was to develop a new quantitative method based on mathematical modelling and to demonstrate its application to TB in India. We developed a simple model of TB transmission dynamics to estimate the annual incidence of TB disease from the annual risk of tuberculous infection and the prevalence of smear-positive TB. We first compared model estimates for annual infections per smear-positive TB case against previous empirical estimates from China, Korea and the Philippines. We then applied the model to estimate TB incidence in India, stratified by urban and rural settings. The model estimates show agreement with previous empirical estimates. Applied to India, the model suggests an annual incidence of smear-positive TB of 89.8 per 100 000 population (95%CI 56.8-156.3). Results show differences in urban and rural TB: while an urban TB case infects more individuals per year, a rural TB case remains infectious for appreciably longer, suggesting the need for interventions tailored to these different settings. Simple models of TB transmission, in conjunction with the necessary data, can offer approaches to burden estimation that complement those currently being used.
An analysis code for the Rapid Engineering Estimation of Momentum and Energy Losses (REMEL)
NASA Technical Reports Server (NTRS)
Dechant, Lawrence J.
1994-01-01
Nonideal behavior has traditionally been modeled by defining an efficiency (a comparison between actual and isentropic processes), which is then specified by empirical or heuristic methods. With the increasing complexity of aeropropulsion system designs, the reliability of these more traditional methods is uncertain. Computational fluid dynamics (CFD) and experimental methods can provide this information but are expensive in terms of human resources, cost, and time. This report discusses an alternative to empirical and CFD methods: applying classical analytical techniques and a simplified flow model to provide rapid engineering estimates of these losses based on steady, quasi-one-dimensional governing equations including viscous and heat transfer terms (estimated by Reynolds analogy). For preliminary verification, REMEL has been compared with full Navier-Stokes (FNS) and CFD boundary layer computations for several high-speed inlet and forebody designs. The method compares quite well with these more complex computations, and its solutions agree very well with simple degenerate and asymptotic results such as Fanno flow, isentropic variable-area flow, and a newly developed solution for combined variable-area duct flow with friction. These comparisons suggest the code may offer an alternative to traditional and CFD-intensive methods for the rapid estimation of viscous and heat transfer losses in aeropropulsion systems.
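As a flavour of the degenerate cases used for verification, the Fanno-flow limit (adiabatic, constant-area duct with wall friction) reduces to a single ODE for the Mach number, dM²/dx = γM⁴(1 + (γ−1)/2·M²)/(1 − M²) · 4c_f/D. A simple explicit integration with an assumed friction coefficient and duct geometry (not the REMEL formulation itself):

    def fanno_march(m_inlet=0.3, gamma=1.4, cf=0.005, diameter=0.05, length=2.0, steps=20000):
        """March the Fanno-flow ODE for M^2 along a constant-area duct with friction (cf = Fanning friction coefficient)."""
        dx = length / steps
        m2 = m_inlet ** 2
        for _ in range(steps):
            dm2_dx = gamma * m2 ** 2 * (1.0 + 0.5 * (gamma - 1.0) * m2) / (1.0 - m2) * 4.0 * cf / diameter
            m2 += dm2_dx * dx
            if m2 >= 1.0:                 # the flow has choked before the duct exit
                return None
        return m2 ** 0.5

    exit_mach = fanno_march()
    print("exit Mach number:", exit_mach if exit_mach else "choked")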
Visualising higher order Brillouin zones with applications
NASA Astrophysics Data System (ADS)
Andrew, R. C.; Salagaram, T.; Chetty, N.
2017-05-01
A key concept in material science is the relationship between the Bravais lattice, the reciprocal lattice and the resulting Brillouin zones (BZ). These zones are often complicated shapes that are hard to construct and visualise without the use of sophisticated software, even by professional scientists. We have used a simple sorting algorithm to construct BZ of any order for a chosen Bravais lattice that is easy to implement in any scientific programming language. The resulting zones can then be visualised using freely available plotting software. This method has pedagogical value for upper-level undergraduate students since, along with other computational methods, it can be used to illustrate how constant-energy surfaces combine with these zones to create van Hove singularities in the density of states. In this paper we apply our algorithm along with the empirical pseudopotential method and the 2D equivalent of the tetrahedron method to show how they can be used in a simple software project to investigate this interaction for a 2D crystal. This project not only enhances students’ fundamental understanding of the principles involved but also improves transferable coding skills.
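The sorting construction itself is compact enough to sketch: for each sampled k-point, compute its distance to every nearby reciprocal lattice point and sort; the k-point lies in the nth Brillouin zone if the origin (the lattice point G = 0) is its nth-closest reciprocal lattice point. A minimal 2D square-lattice version (illustrative, not the authors' code):

    import numpy as np

    def brillouin_zone_order(kpoints, b1, b2, shells=4):
        """Return the BZ order of each 2D k-point for a lattice with reciprocal vectors b1, b2."""
        n = np.arange(-shells, shells + 1)
        G = np.array([i * np.asarray(b1) + j * np.asarray(b2) for i in n for j in n])  # reciprocal lattice points
        d = np.linalg.norm(kpoints[:, None, :] - G[None, :, :], axis=2)                # |k - G| for every G
        order = np.argsort(d, axis=1)                                                  # nearest-G ranking per k-point
        origin_index = np.flatnonzero((G == 0).all(axis=1))[0]
        return np.argmax(order == origin_index, axis=1) + 1      # rank of G = 0 gives the zone number

    # Square lattice with lattice constant a = 1: b1 = 2*pi*x, b2 = 2*pi*y
    b1, b2 = (2 * np.pi, 0.0), (0.0, 2 * np.pi)
    k = np.array([[0.1, 0.0], [4.0, 0.0], [4.0, 4.0]])
    print(brillouin_zone_order(k, b1, b2))     # -> zones 1, 2, 4 for these sample points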
Mehmandoust, Babak; Sanjari, Ehsan; Vatani, Mostafa
2013-01-01
The heat of vaporization of a pure substance at its normal boiling temperature is a very important property in many chemical processes. In this work, a new empirical method was developed to predict vaporization enthalpy of pure substances. This equation is a function of normal boiling temperature, critical temperature, and critical pressure. The presented model is simple to use and provides an improvement over the existing equations for 452 pure substances in a wide boiling range. The results showed that the proposed correlation is more accurate than the literature methods for pure substances in a wide boiling range (20.3–722 K). PMID:25685493
Mehmandoust, Babak; Sanjari, Ehsan; Vatani, Mostafa
2014-03-01
The heat of vaporization of a pure substance at its normal boiling temperature is a very important property in many chemical processes. In this work, a new empirical method was developed to predict vaporization enthalpy of pure substances. This equation is a function of normal boiling temperature, critical temperature, and critical pressure. The presented model is simple to use and provides an improvement over the existing equations for 452 pure substances in a wide boiling range. The results showed that the proposed correlation is more accurate than the literature methods for pure substances in a wide boiling range (20.3-722 K).
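The paper's new correlation is not reproduced in the abstract; for illustration, the classical Riedel correlation, one of the literature methods in the same family (a function of Tb, Tc and Pc), can be sketched as follows. Treat this as an example of the class of correlations being improved upon, not as the proposed equation.

    import math

    R = 8.314  # J mol-1 K-1

    def riedel_hvap_at_tb(tb_K, tc_K, pc_bar):
        """Riedel correlation for the enthalpy of vaporization at the normal boiling point (J/mol)."""
        tbr = tb_K / tc_K
        return 1.093 * R * tc_K * tbr * (math.log(pc_bar) - 1.013) / (0.930 - tbr)

    # Water: Tb = 373.15 K, Tc = 647.1 K, Pc = 220.6 bar; the experimental value is about 40.7 kJ/mol
    print(f"{riedel_hvap_at_tb(373.15, 647.1, 220.6) / 1000.0:.1f} kJ/mol")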
Using Paperclips to Explain Empirical Formulas to Students
ERIC Educational Resources Information Center
Nassiff, Peter; Czerwinski, Wendy A.
2014-01-01
Early in their chemistry education, students learn to do empirical formula calculations by rote without an understanding of the historical context behind them or the reason why their calculations work. In these activities, students use paperclip "atoms", construct a series of simple compounds representing real molecules, and discover,…
An Empirical State Error Covariance Matrix for Batch State Estimation
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it follows directly how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty. Also, in its most straightforward form, the technique only requires supplemental calculations to be added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations. Mixed degrees of freedom for an observation set are allowed. As is the case with any simple, empirical sample variance problem, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal or variance terms of the error covariance matrix have a particularly simple form to associate with either a multiple degree of freedom chi-square distribution (more approximate) or with a gamma distribution (less approximate). The off-diagonal or covariance terms of the matrix are less clear in their statistical behavior. However, the off-diagonal covariance matrix elements still lend themselves to standard confidence interval error analysis. The distributional forms associated with the off-diagonal terms are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained through use of the proposed technique are presented. The example consists of a simple, two-observer triangulation problem with range-only measurements. Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).
Helicopter rotor and engine sizing for preliminary performance estimation
NASA Technical Reports Server (NTRS)
Talbot, P. D.; Bowles, J. V.; Lee, H. C.
1986-01-01
Methods are presented for estimating some of the more fundamental design variables of single-rotor helicopters (tip speed, blade area, disk loading, and installed power) based on design requirements (speed, weight, fuselage drag, and design hover ceiling). The well-known constraints of advancing-blade compressibility and retreating-blade stall are incorporated into the estimation process, based on an empirical interpretation of rotor performance data from large-scale wind-tunnel tests. Engine performance data are presented and correlated with a simple model usable for preliminary design. When approximate results are required quickly, these methods may be more convenient to use and provide more insight than large digital computer programs.
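A minimal sketch of the momentum-theory relations that underlie this kind of preliminary sizing, using an assumed figure of merit to go from ideal to actual hover power (the report's empirical compressibility and retreating-blade-stall constraints are not included):

    import math

    def hover_power_kw(gross_weight_N, rotor_radius_m, rho=1.225, figure_of_merit=0.75):
        """Ideal hover power from momentum theory, corrected by an assumed figure of merit.

        P_ideal = T^(3/2) / sqrt(2 * rho * A), with thrust T equal to gross weight in hover.
        """
        disk_area = math.pi * rotor_radius_m ** 2
        p_ideal = gross_weight_N ** 1.5 / math.sqrt(2.0 * rho * disk_area)
        return p_ideal / figure_of_merit / 1000.0

    # Illustrative light helicopter: 2000 kg gross weight, 5.3 m rotor radius
    weight = 2000.0 * 9.81
    print(f"disk loading {weight / (math.pi * 5.3**2):.0f} N/m^2, hover power ~{hover_power_kw(weight, 5.3):.0f} kW")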
Xu, Y.; Xia, J.; Miller, R.D.
2006-01-01
Multichannel analysis of surface waves is a developing method widely used in shallow subsurface investigations. The field procedures and related parameters are very important for successful applications. Among these parameters, the source-receiver offset range is seldom discussed in theory and is normally determined by empirical or semi-quantitative methods in current practice. This paper discusses the problem from a theoretical perspective. A formula for quantitatively evaluating a layered homogeneous elastic model was developed. The analytical results based on simple models and experimental data demonstrate that the formula is correct for surface wave surveys for near-surface applications. © 2005 Elsevier B.V. All rights reserved.
Discriminative components of data.
Peltonen, Jaakko; Kaski, Samuel
2005-01-01
A simple probabilistic model is introduced to generalize classical linear discriminant analysis (LDA) in finding components that are informative of or relevant for data classes. The components maximize the predictability of the class distribution which is asymptotically equivalent to 1) maximizing mutual information with the classes, and 2) finding principal components in the so-called learning or Fisher metrics. The Fisher metric measures only distances that are relevant to the classes, that is, distances that cause changes in the class distribution. The components have applications in data exploration, visualization, and dimensionality reduction. In empirical experiments, the method outperformed, in addition to more classical methods, a Renyi entropy-based alternative while having essentially equivalent computational cost.
Using XCO2 retrievals for assessing the long-term consistency of NDACC/FTIR data sets
NASA Astrophysics Data System (ADS)
Barthlott, S.; Schneider, M.; Hase, F.; Wiegele, A.; Christner, E.; González, Y.; Blumenstock, T.; Dohe, S.; García, O. E.; Sepúlveda, E.; Strong, K.; Mendonca, J.; Weaver, D.; Palm, M.; Deutscher, N. M.; Warneke, T.; Notholt, J.; Lejeune, B.; Mahieu, E.; Jones, N.; Griffith, D. W. T.; Velazco, V. A.; Smale, D.; Robinson, J.; Kivi, R.; Heikkinen, P.; Raffalski, U.
2014-10-01
Within the NDACC (Network for the Detection of Atmospheric Composition Change), more than 20 FTIR (Fourier-Transform InfraRed) spectrometers, spread worldwide, provide long-term data records of many atmospheric trace gases. We present a method that uses measured and modelled XCO2 for assessing the consistency of these data records. Our NDACC XCO2 retrieval setup is kept simple so that it can easily be adopted for any NDACC/FTIR-like measurement made since the late 1950s. By a comparison to coincident TCCON (Total Carbon Column Observing Network) measurements, we empirically demonstrate the useful quality of this NDACC XCO2 product (empirically obtained scatter between TCCON and NDACC is about 4‰ for daily mean as well as monthly mean comparisons and the bias is 25‰). As the XCO2 model, we developed and used a simple regression model fitted to CarbonTracker results and the Mauna Loa CO2 in-situ records. A comparison to TCCON data suggests an uncertainty of the model for monthly mean data of below 3‰. We apply the method to the NDACC/FTIR spectra that are used within the project MUSICA (MUlti-platform remote Sensing of Isotopologues for investigating the Cycle of Atmospheric water) and demonstrate that there is good consistency for this globally representative set of spectra measured since 1996: the scatter between the modelled and measured XCO2 on a yearly time scale is only 3‰.
Using XCO2 retrievals for assessing the long-term consistency of NDACC/FTIR data sets
NASA Astrophysics Data System (ADS)
Barthlott, S.; Schneider, M.; Hase, F.; Wiegele, A.; Christner, E.; González, Y.; Blumenstock, T.; Dohe, S.; García, O. E.; Sepúlveda, E.; Strong, K.; Mendonca, J.; Weaver, D.; Palm, M.; Deutscher, N. M.; Warneke, T.; Notholt, J.; Lejeune, B.; Mahieu, E.; Jones, N.; Griffith, D. W. T.; Velazco, V. A.; Smale, D.; Robinson, J.; Kivi, R.; Heikkinen, P.; Raffalski, U.
2015-03-01
Within the NDACC (Network for the Detection of Atmospheric Composition Change), more than 20 FTIR (Fourier-transform infrared) spectrometers, spread worldwide, provide long-term data records of many atmospheric trace gases. We present a method that uses measured and modelled XCO2 for assessing the consistency of these NDACC data records. Our XCO2 retrieval setup is kept simple so that it can easily be adopted for any NDACC/FTIR-like measurement made since the late 1950s. By a comparison to coincident TCCON (Total Carbon Column Observing Network) measurements, we empirically demonstrate the useful quality of this suggested NDACC XCO2 product (empirically obtained scatter between TCCON and NDACC is about 4‰ for daily mean as well as monthly mean comparisons, and the bias is 25‰). Our XCO2 model is a simple regression model fitted to CarbonTracker results and the Mauna Loa CO2 in situ records. A comparison to TCCON data suggests an uncertainty of the model for monthly mean data of below 3‰. We apply the method to the NDACC/FTIR spectra that are used within the project MUSICA (multi-platform remote sensing of isotopologues for investigating the cycle of atmospheric water) and demonstrate that there is good consistency for this globally representative set of spectra measured since 1996: the scatter between the modelled and measured XCO2 on a yearly time scale is only 3‰.
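The regression model is described only as a fit to CarbonTracker and the Mauna Loa record; a minimal sketch of a model of that general form, a smooth trend plus annual harmonics fitted by least squares, might look like the following (the synthetic data and coefficients are placeholders, not the MUSICA setup):

    import numpy as np

    def fit_xco2_model(decimal_year, xco2_ppm, n_harmonics=2):
        """Least-squares fit of a quadratic trend plus seasonal harmonics to an XCO2 record."""
        t0 = decimal_year.mean()
        cols = [np.ones_like(decimal_year), decimal_year - t0, (decimal_year - t0) ** 2]
        for k in range(1, n_harmonics + 1):
            cols += [np.sin(2 * np.pi * k * decimal_year), np.cos(2 * np.pi * k * decimal_year)]
        coeffs, *_ = np.linalg.lstsq(np.column_stack(cols), xco2_ppm, rcond=None)

        def predict(yr):
            basis = [np.ones_like(yr), yr - t0, (yr - t0) ** 2]
            for k in range(1, n_harmonics + 1):
                basis += [np.sin(2 * np.pi * k * yr), np.cos(2 * np.pi * k * yr)]
            return np.column_stack(basis) @ coeffs

        return predict

    # Synthetic record standing in for CarbonTracker / Mauna Loa data
    yr = np.arange(1996, 2014, 1.0 / 12.0)
    synthetic = 360.0 + 2.0 * (yr - 1996) + 3.0 * np.sin(2 * np.pi * yr) \
                + np.random.default_rng(1).normal(0, 0.3, yr.size)
    model = fit_xco2_model(yr, synthetic)
    print(model(np.array([2010.5])))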
Nelson, Jonathan M.; Kinzel, Paul J.; McDonald, Richard R.; Schmeeckle, Mark
2016-01-01
Recently developed optical and videographic methods for measuring water-surface properties in a noninvasive manner hold great promise for extracting river hydraulic and bathymetric information. This paper describes such a technique, concentrating on the method of infrared videography for measuring surface velocities and both acoustic (laboratory-based) and laser-scanning (field-based) techniques for measuring water-surface elevations. In ideal laboratory situations with simple flows, appropriate spatial and temporal averaging results in accurate water-surface elevations and water-surface velocities. In test cases, this accuracy is sufficient to allow direct inversion of the governing equations of motion to produce estimates of depth and discharge. Unlike other optical techniques for determining local depth that rely on transmissivity of the water column (bathymetric lidar, multi/hyperspectral correlation), this method uses only water-surface information, so even deep and/or turbid flows can be investigated. However, significant errors arise in areas of nonhydrostatic spatial accelerations, such as those associated with flow over bedforms or other relatively steep obstacles. Using laboratory measurements for test cases, the cause of these errors is examined and both a simple semi-empirical method and computational results are presented that can potentially reduce bathymetric inversion errors.
The Empirical Attitude, Material Practice and Design Activities
ERIC Educational Resources Information Center
Apedoe, Xornam; Ford, Michael
2010-01-01
This article is an argument about something that is both important and severely underemphasized in most current science curricula. The empirical attitude, fundamental to science since Galileo, is a habit of mind that motivates an active search for feedback on our ideas from the material world. Although more simple views of science manifest the…
Overview and extensions of a system for routing directed graphs on SIMD architectures
NASA Technical Reports Server (NTRS)
Tomboulian, Sherryl
1988-01-01
Many problems can be described in terms of directed graphs that contain a large number of vertices where simple computations occur using data from adjacent vertices. A method is given for parallelizing such problems on an SIMD machine model that uses only nearest neighbor connections for communication, and has no facility for local indirect addressing. Each vertex of the graph will be assigned to a processor in the machine. Rules for a labeling are introduced that support the use of a simple algorithm for movement of data along the edges of the graph. Additional algorithms are defined for addition and deletion of edges. Modifying or adding a new edge takes the same time as parallel traversal. This combination of architecture and algorithms defines a system that is relatively simple to build and can do fast graph processing. All edges can be traversed in parallel in time O(T), where T is empirically proportional to the average path length in the embedding times the average degree of the graph. Additionally, researchers present an extension to the above method which allows for enhanced performance by allowing some broadcasting capabilities.
Semi-empirical master curve concept describing the rate capability of lithium insertion electrodes
NASA Astrophysics Data System (ADS)
Heubner, C.; Seeba, J.; Liebmann, T.; Nickol, A.; Börner, S.; Fritsch, M.; Nikolowski, K.; Wolter, M.; Schneider, M.; Michaelis, A.
2018-03-01
A simple semi-empirical master curve concept, describing the rate capability of porous insertion electrodes for lithium-ion batteries, is proposed. The model is based on the evaluation of the time constants of lithium diffusion in the liquid electrolyte and the solid active material. This theoretical approach is successfully verified by comprehensive experimental investigations of the rate capability of a large number of porous insertion electrodes with various active materials and design parameters. It turns out, that the rate capability of all investigated electrodes follows a simple master curve governed by the time constant of the rate limiting process. We demonstrate that the master curve concept can be used to determine optimum design criteria meeting specific requirements in terms of maximum gravimetric capacity for a desired rate capability. The model further reveals practical limits of the electrode design, attesting the empirically well-known and inevitable tradeoff between energy and power density.
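A sketch of the time-constant comparison at the heart of the concept: diffusion times τ = L²/D for the electrolyte (over the electrode thickness) and the active material (over the particle radius), with the larger value taken as rate-limiting. Parameter values below are typical orders of magnitude chosen for illustration, not the paper's fitted master curve.

    def rate_limiting_time_constant(electrode_thickness_m, particle_radius_m,
                                    d_electrolyte=3e-10, d_solid=1e-13, tortuosity_factor=3.0):
        """Compare liquid- and solid-phase diffusion time constants (seconds); all parameters are assumed values."""
        tau_liquid = tortuosity_factor * electrode_thickness_m ** 2 / d_electrolyte
        tau_solid = particle_radius_m ** 2 / d_solid
        limiting = "electrolyte" if tau_liquid > tau_solid else "solid"
        return tau_liquid, tau_solid, limiting

    for thickness_um in (50, 100, 200):
        tau_l, tau_s, which = rate_limiting_time_constant(thickness_um * 1e-6, 5e-6)
        print(f"{thickness_um:3d} um electrode: tau_liquid={tau_l:6.0f} s, tau_solid={tau_s:5.0f} s -> {which}-limited")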
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mencuccini, Maurizio; Salmon, Yann; Mitchell, Patrick
Substantial uncertainty surrounds our knowledge of tree stem growth, with some of the most basic questions, such as when stem radial growth occurs through the daily cycle, still unanswered. Here, we employed high-resolution point dendrometers, sap flow sensors, and developed theory and statistical approaches, to devise a novel method separating irreversible radial growth from elastic tension-driven and elastic osmotically driven changes in bark water content. We tested this method using data from five case study species. Experimental manipulations, namely a field irrigation experiment on Scots pine and a stem girdling experiment on red forest gum trees, were used to validate the theory. Time courses of stem radial growth following irrigation and stem girdling were consistent with a-priori predictions. Patterns of stem radial growth varied across case studies, with growth occurring during the day and/or night, consistent with the available literature. Importantly, our approach provides a valuable alternative to existing methods, as it can be approximated by a simple empirical interpolation routine that derives irreversible radial growth using standard regression techniques. In conclusion, our novel method provides an improved understanding of the relative source–sink carbon dynamics of tree stems at a sub-daily time scale.
Mencuccini, Maurizio; Salmon, Yann; Mitchell, Patrick; Hölttä, Teemu; Choat, Brendan; Meir, Patrick; O'Grady, Anthony; Tissue, David; Zweifel, Roman; Sevanto, Sanna; Pfautsch, Sebastian
2017-02-01
Substantial uncertainty surrounds our knowledge of tree stem growth, with some of the most basic questions, such as when stem radial growth occurs through the daily cycle, still unanswered. We employed high-resolution point dendrometers, sap flow sensors, and developed theory and statistical approaches, to devise a novel method separating irreversible radial growth from elastic tension-driven and elastic osmotically driven changes in bark water content. We tested this method using data from five case study species. Experimental manipulations, namely a field irrigation experiment on Scots pine and a stem girdling experiment on red forest gum trees, were used to validate the theory. Time courses of stem radial growth following irrigation and stem girdling were consistent with a-priori predictions. Patterns of stem radial growth varied across case studies, with growth occurring during the day and/or night, consistent with the available literature. Importantly, our approach provides a valuable alternative to existing methods, as it can be approximated by a simple empirical interpolation routine that derives irreversible radial growth using standard regression techniques. Our novel method provides an improved understanding of the relative source-sink carbon dynamics of tree stems at a sub-daily time scale. © 2016 The Authors Plant, Cell & Environment Published by John Wiley & Sons Ltd.
Mencuccini, Maurizio; Salmon, Yann; Mitchell, Patrick; ...
2017-11-12
Substantial uncertainty surrounds our knowledge of tree stem growth, with some of the most basic questions, such as when stem radial growth occurs through the daily cycle, still unanswered. Here, we employed high-resolution point dendrometers, sap flow sensors, and developed theory and statistical approaches, to devise a novel method separating irreversible radial growth from elastic tension-driven and elastic osmotically driven changes in bark water content. We tested this method using data from five case study species. Experimental manipulations, namely a field irrigation experiment on Scots pine and a stem girdling experiment on red forest gum trees, were used to validate the theory. Time courses of stem radial growth following irrigation and stem girdling were consistent with a-priori predictions. Patterns of stem radial growth varied across case studies, with growth occurring during the day and/or night, consistent with the available literature. Importantly, our approach provides a valuable alternative to existing methods, as it can be approximated by a simple empirical interpolation routine that derives irreversible radial growth using standard regression techniques. In conclusion, our novel method provides an improved understanding of the relative source–sink carbon dynamics of tree stems at a sub-daily time scale.
NASA Astrophysics Data System (ADS)
Mikeš, Daniel
2010-05-01
Theoretical geology. Present-day geology is mostly empirical in nature. I claim that geology is by nature complex and that the empirical approach is bound to fail. Let us consider the input to be the set of ambient conditions and the output to be the sedimentary rock record. I claim that the output can only be deduced from the input if the relation from input to output is known. The fundamental question is therefore the following: can one predict the output from the input, i.e. can one predict the behaviour of a sedimentary system? If one can, then the empirical/deductive method stands a chance; if one cannot, then that method is bound to fail. The fundamental problem to solve is therefore the following: how does one predict the behaviour of a sedimentary system?

It is interesting to observe that this question is never asked, and many a study is conducted by the empirical/deductive method; it seems that the empirical method has been accepted as appropriate without question. It is, however, easy to argue that a sedimentary system is by nature complex, that several input parameters vary at the same time, and that they can create similar output in the rock record. It follows trivially from these first principles that in such a case the deductive solution cannot be unique. At the same time, several geological methods depart precisely from the assumption that one particular variable is the dictator/driver and that the others are constant, even though the data do not support such an assumption. The method of "sequence stratigraphy" is a typical example of such a dogma. It can easily be argued that any interpretation resulting from a method built on uncertain or wrong assumptions is erroneous. Still, this method has survived for many years, notwithstanding all the criticism it has received. This is just one example from the present-day geological world, and it is not unique. Even the alternative methods criticising sequence stratigraphy actually depart from the same erroneous assumptions and do not solve the fundamental issue that lies at the base of the problem. This problem is straightforward and obvious: a sedimentary system is inherently four-dimensional (3 spatial dimensions + 1 temporal dimension). Any method using a smaller number of dimensions is bound to fail to describe the evolution of a sedimentary system. It is indicative of the present-day geological world that such fundamental issues are overlooked; the only reason one can point to is the so-called "rationality" of today's society.

Simple common sense leads us to the conclusion that in this case the empirical method is bound to fail and the only method that can solve the problem is the theoretical approach. This reasoning is completely trivial for the traditional exact sciences like physics and mathematics and for applied sciences like engineering, but not for geology, a science that was traditionally descriptive and jumped to empirical science, skipping the stage of theoretical science. I argue that the gap of theoretical geology has been left open and needs to be filled. Every discipline in geology lacks a theoretical base. This base can only be provided by the theoretical/inductive approach and cannot be provided by the empirical/deductive approach. Once a critical mass of geologists realises this flaw in today's geology, we can start solving the fundamental problems in geology.
Principal Effects of Axial Load on Moment-Distribution Analysis of Rigid Structures
NASA Technical Reports Server (NTRS)
James, Benjamin Wylie
1935-01-01
This thesis presents the method of moment distribution modified to include the effect of axial load upon the bending moments. This modification makes it possible to analyze accurately complex structures, such as rigid fuselage trusses, that heretofore had to be analyzed by approximate formulas and empirical rules. The method is simple enough to be practicable even for complex structures, and it gives a means of analysis for continuous beams that is simpler than the extended three-moment equation now in common use. When the effect of axial load is included, it is found that the basic principles of moment distribution remain unchanged, the only difference being that the factors used, instead of being constants for a given member, become functions of the axial load. Formulas have been developed for these factors, and curves plotted, so that their application requires no more work than moment distribution without axial load. Simple problems have been included to illustrate the use of the curves.
Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L
2018-01-01
Aims: A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R2), using R2 as the primary metric of assay agreement. However, the use of R2 alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. Methods: We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing assays (NGS). NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Results: Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. Conclusions: The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. PMID:28747393
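As a rough illustration of the agreement statistics discussed above, the sketch below computes a Bland-Altman bias and 95% limits of agreement for two paired sets of assay values; the function name and the toy variant-allele-fraction numbers are invented for the example and are not from the study.

```python
import numpy as np

def bland_altman(x, y):
    """Bland-Altman agreement statistics for two paired measurement sets.

    Returns the mean difference (constant bias) and the 95% limits of
    agreement (bias +/- 1.96 * SD of the differences).
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    diff = y - x                      # per-sample disagreement
    bias = diff.mean()                # estimate of constant error
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

# Illustrative variant allele fractions from two assays (made-up values)
ref = [0.12, 0.25, 0.40, 0.05, 0.33]
new = [0.14, 0.24, 0.43, 0.06, 0.36]
bias, limits = bland_altman(ref, new)
print(f"bias = {bias:.3f}, 95% limits of agreement = {limits}")
```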
Children's Criteria for Representational Adequacy in the Perception of Simple Sonic Stimuli
ERIC Educational Resources Information Center
Verschaffel, Lieven; Reybrouck, Mark; Jans, Christine; Van Dooren, Wim
2010-01-01
This study investigates children's metarepresentational competence with regard to listening to and making sense of simple sonic stimuli. Using diSessa's (2003) work on metarepresentational competence in mathematics and sciences as theoretical and empirical background, it aims to assess children's criteria for representational adequacy of graphical…
SLIC superpixels compared to state-of-the-art superpixel methods.
Achanta, Radhakrishna; Shaji, Appu; Smith, Kevin; Lucchi, Aurelien; Fua, Pascal; Süsstrunk, Sabine
2012-11-01
Computer vision applications have come to rely increasingly on superpixels in recent years, but it is not always clear what constitutes a good superpixel algorithm. In an effort to understand the benefits and drawbacks of existing methods, we empirically compare five state-of-the-art superpixel algorithms for their ability to adhere to image boundaries, speed, memory efficiency, and their impact on segmentation performance. We then introduce a new superpixel algorithm, simple linear iterative clustering (SLIC), which adapts a k-means clustering approach to efficiently generate superpixels. Despite its simplicity, SLIC adheres to boundaries as well as or better than previous methods. At the same time, it is faster and more memory efficient, improves segmentation performance, and is straightforward to extend to supervoxel generation.
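For readers who want to try a SLIC-style segmentation, the short sketch below uses the implementation shipped with scikit-image (assuming `skimage.segmentation.slic` with `n_segments` and `compactness` parameters; argument names can vary between library versions), which follows the k-means-in-labxy-space idea summarised above.

```python
from skimage import data, segmentation

img = data.astronaut()                 # sample RGB image bundled with scikit-image
# compactness trades off colour similarity against spatial proximity
labels = segmentation.slic(img, n_segments=250, compactness=10)
print("number of superpixel labels:", labels.max())
```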
Estimating tuberculosis incidence from primary survey data: a mathematical modeling approach
Chadha, V. K.; Laxminarayan, R.; Arinaminpathy, N.
2017-01-01
SUMMARY BACKGROUND: There is an urgent need for improved estimations of the burden of tuberculosis (TB). OBJECTIVE: To develop a new quantitative method based on mathematical modelling, and to demonstrate its application to TB in India. DESIGN: We developed a simple model of TB transmission dynamics to estimate the annual incidence of TB disease from the annual risk of tuberculous infection and prevalence of smear-positive TB. We first compared model estimates for annual infections per smear-positive TB case using previous empirical estimates from China, Korea and the Philippines. We then applied the model to estimate TB incidence in India, stratified by urban and rural settings. RESULTS: Study model estimates show agreement with previous empirical estimates. Applied to India, the model suggests an annual incidence of smear-positive TB of 89.8 per 100 000 population (95%CI 56.8–156.3). Results show differences in urban and rural TB: while an urban TB case infects more individuals per year, a rural TB case remains infectious for appreciably longer, suggesting the need for interventions tailored to these different settings. CONCLUSIONS: Simple models of TB transmission, in conjunction with necessary data, can offer approaches to burden estimation that complement those currently being used. PMID:28284250
Creep and stress relaxation modeling of polycrystalline ceramic fibers
NASA Technical Reports Server (NTRS)
Dicarlo, James A.; Morscher, Gregory N.
1994-01-01
A variety of high performance polycrystalline ceramic fibers are currently being considered as reinforcement for high temperature ceramic matrix composites. However, under mechanical loading above 800 C, these fibers display creep related instabilities which can result in detrimental changes in composite dimensions, strength, and internal stress distributions. As a first step toward understanding these effects, this study examines the validity of a mechanism-based empirical model which describes primary stage tensile creep and stress relaxation of polycrystalline ceramic fibers as independent functions of time, temperature, and applied stress or strain. To verify these functional dependencies, a simple bend test is used to measure stress relaxation for four types of commercial ceramic fibers for which direct tensile creep data are available. These fibers include both nonoxide (SCS-6, Nicalon) and oxide (PRD-166, FP) compositions. The results of the Bend Stress Relaxation (BSR) test not only confirm the stress, time, and temperature dependencies predicted by the model, but also allow measurement of model empirical parameters for the four fiber types. In addition, comparison of model tensile creep predictions based on the BSR test results with the literature data shows good agreement, supporting both the predictive capability of the model and the use of the BSR test as a simple method for parameter determination for other fibers.
Creep and stress relaxation modeling of polycrystalline ceramic fibers
NASA Technical Reports Server (NTRS)
Dicarlo, James A.; Morscher, Gregory N.
1991-01-01
A variety of high performance polycrystalline ceramic fibers are currently being considered as reinforcement for high temperature ceramic matrix composites. However, under mechanical loading above 800 C, these fibers display creep-related instabilities which can result in detrimental changes in composite dimensions, strength, and internal stress distributions. As a first step toward understanding these effects, this study examines the validity of a mechanism-based empirical model which describes primary stage tensile creep and stress relaxation of polycrystalline ceramic fibers as independent functions of time, temperature, and applied stress or strain. To verify these functional dependencies, a simple bend test is used to measure stress relaxation for four types of commercial ceramic fibers for which direct tensile creep data are available. These fibers include both nonoxide (SCS-6, Nicalon) and oxide (PRD-166, FP) compositions. The results of the bend stress relaxation (BSR) test not only confirm the stress, time, and temperature dependencies predicted by the model but also allow measurement of model empirical parameters for the four fiber types. In addition, comparison of model predictions and BSR test results with the literature tensile creep data shows good agreement, supporting both the predictive capability of the model and the use of the BSR test as a simple method for parameter determination for other fibers.
NASA Astrophysics Data System (ADS)
O'Neill, N. T.
2010-10-01
It is pointed out that the graphical aerosol classification method of Gobbi et al. (2007) can be interpreted as a manifestation of fundamental analytical relations whose existence depends on the simple assumption that the optical effects of aerosols are essentially bimodal in nature. The families of contour lines in their "Ada" curvature space are essentially empirical and discretized illustrations of analytical parabolic forms in (α, α') space (the space formed by the continuously differentiable Angstrom exponent and its spectral derivative).
On laminar and turbulent friction
NASA Technical Reports Server (NTRS)
Von Karman, TH
1946-01-01
The report deals first with the theory of laminar friction flow, in which the basic concepts of Prandtl's boundary layer theory are presented from mathematical and physical points of view, and a method is indicated by means of which even more complicated cases can be treated, at least approximately, with simple mathematical means. An attempt is also made to secure a basis for the computation of turbulent friction by means of formulas through which the empirical laws of turbulent pipe resistance can be applied to other problems of friction drag. (author)
Prentice, Ross L; Zhao, Shanshan
2018-01-01
The Dabrowska (Ann Stat 16:1475-1489, 1988) product integral representation of the multivariate survivor function is extended, leading to a nonparametric survivor function estimator for an arbitrary number of failure time variates that has a simple recursive formula for its calculation. Empirical process methods are used to sketch proofs for this estimator's strong consistency and weak convergence properties. Summary measures of pairwise and higher-order dependencies are also defined and nonparametrically estimated. Simulation evaluation is given for the special case of three failure time variates.
Graphical function mapping as a new way to explore cause-and-effect chains
Evans, Mary Anne
2016-01-01
Graphical function mapping provides a simple method for improving communication within interdisciplinary research teams and between scientists and nonscientists. This article introduces graphical function mapping using two examples and discusses its usefulness. Function mapping projects the outcome of one function into another to show the combined effect. Using this mathematical property in a simpler, even cartoon-like, graphical way allows the rapid combination of multiple information sources (models, empirical data, expert judgment, and guesses) in an intuitive visual to promote further discussion, scenario development, and clear communication.
Good Practices for Learning to Recognize Actions Using FV and VLAD.
Wu, Jianxin; Zhang, Yu; Lin, Weiyao
2016-12-01
High dimensional representations such as Fisher vectors (FV) and vectors of locally aggregated descriptors (VLAD) have shown state-of-the-art accuracy for action recognition in videos. The high dimensionality, on the other hand, also causes computational difficulties when scaling up to large-scale video data. This paper makes three lines of contributions to learning to recognize actions using high dimensional representations. First, we reviewed several existing techniques that improve upon FV or VLAD in image classification, and performed extensive empirical evaluations to assess their applicability for action recognition. Our analyses of these empirical results show that normality and bimodality are essential to achieve high accuracy. Second, we proposed a new pooling strategy for VLAD and three simple, efficient, and effective transformations for both FV and VLAD. Both proposed methods have shown higher accuracy than the original FV/VLAD method in extensive evaluations. Third, we proposed and evaluated new feature selection and compression methods for the FV and VLAD representations. This strategy uses only 4% of the storage of the original representation, but achieves comparable or even higher accuracy. Based on these contributions, we recommend a set of good practices for action recognition in videos for practitioners in this field.
An Empirical Fitting Method for Type Ia Supernova Light Curves: A Case Study of SN 2011fe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, WeiKang; Filippenko, Alexei V., E-mail: zwk@astro.berkeley.edu
We present a new empirical fitting method for the optical light curves of Type Ia supernovae (SNe Ia). We find that a variant broken-power-law function provides a good fit, with the simple assumption that the optical emission is approximately the blackbody emission of the expanding fireball. This function is mathematically analytic and is derived directly from the photospheric velocity evolution. When deriving the function, we assume that both the blackbody temperature and photospheric velocity are constant, but the final function is able to accommodate these changes during the fitting procedure. Applying it to the case study of SN 2011fe gives a surprisingly good fit that can describe the light curves from the first-light time to a few weeks after peak brightness, as well as over a large range of fluxes (∼5 mag, and even ∼7 mag in the g band). Since SNe Ia share similar light-curve shapes, this fitting method has the potential to fit most other SNe Ia and characterize their properties in large statistical samples such as those already gathered and in the near future as new facilities become available.
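A generic smoothly broken power law can be fitted to early-time photometry along the lines sketched below; this is only an illustration of the model class on synthetic data, not the exact variant function or fitting procedure used by the authors.

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_power_law(t, A, t_b, a1, a2, s):
    """Smoothly broken power law in time since first light (generic form,
    not necessarily the exact variant used in the study)."""
    return A * (t / t_b) ** a1 / (1.0 + (t / t_b) ** (s * (a1 - a2))) ** (1.0 / s)

# Synthetic early-time flux with noise, purely for illustration
t = np.linspace(1, 30, 60)
flux = broken_power_law(t, 1.0, 18.0, 2.2, -1.5, 2.0) + np.random.normal(0, 0.01, t.size)

popt, pcov = curve_fit(broken_power_law, t, flux,
                       p0=[1.0, 15.0, 2.0, -1.0, 2.0], maxfev=20000)
print("fitted parameters:", popt)
```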
Andromeda IV: A new local volume very metal-poor galaxy
NASA Astrophysics Data System (ADS)
Pustilnik, S. A.; Tepliakova, A. L.; Kniazev, A. Y.; Burenkov, A. N.
2008-06-01
And IV is a low surface brightness (LSB) dwarf galaxy at a distance of 6.1 Mpc, projected close to M 31. In this paper we present the results of spectroscopy of the two brightest HII regions of And IV, obtained with the SAO 6-m telescope (BTA). In the spectra of both, the faint [OIII] λ4363 Å line was detected, which allowed us to determine their O/H by the classical Te method. Their values of 12+log(O/H) are 7.49±0.06 and 7.55±0.23, respectively. The comparison of the direct O/H calculations with the two most reliable semi-empirical and empirical methods shows good consistency between these methods. For the absolute blue magnitude of And IV, MB = -12.6, our value of O/H corresponds to the ‘standard’ relation between O/H and LB for dwarf irregular galaxies (DIGs). And IV appears to be a new representative of the extremely metal-deficient gas-rich galaxies in the Local Volume. The very large range of M(HI) for LSB galaxies with close metallicities and luminosities indicates that simple models of LSBG chemical evolution are too limited to predict such striking diversity.
NASA Astrophysics Data System (ADS)
Zhang, Wei
2011-07-01
The longitudinal dispersion coefficient, DL, is a fundamental parameter of longitudinal solute transport models: the advection-dispersion (AD) model and various deadzone models. Since DL cannot be measured directly, and since its calibration using tracer test data is quite expensive and not always available, researchers have developed various methods, theoretical or empirical, for estimating DL from more easily available cross-sectional hydraulic measurements (i.e., the transverse velocity profile, etc.). However, for known and unknown reasons, DL cannot be satisfactorily predicted using these theoretical/empirical formulae: either there is very large prediction error for the theoretical methods, or there is a lack of generality for the empirical formulae. Here, numerical experiments using Mike21, a software package that implements one of the most rigorous two-dimensional hydrodynamic and solute transport equations, for longitudinal solute transport in hypothetical streams, are presented. An analysis of the evolution of simulated solute clouds indicates that the two fundamental assumptions in Fischer's longitudinal transport analysis may not be reasonable. The transverse solute concentration distribution, and hence the longitudinal transport, appears to be controlled by a dimensionless number ɛ (a combination of Q, Dt and W), where Q is the average volumetric flowrate, Dt is a cross-sectional average transverse dispersion coefficient, and W is the channel flow width. A simple empirical relationship for ɛ may be established. Analysis and a revision of Fischer's theoretical formula suggest that ɛ influences the efficiency of transverse mixing and hence has a restraining effect on longitudinal spreading. The findings presented here would improve and expand our understanding of longitudinal solute transport in open channel flow.
Estimating trends in the global mean temperature record
NASA Astrophysics Data System (ADS)
Poppick, Andrew; Moyer, Elisabeth J.; Stein, Michael L.
2017-06-01
Given uncertainties in physical theory and numerical climate simulations, the historical temperature record is often used as a source of empirical information about climate change. Many historical trend analyses appear to de-emphasize physical and statistical assumptions: examples include regression models that treat time rather than radiative forcing as the relevant covariate, and time series methods that account for internal variability in nonparametric rather than parametric ways. However, given a limited data record and the presence of internal variability, estimating radiatively forced temperature trends in the historical record necessarily requires some assumptions. Ostensibly empirical methods can also involve an inherent conflict in assumptions: they require data records that are short enough for naive trend models to be applicable, but long enough for long-timescale internal variability to be accounted for. In the context of global mean temperatures, empirical methods that appear to de-emphasize assumptions can therefore produce misleading inferences, because the trend over the twentieth century is complex and the scale of temporal correlation is long relative to the length of the data record. We illustrate here how a simple but physically motivated trend model can provide better-fitting and more broadly applicable trend estimates and can allow for a wider array of questions to be addressed. In particular, the model allows one to distinguish, within a single statistical framework, between uncertainties in the shorter-term vs. longer-term response to radiative forcing, with implications not only on historical trends but also on uncertainties in future projections. We also investigate the consequence on inferred uncertainties of the choice of a statistical description of internal variability. While nonparametric methods may seem to avoid making explicit assumptions, we demonstrate how even misspecified parametric statistical methods, if attuned to the important characteristics of internal variability, can result in more accurate uncertainty statements about trends.
Linear dynamical modes as new variables for data-driven ENSO forecast
NASA Astrophysics Data System (ADS)
Gavrilov, Andrey; Seleznev, Aleksei; Mukhin, Dmitry; Loskutov, Evgeny; Feigin, Alexander; Kurths, Juergen
2018-05-01
A new data-driven model for analysis and prediction of spatially distributed time series is proposed. The model is based on a linear dynamical mode (LDM) decomposition of the observed data which is derived from a recently developed nonlinear dimensionality reduction approach. The key point of this approach is its ability to take into account simple dynamical properties of the observed system by means of revealing the system's dominant time scales. The LDMs are used as new variables for empirical construction of a nonlinear stochastic evolution operator. The method is applied to the sea surface temperature anomaly field in the tropical belt where the El Nino Southern Oscillation (ENSO) is the main mode of variability. The advantage of LDMs versus traditionally used empirical orthogonal function decomposition is demonstrated for this data. Specifically, it is shown that the new model has a competitive ENSO forecast skill in comparison with the other existing ENSO models.
Rosas-Cholula, Gerardo; Ramirez-Cortes, Juan Manuel; Alarcon-Aquino, Vicente; Gomez-Gil, Pilar; Rangel-Magdaleno, Jose de Jesus; Reyes-Garcia, Carlos
2013-08-14
This paper presents a project on the development of a cursor control emulating the typical operations of a computer mouse, using gyroscope and eye-blinking electromyographic signals which are obtained through a commercial 16-electrode wireless headset, recently released by Emotiv. The cursor position is controlled using information from a gyroscope included in the headset. The clicks are generated through the user's blinking with an adequate detection procedure based on the spectral-like technique called Empirical Mode Decomposition (EMD). EMD is proposed as a simple, quick, yet effective computational tool aimed at artifact reduction from head movements as well as a method to detect blinking signals for mouse control. A Kalman filter is used as state estimator for mouse position control and jitter removal. The average detection rate obtained was 94.9%. The experimental setup and some obtained results are presented.
Rosas-Cholula, Gerardo; Ramirez-Cortes, Juan Manuel; Alarcon-Aquino, Vicente; Gomez-Gil, Pilar; Rangel-Magdaleno, Jose de Jesus; Reyes-Garcia, Carlos
2013-01-01
This paper presents a project on the development of a cursor control emulating the typical operations of a computer mouse, using gyroscope and eye-blinking electromyographic signals which are obtained through a commercial 16-electrode wireless headset, recently released by Emotiv. The cursor position is controlled using information from a gyroscope included in the headset. The clicks are generated through the user's blinking with an adequate detection procedure based on the spectral-like technique called Empirical Mode Decomposition (EMD). EMD is proposed as a simple, quick, yet effective computational tool aimed at artifact reduction from head movements as well as a method to detect blinking signals for mouse control. A Kalman filter is used as state estimator for mouse position control and jitter removal. The average detection rate obtained was 94.9%. The experimental setup and some obtained results are presented. PMID:23948873
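The jitter-removal step can be pictured with a one-axis constant-velocity Kalman filter like the sketch below; the state model, noise values and data are illustrative assumptions, not the filter parameters used in the paper.

```python
import numpy as np

def kalman_1d(measurements, dt=0.02, q=5.0, r=2.0):
    """Constant-velocity Kalman filter for one cursor axis: smooths jitter in
    the gyro-derived position while tracking intentional head movement.
    q and r (process / measurement noise) are illustrative values."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])                 # only position is observed
    Q = q * np.array([[dt**4 / 4, dt**3 / 2], [dt**3 / 2, dt**2]])
    R = np.array([[r]])
    x, P = np.zeros((2, 1)), np.eye(2)
    smoothed = []
    for z in measurements:
        x = F @ x                              # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                    # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        smoothed.append(float(x[0, 0]))
    return smoothed

noisy = 100 + np.cumsum(np.random.normal(0, 1, 200)) + np.random.normal(0, 4, 200)
print(kalman_1d(noisy)[:5])
```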
Bröder, A
2000-09-01
The boundedly rational "Take-The-Best" heuristic (TTB) was proposed by G. Gigerenzer, U. Hoffrage, and H. Kleinbölting (1991) as a model of fast and frugal probabilistic inferences. Although the simple lexicographic rule proved to be successful in computer simulations, direct empirical demonstrations of its adequacy as a psychological model have been lacking because of several methodological problems. This question was addressed in 4 experiments with a total of 210 participants. Whereas Experiment 1 showed that TTB is not valid as a universal hypothesis about probabilistic inferences, up to 28% of participants in Experiment 2 and 53% of participants in Experiment 3 were classified as TTB users. Experiment 4 revealed that investment costs for information seem to be a relevant factor leading participants to switch to a noncompensatory TTB strategy. The observed individual differences in strategy use imply the recommendation of an idiographic approach to decision-making research.
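The lexicographic rule itself is easy to state in code; the sketch below is a minimal rendering of Take-The-Best for a pairwise comparison, with made-up cue names and validity ordering.

```python
def take_the_best(option_a, option_b, cues):
    """Take-The-Best: compare two options cue by cue, in descending order of
    cue validity, and decide on the first cue that discriminates.

    option_a / option_b map cue names to binary cue values (1 = positive);
    cues is a list of cue names sorted from most to least valid.
    """
    for cue in cues:
        if option_a[cue] != option_b[cue]:
            return "A" if option_a[cue] > option_b[cue] else "B"
    return "guess"  # no cue discriminates: choose at random in practice

# Toy city-size judgement: which city is larger?
cues_by_validity = ["capital", "team", "airport"]
city_a = {"capital": 0, "team": 1, "airport": 1}
city_b = {"capital": 0, "team": 0, "airport": 1}
print(take_the_best(city_a, city_b, cues_by_validity))  # -> "A" (decided by "team")
```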
NASA Astrophysics Data System (ADS)
Coronel-Brizio, H. F.; Hernández-Montoya, A. R.
2005-08-01
The so-called Pareto-Levy or power-law distribution has been successfully used as a model to describe probabilities associated to extreme variations of stock markets indexes worldwide. The selection of the threshold parameter from empirical data and consequently, the determination of the exponent of the distribution, is often done using a simple graphical method based on a log-log scale, where a power-law probability plot shows a straight line with slope equal to the exponent of the power-law distribution. This procedure can be considered subjective, particularly with regard to the choice of the threshold or cutoff parameter. In this work, a more objective procedure based on a statistical measure of discrepancy between the empirical and the Pareto-Levy distribution is presented. The technique is illustrated for data sets from the New York Stock Exchange (DJIA) and the Mexican Stock Market (IPC).
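A discrepancy-based threshold choice of the general kind described above can be sketched as follows: for each candidate cutoff, fit the tail exponent by maximum likelihood and keep the cutoff with the smallest Kolmogorov-Smirnov distance between the empirical and fitted tails. This is a generic Clauset-style recipe applied to synthetic heavy-tailed data, not the authors' specific statistic or the DJIA/IPC data.

```python
import numpy as np

def pareto_tail_fit(data, thresholds):
    """For each candidate threshold, fit a Pareto tail by maximum likelihood
    (Hill estimator) and compute the Kolmogorov-Smirnov distance between the
    empirical and fitted tail CDFs; return the best (threshold, alpha, KS)."""
    best = None
    for xmin in thresholds:
        tail = np.sort(data[data >= xmin])
        if tail.size < 10:
            continue
        alpha = 1.0 + tail.size / np.sum(np.log(tail / xmin))   # MLE exponent
        emp_cdf = np.arange(1, tail.size + 1) / tail.size
        fit_cdf = 1.0 - (xmin / tail) ** (alpha - 1.0)
        ks = np.max(np.abs(emp_cdf - fit_cdf))
        if best is None or ks < best[2]:
            best = (xmin, alpha, ks)
    return best

# Heavy-tailed stand-in for absolute index returns (synthetic data)
returns = np.abs(np.random.standard_t(df=3, size=5000))
print(pareto_tail_fit(returns, np.percentile(returns, [90, 95, 97.5, 99])))
```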
Inferring the parameters of a Markov process from snapshots of the steady state
NASA Astrophysics Data System (ADS)
Dettmer, Simon L.; Berg, Johannes
2018-02-01
We seek to infer the parameters of an ergodic Markov process from samples taken independently from the steady state. Our focus is on non-equilibrium processes, where the steady state is not described by the Boltzmann measure, but is generally unknown and hard to compute, which prevents the application of established equilibrium inference methods. We propose a quantity we call propagator likelihood, which takes on the role of the likelihood in equilibrium processes. This propagator likelihood is based on fictitious transitions between those configurations of the system which occur in the samples. The propagator likelihood can be derived by minimising the relative entropy between the empirical distribution and a distribution generated by propagating the empirical distribution forward in time. Maximising the propagator likelihood leads to an efficient reconstruction of the parameters of the underlying model in different systems, both with discrete configurations and with continuous configurations. We apply the method to non-equilibrium models from statistical physics and theoretical biology, including the asymmetric simple exclusion process (ASEP), the kinetic Ising model, and replicator dynamics.
The min-conflicts heuristic: Experimental and theoretical results
NASA Technical Reports Server (NTRS)
Minton, Steven; Philips, Andrew B.; Johnston, Mark D.; Laird, Philip
1991-01-01
This paper describes a simple heuristic method for solving large-scale constraint satisfaction and scheduling problems. Given an initial assignment for the variables in a problem, the method operates by searching through the space of possible repairs. The search is guided by an ordering heuristic, the min-conflicts heuristic, that attempts to minimize the number of constraint violations after each step. We demonstrate empirically that the method performs orders of magnitude better than traditional backtracking techniques on certain standard problems. For example, the one million queens problem can be solved rapidly using our approach. We also describe practical scheduling applications where the method has been successfully applied. A theoretical analysis is presented to explain why the method works so well on certain types of problems and to predict when it is likely to be most effective.
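As an illustration of the repair-based search described above, the sketch below applies the min-conflicts heuristic to the n-queens problem (a standard demonstration; the problem size and step limit are arbitrary).

```python
import random

def min_conflicts_queens(n, max_steps=100000):
    """Min-conflicts repair search for n-queens: start from a random complete
    assignment, then repeatedly move a conflicted queen to the row in its
    column that minimises the number of conflicts."""
    rows = [random.randrange(n) for _ in range(n)]

    def conflicts(col, row):
        return sum(1 for c in range(n) if c != col and
                   (rows[c] == row or abs(rows[c] - row) == abs(c - col)))

    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(c, rows[c]) > 0]
        if not conflicted:
            return rows                                  # solution found
        col = random.choice(conflicted)
        best = min(conflicts(col, r) for r in range(n))
        rows[col] = random.choice([r for r in range(n) if conflicts(col, r) == best])
    return None                                          # no solution within max_steps

print(min_conflicts_queens(50))
```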
Detonation Product EOS Studies: Using ISLS to Refine Cheetah
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zaug, J M; Howard, W M; Fried, L E
2001-08-08
Knowledge of an effective interatomic potential function underlies any effort to predict or rationalize the properties of solids and liquids. The experiments we undertake are directed towards determination of equilibrium and dynamic properties of simple fluids at densities sufficiently high that traditional computational methods and semi-empirical forms successful at ambient conditions may require reconsideration. In this paper we present high-pressure and temperature experimental sound speed data on a simple fluid, methanol. Impulsive Stimulated Light Scattering (ISLS) conducted on diamond-anvil cell (DAC) encapsulated samples offers an experimental approach to determine cross-pair potential interactions through equation of state determinations. In addition the kinetics of structural relaxation in fluids can be studied. We compare our experimental results with our thermochemical computational model Cheetah. Computational models are systematically improved with each addition of experimental data.
A vortex-filament and core model for wings with edge vortex separation
NASA Technical Reports Server (NTRS)
Pao, J. L.; Lan, C. E.
1982-01-01
A vortex filament-vortex core method for predicting the aerodynamic characteristics of slender wings with edge vortex separation was developed. Semi-empirical but simple methods were used to determine the initial positions of the free sheet and vortex core. Comparison with available data indicates that: (1) the present method is generally accurate in predicting the lift and induced drag coefficients, but the predicted pitching moment is too positive; (2) the spanwise lifting pressure distributions estimated by the one-vortex-core solution of the present method are significantly better than the results of Mehrotra's method with respect to the pressure peak values for the flat delta; (3) the two-vortex-core system applied to the double-delta and strake wings produces overall aerodynamic characteristics in good agreement with data, except for the pitching moment; and (4) the computer time for the present method is about two thirds of that of Mehrotra's method.
The effects of time-varying observation errors on semi-empirical sea-level projections
Ruckert, Kelsey L.; Guan, Yawen; Bakker, Alexander M. R.; ...
2016-11-30
Sea-level rise is a key driver of projected flooding risks. The design of strategies to manage these risks often hinges on projections that inform decision-makers about the surrounding uncertainties. Producing semi-empirical sea-level projections is difficult, for example, due to the complexity of the error structure of the observations, such as time-varying (heteroskedastic) observation errors and autocorrelation of the data-model residuals. This raises the question of how neglecting the error structure impacts hindcasts and projections. Here, we quantify this effect on sea-level projections and parameter distributions by using a simple semi-empirical sea-level model. Specifically, we compare three model-fitting methods: a frequentist bootstrap as well as a Bayesian inversion with and without considering heteroskedastic residuals. All methods produce comparable hindcasts, but the parametric distributions and projections differ considerably based on methodological choices. In conclusion, our results show that the differences based on the methodological choices are enhanced in the upper tail projections. For example, the Bayesian inversion accounting for heteroskedasticity increases the sea-level anomaly with a 1% probability of being equaled or exceeded in the year 2050 by about 34% and about 40% in the year 2100 compared to a frequentist bootstrap. These results indicate that neglecting known properties of the observation errors and the data-model residuals can lead to low-biased sea-level projections.
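To make "semi-empirical sea-level model" concrete, the sketch below integrates a Rahmstorf-style rate equation, dH/dt = a (T - T0); the parameter values and the temperature trajectory are placeholders, and the actual model, calibration and error treatment in the study differ.

```python
import numpy as np

def semi_empirical_sea_level(temps, years, a=3.4, T0=-0.5, H0=0.0):
    """Integrate dH/dt = a * (T - T0): a Rahmstorf-style semi-empirical model
    (a in mm/yr per degree C), shown only to illustrate the model class."""
    H = [H0]
    for i in range(1, len(years)):
        dt = years[i] - years[i - 1]
        H.append(H[-1] + a * (temps[i - 1] - T0) * dt)
    return np.array(H)

years = np.arange(1880, 2101)
temps = np.linspace(-0.3, 3.0, years.size)        # illustrative warming trajectory
print("projected anomaly in 2100: %.0f mm" % semi_empirical_sea_level(temps, years)[-1])
```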
The effects of time-varying observation errors on semi-empirical sea-level projections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruckert, Kelsey L.; Guan, Yawen; Bakker, Alexander M. R.
Sea-level rise is a key driver of projected flooding risks. The design of strategies to manage these risks often hinges on projections that inform decision-makers about the surrounding uncertainties. Producing semi-empirical sea-level projections is difficult, for example, due to the complexity of the error structure of the observations, such as time-varying (heteroskedastic) observation errors and autocorrelation of the data-model residuals. This raises the question of how neglecting the error structure impacts hindcasts and projections. Here, we quantify this effect on sea-level projections and parameter distributions by using a simple semi-empirical sea-level model. Specifically, we compare three model-fitting methods: a frequentist bootstrap as well as a Bayesian inversion with and without considering heteroskedastic residuals. All methods produce comparable hindcasts, but the parametric distributions and projections differ considerably based on methodological choices. In conclusion, our results show that the differences based on the methodological choices are enhanced in the upper tail projections. For example, the Bayesian inversion accounting for heteroskedasticity increases the sea-level anomaly with a 1% probability of being equaled or exceeded in the year 2050 by about 34% and about 40% in the year 2100 compared to a frequentist bootstrap. These results indicate that neglecting known properties of the observation errors and the data-model residuals can lead to low-biased sea-level projections.
An investigation into exoplanet transits and uncertainties
NASA Astrophysics Data System (ADS)
Ji, Y.; Banks, T.; Budding, E.; Rhodes, M. D.
2017-06-01
A simple transit model is described along with tests of this model against published results for 4 exoplanet systems (Kepler-1, 2, 8, and 77). Data from the Kepler mission are used. The Markov Chain Monte Carlo (MCMC) method is applied to obtain realistic error estimates. Optimisation of limb darkening coefficients is subject to data quality. It is more likely for MCMC to derive an empirical limb darkening coefficient for light curves with S/N (signal to noise) above 15. Finally, the model is applied to Kepler data for 4 Kepler candidate systems (KOI 760.01, 767.01, 802.01, and 824.01) with previously unpublished results. Error estimates for these systems are obtained via the MCMC method.
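A stripped-down version of the transit-plus-MCMC workflow is sketched below using a box-shaped transit and a plain Metropolis sampler on synthetic data; the real analysis uses a physical limb-darkened transit model and Kepler photometry, so everything here (model shape, priors, step sizes) is an illustrative assumption.

```python
import numpy as np

def box_transit(t, depth, t0, dur):
    """Toy box-shaped transit: unit flux with a flat dip of the given depth."""
    f = np.ones_like(t)
    f[np.abs(t - t0) < dur / 2] -= depth
    return f

def log_post(theta, t, y, sigma):
    depth, t0, dur = theta
    if not (0 < depth < 0.1 and 0 < dur < 1.0):        # flat priors
        return -np.inf
    resid = y - box_transit(t, depth, t0, dur)
    return -0.5 * np.sum((resid / sigma) ** 2)

# Synthetic light curve and a minimal Metropolis sampler
rng = np.random.default_rng(1)
t = np.linspace(-0.5, 0.5, 500)
y = box_transit(t, 0.01, 0.0, 0.2) + rng.normal(0, 0.002, t.size)

theta, samples = np.array([0.005, 0.01, 0.15]), []
for _ in range(20000):
    prop = theta + rng.normal(0, [5e-4, 5e-3, 5e-3])
    if np.log(rng.uniform()) < log_post(prop, t, y, 0.002) - log_post(theta, t, y, 0.002):
        theta = prop
    samples.append(theta.copy())
samples = np.asarray(samples[5000:])                   # discard burn-in
print("depth = %.4f +/- %.4f" % (samples[:, 0].mean(), samples[:, 0].std()))
```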
On determining absolute entropy without quantum theory or the third law of thermodynamics
NASA Astrophysics Data System (ADS)
Steane, Andrew M.
2016-04-01
We employ classical thermodynamics to gain information about absolute entropy, without recourse to statistical methods, quantum mechanics or the third law of thermodynamics. The Gibbs-Duhem equation yields various simple methods to determine the absolute entropy of a fluid. We also study the entropy of an ideal gas and the ionization of a plasma in thermal equilibrium. A single measurement of the degree of ionization can be used to determine an unknown constant in the entropy equation, and thus determine the absolute entropy of a gas. It follows from all these examples that the value of entropy at absolute zero temperature does not need to be assigned by postulate, but can be deduced empirically.
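One standard route from the Gibbs-Duhem equation to an absolute entropy per particle can be written out explicitly; this is the textbook identity behind such arguments, not necessarily the particular derivation followed in the paper.

```latex
% Gibbs-Duhem relation for a one-component fluid and the entropy per particle
\begin{align}
  S\,\mathrm{d}T - V\,\mathrm{d}P + N\,\mathrm{d}\mu &= 0, \\
  \Rightarrow\quad \frac{S}{N}
    &= -\left(\frac{\partial \mu}{\partial T}\right)_{P}
     = \frac{V}{N}\left(\frac{\partial P}{\partial T}\right)_{\mu}.
\end{align}
```

Measuring how the chemical potential varies with temperature at fixed pressure (or how the pressure varies with temperature at fixed chemical potential) therefore fixes the absolute entropy without invoking a third-law reference point.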
Beaucamp, Sylvain; Mathieu, Didier; Agafonov, Viatcheslav
2005-09-01
A method to estimate the lattice energies E(latt) of nitrate salts is put forward. First, E(latt) is approximated by its electrostatic component E(elec). Then, E(elec) is correlated with Mulliken atomic charges calculated on the species that make up the crystal, using a simple equation involving two empirical parameters. The latter are fitted against point charge estimates of E(elec) computed on available X-ray structures of nitrate crystals. The correlation thus obtained yields lattice energies within 0.5 kJ/g from point charge values. A further assessment of the method against experimental data suggests that the main source of error arises from the point charge approximation.
Complex dynamics and empirical evidence (Invited Paper)
NASA Astrophysics Data System (ADS)
Delli Gatti, Domenico; Gaffeo, Edoardo; Giulioni, Gianfranco; Gallegati, Mauro; Kirman, Alan; Palestrini, Antonio; Russo, Alberto
2005-05-01
Standard macroeconomics, based on a reductionist approach centered on the representative agent, is badly equipped to explain the empirical evidence where heterogeneity and industrial dynamics are the rule. In this paper we show that a simple agent-based model of heterogeneous financially fragile agents is able to replicate a large number of scaling type stylized facts with a remarkable degree of statistical precision.
Empirical likelihood-based tests for stochastic ordering
BARMI, HAMMOU EL; MCKEAGUE, IAN W.
2013-01-01
This paper develops an empirical likelihood approach to testing for the presence of stochastic ordering among univariate distributions based on independent random samples from each distribution. The proposed test statistic is formed by integrating a localized empirical likelihood statistic with respect to the empirical distribution of the pooled sample. The asymptotic null distribution of this test statistic is found to have a simple distribution-free representation in terms of standard Brownian bridge processes. The approach is used to compare the lengths of rule of Roman Emperors over various historical periods, including the “decline and fall” phase of the empire. In a simulation study, the power of the proposed test is found to improve substantially upon that of a competing test due to El Barmi and Mukerjee. PMID:23874142
A simple approximation for the current-voltage characteristics of high-power, relativistic diodes
Ekdahl, Carl
2016-06-10
A simple approximation for the current-voltage characteristics of a relativistic electron diode is presented. The approximation is accurate from non-relativistic through relativistic electron energies. Although it is empirically developed, it has many of the fundamental properties of the exact diode solutions. Lastly, the approximation is simple enough to be remembered and worked on almost any pocket calculator, so it has proven to be quite useful on the laboratory floor.
A Simple and Effective Program to Increase Faculty Knowledge of and Referrals to Counseling Centers
ERIC Educational Resources Information Center
Nolan, Susan A.; Pace, Kristi A.; Iannelli, Richard J.; Palma, Thomas V.; Pakalns, Gail P.
2006-01-01
The authors describe a simple, cost-effective, and empirically supported program to increase faculty referrals of students to counseling centers (CCs). Incoming faculty members at 3 universities received a mailing and personal telephone call from a CC staff member. Faculty assigned to the outreach program had greater knowledge of and rates of…
Selection biases in empirical p(z) methods for weak lensing
Gruen, D.; Brimioulle, F.
2017-02-23
To measure the mass of foreground objects with weak gravitational lensing, one needs to estimate the redshift distribution of lensed background sources. This is commonly done in an empirical fashion, i.e. with a reference sample of galaxies of known spectroscopic redshift, matched to the source population. In this paper, we develop a simple decision tree framework that, under the ideal conditions of a large, purely magnitude-limited reference sample, allows an unbiased recovery of the source redshift probability density function p(z), as a function of magnitude and colour. We use this framework to quantify biases in empirically estimated p(z) caused by selection effects present in realistic reference and weak lensing source catalogues, namely (1) complex selection of reference objects by the targeting strategy and success rate of existing spectroscopic surveys and (2) selection of background sources by the success of object detection and shape measurement at low signal to noise. For intermediate-to-high redshift clusters, and for depths and filter combinations appropriate for ongoing lensing surveys, we find that (1) spectroscopic selection can cause biases above the 10 per cent level, which can be reduced to ≈5 per cent by optimal lensing weighting, while (2) selection effects in the shape catalogue bias mass estimates at or below the 2 per cent level. Finally, this illustrates the importance of completeness of the reference catalogues for empirical redshift estimation.
NASA Astrophysics Data System (ADS)
Lazzari, Maurizio; Danese, Maria; Gioia, Dario; Piccarreta, Marco
2013-04-01
Sedimentary budget estimation is an important topic for both the scientific and social communities, because it is crucial for understanding both the dynamics of orogenic belts and many practical problems, such as soil conservation and sediment accumulation in reservoirs. Estimations of sediment yield or denudation rates in southern-central Italy are generally obtained either by simple empirical relationships based on statistical regression between geomorphic parameters of the drainage network and the measured suspended sediment yield at the outlet of several drainage basins, or through the use of models based on sediment delivery ratios or soil loss equations. In this work, we perform a study of catchment dynamics and an estimation of sediment yield for several mountain catchments of the central-western sector of the Basilicata region, southern Italy. Sediment yield estimation has been obtained through both an indirect estimation of suspended sediment yield based on the Tu index (mean annual suspension sediment yield, Ciccacci et al., 1980) and the application of the RUSLE (Renard et al., 1997) and USPED (Mitasova et al., 1996) empirical methods. The preliminary results indicate a clear difference between the RUSLE and USPED methods and the estimation based on the Tu index; a critical analysis of the results has been carried out, considering also the present-day spatial distribution of erosion, transport and depositional processes in relation to the maps obtained from the application of these different empirical methods. The studied catchments drain into an artificial reservoir (the Camastra dam), where a detailed evaluation of the amount of historical sediment storage has been collected. Sediment yield estimations obtained by means of the empirical methods have been compared and checked against historical data of sediment accumulation measured in the artificial reservoir of the Camastra dam. The validation of such estimations of sediment yield at the scale of large catchments using sediment storage in reservoirs provides a good opportunity: i) to test the reliability of the empirical methods used to estimate the sediment yield; ii) to investigate the catchment dynamics and their spatial and temporal evolution in terms of erosion, transport and deposition.
References:
Ciccacci S., Fredi F., Lupia Palmieri E., Pugliese F., 1980. Contributo dell'analisi geomorfica quantitativa alla valutazione dell'entità dell'erosione nei bacini fluviali. Bollettino della Società Geologica Italiana 99: 455-516.
Mitasova H., Hofierka J., Zlocha M., Iverson L.R., 1996. Modeling topographic potential for erosion and deposition using GIS. International Journal of Geographical Information Systems 10: 629-641.
Renard K.G., Foster G.R., Weesies G.A., McCool D.K., Yoder D.C., 1997. Predicting soil erosion by water: a guide to conservation planning with the Revised Universal Soil Loss Equation (RUSLE). USDA-ARS, Agricultural Handbook No. 703.
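The RUSLE estimate used above is the product of five factors, A = R · K · LS · C · P; a minimal per-cell computation looks like the sketch below, with the factor values invented for illustration rather than taken from the Camastra catchments.

```python
def rusle_soil_loss(R, K, LS, C, P):
    """RUSLE: mean annual soil loss A = R * K * LS * C * P
    (A in t ha^-1 yr^-1 when the customary metric units are used for the factors)."""
    return R * K * LS * C * P

# Illustrative factor values for a single grid cell
print(rusle_soil_loss(R=1800.0, K=0.035, LS=2.4, C=0.12, P=1.0), "t/ha/yr")
```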
Development of a new family of normalized modulus reduction and material damping curves
NASA Astrophysics Data System (ADS)
Darendeli, Mehmet Baris
2001-12-01
As part of various research projects [including the SRS (Savannah River Site) Project AA891070, EPRI (Electric Power Research Institute) Project 3302, and ROSRINE (Resolution of Site Response Issues from the Northridge Earthquake) Project], numerous geotechnical sites were drilled and sampled. Intact soil samples over a depth range of several hundred meters were recovered from 20 of these sites. These soil samples were tested in the laboratory at The University of Texas at Austin (UTA) to characterize the materials dynamically. The presence of a database accumulated from testing these intact specimens motivated a re-evaluation of empirical curves employed in the state of practice. The weaknesses of empirical curves reported in the literature were identified and the necessity of developing an improved set of empirical curves was recognized. This study focused on developing the empirical framework that can be used to generate normalized modulus reduction and material damping curves. This framework is composed of simple equations, which incorporate the key parameters that control nonlinear soil behavior. The data collected over the past decade at The University of Texas at Austin are statistically analyzed using First-order, Second-moment Bayesian Method (FSBM). The effects of various parameters (such as confining pressure and soil plasticity) on dynamic soil properties are evaluated and quantified within this framework. One of the most important aspects of this study is estimating not only the mean values of the empirical curves but also estimating the uncertainty associated with these values. This study provides the opportunity to handle uncertainty in the empirical estimates of dynamic soil properties within the probabilistic seismic hazard analysis framework. A refinement in site-specific probabilistic seismic hazard assessment is expected to materialize in the near future by incorporating the results of this study into state of practice.
Martin, Guillaume; Chapuis, Elodie; Goudet, Jérôme
2008-01-01
Neutrality tests in quantitative genetics provide a statistical framework for the detection of selection on polygenic traits in wild populations. However, the existing method based on comparisons of divergence at neutral markers and quantitative traits (Qst–Fst) suffers from several limitations that hinder a clear interpretation of the results with typical empirical designs. In this article, we propose a multivariate extension of this neutrality test based on empirical estimates of the among-populations (D) and within-populations (G) covariance matrices by MANOVA. A simple pattern is expected under neutrality: D = 2Fst/(1 − Fst)G, so that neutrality implies both proportionality of the two matrices and a specific value of the proportionality coefficient. This pattern is tested using Flury's framework for matrix comparison [common principal-component (CPC) analysis], a well-known tool in G matrix evolution studies. We show the importance of using a Bartlett adjustment of the test for the small sample sizes typically found in empirical studies. We propose a dual test: (i) that the proportionality coefficient is not different from its neutral expectation [2Fst/(1 − Fst)] and (ii) that the MANOVA estimates of mean square matrices between and among populations are proportional. These two tests combined provide a more stringent test for neutrality than the classic Qst–Fst comparison and avoid several statistical problems. Extensive simulations of realistic empirical designs suggest that these tests correctly detect the expected pattern under neutrality and have enough power to efficiently detect mild to strong selection (homogeneous, heterogeneous, or mixed) when it is occurring on a set of traits. This method also provides a rigorous and quantitative framework for disentangling the effects of different selection regimes and of drift on the evolution of the G matrix. We discuss practical requirements for the proper application of our test in empirical studies and potential extensions. PMID:18245845
Modeling electrokinetic flows by consistent implicit incompressible smoothed particle hydrodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Wenxiao; Kim, Kyungjoo; Perego, Mauro
2017-04-01
We present an efficient implicit incompressible smoothed particle hydrodynamics (I2SPH) discretization of Navier-Stokes, Poisson-Boltzmann, and advection-diffusion equations subject to Dirichlet or Robin boundary conditions. It is applied to model various two and three dimensional electrokinetic flows in simple or complex geometries. The I2SPH's accuracy and convergence are examined via comparison with analytical solutions, grid-based numerical solutions, or empirical models. The new method provides a framework to explore broader applications of SPH in microfluidics and complex fluids with charged objects, such as colloids and biomolecules, in arbitrary complex geometries.
Protein Structure Prediction with Evolutionary Algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, W.E.; Krasnogor, N.; Pelta, D.A.
1999-02-08
Evolutionary algorithms have been successfully applied to a variety of molecular structure prediction problems. In this paper we reconsider the design of genetic algorithms that have been applied to a simple protein structure prediction problem. Our analysis considers the impact of several algorithmic factors for this problem: the conformational representation, the energy formulation, and the way in which infeasible conformations are penalized. Further, we empirically evaluate the impact of these factors on a small set of polymer sequences. Our analysis leads to specific recommendations for both GAs and other heuristic methods for solving PSP on the HP model.
An Application of Epidemiological Modeling to Information Diffusion
NASA Astrophysics Data System (ADS)
McCormack, Robert; Salter, William
Messages often spread within a population through unofficial - particularly web-based - media. Such ideas have been termed "memes." To impede the flow of terrorist messages and to promote counter messages within a population, intelligence analysts must understand how messages spread. We used statistical language processing technologies to operationalize "memes" as latent topics in electronic text and applied epidemiological techniques to describe and analyze patterns of message propagation. We developed our methods and applied them to English-language newspapers and blogs in the Arab world. We found that a relatively simple epidemiological model can reproduce some dynamics of observed empirical relationships.
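A minimal epidemiological analogue of message spread is the SIR system, sketched below with arbitrary contact and recovery rates; this only illustrates the model class and is not the specific model fitted to the newspaper and blog data.

```python
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    """Classic SIR equations reused for message spread: S = not yet exposed,
    I = actively repeating the message, R = no longer repeating it."""
    S, I, R = y
    N = S + I + R
    return [-beta * S * I / N, beta * S * I / N - gamma * I, gamma * I]

t = np.linspace(0, 60, 300)                      # days
traj = odeint(sir, [9990.0, 10.0, 0.0], t, args=(0.6, 0.2))
print("peak number of active spreaders:", int(traj[:, 1].max()))
```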
Detonation product EOS studies: Using ISLS to refine CHEETAH
NASA Astrophysics Data System (ADS)
Zaug, Joseph; Fried, Larry; Hansen, Donald
2001-06-01
Knowledge of an effective interatomic potential function underlies any effort to predict or rationalize the properties of solids and liquids. The experiments we undertake are directed towards determination of equilibrium and dynamic properties of simple fluids at densities sufficiently high that traditional computational methods and semi-empirical forms successful at ambient conditions may require reconsideration. In this paper we present high-pressure and temperature experimental sound speed data on a suite of non-ideal simple fluids and fluid mixtures. Impulsive Stimulated Light Scattering conducted in the diamond-anvil cell offers an experimental approach to determine cross-pair potential interactions through equation of state determinations. In addition the kinetics of structural relaxation in fluids can be studied. We compare our experimental results with our thermochemical computational model CHEETAH. Computational models are systematically improved with each addition of experimental data. Experimentally grounded computational models provide a good basis to confidently understand the chemical nature of reactions at extreme conditions.
Detonation Product EOS Studies: Using ISLS to Refine Cheetah
NASA Astrophysics Data System (ADS)
Zaug, J. M.; Howard, W. M.; Fried, L. E.; Hansen, D. W.
2002-07-01
Knowledge of an effective interatomic potential function underlies any effort to predict or rationalize the properties of solids and liquids. The experiments we undertake are directed towards determination of equilibrium and dynamic properties of simple fluids at densities sufficiently high that traditional computational methods and semi-empirical forms successful at ambient conditions may require reconsideration. In this paper we present high-pressure and temperature experimental sound speed data on a simple fluid, methanol. Impulsive Stimulated Light Scattering (ISLS) conducted on diamond-anvil cell (DAC) encapsulated samples offers an experimental approach to determine cross-pair potential interactions through equation of state determinations. In addition the kinetics of structural relaxation in fluids can be studied. We compare our experimental results with our thermochemical computational model Cheetah. Experimentally grounded computational models provide a good basis to confidently understand the chemical nature of reactions at extreme conditions.
Bias of shear wave elasticity measurements in thin layer samples and a simple correction strategy.
Mo, Jianqiang; Xu, Hao; Qiang, Bo; Giambini, Hugo; Kinnick, Randall; An, Kai-Nan; Chen, Shigao; Luo, Zongping
2016-01-01
Shear wave elastography (SWE) is an emerging technique for measuring biological tissue stiffness. However, the application of SWE in thin layer tissues is limited by bias due to the influence of geometry on measured shear wave speed. In this study, we investigated the bias of Young's modulus measured by SWE in thin layer gelatin-agar phantoms, and compared the result with finite element method and Lamb wave model simulation. The result indicated that the Young's modulus measured by SWE decreased continuously when the sample thickness decreased, and this effect was more significant for smaller thickness. We proposed a new empirical formula which can conveniently correct the bias without the need of using complicated mathematical modeling. In summary, we confirmed the nonlinear relation between thickness and Young's modulus measured by SWE in thin layer samples, and offered a simple and practical correction strategy which is convenient for clinicians to use.
A system for routing arbitrary directed graphs on SIMD architectures
NASA Technical Reports Server (NTRS)
Tomboulian, Sherryl
1987-01-01
There are many problems which can be described in terms of directed graphs that contain a large number of vertices where simple computations occur using data from connecting vertices. A method is given for parallelizing such problems on an SIMD machine model that is bit-serial and uses only nearest neighbor connections for communication. Each vertex of the graph will be assigned to a processor in the machine. Algorithms are given that will be used to implement movement of data along the arcs of the graph. This architecture and these algorithms define a system that is relatively simple to build and can do graph processing. All arcs can be traversed in parallel in time O(T), where T is empirically proportional to the diameter of the interconnection network times the average degree of the graph. Modifying or adding a new arc takes the same time as a parallel traversal.
NASA Technical Reports Server (NTRS)
Holmes, Thomas; Owe, Manfred; deJeu, Richard
2007-01-01
Two data sets of experimental field observations with a range of meteorological conditions are used to investigate the possibility of modeling near-surface soil temperature profiles in a bare soil. It is shown that commonly used heat flow methods that assume a constant ground heat flux can not be used to model the extreme variations in temperature that occur near the surface. This paper proposes a simple approach for modeling the surface soil temperature profiles from a single depth observation. This approach consists of two parts: 1) modeling an instantaneous ground flux profile based on net radiation and the ground heat flux at 5cm depth; 2) using this ground heat flux profile to extrapolate a single temperature observation to a continuous near surface temperature profile. The new model is validated with an independent data set from a different soil and under a range of meteorological conditions.
Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L
2018-02-01
A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R2), using R2 as the primary metric of assay agreement. However, the use of R2 alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing assays (NGS). NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory.
NASA Astrophysics Data System (ADS)
Baird, M. E.; Walker, S. J.; Wallace, B. B.; Webster, I. T.; Parslow, J. S.
2003-03-01
A simple model of estuarine eutrophication is built on biomechanical (or mechanistic) descriptions of a number of the key ecological processes in estuaries. Mechanistically described processes include the nutrient uptake and light capture of planktonic and benthic autotrophs, and the encounter rates of planktonic predators and prey. Other more complex processes, such as sediment biogeochemistry, detrital processes and phosphate dynamics, are modelled using empirical descriptions from the Port Phillip Bay Environmental Study (PPBES) ecological model. A comparison is made between the mechanistically determined rates of ecological processes and the analogous empirically determined rates in the PPBES ecological model. The rates generally agree, with a few significant exceptions. Model simulations were run at a range of estuarine depths and nutrient loads, with outputs presented as the annually averaged biomass of autotrophs. The simulations followed a simple conceptual model of eutrophication, suggesting a simple biomechanical understanding of estuarine processes can provide a predictive tool for ecological processes in a wide range of estuarine ecosystems.
Cao, Zheng; Bowie, James U
2014-01-01
Equilibrium H/D fractionation factors have been extensively employed to qualitatively assess hydrogen bond strengths in protein structure, enzyme active sites, and DNA. It remains unclear how fractionation factors correlate with hydrogen bond free energies, however. Here we develop an empirical relationship between fractionation factors and free energy, allowing for the simple and quantitative measurement of hydrogen bond free energies. Applying our empirical relationship to prior fractionation factor studies in proteins, we find: [1] Within the folded state, backbone hydrogen bonds are only marginally stronger on average in α-helices compared to β-sheets by ∼0.2 kcal/mol. [2] Charge-stabilized hydrogen bonds are stronger than neutral hydrogen bonds by ∼2 kcal/mol on average, and can be as strong as –7 kcal/mol. [3] Changes in a few hydrogen bonds during an enzyme catalytic cycle can stabilize an intermediate state by –4.2 kcal/mol. [4] Backbone hydrogen bonds can make a large overall contribution to the energetics of conformational changes, possibly playing an important role in directing conformational changes. [5] Backbone hydrogen bonding becomes more uniform overall upon ligand binding, which may facilitate participation of the entire protein structure in events at the active site. Our energetic scale provides a simple method for further exploration of hydrogen bond free energies. PMID:24501090
Oakley, Brian B; Line, J Eric; Berrang, Mark E; Johnson, Jessica M; Buhr, R Jeff; Cox, Nelson A; Hiett, Kelli L; Seal, Bruce S
2012-02-01
Although Campylobacter is an important food-borne human pathogen, there remains a lack of molecular diagnostic assays that are simple to use, cost-effective, and provide rapid results in research, clinical, or regulatory laboratories. Of the numerous Campylobacter assays that do exist, to our knowledge none has been empirically tested for specificity using high-throughput sequencing. Here we demonstrate the power of next-generation sequencing to determine the specificity of a widely cited Campylobacter-specific polymerase chain reaction (PCR) assay and describe a rapid method for direct cell suspension PCR to quickly and easily screen samples for Campylobacter. We present a specific protocol which eliminates the need for time-consuming and expensive genomic DNA extractions and, using a high-processivity polymerase, demonstrate conclusive screening of samples in <1 h. Pyrosequencing results show the assay to be extremely (>99%) sensitive, and spike-back experiments demonstrated a detection threshold of <10² CFU mL⁻¹. Additionally, we present 2 newly designed broad-range bacterial primer sets targeting the 23S rRNA gene that have wide applicability as internal amplification controls. Empirical testing of putative taxon-specific assays using high-throughput sequencing is an important validation step that is now financially feasible for research, regulatory, or clinical applications. Published by Elsevier Inc.
Self-organizing biopsychosocial dynamics and the patient-healer relationship.
Pincus, David
2012-01-01
The patient-healer relationship is an area of increasing interest for complementary and alternative medicine (CAM) researchers. This focus on the interpersonal context of treatment is not surprising as dismantling studies, clinical trials and other linear research designs continually point toward the critical role of context and the broadband biopsychosocial nature of therapeutic responses to CAM. Unfortunately, the same traditional research models and methods that fail to find simple and specific treatment-outcome relations are similarly failing to find simple and specific mechanisms to explain how interpersonal processes influence patient outcomes. This paper presents an overview of some of the key models and methods from nonlinear dynamical systems that are better equipped for empirical testing of CAM outcomes on broadband biopsychosocial processes. Suggestions are made for CAM researchers to assist in modeling the interactions among key process dynamics interacting across biopsychosocial scales: empathy, intra-psychic conflict, physiological arousal, and leukocyte telomerase activity. Finally, some speculations are made regarding the possibility for deeper cross-scale information exchange involving quantum temporal nonlocality. Copyright © 2012 S. Karger AG, Basel.
Kinetics versus thermodynamics in materials modeling: The case of the di-vacancy in iron
NASA Astrophysics Data System (ADS)
Djurabekova, F.; Malerba, L.; Pasianot, R. C.; Olsson, P.; Nordlund, K.
2010-07-01
Monte Carlo models are widely used for the study of microstructural and microchemical evolution of materials under irradiation. However, they often link explicitly the relevant activation energies to the energy difference between local equilibrium states. We provide a simple example (di-vacancy migration in iron) in which a rigorous activation energy calculation, by means of both empirical interatomic potentials and density functional theory methods, clearly shows that such a link is not granted, revealing a migration mechanism that a thermodynamics-linked activation energy model cannot predict. Such a mechanism is, however, fully consistent with thermodynamics. This example emphasizes the importance of basing Monte Carlo methods on models where the activation energies are rigorously calculated, rather than deduced from widespread heuristic equations.
The Bootstrap, the Jackknife, and the Randomization Test: A Sampling Taxonomy.
Rodgers, J L
1999-10-01
A simple sampling taxonomy is defined that shows the differences between and relationships among the bootstrap, the jackknife, and the randomization test. Each method has as its goal the creation of an empirical sampling distribution that can be used to test statistical hypotheses, estimate standard errors, and/or create confidence intervals. Distinctions between the methods can be made based on the sampling approach (with replacement versus without replacement) and the sample size (replacing the whole original sample versus replacing a subset of the original sample). The taxonomy is useful for teaching the goals and purposes of resampling schemes. An extension of the taxonomy implies other possible resampling approaches that have not previously been considered. Univariate and multivariate examples are presented.
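A compact illustration of the taxonomy's two axes (with versus without replacement, full-size resample versus subset), applied to a made-up two-sample example; the data and replicate counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.array([3.1, 2.4, 5.6, 4.2, 3.8, 4.9, 2.9, 5.1])
n = len(x)

# Bootstrap: resample n values WITH replacement
boot = np.array([rng.choice(x, n, replace=True).mean() for _ in range(2000)])

# Jackknife: leave-one-out subsets (WITHOUT replacement, size n-1)
jack = np.array([np.delete(x, i).mean() for i in range(n)])
jack_se = np.sqrt((n - 1) / n * ((jack - jack.mean()) ** 2).sum())

# Randomization test: permute group labels (WITHOUT replacement, whole pooled sample)
y = np.array([4.0, 5.2, 6.1, 3.9, 5.5, 4.8, 6.3, 5.0])
obs = y.mean() - x.mean()
pooled = np.concatenate([x, y])
perm = np.array([(lambda p: p[n:].mean() - p[:n].mean())(rng.permutation(pooled))
                 for _ in range(2000)])
p_value = np.mean(np.abs(perm) >= abs(obs))

print(f"bootstrap SE={boot.std(ddof=1):.3f}, jackknife SE={jack_se:.3f}, permutation p={p_value:.3f}")
```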
Deviation of Long-Period Tides from Equilibrium: Kinematics and Geostrophy
NASA Technical Reports Server (NTRS)
Egbert, Gary D.; Ray, Richard D.
2003-01-01
New empirical estimates of the long-period fortnightly (Mf) tide obtained from TOPEX/Poseidon (T/P) altimeter data confirm significant basin-scale deviations from equilibrium. Elevations in the low-latitude Pacific have reduced amplitude and lag those in the Atlantic by 30 deg or more. These interbasin amplitude and phase variations are robust features that are reproduced by numerical solutions of the shallow-water equations, even for a constant-depth ocean with schematic interconnected rectangular basins. A simplified analytical model for cooscillating connected basins also reproduces the principal features observed in the empirical solutions. This simple model is largely kinematic. Zonally averaged elevations within a simple closed basin would be nearly in equilibrium with the gravitational potential, except for a constant offset required to conserve mass. With connected basins these offsets are mostly eliminated by interbasin mass flux. Because of rotation, this flux occurs mostly in a narrow boundary layer across the mouth and at the western edge of each basin, and geostrophic balance in this zone supports small residual offsets (and phase shifts) between basins. The simple model predicts that this effect should decrease roughly linearly with frequency, a result that is confirmed by numerical modeling and empirical T/P estimates of the monthly (Mm) tidal constituent. This model also explains some aspects of the anomalous nonisostatic response of the ocean to atmospheric pressure forcing at periods of around 5 days.
A simple calculation method for determination of equivalent square field.
Shafiei, Seyed Ali; Hasanzadeh, Hadi; Shafiei, Seyed Ahmad
2012-04-01
Determination of the equivalent square fields for rectangular and shielded fields is of great importance in radiotherapy centers and treatment planning software. This is accomplished using standard tables and empirical formulas. The goal of this paper is to present a formula based on analysis of scatter reduction due to inverse square law to obtain equivalent field. Tables are published by different agencies such as ICRU (International Commission on Radiation Units and measurements), which are based on experimental data; but there exist mathematical formulas that yield the equivalent square field of an irregular rectangular field which are used extensively in computation techniques for dose determination. These processes lead to some complicated and time-consuming formulas for which the current study was designed. In this work, considering the portion of scattered radiation in absorbed dose at a point of measurement, a numerical formula was obtained based on which a simple formula was developed to calculate equivalent square field. Using polar coordinate and inverse square law will lead to a simple formula for calculation of equivalent field. The presented method is an analytical approach based on which one can estimate the equivalent square field of a rectangular field and may be used for a shielded field or an off-axis point. Besides, one can calculate equivalent field of rectangular field with the concept of decreased scatter radiation with inverse square law with a good approximation. This method may be useful in computing Percentage Depth Dose and Tissue-Phantom Ratio which are extensively used in treatment planning.
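The abstract does not reproduce the paper's formula, so the sketch below only implements the widely used area-to-perimeter (A/P) baseline, equivalent square side = 4A/P, against which scatter-based formulas of this kind are usually compared.

```python
def equivalent_square_ap(a: float, b: float) -> float:
    """Equivalent square side (cm) of an a x b cm rectangular field, A/P rule."""
    return 4 * (a * b) / (2 * (a + b))   # = 2ab/(a+b)

for a, b in [(10, 10), (5, 20), (4, 30)]:
    print(f"{a:>2} x {b:<2} cm field  ->  equivalent square {equivalent_square_ap(a, b):.1f} cm")
```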
Empirical State Error Covariance Matrix for Batch Estimation
NASA Technical Reports Server (NTRS)
Frisbee, Joe
2015-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted batch least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. This empirical error covariance matrix may be calculated as a side computation for each unique batch solution. Results based on the proposed technique will be presented for a simple two-observer, measurement-error-only problem.
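A hedged sketch of the general setting: a linear weighted batch least-squares fit, its formal covariance (HᵀWH)⁻¹, and one simple residual-scaled empirical alternative. The residual scaling shown is a common textbook choice and an assumption here, not the paper's derivation.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 40)
H = np.column_stack([np.ones_like(t), t])        # estimate bias and rate (illustrative state)
x_true = np.array([1.0, 0.5])
z = H @ x_true + rng.normal(0, 0.3, t.size)      # measurements with true sigma = 0.3

W = np.eye(t.size) / 0.2**2                      # assumed (mis-specified) measurement weights
P_formal = np.linalg.inv(H.T @ W @ H)            # formal covariance of the batch solution
x_hat = P_formal @ H.T @ W @ z

r = z - H @ x_hat                                # post-fit residuals
dof = t.size - H.shape[1]
P_empirical = P_formal * (r @ W @ r) / dof       # scale by the weighted residual variance

print("formal sigma:   ", np.sqrt(np.diag(P_formal)))
print("empirical sigma:", np.sqrt(np.diag(P_empirical)))
```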
Empirical Study of User Preferences Based on Rating Data of Movies
Zhao, YingSi; Shen, Bo
2016-01-01
User preference plays a prominent role in many fields, including electronic commerce, social opinion, and Internet search engines. Particularly in recommender systems, it directly influences the accuracy of the recommendation. Though many methods have been presented, most of these have only focused on how to improve the recommendation results. In this paper, we introduce an empirical study of user preferences based on a set of rating data about movies. We develop a simple statistical method to investigate the characteristics of user preferences. We find that the movies have potential characteristics of closure, which results in the formation of numerous cliques with a power-law size distribution. We also find that a user related to a small clique always has similar opinions on the movies in this clique. Then, we suggest a user preference model, which can eliminate the predictions that are considered to be impracticable. Numerical results show that the model can reflect user preference with remarkable accuracy when data elimination is allowed, and random factors in the rating data make prediction error inevitable. In further research, we will investigate many other rating data sets to examine the universality of our findings. PMID:26735847
This Ad is for You: Targeting and the Effect of Alcohol Advertising on Youth Drinking.
Molloy, Eamon
2016-02-01
Endogenous targeting of alcohol advertisements presents a challenge for empirically identifying a causal effect of advertising on drinking. Drinkers prefer a particular media; firms recognize this and target alcohol advertising at these media. This paper overcomes this challenge by utilizing novel data with detailed individual measures of media viewing and alcohol consumption and three separate empirical techniques, which represent significant improvements over previous methods. First, controls for the average audience characteristics of the media an individual views account for attributes of magazines and television programs alcohol firms may consider when deciding where to target advertising. A second specification directly controls for each television program and magazine a person views. The third method exploits variation in advertising exposure due to a 2003 change in an industry-wide rule that governs where firms may advertise. Although the unconditional correlation between advertising and drinking by youth (ages 18-24) is strong, models that include simple controls for targeting imply, at most, a modest advertising effect. Although the coefficients are estimated less precisely, estimates with models including more rigorous controls for targeting indicate no significant effect of advertising on youth drinking. Copyright © 2015 John Wiley & Sons, Ltd.
Communication and Organization in Software Development: An Empirical Study
NASA Technical Reports Server (NTRS)
Seaman, Carolyn B.; Basili, Victor R.
1996-01-01
The empirical study described in this paper addresses the issue of communication among members of a software development organization. The independent variables are various attributes of organizational structure. The dependent variable is the effort spent on sharing information which is required by the software development process in use. The research questions upon which the study is based ask whether or not these attributes of organizational structure have an effect on the amount of communication effort expended. In addition, there are a number of blocking variables which have been identified. These are used to account for factors other than organizational structure which may have an effect on communication effort. The study uses both quantitative and qualitative methods for data collection and analysis. These methods include participant observation, structured interviews, and graphical data presentation. The results of this study indicate that several attributes of organizational structure do affect communication effort, but not in a simple, straightforward way. In particular, the distances between communicators in the reporting structure of the organization, as well as in the physical layout of offices, affects how quickly they can share needed information, especially during meetings. These results provide a better understanding of how organizational structure helps or hinders communication in software development.
Hseu, Zeng-Yei; Zehetner, Franz
2014-01-01
This study compared the extractability of Cd, Cu, Ni, Pb, and Zn by 8 extraction protocols for 22 representative rural soils in Taiwan and correlated the extractable amounts of the metals with their uptake by Chinese cabbage for developing an empirical model to predict metal phytoavailability based on soil properties. Chemical agents in these protocols included dilute acids, neutral salts, and chelating agents, in addition to water and the Rhizon soil solution sampler. The highest concentrations of extractable metals were observed in the HCl extraction and the lowest in the Rhizon sampling method. The linear correlation coefficients between extractable metals in soil pools and metals in shoots were higher than those in roots. Correlations between extractable metal concentrations and soil properties were variable; soil pH, clay content, total metal content, and extractable metal concentration were considered together to simulate their combined effects on crop uptake by an empirical model. This combination improved the correlations to different extents for different extraction methods, particularly for Pb, for which the extractable amounts with any extraction protocol did not correlate with crop uptake by simple correlation analysis. PMID:25295297
Schädler, Marc R; Warzybok, Anna; Kollmeier, Birger
2018-01-01
The simulation framework for auditory discrimination experiments (FADE) was adopted and validated to predict the individual speech-in-noise recognition performance of listeners with normal and impaired hearing with and without a given hearing-aid algorithm. FADE uses a simple automatic speech recognizer (ASR) to estimate the lowest achievable speech reception thresholds (SRTs) from simulated speech recognition experiments in an objective way, independent from any empirical reference data. Empirical data from the literature were used to evaluate the model in terms of predicted SRTs and benefits in SRT with the German matrix sentence recognition test when using eight single- and multichannel binaural noise-reduction algorithms. To allow individual predictions of SRTs in binaural conditions, the model was extended with a simple better ear approach and individualized by taking audiograms into account. In a realistic binaural cafeteria condition, FADE explained about 90% of the variance of the empirical SRTs for a group of normal-hearing listeners and predicted the corresponding benefits with a root-mean-square prediction error of 0.6 dB. This highlights the potential of the approach for the objective assessment of benefits in SRT without prior knowledge about the empirical data. The predictions for the group of listeners with impaired hearing explained 75% of the empirical variance, while the individual predictions explained less than 25%. Possibly, additional individual factors should be considered for more accurate predictions with impaired hearing. A competing talker condition clearly showed one limitation of current ASR technology, as the empirical performance with SRTs lower than -20 dB could not be predicted.
Nowak, Michael D.; Smith, Andrew B.; Simpson, Carl; Zwickl, Derrick J.
2013-01-01
Molecular divergence time analyses often rely on the age of fossil lineages to calibrate node age estimates. Most divergence time analyses are now performed in a Bayesian framework, where fossil calibrations are incorporated as parametric prior probabilities on node ages. It is widely accepted that an ideal parameterization of such node age prior probabilities should be based on a comprehensive analysis of the fossil record of the clade of interest, but there is currently no generally applicable approach for calculating such informative priors. We provide here a simple and easily implemented method that employs fossil data to estimate the likely amount of missing history prior to the oldest fossil occurrence of a clade, which can be used to fit an informative parametric prior probability distribution on a node age. Specifically, our method uses the extant diversity and the stratigraphic distribution of fossil lineages confidently assigned to a clade to fit a branching model of lineage diversification. Conditioning this on a simple model of fossil preservation, we estimate the likely amount of missing history prior to the oldest fossil occurrence of a clade. The likelihood surface of missing history can then be translated into a parametric prior probability distribution on the age of the clade of interest. We show that the method performs well with simulated fossil distribution data, but that the likelihood surface of missing history can at times be too complex for the distribution-fitting algorithm employed by our software tool. An empirical example of the application of our method is performed to estimate echinoid node ages. A simulation-based sensitivity analysis using the echinoid data set shows that node age prior distributions estimated under poor preservation rates are significantly less informative than those estimated under high preservation rates. PMID:23755303
A Survey of the Isentropic Euler Vortex Problem Using High-Order Methods
NASA Technical Reports Server (NTRS)
Spiegel, Seth C.; Huynh, H. T.; DeBonis, James R.
2015-01-01
The flux reconstruction (FR) method is simple, efficient, and easy to implement, and it has been shown to be equivalent to a differential formulation of discontinuous Galerkin (DG) methods. The FR method is also accurate to an arbitrary order, and the isentropic Euler vortex problem is used here to empirically verify this claim. This problem is widely used in computational fluid dynamics (CFD) to verify the accuracy of a given numerical method due to its simplicity and known exact solution at any given time. While verifying our FR solver, multiple obstacles emerged that prevented us from achieving the expected order of accuracy over short and long amounts of simulation time. It was found that these complications stemmed from a few overlooked details in the original problem definition combined with the FR and DG methods achieving high accuracy with minimal dissipation. This paper is intended to consolidate the many versions of the vortex problem found in the literature and to highlight some of the consequences if these overlooked details remain neglected.
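A small helper for the standard empirical order-of-accuracy check used with the vortex problem: errors from two grid resolutions give the observed order p. The example error values are hypothetical.

```python
import math

def observed_order(err_coarse: float, err_fine: float, refinement: float = 2.0) -> float:
    """p such that error ~ h^p, from errors on grids whose spacing differs by 'refinement'."""
    return math.log(err_coarse / err_fine) / math.log(refinement)

# example L2 errors on two grids (hypothetical numbers, for illustration only)
print(f"observed order of accuracy: {observed_order(3.2e-5, 2.1e-6):.2f}")
```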
Digit Reversal in Children's Writing: A Simple Theory and Its Empirical Validation
ERIC Educational Resources Information Center
Fischer, Jean-Paul
2013-01-01
This article presents a simple theory according to which the left-right reversal of single digits by 5- and 6-year-old children is mainly due to the application of an implicit right-writing or -orienting rule. A number of nontrivial predictions can be drawn from this theory. First, left-oriented digits (1, 2, 3, 7, and 9) will be reversed more…
Evidence for the Early Emergence of the Simple View of Reading in a Transparent Orthography
ERIC Educational Resources Information Center
Kendeou, Panayiota; Papadopoulos, Timothy C.; Kotzapoulou, Marianna
2013-01-01
The main aim of the present study was to empirically test the emergence of the Simple View of Reading (SVR) in a transparent orthography, and specifically in Greek. To do so, we examined whether the constituent components of the SVR could be identified in young, Greek-speaking children even before the beginning of formal reading instruction. Our…
ERIC Educational Resources Information Center
Lee, Wan-Fung; Bulcock, Jeffrey Wilson
The purposes of this study are: (1) to demonstrate the superiority of simple ridge regression over ordinary least squares regression through theoretical argument and empirical example; (2) to modify ridge regression through use of the variance normalization criterion; and (3) to demonstrate the superiority of simple ridge regression based on the…
NASA Astrophysics Data System (ADS)
Yao, Rutao; Ma, Tianyu; Shao, Yiping
2008-08-01
This work is part of a feasibility study to develop SPECT imaging capability on a lutetium oxyorthosilicate (LSO) based animal PET system. The SPECT acquisition was enabled by inserting a collimator assembly inside the detector ring and acquiring data in singles mode. The same LSO detectors were used for both PET and SPECT imaging. The intrinsic radioactivity of 176Lu in the LSO crystals, however, contaminates the SPECT data, and can generate image artifacts and introduce quantification error. The objectives of this study were to evaluate the effectiveness of an LSO background subtraction method, and to estimate the minimal detectable target activity (MDTA) of the image object for SPECT imaging. For LSO background correction, the LSO contribution in an image study was estimated based on a pre-measured long LSO background scan and subtracted prior to the image reconstruction. The MDTA was estimated in two ways. The empirical MDTA (eMDTA) was estimated from screening the tomographic images at different activity levels. The calculated MDTA (cMDTA) was estimated from using a formula based on applying a modified Currie equation on an average projection dataset. Two simulated and two experimental phantoms with different object activity distributions and levels were used in this study. The results showed that LSO background adds concentric ring artifacts to the reconstructed image, and the simple subtraction method can effectively remove these artifacts—the effect of the correction was more visible when the object activity level was near or above the eMDTA. For the four phantoms studied, the cMDTA was consistently about five times the corresponding eMDTA. In summary, we implemented a simple LSO background subtraction method and demonstrated its effectiveness. The projection-based calculation formula yielded MDTA results that closely correlate with those obtained empirically and may have predictive value for imaging applications.
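The paper's modified Currie equation and its projection averaging are not spelled out in the abstract; the sketch below only shows the standard Currie detection limit applied to a background count, as one plausible building block. The background value is a placeholder.

```python
import math

def currie_ld(background_counts: float) -> float:
    """Currie detection limit (net counts) for a paired background measurement."""
    return 2.71 + 4.65 * math.sqrt(background_counts)

lso_bkg = 1.2e4   # LSO background counts in the relevant window (example value)
print(f"detectable net counts above the LSO background: {currie_ld(lso_bkg):.0f}")
```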
Intervals for posttest probabilities: a comparison of 5 methods.
Mossman, D; Berger, J O
2001-01-01
Several medical articles discuss methods of constructing confidence intervals for single proportions and the likelihood ratio, but scant attention has been given to the systematic study of intervals for the posterior odds, or the positive predictive value, of a test. The authors describe 5 methods of constructing confidence intervals for posttest probabilities when estimates of sensitivity, specificity, and the pretest probability of a disorder are derived from empirical data. They then evaluate each method to determine how well the intervals' coverage properties correspond to their nominal value. When the estimates of pretest probabilities, sensitivity, and specificity are derived from more than 80 subjects and are not close to 0 or 1, all methods generate intervals with appropriate coverage properties. When these conditions are not met, however, the best-performing method is an objective Bayesian approach implemented by a simple simulation using a spreadsheet. Physicians and investigators can generate accurate confidence intervals for posttest probabilities in small-sample situations using the objective Bayesian approach.
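A sketch of the simulation approach the abstract favors: sample sensitivity, specificity, and pretest probability from Beta posteriors (Jeffreys priors are assumed here) and read off percentiles of the implied posttest probability. The counts are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
# hypothetical counts: 40/50 diseased test positive, 90/100 healthy test negative,
# and 30 of 150 subjects have the disorder (pretest probability)
sens = rng.beta(40 + 0.5, 10 + 0.5, 20000)
spec = rng.beta(90 + 0.5, 10 + 0.5, 20000)
prev = rng.beta(30 + 0.5, 120 + 0.5, 20000)

# Bayes' rule for the positive predictive value, evaluated per simulation draw
ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
lo, med, hi = np.percentile(ppv, [2.5, 50, 97.5])
print(f"posttest probability {med:.2f} (95% interval {lo:.2f}-{hi:.2f})")
```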
Modeling electrokinetic flows by consistent implicit incompressible smoothed particle hydrodynamics
Pan, Wenxiao; Kim, Kyungjoo; Perego, Mauro; ...
2017-01-03
In this paper, we present a consistent implicit incompressible smoothed particle hydrodynamics (I²SPH) discretization of Navier–Stokes, Poisson–Boltzmann, and advection–diffusion equations subject to Dirichlet or Robin boundary conditions. It is applied to model various two- and three-dimensional electrokinetic flows in simple or complex geometries. The accuracy and convergence of the consistent I²SPH are examined via comparison with analytical solutions, grid-based numerical solutions, or empirical models. Lastly, the new method provides a framework to explore broader applications of SPH in microfluidics and complex fluids with charged objects, such as colloids and biomolecules, in arbitrarily complex geometries.
Palmer, David S; Frolov, Andrey I; Ratkova, Ekaterina L; Fedorov, Maxim V
2010-12-15
We report a simple universal method to systematically improve the accuracy of hydration free energies calculated using an integral equation theory of molecular liquids, the 3D reference interaction site model. A strong linear correlation is observed between the difference of the experimental and (uncorrected) calculated hydration free energies and the calculated partial molar volume for a data set of 185 neutral organic molecules from different chemical classes. By using the partial molar volume as a linear empirical correction to the calculated hydration free energy, we obtain predictions of hydration free energies in excellent agreement with experiment (R = 0.94, σ = 0.99 kcal mol⁻¹ for a test set of 120 organic molecules).
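A sketch of the correction scheme described above, on synthetic numbers: regress the error of the raw hydration free energies on the calculated partial molar volume and apply the fitted line as a linear correction. The coefficients below are placeholders, not those reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
v_pm = rng.uniform(50, 250, 185)                              # partial molar volumes (synthetic)
dg_exp = rng.normal(-5, 3, 185)                               # "experimental" values (synthetic)
dg_raw = dg_exp + 0.04 * v_pm - 2.0 + rng.normal(0, 1, 185)   # raw values with a PMV-linear bias

a, b = np.polyfit(v_pm, dg_exp - dg_raw, 1)                   # fit the linear correction
dg_corr = dg_raw + a * v_pm + b
rmse = np.sqrt(np.mean((dg_corr - dg_exp) ** 2))
print(f"correction: dG += {a:.3f}*V + {b:.2f}; RMSE after correction = {rmse:.2f} (synthetic units)")
```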
Normal uniform mixture differential gene expression detection for cDNA microarrays
Dean, Nema; Raftery, Adrian E
2005-01-01
Background One of the primary tasks in analysing gene expression data is finding genes that are differentially expressed in different samples. Multiple testing issues due to the thousands of tests run make some of the more popular methods for doing this problematic. Results We propose a simple method, Normal Uniform Differential Gene Expression (NUDGE) detection for finding differentially expressed genes in cDNA microarrays. The method uses a simple univariate normal-uniform mixture model, in combination with new normalization methods for spread as well as mean that extend the lowess normalization of Dudoit, Yang, Callow and Speed (2002) [1]. It takes account of multiple testing, and gives probabilities of differential expression as part of its output. It can be applied to either single-slide or replicated experiments, and it is very fast. Three datasets are analyzed using NUDGE, and the results are compared to those given by other popular methods: unadjusted and Bonferroni-adjusted t tests, Significance Analysis of Microarrays (SAM), and Empirical Bayes for microarrays (EBarrays) with both Gamma-Gamma and Lognormal-Normal models. Conclusion The method gives a high probability of differential expression to genes known/suspected a priori to be differentially expressed and a low probability to the others. In terms of known false positives and false negatives, the method outperforms all multiple-replicate methods except for the Gamma-Gamma EBarrays method to which it offers comparable results with the added advantages of greater simplicity, speed, fewer assumptions and applicability to the single replicate case. An R package called nudge to implement the methods in this paper will be made available soon at . PMID:16011807
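A minimal sketch of the core NUDGE idea as described above: a one-dimensional normal-uniform mixture fitted by EM, where the normal component models unchanged genes and a uniform component over the observed range models differentially expressed ones. The simulated data and starting values are arbitrary, and the real package also handles normalization and replicates.

```python
import numpy as np

rng = np.random.default_rng(5)
m = np.concatenate([rng.normal(0, 0.3, 900), rng.uniform(-3, 3, 100)])  # normalized log-ratios

lo, hi = m.min(), m.max()
u_dens = 1.0 / (hi - lo)                       # fixed uniform density over the data range
mu, var, pi_de = 0.0, m.var(), 0.1             # initial normal parameters and DE proportion

for _ in range(200):                           # EM iterations
    norm_dens = np.exp(-(m - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    resp = pi_de * u_dens / (pi_de * u_dens + (1 - pi_de) * norm_dens)  # P(differentially expressed)
    w = 1 - resp                               # responsibility of the normal (unchanged) component
    mu = np.sum(w * m) / np.sum(w)
    var = np.sum(w * (m - mu) ** 2) / np.sum(w)
    pi_de = resp.mean()

print(f"estimated DE fraction: {pi_de:.3f}; genes with P(DE) > 0.9: {(resp > 0.9).sum()}")
```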
NASA Technical Reports Server (NTRS)
Lautenschlager, L.; Perry, C. R., Jr. (Principal Investigator)
1981-01-01
The development of formulae for the reduction of multispectral scanner measurements to a single value (vegetation index) for predicting and assessing vegetative characteristics is addressed. The origin, motivation, and derivation of some four dozen vegetation indices are summarized. Empirical, graphical, and analytical techniques are used to investigate the relationships among the various indices. It is concluded that many vegetative indices are very similar, some being simple algebraic transforms of others.
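A short illustration of the "simple algebraic transforms" point: the simple ratio (SR = NIR/Red) and NDVI computed from the same reflectances are exact monotone transforms of one another.

```python
import numpy as np

red = np.array([0.08, 0.10, 0.12, 0.20])
nir = np.array([0.45, 0.40, 0.30, 0.25])

sr = nir / red                     # simple ratio
ndvi = (nir - red) / (nir + red)   # normalized difference vegetation index
print(np.allclose(sr, (1 + ndvi) / (1 - ndvi)))   # True: SR is an algebraic transform of NDVI
```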
Hou, Chen; Amunugama, Kaushalya
2015-07-01
The relationship between energy expenditure and longevity has been a central theme in aging studies. Empirical studies have yielded controversial results, which cannot be reconciled by existing theories. In this paper, we present a simple theoretical model based on first principles of energy conservation and allometric scaling laws. The model takes into consideration the energy tradeoffs between life history traits and the efficiency of the energy utilization, and offers quantitative and qualitative explanations for a set of seemingly contradictory empirical results. We show that oxidative metabolism can affect cellular damage and longevity in different ways in animals with different life histories and under different experimental conditions. Qualitative data and the linearity between energy expenditure, cellular damage, and lifespan assumed in previous studies are not sufficient to understand the complexity of the relationships. Our model provides a theoretical framework for quantitative analyses and predictions. The model is supported by a variety of empirical studies, including studies on the cellular damage profile during ontogeny; the intra- and inter-specific correlations between body mass, metabolic rate, and lifespan; and the effects on lifespan of (1) diet restriction and genetic modification of growth hormone, (2) the cold and exercise stresses, and (3) manipulations of antioxidant. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
Black swans or dragon-kings? A simple test for deviations from the power law
NASA Astrophysics Data System (ADS)
Janczura, J.; Weron, R.
2012-05-01
We develop a simple test for deviations from power-law tails and, in fact, from the tails of any distribution. We use this test, which is based on the asymptotic properties of the empirical distribution function, to answer the question of whether great natural disasters, financial crashes or electricity price spikes should be classified as dragon-kings or 'only' as black swans.
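The paper's exact statistic is not reproduced in the abstract; the sketch below only illustrates the underlying idea as assumed here: compare the empirical CDF of a tail sample with a fitted power law and flag observations falling outside pointwise Gaussian (CLT) confidence bands.

```python
import numpy as np

rng = np.random.default_rng(6)
xmin = 1.0
x = xmin * (1 - rng.random(500)) ** (-1 / 1.5)      # Pareto-tail sample, true exponent alpha = 2.5
x = np.append(x, np.full(8, 55.0))                  # artificial cluster of "dragon-king" outliers

x = np.sort(x)
n = x.size
alpha = 1 + n / np.sum(np.log(x / xmin))            # maximum-likelihood tail exponent
F_fit = 1 - (x / xmin) ** (1 - alpha)               # fitted power-law CDF at each observation
F_emp = np.arange(1, n + 1) / n                     # empirical CDF
band = 1.96 * np.sqrt(F_fit * (1 - F_fit) / n)      # pointwise 95% CLT band

flagged = x[np.abs(F_emp - F_fit) > band]
print(f"fitted alpha = {alpha:.2f}; observations outside the band: {np.unique(flagged.round(1))}")
```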
ERIC Educational Resources Information Center
Cadime, Irene; Rodrigues, Bruna; Santos, Sandra; Viana, Fernanda Leopoldina; Chaves-Sousa, Séli; do Céu Cosme, Maria; Ribeiro, Iolanda
2017-01-01
Empirical research has provided evidence for the simple view of reading across a variety of orthographies, but the role of oral reading fluency in the model is unclear. Moreover, the relative weight of listening comprehension, oral reading fluency and word recognition in reading comprehension seems to vary across orthographies and schooling years.…
New Method to Calculate the Time Variation of the Force Field Parameter
NASA Astrophysics Data System (ADS)
Santiago, A.; Lara, A.; Enríquez-Rivera, O.; Caballero-Lopez, R. A.
2018-03-01
Galactic cosmic rays (CRs) entering the heliosphere are affected by interplanetary magnetic fields and solar wind disturbances resulting in the modulation of the CR total flux observed in the inner heliosphere. The so-called force field model is often used to compute the galactic CR spectrum modulated by the solar activity due to the fact that it characterizes this process by only one parameter (the modulation potential, ϕ). In this work, we present two types of an empirical simplification (ES) method used to reconstruct the time variation of the modulation potential (Δϕ). Our ES offers a simple and fast alternative to compute the Δϕ at any desired time. The first ES type is based on the empirical fact that the dependence between Δϕ and neutron monitor (NM) count rates can be parameterized by a second-degree polynomial. The second ES type is based on the assumption that there is an inverse relation between Δϕ and NM count rates. We report the parameters found for the two types, which may be used to compute Δϕ for some NMs in a very fast and efficient way. In order to test the validity of the proposed ES, we compare our results with Δϕ obtained from the literature. Finally, we apply our method to obtain the proton and helium spectra of primary CRs near the Earth at four randomly selected times.
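A sketch of the first ES type: fit a second-degree polynomial relating the modulation-potential variation to neutron-monitor count rates. The count rates, coefficients, and noise below are synthetic, not the fitted parameters reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
nm_rate = rng.uniform(5800, 6800, 120)                  # NM count rates (synthetic)
xc = nm_rate - 6300.0                                   # centred for a well-conditioned fit
dphi = 600.0 - 0.8 * xc + 3.0e-4 * xc**2 + rng.normal(0, 10, 120)   # synthetic "truth", MV

coeffs = np.polyfit(xc, dphi, 2)                        # second-degree polynomial fit

def dphi_at(nm_count_rate: float) -> float:
    """Reconstructed modulation-potential variation for a given NM count rate."""
    return np.polyval(coeffs, nm_count_rate - 6300.0)

print(f"reconstructed dphi at an NM rate of 6300: {dphi_at(6300):.0f} MV (synthetic example)")
```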
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haustein, P.E.; Brenner, D.S.; Casten, R.F.
1987-12-10
A new semi-empirical method, based on the use of the P-factor (P = NpNn/(Np + Nn)), is shown to simplify significantly the systematics of atomic masses. Its use is illustrated for actinide nuclei where complicated patterns of mass systematics seen in traditional plots versus Z, N, or isospin are consolidated and transformed into linear ones extending over long isotopic and isotonic sequences. The linearization of the systematics by this procedure provides a simple basis for mass prediction. For many unmeasured nuclei beyond the known mass surface, the P-factor method operates by interpolation among data for known nuclei rather than by extrapolation, as is common in other mass models.
The Analysis of a Diet for the Human Being and the Companion Animal using Big Data in 2016.
Jung, Eun-Jin; Kim, Young-Suk; Choi, Jung-Wa; Kang, Hye Won; Chang, Un-Jae
2017-10-01
The purpose of this study was to investigate the diet tendencies of humans and companion animals using big data analysis. The keyword data on human diet and companion animals' diet were collected from the portal site Naver from January 1, 2016 until December 31, 2016, and the collected data were analyzed by simple frequency analysis, N-gram analysis, keyword network analysis and seasonality analysis. For humans, the word exercise had the highest frequency in the simple frequency analysis, whereas diet menu most frequently appeared in the N-gram analysis. For companion animals, the term dog had the highest frequency in the simple frequency analysis, whereas diet method was most frequent in the N-gram analysis. Keyword network analysis for humans indicated 4 groups: diet group, exercise group, commercial diet food group, and commercial diet program group. However, the keyword network analysis for companion animals indicated 3 groups: diet group, exercise group, and professional medical help group. The analysis of seasonality showed that interest in diet for both humans and companion animals increased steadily from February of 2016 and reached its peak in July. In conclusion, the diets of humans and companion animals showed similar tendencies, particularly a higher preference for dietary control over other methods. The diets of companion animals are determined by the choices of their owners, as diet methods that owners find effective are usually applied to their companion animals as well. Therefore, an empirical demonstration of whether a correlation in obesity exists between humans and their companion animals is needed.
On the Measurements of Numerical Viscosity and Resistivity in Eulerian MHD Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rembiasz, Tomasz; Obergaulinger, Martin; Cerdá-Durán, Pablo
2017-06-01
We propose a simple ansatz for estimating the value of the numerical resistivity and the numerical viscosity of any Eulerian MHD code. We test this ansatz with the help of simulations of the propagation of (magneto)sonic waves, Alfvén waves, and the tearing mode (TM) instability using the MHD code Aenus. By comparing the simulation results with analytical solutions of the resistive-viscous MHD equations and an empirical ansatz for the growth rate of TMs, we measure the numerical viscosity and resistivity of Aenus. The comparison shows that the fast magnetosonic speed and wavelength are the characteristic velocity and length, respectively, of the aforementioned (relatively simple) systems. We also determine the dependence of the numerical viscosity and resistivity on the time integration method, the spatial reconstruction scheme and (to a lesser extent) the Riemann solver employed in the simulations. From the measured results, we infer the numerical resolution (as a function of the spatial reconstruction method) required to properly resolve the growth and saturation level of the magnetic field amplified by the magnetorotational instability in the post-collapsed core of massive stars. Our results show that it is most advantageous to resort to ultra-high-order methods (e.g., the ninth-order monotonicity-preserving method) to tackle this problem properly, in particular, in three-dimensional simulations.
Rostami, Javad; Chen, Jingming; Tse, Peter W.
2017-01-01
Ultrasonic guided waves have been extensively applied for non-destructive testing of plate-like structures, particularly pipes, in the past two decades. In this regard, if a structure has a simple geometry, the obtained guided wave signals are easy to explain. However, any small degree of complexity in the geometry, such as contact with other materials, may cause an extra amount of complication in the interpretation of guided wave signals. The problem deepens if defects have irregular shapes, such as natural corrosion. Signal processing techniques that have been proposed for guided wave signal analysis are generally good for simple signals obtained in a highly controlled experimental environment. In fact, guided wave signals in a real situation, such as the existence of natural corrosion in wall-covered pipes, are much more complicated. Considering pipes in residential buildings that pass through concrete walls, in this paper we introduce Smooth Empirical Mode Decomposition (SEMD) to efficiently separate overlapped guided waves. As empirical mode decomposition (EMD), which is a good candidate for analyzing non-stationary signals, suffers from some shortcomings, the wavelet transform was adopted in the sifting stage of EMD to improve its outcome in SEMD. However, the selection of a mother wavelet that best suits our purpose plays an important role. Since in guided wave inspection the incident waves are well known and are usually tone-burst signals, we tailored a complex tone-burst signal to be used as our mother wavelet. In the sifting stage of EMD, wavelet de-noising was applied to eliminate unwanted frequency components from each IMF. SEMD greatly enhances the performance of EMD in guided wave analysis for highly contaminated signals. In our experiment on concrete-covered pipes with natural corrosion, this method not only separates the concrete wall indication clearly in the time-domain signal, but also exposes a naturally corroded region with complex geometry that was hidden inside the concrete section. PMID:28178220
Semertzidou, P; Piliposian, G T; Appleby, P G
2016-08-01
The residence time of (210)Pb created in the atmosphere by the decay of gaseous (222)Rn is a key parameter controlling its distribution and fallout onto the landscape. These in turn are key parameters governing the use of this natural radionuclide for dating and interpreting environmental records stored in natural archives such as lake sediments. One of the principal methods for estimating the atmospheric residence time is through measurements of the activities of the daughter radionuclides (210)Bi and (210)Po, and in particular the (210)Bi/(210)Pb and (210)Po/(210)Pb activity ratios. Calculations used in early empirical studies assumed that these were governed by a simple series of equilibrium equations. This approach does however have two failings; it takes no account of the effect of global circulation on spatial variations in the activity ratios, and no allowance is made for the impact of transport processes across the tropopause. This paper presents a simple model for calculating the distributions of (210)Pb, (210)Bi and (210)Po at northern mid-latitudes (30°-65°N), a region containing almost all the available empirical data. By comparing modelled (210)Bi/(210)Pb activity ratios with empirical data a best estimate for the tropospheric residence time of around 10 days is obtained. This is significantly longer than earlier estimates of between 4 and 7 days. The process whereby (210)Pb is transported into the stratosphere when tropospheric concentrations are high and returned from it when they are low, significantly increases the effective residence time in the atmosphere as a whole. The effect of this is to significantly enhance the long range transport of (210)Pb from its source locations. The impact is illustrated by calculations showing the distribution of (210)Pb fallout versus longitude at northern mid-latitudes. Copyright © 2016 Elsevier Ltd. All rights reserved.
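For orientation, the simple steady-state series relation that such estimates build on can be written, assuming first-order aerosol removal with residence time τ and in-growth of (210)Bi from (210)Pb, as A(Bi)/A(Pb) = λ_Bi τ / (1 + λ_Bi τ); the sketch below inverts it to turn a measured activity ratio into a residence-time estimate. This is the classical single-box form, not the circulation-aware model developed in the paper.

```python
import math

half_life_bi210_days = 5.01
lam_bi = math.log(2) / half_life_bi210_days      # decay constant of 210Bi, per day

def residence_time_days(activity_ratio_bi_pb: float) -> float:
    """Residence time tau (days) from a measured 210Bi/210Pb activity ratio (single-box model)."""
    return activity_ratio_bi_pb / (lam_bi * (1 - activity_ratio_bi_pb))

for r in (0.4, 0.5, 0.6):
    print(f"Bi/Pb = {r:.1f}  ->  tau ~ {residence_time_days(r):.1f} days")
```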
NASA Astrophysics Data System (ADS)
Bassiouni, Maoya; Higgins, Chad W.; Still, Christopher J.; Good, Stephen P.
2018-06-01
Vegetation controls on soil moisture dynamics are challenging to measure and translate into scale- and site-specific ecohydrological parameters for simple soil water balance models. We hypothesize that empirical probability density functions (pdfs) of relative soil moisture or soil saturation encode sufficient information to determine these ecohydrological parameters. Further, these parameters can be estimated through inverse modeling of the analytical equation for soil saturation pdfs, derived from the commonly used stochastic soil water balance framework. We developed a generalizable Bayesian inference framework to estimate ecohydrological parameters consistent with empirical soil saturation pdfs derived from observations at point, footprint, and satellite scales. We applied the inference method to four sites with different land cover and climate assuming (i) an annual rainfall pattern and (ii) a wet season rainfall pattern with a dry season of negligible rainfall. The Nash-Sutcliffe efficiencies of the analytical model's fit to soil observations ranged from 0.89 to 0.99. The coefficient of variation of posterior parameter distributions ranged from < 1 to 15 %. The parameter identifiability was not significantly improved in the more complex seasonal model; however, small differences in parameter values indicate that the annual model may have absorbed dry season dynamics. Parameter estimates were most constrained for scales and locations at which soil water dynamics are more sensitive to the fitted ecohydrological parameters of interest. In these cases, model inversion converged more slowly but ultimately provided better goodness of fit and lower uncertainty. Results were robust using as few as 100 daily observations randomly sampled from the full records, demonstrating the advantage of analyzing soil saturation pdfs instead of time series to estimate ecohydrological parameters from sparse records. Our work combines modeling and empirical approaches in ecohydrology and provides a simple framework to obtain scale- and site-specific analytical descriptions of soil moisture dynamics consistent with soil moisture observations.
Simulation of upwind maneuvering of a sailing yacht
NASA Astrophysics Data System (ADS)
Harris, Daniel Hartrick
A time domain maneuvering simulation of an IACC class yacht suitable for the analysis of unsteady upwind sailing including tacking is presented. The simulation considers motions in six degrees of freedom. The hydrodynamic and aerodynamic loads are calculated primarily with unsteady potential theory supplemented by empirical viscous models. The hydrodynamic model includes the effects of incident waves. Control of the rudder is provided by a simple rate feedback autopilot which is augmented with open loop additions to mimic human steering. The hydrodynamic models are based on the superposition of force components. These components fall into two groups, those which the yacht will experience in calm water, and those due to incident waves. The calm water loads are further divided into zero Froude number, or "double body" maneuvering loads, hydrostatic loads, gravitational loads, free surface radiation loads, and viscous/residual loads. The maneuvering loads are calculated with an unsteady panel code which treats the instantaneous geometry of the yacht below the undisturbed free surface. The free surface radiation loads are calculated via convolution of impulse response functions derived from seakeeping strip theory. The viscous/residual loads are based upon empirical estimates. The aerodynamic model consists primarily of a database of steady state sail coefficients. These coefficients treat the individual contributions to the total sail force of a number of chordwise strips on both the main and jib. Dynamic effects are modeled by using the instantaneous incident wind velocity and direction as the independent variables for the sail load contribution of each strip. The sail coefficient database was calculated numerically with potential methods and simple empirical viscous corrections. Additional aerodynamic load calculations are made to determine the parasitic contributions of the rig and hull. Validation studies compare the steady sailing hydro and aerodynamic loads, seaway induced motions, added resistance in waves, and tacking performance with trials data and other sources. Reasonable agreement is found in all cases.
Hansen, Katja; Biegler, Franziska; Ramakrishnan, Raghunathan; ...
2015-06-04
Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. The same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies.
NASA Astrophysics Data System (ADS)
Palazzi, E.
The evaluation of the atmospheric dispersion of a cloud arising from a sudden release of flammable or toxic materials is an essential tool for properly designing flares, vents and other safety devices and for quantifying the potential risk related to existing ones or arising from the various kinds of accidents that can occur in chemical plants. Among the methods developed to treat the important case of upward-directed jets, Hoehne's procedure for determining the behaviour and extent of the flammability zone is extensively utilized, particularly in petrochemical plants. In a previous study, a substantial simplification of this procedure was achieved by correlating the experimental data with an empirical formula, which made it possible to obtain a mathematical description of the boundaries of the flammable cloud. Following a theoretical approach, a more general model is developed in the present work, applicable to the various kinds of design problems and/or risk evaluations regarding upward-directed releases from high-velocity sources. It is also demonstrated that the model gives conservative results if applied outside the range of Hoehne's experimental conditions. Moreover, with simple modifications, the same approach could easily be applied to the atmospheric dispersion of releases in any direction.
Artifact removal from EEG data with empirical mode decomposition
NASA Astrophysics Data System (ADS)
Grubov, Vadim V.; Runnova, Anastasiya E.; Efremova, Tatyana Yu.; Hramov, Alexander E.
2017-03-01
In this paper we propose a novel method for dealing with the physiological artifacts caused by intensive activity of facial and neck muscles and other movements in experimental human EEG recordings. The method is based on the analysis of EEG signals with empirical mode decomposition (Hilbert-Huang transform). We introduce the mathematical algorithm of the method with the following steps: empirical mode decomposition of the EEG signal, identification of the empirical modes containing artifacts, removal of those modes, and reconstruction of the initial EEG signal. We test the method by filtering experimental human EEG signals contaminated with movement artifacts and show its high efficiency.
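A minimal sketch of the decompose, drop, and reconstruct pipeline is given below, assuming the PyEMD package is available. The rule used here to flag artifact modes (slow, high-energy IMFs) is only a placeholder, not the selection criterion of the paper.

```python
# Minimal sketch of the remove-and-reconstruct step, assuming PyEMD is available.
# The rule used to flag artifact modes (low-frequency, high-energy IMFs) is a
# placeholder; the paper selects artifact modes by its own criteria.
import numpy as np
from PyEMD import EMD

def remove_artifact_imfs(eeg, fs, f_cut=1.0, energy_ratio=0.2):
    imfs = EMD()(eeg)
    clean = np.zeros_like(eeg)
    total_energy = np.sum(eeg ** 2)
    for imf in imfs:
        # crude mean-frequency estimate from zero crossings
        zc = np.sum(np.abs(np.diff(np.sign(imf))) > 0)
        mean_freq = zc * fs / (2.0 * len(imf))
        is_artifact = mean_freq < f_cut and np.sum(imf ** 2) > energy_ratio * total_energy
        if not is_artifact:
            clean += imf                      # keep modes judged artifact-free
    return clean

fs = 250.0                                    # hypothetical sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 3.0 * np.sin(2 * np.pi * 0.3 * t)  # alpha + slow drift
print(remove_artifact_imfs(eeg, fs)[:5])
```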
NASA Technical Reports Server (NTRS)
Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)
2000-01-01
This report describes work performed on Contract NAS3-27720AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise, and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semi-empirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor-noise correlation model was developed from engine acoustic test results. This work provided several insights on potential approaches to reducing aircraft engine noise. Code development is described in this report, and those insights are discussed.
Empirical dual energy calibration (EDEC) for cone-beam computed tomography.
Stenner, Philip; Berkus, Timo; Kachelriess, Marc
2007-09-01
Material-selective imaging using dual energy CT (DECT) relies heavily on well-calibrated material decomposition functions. These require precise knowledge of the detected x-ray spectra, and even if the spectra are exactly known, the reliability of DECT will suffer from scattered radiation. We propose an empirical method to determine the proper decomposition function. In contrast to other decomposition algorithms, our empirical dual energy calibration (EDEC) technique requires neither knowledge of the spectra nor of the attenuation coefficients. The desired material-selective raw data p1 and p2 are obtained as functions of the measured attenuation data q1 and q2 (one DECT scan = two raw data sets) by passing them through a polynomial function. The polynomial's coefficients are determined using a general least-squares fit based on thresholded images of a calibration phantom. The calibration phantom's dimensions should be of the same order of magnitude as the test object, but other than that no assumptions on its exact size or positioning are made. Once the decomposition coefficients are determined, DECT raw data can be decomposed by simply passing them through the polynomial. To demonstrate EDEC, simulations of an oval CTDI phantom, a lung phantom, a thorax phantom and a mouse phantom were carried out. The method was further verified by measuring a physical mouse phantom, a half-and-half-cylinder phantom and a Yin-Yang phantom with a dedicated in vivo dual source micro-CT scanner. The raw data were decomposed into their components, reconstructed, and the pixel values obtained were compared to the theoretical values. The determination of the calibration coefficients with EDEC is very robust and depends only slightly on the type of calibration phantom used. The images of the test phantoms (simulations and measurements) show nearly perfect agreement with the theoretical μ values and density values. Since EDEC is an empirical technique, it inherently compensates for scatter components. The empirical dual energy calibration technique is a pragmatic, simple, and reliable calibration approach that produces highly quantitative DECT images.
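The core of EDEC is an ordinary linear least-squares fit of polynomial coefficients that map the two measured attenuation values to a material-selective value. A toy sketch of that fit-and-apply step follows; the polynomial order and the calibration targets are illustrative, not taken from the paper.

```python
# Sketch of the EDEC idea: fit a polynomial p(q1, q2) by linear least squares on
# calibration data, then decompose measured dual-energy raw data by evaluating it.
import numpy as np

def design_matrix(q1, q2, order=2):
    """Bivariate polynomial basis in the two measured attenuation values."""
    cols = [q1 ** i * q2 ** j for i in range(order + 1) for j in range(order + 1 - i)]
    return np.column_stack(cols)

def fit_edec(q1_cal, q2_cal, p_target, order=2):
    A = design_matrix(q1_cal, q2_cal, order)
    coef, *_ = np.linalg.lstsq(A, p_target, rcond=None)
    return coef

def decompose(q1, q2, coef, order=2):
    return design_matrix(q1, q2, order) @ coef

# Toy calibration: pretend p1 depends nonlinearly on (q1, q2), then recover it.
rng = np.random.default_rng(0)
q1c, q2c = rng.uniform(0, 2, 500), rng.uniform(0, 2, 500)
p1_true = 0.8 * q1c - 0.5 * q2c + 0.1 * q1c * q2c
coef = fit_edec(q1c, q2c, p1_true)
print(decompose(np.array([1.0]), np.array([1.0]), coef))   # expect roughly 0.4
```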
Carvalho, Gustavo A.; Minnett, Peter J.; Banzon, Viva F.; Baringer, Warner; Heil, Cynthia A.
2011-01-01
We present a simple algorithm to identify Karenia brevis blooms in the Gulf of Mexico along the west coast of Florida in satellite imagery. It is based on an empirical analysis of collocated matchups of satellite and in situ measurements. The results of this Empirical Approach are compared to those of a Bio-optical Technique – taken from the published literature – and the Operational Method currently implemented by the NOAA Harmful Algal Bloom Forecasting System for K. brevis blooms. These three algorithms are evaluated using a multi-year MODIS data set (from July, 2002 to October, 2006) and a long-term in situ database. Matchup pairs, consisting of remotely-sensed ocean color parameters and near-coincident field measurements of K. brevis concentration, are used to assess the accuracy of the algorithms. Fair evaluation of the algorithms was only possible in the central west Florida shelf (i.e. between 25.75°N and 28.25°N) during the boreal Summer and Fall months (i.e. July to December) due to the availability of valid cloud-free matchups. Even though the predictive values of the three algorithms are similar, the statistical measure of success in red tide identification (defined as cell counts in excess of 1.5 × 10⁴ cells L⁻¹) varied considerably (sensitivity—Empirical: 86%; Bio-optical: 77%; Operational: 26%), as did their effectiveness in identifying non-bloom cases (specificity—Empirical: 53%; Bio-optical: 65%; Operational: 84%). As the Operational Method had an elevated frequency of false-negative cases (i.e. presented low accuracy in detecting known red tides), and because of the considerable overlap between the optical characteristics of the red tide and non-bloom population, only the other two algorithms underwent a procedure for further inspecting possible detection improvements. Both optimized versions of the Empirical and Bio-optical algorithms performed similarly, being equally specific and sensitive (~70% for both) and showing low levels of uncertainties (i.e. few cases of false-negatives and false-positives: ~30%)—improved positive predictive values (~60%) were also observed along with good negative predictive values (~80%). PMID:22180667
Simplified analysis about horizontal displacement of deep soil under tunnel excavation
NASA Astrophysics Data System (ADS)
Tian, Xiaoyan; Gu, Shuancheng; Huang, Rongbin
2017-11-01
Most domestic scholars focus on the law of soil settlement caused by subway tunnel excavation; studies on the law of horizontal displacement, however, are lacking, and it is difficult to obtain horizontal displacement data at arbitrary depths in a project. At present, there are many formulas for calculating the settlement of soil layers. Compared with the integral solutions of Mindlin's classical elastic theory, stochastic medium theory, and source-sink theory, the Peck empirical formula is relatively simple and has strong domestic applicability. Considering the incompressibility of the rock and soil mass and based on the principle of plane strain, a calculation formula for the horizontal displacement of the soil along the cross section of the tunnel was derived from the Peck settlement formula. The applicability of the formula is verified by comparison with existing engineering cases, and a simple and rapid analytical method for predicting the horizontal displacement is presented.
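For orientation, the sketch below evaluates the Gaussian (Peck) settlement trough and the commonly used O'Reilly-New relation u(x) = x S(x) / z0 for the horizontal surface displacement, which follows from assuming displacement vectors point toward the tunnel axis. This is a stand-in for the formula derived in the paper, not a reproduction of it.

```python
# Illustrative sketch only: the classic Gaussian (Peck) settlement trough and the
# O'Reilly-New horizontal-displacement relation u(x) = x * S(x) / z0, used here as
# a stand-in for the paper's own derived formula.
import numpy as np

def peck_settlement(x, s_max, i):
    """Surface settlement trough S(x) = S_max * exp(-x^2 / (2 i^2))."""
    return s_max * np.exp(-x ** 2 / (2.0 * i ** 2))

def horizontal_displacement(x, s_max, i, z0):
    """Horizontal surface displacement toward the tunnel centreline."""
    return x * peck_settlement(x, s_max, i) / z0

x = np.linspace(-30, 30, 7)          # transverse distance from centreline, m
s_max, i, z0 = 0.02, 10.0, 20.0      # hypothetical trough parameters and axis depth
print(horizontal_displacement(x, s_max, i, z0))
```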
An empirical relationship for homogenization in single-phase binary alloy systems
NASA Technical Reports Server (NTRS)
Unnam, J.; Tenney, D. R.; Stein, B. A.
1979-01-01
A semiempirical formula is developed for describing the extent of interaction between constituents in single-phase binary alloy systems with planar, cylindrical, or spherical interfaces. The formula contains two parameters that are functions of mean concentration and interface geometry of the couple. The empirical solution is simple, easy to use, and does not involve sequential calculations, thereby allowing quick estimation of the extent of interactions without lengthy calculations. Results obtained with this formula are in good agreement with those from a finite-difference analysis.
Malthusian dynamics in a diverging Europe: Northern Italy, 1650-1881.
Fernihough, Alan
2013-02-01
Recent empirical research questions the validity of using Malthusian theory in preindustrial England. Using real wage and vital rate data for the years 1650-1881, I provide empirical estimates for a different region: Northern Italy. The empirical methodology is theoretically underpinned by a simple Malthusian model, in which population, real wages, and vital rates are determined endogenously. My findings strongly support the existence of a Malthusian economy wherein population growth decreased living standards, which in turn influenced vital rates. However, these results also demonstrate how the system is best characterized as one of weak homeostasis. Furthermore, there is no evidence of Boserupian effects given that increases in population failed to spur any sustained technological progress.
Generalized emission functions for photon emission from quark-gluon plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suryanarayana, S. V.
The Landau-Pomeranchuk-Migdal effects on photon emission from the quark-gluon plasma have been studied as a function of photon mass, at a fixed temperature of the plasma. The integral equations for the transverse vector function f̃(p̃⊥) and the longitudinal function g̃(p̃⊥), consisting of multiple scattering effects, are solved by the self-consistent iterations method and also by the variational method for the variable set {p0, q0, Q²}. We considered the bremsstrahlung and the off-shell annihilation (aws) processes. We define two new dynamical scaling variables, x_T and x_L, for the bremsstrahlung and aws processes, which are functions of the variables p0, q0 and Q². We define four new emission functions for massive photon emission, denoted g_T^b, g_T^a, g_L^b and g_L^a, and construct them using the exact numerical solutions of the integral equations. These four emission functions have been parametrized by suitable simple empirical fits. Using the empirical emission functions, we calculated the imaginary part of the photon polarization tensor as a function of photon mass and energy.
Beyer, Heiko; Liebe, Ulf
2015-01-01
Empirical research on discrimination is faced with crucial problems stemming from the specific character of its object of study. In democratic societies the communication of prejudices and other forms of discriminatory behavior is considered socially undesirable and depends on situational factors such as whether a situation is considered private or whether a discriminatory consensus can be assumed. Regular surveys thus can only offer a blurred picture of the phenomenon. Even survey experiments intended to decrease social desirability bias (SDB) have so far failed to implement situational variables systematically. This paper introduces three experimental approaches to improve the study of discrimination and other topics subject to social (un-)desirability. First, we argue in favor of cognitive context framing in surveys in order to operationalize the salience of situational norms. Second, factorial surveys offer a way to take situational contexts and substitute behavior into account. And third, choice experiments - a rather new method in sociology - offer a more valid method of measuring behavioral characteristics compared to simple items in surveys. All three approaches - which may be combined - are easy to implement in large-scale surveys. Results of empirical studies demonstrate the fruitfulness of each of these approaches. Copyright © 2014 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Le, Huy; Schmidt, Frank L.; Harter, James K.; Lauver, Kristy J.
2010-01-01
Construct empirical redundancy may be a major problem in organizational research today. In this paper, we explain and empirically illustrate a method for investigating this potential problem. We applied the method to examine the empirical redundancy of job satisfaction (JS) and organizational commitment (OC), two well-established organizational…
NASA Astrophysics Data System (ADS)
Donahue, William; Newhauser, Wayne D.; Ziegler, James F.
2016-09-01
Many different approaches exist to calculate stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV u⁻¹ to 450 MeV u⁻¹ or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The computational speed of the analytical stopping power model was 28% faster than a full-theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely-used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity.
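The paper's six-parameter model is not reproduced here; the sketch below merely contrasts the two generic approaches the abstract mentions, numerical integration of 1/S(E) for range versus a simple closed-form power-law (Bragg-Kleeman-type) fit, using a synthetic stopping-power function.

```python
# Synthetic illustration only: the stopping-power function below is a placeholder,
# not evaluated data, and the power-law fit R = a * E**p is a generic empirical form.
import numpy as np

def stopping_power(E):                     # MeV/cm, toy 1/E-like trend
    return 250.0 / E * np.log(1.0 + 0.05 * E)

def range_numerical(E0, n=20000):
    E = np.linspace(1e-3, E0, n)
    return np.trapz(1.0 / stopping_power(E), E)    # R = integral of dE / S(E)

def fit_power_law(energies):
    ranges = np.array([range_numerical(E) for E in energies])
    p, log_a = np.polyfit(np.log(energies), np.log(ranges), 1)
    return np.exp(log_a), p                        # R ~ a * E**p

energies = np.array([10.0, 50.0, 100.0, 200.0])    # MeV, illustrative grid
a, p = fit_power_law(energies)
print(a * 150.0 ** p, range_numerical(150.0))      # closed form vs integration
```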
Kanematsu, Nobuyuki
2009-03-07
Dose calculation for radiotherapy with protons and heavier ions deals with a large volume of path integrals involving a scattering power of body tissue. This work provides a simple model for such demanding applications. There is an approximate linearity between RMS end-point displacement and range of incident particles in water, empirically found in measurements and detailed calculations. This fact was translated into a simple linear formula, from which the scattering power that is only inversely proportional to the residual range was derived. The simplicity enabled the analytical formulation for ions stopping in water, which was designed to be equivalent with the extended Highland model and agreed with measurements within 2% or 0.02 cm in RMS displacement. The simplicity will also improve the efficiency of numerical path integrals in the presence of heterogeneity.
Linsenmeyer, Katherine; Strymish, Judith; Gupta, Kalpana
2015-12-01
The emergence of multidrug-resistant (MDR) uropathogens is making the treatment of urinary tract infections (UTIs) more challenging. We sought to evaluate the accuracy of empiric therapy for MDR UTIs and the utility of prior culture data in improving the accuracy of the therapy chosen. The electronic health records from three U.S. Department of Veterans Affairs facilities were retrospectively reviewed for the treatments used for MDR UTIs over 4 years. An MDR UTI was defined as an infection caused by a uropathogen resistant to three or more classes of drugs and identified by a clinician to require therapy. Previous data on culture results, antimicrobial use, and outcomes were captured from records from inpatient and outpatient settings. Among 126 patient episodes of MDR UTIs, the choices of empiric therapy against the index pathogen were accurate in 66 (52%) episodes. For the 95 patient episodes for which prior microbiologic data were available, when empiric therapy was concordant with the prior microbiologic data, the rate of accuracy of the treatment against the uropathogen improved from 32% to 76% (odds ratio, 6.9; 95% confidence interval, 2.7 to 17.1; P < 0.001). Genitourinary tract (GU)-directed agents (nitrofurantoin or sulfa agents) were equally as likely as broad-spectrum agents to be accurate (P = 0.3). Choosing an agent concordant with previous microbiologic data significantly increased the chance of accuracy of therapy for MDR UTIs, even if the previous uropathogen was a different species. Also, GU-directed or broad-spectrum therapy choices were equally likely to be accurate. The accuracy of empiric therapy could be improved by the use of these simple rules. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
Cluster analysis of multiple planetary flow regimes
NASA Technical Reports Server (NTRS)
Mo, Kingtse; Ghil, Michael
1988-01-01
A modified cluster analysis method developed for the classification of quasi-stationary events into a few planetary flow regimes and for the examination of transitions between these regimes is described. The method was applied first to a simple deterministic model and then to a 500-mbar data set for Northern Hemisphere (NH), for which cluster analysis was carried out in the subspace of the first seven empirical orthogonal functions (EOFs). Stationary clusters were found in the low-frequency band of more than 10 days, while transient clusters were found in the band-pass frequency window between 2.5 and 6 days. In the low-frequency band, three pairs of clusters determined EOFs 1, 2, and 3, respectively; they exhibited well-known regional features, such as blocking, the Pacific/North American pattern, and wave trains. Both model and low-pass data exhibited strong bimodality.
Biosimulation of Inflammation and Healing in Surgically Injured Vocal Folds
Li, Nicole Y. K.; Vodovotz, Yoram; Hebda, Patricia A.; Abbott, Katherine Verdolini
2010-01-01
Objectives The pathogenesis of vocal fold scarring is complex and remains to be deciphered. The current study is part of research endeavors aimed at applying systems biology approaches to address the complex biological processes involved in the pathogenesis of vocal fold scarring and other lesions affecting the larynx. Methods We developed a computational agent-based model (ABM) to quantitatively characterize multiple cellular and molecular interactions involved in inflammation and healing in vocal fold mucosa after surgical trauma. The ABM was calibrated with empirical data on inflammatory mediators (eg, tumor necrosis factor) and extracellular matrix components (eg, hyaluronan) from published studies on surgical vocal fold injury in the rat population. Results The simulation results reproduced and predicted trajectories seen in the empirical data from the animals. Moreover, the ABM studies suggested that hyaluronan fragments might be the clinical surrogate of tissue damage, a key variable that in these simulations both is enhanced by and further induces inflammation. Conclusions A relatively simple ABM such as the one reported in this study can provide new understanding of laryngeal wound healing and generate working hypotheses for further wet-lab studies. PMID:20583741
A simple model for calculating air pollution within street canyons
NASA Astrophysics Data System (ADS)
Venegas, Laura E.; Mazzeo, Nicolás A.; Dezzutti, Mariana C.
2014-04-01
This paper introduces the Semi-Empirical Urban Street (SEUS) model. SEUS is a simple mathematical model based on the scaling of air pollution concentration inside street canyons employing the emission rate, the width of the canyon, the dispersive velocity scale and the background concentration. The dispersive velocity scale depends on turbulent motions related to wind and traffic. The parameterisations of these turbulent motions include two dimensionless empirical parameters. Functional forms of these parameters have been obtained from full scale data measured in street canyons in four European cities. The sensitivity of the SEUS model is studied analytically. Results show that relative errors in the evaluation of the two dimensionless empirical parameters have less influence on model uncertainties than uncertainties in other input variables. The model estimates NO2 concentrations using a simple photochemistry scheme. SEUS is applied to estimate NOx and NO2 hourly concentrations in an irregular and busy street canyon in the city of Buenos Aires. The statistical evaluation of results shows that there is good agreement between estimated and observed hourly concentrations (e.g. fractional biases are -10.3% for NOx and +7.8% for NO2). The agreement between the estimated and observed values has also been analysed in terms of its dependence on wind speed and direction. The model shows a better performance for wind speeds >2 m s⁻¹ than for lower wind speeds and for leeward situations than for others. No significant discrepancies have been found between the results of the proposed model and those of a widely used operational dispersion model (OSPM), both using the same input information.
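A minimal sketch of the SEUS-type scaling is shown below: street-level concentration equals the emission rate divided by the product of canyon width and a dispersive velocity scale, plus the background. The way the two dimensionless parameters combine wind- and traffic-induced turbulence here is an assumed form, not the parameterisation fitted in the paper.

```python
# Minimal sketch of the SEUS-type scaling; alpha and beta and the combination rule
# below are placeholders, not the fitted parameterisation of the paper.
import numpy as np

def dispersive_velocity(u_roof, n_veh, v_veh, alpha=0.1, beta=0.01):
    """Combine wind-driven and traffic-produced turbulence scales (assumed form)."""
    sigma_wind = alpha * u_roof
    sigma_traffic = beta * np.sqrt(n_veh) * v_veh
    return np.sqrt(sigma_wind ** 2 + sigma_traffic ** 2)

def street_concentration(Q, W, u_roof, n_veh, v_veh, c_background):
    """Q: emission rate per unit street length; W: canyon width."""
    ud = dispersive_velocity(u_roof, n_veh, v_veh)
    return Q / (W * ud) + c_background

# Hypothetical hourly inputs
print(street_concentration(Q=0.05, W=20.0, u_roof=3.0, n_veh=1500, v_veh=8.0,
                           c_background=40.0))
```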
Gui, Jiang; Moore, Jason H.; Williams, Scott M.; Andrews, Peter; Hillege, Hans L.; van der Harst, Pim; Navis, Gerjan; Van Gilst, Wiek H.; Asselbergs, Folkert W.; Gilbert-Diamond, Diane
2013-01-01
We present an extension of the two-class multifactor dimensionality reduction (MDR) algorithm that enables detection and characterization of epistatic SNP-SNP interactions in the context of a quantitative trait. The proposed Quantitative MDR (QMDR) method handles continuous data by modifying MDR’s constructive induction algorithm to use a T-test. QMDR replaces the balanced accuracy metric with a T-test statistic as the score to determine the best interaction model. We used a simulation to identify the empirical distribution of QMDR’s testing score. We then applied QMDR to genetic data from the ongoing prospective Prevention of Renal and Vascular End-Stage Disease (PREVEND) study. PMID:23805232
Identifying and quantifying interactions in a laboratory swarm
NASA Astrophysics Data System (ADS)
Puckett, James; Kelley, Douglas; Ouellette, Nicholas
2013-03-01
Emergent collective behavior, such as in flocks of birds or swarms of bees, is exhibited throughout the animal kingdom. Many models have been developed to describe swarming and flocking behavior using systems of self-propelled particles obeying simple rules or interacting via various potentials. However, due to experimental difficulties and constraints, little empirical data exists for characterizing the exact form of the biological interactions. We study laboratory swarms of flying Chironomus riparius midges, using stereoimaging and particle tracking techniques to record three-dimensional trajectories for all the individuals in the swarm. We describe methods to identify and quantify interactions by examining these trajectories, and report results on interaction magnitude, frequency, and mutuality.
Zipf's law holds for phrases, not words.
Williams, Jake Ryland; Lessard, Paul R; Desu, Suma; Clark, Eric M; Bagrow, James P; Danforth, Christopher M; Dodds, Peter Sheridan
2015-08-11
With Zipf's law being originally and most famously observed for word frequency, it is surprisingly limited in its applicability to human language, holding over no more than three to four orders of magnitude before hitting a clear break in scaling. Here, building on the simple observation that phrases of one or more words comprise the most coherent units of meaning in language, we show empirically that Zipf's law for phrases extends over as many as nine orders of rank magnitude. In doing so, we develop a principled and scalable statistical mechanical method of random text partitioning, which opens up a rich frontier of rigorous text analysis via a rank ordering of mixed length phrases.
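A toy version of the partitioning idea can be written in a few lines: cut the token stream at each word boundary with some probability, count phrase frequencies, and estimate the rank-frequency exponent from a log-log fit. This is a simplified sketch, not the paper's full statistical machinery.

```python
# Simplified sketch of random text partitioning and a rank-frequency exponent fit.
import numpy as np
from collections import Counter

def random_partition(tokens, q=0.5, rng=None):
    rng = rng or np.random.default_rng(0)
    phrases, current = [], [tokens[0]]
    for tok in tokens[1:]:
        if rng.random() < q:                 # cut the phrase at this word boundary
            phrases.append(" ".join(current))
            current = [tok]
        else:
            current.append(tok)
    phrases.append(" ".join(current))
    return phrases

def zipf_exponent(phrases):
    freqs = np.array(sorted(Counter(phrases).values(), reverse=True), dtype=float)
    ranks = np.arange(1, len(freqs) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
    return -slope

tokens = ("the quick brown fox jumps over the lazy dog " * 500).split()
print(zipf_exponent(random_partition(tokens)))
```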
A numerical method for computing unsteady 2-D boundary layer flows
NASA Technical Reports Server (NTRS)
Krainer, Andreas
1988-01-01
A numerical method for computing unsteady two-dimensional boundary layers in incompressible laminar and turbulent flows is described and applied to a single airfoil changing its incidence angle in time. The solution procedure adopts a first order panel method with a simple wake model to solve for the inviscid part of the flow, and an implicit finite difference method for the viscous part of the flow. Both procedures integrate in time in a step-by-step fashion, in the course of which each step involves the solution of the elliptic Laplace equation and the solution of the parabolic boundary layer equations. The Reynolds shear stress term of the boundary layer equations is modeled by an algebraic eddy viscosity closure. The location of transition is predicted by an empirical data correlation originating from Michel. Since transition and turbulence modeling are key factors in the prediction of viscous flows, their accuracy will be of dominant influence to the overall results.
A Mathematical Model of a Simple Amplifier Using a Ferroelectric Transistor
NASA Technical Reports Server (NTRS)
Sayyah, Rana; Hunt, Mitchell; MacLeod, Todd C.; Ho, Fat D.
2009-01-01
This paper presents a mathematical model characterizing the behavior of a simple amplifier using a FeFET. The model is based on empirical data and incorporates several variables that affect the output, including frequency, load resistance, and gate-to-source voltage. Since the amplifier is the basis of many circuit configurations, a mathematical model that describes the behavior of a FeFET-based amplifier will help in the integration of FeFETs into many other circuits.
Development of apparent viscosity test for hot-poured crack sealants.
DOT National Transportation Integrated Search
2008-11-01
Current crack sealant specifications focus on utilizing simple empirical tests such as penetration, resilience, flow, and bonding to cement concrete briquettes (ASTM D3405) to measure the ability of the material to resist cohesive and adhesion ...
Selecting informative subsets of sparse supermatrices increases the chance to find correct trees.
Misof, Bernhard; Meyer, Benjamin; von Reumont, Björn Marcus; Kück, Patrick; Misof, Katharina; Meusemann, Karen
2013-12-03
Character matrices with extensive missing data are frequently used in phylogenomics, with potentially detrimental effects on the accuracy and robustness of tree inference. Therefore, many investigators select taxa and genes with high data coverage. The drawback of these selections is their exclusive reliance on data coverage without consideration of the actual signal in the data, so they might not deliver optimal data matrices in terms of potential phylogenetic signal. In order to circumvent this problem, we have developed a heuristic, implemented in a software package called mare, which (1) assesses the information content of genes in supermatrices using a measure of potential signal combined with data coverage and (2) reduces supermatrices with a simple hill climbing procedure to submatrices with high total information content. We conducted simulation studies using matrices of 50 taxa × 50 genes with heterogeneous phylogenetic signal among genes and data coverage between 10-30%. With these matrices, Maximum Likelihood (ML) tree reconstructions failed to recover correct trees. Selecting a data subset with the proposed approach increased the chance of recovering correct partial trees more than 10-fold. The selection of data subsets with the proposed simple hill climbing procedure performed well whether the information content or just simple presence/absence information of genes was considered. We also applied our approach to an empirical data set addressing questions of vertebrate systematics. With this empirical data set, selecting a data subset with high information content that supports a tree with high average bootstrap support was most successful when the information content of genes was considered. Our analyses of simulated and empirical data demonstrate that sparse supermatrices can be reduced on a formal basis, outperforming the commonly used simple selections of taxa and genes with high data coverage.
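The shape of such a reduction can be sketched as a greedy hill climb that repeatedly drops the taxon or gene whose removal most improves a score combining per-gene information content with data coverage. The scoring function below is a simple stand-in for the mare heuristic, intended only to show the shape of the procedure.

```python
# Greedy hill-climbing reduction of a sparse supermatrix (illustrative scoring rule).
import numpy as np

def score(present, info):
    """present: taxa x genes boolean coverage; info: per-gene information content."""
    if present.size == 0:
        return 0.0
    coverage = present.mean()
    return coverage * (present * info).sum()

def hill_climb(present, info, min_taxa=4, min_genes=4):
    taxa = list(range(present.shape[0]))
    genes = list(range(present.shape[1]))
    best = score(present[np.ix_(taxa, genes)], info[genes])
    improved = True
    while improved and len(taxa) > min_taxa and len(genes) > min_genes:
        improved = False
        for axis, items in (("taxon", taxa), ("gene", genes)):
            for item in list(items):
                trial = [x for x in items if x != item]
                t, g = (trial, genes) if axis == "taxon" else (taxa, trial)
                s = score(present[np.ix_(t, g)], info[g])
                if s > best:                 # accept any improving single removal
                    best, improved = s, True
                    items.remove(item)
    return taxa, genes, best

rng = np.random.default_rng(1)
present = rng.random((50, 50)) < 0.2          # roughly 20% data coverage
info = rng.random(50)                          # per-gene signal proxy
print(hill_climb(present, info)[2])
```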
NASA Astrophysics Data System (ADS)
Knipp, D.; Kilcommons, L. M.; Damas, M. C.
2015-12-01
We have created a simple and user-friendly web application to visualize output from empirical atmospheric models that describe the lower atmosphere and the Space-Atmosphere Interface Region (SAIR). The Atmospheric Model Web Explorer (AtModWeb) is a lightweight, multi-user, Python-driven application which uses standard web technology (jQuery, HTML5, CSS3) to give an in-browser interface that can produce plots of modeled quantities such as temperature and the individual species and total densities of the neutral and ionized upper atmosphere. Output may be displayed as: 1) a contour plot over a map projection, 2) a pseudo-color plot (heatmap) which allows visualization of a variable as a function of two spatial coordinates, or 3) a simple line plot of one spatial coordinate versus any number of desired model output variables. The application is designed around an abstraction of an empirical atmospheric model, essentially treating the model code as a black box, which makes it simple to add additional models without modifying the main body of the application. Currently implemented are the Naval Research Laboratory NRLMSISE-00 model for the neutral atmosphere and the International Reference Ionosphere (IRI). These models are relevant to the low Earth orbit environment and the SAIR. The interface is simple and usable, allowing users (students and experts) to specify time and location, and to choose between historical (i.e. the values for the given date) or manual specification of whichever solar or geomagnetic activity drivers are required by the model. We present a number of use-case examples from research and education: 1) How does atmospheric density between the surface and 1000 km vary with time of day, season and solar cycle? 2) How do ionospheric layers change with the solar cycle? 3) How does the composition of the SAIR vary between day and night at a fixed altitude?
Statistical validity of using ratio variables in human kinetics research.
Liu, Yuanlong; Schutz, Robert W
2003-09-01
The purposes of this study were to investigate the validity of the simple ratio and three alternative deflation models and examine how the variation of the numerator and denominator variables affects the reliability of a ratio variable. A simple ratio and three alternative deflation models were fitted to four empirical data sets, and common criteria were applied to determine the best model for deflation. Intraclass correlation was used to examine the component effect on the reliability of a ratio variable. The results indicate that the validity of a deflation model depends on the statistical characteristics of the particular component variables used, and an optimal deflation model for all ratio variables may not exist. Therefore, it is recommended that different models be fitted to each empirical data set to determine the best deflation model. It was found that the reliability of a simple ratio is affected by the coefficients of variation and the within- and between-trial correlations between the numerator and denominator variables. It was recommended that researchers compute the reliability of the derived ratio scores and not assume that strong reliabilities in the numerator and denominator measures automatically lead to high reliability in the ratio measures.
NASA Astrophysics Data System (ADS)
Fairchild, Dustin P.; Narayanan, Ram M.
2012-06-01
The ability to identify human movements can be an important tool in many different applications such as surveillance, military combat situations, search and rescue operations, and patient monitoring in hospitals. This information can provide soldiers, security personnel, and search and rescue workers with critical knowledge that can be used to potentially save lives and/or avoid a dangerous situation. Most research involving human activity recognition is focused on using the Short-Time Fourier Transform (STFT) as a method of analyzing the micro-Doppler signatures. Because of the time-frequency resolution limitations of the STFT and because Fourier transform-based methods are not well-suited for use with non-stationary and nonlinear signals, we have chosen a different approach. Empirical Mode Decomposition (EMD) has been shown to be a valuable time-frequency method for processing non-stationary and nonlinear data such as micro-Doppler signatures, and EMD readily provides a feature vector that can be utilized for classification. For classification, Support Vector Machines (SVMs) were chosen. SVMs have been widely used as a method of pattern recognition due to their ability to generalize well and also because of their moderately simple implementation. In this paper, we discuss the ability of these methods to accurately identify human movements based on their micro-Doppler signatures obtained from S-band and millimeter-wave radar systems. Comparisons will also be made based on experimental results from each of these radar systems. Furthermore, we will present simulations of micro-Doppler movements for stationary subjects that will enable us to compare our experimental Doppler data to what we would expect from an "ideal" movement.
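A compact sketch of the classification stage is given below: per-IMF energy features extracted with EMD are fed to a support vector machine. It assumes the PyEMD and scikit-learn packages, and the synthetic "signatures" stand in for measured micro-Doppler returns and carry no physical meaning.

```python
# Sketch: per-IMF energy features from EMD fed to an SVM classifier.
import numpy as np
from PyEMD import EMD
from sklearn.svm import SVC

def imf_energy_features(signal, n_imfs=5):
    imfs = EMD()(signal)[:n_imfs]
    feats = [np.log(np.sum(imf ** 2) + 1e-12) for imf in imfs]
    feats += [0.0] * (n_imfs - len(feats))      # pad if fewer IMFs were produced
    return np.array(feats)

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 1024)

def synth(kind):
    # two toy "movement classes" with different modulation rates
    f = 40 if kind else 15
    return (np.sin(2 * np.pi * f * t + 3 * np.sin(2 * np.pi * 2 * t))
            + 0.3 * rng.standard_normal(t.size))

X = np.array([imf_energy_features(synth(k)) for k in [0, 1] * 20])
y = np.array([0, 1] * 20)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```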
Wilson, Lydia J; Newhauser, Wayne D
2015-01-01
State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 minutes. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models. PMID:26040833
Jagetic, Lydia J; Newhauser, Wayne D
2015-06-21
State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 min. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models.
Floristic composition and across-track reflectance gradient in Landsat images over Amazonian forests
NASA Astrophysics Data System (ADS)
Muro, Javier; doninck, Jasper Van; Tuomisto, Hanna; Higgins, Mark A.; Moulatlet, Gabriel M.; Ruokolainen, Kalle
2016-09-01
Remotely sensed image interpretation or classification of tropical forests can be severely hampered by the effects of the bidirectional reflection distribution function (BRDF). Even for narrow swath sensors like Landsat TM/ETM+, the influence of reflectance anisotropy can be sufficiently strong to introduce a cross-track reflectance gradient. If the BRDF could be assumed to be linear for the limited swath of Landsat, it would be possible to remove this gradient during image preprocessing using a simple empirical method. However, the existence of natural gradients in reflectance caused by spatial variation in floristic composition of the forest can restrict the applicability of such simple corrections. Here we use floristic information over Peruvian and Brazilian Amazonia acquired through field surveys, complemented with information from geological maps, to investigate the interaction of real floristic gradients and the effect of reflectance anisotropy on the observed reflectances in Landsat data. In addition, we test the assumption of linearity of the BRDF for a limited swath width, and whether different primary non-inundated forest types are characterized by different magnitudes of the directional reflectance gradient. Our results show that a linear function is adequate to empirically correct for view angle effects, and that the magnitude of the across-track reflectance gradient is independent of floristic composition in the non-inundated forests we studied. This makes a routine correction of view angle effects possible. However, floristic variation complicates the issue, because different forest types have different mean reflectances. This must be taken into account when deriving the correction function in order to avoid eliminating natural gradients.
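Under the linearity assumption, the empirical correction reduces to fitting a straight line of reflectance against across-track position over forest pixels and removing that trend while preserving the scene mean, roughly as sketched below; the variable names and the masking rule are illustrative.

```python
# Sketch of a simple empirical view-angle correction: fit and remove a linear
# across-track trend, keeping the scene mean. Inputs are synthetic placeholders.
import numpy as np

def detrend_across_track(reflectance, xtrack_pos, mask):
    """reflectance, xtrack_pos, mask: 2-D arrays of the same shape."""
    x = xtrack_pos[mask]
    y = reflectance[mask]
    slope, intercept = np.polyfit(x, y, 1)        # linear BRDF gradient estimate
    trend = slope * xtrack_pos + intercept
    return reflectance - trend + y.mean()

rng = np.random.default_rng(3)
xpos = np.tile(np.linspace(-1, 1, 200), (100, 1))      # normalized across-track position
scene = 0.12 + 0.01 * xpos + 0.005 * rng.standard_normal((100, 200))
corrected = detrend_across_track(scene, xpos, mask=np.ones_like(scene, dtype=bool))
print(corrected.std(), scene.std())
```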
Filtration of human EEG recordings from physiological artifacts with empirical mode method
NASA Astrophysics Data System (ADS)
Grubov, Vadim V.; Runnova, Anastasiya E.; Khramova, Marina V.
2017-03-01
In this paper we propose a new method for dealing with noise and physiological artifacts in experimental human EEG recordings. The method is based on the analysis of EEG signals with empirical mode decomposition (Hilbert-Huang transform). We consider noise and physiological artifacts in the EEG as specific oscillatory patterns that cause problems during EEG analysis and that can be detected with additional signals recorded simultaneously with the EEG (ECG, EMG, EOG, etc.). We introduce the algorithm of the method with the following steps: empirical mode decomposition of the EEG signal, identification of the empirical modes containing artifacts, removal of those modes, and reconstruction of the initial EEG signal. We test the method by filtering experimental human EEG signals contaminated with eye-movement artifacts and show its high efficiency.
A Non-parametric Cutout Index for Robust Evaluation of Identified Proteins*
Serang, Oliver; Paulo, Joao; Steen, Hanno; Steen, Judith A.
2013-01-01
This paper proposes a novel, automated method for evaluating sets of proteins identified using mass spectrometry. The remaining peptide-spectrum match score distributions of protein sets are compared to an empirical absent peptide-spectrum match score distribution, and a Bayesian non-parametric method reminiscent of the Dirichlet process is presented to accurately perform this comparison. Thus, for a given protein set, the process computes the likelihood that the proteins identified are correctly identified. First, the method is used to evaluate protein sets chosen using different protein-level false discovery rate (FDR) thresholds, assigning each protein set a likelihood. The protein set assigned the highest likelihood is used to choose a non-arbitrary protein-level FDR threshold. Because the method can be used to evaluate any protein identification strategy (and is not limited to mere comparisons of different FDR thresholds), we subsequently use the method to compare and evaluate multiple simple methods for merging peptide evidence over replicate experiments. The general statistical approach can be applied to other types of data (e.g. RNA sequencing) and generalizes to multivariate problems. PMID:23292186
Forecasting runoff from Pennsylvania landscapes
USDA-ARS?s Scientific Manuscript database
Identifying sites prone to surface runoff has been a cornerstone of conservation and nutrient management programs, relying upon site assessment tools that support strategic, as opposed to operational, decision making. We sought to develop simple, empirical models to represent two highly different me...
Shear in high strength concrete bridge girders : technical report.
DOT National Transportation Integrated Search
2013-04-01
Prestressed Concrete (PC) I-girders are used extensively as the primary superstructure components in Texas highway bridges. A simple semi-empirical equation was developed at the University of Houston (UH) to predict the shear strength of PC I-girde...
Response Functions for Neutron Skyshine Analyses
NASA Astrophysics Data System (ADS)
Gui, Ah Auu
Neutron and associated secondary photon line-beam response functions (LBRFs) for point monodirectional neutron sources and related conical line-beam response functions (CBRFs) for azimuthally symmetric neutron sources are generated using the MCNP Monte Carlo code for use in neutron skyshine analyses employing the internal line-beam and integral conical-beam methods. The LBRFs are evaluated at 14 neutron source energies ranging from 0.01 to 14 MeV and at 18 emission angles from 1 to 170 degrees. The CBRFs are evaluated at 13 neutron source energies in the same energy range and at 13 source polar angles (1 to 89 degrees). The response functions are approximated by a three parameter formula that is continuous in source energy and angle using a double linear interpolation scheme. These response function approximations are available for a source-to-detector range up to 2450 m and for the first time, give dose equivalent responses which are required for modern radiological assessments. For the CBRF, ground correction factors for neutrons and photons are calculated and approximated by empirical formulas for use in air-over-ground neutron skyshine problems with azimuthal symmetry. In addition, a simple correction procedure for humidity effects on the neutron skyshine dose is also proposed. The approximate LBRFs are used with the integral line-beam method to analyze four neutron skyshine problems with simple geometries: (1) an open silo, (2) an infinite wall, (3) a roofless rectangular building, and (4) an infinite air medium. In addition, two simple neutron skyshine problems involving an open source silo are analyzed using the integral conical-beam method. The results obtained using the LBRFs and the CBRFs are then compared with MCNP results and results of previous studies.
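The double linear interpolation over source energy and emission angle can be sketched with a regular-grid interpolator; the tabulated values below are synthetic placeholders, not the fitted three-parameter responses of the study.

```python
# Sketch of double linear interpolation of tabulated line-beam response functions
# over (source energy, emission angle); the table values are synthetic.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

energies = np.array([0.01, 0.1, 1.0, 5.0, 14.0])          # MeV
angles = np.array([1.0, 30.0, 60.0, 90.0, 120.0, 170.0])  # degrees
table = np.exp(-np.add.outer(1.0 / energies, angles / 50.0))  # fake response table

lbrf = RegularGridInterpolator((energies, angles), table, method="linear")
print(lbrf([[2.5, 45.0]]))     # response at an off-grid energy and emission angle
```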
Hoo, Zhe Hui; Curley, Rachael; Campbell, Michael J; Walters, Stephen J; Hind, Daniel; Wildman, Martin J
2016-01-01
Background Preventative inhaled treatments in cystic fibrosis will only be effective in maintaining lung health if used appropriately. An accurate adherence index should therefore reflect treatment effectiveness, but the standard method of reporting adherence, that is, as a percentage of the agreed regimen between clinicians and people with cystic fibrosis, does not account for the appropriateness of the treatment regimen. We describe two different indices of inhaled therapy adherence for adults with cystic fibrosis which take into account effectiveness, that is, “simple” and “sophisticated” normative adherence. Methods to calculate normative adherence Denominator adjustment involves fixing a minimum appropriate value based on the recommended therapy given a person’s characteristics. For simple normative adherence, the denominator is determined by the person’s Pseudomonas status. For sophisticated normative adherence, the denominator is determined by the person’s Pseudomonas status and history of pulmonary exacerbations over the previous year. Numerator adjustment involves capping the daily maximum inhaled therapy use at 100% so that medication overuse does not artificially inflate the adherence level. Three illustrative cases Case A is an example of inhaled therapy under prescription based on Pseudomonas status resulting in lower simple normative adherence compared to unadjusted adherence. Case B is an example of inhaled therapy under-prescription based on previous exacerbation history resulting in lower sophisticated normative adherence compared to unadjusted adherence and simple normative adherence. Case C is an example of nebulizer overuse exaggerating the magnitude of unadjusted adherence. Conclusion Different methods of reporting adherence can result in different magnitudes of adherence. We have proposed two methods of standardizing the calculation of adherence which should better reflect treatment effectiveness. The value of these indices can be tested empirically in clinical trials in which there is careful definition of treatment regimens related to key patient characteristics, alongside accurate measurement of health outcomes. PMID:27284242
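The two adjustments read naturally as a few lines of code: cap each day's use at 100% of the normative regimen and set the denominator from the person's characteristics. The regimen sizes used below (2 versus 4 doses per day, plus 2 for frequent exacerbations) are hypothetical illustrations, not the study's clinical rules.

```python
# Sketch of numerator capping and denominator adjustment for normative adherence.
# Regimen sizes are hypothetical, chosen only to make the arithmetic concrete.
def normative_daily_denominator(pseudomonas_positive, exacerbations_last_year=None):
    doses = 4 if pseudomonas_positive else 2
    if exacerbations_last_year is not None and exacerbations_last_year >= 2:
        doses += 2                      # "sophisticated" adjustment
    return doses

def normative_adherence(daily_doses_taken, pseudomonas_positive,
                        exacerbations_last_year=None):
    denom = normative_daily_denominator(pseudomonas_positive, exacerbations_last_year)
    capped = [min(taken, denom) for taken in daily_doses_taken]   # numerator capping
    return 100.0 * sum(capped) / (denom * len(daily_doses_taken))

week = [2, 2, 6, 0, 2, 2, 2]            # doses actually taken each day
print(normative_adherence(week, pseudomonas_positive=True, exacerbations_last_year=1))
```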
Measurement of thermal conductivity and thermal diffusivity using a thermoelectric module
NASA Astrophysics Data System (ADS)
Beltrán-Pitarch, Braulio; Márquez-García, Lourdes; Min, Gao; García-Cañadas, Jorge
2017-04-01
A proof of concept of using a thermoelectric module to measure both thermal conductivity and thermal diffusivity of bulk disc samples at room temperature is demonstrated. The method involves the calculation of the integral area from an impedance spectrum, which empirically correlates with the thermal properties of the sample through an exponential relationship. This relationship was obtained employing different reference materials. The impedance spectroscopy measurements are performed in a very simple setup, comprising a thermoelectric module which is soldered at its bottom side to a Cu block (heat sink) and thermally connected with the sample at its top side employing thermal grease. Random and systematic errors of the method were calculated for the thermal conductivity (18.6% and 10.9%, respectively) and thermal diffusivity (14.2% and 14.7%, respectively) employing a BCR724 standard reference material. Although the errors are somewhat high, the technique could be useful for screening purposes or high-throughput measurements at its current state. This method establishes a new application for thermoelectric modules as thermal-property sensors. It involves the use of a very simple setup in conjunction with a frequency response analyzer, which provides a low-cost alternative to most of the currently available apparatus on the market. In addition, impedance analyzers are reliable and widespread equipment, which facilitates the sometimes difficult access to thermal conductivity facilities.
Cooling tower plume - model and experiment
NASA Astrophysics Data System (ADS)
Cizek, Jan; Gemperle, Jiri; Strob, Miroslav; Nozicka, Jiri
The paper presents a simple model of the so-called steam plume, which in many cases forms during the operation of the evaporative cooling systems of power plants or large technological units. The model is based on semi-empirical equations that describe the behaviour of a mixture of two gases in the case of a free jet stream. In the conclusion of the paper, a simple experiment is presented through which the results of the designed model will be validated in a subsequent period.
Moderation analysis with missing data in the predictors.
Zhang, Qian; Wang, Lijuan
2017-12-01
The most widely used statistical model for conducting moderation analysis is the moderated multiple regression (MMR) model. In MMR modeling, missing data could pose a challenge, mainly because the interaction term is a product of two or more variables and thus is a nonlinear function of the involved variables. In this study, we consider a simple MMR model, where the effect of the focal predictor X on the outcome Y is moderated by a moderator U. The primary interest is to find ways of estimating and testing the moderation effect with the existence of missing data in X. We mainly focus on cases when X is missing completely at random (MCAR) and missing at random (MAR). Three methods are compared: (a) Normal-distribution-based maximum likelihood estimation (NML); (b) Normal-distribution-based multiple imputation (NMI); and (c) Bayesian estimation (BE). Via simulations, we found that NML and NMI could lead to biased estimates of moderation effects under MAR missingness mechanism. The BE method outperformed NMI and NML for MMR modeling with missing data in the focal predictor, missingness depending on the moderator and/or auxiliary variables, and correctly specified distributions for the focal predictor. In addition, more robust BE methods are needed in terms of the distribution mis-specification problem of the focal predictor. An empirical example was used to illustrate the applications of the methods with a simple sensitivity analysis. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
COUSCOus: improved protein contact prediction using an empirical Bayes covariance estimator.
Rawi, Reda; Mall, Raghvendra; Kunji, Khalid; El Anbari, Mohammed; Aupetit, Michael; Ullah, Ehsan; Bensmail, Halima
2016-12-15
The post-genomic era with its wealth of sequences gave rise to a broad range of protein residue-residue contact detecting methods. Although various coevolution methods such as PSICOV, DCA and plmDCA provide correct contact predictions, they do not completely overlap. Hence, new approaches and improvements of existing methods are needed to motivate further development and progress in the field. We present a new contact detecting method, COUSCOus, by combining the best shrinkage approach, the empirical Bayes covariance estimator and GLasso. Using the original PSICOV benchmark dataset, COUSCOus achieves mean accuracies of 0.74, 0.62 and 0.55 for the top L/10 predicted long, medium and short range contacts, respectively. In addition, COUSCOus attains mean areas under the precision-recall curves of 0.25, 0.29 and 0.30 for long, medium and short contacts and outperforms PSICOV. We also observed that COUSCOus outperforms PSICOV with respect to the Matthews correlation coefficient criterion on the full list of residue contacts. Furthermore, COUSCOus achieves on average 10% more gain in prediction accuracy compared to PSICOV on an independent test set composed of CASP11 protein targets. Finally, we showed that when using a simple random forest meta-classifier, by combining contact detecting techniques and sequence derived features, PSICOV predictions should be replaced by the more accurate COUSCOus predictions. We conclude that the consideration of superior covariance shrinkage approaches will boost several research fields that apply the GLasso procedure, among them the residue-residue contact prediction presented here as well as fields such as gene network reconstruction.
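A sketch of the general shrink-then-GLasso pipeline in the spirit of COUSCOus, using scikit-learn's Ledoit-Wolf shrinkage as a stand-in for the empirical Bayes covariance estimator the authors employ. In the real application the columns would encode alignment positions rather than random toy data.

    # Shrinkage covariance followed by graphical lasso; large off-diagonal
    # precision entries would be read as predicted contacts between positions.
    import numpy as np
    from sklearn.covariance import LedoitWolf, graphical_lasso

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 30))              # samples x features (toy data)

    shrunk_cov = LedoitWolf().fit(X).covariance_
    _, precision = graphical_lasso(shrunk_cov, alpha=0.05)
    print(np.count_nonzero(np.abs(precision) > 1e-6))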
DOT National Transportation Integrated Search
1980-06-01
Volume 5 evaluates empirical methods in tunneling. Empirical methods that avoid the use of an explicit model by relating ground conditions to observed prototype behavior have played a major role in tunnel design. The main objective of this volume is ...
Evaluation of normalization methods in mammalian microRNA-Seq data
Garmire, Lana Xia; Subramaniam, Shankar
2012-01-01
Simple total tag count normalization is inadequate for microRNA sequencing data generated from the next generation sequencing technology. However, so far systematic evaluation of normalization methods on microRNA sequencing data is lacking. We comprehensively evaluate seven commonly used normalization methods including global normalization, Lowess normalization, Trimmed Mean Method (TMM), quantile normalization, scaling normalization, variance stabilization, and invariant method. We assess these methods on two individual experimental data sets with the empirical statistical metrics of mean square error (MSE) and Kolmogorov-Smirnov (K-S) statistic. Additionally, we evaluate the methods with results from quantitative PCR validation. Our results consistently show that Lowess normalization and quantile normalization perform the best, whereas TMM, a method applied to the RNA-Sequencing normalization, performs the worst. The poor performance of TMM normalization is further evidenced by abnormal results from the test of differential expression (DE) of microRNA-Seq data. Comparing with the models used for DE, the choice of normalization method is the primary factor that affects the results of DE. In summary, Lowess normalization and quantile normalization are recommended for normalizing microRNA-Seq data, whereas the TMM method should be used with caution. PMID:22532701
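A compact sketch of the quantile normalization step recommended above, applied to a toy (miRNA x sample) count matrix: each sample's values are replaced by the mean of the sorted values across samples at the same rank. Tie handling and any log transformation are simplified relative to production pipelines.

    import numpy as np
    import pandas as pd

    def quantile_normalize(df: pd.DataFrame) -> pd.DataFrame:
        ranks = df.rank(method="first").astype(int)           # 1-based ranks per sample
        sorted_means = pd.DataFrame(np.sort(df.values, axis=0)).mean(axis=1)
        return ranks.apply(lambda col: col.map(lambda r: sorted_means[r - 1]))

    counts = pd.DataFrame({"s1": [5, 2, 3, 4], "s2": [4, 1, 4, 2], "s3": [3, 4, 6, 8]})
    print(quantile_normalize(counts))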
Dealing with noise and physiological artifacts in human EEG recordings: empirical mode methods
NASA Astrophysics Data System (ADS)
Runnova, Anastasiya E.; Grubov, Vadim V.; Khramova, Marina V.; Hramov, Alexander E.
2017-04-01
In this paper we propose a new method for removing noise and physiological artifacts from human EEG recordings based on empirical mode decomposition (the Hilbert-Huang transform). As physiological artifacts we consider specific oscillatory patterns that cause problems during EEG analysis and can be detected with additional signals recorded simultaneously with EEG (ECG, EMG, EOG, etc.). We describe the algorithm of the proposed method, whose steps include empirical mode decomposition of the EEG signal, selection of the empirical modes containing artifacts, removal of these modes, and reconstruction of the initial EEG signal. We demonstrate the efficiency of the method by filtering a human EEG signal contaminated with eye-movement artifacts.
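A minimal sketch of the decompose / drop / reconstruct idea, assuming the PyEMD package for the decomposition (the paper does not name a software package, and its artifact modes are selected with the help of auxiliary ECG/EMG/EOG signals rather than the crude rule used here).

    import numpy as np
    from PyEMD import EMD            # assumption: PyEMD provides the decomposition

    fs = 250.0
    t = np.arange(0, 4, 1 / fs)
    eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 1 * t)  # toy "EEG"

    imfs = EMD().emd(eeg)                         # empirical mode decomposition
    artifact_idx = {len(imfs) - 1}                # e.g. flag the slowest mode as artifact
    clean = np.sum([imf for i, imf in enumerate(imfs) if i not in artifact_idx], axis=0)
    print(imfs.shape, clean.shape)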
Simple uncertainty propagation for early design phase aircraft sizing
NASA Astrophysics Data System (ADS)
Lenz, Annelise
Many designers and systems analysts are aware of the uncertainty inherent in their aircraft sizing studies; however, few incorporate methods to address and quantify this uncertainty. Many aircraft design studies use semi-empirical predictors based on a historical database and contain uncertainty -- a portion of which can be measured and quantified. In cases where historical information is not available, surrogate models built from higher-fidelity analyses often provide predictors for design studies where the computational cost of directly using the high-fidelity analyses is prohibitive. These surrogate models contain uncertainty, some of which is quantifiable. However, rather than quantifying this uncertainty, many designers merely include a safety factor or design margin in the constraints to account for the variability between the predicted and actual results. This can become problematic if a designer does not estimate the amount of variability correctly, which then can result in either an "over-designed" or "under-designed" aircraft. "Under-designed" and some "over-designed" aircraft will likely require design changes late in the process and will ultimately require more time and money to create; other "over-designed" aircraft concepts may not require design changes, but could end up being more costly than necessary. Including and propagating uncertainty early in the design phase so designers can quantify some of the errors in the predictors could help mitigate the extent of this additional cost. The method proposed here seeks to provide a systematic approach for characterizing a portion of the uncertainties that designers are aware of and propagating it throughout the design process in a procedure that is easy to understand and implement. Using Monte Carlo simulations that sample from quantified distributions will allow a systems analyst to use a carpet plot-like approach to make statements like: "The aircraft is 'P'% likely to weigh 'X' lbs or less, given the uncertainties quantified" without requiring the systems analyst to have substantial knowledge of probabilistic methods. A semi-empirical sizing study of a small single-engine aircraft serves as an example of an initial version of this simple uncertainty propagation. The same approach is also applied to a variable-fidelity concept study using a NASA-developed transonic Hybrid Wing Body aircraft.
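A toy Monte Carlo propagation in the spirit of the "P% likely to weigh X lbs or less" statements described above: sample the quantified uncertainty of a semi-empirical predictor, push it through a sizing relation, and report percentiles. The sizing relation, distributions and numbers below are illustrative assumptions, not the study's model.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000

    payload = 800.0                                   # lb, fixed requirement
    empty_weight_factor = rng.normal(1.45, 0.08, n)   # uncertain empirical predictor
    fuel_fraction = rng.normal(0.18, 0.02, n)         # uncertain fuel fraction

    gross = payload * empty_weight_factor / (1.0 - fuel_fraction)  # toy sizing relation
    for p in (50, 90, 95):
        print(f"{p}% likely to weigh {np.percentile(gross, p):,.0f} lb or less")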
USDA-ARS?s Scientific Manuscript database
Different measurement techniques for aerosol characterization and quantification either directly or indirectly measure different aerosol properties (i.e. count, mass, speciation, etc.). Comparisons and combinations of multiple measurement techniques sampling the same aerosol can provide insight into...
DOT National Transportation Integrated Search
2014-04-01
The Federal Highway Administration's 1995-1997 National Pavement Design Review found that nearly 80 percent of states use the 1972, 1986, or 1993 AASHTO Design Guides. These design guides rely on empirical relationships between paving ma...
U.S. ENVIRONMENTAL PROTECTION AGENCY'S LANDFILL GAS EMISSION MODEL (LANDGEM)
The paper discusses EPA's available software for estimating landfill gas emissions. This software is based on a first-order decomposition rate equation using empirical data from U.S. landfills. The software provides a relatively simple approach to estimating landfill gas emissi...
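A simplified sketch of the first-order decomposition rate equation that underlies this kind of landfill gas estimate: each year's accepted waste decays exponentially and contributes to the current generation rate. LandGEM itself sums finer sub-annual increments, and the parameter values below are illustrative defaults only.

    import math

    k = 0.05          # decay rate constant, 1/yr (illustrative)
    L0 = 170.0        # potential methane generation capacity, m^3 CH4 / Mg waste

    def methane_rate(waste_by_year, t):
        """Methane generation rate (m^3/yr) in year t from prior waste acceptances."""
        return sum(k * L0 * m * math.exp(-k * (t - ti))
                   for ti, m in waste_by_year.items() if ti <= t)

    waste = {year: 50_000.0 for year in range(2000, 2010)}   # Mg/yr accepted
    print(round(methane_rate(waste, 2015)))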
ASSESSMENT OF SPATIAL AUTOCORRELATION IN EMPIRICAL MODELS IN ECOLOGY
Statistically assessing ecological models is inherently difficult because data are autocorrelated and this autocorrelation varies in an unknown fashion. At a simple level, the linking of a single species to a habitat type is a straightforward analysis. With some investigation int...
NASA Astrophysics Data System (ADS)
Park, Joonam; Appiah, Williams Agyei; Byun, Seoungwoo; Jin, Dahee; Ryou, Myung-Hyun; Lee, Yong Min
2017-10-01
To overcome the limitation of simple empirical cycle life models based on only equivalent circuits, we attempt to couple a conventional empirical capacity loss model with Newman's porous composite electrode model, which contains both electrochemical reaction kinetics and material/charge balances. In addition, an electrolyte depletion function is newly introduced to simulate a sudden capacity drop at the end of cycling, which is frequently observed in real lithium-ion batteries (LIBs). When simulated electrochemical properties are compared with experimental data obtained with 20 Ah-level graphite/LiFePO4 LIB cells, our semi-empirical model is sufficiently accurate to predict a voltage profile having a low standard deviation of 0.0035 V, even at 5C. Additionally, our model can provide broad cycle life color maps under different c-rate and depth-of-discharge operating conditions. Thus, this semi-empirical model with an electrolyte depletion function will be a promising platform to predict long-term cycle lives of large-format LIB cells under various operating conditions.
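An illustrative stand-in for the idea of coupling a smooth empirical capacity-loss term with an electrolyte-depletion function that produces a sudden end-of-life drop. Both functional forms and all parameters below are hypothetical, not the paper's model or fitted values.

    import numpy as np

    def capacity_retention(cycles, b=2e-3, z=0.55, n_dep=3000.0, width=150.0):
        gradual = 1.0 - b * cycles ** z                             # empirical fade term
        depletion = 1.0 / (1.0 + np.exp((cycles - n_dep) / width))  # "knee" function
        return gradual * depletion

    n = np.arange(0, 3500, 500)
    print(np.round(capacity_retention(n), 3))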
Ritchie, J Brendan; Carlson, Thomas A
2016-01-01
A fundamental challenge for cognitive neuroscience is characterizing how the primitives of psychological theory are neurally implemented. Attempts to meet this challenge are a manifestation of what Fechner called "inner" psychophysics: the theory of the precise mapping between mental quantities and the brain. In his own time, inner psychophysics remained an unrealized ambition for Fechner. We suggest that, today, multivariate pattern analysis (MVPA), or neural "decoding," methods provide a promising starting point for developing an inner psychophysics. A cornerstone of these methods is the simple linear classifier applied to neural activity in high-dimensional activation spaces. We describe an approach to inner psychophysics based on the shared architecture of linear classifiers and observers under decision boundary models such as signal detection theory. Under this approach, distance from a decision boundary through activation space, as estimated by linear classifiers, can be used to predict reaction time in accordance with signal detection theory and distance-to-bound models of reaction time. Our "neural distance-to-bound" approach is potentially quite general, and simple to implement. Furthermore, our recent work on visual object recognition suggests it is empirically viable. We believe the approach constitutes an important step along the path to an inner psychophysics that links mind, brain, and behavior.
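A sketch of the core idea: train a linear classifier on activation patterns, take each trial's distance from the decision boundary, and relate it to reaction time. The activation patterns and reaction times below are simulated assumptions (built so that larger distances give faster responses), not empirical data.

    import numpy as np
    from sklearn.svm import LinearSVC
    from scipy.stats import pearsonr

    rng = np.random.default_rng(7)
    n, d = 400, 50
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, d)) + 0.8 * y[:, None]       # toy activation patterns

    clf = LinearSVC(dual=False).fit(X, y)
    dist = np.abs(clf.decision_function(X))              # distance to the decision boundary
    rt = 600 - 40 * dist + rng.normal(0, 30, n)          # simulated reaction times

    print(pearsonr(dist, rt))                            # expect a negative correlation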
Process-conditioned bias correction for seasonal forecasting: a case-study with ENSO in Peru
NASA Astrophysics Data System (ADS)
Manzanas, R.; Gutiérrez, J. M.
2018-05-01
This work assesses the suitability of a first simple attempt for process-conditioned bias correction in the context of seasonal forecasting. To do this, we focus on the northwestern part of Peru and bias correct 1- and 4-month lead seasonal predictions of boreal winter (DJF) precipitation from the ECMWF System4 forecasting system for the period 1981-2010. In order to include information about the underlying large-scale circulation which may help to discriminate between precipitation affected by different processes, we introduce here an empirical quantile-quantile mapping method which runs conditioned on the state of the Southern Oscillation Index (SOI), which is accurately predicted by System4 and is known to affect the local climate. Beyond the reduction of model biases, our results show that the SOI-conditioned method yields better ROC skill scores and reliability than the raw model output over the entire region of study, whereas the standard unconditioned implementation provides no added value for any of these metrics. This suggests that conditioning the bias correction on simple but well-simulated large-scale processes relevant to the local climate may be a suitable approach for seasonal forecasting. Yet, further research on the suitability of the application of similar approaches to the one considered here for other regions, seasons and/or variables is needed.
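A sketch of empirical quantile mapping run separately within strata of a large-scale index, which is the spirit of the SOI-conditioned correction described above. The two-state split (positive vs. negative SOI) and the toy data are assumptions; the study's exact stratification may differ.

    import numpy as np

    def quantile_map(model, obs, model_new):
        """Map model_new onto the observed distribution via empirical quantiles."""
        q = np.interp(model_new, np.sort(model), np.linspace(0, 1, len(model)))
        return np.quantile(obs, q)

    rng = np.random.default_rng(3)
    obs = rng.gamma(2.0, 30.0, 300)                 # observed DJF precipitation (toy)
    model = 0.6 * rng.gamma(2.0, 30.0, 300) + 10    # biased model hindcast (toy)
    soi = rng.normal(size=300)                      # SOI value for each season

    corrected = np.empty_like(model)
    for state in (soi >= 0, soi < 0):               # condition the correction on the SOI state
        corrected[state] = quantile_map(model[state], obs[state], model[state])
    print(corrected[:5].round(1))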
NASA Astrophysics Data System (ADS)
Malmi, Lauri; Adawi, Tom; Curmi, Ronald; de Graaff, Erik; Duffy, Gavin; Kautz, Christian; Kinnunen, Päivi; Williams, Bill
2018-03-01
We investigated research processes applied in recent publications in the European Journal of Engineering Education (EJEE), exploring how papers link to theoretical work and how research processes have been designed and reported. We analysed all 155 papers published in EJEE in 2009, 2010 and 2013, classifying the papers using a taxonomy of research processes in engineering education research (EER) (Malmi et al. 2012). The majority of the papers presented either empirical work (59%) or were case reports (27%). Our main findings are as follows: (1) EJEE papers build moderately on a wide selection of theoretical work; (2) a great majority of papers have a clear research strategy, but data analysis methods are mostly simple descriptive statistics or simple/undocumented qualitative research methods; and (3) there are significant shortcomings in reporting research questions, methodology and limitations of studies. Our findings are consistent with and extend analyses of EER papers in other publishing venues; they help to build a clearer picture of the research currently published in EJEE and allow us to make recommendations for consideration by the editorial team of the journal. Our employed procedure also provides a framework that can be applied to monitor future global evolution of this and other EER journals.
An experimental system for spectral line ratio measurements in the TJ-II stellarator.
Zurro, B; Baciero, A; Fontdecaba, J M; Peláez, R; Jiménez-Rey, D
2008-10-01
The chord-integrated emissions of spectral lines have been monitored in the TJ-II stellarator by using a spectral system with time and space scanning capabilities and relative calibration over the entire UV-visible spectral range. This system has been used to study the line ratio of lines of different ionization stages of carbon (C5+ at 5290 Å and C4+ at 2271 Å) for plasma diagnostic purposes. The local emissivity of these ions has been reconstructed, for quasistationary profiles, by means of the inversion Fisher method described previously. The experimental line ratio is being empirically studied and, in parallel, a simple spectroscopic model has been developed to account for that ratio. We are investigating whether the role played by charge exchange processes with neutrals and the existence of non-Maxwellian electrons, intrinsic to Electron Cyclotron Resonance Heating (ECRH), leave any distinguishable mark on this diagnostic method.
Compact 0-complete trees: A new method for searching large files
DOE Office of Scientific and Technical Information (OSTI.GOV)
Orlandic, R.; Pfaltz, J.L.
1988-01-26
In this report, a novel approach to ordered retrieval in very large files is developed. The method employs a B-tree-like search algorithm that is independent of key type or key length because all keys in index blocks are encoded by a 1-byte surrogate. The replacement of actual key sequences by the 1-byte surrogate ensures the maximal possible fan-out and greatly reduces the storage overhead of maintaining access indices. Initially, retrieval in a binary trie structure is developed. With the aid of a fairly complex recurrence relation, the rather scraggly binary trie is transformed into a compact multi-way search tree. The recurrence relation itself is then replaced by an unusually simple search algorithm. Implementation details and empirical performance results are then presented. Reduction of index size by 50%-75% opens up the possibility of replicating system-wide indices for parallel access in distributed databases. 23 figs.
A Genealogical Interpretation of Principal Components Analysis
McVean, Gil
2009-01-01
Principal components analysis, PCA, is a statistical method commonly used in population genetics to identify structure in the distribution of genetic variation across geographical location and ethnic background. However, while the method is often used to inform about historical demographic processes, little is known about the relationship between fundamental demographic parameters and the projection of samples onto the primary axes. Here I show that for SNP data the projection of samples onto the principal components can be obtained directly from considering the average coalescent times between pairs of haploid genomes. The result provides a framework for interpreting PCA projections in terms of underlying processes, including migration, geographical isolation, and admixture. I also demonstrate a link between PCA and Wright's FST and show that SNP ascertainment has a largely simple and predictable effect on the projection of samples. Using examples from human genetics, I discuss the application of these results to empirical data and the implications for inference. PMID:19834557
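A minimal example of the standard population-genetic PCA setup the paper analyses: a mean-centred (samples x SNPs) genotype matrix coded 0/1/2, projected onto its leading principal components. The two toy "populations" and their allele frequencies are assumptions for illustration.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(11)
    n_per_pop, n_snps = 50, 500
    p1 = rng.uniform(0.1, 0.9, n_snps)
    p2 = np.clip(p1 + rng.normal(0, 0.1, n_snps), 0.05, 0.95)   # drifted frequencies

    geno = np.vstack([rng.binomial(2, p1, (n_per_pop, n_snps)),
                      rng.binomial(2, p2, (n_per_pop, n_snps))]).astype(float)
    geno -= geno.mean(axis=0)                                   # centre each SNP

    proj = PCA(n_components=2).fit_transform(geno)              # sample projections
    print(proj[:3], proj[-3:])                                  # populations separate on PC1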
Optimizing the rapid measurement of detection thresholds in infants
Jones, Pete R.; Kalwarowsky, Sarah; Braddick, Oliver J.; Atkinson, Janette; Nardini, Marko
2015-01-01
Accurate measures of perceptual threshold are difficult to obtain in infants. In a clinical context, the challenges are particularly acute because the methods must yield meaningful results quickly and within a single individual. The present work considers how best to maximize speed, accuracy, and reliability when testing infants behaviorally and suggests some simple principles for improving test efficiency. Monte Carlo simulations, together with empirical (visual acuity) data from 65 infants, are used to demonstrate how psychophysical methods developed with adults can produce misleading results when applied to infants. The statistical properties of an effective clinical infant test are characterized, and based on these, it is shown that (a) a reduced (false-positive) guessing rate can greatly increase test efficiency, (b) the ideal threshold to target is often below 50% correct, and (c) simply taking the max correct response can often provide the best measure of an infant's perceptual sensitivity. PMID:26237298
Absolute Measurement of the Refractive Index of Water by a Mode-Locked Laser at 518 nm.
Meng, Zhaopeng; Zhai, Xiaoyu; Wei, Jianguo; Wang, Zhiyang; Wu, Hanzhong
2018-04-09
In this paper, we demonstrate a method using a frequency comb, which can precisely measure the refractive index of water. We have developed a simple system, in which a Michelson interferometer is placed into a quartz-glass container with a low expansion coefficient, and for which compensation of the thermal expansion of the water container is not required. By scanning a mirror on a moving stage, a pair of cross-correlation patterns can be generated. We can obtain the length information via these cross-correlation patterns, with or without water in the container. The refractive index of water can be measured by the resulting lengths. Long-term experimental results show that our method can measure the refractive index of water with a high degree of accuracy: measurement uncertainty at the 10⁻⁵ level has been achieved, compared with the values calculated by the empirical formula.
Techniques for optimal crop selection in a controlled ecological life support system
NASA Technical Reports Server (NTRS)
Mccormack, Ann; Finn, Cory; Dunsky, Betsy
1993-01-01
A Controlled Ecological Life Support System (CELSS) utilizes a plant's natural ability to regenerate air and water while being grown as a food source in a closed life support system. Current plant research is directed toward obtaining quantitative empirical data on the regenerative ability of each species of plant and the system volume and power requirements. Two techniques were adapted to optimize crop species selection while at the same time minimizing the system volume and power requirements. Each allows the level of life support supplied by the plants to be selected, as well as other system parameters. The first technique uses decision analysis in the form of a spreadsheet. The second method, which is used as a comparison with and validation of the first, utilizes standard design optimization techniques. Simple models of plant processes are used in the development of these methods.
Sun, J
1995-09-01
In this paper we discuss the non-parametric estimation of a distribution function based on incomplete data for which the measurement origin of a survival time or the date of enrollment in a study is known only to belong to an interval. Also the survival time of interest itself is observed from a truncated distribution and is known only to lie in an interval. To estimate the distribution function, a simple self-consistency algorithm, a generalization of Turnbull's (1976, Journal of the Royal Statistical Society, Series B 38, 290-295) self-consistency algorithm, is proposed. This method is then used to analyze two AIDS cohort studies, for which direct use of the EM algorithm (Dempster, Laird and Rubin, 1977, Journal of the Royal Statistical Society, Series B 39, 1-38), which is computationally complicated, has previously been the usual method of analysis.
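A minimal self-consistency (Turnbull-type) iteration for plain interval-censored observations, without the truncation generalisation developed in the paper; the candidate mass points are simplified to the set of unique finite interval endpoints rather than the exact Turnbull intervals.

    import numpy as np

    def self_consistency(intervals, n_iter=500):
        pts = np.unique([v for lr in intervals for v in lr if np.isfinite(v)])
        # membership matrix: alpha[i, j] = 1 if candidate point j lies in interval i
        alpha = np.array([[(l <= s <= r) for s in pts] for l, r in intervals], float)
        p = np.full(len(pts), 1.0 / len(pts))
        for _ in range(n_iter):
            w = alpha * p                        # expected contribution of each point
            w /= w.sum(axis=1, keepdims=True)
            p = w.mean(axis=0)                   # self-consistency update
        return pts, p

    data = [(1, 3), (2, 2), (0, 4), (3, 6), (5, 5), (2, 5)]
    pts, p = self_consistency(data)
    print(dict(zip(pts.tolist(), p.round(3).tolist())))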
Minimizing conflicts: A heuristic repair method for constraint-satisfaction and scheduling problems
NASA Technical Reports Server (NTRS)
Minton, Steve; Johnston, Mark; Philips, Andrew; Laird, Phil
1992-01-01
This paper describes a simple heuristic approach to solving large-scale constraint satisfaction and scheduling problems. In this approach one starts with an inconsistent assignment for a set of variables and searches through the space of possible repairs. The search can be guided by a value-ordering heuristic, the min-conflicts heuristic, that attempts to minimize the number of constraint violations after each step. The heuristic can be used with a variety of different search strategies. We demonstrate empirically that on the n-queens problem, a technique based on this approach performs orders of magnitude better than traditional backtracking techniques. We also describe a scheduling application where the approach has been used successfully. A theoretical analysis is presented both to explain why this method works well on certain types of problems and to predict when it is likely to be most effective.
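A compact illustration of the min-conflicts repair idea on the n-queens problem described above: start from a random assignment and repeatedly move a conflicted queen to the value that minimises its conflicts. The restart policy and tie-breaking are simplified relative to the paper's experiments.

    import random

    def conflicts(cols, row, col):
        return sum(c == col or abs(c - col) == abs(r - row)
                   for r, c in enumerate(cols) if r != row)

    def min_conflicts(n, max_steps=100_000):
        cols = [random.randrange(n) for _ in range(n)]   # one queen per row
        for _ in range(max_steps):
            bad = [r for r in range(n) if conflicts(cols, r, cols[r])]
            if not bad:
                return cols
            r = random.choice(bad)
            scores = [conflicts(cols, r, c) for c in range(n)]
            cols[r] = scores.index(min(scores))          # min-conflicts value ordering
        return None

    print(min_conflicts(50))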
Support vector machines-based fault diagnosis for turbo-pump rotor
NASA Astrophysics Data System (ADS)
Yuan, Sheng-Fa; Chu, Fu-Lei
2006-05-01
Most artificial intelligence methods used in fault diagnosis are based on the empirical risk minimisation principle and have poor generalisation when fault samples are few. Support vector machines (SVM) is a new general machine-learning tool based on the structural risk minimisation principle that exhibits good generalisation even when fault samples are few. Fault diagnosis based on SVM is discussed. Since the basic SVM is designed for two-class classification, while most fault diagnosis problems are multi-class cases, a new multi-class SVM classification scheme named the 'one to others' algorithm is presented to solve multi-class recognition problems. It is a binary tree classifier composed of several two-class classifiers organised by fault priority; it is simple, requires little repeated training, and speeds up training and recognition. The effectiveness of the method is verified by application to fault diagnosis for a turbo-pump rotor.
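A sketch of a cascade of two-class SVMs in the spirit of the 'one to others' scheme: classes are peeled off one at a time in a fixed fault-priority order, each stage separating one fault class from the remaining ones. The toy data and the priority order are assumptions; scikit-learn's SVC does the two-class work.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(5)
    X = np.vstack([rng.normal(loc=c, size=(60, 4)) for c in (0.0, 2.5, 5.0)])
    y = np.repeat([0, 1, 2], 60)                   # three fault classes
    priority = [2, 1, 0]                           # assumed fault priority order

    stages = []
    Xr, yr = X, y
    for cls in priority[:-1]:                      # the last class needs no classifier
        clf = SVC(kernel="rbf").fit(Xr, (yr == cls).astype(int))
        stages.append((cls, clf))
        Xr, yr = Xr[yr != cls], yr[yr != cls]      # remove the separated class

    def predict(x):
        for cls, clf in stages:
            if clf.predict(x.reshape(1, -1))[0] == 1:
                return cls
        return priority[-1]

    print(predict(X[0]), predict(X[100]), predict(X[170]))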
Toward a Social Psychophysics of Face Communication.
Jack, Rachael E; Schyns, Philippe G
2017-01-03
As a highly social species, humans are equipped with a powerful tool for social communication-the face. Although seemingly simple, the human face can elicit multiple social perceptions due to the rich variations of its movements, morphology, and complexion. Consequently, identifying precisely what face information elicits different social perceptions is a complex empirical challenge that has largely remained beyond the reach of traditional methods. In the past decade, the emerging field of social psychophysics has developed new methods to address this challenge, with the potential to transfer psychophysical laws of social perception to the digital economy via avatars and social robots. At this exciting juncture, it is timely to review these new methodological developments. In this article, we introduce and review the foundational methodological developments of social psychophysics, present work done in the past decade that has advanced understanding of the face as a tool for social communication, and discuss the major challenges that lie ahead.
Chaudhari, Mangesh I; Muralidharan, Ajay; Pratt, Lawrence R; Rempe, Susan B
2018-02-12
Progress in understanding liquid ethylene carbonate (EC) and propylene carbonate (PC) on the basis of molecular simulation, emphasizing simple models of interatomic forces, is reviewed. Results on the bulk liquids are examined from the perspective of anticipated applications to materials for electrical energy storage devices. Preliminary results on electrochemical double-layer capacitors based on carbon nanotube forests and on model solid-electrolyte interphase (SEI) layers of lithium ion batteries are considered as examples. The basic results discussed suggest that an empirically parameterized, non-polarizable force field can reproduce experimental structural, thermodynamic, and dielectric properties of EC and PC liquids with acceptable accuracy. More sophisticated force fields might include molecular polarizability and Buckingham-model description of inter-atomic overlap repulsions as extensions to Lennard-Jones models of van der Waals interactions. Simple approaches should be similarly successful also for applications to organic molecular ions in EC/PC solutions, but the important case of Li+ deserves special attention because of the particularly strong interactions of that small ion with neighboring solvent molecules. To treat the Li+ ions in liquid EC/PC solutions, we identify interaction models defined by empirically scaled partial charges for ion-solvent interactions. The empirical adjustments use more basic inputs, electronic structure calculations and ab initio molecular dynamics simulations, and also experimental results on Li+ thermodynamics and transport in EC/PC solutions. Application of such models to the mechanism of Li+ transport in glassy SEI models emphasizes the advantage of long time-scale molecular dynamics studies of these non-equilibrium materials.
Understanding hind limb lameness signs in horses using simple rigid body mechanics.
Starke, S D; May, S A; Pfau, T
2015-09-18
Hind limb lameness detection in horses relies on the identification of movement asymmetry which can be based on multiple pelvic landmarks. This study explains the poorly understood relationship between hind limb lameness pointers, related to the tubera coxae and sacrum, based on experimental data in the context of a simple rigid body model. Vertical displacement of tubera coxae and sacrum was quantified experimentally in 107 horses with varying lameness degrees. A geometrical rigid-body model of pelvis movement during lameness was created in Matlab. Several asymmetry measures were calculated and contrasted. Results showed that model predictions for tubera coxae asymmetry during lameness matched experimental observations closely. Asymmetry for sacrum and comparative tubera coxae movement showed a strong association both empirically (R² ≥ 0.92) and theoretically. We did not find empirical or theoretical evidence for a systematic, pronounced adaptation in the pelvic rotation pattern with increasing lameness. The model showed that the overall range of movement between tubera coxae does not allow the appreciation of asymmetry changes beyond mild lameness. When evaluating movement relative to the stride cycle we did find empirical evidence for asymmetry being slightly more visible when comparing tubera coxae amplitudes rather than sacrum amplitudes, although variation exists for mild lameness. In conclusion, the rigidity of the equine pelvis results in tightly linked movement trajectories of different pelvic landmarks. The model allows the explanation of empirical observations in the context of the underlying mechanics, helping the identification of potentially limited assessment choices when evaluating gait. Copyright © 2015 Elsevier Ltd. All rights reserved.
Two-state model based on the block-localized wave function method
NASA Astrophysics Data System (ADS)
Mo, Yirong
2007-06-01
The block-localized wave function (BLW) method is a variant of the ab initio valence bond method but retains the efficiency of molecular orbital methods. It can derive the wave function for a diabatic (resonance) state self-consistently and is available at the Hartree-Fock (HF) and density functional theory (DFT) levels. In this work we present a two-state model based on the BLW method. Although numerous empirical and semiempirical two-state models, such as the Marcus-Hush two-state model, have been proposed to describe a chemical reaction process, the advantage of this BLW-based two-state model is that no empirical parameter is required. Important quantities such as the electronic coupling energy, structural weights of two diabatic states, and excitation energy can be uniquely derived from the energies of two diabatic states and the adiabatic state at the same HF or DFT level. Two simple examples of formamide and thioformamide in the gas phase and aqueous solution were presented and discussed. The solvation of formamide and thioformamide was studied with the combined ab initio quantum mechanical and molecular mechanical Monte Carlo simulations, together with the BLW-DFT calculations and analyses. Due to the favorable solute-solvent electrostatic interaction, the contribution of the ionic resonance structure to the ground state of formamide and thioformamide significantly increases, and for thioformamide the ionic form is even more stable than the covalent form. Thus, thioformamide in aqueous solution is essentially ionic rather than covalent. Although our two-state model in general underestimates the electronic excitation energies, it can predict relative solvatochromic shifts well. For instance, the intense π→π* transition for formamide upon solvation undergoes a redshift of 0.3 eV, compared with the experimental data (0.40-0.5 eV).
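For orientation, the standard 2x2 secular-equation relation (with overlap neglected) shows how the quantities named above follow from two diabatic energies and the adiabatic energy: if E1 and E2 are the diabatic energies and Ea the lower adiabatic energy, then (E1 - Ea)(E2 - Ea) = H12², so H12 = sqrt((E1 - Ea)(E2 - Ea)). Whether this matches the paper's exact working equations is an assumption, and the numbers below are toy values.

    import math
    import numpy as np

    def coupling(E1, E2, Ea):
        # lower root of the 2x2 secular equation with overlap neglected
        return math.sqrt((E1 - Ea) * (E2 - Ea))

    E1, E2, Ea = -0.10, 0.05, -0.15       # diabatic and adiabatic energies (hartree, toy)
    H12 = coupling(E1, E2, Ea)

    evals, evecs = np.linalg.eigh(np.array([[E1, H12], [H12, E2]]))
    weights = evecs[:, 0] ** 2            # structural weights of the two diabatic states
    excitation = evals[1] - evals[0]      # two-state estimate of the excitation energy
    print(round(H12, 3), weights.round(3), round(excitation, 3))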
Variance-based selection may explain general mating patterns in social insects.
Rueppell, Olav; Johnson, Nels; Rychtár, Jan
2008-06-23
Female mating frequency is one of the key parameters of social insect evolution. Several hypotheses have been suggested to explain multiple mating and considerable empirical research has led to conflicting results. Building on several earlier analyses, we present a simple general model that links the number of queen matings to variance in colony performance and this variance to average colony fitness. The model predicts selection for multiple mating if the average colony succeeds in a focal task, and selection for single mating if the average colony fails, irrespective of the proximate mechanism that links genetic diversity to colony fitness. Empirical support comes from interspecific comparisons, e.g. between the bee genera Apis and Bombus, and from data on several ant species, but more comprehensive empirical tests are needed.
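A toy Monte Carlo of the model's core prediction: treat colony fitness as success at a focal task when performance exceeds a threshold, and let additional queen matings reduce the variance of colony performance. Lower variance then raises the success probability when the average colony succeeds and lowers it when the average colony fails. All parameter values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(2)

    def success_prob(mean, n_matings, threshold=0.0, n_col=200_000):
        sd = 1.0 / np.sqrt(n_matings)              # variance shrinks with more matings
        return (rng.normal(mean, sd, n_col) > threshold).mean()

    for mean in (+0.3, -0.3):                      # average colony succeeds / fails
        print(mean, [round(success_prob(mean, m), 3) for m in (1, 2, 5, 10)])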
An Empirical State Error Covariance Matrix Orbit Determination Example
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2015-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. Then it follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether the source is anticipated or not. It is expected that the empirical error covariance matrix will give a better statistical representation of the state error in poorly modeled systems or when sensor performance is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the current problem being studied, a truth model is used that incorporates gravity with spherical, J2 and J4 terms plus a standard exponential-type atmosphere with simple diurnal and random-walk components. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation. These scenarios are: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements, with simple normally distributed measurement errors. The sensors are assumed to have full horizon to horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses of the data involve only total vectors. No investigation of specific orbital elements is undertaken. The total vector analyses will look at the chi-square values of the error in the difference between the estimated state and the true modeled state using both the empirical and theoretical error covariance matrices for each scenario.
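A generic sketch of the basic ingredient described above: a batch weighted-least-squares covariance (HᵀWH)⁻¹ rescaled by the average weighted residual variance, so that unmodeled errors present in the residuals inflate the reported state uncertainty. The paper's exact formulation may differ; matrices and noise levels below are toy assumptions.

    import numpy as np

    rng = np.random.default_rng(9)
    m, n = 120, 4
    H = rng.normal(size=(m, n))                       # measurement partials
    sigma = 0.5
    W = np.eye(m) / sigma**2                          # assumed measurement weights
    x_true = np.array([1.0, -2.0, 0.5, 3.0])
    z = H @ x_true + rng.normal(0, 1.5 * sigma, m)    # actual errors larger than assumed

    P_theory = np.linalg.inv(H.T @ W @ H)             # theoretical covariance
    x_hat = P_theory @ H.T @ W @ z
    resid = z - H @ x_hat
    scale = (resid @ W @ resid) / (m - n)             # average weighted residual variance
    P_empirical = scale * P_theory                    # empirically inflated covariance
    print(round(scale, 2))                            # ~ (1.5)^2 when the model is optimistic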
Empirical evaluation of the market price of risk using the CIR model
NASA Astrophysics Data System (ADS)
Bernaschi, M.; Torosantucci, L.; Uboldi, A.
2007-03-01
We describe a simple but effective method for the estimation of the market price of risk. The basic idea is to compare the results obtained by following two different approaches in the application of the Cox-Ingersoll-Ross (CIR) model. In the first case, we apply the non-linear least squares method to cross-sectional data (i.e., all rates of a single day). In the second case, we consider the short rate obtained by means of the first procedure as a proxy of the real market short rate. Starting from this new proxy, we evaluate the parameters of the CIR model by means of martingale estimation techniques. The estimate of the market price of risk is provided by comparing results obtained with these two techniques, since this approach makes it possible to isolate the market price of risk and evaluate, under the Local Expectations Hypothesis, the risk premium given by the market for different maturities. As a test case, we apply the method to data of the European Fixed Income Market.
Confidence intervals for a difference between lognormal means in cluster randomization trials.
Poirier, Julia; Zou, G Y; Koval, John
2017-04-01
Cluster randomization trials, in which intact social units are randomized to different interventions, have become popular in the last 25 years. Outcomes from these trials in many cases are positively skewed, following approximately lognormal distributions. When inference is focused on the difference between treatment arm arithmetic means, existing confidence interval procedures either make restrictive assumptions or are complex to implement. We approach this problem by assuming log-transformed outcomes from each treatment arm follow a one-way random effects model. The treatment arm means are functions of multiple parameters for which separate confidence intervals are readily available, suggesting that the method of variance estimates recovery may be applied to obtain closed-form confidence intervals. A simulation study showed that this simple approach performs well in small sample sizes in terms of empirical coverage, relatively balanced tail errors, and interval widths as compared to existing methods. The methods are illustrated using data arising from a cluster randomization trial investigating a critical pathway for the treatment of community-acquired pneumonia.
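A sketch of the method of variance estimates recovery (MOVER) for the difference of two lognormal arithmetic means, shown here for the simpler i.i.d. (non-clustered) case; the one-way random-effects extension for clustered data developed in the paper is not reproduced. Each arm's log-scale quantity theta = mu + sigma²/2 gets a CI by combining the t-interval for mu with the chi-square interval for sigma²/2, and the two arms are then combined by MOVER.

    import numpy as np
    from scipy import stats

    def lognormal_mean_ci(x, alpha=0.05):
        logx = np.log(x); n = len(x)
        mu, s2 = logx.mean(), logx.var(ddof=1)
        tq = stats.t.ppf(1 - alpha / 2, n - 1)
        l_mu, u_mu = mu - tq * np.sqrt(s2 / n), mu + tq * np.sqrt(s2 / n)
        l_v = (n - 1) * s2 / stats.chi2.ppf(1 - alpha / 2, n - 1) / 2
        u_v = (n - 1) * s2 / stats.chi2.ppf(alpha / 2, n - 1) / 2
        theta, v = mu + s2 / 2, s2 / 2
        L = theta - np.sqrt((mu - l_mu) ** 2 + (v - l_v) ** 2)   # MOVER for a sum
        U = theta + np.sqrt((u_mu - mu) ** 2 + (u_v - v) ** 2)
        return np.exp(theta), np.exp(L), np.exp(U)

    def mover_difference(x1, x2):
        m1, l1, u1 = lognormal_mean_ci(x1)
        m2, l2, u2 = lognormal_mean_ci(x2)
        d = m1 - m2
        return (d, d - np.sqrt((m1 - l1) ** 2 + (u2 - m2) ** 2),
                   d + np.sqrt((u1 - m1) ** 2 + (m2 - l2) ** 2))

    rng = np.random.default_rng(4)
    print(mover_difference(rng.lognormal(1.0, 0.8, 40), rng.lognormal(0.7, 0.8, 40)))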
Optically-sectioned two-shot structured illumination microscopy with Hilbert-Huang processing.
Patorski, Krzysztof; Trusiak, Maciej; Tkaczyk, Tomasz
2014-04-21
We introduce a fast, simple, adaptive and experimentally robust method for reconstructing background-rejected optically-sectioned images using two-shot structured illumination microscopy. Our innovative data demodulation method needs two grid-illumination images mutually phase shifted by π (half a grid period), but a precise phase displacement between the two frames is not required. Subtracting the frames yields an input pattern with increased grid modulation. The first demodulation stage comprises two-dimensional data processing based on the empirical mode decomposition for the object spatial frequency selection (noise reduction and bias term removal). The second stage consists in calculating a high-contrast image using the two-dimensional spiral Hilbert transform. The effectiveness of our algorithm is compared with results calculated for the same input data using structured-illumination microscopy (SIM) and HiLo methods. The input data were collected for studying highly scattering tissue samples in reflectance mode. Results of our approach compare very favorably with SIM and HiLo techniques.
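A sketch of the demodulation core on synthetic data: subtract the two π-shifted grid frames and take the magnitude of the 2-D analytic-like signal built with a spiral-phase (vortex) transform. The empirical-mode-decomposition pre-filtering stage of the published algorithm is omitted, and the toy frames below are assumptions.

    import numpy as np

    def spiral_hilbert(img):
        ny, nx = img.shape
        v, u = np.meshgrid(np.fft.fftfreq(ny), np.fft.fftfreq(nx), indexing="ij")
        spiral = (u + 1j * v) / np.maximum(np.abs(u + 1j * v), 1e-12)
        spiral[0, 0] = 0.0                                   # leave the DC term out
        return np.fft.ifft2(spiral * np.fft.fft2(img))

    # Two toy structured-illumination frames with a pi grid shift
    y, x = np.mgrid[0:256, 0:256]
    obj = np.exp(-((x - 128) ** 2 + (y - 128) ** 2) / 2000.0)
    grid1 = obj * (1 + 0.8 * np.cos(0.4 * x))
    grid2 = obj * (1 + 0.8 * np.cos(0.4 * x + np.pi))

    diff = grid1 - grid2                                     # doubles the grid modulation
    section = np.sqrt(diff ** 2 + np.abs(spiral_hilbert(diff)) ** 2)   # fringe envelope
    print(np.round(section.max(), 2))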
High fidelity studies of exploding foil initiator bridges, Part 1: Experimental method
NASA Astrophysics Data System (ADS)
Bowden, Mike; Neal, William
2017-01-01
Simulations of high voltage detonators, such as Exploding Bridgewire (EBW) and Exploding Foil Initiators (EFI), have historically been simple, often empirical, one-dimensional models capable of predicting parameters such as current, voltage and in the case of EFIs, flyer velocity. Correspondingly, experimental methods have in general been limited to the same parameters. With the advent of complex, first principles magnetohydrodynamic codes such as ALEGRA and ALE-MHD, it is now possible to simulate these components in three dimensions, predicting a much greater range of parameters than before. A significant improvement in experimental capability was therefore required to ensure these simulations could be adequately validated. In this first paper of a three part study, the experimental method for determining the current, voltage, flyer velocity and multi-dimensional profile of detonator components is presented. This improved capability, along with high fidelity simulations, offer an opportunity to gain a greater understanding of the processes behind the functioning of EBW and EFI detonators.
NASA Astrophysics Data System (ADS)
Klibanov, Michael V.; Kuzhuget, Andrey V.; Golubnichiy, Kirill V.
2016-01-01
A new empirical mathematical model for the Black-Scholes equation is proposed to forecast option prices. This model includes a new interval for the price of the underlying stock as well as new initial and boundary conditions. Conventional notions of maturity time and strike prices are not used. The Black-Scholes equation is solved as a parabolic equation with the reversed time, which is an ill-posed problem. Thus, a regularization method is used to solve it. To verify the validity of our model, real market data for 368 randomly selected liquid options are used. A new trading strategy is proposed. Our results indicate that our method is profitable on those options. Furthermore, it is shown that the performance of two simple extrapolation-based techniques is much worse. We conjecture that our method might lead to significant profits for those financial institutions that trade large amounts of options. We caution, however, that further studies are necessary to verify this conjecture.
Cu-Au Alloys Using Monte Carlo Simulations and the BFS Method for Alloys
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo; Good, Brian; Ferrante, John
1996-01-01
Semi-empirical methods have shown considerable promise in aiding in the calculation of many properties of materials. Materials used in engineering applications have defects that occur for various reasons including processing. In this work we present the first application of the BFS method for alloys to describe some aspects of microstructure due to processing for the Cu-Au system (Cu-Au, CuAu3, and Cu3Au). We use finite-temperature Monte Carlo calculations in order to show the influence of 'heat treatment' on the low-temperature phase of the alloy. Although relatively simple, this system has enough features to serve as a first test of the reliability of the technique. The main questions to be answered in this work relate to the existence of low temperature ordered structures for specific concentrations, for example, the ability to distinguish between rather similar phases for equiatomic alloys (CuAu I and CuAu II, the latter characterized by an antiphase boundary separating two identical phases).
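A generic Metropolis atom-swap Monte Carlo sketch on a 2-D binary lattice with a pair interaction that favours unlike (Cu-Au-like) nearest neighbours, so chemical ordering develops as the simulated 'heat treatment' lowers the temperature. This is a stand-in illustration only; the BFS energy method used in the paper is not reproduced here, and all parameters are assumptions.

    import numpy as np

    rng = np.random.default_rng(8)
    L, J = 20, 0.1                                   # lattice size, eV per unlike bond
    spins = rng.choice([0, 1], size=(L, L))          # 0 = Cu, 1 = Au (random start)

    def site_energy(s, i, j):
        nn = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
        return -J * (nn if s[i, j] == 0 else 4 - nn)  # reward unlike neighbours

    def sweep(s, kT):
        for _ in range(L * L):
            (i1, j1), (i2, j2) = rng.integers(0, L, (2, 2))
            if s[i1, j1] == s[i2, j2]:
                continue                              # swapping like atoms changes nothing
            e_old = site_energy(s, i1, j1) + site_energy(s, i2, j2)
            s[i1, j1], s[i2, j2] = s[i2, j2], s[i1, j1]
            dE = site_energy(s, i1, j1) + site_energy(s, i2, j2) - e_old
            if dE > 0 and rng.random() >= np.exp(-dE / kT):
                s[i1, j1], s[i2, j2] = s[i2, j2], s[i1, j1]   # reject the swap

    for kT in (0.20, 0.10, 0.05, 0.02):              # simulated 'heat treatment' schedule
        for _ in range(200):
            sweep(spins, kT)
    print(spins[:6, :6])                             # ordered (checkerboard-like) domains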
Stochastic Spectral Descent for Discrete Graphical Models
Carlson, David; Hsieh, Ya-Ping; Collins, Edo; ...
2015-12-14
Interest in deep probabilistic graphical models has increased in recent years, due to their state-of-the-art performance on many machine learning applications. Such models are typically trained with the stochastic gradient method, which can take a significant number of iterations to converge. Since the computational cost of gradient estimation is prohibitive even for modestly sized models, training becomes slow and practically usable models are kept small. In this paper we propose a new, largely tuning-free algorithm to address this problem. Our approach derives novel majorization bounds based on the Schatten norm. Intriguingly, the minimizers of these bounds can be interpreted as gradient methods in a non-Euclidean space. We thus propose using a stochastic gradient method in a non-Euclidean space. We provide simple conditions under which our algorithm is guaranteed to converge, and demonstrate empirically that our algorithm leads to dramatically faster training and improved predictive ability compared to stochastic gradient descent for both directed and undirected graphical models.
A simple threshold rule is sufficient to explain sophisticated collective decision-making.
Robinson, Elva J H; Franks, Nigel R; Ellis, Samuel; Okuda, Saki; Marshall, James A R
2011-01-01
Decision-making animals can use slow-but-accurate strategies, such as making multiple comparisons, or opt for simpler, faster strategies to find a 'good enough' option. Social animals make collective decisions about many group behaviours including foraging and migration. The key to the collective choice lies with individual behaviour. We present a case study of a collective decision-making process (house-hunting ants, Temnothorax albipennis), in which a previously proposed decision strategy involved both quality-dependent hesitancy and direct comparisons of nests by scouts. An alternative possible decision strategy is that scouting ants use a very simple quality-dependent threshold rule to decide whether to recruit nest-mates to a new site or search for alternatives. We use analytical and simulation modelling to demonstrate that this simple rule is sufficient to explain empirical patterns from three studies of collective decision-making in ants, and can account parsimoniously for apparent comparison by individuals and apparent hesitancy (recruitment latency) effects, when available nests differ strongly in quality. This highlights the need to carefully design experiments to detect individual comparison. We present empirical data strongly suggesting that best-of-n comparison is not used by individual ants, although individual sequential comparisons are not ruled out. However, by using a simple threshold rule, decision-making groups are able to effectively compare options, without relying on any form of direct comparison of alternatives by individuals. This parsimonious mechanism could promote collective rationality in group decision-making.
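A minimal agent simulation of the quality-threshold rule described above: each scout visits a random nest, recruits if its noisily perceived quality exceeds the scout's own threshold, and otherwise keeps searching. No scout ever compares two nests directly, yet recruitment concentrates on the better nest. Thresholds, noise and nest qualities below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(6)
    nest_quality = np.array([0.45, 0.70])      # poor vs. good nest
    n_scouts, noise = 1000, 0.15

    recruiting_for = []
    for _ in range(n_scouts):
        threshold = rng.normal(0.55, 0.05)     # scout-specific acceptance threshold
        while True:
            nest = rng.integers(len(nest_quality))
            perceived = nest_quality[nest] + rng.normal(0, noise)
            if perceived > threshold:          # accept and start recruiting
                recruiting_for.append(nest)
                break

    counts = np.bincount(recruiting_for, minlength=2)
    print(dict(poor=int(counts[0]), good=int(counts[1])))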
A simple model of bipartite cooperation for ecological and organizational networks.
Saavedra, Serguei; Reed-Tsochas, Felix; Uzzi, Brian
2009-01-22
In theoretical ecology, simple stochastic models that satisfy two basic conditions about the distribution of niche values and feeding ranges have proved successful in reproducing the overall structural properties of real food webs, using species richness and connectance as the only input parameters. Recently, more detailed models have incorporated higher levels of constraint in order to reproduce the actual links observed in real food webs. Here, building on previous stochastic models of consumer-resource interactions between species, we propose a highly parsimonious model that can reproduce the overall bipartite structure of cooperative partner-partner interactions, as exemplified by plant-animal mutualistic networks. Our stochastic model of bipartite cooperation uses simple specialization and interaction rules, and only requires three empirical input parameters. We test the bipartite cooperation model on ten large pollination data sets that have been compiled in the literature, and find that it successfully replicates the degree distribution, nestedness and modularity of the empirical networks. These properties are regarded as key to understanding cooperation in mutualistic networks. We also apply our model to an extensive data set of two classes of company engaged in joint production in the garment industry. Using the same metrics, we find that the network of manufacturer-contractor interactions exhibits similar structural patterns to plant-animal pollination networks. This surprising correspondence between ecological and organizational networks suggests that the simple rules of cooperation that generate bipartite networks may be generic, and could prove relevant in many different domains, ranging from biological systems to human society.
Outcome-Dependent Sampling with Interval-Censored Failure Time Data
Zhou, Qingning; Cai, Jianwen; Zhou, Haibo
2017-01-01
Summary Epidemiologic studies and disease prevention trials often seek to relate an exposure variable to a failure time that suffers from interval-censoring. When the failure rate is low and the time intervals are wide, a large cohort is often required so as to yield reliable precision on the exposure-failure-time relationship. However, large cohort studies with simple random sampling could be prohibitive for investigators with a limited budget, especially when the exposure variables are expensive to obtain. Alternative cost-effective sampling designs and inference procedures are therefore desirable. We propose an outcome-dependent sampling (ODS) design with interval-censored failure time data, where we enrich the observed sample by selectively including certain more informative failure subjects. We develop a novel sieve semiparametric maximum empirical likelihood approach for fitting the proportional hazards model to data from the proposed interval-censoring ODS design. This approach employs the empirical likelihood and sieve methods to deal with the infinite-dimensional nuisance parameters, which greatly reduces the dimensionality of the estimation problem and eases the computation difficulty. The consistency and asymptotic normality of the resulting regression parameter estimator are established. The results from our extensive simulation study show that the proposed design and method works well for practical situations and is more efficient than the alternative designs and competing approaches. An example from the Atherosclerosis Risk in Communities (ARIC) study is provided for illustration. PMID:28771664
[Describe and convince: visual rhetoric of cinematography in medicine].
Panese, Francesco
2009-01-01
The tools of visualisation occupy a central place in medicine. Far from being simple accessories of the gaze, they literally constitute objects of medicine. Such empirical acknowledgement and epistemological position open a vast field of investigation: the visual technologies of medical knowledge. This article studies the development and transformation of medical objects which have made it possible to assess the role of temporality in the epistemology of medicine. It first examines the general problem of the relationships between cinema, the animated image and medicine and, second, the contribution of the German doctor Martin Weiser to medical cinematography as a method. Finally, a typology is sketched out organising the variety of visual technologies of movement from the perspective of the development of specific visual techniques in medicine.
NASA Technical Reports Server (NTRS)
Bates, Kevin R.; Scuseria, Gustavo E.
1998-01-01
Multi-layered round carbon particles (onions) containing tens to hundreds of thousands of atoms form during electron irradiation of graphite. However, theoretical models of large icosahedral fullerenes predict highly faceted shapes for molecules with more than a few hundred atoms. This discrepancy in shape may be explained by the presence of defects during the formation of carbon onions. Here, we use the semi-empirical tight-binding method for carbon to simulate the incorporation of pentagon-heptagon defects onto the surface of large icosahedral fullerenes. We show a simple mechanism that results in energetically competitive derivative structures and a global change in molecular shape from faceted to round. Our results provide a plausible explanation of the apparent discrepancy between experimental observations of round buckyonions and theoretical predictions of faceted icosahedral fullerenes.
DUTIR at TREC 2009: Chemical IR Track
2009-11-01
We set the Dirichlet prior empirically at 1,500 as recommended in [2]. For example, Topic 15, “Betaines for peripheral arterial disease”, is converted into the following Indri query: #combine(betaines for peripheral arterial disease), which produces results rank-equivalent to a simple query
Empirical Studies of Patterning
ERIC Educational Resources Information Center
Pasnak, Robert
2017-01-01
Young children have been taught simple sequences of alternating shapes and colors, referred to as "patterning", for the past half century in the hope that their understanding of pre-algebra and their mathematics achievement would be improved. The evidence that such patterning instruction actually improves children's academic achievement…
Digit reversal in children's writing: a simple theory and its empirical validation.
Fischer, Jean-Paul
2013-06-01
This article presents a simple theory according to which the left-right reversal of single digits by 5- and 6-year-old children is mainly due to the application of an implicit right-writing or -orienting rule. A number of nontrivial predictions can be drawn from this theory. First, left-oriented digits (1, 2, 3, 7, and 9) will be reversed more frequently than the other asymmetrical digits (4, 5, and 6). Second, for some pairs of digits, the correct writing of the preceding digit will statistically predict the reversal of the current digit and vice versa. Third, writing hand will have little effect on the frequency of reversals, and the relative frequencies with which children reverse the asymmetrical digits will be similar regardless of children's preferred writing hand. Fourth, children who reverse the left-oriented digits the most are also those who reverse the other asymmetrical digits the least. An empirical study involving 367 5- and 6-year-olds confirmed these predictions. Copyright © 2013 Elsevier Inc. All rights reserved.
A Simple Principled Approach for Modeling and Understanding Uniform Color Metrics
Smet, Kevin A.G.; Webster, Michael A.; Whitehead, Lorne A.
2016-01-01
An important goal in characterizing human color vision is to order color percepts in a way that captures their similarities and differences. This has resulted in the continuing evolution of “uniform color spaces,” in which the distances within the space represent the perceptual differences between the stimuli. While these metrics are now very successful in predicting how color percepts are scaled, they do so in largely empirical, ad hoc ways, with limited reference to actual mechanisms of color vision. In this article our aim is to instead begin with general and plausible assumptions about color coding, and then develop a model of color appearance that explicitly incorporates them. We show that many of the features of empirically-defined color order systems (such as those of Munsell, Pantone, NCS, and others) as well as many of the basic phenomena of color perception, emerge naturally from fairly simple principles of color information encoding in the visual system and how it can be optimized for the spectral characteristics of the environment. PMID:26974939
Measures against mechanical noise from large wind turbines: A design guide
NASA Astrophysics Data System (ADS)
Ljunggren, Sten; Johansson, Melker
1991-06-01
The noise generated by the machinery of the two Swedish prototypes contains pure tones which are very important with respect to the environmental impact. Results of noise measurements carried out at these turbines are discussed and are intended to serve as a guide for predicting and controlling the noise around a large wind turbine during the design stage. The design targets are discussed, stressing the importance of the audibility of pure tones and not only the annoyance; a simple criterion is cited. The main noise source is the gearbox, and a simple empirical expression for the sound power level is shown to give good agreement with the measurement results. The influence of the gearbox design on the noise is discussed in some detail. Formulas for the prediction of the airborne sound transmission to the ground outside the nacelle are presented, together with a number of empirical data on the sound reduction indices for single and double constructions. The structure-borne noise transmission is discussed.
Risko, Evan F.; Laidlaw, Kaitlin E. W.; Freeth, Megan; Foulsham, Tom; Kingstone, Alan
2012-01-01
Cognitive neuroscientists often study social cognition by using simple but socially relevant stimuli, such as schematic faces or images of other people. Whilst this research is valuable, important aspects of genuine social encounters are absent from these studies, a fact that has recently drawn criticism. In the present review we argue for an empirical approach to the determination of the equivalence of different social stimuli. This approach involves the systematic comparison of different types of social stimuli ranging in their approximation to a real social interaction. In garnering support for this cognitive ethological approach, we focus on recent research in social attention that has involved stimuli ranging from simple schematic faces to real social interactions. We highlight both meaningful similarities and differences in various social attentional phenomena across these different types of social stimuli thus validating the utility of the research initiative. Furthermore, we argue that exploring these similarities and differences will provide new insights into social cognition and social neuroscience. PMID:22654747
How rational should bioethics be? The value of empirical approaches.
Alvarez, A A
2001-10-01
Rational justification of claims with empirical content calls for empirical and not only normative philosophical investigation. Empirical approaches to bioethics are epistemically valuable, i.e., such methods may be necessary in providing and verifying basic knowledge about cultural values and norms. Our assumptions in moral reasoning can be verified or corrected using these methods. Moral arguments can be initiated or adjudicated by data drawn from empirical investigation. One may argue that individualistic informed consent, for example, is not compatible with the Asian communitarian orientation. But this normative claim uses an empirical assumption that may be contrary to the fact that some Asians do value and argue for informed consent. Is it necessary and factual to neatly characterize some cultures as individualistic and some as communitarian? Empirical investigation can provide a reasonable way to inform such generalizations. In a multi-cultural context, such as in the Philippines, there is a need to investigate the nature of the local ethos before making any appeal to authenticity. Otherwise we may succumb to the same ethical imperialism we are trying hard to resist. Normative claims that involve empirical premises cannot be reasonably verified or evaluated without utilizing empirical methods along with philosophical reflection. The integration of empirical methods into the standard normative approach to moral reasoning should be reasonably guided by the epistemic demands of claims arising from cross-cultural discourse in bioethics.
Rotation of a spheroid in a Couette flow at moderate Reynolds numbers.
Yu, Zhaosheng; Phan-Thien, Nhan; Tanner, Roger I
2007-08-01
The rotation of a single spheroid in a planar Couette flow as a model for simple shear flow is numerically simulated with the distributed Lagrangian multiplier based fictitious domain method. The study is focused on the effects of inertia on the orbital behavior of prolate and oblate spheroids. The numerical orbits are found to be well described by a simple empirical model, which states that the rate of the spheroid rotation about the vorticity axis is a sinusoidal function of the corresponding projection angle in the flow-gradient plane, and that the exponential growth rate of the orbit function is a constant. The following transitions in the steady state with increasing Reynolds number are identified: Jeffery orbit, tumbling, quasi-Jeffery orbit, log rolling, and inclined rolling for a prolate spheroid; and Jeffery orbit, log rolling, inclined rolling, and motionless state for an oblate spheroid. In addition, it is shown that the orbit behavior is sensitive to the initial orientation in the case of strong inertia and there exist different steady states for certain shear Reynolds number regimes.
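For background, the zero-inertia reference against which these transitions are described is Jeffery's classical orbit solution; for a spheroid of equivalent aspect ratio $r_e$ in simple shear of rate $\dot\gamma$, the standard result (not the authors' empirical fit) for the projection angle $\phi$ in the flow-gradient plane is

$$ \dot\phi = \frac{\dot\gamma}{r_e^{2} + 1}\left(r_e^{2}\cos^{2}\phi + \sin^{2}\phi\right), $$

and inertia manifests itself as a slow drift of the orbit constant that this zero-Reynolds-number solution keeps fixed.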
Empirical Reference Distributions for Networks of Different Size
Smith, Anna; Calder, Catherine A.; Browning, Christopher R.
2016-01-01
Network analysis has become an increasingly prevalent research tool across a vast range of scientific fields. Here, we focus on the particular issue of comparing network statistics, i.e. graph-level measures of network structural features, across multiple networks that differ in size. Although “normalized” versions of some network statistics exist, we demonstrate via simulation why direct comparison is often inappropriate. We consider normalizing network statistics relative to a simple fully parameterized reference distribution and demonstrate via simulation how this is an improvement over direct comparison, but still sometimes problematic. We propose a new adjustment method based on a reference distribution constructed as a mixture model of random graphs which reflect the dependence structure exhibited in the observed networks. We show that using simple Bernoulli models as mixture components in this reference distribution can provide adjusted network statistics that are relatively comparable across different network sizes but still describe interesting features of networks, and that this can be accomplished at relatively low computational expense. Finally, we apply this methodology to a collection of ecological networks derived from the Los Angeles Family and Neighborhood Survey activity location data. PMID:27721556
Kumar, D Ashok; Anburajan, M
2014-05-01
Osteoporosis is recognized as a worldwide skeletal disorder problem. In India, the older as well as postmenopausal women population suffering from osteoporotic fractures has been a common issue. Bone mineral density measurements gauged by dual-energy X-ray absorptiometry (DXA) are used in the diagnosis of osteoporosis. (1) To evaluate osteoporosis in south Indian women by the radiogrammetric method in a comparative perspective with DXA. (2) To assess the capability of KJH; Anburajan's Empirical formula in the prediction of total hip bone mineral density (T.BMD) in comparison with estimated Hologic T.BMD. In this cross-sectional design, 56 south Indian women were evaluated. These women were randomly selected from a health camp. Patients with secondary bone diseases were excluded. The standard protocol was followed in acquiring BMD of the right proximal femur by DPX Prodigy (DXA Scanner, GE-Lunar Corp., USA). The measured Lunar total hip BMD was converted into estimated Hologic total hip BMD. In addition, the studied population underwent chest and hip radiographic measurements. The combined cortical thickness of the clavicle was used in KJH; Anburajan's Empirical formula to predict T.BMD, and the prediction was compared with estimated Hologic T.BMD by DXA. The correlation coefficients exhibited high significance. The combined cortical thickness of the clavicle and femur shaft of the total studied population was strongly correlated with DXA femur T.BMD measurements (r = 0.87, P < 0.01 and r = 0.45, P < 0.01), and it also showed a strong correlation in the low bone mass group (r = 0.87, P < 0.01 and r = 0.67, P < 0.01). KJH; Anburajan's Empirical formula shows a significant correlation with estimated Hologic T.BMD (r = 0.88, P < 0.01) in the total studied population. The empirical formula was identified as a better tool for predicting osteoporosis in the total population and the old-aged population, with a sensitivity (88.8 and 95.6 %), specificity (89.6 and 90.9 %), positive predictive value (88.8 and 95.6 %) and negative predictive value (89.6 and 90.9 %), respectively. The results suggest that the combined cortical thickness of the clavicle and femur shaft obtained by the radiogrammetric method is significantly correlated with DXA. Moreover, KJH; Anburajan's Empirical formula is a useful and better index than other simple radiogrammetry measurements in the evaluation of osteoporosis from economical and widely available digital radiographs.
Rapid correction of electron microprobe data for multicomponent metallic systems
NASA Technical Reports Server (NTRS)
Gupta, K. P.; Sivakumar, R.
1973-01-01
This paper describes an empirical relation for the correction of electron microprobe data for multicomponent metallic systems. It evaluates the empirical correction parameter, a, for each element in a binary alloy system using a modification of Colby's MAGIC III computer program and outlines a simple and quick way of correcting the probe data. This technique has been tested on a number of multicomponent metallic systems and the agreement with results obtained using theoretical expressions is found to be excellent. Limitations and suitability of this relation are discussed and a model calculation is also presented in the Appendix.
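For orientation only (the paper's exact definition of the parameter a is not reproduced here), empirical alpha-factor corrections for binary systems are classically written in the Ziebold-Ogilvie/Bence-Albee hyperbolic form

$$ \frac{C_A}{k_A} = a_{AB} + \left(1 - a_{AB}\right) C_A, $$

where $C_A$ is the weight fraction of element A, $k_A$ its measured intensity ratio against the pure-element standard, and $a_{AB}$ the empirical correction factor for A in the A-B binary; multicomponent corrections are then commonly built from the binary factors.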
Probabilistic clustering of rainfall condition for landslide triggering
NASA Astrophysics Data System (ADS)
Rossi, Mauro; Luciani, Silvia; Cesare Mondini, Alessandro; Kirschbaum, Dalia; Valigi, Daniela; Guzzetti, Fausto
2013-04-01
Landslides are widespread natural and man-made phenomena. They are triggered by earthquakes, rapid snow melting, and human activities, but mostly by typhoons and intense or prolonged rainfall. In Italy they are mostly triggered by intense precipitation. The prediction of landslides triggered by rainfall over large areas is commonly based on empirical models. Empirical landslide rainfall thresholds are used to identify rainfall conditions for possible landslide initiation. It is common practice to define rainfall thresholds by assuming a power-law lower boundary in the rainfall intensity-duration or cumulative rainfall-duration space above which landslides can occur. The boundary is defined considering rainfall conditions associated with landslide phenomena using heuristic approaches, and does not consider rainfall events not causing landslides. Here we present a new, fully automatic method to identify the probability of landslide occurrence associated with rainfall conditions characterized by measures of intensity or cumulative rainfall and rainfall duration. The method splits the rainfall events of the past into two groups, a group of events causing landslides and its complement, and then estimates their probabilistic distributions. Next, the probabilistic membership of a new event in one of the two clusters is estimated. The method does not assume a priori any threshold model, but simply exploits the empirical distribution of rainfall events. The approach was applied in the Umbria region, Central Italy, where a catalogue of landslide timing was obtained through the search of chronicles, blogs and other sources of information in the period 2002-2012. The approach was tested using rain gauge measures and satellite rainfall estimates (NASA TRMM-v6), allowing in both cases the identification of the rainfall conditions triggering landslides in the region. Compared to the other existing threshold definition methods, the proposed one (i) largely reduces the subjectivity in the choice of the threshold model and in how it is calculated, and (ii) can be more easily set up in other study areas. The proposed approach can be conveniently integrated in existing early-warning systems to improve the accuracy of the estimation of the real landslide occurrence probability associated with rainfall events and its uncertainty.
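A minimal sketch of the two-cluster probabilistic idea described above, using hypothetical (duration, cumulative rainfall) pairs and independent log-normal densities per cluster; the density choice and the Bayes step are illustrative, not the authors' exact implementation:

import numpy as np
from scipy.stats import norm

# Hypothetical catalogue: (duration [h], cumulative rain [mm]) of past events
trig = np.array([[24.0, 60.0], [48.0, 110.0], [12.0, 45.0], [72.0, 150.0]])   # caused landslides
no_trig = np.array([[6.0, 5.0], [24.0, 15.0], [48.0, 30.0], [12.0, 8.0]])     # did not

def fit(events):
    """Independent normal densities in log10 space for duration and cumulative rainfall."""
    logs = np.log10(events)
    return [norm(m, s) for m, s in zip(logs.mean(axis=0), logs.std(axis=0, ddof=1))]

pdf_t, pdf_n = fit(trig), fit(no_trig)
prior_t = len(trig) / (len(trig) + len(no_trig))   # cluster prior = relative frequency

def landslide_probability(duration_h, rain_mm):
    """Probabilistic membership of a new rainfall event in the triggering cluster."""
    x = np.log10([duration_h, rain_mm])
    like_t = pdf_t[0].pdf(x[0]) * pdf_t[1].pdf(x[1]) * prior_t
    like_n = pdf_n[0].pdf(x[0]) * pdf_n[1].pdf(x[1]) * (1.0 - prior_t)
    return like_t / (like_t + like_n)

print(landslide_probability(36.0, 80.0))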
Low temperature heat capacities and thermodynamic functions described by Debye-Einstein integrals.
Gamsjäger, Ernst; Wiessner, Manfred
2018-01-01
Thermodynamic data of various crystalline solids are assessed from low temperature heat capacity measurements, i.e., from almost absolute zero to 300 K, by means of semi-empirical models. Previous studies frequently present fit functions with a large number of coefficients resulting in almost perfect agreement with experimental data. It is, however, pointed out in this work that special care is required to avoid overfitting. Apart from anomalies like phase transformations, it is likely that data from calorimetric measurements can be fitted by a relatively simple Debye-Einstein integral with sufficient precision. Thereby, reliable values for the heat capacities, standard enthalpies, and standard entropies at T = 298.15 K are obtained. Standard thermodynamic functions of various compounds strongly differing in the number of atoms in the formula unit can be derived from this fitting procedure and are compared to the results of previous fitting procedures. The residuals are of course larger when the Debye-Einstein integral is applied instead of using a high number of fit coefficients or connected splines, but the semi-empirical fit coefficients keep their meaning with respect to physics. It is suggested to use the Debye-Einstein integral fit as a standard method to describe heat capacities in the range between 0 and 300 K so that the derived thermodynamic functions are obtained on the same theory-related semi-empirical basis. Additional fitting is recommended when a precise description for data at ultra-low temperatures (0-20 K) is required.
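A common form of the Debye-Einstein heat-capacity fit referred to above (the number of Einstein terms actually used may differ) is

$$ C_p(T) \approx 9 n_D R \left(\frac{T}{\theta_D}\right)^{3} \int_0^{\theta_D/T} \frac{x^{4} e^{x}}{\left(e^{x}-1\right)^{2}}\,\mathrm{d}x \;+\; \sum_i 3 n_i R \left(\frac{\theta_{E,i}}{T}\right)^{2} \frac{e^{\theta_{E,i}/T}}{\left(e^{\theta_{E,i}/T}-1\right)^{2}}, $$

with $R$ the gas constant, $\theta_D$ and $\theta_{E,i}$ the Debye and Einstein temperatures, and $n_D$, $n_i$ weights that together account for the atoms in the formula unit; standard entropies and enthalpies at 298.15 K then follow by integrating $C_p/T$ and $C_p$.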
Uncertainty plus prior equals rational bias: an intuitive Bayesian probability weighting function.
Fennell, John; Baddeley, Roland
2012-10-01
Empirical research has shown that when making choices based on probabilistic options, people behave as if they overestimate small probabilities, underestimate large probabilities, and treat positive and negative outcomes differently. These distortions have been modeled using a nonlinear probability weighting function, which is found in several nonexpected utility theories, including rank-dependent models and prospect theory; here, we propose a Bayesian approach to the probability weighting function and, with it, a psychological rationale. In the real world, uncertainty is ubiquitous and, accordingly, the optimal strategy is to combine probability statements with prior information using Bayes' rule. First, we show that any reasonable prior on probabilities leads to 2 of the observed effects: overweighting of low probabilities and underweighting of high probabilities. We then investigate 2 plausible kinds of priors: informative priors based on previous experience and uninformative priors of ignorance. Individually, these priors potentially lead to large problems of bias and inefficiency, respectively; however, when combined using Bayesian model comparison methods, both forms of prior can be applied adaptively, gaining the efficiency of empirical priors and the robustness of ignorance priors. We illustrate this for the simple case of generic good and bad options, using Internet blogs to estimate the relevant priors of inference. Given this combined ignorant/informative prior, the Bayesian probability weighting function is not only robust and efficient but also matches all of the major characteristics of the distortions found in empirical research. PsycINFO Database Record (c) 2012 APA, all rights reserved.
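A minimal illustration of the shrinkage idea (not the authors' exact construction): if a stated probability $p$ is treated as evidence equivalent to $n$ hypothetical Bernoulli observations with success fraction $p$ and combined with a symmetric Beta$(a,a)$ prior, the posterior mean is

$$ w(p) = \frac{np + a}{n + 2a}, $$

which lies above $p$ for $p < 1/2$ and below $p$ for $p > 1/2$, reproducing the overweighting of small and the underweighting of large probabilities.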
Empirical measurement and model validation of infrared spectra of contaminated surfaces
NASA Astrophysics Data System (ADS)
Archer, Sean; Gartley, Michael; Kerekes, John; Cosofret, Bogdon; Giblin, Jay
2015-05-01
Liquid-contaminated surfaces generally require more sophisticated radiometric modeling to numerically describe surface properties. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) Model utilizes radiative transfer modeling to generate synthetic imagery. Within DIRSIG, a micro-scale surface property model (microDIRSIG) was used to calculate numerical bidirectional reflectance distribution functions (BRDF) of geometric surfaces with applied concentrations of liquid contamination. Simple cases where the liquid contamination was well described by optical constants on optically flat surfaces were first analytically evaluated by ray tracing and modeled within microDIRSIG. More complex combinations of surface geometry and contaminant application were then incorporated into the micro-scale model. The computed microDIRSIG BRDF outputs were used to describe surface material properties in the encompassing DIRSIG simulation. These DIRSIG-generated outputs were validated with empirical measurements obtained from a Design and Prototypes (D&P) Model 102 FTIR spectrometer. Infrared spectra from the synthetic imagery and the empirical measurements were iteratively compared to identify quantitative spectral similarity between the measured data and modeled outputs. Several spectral angles between the predicted and measured emissivities differed by less than 1 degree. Synthetic radiance spectra produced from the microDIRSIG/DIRSIG combination had an RMS error of 0.21-0.81 watts/(m2-sr-μm) when compared to the D&P measurements. Results from this comparison will facilitate improved methods for identifying spectral features and detecting liquid contamination on a variety of natural surfaces.
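The spectral-angle comparison quoted in the results is the standard spectral angle mapper metric; for a measured spectrum $\mathbf{a}$ and a modeled spectrum $\mathbf{b}$ sampled on the same bands,

$$ \theta = \arccos\!\left(\frac{\mathbf{a}\cdot\mathbf{b}}{\lVert\mathbf{a}\rVert\,\lVert\mathbf{b}\rVert}\right), $$

so the reported sub-degree angles indicate nearly proportional emissivity spectra.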
A commentary on perception-action relationships in spatial display instruments
NASA Technical Reports Server (NTRS)
Shebilske, Wayne L.
1989-01-01
Transfer of information across disciplines is promoted, while basic and applied researchers are cautioned about the danger of assuming simple relationships between stimulus information, perceptual impressions, and performance, including pattern recognition and sensorimotor skills. A theoretical and empirical foundation was developed for predicting those relationships.
ERIC Educational Resources Information Center
Trostel, Philip; Walker, Ian
2006-01-01
This paper examines the relationship between the incentives to work and to invest in human capital through education in a lifecycle optimizing model. These incentives are shown to be mutually reinforcing in a simple stylized model. This theoretical prediction is investigated empirically using three large micro datasets covering a broad range of…
A simple marriage model for the power-law behaviour in the frequency distributions of family names
NASA Astrophysics Data System (ADS)
Wu, Hao-Yun; Chou, Chung-I.; Tseng, Jie-Jun
2011-01-01
In many countries, the frequency distributions of family names are found to decay as a power law with an exponent ranging from 1.0 to 2.2. In this work, we propose a simple marriage model which can reproduce this power-law behaviour. Our model, based on the evolution of families, consists of the growth of big families and the formation of new families. Preliminary results from the model show that the name distributions are in good agreement with empirical data from Taiwan and Norway.
Herbst, M; Lehmhus, H; Oldenburg, B; Orlowski, C; Ohgke, H
1983-04-01
A simple experimental setup for the production and investigation of bacterially contaminated solid-state aerosols with constant concentration is described. The setup consists mainly of a fluidized-bed particle generator within a modified chamber for formaldehyde disinfection. The specific conditions for producing a defined concentration of particles and microorganisms have to be determined empirically. In a first application, the aerosol sizing performance of an Andersen sampler is investigated. The findings of Andersen (1) are confirmed under our experimental conditions.
Burris, Scott
2017-03-01
Comparative drug and alcohol policy analysis (CPA) is alive and well, and the emergence of robust alternatives to strict prohibition provides exciting research opportunities. As a multidisciplinary practice, however, CPA faces several methodological challenges. This commentary builds on a recent review of CPA by Ritter et al. (2016) to argue that the practice is hampered by a hazy definition of policy that leads to confusion in the specification and measurement of the phenomena being studied. This problem is aided and abetted by the all-too-common omission of theory from the conceptualization and presentation of research. Drawing on experience from the field of public health law research, this commentary suggests a distinction between empirical and non-empirical CPA, a simple taxonomic model of CPA policy-making, mapping, implementation and evaluation studies, a narrower definition of and rationale for "policy" research, a clear standard for measuring policy, and an expedient approach (and renewed commitment) to using theory explicitly in a multi-disciplinary practice. Strengthening CPA is crucial for the practice to have the impact on policy that good research can. Copyright © 2016 Elsevier B.V. All rights reserved.
Nishino, Jo; Kochi, Yuta; Shigemizu, Daichi; Kato, Mamoru; Ikari, Katsunori; Ochi, Hidenori; Noma, Hisashi; Matsui, Kota; Morizono, Takashi; Boroevich, Keith A.; Tsunoda, Tatsuhiko; Matsui, Shigeyuki
2018-01-01
Genome-wide association studies (GWAS) suggest that the genetic architecture of complex diseases consists of unexpectedly numerous variants with small effect sizes. However, the polygenic architectures of many diseases have not been well characterized due to the lack of simple and fast methods for unbiased estimation of the underlying proportion of disease-associated variants and their effect-size distribution. Applying empirical Bayes estimation of semi-parametric hierarchical mixture models to GWAS summary statistics, we confirmed that schizophrenia was extremely polygenic [~40% of independent genome-wide SNPs are risk variants, most within odds ratio (OR = 1.03)], whereas rheumatoid arthritis was less polygenic (~4 to 8% risk variants, a significant portion reaching OR = 1.05 to 1.1). For rheumatoid arthritis, stratified estimations revealed that expression quantitative trait loci in blood explained large genetic variance, and low- and high-frequency derived alleles were prone to be risk and protective, respectively, suggesting a predominance of deleterious-risk and advantageous-protective mutations. Despite genetic correlation, effect-size distributions for schizophrenia and bipolar disorder differed across allele frequency. These analyses distinguished disease polygenic architectures and provided clues for etiological differences in complex diseases. PMID:29740473
Inferring the microscopic surface energy of protein-protein interfaces from mutation data.
Moal, Iain H; Dapkūnas, Justas; Fernández-Recio, Juan
2015-04-01
Mutations at protein-protein recognition sites alter binding strength by altering the chemical nature of the interacting surfaces. We present a simple surface energy model, parameterized with empirical ΔΔG values, yielding mean energies of -48 cal mol(-1) Å(-2) for interactions between hydrophobic surfaces, -51 to -80 cal mol(-1) Å(-2) for surfaces of complementary charge, and 66-83 cal mol(-1) Å(-2) for electrostatically repelling surfaces, relative to the aqueous phase. This places the mean energy of hydrophobic surface burial at -24 cal mol(-1) Å(-2) . Despite neglecting configurational entropy and intramolecular changes, the model correlates with empirical binding free energies of a functionally diverse set of rigid-body interactions (r = 0.66). When used to rerank docking poses, it can place near-native solutions in the top 10 for 37% of the complexes evaluated, and 82% in the top 100. The method shows that hydrophobic burial is the driving force for protein association, accounting for 50-95% of the cohesive energy. The model is available open-source from http://life.bsc.es/pid/web/surface_energy/ and via the CCharpPPI web server http://life.bsc.es/pid/ccharppi/. © 2015 Wiley Periodicals, Inc.
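A minimal sketch of how such a per-area surface-energy model scores an interface, using the mean coefficients quoted above and hypothetical buried-area values; the real model distinguishes many more surface types than these three classes:

# Mean surface-energy coefficients from the abstract, in cal / (mol * A^2)
COEFF = {
    "hydrophobic": -48.0,
    "charge_complementary": -65.0,   # midpoint of the quoted -51 to -80 range
    "charge_repulsive": 75.0,        # midpoint of the quoted 66 to 83 range
}

def interface_energy(buried_areas):
    """Sum of (buried area x coefficient) over contact classes, returned in kcal/mol."""
    total_cal = sum(COEFF[kind] * area for kind, area in buried_areas.items())
    return total_cal / 1000.0

# Hypothetical interface: 500 A^2 hydrophobic and 120 A^2 complementary-charge contact
print(interface_energy({"hydrophobic": 500.0, "charge_complementary": 120.0}))  # -31.8 kcal/mol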
Microbiology of Urinary Tract Infections in Gaborone, Botswana
Renuart, Andrew J.; Goldfarb, David M.; Mokomane, Margaret; Tawanana, Ephraim O.; Narasimhamurthy, Mohan; Steenhoff, Andrew P.; Silverman, Jonathan A.
2013-01-01
Objective The microbiology and epidemiology of UTI pathogens are largely unknown in Botswana, a high prevalence HIV setting. Using laboratory data from the largest referral hospital and a private hospital, we describe the major pathogens causing UTI and their antimicrobial resistance patterns. Methods This retrospective study examined antimicrobial susceptibility data for urine samples collected at Princess Marina Hospital (PMH), Bokamoso Private Hospital (BPH), or one of their affiliated outpatient clinics. A urine sample was included in our dataset if it demonstrated pure growth of a single organism and accompanying antimicrobial susceptibility and subject demographic data were available. Results A total of 744 samples were included. Greater than 10% resistance was observed for amoxicillin, co-trimoxazole, amoxicillin-clavulanate, and ciprofloxacin. Resistance of E. coli isolates to ampicillin and co-trimoxazole was greater than 60% in all settings. HIV status did not significantly impact the microbiology of UTIs, but did impact antimicrobial resistance to co-trimoxazole. Conclusions Data suggests that antimicrobial resistance has already emerged to most oral antibiotics, making empiric management of outpatient UTIs challenging. Ampicillin, co-trimoxazole, and ciprofloxacin should not be used as empiric treatment for UTI in this context. Nitrofurantoin could be used for simple cystitis; aminoglycosides for uncomplicated UTI in inpatients. PMID:23469239
Crawford, E D; Batuello, J T; Snow, P; Gamito, E J; McLeod, D G; Partin, A W; Stone, N; Montie, J; Stock, R; Lynch, J; Brandt, J
2000-05-01
The current study assesses artificial intelligence methods to identify prostate carcinoma patients at low risk for lymph node spread. If patients can be assigned accurately to a low risk group, unnecessary lymph node dissections can be avoided, thereby reducing morbidity and costs. A rule-derivation technology for simple decision-tree analysis was trained and validated using patient data from a large database (4,133 patients) to derive low risk cutoff values for Gleason sum and prostate specific antigen (PSA) level. An empiric analysis was used to derive a low risk cutoff value for clinical TNM stage. These cutoff values then were applied to 2 additional, smaller databases (227 and 330 patients, respectively) from separate institutions. The decision-tree protocol derived cutoff values of < or = 6 for Gleason sum and < or = 10.6 ng/mL for PSA. The empiric analysis yielded a clinical TNM stage low risk cutoff value of < or = T2a. When these cutoff values were applied to the larger database, 44% of patients were classified as being at low risk for lymph node metastases (0.8% false-negative rate). When the same cutoff values were applied to the smaller databases, between 11 and 43% of patients were classified as low risk with a false-negative rate of between 0.0 and 0.7%. The results of the current study indicate that a population of prostate carcinoma patients at low risk for lymph node metastases can be identified accurately using a simple decision algorithm that considers preoperative PSA, Gleason sum, and clinical TNM stage. The risk of lymph node metastases in these patients is < or = 1%; therefore, pelvic lymph node dissection may be avoided safely. The implications of these findings in surgical and nonsurgical treatment are significant.
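The derived rule amounts to a conjunction of the three cutoffs reported above; a minimal sketch, with the clinical stage ordering simplified for illustration:

def low_risk_for_lymph_node_spread(gleason_sum, psa_ng_ml, clinical_stage):
    """Low-risk rule from the abstract: Gleason sum <= 6, PSA <= 10.6 ng/mL, stage <= T2a."""
    stage_order = ["T1a", "T1b", "T1c", "T2a", "T2b", "T2c", "T3a", "T3b", "T4"]
    return (gleason_sum <= 6
            and psa_ng_ml <= 10.6
            and stage_order.index(clinical_stage) <= stage_order.index("T2a"))

print(low_risk_for_lymph_node_spread(6, 8.2, "T1c"))   # True: pelvic dissection may be avoided
print(low_risk_for_lymph_node_spread(7, 8.2, "T2a"))   # False: not classified as low risk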
Hammarström, Anne; Annandale, Ellen
2012-01-01
Background At the same time as there is increasing awareness in medicine of the risks of exaggerating differences between men and women, there is a growing professional movement of ‘gender-specific medicine’ which is directed towards analysing ‘sex’ and ‘gender’ differences. The aim of this article is to empirically explore how the concepts of ‘sex’ and ‘gender’ are used in the new field of ‘gender-specific medicine’, as reflected in two medical journals which are foundational to this relatively new field. Method and Principal Findings The data consist of all articles from the first issue of each journal in 2004 and an issue published three years later (n = 43). In addition, all editorials over this period were included (n = 61). Quantitative and qualitative content analyses were undertaken by the authors. Less than half of the 104 papers used the concepts of ‘sex’ and ‘gender’. Less than 1 in 10 papers attempted any definition of the concepts. Overall, the given definitions were simple, unspecific and created dualisms between men and women. Almost all papers which used the two concepts did so interchangeably, with any possible interplay between ‘sex’ and gender’ referred to only in six of the papers. Conclusion The use of the concepts of ‘sex’ and gender’ in ‘gender-specific medicine’ is conceptually muddled. The simple, dualistic and individualised use of these concepts increases the risk of essentialism and reductivist thinking. It therefore highlights the need to clarify the use of the terms ‘sex’ and ‘gender’ in medical research and to develop more effective ways of conceptualising the interplay between ‘sex’ and ‘gender’ in relation to different diseases. PMID:22529907
Defazio, P; Gamallo, P; González, M; Petrongolo, C
2010-09-16
Four reactions of NH(a1Delta) + H′(2S) are investigated by the quantum mechanical real wavepacket method, taking into account nonadiabatic Renner-Teller (RT) and rovibronic Coriolis couplings between the involved states. We consider depletion (d) to N(2D) + H2(X1Sigmag+), exchange (e) to NH′(a1Delta) + H(2S), quenching (q) to NH(X3Sigma-) + H′(2S), and exchange-quenching (eq) to NH′(X3Sigma-) + H(2S). We extend our RT theory to a general AB + C collision using a geometry-dependent but very simple and empirical RT matrix element. Reaction probabilities, cross sections, and rate constants are presented, and RT results are compared with Born-Oppenheimer (BO), experimental, and semiclassical data. The nonadiabatic couplings open two new channels, (q) and (eq), and increase the (d) and (e) reactivity with respect to the BO one, when NH(a1Delta) is rotationally excited. In this case, the quantum cross sections are larger than the semiclassical ones at low collision energies. The calculated rate constants at 300 K are k(d) = 3.06, k(e) = 3.32, k(q) = 1.44, and k(eq) = 1.70 in 10(-11) cm3 s(-1) compared with the measured values k(d) = (3.2 +/- 1.7), k(q + eq) = (1.7 +/- 0.3), and k(total) = (4.8 +/- 1.7). The theoretical depletion rate is thus in good agreement with the experimental value, but the quenching and total rates are overestimated, because the present RT couplings are too large. This discrepancy is probably due to our simple and empirical RT matrix element.
NASA Astrophysics Data System (ADS)
Grubov, V. V.; Runnova, A. E.; Hramov, A. E.
2018-05-01
A new method for adaptive filtration of experimental EEG signals in humans and for removal of different physiological artifacts has been proposed. The algorithm of the method includes empirical mode decomposition of the EEG, determination of the number of empirical modes that are considered, analysis of the empirical modes and search for modes that contain artifacts, removal of these modes, and reconstruction of the EEG signal. The method was tested on experimental human EEG signals and demonstrated high efficiency in the removal of different types of physiological EEG artifacts.
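A minimal sketch of the mode-rejection step, assuming a hypothetical emd_decompose routine that returns the empirical modes and an artifact reference channel (e.g., EOG) used to flag artifact-bearing modes; the correlation criterion is illustrative, not the paper's exact rule:

import numpy as np

def remove_artifact_modes(eeg, artifact_ref, emd_decompose, corr_threshold=0.5):
    """Decompose the EEG into empirical modes, drop modes correlated with the artifact
    reference, and reconstruct the cleaned signal from the remaining modes."""
    imfs = emd_decompose(eeg)                      # list of intrinsic mode functions
    kept = [imf for imf in imfs
            if abs(np.corrcoef(imf, artifact_ref)[0, 1]) < corr_threshold]
    return np.sum(kept, axis=0)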
NASA Astrophysics Data System (ADS)
Piecuch, C. G.; Huybers, P. J.; Tingley, M.
2016-12-01
Sea level observations from coastal tide gauges are some of the longest instrumental records of the ocean. However, these data can be noisy, biased, and gappy, featuring missing values, and reflecting land motion and local effects. Coping with these issues in a formal manner is a challenging task. Some studies use Bayesian approaches to estimate sea level from tide gauge records, making inference probabilistically. Such methods are typically empirically Bayesian in nature: model parameters are treated as known and assigned point values. But, in reality, parameters are not perfectly known. Empirical Bayes methods thus neglect a potentially important source of uncertainty, and so may overestimate the precision (i.e., underestimate the uncertainty) of sea level estimates. We consider whether empirical Bayes methods underestimate uncertainty in sea level from tide gauge data, comparing to a full Bayes method that treats parameters as unknowns to be solved for along with the sea level field. We develop a hierarchical algorithm that we apply to tide gauge data on the North American northeast coast over 1893-2015. The algorithm is run in full Bayes mode, solving for the sea level process and parameters, and in empirical mode, solving only for the process using fixed parameter values. Error bars on sea level from the empirical method are smaller than from the full Bayes method, and the relative discrepancies increase with time; the 95% credible interval on sea level values from the empirical Bayes method in 1910 and 2010 is 23% and 56% narrower, respectively, than from the full Bayes approach. To evaluate the representativeness of the credible intervals, empirical Bayes and full Bayes methods are applied to corrupted data of a known surrogate field. Using rank histograms to evaluate the solutions, we find that the full Bayes method produces generally reliable error bars, whereas the empirical Bayes method gives too-narrow error bars, such that the 90% credible interval only encompasses 70% of true process values. Results demonstrate that parameter uncertainty is an important source of process uncertainty, and advocate for the fully Bayesian treatment of tide gauge records in ocean circulation and climate studies.
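The narrowing effect can be reproduced in a toy conjugate setting: fixing a variance parameter at its point estimate (the empirical-Bayes flavour) gives a normal credible interval, while integrating the unknown variance out under a noninformative prior (the full-Bayes flavour) gives a wider Student-t interval. This only illustrates the mechanism and is not the tide-gauge model itself:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=3.0, size=15)        # toy "observations"
n, ybar, s = len(y), y.mean(), y.std(ddof=1)

eb = stats.norm.interval(0.95, loc=ybar, scale=s / np.sqrt(n))       # plug-in scale
fb = stats.t.interval(0.95, n - 1, loc=ybar, scale=s / np.sqrt(n))   # scale integrated out

print("empirical-Bayes-style interval width:", eb[1] - eb[0])
print("full-Bayes-style interval width:     ", fb[1] - fb[0])        # wider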
Extensions to the integral line-beam method for gamma-ray skyshine analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shultis, J.K.; Faw, R.E.
1995-08-01
A computationally simple method for estimating gamma-ray skyshine dose rates has been developed on the basis of the line-beam response function. Both Monte Carlo and point-kernel calculations that account for both annihilation and bremsstrahlung were used in the generation of line-beam response functions (LBRF) for gamma-ray energies between 10 and 100 MeV. The LBRF is approximated by a three-parameter formula. By combining results with those obtained in an earlier study for gamma energies below 10 MeV, LBRF values are readily and accurately evaluated for source energies between 0.02 and 100 MeV, for source-to-detector distances between 1 and 3000 m, and beam angles as great as 180 degrees. Tables of the parameters for the approximate LBRF are presented. The new response functions are then applied to three simple skyshine geometries: an open silo, an infinite wall, and a rectangular four-wall building. Results are compared to those of previous calculations and to benchmark measurements. A new approach is introduced to account for overhead shielding of the skyshine source and compared to the simplistic exponential-attenuation method used in earlier studies. The effect of the air-ground interface, usually neglected in gamma skyshine studies, is also examined and an empirical correction factor is introduced. Finally, a revised code based on the improved LBRF approximations and the treatment of the overhead shielding is presented, and results are shown for several benchmark problems.
Infiltration is important to modeling the overland transport of microorganisms in environmental waters. In watershed- and hillslope scale-models, infiltration is commonly described by simple equations relating infiltration rate to soil saturated conductivity and by empirical para...
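Although the abstract is truncated, the class of simple infiltration equations it refers to, relating infiltration rate to saturated conductivity, is typified by the Green-Ampt relation (given here only as an example of that class):

$$ f(t) = K_s\left(1 + \frac{\psi_f\,\Delta\theta}{F(t)}\right), $$

where $K_s$ is the saturated hydraulic conductivity, $\psi_f$ the wetting-front suction head, $\Delta\theta$ the soil moisture deficit, and $F(t)$ the cumulative infiltration.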
Robert L. Geimer; Robert A. Follensbee; Alfred W. Christiansen; James A. Koutsky; George E. Myers
1990-01-01
Currently, thermosetting adhesives are characterized by physical and chemical features such as viscosity, solids content, pH, and molecular distribution, and their reaction in simple gel tests. Synthesis of a new resin for a particular application is usually accompanied by a series of empirical laboratory and plant trials. The purpose of the research outlined in this...
Decision Making and Confidence Given Uncertain Advice
ERIC Educational Resources Information Center
Lee, Michael D.; Dry, Matthew J.
2006-01-01
We study human decision making in a simple forced-choice task that manipulates the frequency and accuracy of available information. Empirically, we find that people make decisions consistent with the advice provided, but that their subjective confidence in their decisions shows 2 interesting properties. First, people's confidence does not depend…
"Molecular Clock" Analogs: A Relative Rates Exercise
ERIC Educational Resources Information Center
Wares, John P.
2008-01-01
Although molecular clock theory is a commonly discussed facet of evolutionary biology, undergraduates are rarely presented with the underlying information of how this theory is examined relative to empirical data. Here a simple contextual exercise is presented that not only provides insight into molecular clocks, but is also a useful exercise for…
Using Signs to Facilitate Vocabulary in Children with Language Delays
ERIC Educational Resources Information Center
Lederer, Susan Hendler; Battaglia, Dana
2015-01-01
The purpose of this article is to explore recommended practices in choosing and using key word signs (i.e., simple single-word gestures for communication) to facilitate first spoken words in hearing children with language delays. Developmental, theoretical, and empirical supports for this practice are discussed. Practical recommendations for…
Comparing an annual and daily time-step model for predicting field-scale phosphorus loss
USDA-ARS's Scientific Manuscript database
Numerous models exist for describing phosphorus (P) losses from agricultural fields. The complexity of these models varies considerably ranging from simple empirically-based annual time-step models to more complex process-based daily time step models. While better accuracy is often assumed with more...
A Positive Stigma for Child Labor?
ERIC Educational Resources Information Center
Patrinos, Harry Anthony; Shafiq, M. Najeeb
2008-01-01
We introduce a simple empirical model that assumes a positive stigma (or norm) towards child labor that is common in some developing countries. We then illustrate our positive stigma model using data from Guatemala. Controlling for several child- and household-level characteristics, we use two instruments for measuring stigma: a child's indigenous…
Three Essays on Estimating Causal Treatment Effects
ERIC Educational Resources Information Center
Deutsch, Jonah
2013-01-01
This dissertation is composed of three distinct chapters, each of which addresses issues of estimating treatment effects. The first chapter empirically tests the Value-Added (VA) model using school lotteries. The second chapter, co-authored with Michael Wood, considers properties of inverse probability weighting (IPW) in simple treatment effect…
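For reference, the inverse probability weighting estimator examined in the second chapter is conventionally written in its Horvitz-Thompson form as

$$ \hat\tau_{\mathrm{IPW}} = \frac{1}{n}\sum_{i=1}^{n}\left[\frac{T_i Y_i}{\hat e(X_i)} - \frac{(1-T_i) Y_i}{1-\hat e(X_i)}\right], $$

where $T_i$ is the treatment indicator, $Y_i$ the outcome, and $\hat e(X_i)$ the estimated propensity score.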
Synthesis and Analysis of Copper Hydroxy Double Salts
ERIC Educational Resources Information Center
Brigandi, Laura M.; Leber, Phyllis A.; Yoder, Claude H.
2005-01-01
A project involving the synthesis of several naturally occurring copper double salts using simple aqueous conditions is reported. The ions present in the compound are analyzed using colorimetric, gravimetric, and gas-analysis techniques appropriate for the first-year laboratory and from the percent composition, the empirical formula of each…
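The final step mentioned, going from percent composition to an empirical formula, is the usual moles-then-smallest-ratio calculation; a small sketch with a hypothetical analysis (the numbers are illustrative, not data from the article):

ATOMIC_MASS = {"Cu": 63.55, "N": 14.01, "O": 16.00, "H": 1.008}

def empirical_formula(percent_by_mass):
    """Convert mass percentages to the smallest whole-number mole ratio."""
    moles = {el: pct / ATOMIC_MASS[el] for el, pct in percent_by_mass.items()}
    smallest = min(moles.values())
    return {el: round(n / smallest, 2) for el, n in moles.items()}

# Hypothetical percent composition; ratios come out near Cu:2, N:1, O:6, H:3,
# i.e. consistent with the basic copper nitrate Cu2(OH)3NO3
print(empirical_formula({"Cu": 52.9, "N": 5.8, "O": 40.0, "H": 1.3}))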
SIMPLE EMPIRICAL RISK RELATIONSHIPS BETWEEN FISH ASSEMBLAGES, HABITAT AND WATER QUALITY IN OHIO
To assess the condition of its streams, fish, habitat and water quality data were collected from 1980 to 1998 by the Ohio Environmental Protection Agency. These data were sorted into 190 time/locations by basin, river mile and year. Eighteen fish community variables and 24 habi...
Field Theory in Cultural Capital Studies of Educational Attainment
ERIC Educational Resources Information Center
Krarup, Troels; Munk, Martin D.
2016-01-01
This article argues that there is a double problem in international research in cultural capital and educational attainment: an empirical problem, since few new insights have been gained within recent years; and a theoretical problem, since cultural capital is seen as a simple hypothesis about certain isolated individual resources, disregarding…
Pieterse, Arwen H; de Vries, Marieke
2013-09-01
Increasingly, patient decision aids and values clarification methods (VCMs) are being developed to support patients in making preference-sensitive health-care decisions. Many VCMs encourage extensive deliberation about options, without solid theoretical or empirical evidence showing that deliberation is advantageous. Research suggests that simple, fast and frugal heuristic decision strategies sometimes result in better judgments and decisions. Durand et al. have developed two fast and frugal heuristic-based VCMs. To critically analyse the suitability of the 'take the best' (TTB) and 'tallying' fast and frugal heuristics in the context of patient decision making. Analysis of the structural similarities between the environments in which the TTB and tallying heuristics have been proven successful and the context of patient decision making and of the potential of these heuristic decision processes to support patient decision making. The specific nature of patient preference-sensitive decision making does not seem to resemble environments in which the TTB and tallying heuristics have proven successful. Encouraging patients to consider less rather than more relevant information potentially even deteriorates their values clarification process. Values clarification methods promoting the use of more intuitive decision strategies may sometimes be more effective. Nevertheless, we strongly recommend further theoretical thinking about the expected value of such heuristics and of other more intuitive decision strategies in this context, as well as empirical assessments of the mechanisms by which inducing such decision strategies may impact the quality and outcome of values clarification. © 2011 John Wiley & Sons Ltd.
Pieterse, Arwen H.; de Vries, Marieke
2011-01-01
Abstract Background Increasingly, patient decision aids and values clarification methods (VCMs) are being developed to support patients in making preference‐sensitive health‐care decisions. Many VCMs encourage extensive deliberation about options, without solid theoretical or empirical evidence showing that deliberation is advantageous. Research suggests that simple, fast and frugal heuristic decision strategies sometimes result in better judgments and decisions. Durand et al. have developed two fast and frugal heuristic‐based VCMs. Objective To critically analyse the suitability of the ‘take the best’ (TTB) and ‘tallying’ fast and frugal heuristics in the context of patient decision making. Strategy Analysis of the structural similarities between the environments in which the TTB and tallying heuristics have been proven successful and the context of patient decision making and of the potential of these heuristic decision processes to support patient decision making. Conclusion The specific nature of patient preference‐sensitive decision making does not seem to resemble environments in which the TTB and tallying heuristics have proven successful. Encouraging patients to consider less rather than more relevant information potentially even deteriorates their values clarification process. Values clarification methods promoting the use of more intuitive decision strategies may sometimes be more effective. Nevertheless, we strongly recommend further theoretical thinking about the expected value of such heuristics and of other more intuitive decision strategies in this context, as well as empirical assessments of the mechanisms by which inducing such decision strategies may impact the quality and outcome of values clarification. PMID:21902770
Sparsity guided empirical wavelet transform for fault diagnosis of rolling element bearings
NASA Astrophysics Data System (ADS)
Wang, Dong; Zhao, Yang; Yi, Cai; Tsui, Kwok-Leung; Lin, Jianhui
2018-02-01
Rolling element bearings are widely used in various industrial machines, such as electric motors, generators, pumps, gearboxes, railway axles, turbines, and helicopter transmissions. Fault diagnosis of rolling element bearings is beneficial to preventing any unexpected accident and reducing economic loss. In the past years, many bearing fault detection methods have been developed. Recently, a new adaptive signal processing method called empirical wavelet transform attracts much attention from readers and engineers and its applications to bearing fault diagnosis have been reported. The main problem of empirical wavelet transform is that Fourier segments required in empirical wavelet transform are strongly dependent on the local maxima of the amplitudes of the Fourier spectrum of a signal, which connotes that Fourier segments are not always reliable and effective if the Fourier spectrum of the signal is complicated and overwhelmed by heavy noises and other strong vibration components. In this paper, sparsity guided empirical wavelet transform is proposed to automatically establish Fourier segments required in empirical wavelet transform for fault diagnosis of rolling element bearings. Industrial bearing fault signals caused by single and multiple railway axle bearing defects are used to verify the effectiveness of the proposed sparsity guided empirical wavelet transform. Results show that the proposed method can automatically discover Fourier segments required in empirical wavelet transform and reveal single and multiple railway axle bearing defects. Besides, some comparisons with three popular signal processing methods including ensemble empirical mode decomposition, the fast kurtogram and the fast spectral correlation are conducted to highlight the superiority of the proposed method.
Weighted Statistical Binning: Enabling Statistically Consistent Genome-Scale Phylogenetic Analyses
Bayzid, Md Shamsuzzoha; Mirarab, Siavash; Boussau, Bastien; Warnow, Tandy
2015-01-01
Because biological processes can result in different loci having different evolutionary histories, species tree estimation requires multiple loci from across multiple genomes. While many processes can result in discord between gene trees and species trees, incomplete lineage sorting (ILS), modeled by the multi-species coalescent, is considered to be a dominant cause for gene tree heterogeneity. Coalescent-based methods have been developed to estimate species trees, many of which operate by combining estimated gene trees, and so are called "summary methods". Because summary methods are generally fast (and much faster than more complicated coalescent-based methods that co-estimate gene trees and species trees), they have become very popular techniques for estimating species trees from multiple loci. However, recent studies have established that summary methods can have reduced accuracy in the presence of gene tree estimation error, and also that many biological datasets have substantial gene tree estimation error, so that summary methods may not be highly accurate in biologically realistic conditions. Mirarab et al. (Science 2014) presented the "statistical binning" technique to improve gene tree estimation in multi-locus analyses, and showed that it improved the accuracy of MP-EST, one of the most popular coalescent-based summary methods. Statistical binning, which uses a simple heuristic to evaluate "combinability" and then uses the larger sets of genes to re-calculate gene trees, has good empirical performance, but using statistical binning within a phylogenomic pipeline does not have the desirable property of being statistically consistent. We show that weighting the re-calculated gene trees by the bin sizes makes statistical binning statistically consistent under the multispecies coalescent, and maintains the good empirical performance. Thus, "weighted statistical binning" enables highly accurate genome-scale species tree estimation, and is also statistically consistent under the multi-species coalescent model. New data used in this study are available at DOI: http://dx.doi.org/10.6084/m9.figshare.1411146, and the software is available at https://github.com/smirarab/binning. PMID:26086579
Comparison of ACCENT 2000 Shuttle Plume Data with SIMPLE Model Predictions
NASA Astrophysics Data System (ADS)
Swaminathan, P. K.; Taylor, J. C.; Ross, M. N.; Zittel, P. F.; Lloyd, S. A.
2001-12-01
The JHU/APL Stratospheric IMpact of PLume Effluents (SIMPLE) model was employed to analyze the in situ trace-species composition data collected during the ACCENT 2000 intercepts of the space shuttle Space Transportation System (STS) rocket plume as a function of time and radial location within the cold plume. The SIMPLE model is initialized using predictions for species depositions calculated with an afterburning model based on standard TDK/SPP nozzle and SPF plume flowfield codes with an expanded chemical kinetic scheme. The time-dependent ambient stratospheric chemistry is fully coupled to the plume species evolution, whose transport is based on empirically derived diffusion. Model/data comparisons are encouraging, capturing the observed local ozone recovery times as well as the overall morphology of the chlorine chemistry.
NASA Astrophysics Data System (ADS)
Xu, M., III; Liu, X.
2017-12-01
In the past 60 years, both the runoff and the sediment load in the Yellow River Basin have shown significant decreasing trends owing to the influences of human activities and climate change. Quantifying the impact of each factor (e.g. precipitation, sediment trapping dams, pasture, terraces, etc.) on the runoff and sediment load is among the key issues for guiding the implementation of water and soil conservation measures and for predicting future trends. Hundreds of methods have been developed for studying the runoff and sediment load in the Yellow River Basin. Generally, these methods can be classified into empirical methods and physically based models. The empirical methods, including the hydrological method, the soil and water conservation method, etc., are widely used in Yellow River management engineering. These methods generally apply statistical analyses, such as regression analysis, to build empirical relationships between the main characteristic variables in a river basin. The elasticity method extensively used in hydrological research can be classified as an empirical method, as it is mathematically equivalent to the hydrological method. Physically based models mainly include conceptual models and distributed models. The conceptual models are usually lumped models (e.g. the SYMHD model) and can be regarded as a transition between empirical models and distributed models. The literature shows that fewer studies have applied distributed models than empirical models, as the simulated runoff and sediment load from distributed models (e.g. the Digital Yellow Integrated Model, the Geomorphology-Based Hydrological Model) were usually less satisfactory owing to the intensive human activities in the Yellow River Basin. Therefore, this study primarily summarizes the empirical models applied in the Yellow River Basin and theoretically analyzes the main causes of the significantly different results obtained with different empirical methods. Besides, we put forward an assessment framework for methods used to study runoff and sediment load variations in the Yellow River Basin, covering input data, model structure and output. The assessment framework was then applied to the Huangfuchuan River.
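The elasticity method mentioned above is usually summarized by the precipitation elasticity of streamflow; one widely used nonparametric estimator (shown as general background, not a result of this review) is

$$ \varepsilon_P = \operatorname{median}\!\left(\frac{(Q_t - \bar Q)/\bar Q}{(P_t - \bar P)/\bar P}\right), $$

so that a fractional precipitation change $\Delta P/P$ translates into a runoff change of roughly $\varepsilon_P\,\Delta P/P$, with the residual change commonly attributed to human activities.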
Li, Dongmei; Le Pape, Marc A; Parikh, Nisha I; Chen, Will X; Dye, Timothy D
2013-01-01
Microarrays are widely used for examining differential gene expression, identifying single nucleotide polymorphisms, and detecting methylation loci. Multiple testing methods in microarray data analysis aim at controlling both Type I and Type II error rates; however, real microarray data do not always fit their distribution assumptions. Smyth's ubiquitous parametric method, for example, inadequately accommodates violations of normality assumptions, resulting in inflated Type I error rates. The Significance Analysis of Microarrays, another widely used microarray data analysis method, is based on a permutation test and is robust to non-normally distributed data; however, the Significance Analysis of Microarrays method fold change criteria are problematic, and can critically alter the conclusion of a study, as a result of compositional changes of the control data set in the analysis. We propose a novel approach, combining resampling with empirical Bayes methods: the Resampling-based empirical Bayes Methods. This approach not only reduces false discovery rates for non-normally distributed microarray data, but it is also impervious to fold change threshold since no control data set selection is needed. Through simulation studies, sensitivities, specificities, total rejections, and false discovery rates are compared across the Smyth's parametric method, the Significance Analysis of Microarrays, and the Resampling-based empirical Bayes Methods. Differences in false discovery rates controls between each approach are illustrated through a preterm delivery methylation study. The results show that the Resampling-based empirical Bayes Methods offer significantly higher specificity and lower false discovery rates compared to Smyth's parametric method when data are not normally distributed. The Resampling-based empirical Bayes Methods also offers higher statistical power than the Significance Analysis of Microarrays method when the proportion of significantly differentially expressed genes is large for both normally and non-normally distributed data. Finally, the Resampling-based empirical Bayes Methods are generalizable to next generation sequencing RNA-seq data analysis.
Mapping Diffuse Seismicity Using Empirical Matched Field Processing Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, J; Templeton, D C; Harris, D B
The objective of this project is to detect and locate more microearthquakes using the empirical matched field processing (MFP) method than can be detected using only conventional earthquake detection techniques. We propose that empirical MFP can complement existing catalogs and techniques. We test our method on continuous seismic data collected at the Salton Sea Geothermal Field during November 2009 and January 2010. In the Southern California Earthquake Data Center (SCEDC) earthquake catalog, 619 events were identified in our study area during this time frame and our MFP technique identified 1094 events. Therefore, we believe that the empirical MFP method combined with conventional methods significantly improves the network detection ability in an efficient manner.
NASA Astrophysics Data System (ADS)
Trusiak, M.; Patorski, K.; Tkaczyk, T.
2014-12-01
We propose a fast, simple and experimentally robust method for reconstructing background-rejected, optically sectioned microscopic images using a two-shot structured illumination approach. The innovative data demodulation technique requires two grid-illumination images mutually phase shifted by π (half a grid period), but the precise phase displacement value is not critical. Upon subtraction of the two frames, an input pattern with increased grid modulation is computed. The proposed demodulation procedure comprises: (1) two-dimensional data processing based on the enhanced, fast empirical mode decomposition (EFEMD) method for object spatial frequency selection (noise reduction and bias term removal), and (2) calculating a high-contrast optically sectioned image using the two-dimensional spiral Hilbert transform (HS). The effectiveness of the proposed algorithm is compared with the results obtained for the same input data using conventional structured-illumination (SIM) and HiLo microscopy methods. The input data were collected for studying highly scattering tissue samples in reflectance mode. In comparison with the conventional three-frame SIM technique, we need one frame fewer, and no stringent requirement on the exact phase shift between recorded frames is imposed. The HiLo algorithm outcome is strongly dependent on the set of parameters chosen manually by the operator (cut-off frequencies for low-pass and high-pass filtering and the η parameter value for optically sectioned image reconstruction), whereas the proposed method is parameter-free. Moreover, the very short processing time required to efficiently demodulate the input pattern makes the proposed method suitable for real-time in-vivo studies. The current implementation completes full processing in 0.25 s on a medium-class PC (Intel i7 2.1 GHz processor and 8 GB RAM). A simple modification that extracts only the first two BIMFs with a fixed filter window size reduces the computing time to 0.11 s (8 frames/s).
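A compact sketch of the demodulation pipeline described above, with the EFEMD step represented by a hypothetical efemd_filter callable (the actual EFEMD implementation is not reproduced here); the spiral Hilbert step follows the standard FFT-domain vortex-filter construction:

import numpy as np

def spiral_hilbert(image):
    """Two-dimensional spiral (vortex) Hilbert transform via an FFT-domain spiral phase filter."""
    ny, nx = image.shape
    u = np.fft.fftfreq(nx)[None, :]
    v = np.fft.fftfreq(ny)[:, None]
    spiral = np.exp(1j * np.arctan2(v, u))
    spiral[0, 0] = 0.0                       # suppress the undefined DC direction
    return np.fft.ifft2(spiral * np.fft.fft2(image))

def two_shot_section(frame_0, frame_pi, efemd_filter):
    """Optically sectioned image from two grid-illuminated frames phase shifted by half a period."""
    pattern = frame_0 - frame_pi             # subtraction boosts grid modulation, removes bias
    pattern = efemd_filter(pattern)          # hypothetical EFEMD-based noise/background rejection
    hs = spiral_hilbert(pattern)
    return np.sqrt(pattern ** 2 + np.abs(hs) ** 2)   # 2-D analytic-signal magnitude

# Usage, e.g. with a pass-through filter: two_shot_section(i0, ipi, lambda x: x)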
Advanced data assimilation in strongly nonlinear dynamical systems
NASA Technical Reports Server (NTRS)
Miller, Robert N.; Ghil, Michael; Gauthiez, Francois
1994-01-01
Advanced data assimilation methods are applied to simple but highly nonlinear problems. The dynamical systems studied here are the stochastically forced double well and the Lorenz model. In both systems, linear approximation of the dynamics about the critical points near which regime transitions occur is not always sufficient to track their occurrence or nonoccurrence. Straightforward application of the extended Kalman filter yields mixed results. The ability of the extended Kalman filter to track transitions of the double-well system from one stable critical point to the other depends on the frequency and accuracy of the observations relative to the mean-square amplitude of the stochastic forcing. The ability of the filter to track the chaotic trajectories of the Lorenz model is limited to short times, as is the ability of strong-constraint variational methods. Examples are given to illustrate the difficulties involved, and qualitative explanations for these difficulties are provided. Three generalizations of the extended Kalman filter are described. The first is based on inspection of the innovation sequence, that is, the successive differences between observations and forecasts; it works very well for the double-well problem. The second, an extension to fourth-order moments, yields excellent results for the Lorenz model but will be unwieldy when applied to models with high-dimensional state spaces. A third, more practical method--based on an empirical statistical model derived from a Monte Carlo simulation--is formulated, and shown to work very well. Weak-constraint methods can be made to perform satisfactorily in the context of these simple models, but such methods do not seem to generalize easily to practical models of the atmosphere and ocean. In particular, it is shown that the equations derived in the weak variational formulation are difficult to solve conveniently for large systems.
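For readers unfamiliar with the test problem, the sketch below applies a scalar extended Kalman filter to the stochastically forced double-well system dx = (x - x^3) dt + sigma dW, linearizing the drift about the current mean; whether the filter tracks a regime transition depends, as noted above, on observation frequency and accuracy relative to the forcing amplitude. The parameter values and function names are illustrative assumptions, not those used in the study.

```python
import numpy as np

def ekf_double_well(y_obs, obs_times, dt=0.01, sigma=0.35, r_obs=0.04, x0=-1.0):
    """Scalar extended Kalman filter for dx = (x - x**3) dt + sigma dW with
    direct observations y = x + noise at the given times (a minimal sketch)."""
    m, p = x0, 0.1                       # filter mean and variance
    t, k, traj = 0.0, 0, []
    while t < obs_times[-1]:
        # forecast: Euler step for the mean, linearized variance propagation
        f_jac = 1.0 - 3.0 * m ** 2       # Jacobian of the drift x - x**3
        m += (m - m ** 3) * dt
        p += (2.0 * f_jac * p + sigma ** 2) * dt
        t += dt
        if k < len(obs_times) and t >= obs_times[k]:
            gain = p / (p + r_obs)       # analysis: scalar Kalman update
            m += gain * (y_obs[k] - m)
            p *= 1.0 - gain
            k += 1
        traj.append((t, m, p))
    return np.array(traj)

# synthetic truth with occasional regime transitions, observed every 0.5 time units
rng = np.random.default_rng(1)
dt, n = 0.01, 4000
x = np.empty(n); x[0] = -1.0
for i in range(1, n):
    x[i] = x[i-1] + (x[i-1] - x[i-1] ** 3) * dt + 0.35 * np.sqrt(dt) * rng.standard_normal()
obs_times = np.arange(0.5, n * dt, 0.5)
y_obs = x[np.round(obs_times / dt).astype(int)] + 0.2 * rng.standard_normal(len(obs_times))
estimate = ekf_double_well(y_obs, obs_times)
print(estimate[-1])
```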
2014-01-01
Background The DerSimonian and Laird approach (DL) is widely used for random effects meta-analysis, but this often results in inappropriate type I error rates. The method described by Hartung, Knapp, Sidik and Jonkman (HKSJ) is known to perform better when trials of similar size are combined. However evidence in realistic situations, where one trial might be much larger than the other trials, is lacking. We aimed to evaluate the relative performance of the DL and HKSJ methods when studies of different sizes are combined and to develop a simple method to convert DL results to HKSJ results. Methods We evaluated the performance of the HKSJ versus DL approach in simulated meta-analyses of 2–20 trials with varying sample sizes and between-study heterogeneity, and allowing trials to have various sizes, e.g. 25% of the trials being 10-times larger than the smaller trials. We also compared the number of “positive” (statistically significant at p < 0.05) findings using empirical data of recent meta-analyses with ≥3 studies of interventions from the Cochrane Database of Systematic Reviews. Results The simulations showed that the HKSJ method consistently resulted in more adequate error rates than the DL method. When the significance level was 5%, the HKSJ error rates at most doubled, whereas for DL they could be over 30%. DL, and, far less so, HKSJ had more inflated error rates when the combined studies had unequal sizes and between-study heterogeneity. The empirical data from 689 meta-analyses showed that 25.1% of the significant findings for the DL method were non-significant with the HKSJ method. DL results can be easily converted into HKSJ results. Conclusions Our simulations showed that the HKSJ method consistently results in more adequate error rates than the DL method, especially when the number of studies is small, and can easily be applied routinely in meta-analyses. Even with the HKSJ method, extra caution is needed when there are ≤5 studies of very unequal sizes. PMID:24548571
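The two estimators differ only in how the variance of the pooled effect is computed and which reference distribution is used. A minimal sketch of both, using the standard DerSimonian-Laird tau-squared and the HKSJ weighted-residual variance with a t(k-1) reference, is given below; the toy effect sizes and variances are hypothetical.

```python
import numpy as np
from scipy import stats

def dl_and_hksj(y, v):
    """Random-effects meta-analysis of effects y with within-study variances v:
    DerSimonian-Laird (normal-based) vs. Hartung-Knapp-Sidik-Jonkman (t-based) CIs."""
    k = len(y)
    w = 1.0 / v                                   # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fe) ** 2)               # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)            # DL between-study variance
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    mu = np.sum(w_re * y) / np.sum(w_re)
    # DL: normal approximation for the pooled effect
    se_dl = np.sqrt(1.0 / np.sum(w_re))
    ci_dl = mu + np.array([-1, 1]) * stats.norm.ppf(0.975) * se_dl
    # HKSJ: weighted residual variance with a t(k-1) reference distribution
    se_hksj = np.sqrt(np.sum(w_re * (y - mu) ** 2) / ((k - 1) * np.sum(w_re)))
    ci_hksj = mu + np.array([-1, 1]) * stats.t.ppf(0.975, k - 1) * se_hksj
    return mu, ci_dl, ci_hksj

# toy example: one large trial and four small ones
y = np.array([0.10, 0.35, -0.05, 0.40, 0.22])
v = np.array([0.002, 0.05, 0.06, 0.05, 0.055])
print(dl_and_hksj(y, v))
```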
NASA Astrophysics Data System (ADS)
Steyn, Gideon; Vermeulen, Christiaan
2018-05-01
An experiment was designed to study the effect of the jet direction on convective heat-transfer coefficients in single-jet gas cooling of a small heated surface, such as typically induced by an accelerated ion beam on a thin foil or specimen. The hot spot was provided using a small electrically heated plate. Heat-transfer calculations were performed using simple empirical methods based on dimensional analysis as well as by means of an advanced computational fluid dynamics (CFD) code. The results provide an explanation for the observed turbulent cooling of a double-foil, Havar beam window with fast-flowing helium, located on a target station for radionuclide production with a 66 MeV proton beam at a cyclotron facility.
NASA Technical Reports Server (NTRS)
Bates, Kevin R.; Scuseria, Gustavo E.
1997-01-01
Multi-layered round carbon particles (onions) containing tens to hundreds of thousands of atoms form during electron irradiation of graphite carbon. However, theoretical models of large icosahedral fullerenes predict highly faceted shapes for molecules with more than a few hundred atoms. This discrepancy in shape may be explained by the presence of defects during the formation of carbon onions. Here, we use the semi-empirical tight-binding method for carbon to simulate the incorporation of pentagon-heptagon defects on to the surface of large icosahedral fullerenes. We show a simple mechanism that results in energetically competitive derivative structures and a global change in molecular shape from faceted to round. Our results provide a plausible explanation of the apparent discrepancy between experimental observations of round buckyonions and theoretical predictions of faceted icosahedral fullerenes.
Empirical analysis on future-cash arbitrage risk with portfolio VaR
NASA Astrophysics Data System (ADS)
Chen, Rongda; Li, Cong; Wang, Weijin; Wang, Ze
2014-03-01
This paper constructs the positive arbitrage position by substituting the spot index with a Chinese Exchange Traded Fund (ETF) portfolio and estimating the arbitrage-free interval of futures with the latest trade data. Then, an improved Delta-normal method, which replaces the simple linear correlation coefficient with the tail dependence correlation coefficient, is used to measure the VaR (Value-at-Risk) of the arbitrage position. Analysis of the VaR implies that the risk of future-cash arbitrage is less than that of investing completely in either the futures or the spot market. Then, according to the compositional VaR and the marginal VaR, we should increase the futures position and decrease the spot position appropriately to minimize the VaR, which minimizes risk subject to certain revenues.
A Simple and Accurate Rate-Driven Infiltration Model
NASA Astrophysics Data System (ADS)
Cui, G.; Zhu, J.
2017-12-01
In this study, we develop a novel Rate-Driven Infiltration Model (RDIMOD) for simulating infiltration into soils. Unlike traditional methods, RDIMOD avoids numerically solving the highly non-linear Richards equation or simply modeling with empirical parameters. RDIMOD employs infiltration rate as model input to simulate the one-dimensional infiltration process by solving an ordinary differential equation. The model can simulate the evolutions of wetting front, infiltration rate, and cumulative infiltration on any surface slope, including vertical and horizontal directions. Compared to the results from the Richards equation for both vertical infiltration and horizontal infiltration, RDIMOD simply and accurately predicts infiltration processes for any type of soil and soil hydraulic model without numerical difficulty. Taking into account its accuracy, capability, computational effectiveness and stability, RDIMOD can be used in large-scale hydrologic and land-atmosphere modeling.
Determination of performance of non-ideal aluminized explosives.
Keshavarz, Mohammad Hossein; Mofrad, Reza Teimuri; Poor, Karim Esmail; Shokrollahi, Arash; Zali, Abbas; Yousefi, Mohammad Hassan
2006-09-01
Non-ideal explosives can have Chapman-Jouguet (C-J) detonation pressure significantly different from those expected from existing thermodynamic computer codes, which usually allows finding the parameters of ideal detonation of individual high explosives with good accuracy. A simple method is introduced by which detonation pressure of non-ideal aluminized explosives with general formula C(a)H(b)N(c)O(d)Al(e) can be predicted only from a, b, c, d and e at any loading density without using any assumed detonation products and experimental data. Calculated detonation pressures show good agreement with experimental values with respect to computed results obtained by complicated computer code. It is shown here how loading density and atomic composition can be integrated into an empirical formula for predicting detonation pressure of proposed aluminized explosives.
Determination of 15N/14N and 13C/12C in Solid and Aqueous Cyanides
Johnson, C.A.
1996-01-01
The stable isotopic compositions of nitrogen and carbon in cyanide compounds can be determined by combusting aliquots in sealed tubes to form N2 gas and CO2 gas and analyzing the gases by mass spectrometry. Free cyanide (CN⁻(aq) + HCN(aq)) in simple solutions can also be analyzed by first precipitating the cyanide as copper(II) ferrocyanide and then combusting the precipitate. Reproducibility is ±0.5‰ or better for both δ15N and δ13C. If empirical corrections are made on the basis of carbon yields, the reproducibility of δ13C can be improved to ±0.2‰. The analytical methods described herein are sufficiently accurate and precise to apply stable isotope techniques to problems of cyanide degradation in natural waters and industrial process solutions.
Approximating Long-Term Statistics Early in the Global Precipitation Measurement Era
NASA Technical Reports Server (NTRS)
Stanley, Thomas; Kirschbaum, Dalia B.; Huffman, George J.; Adler, Robert F.
2017-01-01
Long-term precipitation records are vital to many applications, especially the study of extreme events. The Tropical Rainfall Measuring Mission (TRMM) has served this need, but TRMM's successor mission, Global Precipitation Measurement (GPM), does not yet provide a long-term record. Quantile mapping, the conversion of values across paired empirical distributions, offers a simple, established means to approximate such long-term statistics, but only within appropriately defined domains. This method was applied to a case study in Central America, demonstrating that quantile mapping between TRMM and GPM data maintains the performance of a real-time landslide model. Use of quantile mapping could bring the benefits of the latest satellite-based precipitation dataset to existing user communities such as those for hazard assessment, crop forecasting, numerical weather prediction, and disease tracking.
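Quantile mapping itself is straightforward: build paired empirical quantiles from overlapping samples of the two products and interpolate between them. A minimal sketch is shown below; the synthetic gamma-distributed rain rates stand in for the real TRMM and GPM samples, and the 99 fixed probability levels are an assumed discretization.

```python
import numpy as np

def quantile_map(trmm_train, gpm_train, gpm_new):
    """Map new GPM values onto the TRMM climatology by matching empirical quantiles
    (a minimal sketch; in practice the mapping is built per region and season)."""
    probs = np.linspace(0.01, 0.99, 99)
    gpm_q = np.quantile(gpm_train, probs)         # paired empirical quantiles
    trmm_q = np.quantile(trmm_train, probs)
    # piecewise-linear transfer function; values outside the range are clipped
    return np.interp(gpm_new, gpm_q, trmm_q)

# toy example with synthetic rain-rate samples
rng = np.random.default_rng(0)
trmm = rng.gamma(shape=0.8, scale=6.0, size=5000)   # long "TRMM" record
gpm = rng.gamma(shape=0.9, scale=5.0, size=800)     # shorter "GPM" record
print(quantile_map(trmm, gpm, gpm[:5]))
```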
NASA Astrophysics Data System (ADS)
Liu, Ruipeng; Di Matteo, T.; Lux, Thomas
2007-09-01
In this paper, we consider daily financial data of a collection of different stock market indices, exchange rates, and interest rates, and we analyze their multi-scaling properties by estimating a simple specification of the Markov-switching multifractal (MSM) model. In order to see how well the estimated model captures the temporal dependence of the data, we estimate and compare the scaling exponents H(q) (for q=1,2) for both empirical data and simulated data of the MSM model. In most cases the multifractal model appears to generate ‘apparent’ long memory in agreement with the empirical scaling laws.
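The scaling exponents H(q) referred to above can be estimated from the scaling of the q-th absolute moments of log-price increments with the time lag. The sketch below shows this moment-scaling estimate only (the MSM estimation step is not reproduced); the lag range and the synthetic random-walk example are assumptions for illustration.

```python
import numpy as np

def generalized_hurst(prices, q_values=(1, 2), taus=range(1, 20)):
    """Estimate H(q) from the scaling E|x(t+tau) - x(t)|^q ~ tau^(q*H(q))
    of log-price increments (a simple moment-scaling sketch)."""
    x = np.log(np.asarray(prices, dtype=float))
    taus = np.asarray(list(taus))
    h = {}
    for q in q_values:
        moments = [np.mean(np.abs(x[tau:] - x[:-tau]) ** q) for tau in taus]
        slope = np.polyfit(np.log(taus), np.log(moments), 1)[0]
        h[q] = slope / q
    return h

# example: a random walk should give H(q) close to 0.5 for q = 1, 2
rng = np.random.default_rng(2)
p = np.exp(np.cumsum(0.01 * rng.standard_normal(5000)))
print(generalized_hurst(p))
```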
Newell, Ben R
2005-01-01
The appeal of simple algorithms that take account of both the constraints of human cognitive capacity and the structure of environments has been an enduring theme in cognitive science. A novel version of such a boundedly rational perspective views the mind as containing an 'adaptive toolbox' of specialized cognitive heuristics suited to different problems. Although intuitively appealing, when this version was proposed, empirical evidence for the use of such heuristics was scant. I argue that in the light of empirical studies carried out since then, it is time this 'vision of rationality' was revised. An alternative view based on integrative models rather than collections of heuristics is proposed.
Digital Morphing Wing: Active Wing Shaping Concept Using Composite Lattice-Based Cellular Structures
Jenett, Benjamin; Calisch, Sam; Cellucci, Daniel; Cramer, Nick; Gershenfeld, Neil; Swei, Sean
2017-01-01
Abstract We describe an approach for the discrete and reversible assembly of tunable and actively deformable structures using modular building block parts for robotic applications. The primary technical challenge addressed by this work is the use of this method to design and fabricate low density, highly compliant robotic structures with spatially tuned stiffness. This approach offers a number of potential advantages over more conventional methods for constructing compliant robots. The discrete assembly reduces manufacturing complexity, as relatively simple parts can be batch-produced and joined to make complex structures. Global mechanical properties can be tuned based on sub-part ordering and geometry, because local stiffness and density can be independently set to a wide range of values and varied spatially. The structure's intrinsic modularity can significantly simplify analysis and simulation. Simple analytical models for the behavior of each building block type can be calibrated with empirical testing and synthesized into a highly accurate and computationally efficient model of the full compliant system. As a case study, we describe a modular and reversibly assembled wing that performs continuous span-wise twist deformation. It exhibits high performance aerodynamic characteristics, is lightweight and simple to fabricate and repair. The wing is constructed from discrete lattice elements, wherein the geometric and mechanical attributes of the building blocks determine the global mechanical properties of the wing. We describe the mechanical design and structural performance of the digital morphing wing, including their relationship to wind tunnel tests that suggest the ability to increase roll efficiency compared to a conventional rigid aileron system. We focus here on describing the approach to design, modeling, and construction as a generalizable approach for robotics that require very lightweight, tunable, and actively deformable structures. PMID:28289574
Molecular interactions in nanocellulose assembly
NASA Astrophysics Data System (ADS)
Nishiyama, Yoshiharu
2017-12-01
The contribution of hydrogen bonds and the London dispersion force in the cohesion of cellulose is discussed in the light of the structure, spectroscopic data, empirical molecular-modelling parameters and thermodynamics data of analogue molecules. The hydrogen bond of cellulose is mainly electrostatic, and the stabilization energy in cellulose for each hydrogen bond is estimated to be between 17 and 30 kJ mol⁻¹. On average, hydroxyl groups of cellulose form hydrogen bonds comparable to those of other simple alcohols. The London dispersion interaction may be estimated from empirical attraction terms in molecular modelling by simple integration over all components. Although this interaction extends to relatively large distances in colloidal systems, the short-range interaction is dominant for the cohesion of cellulose and is equivalent to a compression of 3 GPa. Trends in the heat of vaporization of alkyl alcohols and alkanes suggest a stabilization by such hydroxyl group hydrogen bonding of the order of 24 kJ mol⁻¹, whereas the London dispersion force contributes about 0.41 kJ mol⁻¹ Da⁻¹. The simple arithmetic sum of the energy is consistent with the experimental enthalpy of sublimation of small sugars, where the main part of the cohesive energy comes from hydrogen bonds. For cellulose, because of the reduced number of hydroxyl groups, the London dispersion force provides the main contribution to intermolecular cohesion. This article is part of a discussion meeting issue 'New horizons for cellulose nanotechnology'.
Power-Laws and Scaling in Finance: Empirical Evidence and Simple Models
NASA Astrophysics Data System (ADS)
Bouchaud, Jean-Philippe
We discuss several models that may explain the origin of power-law distributions and power-law correlations in financial time series. From an empirical point of view, the exponents describing the tails of the price increments distribution and the decay of the volatility correlations are rather robust and suggest universality. However, many of the models that appear naturally (for example, to account for the distribution of wealth) contain some multiplicative noise, which generically leads to non universal exponents. Recent progress in the empirical study of the volatility suggests that the volatility results from some sort of multiplicative cascade. A convincing `microscopic' (i.e. trader based) model that explains this observation is however not yet available. We discuss a rather generic mechanism for long-ranged volatility correlations based on the idea that agents constantly switch between active and inactive strategies depending on their relative performance.
Using Empirical Mode Decomposition to process Marine Magnetotelluric Data
NASA Astrophysics Data System (ADS)
Chen, J.; Jegen, M. D.; Heincke, B. H.; Moorkamp, M.
2014-12-01
Magnetotelluric (MT) data always exhibit nonstationarities due to variations of source mechanisms, causing MT variations on different time and spatial scales. An additional non-stationary component is introduced through noise, which is particularly pronounced in marine MT data through motion-induced noise caused by time-varying wave motion and currents. We present a new heuristic method for dealing with the non-stationarity of MT time series based on Empirical Mode Decomposition (EMD). The EMD method is used in combination with the derived instantaneous spectra to determine impedance estimates. The procedure is tested on synthetic and field MT data. In synthetic tests the reliability of impedance estimates from the EMD-based method is compared to the synthetic responses of a 1D layered model. To examine how estimates are affected by noise, stochastic stationary and non-stationary noise are added to the time series. Comparisons reveal that estimates by the EMD-based method are generally more stable than those by simple Fourier analysis. Furthermore, the results are compared to those derived by a commonly used Fourier-based MT data processing software (BIRRP), which incorporates additional sophisticated robust estimations to deal with noise issues. It is revealed that the results from both methods are already comparable, even though no robust estimation procedures are implemented in the EMD approach at the present stage. The processing scheme is then applied to marine MT field data. Testing is performed on short, relatively quiet segments of several data sets, as well as on long segments of data with many non-stationary noise packages. Compared to BIRRP, the new method gives comparable or better impedance estimates; furthermore, the estimates are extended to lower frequencies, and less noise-biased estimates with smaller error bars are obtained at high frequencies. The new processing methodology represents an important step towards deriving a better resolved Earth model to greater depth underneath the seafloor.
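To make the decomposition step concrete, the sketch below implements a bare-bones EMD sifting loop (cubic-spline envelopes, a fixed number of sifts, crude endpoint handling). It is not the authors' processing chain: the instantaneous spectra, impedance estimation and any robust weighting are omitted, and the stopping rules and example signal are assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_once(x, t):
    """One sifting pass: subtract the mean of the upper and lower cubic-spline envelopes."""
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    if len(maxima) < 2 or len(minima) < 2:
        return None                         # too few extrema: residual reached
    # crude boundary handling: pin the envelopes to the record endpoints
    upper = CubicSpline(np.r_[t[0], t[maxima], t[-1]], np.r_[x[0], x[maxima], x[-1]])(t)
    lower = CubicSpline(np.r_[t[0], t[minima], t[-1]], np.r_[x[0], x[minima], x[-1]])(t)
    return x - 0.5 * (upper + lower)

def emd(x, t, max_imfs=6, n_sifts=10):
    """Minimal empirical mode decomposition: returns IMFs plus the final residual."""
    imfs, resid = [], x.copy()
    for _ in range(max_imfs):
        h = resid.copy()
        for _ in range(n_sifts):            # fixed number of sifts instead of an SD criterion
            h_new = sift_once(h, t)
            if h_new is None:
                return imfs + [resid]
            h = h_new
        imfs.append(h)
        resid = resid - h
    return imfs + [resid]

# example: a two-tone signal with a drifting baseline
t = np.linspace(0.0, 10.0, 2000)
signal = np.sin(2 * np.pi * 3.0 * t) + 0.5 * np.sin(2 * np.pi * 0.4 * t) + 0.1 * t
modes = emd(signal, t)
print(len(modes))
```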
Sample Size Estimation in Cluster Randomized Educational Trials: An Empirical Bayes Approach
ERIC Educational Resources Information Center
Rotondi, Michael A.; Donner, Allan
2009-01-01
The educational field has now accumulated an extensive literature reporting on values of the intraclass correlation coefficient, a parameter essential to determining the required size of a planned cluster randomized trial. We propose here a simple simulation-based approach including all relevant information that can facilitate this task. An…
A Simple Syllogism-Solving Test: Empirical Findings and Implications for "g" Research
ERIC Educational Resources Information Center
Shikishima, Chizuru; Yamagata, Shinji; Hiraishi, Kai; Sugimoto, Yutaro; Murayama, Kou; Ando, Juko
2011-01-01
It has been reported that the ability to solve syllogisms is highly "g"-loaded. In the present study, using a self-administered shortened version of a syllogism-solving test, the "BAROCO Short," we examined whether robust findings generated by previous research regarding IQ scores were also applicable to "BAROCO…
Modeling Smoke Plume-Rise and Dispersion from Southern United States Prescribed Burns with Daysmoke
G L Achtemeier; S L Goodrick; Y Liu; F Garcia-Menendez; Y Hu; M. Odman
2011-01-01
We present Daysmoke, an empirical-statistical plume rise and dispersion model for simulating smoke from prescribed burns. Prescribed fires are characterized by complex plume structure including multiple-core updrafts which makes modeling with simple plume models difficult. Daysmoke accounts for plume structure in a three-dimensional veering/sheering atmospheric...
Social Trust and the Growth of Schooling
ERIC Educational Resources Information Center
Bjornskov, Christian
2009-01-01
The paper develops a simple model to examine how social trust might affect the growth of schooling through lowering transaction costs associated with employing educated individuals. In a sample of 52 countries, the paper thereafter provides empirical evidence that trust has led to faster growth of schooling in the period 1960-2000. The findings…
The Polarization of Light and Malus' Law Using Smartphones
ERIC Educational Resources Information Center
Monteiro, Martín; Stari, Cecilia; Cabeza, Cecilia; Marti, Arturo C.
2017-01-01
Originally an empirical law, nowadays Malus' law is seen as a key experiment to demonstrate the transverse nature of electromagnetic waves, as well as the intrinsic connection between optics and electromagnetism. In this work, a simple and inexpensive setup is proposed to quantitatively verify the nature of polarized light. A flat computer screen…
Constrained range expansion and climate change assessments
Yohay Carmel; Curtis H. Flather
2006-01-01
Modeling the future distribution of keystone species has proved to be an important approach to assessing the potential ecological consequences of climate change (Loehle and LeBlanc 1996; Hansen et al. 2001). Predictions of range shifts are typically based on empirical models derived from simple correlative relationships between climatic characteristics of occupied and...
Another View: In Defense of Vigor over Rigor in Classroom Demonstrations
ERIC Educational Resources Information Center
Dunn, Dana S.
2008-01-01
Scholarship of teaching and learning (SoTL) demands greater empirical rigor on the part of authors and the editorial process than ever before. Although admirable and important, I worry that this increasing rigor will limit opportunities and outlets for a form of pedagogical vigor--the publication of simple, experiential, but empirically…
A Cognitive Framework in Teaching English Simple Present
ERIC Educational Resources Information Center
Tian, Cong
2015-01-01
A Cognitive Grammar (CG) analysis of linguistic constructions has been claimed to be beneficial to second language teaching. However, little empirical research has been done to support this claim. In this study, two intact classes of Chinese senior high school students were given a 45-minute review lesson on the usages of the English simple…
NASA Astrophysics Data System (ADS)
Reid, Lucas; Kittlaus, Steffen; Scherer, Ulrike
2015-04-01
For large areas without highly detailed data, the empirical Universal Soil Loss Equation (USLE) is widely used to quantify soil loss. The problem, though, is usually the quantification of the actual sediment influx into the rivers. As the USLE provides long-term mean soil loss rates, it is often combined with spatially lumped models to estimate the sediment delivery ratio (SDR). Spatially lumped approaches become difficult, however, in large catchment areas where the geographical properties vary widely. In this study we developed a simple but spatially distributed approach to quantify the sediment delivery ratio by considering the characteristics of the flow paths in the catchments. The sediment delivery ratio was determined using an empirical approach considering the slope, morphology and land use properties along the flow path as an estimation of the travel time of the eroded particles. The model was tested against suspended solids measurements in selected sub-basins of the River Inn catchment area in Germany and Austria, ranging from the high alpine south to the Molasse basin in the northern part.
Rates of profit as correlated sums of random variables
NASA Astrophysics Data System (ADS)
Greenblatt, R. E.
2013-10-01
Profit realization is the dominant feature of market-based economic systems, determining their dynamics to a large extent. Rather than attaining an equilibrium, profit rates vary widely across firms, and the variation persists over time. Differing definitions of profit result in differing empirical distributions. To study the statistical properties of profit rates, I used data from a publicly available database for the US Economy for 2009-2010 (Risk Management Association). For each of three profit rate measures, the sample space consists of 771 points. Each point represents aggregate data from a small number of US manufacturing firms of similar size and type (NAICS code of principal product). When comparing the empirical distributions of profit rates, significant ‘heavy tails’ were observed, corresponding principally to a number of firms with larger profit rates than would be expected from simple models. An apparently novel correlated sum of random variables statistical model was used to model the data. In the case of operating and net profit rates, a number of firms show negative profits (losses), ruling out simple gamma or lognormal distributions as complete models for these data.
Khan, Mohammad Niyaz; Yusof, Nor Saadah Mohd; Razak, Norazizah Abdul
2013-01-01
The semi-empirical spectrophotometric (SESp) method, for the indirect determination of ion exchange constants (K_X^Br) of ion exchange processes occurring between counterions (X⁻ and Br⁻) at the cationic micellar surface, is described in this article. The method uses an anionic spectrophotometric probe molecule, N-(2-methoxyphenyl)phthalamate ion (1⁻), which measures the effects of varying concentrations of inert inorganic or organic salt (Na_vX, v = 1, 2) on absorbance, A_ob at 310 nm, of samples containing constant concentrations of 1⁻, NaOH and cationic micelles. The observed data fit satisfactorily to an empirical equation which gives the values of two empirical constants. These empirical constants lead to the determination of K_X^Br (= K_X/K_Br with K_X and K_Br representing cationic micellar binding constants of counterions X and Br⁻). This method gives values of K_X^Br for both moderately hydrophobic and hydrophilic X⁻. The values of K_X^Br, obtained by using this method, are comparable with the corresponding values of K_X^Br, obtained by the use of the semi-empirical kinetic (SEK) method, for different moderately hydrophobic X. The values of K_X^Br for X = Cl⁻ and 2,6-Cl₂C6H₃CO₂⁻, obtained by the use of the SESp and SEK methods, are similar to those obtained by the use of other different conventional methods.
Exploratory Causal Analysis in Bivariate Time Series Data
NASA Astrophysics Data System (ADS)
McCracken, James M.
Many scientific disciplines rely on observational data of systems for which it is difficult (or impossible) to implement controlled experiments, and data analysis techniques are required for identifying causal information and relationships directly from observational data. This need has led to the development of many different time series causality approaches and tools including transfer entropy, convergent cross-mapping (CCM), and Granger causality statistics. In this thesis, the existing time series causality method of CCM is extended by introducing a new method called pairwise asymmetric inference (PAI). It is found that CCM may provide counter-intuitive causal inferences for simple dynamics with strong intuitive notions of causality, and the CCM causal inference can be a function of physical parameters that are seemingly unrelated to the existence of a driving relationship in the system. For example, a CCM causal inference might alternate between "voltage drives current" and "current drives voltage" as the frequency of the voltage signal is changed in a series circuit with a single resistor and inductor. PAI is introduced to address both of these limitations. Many of the current approaches in the time series causality literature are not computationally straightforward to apply, do not follow directly from assumptions of probabilistic causality, depend on assumed models for the time series generating process, or rely on embedding procedures. A new approach, called causal leaning, is introduced in this work to avoid these issues. The leaning is found to provide causal inferences that agree with intuition for both simple systems and more complicated empirical examples, including space weather data sets. The leaning may provide a clearer interpretation of the results than those from existing time series causality tools. A practicing analyst can explore the literature to find many proposals for identifying drivers and causal connections in time series data sets, but little research exists on how these tools compare to each other in practice. This work introduces and defines exploratory causal analysis (ECA) to address this issue along with the concept of data causality in the taxonomy of causal studies introduced in this work. The motivation is to provide a framework for exploring potential causal structures in time series data sets. ECA is used on several synthetic and empirical data sets, and it is found that all of the tested time series causality tools agree with each other (and intuitive notions of causality) for many simple systems but can provide conflicting causal inferences for more complicated systems. It is proposed that such disagreements between different time series causality tools during ECA might provide deeper insight into the data than could be found otherwise.
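Of the tools named above, the Granger causality statistic is the simplest to sketch: compare the residual variance of an autoregressive model of y with and without lagged values of x via an F-test. The sketch below is a minimal bivariate version; PAI, CCM and the leaning are not reproduced here, and the lag order and toy data are assumptions.

```python
import numpy as np
from scipy import stats

def granger_f_test(x, y, lag=2):
    """Does x help predict y? Compare an AR(lag) model of y with a model that adds
    lagged x, via an F-test on the residual sums of squares (a minimal sketch)."""
    n = len(y)
    y_t = y[lag:]
    lags_y = np.column_stack([y[lag - j:n - j] for j in range(1, lag + 1)])
    lags_x = np.column_stack([x[lag - j:n - j] for j in range(1, lag + 1)])
    ones = np.ones((len(y_t), 1))
    restricted = np.hstack([ones, lags_y])
    full = np.hstack([ones, lags_y, lags_x])
    rss_r = np.sum((y_t - restricted @ np.linalg.lstsq(restricted, y_t, rcond=None)[0]) ** 2)
    rss_f = np.sum((y_t - full @ np.linalg.lstsq(full, y_t, rcond=None)[0]) ** 2)
    df1, df2 = lag, len(y_t) - full.shape[1]
    f_stat = ((rss_r - rss_f) / df1) / (rss_f / df2)
    return f_stat, 1.0 - stats.f.cdf(f_stat, df1, df2)

# example: x drives y with a one-step delay, but not the reverse
rng = np.random.default_rng(3)
x = rng.standard_normal(500)
y = 0.8 * np.roll(x, 1) + 0.3 * rng.standard_normal(500)
print(granger_f_test(x, y), granger_f_test(y, x))
```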
NASA Astrophysics Data System (ADS)
Jaber, Abobaker M.
2014-12-01
Two nonparametric methods for the prediction and modeling of financial time series signals are proposed. The proposed techniques are designed to handle non-stationary and non-linear behaviour and to extract meaningful signals for reliable prediction. Using the Fourier transform (FT), the methods select the significant decomposed signals that will be employed for signal prediction. The proposed techniques were developed by coupling the Holt-Winters method with Empirical Mode Decomposition (EMD) and with its smoothed extension (SEMD), which extends the scope of empirical mode decomposition by smoothing. To show the performance of the proposed techniques, we analyze the daily closing price of the Kuala Lumpur stock market index.
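A minimal sketch of the hybrid idea, assuming the decomposed components (e.g. IMFs from an EMD or SEMD step) are already available: each component is forecast with an exponential-smoothing (Holt) model and the component forecasts are summed. The use of statsmodels' ExponentialSmoothing, the horizon and the synthetic components are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def forecast_by_components(components, horizon=10):
    """Fit an additive-trend exponential smoothing model to each decomposed
    component and sum the component forecasts (a sketch of the hybrid idea)."""
    total = np.zeros(horizon)
    for comp in components:
        fit = ExponentialSmoothing(comp, trend="add", seasonal=None).fit()
        total += np.asarray(fit.forecast(horizon))
    return total

# example with two synthetic components standing in for decomposed modes
t = np.arange(300, dtype=float)
components = [0.01 * t + 1.0, np.sin(2 * np.pi * t / 25.0)]
print(forecast_by_components(components, horizon=5))
```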
NASA Astrophysics Data System (ADS)
Kanki, R.; Uchiyama, Y.; Miyazaki, D.; Takano, A.; Miyazawa, Y.; Yamazaki, H.
2014-12-01
Mesoscale oceanic structure and variability are required to be reproduced as accurately as possible in realistic regional ocean modeling. Uchiyama et al. (2012) demonstrated with a submesoscale eddy-resolving JCOPE2-ROMS downscaling oceanic modeling system that the mesoscale reproducibility of the Kuroshio meandering along Japan is significantly improved by introducing a simple restoration to data which we call "TS nudging" (a.k.a. robust diagnosis), where the prognostic temperature and salinity fields are weakly nudged four-dimensionally towards the assimilative JCOPE2 reanalysis (Miyazawa et al., 2009). However, there is not always a reliable reanalysis for oceanic downscaling in an arbitrary region and at an arbitrary time, and therefore an alternative dataset should be prepared. Takano et al. (2009) proposed an empirical method to estimate the mesoscale 3-D thermal structure from the near real-time AVISO altimetry data along with the ARGO float data, based on the two-layer model of Goni et al. (1996). In the present study, we consider the TS data derived from this method as a candidate. We thus conduct a synoptic forward modeling of the Kuroshio using the JCOPE2-ROMS downscaling system to explore the potential utility of this empirical TS dataset (hereinafter TUM-TS) by carrying out two runs with the TS nudging towards 1) the JCOPE2-TS and 2) TUM-TS fields. An example of the comparison between the two ROMS test runs is shown in the attached figure showing the annually averaged surface EKE. Both TUM-TS and JCOPE2-TS are found to help reproduce the mesoscale variance of the Kuroshio and its extension, as well as its mean paths, surface KE and EKE, reasonably well. Therefore, the AVISO-ARGO derived empirical 3-D TS estimate is a potentially exploitable dataset for TS nudging to reproduce mesoscale oceanic structure.
A computational approach to compare regression modelling strategies in prediction research.
Pajouheshnia, Romin; Pestman, Wiebe R; Teerenstra, Steven; Groenwold, Rolf H H
2016-08-25
It is often unclear which approach to fit, assess and adjust a model will yield the most accurate prediction model. We present an extension of an approach for comparing modelling strategies in linear regression to the setting of logistic regression and demonstrate its application in clinical prediction research. A framework for comparing logistic regression modelling strategies by their likelihoods was formulated using a wrapper approach. Five different strategies for modelling, including simple shrinkage methods, were compared in four empirical data sets to illustrate the concept of a priori strategy comparison. Simulations were performed in both randomly generated data and empirical data to investigate the influence of data characteristics on strategy performance. We applied the comparison framework in a case study setting. Optimal strategies were selected based on the results of a priori comparisons in a clinical data set and the performance of models built according to each strategy was assessed using the Brier score and calibration plots. The performance of modelling strategies was highly dependent on the characteristics of the development data in both linear and logistic regression settings. A priori comparisons in four empirical data sets found that no strategy consistently outperformed the others. The percentage of times that a model adjustment strategy outperformed a logistic model ranged from 3.9 to 94.9 %, depending on the strategy and data set. However, in our case study setting the a priori selection of optimal methods did not result in detectable improvement in model performance when assessed in an external data set. The performance of prediction modelling strategies is a data-dependent process and can be highly variable between data sets within the same clinical domain. A priori strategy comparison can be used to determine an optimal logistic regression modelling strategy for a given data set before selecting a final modelling approach.
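A simplified analogue of the a priori comparison idea: score several logistic regression strategies (here, different amounts of ridge shrinkage standing in for the strategies compared in the paper) by cross-validated log-likelihood on the development data, then refit the winning strategy and inspect its apparent Brier score. The strategy names, penalty settings and synthetic data are illustrative assumptions, not the framework used in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.metrics import brier_score_loss

def compare_strategies(X, y, n_folds=10):
    """A priori comparison of logistic modelling strategies by cross-validated
    log-likelihood on the development data (a simplified analogue of the wrapper idea)."""
    strategies = {
        "little shrinkage": LogisticRegression(C=1e6, max_iter=2000),
        "mild ridge shrinkage": LogisticRegression(C=1.0, max_iter=2000),
        "strong ridge shrinkage": LogisticRegression(C=0.1, max_iter=2000),
    }
    return {name: cross_val_score(model, X, y, cv=n_folds, scoring="neg_log_loss").mean()
            for name, model in strategies.items()}

# synthetic development data: pick the best strategy, refit it, inspect the Brier score
rng = np.random.default_rng(4)
X = rng.standard_normal((300, 8))
y = (X @ rng.standard_normal(8) + 0.5 * rng.standard_normal(300) > 0).astype(int)
scores = compare_strategies(X, y)
best = max(scores, key=scores.get)
c_best = {"little shrinkage": 1e6, "mild ridge shrinkage": 1.0, "strong ridge shrinkage": 0.1}[best]
model = LogisticRegression(C=c_best, max_iter=2000).fit(X, y)
print(best, brier_score_loss(y, model.predict_proba(X)[:, 1]))
```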
What can we learn about dispersion from the conformer surface of n-pentane?
Martin, Jan M L
2013-04-11
In earlier work [Gruzman, D. ; Karton, A.; Martin, J. M. L. J. Phys. Chem. A 2009, 113, 11974], we showed that conformer energies in alkanes (and other systems) are highly dispersion-driven and that uncorrected DFT functionals fail badly at reproducing them, while simple empirical dispersion corrections tend to overcorrect. To gain greater insight into the nature of the phenomenon, we have mapped the torsional surface of n-pentane to 10-degree resolution at the CCSD(T)-F12 level near the basis set limit. The data obtained have been decomposed by order of perturbation theory, excitation level, and same-spin vs opposite-spin character. A large number of approximate electronic structure methods have been considered, as well as several empirical dispersion corrections. Our chief conclusions are as follows: (a) the effect of dispersion is dominated by same-spin correlation (or triplet-pair correlation, from a different perspective); (b) singlet-pair correlation is important for the surface, but qualitatively very dissimilar to the dispersion component; (c) single and double excitations beyond third order are essentially unimportant for this surface; (d) connected triple excitations do play a role but are statistically very similar to the MP2 singlet-pair correlation; (e) the form of the damping function is crucial for good performance of empirical dispersion corrections; (f) at least in the lower-energy regions, SCS-MP2 and especially MP2.5 perform very well; (g) novel spin-component scaled double hybrid functionals such as DSD-PBEP86-D2 acquit themselves very well for this problem.
Salloch, Sabine; Schildmann, Jan; Vollmann, Jochen
2012-04-13
The methodology of medical ethics during the last few decades has shifted from a predominant use of normative-philosophical analyses to an increasing involvement of empirical methods. The articles which have been published in the course of this so-called 'empirical turn' can be divided into conceptual accounts of empirical-normative collaboration and studies which use socio-empirical methods to investigate ethically relevant issues in concrete social contexts. A considered reference to normative research questions can be expected from good quality empirical research in medical ethics. However, a significant proportion of empirical studies currently published in medical ethics lacks such linkage between the empirical research and the normative analysis. In the first part of this paper, we will outline two typical shortcomings of empirical studies in medical ethics with regard to a link between normative questions and empirical data: (1) The complete lack of normative analysis, and (2) cryptonormativity and a missing account with regard to the relationship between 'is' and 'ought' statements. Subsequently, two selected concepts of empirical-normative collaboration will be presented and how these concepts may contribute to improve the linkage between normative and empirical aspects of empirical research in medical ethics will be demonstrated. Based on our analysis, as well as our own practical experience with empirical research in medical ethics, we conclude with a sketch of concrete suggestions for the conduct of empirical research in medical ethics. High quality empirical research in medical ethics is in need of a considered reference to normative analysis. In this paper, we demonstrate how conceptual approaches of empirical-normative collaboration can enhance empirical research in medical ethics with regard to the link between empirical research and normative analysis.
Non-imaging ray-tracing for sputtering simulation with apodization
NASA Astrophysics Data System (ADS)
Ou, Chung-Jen
2018-04-01
Although apodization patterns have been adopted for the analysis of sputtering sources, analytical solutions for the film thickness equations are still limited to simple conditions. Because empirical formulations for thin-film sputtering lack the flexibility to deal with multi-substrate conditions, a suitable cost-effective procedure is required to estimate the film thickness distribution. This study reports a cross-discipline simulation program, which is based on discrete-particle Monte-Carlo methods and has been successfully applied to a non-imaging design to solve problems associated with sputtering uniformity. The robustness of the present method is first proved by comparing it with a typical analytical solution. Further, this report also investigates the overall effects caused by the size of the deposited substrate, such that the determination of the distance from the target surface and of the apodization index can be completed. This verifies the capability of the proposed method for solving sputtering film thickness problems. The benefit is that an optical thin-film engineer can, using the same optical software, design a specific optical component and consider the possible coating qualities with thickness tolerance during the design stage.
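The flavour of such a non-imaging, discrete-particle calculation can be conveyed in a few lines: sample emission directions from a cos^n(theta) apodization pattern, trace straight rays to a parallel substrate plane, and bin the arrivals per unit area to obtain a relative thickness profile. The geometry, the exponent n and the function names below are illustrative assumptions, not the configuration studied in the paper.

```python
import numpy as np

def thickness_profile(n_rays=200_000, n_cos=2.0, target_to_substrate=0.10,
                      r_max=0.15, n_bins=30, seed=5):
    """Monte-Carlo estimate of the relative film-thickness profile on a planar
    substrate facing a point-like sputter source with a cos^n emission (apodization)
    pattern. A non-imaging ray-tracing sketch, not the paper's full program."""
    rng = np.random.default_rng(seed)
    # sample emission angles: pdf over solid angle ~ cos^n(theta) * sin(theta)
    cos_theta = rng.random(n_rays) ** (1.0 / (n_cos + 1.0))
    phi = 2.0 * np.pi * rng.random(n_rays)
    sin_theta = np.sqrt(1.0 - cos_theta ** 2)
    # propagate straight rays to the substrate plane z = target_to_substrate
    scale = target_to_substrate / cos_theta
    x = scale * sin_theta * np.cos(phi)
    y = scale * sin_theta * np.sin(phi)
    r = np.hypot(x, y)
    # accumulate hits per unit area in radial bins
    edges = np.linspace(0.0, r_max, n_bins + 1)
    hits, _ = np.histogram(r, bins=edges)
    area = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)
    profile = hits / area
    return 0.5 * (edges[:-1] + edges[1:]), profile / profile.max()

radii, rel_thickness = thickness_profile()
print(rel_thickness[:5])
```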
A genetic-algorithm approach for assessing the liquefaction potential of sandy soils
NASA Astrophysics Data System (ADS)
Sen, G.; Akyol, E.
2010-04-01
The determination of liquefaction potential requires taking into account a large number of parameters, which creates a complex nonlinear structure of the liquefaction phenomenon. The conventional methods rely on simple statistical and empirical relations or charts. However, they cannot characterise these complexities. Genetic algorithms are suited to solve these types of problems. A genetic algorithm-based model has been developed to determine the liquefaction potential, using Cone Penetration Test (CPT) datasets derived from case studies of sandy soils. Software has been developed that uses genetic algorithms for the parameter selection and assessment of liquefaction potential. Then several estimation functions for the assessment of a Liquefaction Index have been generated from the dataset. The generated Liquefaction Index estimation functions were evaluated by assessing the training and test data. The suggested formulation estimates the liquefaction occurrence with significant accuracy. Moreover, the parametric study on the liquefaction index curves shows a good relation with the physical behaviour. The total number of misestimated cases was only 7.8% for the proposed method, which is quite low when compared to another commonly used method.
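A toy illustration of the general approach (not the authors' genetic algorithm, CPT variables or function representation): evolve the weights of a simple liquefaction-index function so that it separates liquefied from non-liquefied cases in a labelled dataset. The selection, crossover and mutation scheme and the synthetic data are assumptions.

```python
import numpy as np

def fit_liquefaction_index(features, liquefied, pop_size=60, n_gen=200, seed=6):
    """Toy genetic algorithm: evolve weights w so that sigmoid(features @ w) separates
    liquefied from non-liquefied cases. Purely illustrative."""
    rng = np.random.default_rng(seed)
    pop = rng.normal(size=(pop_size, features.shape[1]))

    def fitness(w):
        index = 1.0 / (1.0 + np.exp(-(features @ w)))       # liquefaction index in [0, 1]
        return np.mean((index > 0.5) == liquefied)           # classification accuracy

    for _ in range(n_gen):
        scores = np.array([fitness(w) for w in pop])
        parents = pop[np.argsort(scores)][-pop_size // 2:]   # truncation selection
        # crossover: random blend of two parents, plus Gaussian mutation
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        alpha = rng.random((pop_size, 1))
        pop = alpha * parents[idx[:, 0]] + (1 - alpha) * parents[idx[:, 1]]
        pop += 0.1 * rng.normal(size=pop.shape)
    return pop[np.argmax([fitness(w) for w in pop])]

# synthetic CPT-style example: [cone resistance, sleeve friction, depth, cyclic stress ratio]
rng = np.random.default_rng(7)
X = rng.normal(size=(200, 4))
y = (X @ np.array([-1.0, 0.5, 0.3, 1.2]) + 0.3 * rng.normal(size=200)) > 0
print(fit_liquefaction_index(X, y))
```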
NASA Technical Reports Server (NTRS)
Smalley, A. J.; Tessarzik, J. M.
1975-01-01
Effects of temperature, dissipation level and geometry on the dynamic behavior of elastomer elements were investigated. Force displacement relationships in elastomer elements and the effects of frequency, geometry and temperature upon these relationships are reviewed. Based on this review, methods of reducing stiffness and damping data for shear and compression test elements to material properties (storage and loss moduli) and empirical geometric factors are developed and tested using previously generated experimental data. A prediction method which accounts for large amplitudes of deformation is developed on the assumption that their effect is to increase temperature through the elastomers, thereby modifying the local material properties. Various simple methods of predicting the radial stiffness of ring cartridge elements are developed and compared. Material properties were determined from the shear specimen tests as a function of frequency and temperature. Using these material properties, numerical predictions of stiffness and damping for cartridge and compression specimens were made and compared with corresponding measurements at different temperatures, with encouraging results.
Recent trends in counts of migrant hawks from northeastern North America
Titus, K.; Fuller, M.R.
1990-01-01
Using simple regression, pooled-sites route-regression, and nonparametric rank-trend analyses, we evaluated trends in counts of hawks migrating past 6 eastern hawk lookouts from 1972 to 1987. The indexing variable was the total count for a season. Bald eagle (Haliaeetus leucocephalus), peregrine falcon (Falco peregrinus), merlin (F. columbarius), osprey (Pandion haliaetus), and Cooper's hawk (Accipiter cooperii) counts increased using route-regression and nonparametric methods (P < 0.10). We found no consistent trends (P > 0.10) in counts of sharp-shinned hawks (A. striatus), northern goshawks (A. gentilis), red-shouldered hawks (Buteo lineatus), red-tailed hawks (B. jamaicensis), rough-legged hawks (B. lagopus), and American kestrels (F. sparverius). Broad-winged hawk (B. platypterus) counts declined (P < 0.05) based on the route-regression method. Empirical comparisons of our results with those for well-studied species such as the peregrine falcon, bald eagle, and osprey indicated agreement with nesting surveys. We suggest that counts of migrant hawks are a useful and economical method for detecting long-term trends in species across regions, particularly for species that otherwise cannot be easily surveyed.
ζ Oph and the weak-wind problem
NASA Astrophysics Data System (ADS)
Gvaramadze, V. V.; Langer, N.; Mackey, J.
2012-11-01
Mass-loss rate, Ṁ, is one of the key parameters affecting evolution and observational manifestations of massive stars and their impact on the ambient medium. Despite its importance, there is a factor of ˜100 discrepancy between empirical and theoretical Ṁ of late-type O dwarfs, the so-called weak-wind problem. In this Letter, we propose a simple novel method to constrain Ṁ of runaway massive stars through observation of their bow shocks and Strömgren spheres, which might be of decisive importance for resolving the weak-wind problem. Using this method, we found that Ṁ of the well-known runaway O9.5 V star ζ Oph is more than an order of magnitude higher than that derived from ultraviolet (UV) line fitting and is by a factor of 6-7 lower than those based on the theoretical recipe by Vink et al. and the Hα line. The discrepancy between Ṁ derived by our method and that based on UV lines would be even more severe if the stellar wind is clumpy. At the same time, our estimate of Ṁ agrees with that predicted by the moving reversing layer theory by Lucy.
NASA Technical Reports Server (NTRS)
Mansur, M. Hossein; Tischler, Mark B.
1997-01-01
Historically, component-type flight mechanics simulation models of helicopters have been unable to satisfactorily predict the off-axis responses: the roll response to pitch stick input and the pitch response to roll stick input. In the study presented here, simple first-order low-pass filtering of the elemental lift and drag forces was considered as a means of improving the correlation. The method was applied to a blade-element model of the AH-64 Apache, and responses of the modified model were compared with flight data in hover and forward flight. Results indicate that significant improvement in the off-axis responses can be achieved in hover. In forward flight, however, the best correlation in the longitudinal and lateral off-axis responses required different values of the filter time constant for each axis. A compromise value was selected and was shown to result in good overall improvement in the off-axis responses. The paper describes both the method and the model used for its implementation, and presents results obtained at hover and in forward flight.
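The filtering step itself is elementary: each elemental lift and drag force is passed through a first-order low-pass filter with time constant tau before being summed into the rotor loads. A discrete sketch of such a filter is shown below; the sample rate and tau value are illustrative, not the values identified for the AH-64 model.

```python
import numpy as np

def low_pass(signal, dt, tau):
    """Discrete first-order low-pass filter, dy/dt = (u - y) / tau, applied sample
    by sample; a sketch of filtering an elemental airload with time constant tau."""
    alpha = dt / (tau + dt)
    out = np.empty_like(signal, dtype=float)
    out[0] = signal[0]
    for i in range(1, len(signal)):
        out[i] = out[i - 1] + alpha * (signal[i] - out[i - 1])
    return out

# example: lag a step change in an elemental airload by tau = 0.1 s at 100 Hz
u = np.r_[np.zeros(50), np.ones(150)]
print(low_pass(u, dt=0.01, tau=0.1)[48:55])
```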
Scaling laws between population and facility densities.
Um, Jaegon; Son, Seung-Woo; Lee, Sung-Ik; Jeong, Hawoong; Kim, Beom Jun
2009-08-25
When a new facility like a grocery store, a school, or a fire station is planned, its location should ideally be determined by the necessities of people who live nearby. Empirically, it has been found that there exists a positive correlation between facility and population densities. In the present work, we investigate the ideal relation between the population and the facility densities within the framework of an economic mechanism governing microdynamics. In previous studies based on the global optimization of facility positions in minimizing the overall travel distance between people and facilities, it was shown that the density of facility D and that of population ρ should follow a simple power law D ∼ ρ^(2/3). In our empirical analysis, on the other hand, the power-law exponent α in D ∼ ρ^α is not a fixed value but spreads in a broad range depending on facility types. To explain this discrepancy in α, we propose a model based on economic mechanisms that mimic the competitive balance between the profit of the facilities and the social opportunity cost for populations. Through our simple, microscopically driven model, we show that commercial facilities driven by the profit of the facilities have α = 1, whereas public facilities driven by the social opportunity cost have α = 2/3. We simulate this model to find the optimal positions of facilities on a real U.S. map and show that the results are consistent with the empirical data.
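The exponent α in D ∼ ρ^α is typically estimated by ordinary least squares on log-transformed densities. A minimal sketch is shown below, using synthetic data generated at the public-facility value α = 2/3; the noise level and sample size are assumptions.

```python
import numpy as np

def facility_exponent(population_density, facility_density):
    """Estimate alpha in D ~ rho^alpha by least squares on log-transformed densities."""
    log_rho = np.log(population_density)
    log_d = np.log(facility_density)
    alpha, log_c = np.polyfit(log_rho, log_d, 1)
    return alpha, np.exp(log_c)

# synthetic check: public-facility-like data generated with alpha = 2/3
rng = np.random.default_rng(8)
rho = rng.lognormal(mean=5.0, sigma=1.0, size=400)
d = 0.01 * rho ** (2.0 / 3.0) * np.exp(0.2 * rng.standard_normal(400))
print(facility_exponent(rho, d))   # alpha should come out near 0.667
```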
Electrostatics of cysteine residues in proteins: Parameterization and validation of a simple model
Salsbury, Freddie R.; Poole, Leslie B.; Fetrow, Jacquelyn S.
2013-01-01
One of the most popular and simple models for the calculation of pKas from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKas. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKas; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKas. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues. PMID:22777874
The Development Evaluation of Economic Zones in China.
Liu, Wei; Shi, Hong-Bo; Zhang, Zhe; Tsai, Sang-Bing; Zhai, Yuming; Chen, Quan; Wang, Jiangtao
2018-01-02
After the Chinese reform and opening up, the construction of economic zones, such as Special Economic Zones, Hi-tech Zones and Bonded Zones, has played an irreplaceable role in China's economic development. Currently, against the background of Chinese economic transition, research on development evaluation of economic zones has become popular and necessary. Similar research usually focuses on one specific field, and the methods that are used to evaluate it are simple. This research aims to analyse the development evaluation of zones by synthesis. A new hybrid multiple criteria decision making (MCDM) model that combines the DEMATEL technique and the DANP method is proposed. After establishing the evaluation criterion system and acquiring data, the influential weights of dimensions and criteria can be calculated, which will be a guide for forming measures of development. Shandong Peninsula Blue Economic Zone is used in the empirical case analysis. The results show that Transportation Conditions, Industrial Structure and Business Climate are the main influencing criteria and measures based on these criteria are proposed.
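For readers unfamiliar with the technique, the core DEMATEL step is a small matrix computation: normalize the direct-influence matrix, form the total-relation matrix T = X (I - X)^-1, and read off prominence and net cause/effect from its row and column sums. The sketch below is generic; the paper's criterion system and the DANP weighting step are not reproduced, and the toy influence matrix is hypothetical.

```python
import numpy as np

def dematel(direct_influence):
    """Basic DEMATEL step: normalize the direct-influence matrix, compute the
    total-relation matrix T = X (I - X)^-1, then prominence (R + C) and relation (R - C)."""
    a = np.asarray(direct_influence, dtype=float)
    x = a / max(a.sum(axis=1).max(), a.sum(axis=0).max())   # normalization
    t = x @ np.linalg.inv(np.eye(len(a)) - x)               # total-relation matrix
    r, c = t.sum(axis=1), t.sum(axis=0)
    return t, r + c, r - c                                  # prominence, net cause/effect

# toy 3-criterion example (e.g. transportation, industrial structure, business climate)
direct = [[0, 3, 2],
          [1, 0, 3],
          [2, 1, 0]]
t, prominence, relation = dematel(direct)
print(prominence, relation)
```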
Support Vector Data Descriptions and k-Means Clustering: One Class?
Gornitz, Nico; Lima, Luiz Alberto; Muller, Klaus-Robert; Kloft, Marius; Nakajima, Shinichi
2017-09-27
We present ClusterSVDD, a methodology that unifies support vector data descriptions (SVDDs) and k-means clustering into a single formulation. This allows both methods to benefit from one another, i.e., by adding flexibility using multiple spheres for SVDDs and increasing anomaly resistance and flexibility through kernels to k-means. In particular, our approach leads to a new interpretation of k-means as a regularized mode seeking algorithm. The unifying formulation further allows for deriving new algorithms by transferring knowledge from one-class learning settings to clustering settings and vice versa. As a showcase, we derive a clustering method for structured data based on a one-class learning scenario. Additionally, our formulation can be solved via a particularly simple optimization scheme. We evaluate our approach empirically to highlight some of the proposed benefits on artificially generated data, as well as on real-world problems, and provide a Python software package comprising various implementations of primal and dual SVDD as well as our proposed ClusterSVDD.
The Development Evaluation of Economic Zones in China
Shi, Hong-Bo; Zhang, Zhe; Zhai, Yuming; Chen, Quan; Wang, Jiangtao
2018-01-01
After the Chinese reform and opening up, the construction of economic zones, such as Special Economic Zones, Hi-tech Zones and Bonded Zones, has played an irreplaceable role in China’s economic development. Currently, against the background of Chinese economic transition, research on development evaluation of economic zones has become popular and necessary. Similar research usually focuses on one specific field, and the methods that are used to evaluate it are simple. This research aims to analyse the development evaluation of zones by synthesis. A new hybrid multiple criteria decision making (MCDM) model that combines the DEMATEL technique and the DANP method is proposed. After establishing the evaluation criterion system and acquiring data, the influential weights of dimensions and criteria can be calculated, which will be a guide for forming measures of development. Shandong Peninsula Blue Economic Zone is used in the empirical case analysis. The results show that Transportation Conditions, Industrial Structure and Business Climate are the main influencing criteria and measures based on these criteria are proposed. PMID:29301304
Self-consistent approach for neutral community models with speciation
NASA Astrophysics Data System (ADS)
Haegeman, Bart; Etienne, Rampal S.
2010-03-01
Hubbell’s neutral model provides a rich theoretical framework to study ecological communities. By incorporating both ecological and evolutionary time scales, it allows us to investigate how communities are shaped by speciation processes. The speciation model in the basic neutral model is particularly simple, describing speciation as a point-mutation event in the birth of a single individual. The stationary species abundance distribution of the basic model, which can be solved exactly, fits empirical data of distributions of species’ abundances surprisingly well. More realistic speciation models have been proposed, such as the random-fission model in which new species appear by splitting up existing species. However, no analytical solution is available for these models, impeding quantitative comparison with data. Here, we present a self-consistent approximation method for neutral community models with various speciation modes, including random fission. We derive explicit formulas for the stationary species abundance distribution, which agree very well with simulations. We expect that our approximation method will be useful to study other speciation processes in neutral community models as well.
Rosen, G D
2006-06-01
Meta-analysis is a vague descriptor used to encompass very diverse methods of data collection analysis, ranging from simple averages to more complex statistical methods. Holo-analysis is a fully comprehensive statistical analysis of all available data and all available variables in a specified topic, with results expressed in a holistic factual empirical model. The objectives and applications of holo-analysis include software production for prediction of responses with confidence limits, translation of research conditions to praxis (field) circumstances, exposure of key missing variables, discovery of theoretically unpredictable variables and interactions, and planning future research. Holo-analyses are cited as examples of the effects on broiler feed intake and live weight gain of exogenous phytases, which account for 70% of variation in responses in terms of 20 highly significant chronological, dietary, environmental, genetic, managemental, and nutrient variables. Even better future accountancy of variation will be facilitated if and when authors of papers routinely provide key data for currently neglected variables, such as temperatures, complete feed formulations, and mortalities.
DOT National Transportation Integrated Search
1997-05-01
Current pavement design procedures are based principally on empirical approaches. The current trend toward developing more mechanistic-empirical type pavement design methods led Minnesota to develop the Minnesota Road Research Project (Mn/ROAD), a lo...
Cashin, Cheryl; Phuong, Nguyen Khanh; Shain, Ryan; Oanh, Tran Thi Mai; Thuy, Nguyen Thi
2015-01-01
Vietnam is currently considering a revision of its 2008 Health Insurance Law, including the regulation of provider payment methods. This study uses a simple spreadsheet-based, micro-simulation model to analyse the potential impacts of different provider payment reform scenarios on resource allocation across health care providers in three provinces in Vietnam, as well as on the total expenditure of the provincial branches of the public health insurance agency (Provincial Social Security [PSS]). The results show that currently more than 50% of PSS spending is concentrated at the provincial level, with less than half at the district level. The current fund-holding arrangement also places a high degree of financial risk on district hospitals. Results of the simulation model show that several alternative scenarios for provider payment reform could improve the current payment system by reducing the high financial risk currently borne by district hospitals without dramatically shifting the current level and distribution of PSS expenditure. The results of the simulation analysis provided an empirical basis for health policy-makers in Vietnam to assess different provider payment reform options and make decisions about new models to support health system objectives.
Spatial analysis of cities using Renyi entropy and fractal parameters
NASA Astrophysics Data System (ADS)
Chen, Yanguang; Feng, Jian
2017-12-01
The spatial distributions of cities fall into two groups: one is the simple distribution with characteristic scale (e.g. exponential distribution), and the other is the complex distribution without characteristic scale (e.g. power-law distribution). The latter belongs to scale-free distributions, which can be modeled with fractal geometry. However, fractal dimension is not suitable for the former distribution. In contrast, spatial entropy can be used to measure any type of urban distribution. This paper is devoted to generalizing multifractal parameters by means of the dual relation between Euclidean and fractal geometries. The main methods are mathematical derivation and empirical analysis, and the theoretical foundation is the discovery that the normalized fractal dimension is equal to the normalized entropy. Based on this finding, a set of useful spatial indexes termed dummy multifractal parameters is defined for geographical analysis. These indexes can be employed to describe both simple and complex distributions. The dummy multifractal indexes are applied to the population density distribution of Hangzhou city, China. The calculation results reveal the features of the spatio-temporal evolution of Hangzhou's urban morphology. This study indicates that fractal dimension and spatial entropy can be combined to produce a new methodology for spatial analysis of city development.
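A minimal sketch of the Rényi entropy spectrum that underlies such generalized, multifractal-style measures is shown below; the gridded population counts are hypothetical and the paper's dummy multifractal indexes are not reproduced:

```python
import numpy as np

def renyi_entropy(p, q):
    """Renyi entropy of order q for a discrete probability vector p."""
    p = p[p > 0]
    if np.isclose(q, 1.0):                      # q -> 1 limit is the Shannon entropy
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** q)) / (1.0 - q)

# Hypothetical gridded population counts over city cells
counts = np.array([120, 40, 5, 300, 80, 15, 220, 60, 10], dtype=float)
p = counts / counts.sum()

for q in (0, 1, 2):
    print(f"H_{q} = {renyi_entropy(p, q):.3f}")
```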
A univariate model of river water nitrate time series
NASA Astrophysics Data System (ADS)
Worrall, F.; Burt, T. P.
1999-01-01
Four time series were taken from three catchments in the North and South of England. The sites chosen included two in predominantly agricultural catchments, one at the tidal limit and one downstream of a sewage treatment works. A time series model was constructed for each of these series as a means of decomposing the elements controlling river water nitrate concentrations and to assess whether this approach could provide a simple management tool for protecting water abstractions. Autoregressive (AR) modelling of the detrended and deseasoned time series showed a "memory effect". This memory effect expressed itself as an increase in the winter-summer difference in nitrate levels that was dependent upon the nitrate concentration 12 or 6 months previously. Autoregressive moving average (ARMA) modelling showed that one of the series contained seasonal, non-stationary elements that appeared as an increasing trend in the winter-summer difference. The ARMA model was used to predict nitrate levels and predictions were tested against data held back from the model construction process - predictions gave average percentage errors of less than 10%. Empirical modelling can therefore provide a simple, efficient method for constructing management models for downstream water abstraction.
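A minimal sketch of the lag-12 "memory effect" regression on a deseasonalized monthly series is given below; the series is synthetic and the coefficients are not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, phi_true = 120, 0.6                          # ten years of monthly data
x = np.zeros(n)
for i in range(12, n):                          # seasonal-lag memory: x_t = phi * x_{t-12} + noise
    x[i] = phi_true * x[i - 12] + rng.normal(0, 0.5)
t = np.arange(n)
nitrate = 5 + 2 * np.sin(2 * np.pi * t / 12) + x   # synthetic nitrate series (mg/l)

# Deseason by subtracting monthly means, then regress on the 12-month lag
monthly_mean = np.array([nitrate[m::12].mean() for m in range(12)])
deseasoned = nitrate - monthly_mean[t % 12]
y, z = deseasoned[12:], deseasoned[:-12]
phi_hat = np.sum(z * y) / np.sum(z * z)
print(f"estimated lag-12 coefficient: {phi_hat:.2f} (true {phi_true})")
```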
Reasenberg, P.A.; Hanks, T.C.; Bakun, W.H.
2003-01-01
The moment magnitude M 7.8 earthquake in 1906 profoundly changed the rate of seismic activity over much of northern California. The low rate of seismic activity in the San Francisco Bay region (SFBR) since 1906, relative to that of the preceding 55 yr, is often explained as a stress-shadow effect of the 1906 earthquake. However, existing elastic and visco-elastic models of stress change fail to fully account for the duration of the lowered rate of earthquake activity. We use variations in the rate of earthquakes as a basis for a simple empirical model for estimating the probability of M ≥6.7 earthquakes in the SFBR. The model preserves the relative magnitude distribution of sources predicted by the Working Group on California Earthquake Probabilities' (WGCEP, 1999; WGCEP, 2002) model of characterized ruptures on SFBR faults and is consistent with the occurrence of the four M ≥6.7 earthquakes in the region since 1838. When the empirical model is extrapolated 30 yr forward from 2002, it gives a probability of 0.42 for one or more M ≥6.7 in the SFBR. This result is lower than the probability of 0.5 estimated by WGCEP (1988), lower than the 30-yr Poisson probability of 0.60 obtained by WGCEP (1999) and WGCEP (2002), and lower than the 30-yr time-dependent probabilities of 0.67, 0.70, and 0.63 obtained by WGCEP (1990), WGCEP (1999), and WGCEP (2002), respectively, for the occurrence of one or more large earthquakes. This lower probability is consistent with the lack of adequate accounting for the 1906 stress-shadow in these earlier reports. The empirical model represents one possible approach toward accounting for the stress-shadow effect of the 1906 earthquake. However, the discrepancy between our result and those obtained with other modeling methods underscores the fact that the physics controlling the timing of earthquakes is not well understood. Hence, we advise against using the empirical model alone (or any other single probability model) for estimating the earthquake hazard and endorse the use of all credible earthquake probability models for the region, including the empirical model, with appropriate weighting, as was done in WGCEP (2002).
A simple empirical model for the clarification-thickening process in wastewater treatment plants.
Zhang, Y K; Wang, H C; Qi, L; Liu, G H; He, Z J; Fan, H T
2015-01-01
In wastewater treatment plants (WWTPs), activated sludge is thickened in secondary settling tanks and recycled into the biological reactor to maintain enough biomass for wastewater treatment. Accurately estimating the activated sludge concentration in the lower portion of the secondary clarifiers is of great importance for evaluating and controlling the sludge recycled ratio, ensuring smooth and efficient operation of the WWTP. By dividing the overall activated sludge-thickening curve into a hindered zone and a compression zone, an empirical model describing activated sludge thickening in the compression zone was obtained by empirical regression. This empirical model was developed through experiments conducted using sludge from five WWTPs, and validated by the measured data from a sixth WWTP, which fit the model well (R² = 0.98, p < 0.001). The model requires application of only one parameter, the sludge volume index (SVI), which is readily incorporated into routine analysis. By combining this model with the conservation of mass equation, an empirical model for compression settling was also developed. Finally, the effects of denitrification and addition of a polymer were also analysed because of their effect on sludge thickening, which can be useful for WWTP operation, e.g., improving wastewater treatment or the proper use of the polymer.
Stirling Analysis Comparison of Commercial vs. High-Order Methods
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.; Demko, Rikako
2007-01-01
Recently, three-dimensional Stirling engine simulations have been accomplished utilizing commercial Computational Fluid Dynamics software. The validations reported can be somewhat inconclusive due to the lack of precise time accurate experimental results from engines, export control/proprietary concerns, and the lack of variation in the methods utilized. The last issue may be addressed by solving the same flow problem with alternate methods. In this work, a comprehensive examination of the methods utilized in the commercial codes is compared with more recently developed high-order methods. Specifically, Lele's Compact scheme and Dyson's Ultra Hi-Fi method will be compared with the SIMPLE and PISO methods currently employed in CFD-ACE, FLUENT, CFX, and STAR-CD (all commercial codes which can in theory solve a three-dimensional Stirling model although sliding interfaces and their moving grids limit the effective time accuracy). We will initially look at one-dimensional flows since the current standard practice is to design and optimize Stirling engines with empirically corrected friction and heat transfer coefficients in an overall one-dimensional model. This comparison provides an idea of the range in which commercial CFD software for modeling Stirling engines may be expected to provide accurate results. In addition, this work provides a framework for improving current one-dimensional analysis codes.
Stirling Analysis Comparison of Commercial Versus High-Order Methods
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.; Demko, Rikako
2005-01-01
Recently, three-dimensional Stirling engine simulations have been accomplished utilizing commercial Computational Fluid Dynamics software. The validations reported can be somewhat inconclusive due to the lack of precise time accurate experimental results from engines, export control/proprietary concerns, and the lack of variation in the methods utilized. The last issue may be addressed by solving the same flow problem with alternate methods. In this work, a comprehensive examination of the methods utilized in the commercial codes is compared with more recently developed high-order methods. Specifically, Lele's compact scheme and Dyson's Ultra Hi-Fi method will be compared with the SIMPLE and PISO methods currently employed in CFD-ACE, FLUENT, CFX, and STAR-CD (all commercial codes which can in theory solve a three-dimensional Stirling model, although sliding interfaces and their moving grids limit the effective time accuracy). We will initially look at one-dimensional flows since the current standard practice is to design and optimize Stirling engines with empirically corrected friction and heat transfer coefficients in an overall one-dimensional model. This comparison provides an idea of the range in which commercial CFD software for modeling Stirling engines may be expected to provide accurate results. In addition, this work provides a framework for improving current one-dimensional analysis codes.
Fight the power: the limits of empiricism and the costs of positivistic rigor.
Indick, William
2002-01-01
A summary of the influence of positivistic philosophy and empiricism on the field of psychology is followed by a critique of the empirical method. The dialectic process is advocated as an alternative method of inquiry. The main advantage of the dialectic method is that it is open to any logical argument, including empirical hypotheses, but unlike empiricism, it does not automatically reject arguments that are not based on observable data. Evolutionary and moral psychology are discussed as examples of important fields of study that could benefit from types of arguments that frequently do not conform to the empirical standards of systematic observation and falsifiability of hypotheses. A dialectic method is shown to be a suitable perspective for those fields of research, because it allows for logical arguments that are not empirical and because it fosters a functionalist perspective, which is indispensable for both evolutionary and moral theories. It is suggested that all psychologists may gain from adopting a dialectic approach, rather than restricting themselves to empirical arguments alone.
The Use of Empirical Methods for Testing Granular Materials in Analogue Modelling
Montanari, Domenico; Agostini, Andrea; Bonini, Marco; Corti, Giacomo; Del Ventisette, Chiara
2017-01-01
The behaviour of a granular material is mainly dependent on its frictional properties, angle of internal friction, and cohesion, which, together with material density, are the key factors to be considered during the scaling procedure of analogue models. The frictional properties of a granular material are usually investigated by means of technical instruments such as a Hubbert-type apparatus and ring shear testers, which allow for investigating the response of the tested material to a wide range of applied stresses. Here we explore the possibility to determine material properties by means of different empirical methods applied to mixtures of quartz and K-feldspar sand. Empirical methods exhibit the great advantage of measuring the properties of a certain analogue material under the experimental conditions, which are strongly sensitive to the handling techniques. Finally, the results obtained from the empirical methods have been compared with ring shear tests carried out on the same materials, which show a satisfactory agreement with those determined empirically. PMID:28772993
NASA Astrophysics Data System (ADS)
Hansen, K. C.; Fougere, N.; Bieler, A. M.; Altwegg, K.; Combi, M. R.; Gombosi, T. I.; Huang, Z.; Rubin, M.; Tenishev, V.; Toth, G.; Tzou, C. Y.
2015-12-01
We have previously published results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model and its characterization of the neutral coma of comet 67P/Churyumov-Gerasimenko through detailed comparison with data collected by the ROSINA/COPS (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis/COmet Pressure Sensor) instrument aboard the Rosetta spacecraft [Bieler, 2015]. Results from these DSMC models have been used to create an empirical model of the near comet coma (<200 km) of comet 67P. The empirical model characterizes the neutral coma in a comet centered, sun fixed reference frame as a function of heliocentric distance, radial distance from the comet, local time and declination. The model is a significant improvement over more simple empirical models, such as the Haser model. While the DSMC results are a more accurate representation of the coma at any given time, the advantage of a mean state, empirical model is the ease and speed of use. One use of such an empirical model is in the calculation of a total cometary coma production rate from the ROSINA/COPS data. The COPS data are in situ measurements of gas density and velocity along the ROSETTA spacecraft track. Converting the measured neutral density into a production rate requires knowledge of the neutral gas distribution in the coma. Our empirical model provides this information and therefore allows us to correct for the spacecraft location to calculate a production rate as a function of heliocentric distance. We will present the full empirical model as well as the calculated neutral production rate for the period of August 2014 - August 2015 (perihelion).
[Is a bioethics based on experimental evidence possible?].
Pastor, Luis Miguel
2013-01-01
For years there have been various criticisms of principlist bioethics. One alternative that has been proposed is to introduce empirical evidence into the bioethical discourse to make it less formal, less theoretical and closer to reality. In this paper we first analyze, in synthetic form, diverse alternative proposals for an empirical bioethics. Some of them are strongly naturalistic, while others aim to provide empirical data only to correct or improve bioethical work. Most of them do not favor maintaining a complete separation between facts and values, between what is and what ought to be. With different nuances, these proposals of moderate naturalism make ethical judgments depend on normative social opinion, resulting in a certain social naturalism. Against these proposals, we argue that to develop a bioethics that relates empirical facts to ethical duties, we must rediscover the empirical reality of human action. Only from this reality, and in particular from the discernment exercised by practical reason when it judges the object of an action, is it possible to integrate merely descriptive facts with prescriptive ethical judgments. In conclusion, we think that it is not possible to practice bioethics as an empirical science, as this would be contrary to natural reason and lead to a sort of scientific reductionism. At the same time, we believe that empirical data are important for the development of bioethics and for enhancing and improving the innate ability of human reason to discern the good. From this discernment, a bioethics could be developed from the perspective of the ethical agents themselves, avoiding the extremes of an excessive normative rationalism, accepting empirical data and not falling into a simple pragmatism.
Empirical deck for phased construction and widening [summary].
DOT National Transportation Integrated Search
2017-06-01
The most common method used to design and analyze bridge decks, termed the traditional method, treats a deck slab as if it were made of strips supported by inflexible girders. An alternative, the empirical method, treats the deck slab as a ...
A DEIM Induced CUR Factorization
2015-09-18
We derive a CUR approximate matrix factorization based on the Discrete Empirical Interpolation Method (DEIM). For a given matrix A, such a factorization provides a ... CUR approximations based on leverage scores.
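A minimal sketch of a DEIM-style CUR construction, selecting row and column indices from the singular vectors, is given below; it reflects only the basic idea, not necessarily the exact algorithm of the report:

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM index selection from an orthonormal basis U (columns)."""
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, U.shape[1]):
        # Residual of the j-th basis vector after interpolating on the chosen rows
        coeffs = np.linalg.solve(U[idx, :j], U[idx, j])
        r = U[:, j] - U[:, :j] @ coeffs
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 40)) @ rng.standard_normal((40, 40))
k = 5

U, s, Vt = np.linalg.svd(A, full_matrices=False)
rows = deim_indices(U[:, :k])          # DEIM on left singular vectors -> row indices
cols = deim_indices(Vt.T[:, :k])       # DEIM on right singular vectors -> column indices

C, R = A[:, cols], A[rows, :]
U_mid = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
err = np.linalg.norm(A - C @ U_mid @ R) / np.linalg.norm(A)
print(f"relative CUR error with k = {k}: {err:.3f}")
```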
The dynamics of adapting, unregulated populations and a modified fundamental theorem.
O'Dwyer, James P
2013-01-06
A population in a novel environment will accumulate adaptive mutations over time, and the dynamics of this process depend on the underlying fitness landscape: the fitness of and mutational distance between possible genotypes in the population. Despite its fundamental importance for understanding the evolution of a population, inferring this landscape from empirical data has been problematic. We develop a theoretical framework to describe the adaptation of a stochastic, asexual, unregulated, polymorphic population undergoing beneficial, neutral and deleterious mutations on a correlated fitness landscape. We generate quantitative predictions for the change in the mean fitness and within-population variance in fitness over time, and find a simple, analytical relationship between the distribution of fitness effects arising from a single mutation, and the change in mean population fitness over time: a variant of Fisher's 'fundamental theorem' which explicitly depends on the form of the landscape. Our framework can therefore be thought of in three ways: (i) as a set of theoretical predictions for adaptation in an exponentially growing phase, with applications in pathogen populations, tumours or other unregulated populations; (ii) as an analytically tractable problem to potentially guide theoretical analysis of regulated populations; and (iii) as a basis for developing empirical methods to infer general features of a fitness landscape.
Fang, Huaming; Zhang, Peng; Huang, Lisa P.; Zhao, Zhengyi; Pi, Fengmei; Montemagno, Carlo; Guo, Peixuan
2014-01-01
Living systems produce ordered structures and nanomachines that inspire the development of biomimetic nanodevices such as chips, MEMS, actuators, sensors, sorters, and apparatuses for single-pore DNA sequencing, disease diagnosis, drug or therapeutic RNA delivery. Determination of the copy numbers of the subunits that build these machines is challenging due to their small size. Here we report a simple mathematical method to determine the stoichiometry, using the phi29 DNA-packaging nanomotor as a model to elucidate the application of the formula $\sum_{M=0}^{Z}\binom{Z}{M}p^{Z-M}q^{M}$, where p and q are the percentages of wild-type and inactive mutant subunits in the empirical assay, M is the copy number of mutant subunits, and Z is the stoichiometry in question. Variable ratios of mutant and wild-type were mixed to inhibit motor function. Empirical data were plotted over the theoretical curves to determine the stoichiometry and the value of K, which is the number of mutant subunits needed in each machine to block its function, all under the assumption that wild-type and mutant are equal in binding affinity. Both Z and K from 1–12 were investigated. The data precisely confirmed that the phi29 motor contains six copies (Z) of the motor ATPase gp16, and that K = 1. PMID:24650885
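A minimal sketch of the binomial mixing model behind this formula (using the same equal-binding-affinity assumption) is shown below; the values are illustrative only:

```python
from math import comb

def active_fraction(Z, K, q):
    """Fraction of machines still active when >= K mutant copies are needed to block one.

    Z: subunit copy number, K: mutant copies required for inhibition,
    q: fraction of mutant subunits in the pool (p = 1 - q is wild type).
    """
    p = 1.0 - q
    return sum(comb(Z, M) * p ** (Z - M) * q ** M for M in range(K))

# Theoretical inhibition curves for two candidate stoichiometries at K = 1
for Z in (6, 12):
    curve = [round(active_fraction(Z, K=1, q=q), 3) for q in (0.1, 0.3, 0.5, 0.7)]
    print(f"Z = {Z:2d}:", curve)
```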
Benassi, Enrico
2017-01-15
A number of programs and tools that simulate 1H and 13C nuclear magnetic resonance (NMR) chemical shifts using empirical approaches are available. These tools are user-friendly, but they provide a very rough (and sometimes misleading) estimation of the NMR properties, especially for complex systems. Rigorous and reliable ways to predict and interpret NMR properties of simple and complex systems are available in many popular computational program packages. Nevertheless, experimentalists keep relying on these "unreliable" tools in their daily work because, to have a sufficiently high accuracy, these rigorous quantum mechanical methods need high levels of theory. An alternative, efficient, semi-empirical approach has been proposed by Bally, Rablen, Tantillo, and coworkers. This idea consists of creating linear calibration models on the basis of the application of different combinations of functionals and basis sets. Following this approach, the predictive capability of a wider range of popular functionals was systematically investigated and tested. The NMR chemical shifts were computed in the solvated phase at the density functional theory level, using 30 different functionals coupled with three different triple-ζ basis sets. © 2016 Wiley Periodicals, Inc.
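A minimal sketch of the linear-calibration idea (regressing computed shieldings against experimental shifts and reusing the fit for prediction) is shown below; the shielding and shift values are hypothetical, not from the study:

```python
import numpy as np

# Hypothetical training data: computed isotropic shieldings (ppm) vs. experimental 13C shifts (ppm)
sigma_calc = np.array([180.1, 160.4, 120.7, 95.3, 60.2, 20.5])
delta_exp  = np.array([  5.0,  25.1,  65.8,  91.0, 127.4, 168.2])

# Linear calibration: delta_exp = slope * sigma_calc + intercept
slope, intercept = np.polyfit(sigma_calc, delta_exp, 1)

def predict_shift(sigma):
    """Convert a computed shielding to a calibrated chemical shift."""
    return slope * sigma + intercept

print(f"slope = {slope:.3f}, intercept = {intercept:.1f}")
print(f"predicted shift for sigma = 110.0 ppm: {predict_shift(110.0):.1f} ppm")
```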
Response functions for neutron skyshine analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gui, A.A.; Shultis, J.K.; Faw, R.E.
1997-02-01
Neutron and associated secondary photon line-beam response functions (LBRFs) for point monodirectional neutron sources are generated using the MCNP Monte Carlo code for use in neutron skyshine analysis employing the integral line-beam method. The LBRFs are evaluated at 14 neutron source energies ranging from 0.01 to 14 MeV and at 18 emission angles from 1 to 170 deg, as measured from the source-to-detector axis. The neutron and associated secondary photon conical-beam response functions (CBRFs) for azimuthally symmetric neutron sources are also evaluated at 13 neutron source energies in the same energy range and at 13 polar angles of source collimation from 1 to 89 deg. The response functions are approximated by an empirical three-parameter function of the source-to-detector distance. These response function approximations are available for a source-to-detector distance up to 2,500 m and, for the first time, give dose equivalent responses that are required for modern radiological assessments. For the CBRFs, ground correction factors for neutrons and secondary photons are calculated and also approximated by empirical formulas for use in air-over-ground neutron skyshine problems with azimuthal symmetry. In addition, simple procedures are proposed for humidity and atmospheric density corrections.
Working Memory and Intelligence Are Highly Related Constructs, but Why?
ERIC Educational Resources Information Center
Colom, Roberto; Abad, Francisco J.; Quiroga, M. Angeles; Shih, Pei Chun; Flores-Mendoza, Carmen
2008-01-01
Working memory and the general factor of intelligence (g) are highly related constructs. However, we still don't know why. Some models support the central role of simple short-term storage, whereas others appeal to executive functions like the control of attention. Nevertheless, the available empirical evidence does not suffice to get an answer,…
Writing, Emotion, and the Brain: What Graduate School Taught Me about Healing.
ERIC Educational Resources Information Center
Brand, Alice G.
The trajectory of an English professor's scholarly interests has always involved emotion. From the simple question she asked herself during her graduate study (what do we feel?), she moved to the beneficial psychological effects of writing, then onto empirically identifying the emotions involved in writing, to discussions of social emotions, to an…
Clausius-Clapeyron Equation and Saturation Vapour Pressure: Simple Theory Reconciled with Practice
ERIC Educational Resources Information Center
Koutsoyiannis, Demetris
2012-01-01
While the Clausius-Clapeyron equation is very important as it determines the saturation vapour pressure, in practice it is replaced by empirical, typically Magnus-type, equations which are more accurate. It is shown that the reduced accuracy reflects an inconsistent assumption that the latent heat of vaporization is constant. Not only is this…
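For illustration, the sketch below compares the constant-latent-heat Clausius-Clapeyron integration with a Magnus-type formula using common textbook constants (Alduchov-Eskridge coefficients); these values are assumptions, not taken from the paper:

```python
import numpy as np

L = 2.501e6      # latent heat of vaporization near 0 degC, J/kg (assumed constant)
Rv = 461.5       # gas constant of water vapour, J/(kg K)
e0, T0 = 611.2, 273.15   # saturation pressure (Pa) at the reference temperature (K)

def es_clausius_clapeyron(T_celsius):
    """Integrated Clausius-Clapeyron with constant latent heat, Pa."""
    T = T_celsius + 273.15
    return e0 * np.exp(L / Rv * (1.0 / T0 - 1.0 / T))

def es_magnus(T_celsius):
    """Magnus-type empirical formula (Alduchov & Eskridge coefficients), Pa."""
    return 610.94 * np.exp(17.625 * T_celsius / (T_celsius + 243.04))

for t in (0.0, 10.0, 25.0, 40.0):
    print(f"{t:5.1f} degC  CC: {es_clausius_clapeyron(t):8.1f} Pa   Magnus: {es_magnus(t):8.1f} Pa")
```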
Stress Management Strategies of Secondary School Teachers in Nigeria. Short Report
ERIC Educational Resources Information Center
Arikewuyo, M. Olalekan
2004-01-01
The study provides empirical evidence for the management of stress by teachers of secondary schools in Nigeria. A total of 3466 teachers, drawn from secondary schools in Ogun State of Nigeria, returned their questionnaire for the study. Data were analysed using simple percentage and chi-square. The findings indicate that teachers frequently use…
A simple model for pollen-parent fecundity distributions in bee-pollinated forage legume polycrosses
USDA-ARS?s Scientific Manuscript database
Random mating or panmixis is a fundamental assumption in quantitative genetic theory. Random mating is sometimes thought to occur in actual fact although a large body of empirical work shows that this is often not the case in nature. Models have been developed to model many non-random mating phenome...
Fire spread characteristics determined in the laboratory
Richard C. Rothermel; Hal E. Anderson
1966-01-01
Fuel beds of ponderosa pine needles and white pine needles were burned under controlled environmental conditions to determine the effects of fuel moisture and windspeed upon the rate of fire spread. Empirical formulas are presented to show the effect of these parameters. A discussion of rate of spread and some simple experiments show how fuel may be preheated before...
Electronic Structure in Pi Systems: Part I. Huckel Theory with Electron Repulsion.
ERIC Educational Resources Information Center
Fox, Marye Anne; Matsen, F. A.
1985-01-01
Pi-CI theory is a simple, semi-empirical procedure which (like Huckel theory) treats pi and pseudo-pi orbitals; in addition, electron repulsion is explicitly included and molecular configurations are mixed. Results obtained from application of pi-CI to ethylene are superior to either the Huckel molecular orbital or valence bond theories. (JN)
NASA Technical Reports Server (NTRS)
Sengers, J. V.; Basu, R. S.; Sengers, J. M. H. L.
1981-01-01
A survey is presented of representative equations for various thermophysical properties of fluids in the critical region. Representative equations for the transport properties are included. Semi-empirical modifications of the theoretically predicted asymptotic critical behavior that yield simple and practical representations of the fluid properties in the critical region are emphasized.
I present a simple, macroecological model of fish abundance that was used to estimate the total number of non-migratory salmonids within the Willamette River Basin (western Oregon). The model begins with empirical point estimates of net primary production (NPP in g C/m2) in fore...
40 CFR Appendix C to Part 75 - Missing Data Estimation Procedures
Code of Federal Regulations, 2010 CFR
2010-07-01
... certification of a parametric, empirical, or process simulation method or model for calculating substitute data... available process simulation methods and models. 1.2 Petition Requirements Continuously monitor, determine... desulfurization, a corresponding empirical correlation or process simulation parametric method using appropriate...
An empirical approach for estimating stress-coupling lengths for marine-terminating glaciers
Enderlin, Ellyn; Hamilton, Gordon S.; O'Neel, Shad; Bartholomaus, Timothy C.; Morlighem, Mathieu; Holt, John W.
2016-01-01
Here we present a new empirical method to estimate the stress-coupling length (SCL) for marine-terminating glaciers using high-resolution observations. We use the empirically determined periodicity in resistive stress oscillations as a proxy for the SCL. Application of our empirical method to two well-studied tidewater glaciers (Helheim Glacier, SE Greenland, and Columbia Glacier, Alaska, USA) demonstrates that SCL estimates obtained using this approach are consistent with theory (i.e., can be parameterized as a function of the ice thickness) and with prior, independent SCL estimates. In order to accurately resolve stress variations, we suggest that similar empirical stress-coupling parameterizations be employed in future analyses of glacier dynamics.
Application of Tight-Binding Method in Atomistic Simulation of Covalent Materials
NASA Astrophysics Data System (ADS)
Isik, Ahmet
1994-05-01
The primary goal of this thesis is to develop and apply molecular dynamics simulation methods to elemental and binary covalent materials (Si, C, SiC) based on the tight-binding (TB) model of atomic cohesion in studies of bulk and deformation properties far from equilibrium. A second purpose is to compare results with those obtained using empirical interatomic potential functions in order to elucidate the applicability of models of interatomic interactions which do not explicitly take electronic structure effects into account. We have calculated the band-structure (attractive) part of the energy by using a basis set consisting of four atomic orbitals, one for the s state and three for the p states, constructing a TB Hamiltonian in the usual Slater-Koster parametrization, and diagonalizing the Hamiltonian matrix at the origin of the Brillouin zone. For the repulsive part of the energy we employ a function in the form of an inverse power law with screening, which is then fitted to the bulk moduli and lattice parameters of several stable polytypes, as calculated by ab initio methods in the literature. Three types of applications have been investigated to demonstrate the utility of the present TB models and their advantages relative to empirical potentials. In the case of Si we show that the calculated cohesive energy agrees to within a few percent with the ab initio local-density approximation (LDA) results. In addition, for clusters up to 10 atoms we find most of the energies and equilibrium structures to be in good agreement with LDA results (the failure of the empirical potential of Stillinger and Weber (SW) is well known). In the case of C clusters our TB model gives ring and chain structures which have been found both experimentally and by LDA calculations. In the second application we have applied our TB model of Si to investigate the core structure and energetics of partial dislocations on the glide plane and of the reconstruction antiphase defect (APD). For the 90° partial we show that the TB description gives the correct asymmetric reconstruction previously found by LDA. For the 30° partial, TB gives better bond angles in the dislocation core. For the APD we have obtained a binding energy and an activation energy for migration which are somewhat larger than the SW values, but the conclusion remains that the APD is a low-energy defect which should be quite mobile. In the third application we formulate a simple TB model for SiC where the coefficients of the two-center integrals in Si-C interactions are taken to be simple averages of Si-Si and C-C integrals. Fitting is done on two polytypes, zincblende and rocksalt structures, and a simulated annealing procedure is used. The TB results are found in good agreement with LDA and experimental results in the cohesive energy, acoustic phonon modes, and elastic constants. (Abstract shortened by UMI.)
Energy-conserving impact algorithm for the heel-strike phase of gait.
Kaplan, M L; Heegaard, J H
2000-06-01
Significant ground reaction forces exceeding body weight occur during the heel-strike phase of gait. The standard methods of analytical dynamics used to solve the impact problem do not accommodate well the heel-strike collision due to the persistent contact at the front foot and presence of contact at the back foot. These methods can cause a non-physical energy gain on the order of the total kinetic energy of the system at impact. Additionally, these standard techniques do not quantify the contact force, but the impulse over the impact. We present an energy-conserving impact algorithm based on the penalty method to solve for the ground reaction forces during gait. The rigid body assumptions are relaxed and the bodies are allowed to penetrate one another to a small degree. Associated with the deformation is a potential, from which the contact forces are derived. The empirical coefficient-of-restitution used in the standard approaches is replaced by two parameters to characterize the stiffness and the damping of the materials. We solve two simple heel-strike models to illustrate the shortcomings of a standard approach and the suitability of the proposed method for use with gait.
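A minimal sketch of a penalty-type normal contact force, with stiffness and damping parameters in place of a coefficient of restitution, is given below; the parameter values are hypothetical, not the authors':

```python
def penalty_contact_force(penetration, penetration_rate, k=2.0e5, c=1.5e3):
    """Normal contact force from a penalty formulation.

    penetration: interpenetration depth (m), positive when the bodies overlap
    penetration_rate: time derivative of the penetration (m/s)
    k, c: hypothetical stiffness (N/m) and damping (N s/m) parameters
    """
    if penetration <= 0.0:
        return 0.0                      # no contact, no force
    force = k * penetration + c * penetration_rate
    return max(force, 0.0)              # contact can only push, never pull

# Example: 2 mm penetration closing at 0.1 m/s during heel strike
print(f"contact force: {penalty_contact_force(0.002, 0.1):.1f} N")
```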
A facile and efficient approach for pore-opening detection of anodic aluminum oxide membranes
NASA Astrophysics Data System (ADS)
Cui, Jiewu; Wu, Yucheng; Wang, Yan; Zheng, Hongmei; Xu, Guangqing; Zhang, Xinyi
2012-05-01
Well-aligned porous anodic aluminum oxide (AAO) membranes are fabricated by a two-step anodization method. The oxide barrier layer of the AAO membrane must be removed to obtain a through-hole membrane for synthesizing nanowires and nanotubes of metals, semiconductors and conducting polymers. Removal of the oxide barrier layer and pore widening are of significant importance for the preparation of AAO membranes with through-hole pore morphology and the desired pore diameter. In the conventional pore-opening method, the AAO membrane, after removal of the aluminum substrate, is immersed in a chemical etching solution; this procedure is completely empirical and frequently results in catastrophic damage to the membrane. A very simple and efficient approach based on capillary action for detecting pore opening in AAO membranes is introduced in this paper; this method allows the pore opening to be detected visually and the pore diameter to be controlled precisely so as to obtain the desired morphology and pore size. Two kinds of AAO membranes with different pore shapes were obtained by different pore-opening methods. In addition, one-dimensional gradient gold nanowires were also fabricated by electrodeposition based on the AAO membranes.
Long, Keith R.; Singer, Donald A.
2001-01-01
Determining the economic viability of mineral deposits of various sizes and grades is a critical task in all phases of mineral supply, from land-use management to mine development. This study evaluates two simple tools for estimating the economic viability of porphyry copper deposits mined by open-pit, heap-leach methods when only limited information on these deposits is available. These two methods are useful for evaluating deposits that either (1) are undiscovered deposits predicted by a mineral resource assessment, or (2) have been discovered but for which little data has been collected or released. The first tool uses ordinary least-squares regression analysis of cost and operating data from selected deposits to estimate a predictive relationship between mining rate, itself estimated from deposit size, and capital and operating costs. The second method uses cost models developed by the U.S. Bureau of Mines (Camm, 1991), updated using appropriate cost indices. We find that the cost model method works best for estimating capital costs and the empirical model works best for estimating operating costs for mines to be developed in the United States.
An approach to define semantics for BPM systems interoperability
NASA Astrophysics Data System (ADS)
Rico, Mariela; Caliusco, María Laura; Chiotti, Omar; Rosa Galli, María
2015-04-01
This article proposes defining semantics for Business Process Management systems interoperability through the ontology of Electronic Business Documents (EBD) used to interchange the information required to perform cross-organizational processes. The semantic model generated allows aligning the enterprises' business processes to support cross-organizational processes by matching the business ontology of each business partner with the EBD ontology. The result is a flexible software architecture that allows dynamically defining cross-organizational business processes by reusing the EBD ontology. For developing the semantic model, a method is presented, which is based on a strategy for discovering entity features whose interpretation depends on the context, and representing them for enriching the ontology. The proposed method complements ontology learning techniques that cannot infer semantic features not represented in data sources. In order to improve the representation of these entity features, the method proposes using widely accepted ontologies for representing time entities and relations, physical quantities, measurement units, official country names, and currencies and funds, among others. When ontology reuse is not possible, the method proposes identifying whether that feature is simple or complex, and defines a strategy to be followed. An empirical validation of the approach has been performed through a case study.
Ghosting reduction method for color anaglyphs
NASA Astrophysics Data System (ADS)
Chang, An Jin; Kim, Hye Jin; Choi, Jae Wan; Yu, Ki Yun
2008-02-01
Anaglyph is the simplest and the most economical method for 3D visualization. However, anaglyph has several drawbacks such as loss of color or visual discomfort, e.g., region merging and the ghosting effect. In particular, the ghosting effect, which is caused by green light penetrating to the left eye, brings on a slight headache, dizziness and vertigo. Therefore, ghosting effects have to be reduced to improve the visual quality and make viewing of the anaglyph comfortable. Since the red lightness is increased by the penetrating green light, the lightness of the red band has to be compensated for. In this paper, a simple deghosting method is proposed using the red lightness difference of the left and right images. We detected the ghosting area using a criterion calculated from the statistics of the difference image, and then the red lightness of the anaglyph was changed to be brighter or darker according to the degree of the difference. The amount of change of red lightness was determined empirically. These adjustments simultaneously reduced the ghosting effect and preserved the color lightness within the non-ghosting area. The proposed deghosting method, which detects the ghosting area automatically and reduces the ghosting, works well.
NASA Astrophysics Data System (ADS)
Ong, Hiap Liew; Meyer, Robert B.; Hurd, Alan J.
1984-04-01
The effects of a short-range, arbitrary strength interfacial potential on the magnetic field, electric field, and optical field induced Freedericksz transition in a nematic liquid crystal cell are examined and the exact solution is obtained. By generalizing the criterion for the existence of a first-order optical field induced Freedericksz transition that was obtained previously [H. L. Ong, Phys. Rev. A 28, 2393 (1983)], the general criterion for the transition to be first order is obtained. Based on the existing experimental results, the possibility of surface induced first-order transitions is discussed and three simple empirical approaches are suggested for observing multistable orientation. The early results on the magnetic and electric fields induced Freedericksz transition and the inadequacy of the usual experimental observation methods (phase shift and capacitance measurements) are also discussed.
Estimating monotonic rates from biological data using local linear regression.
Olito, Colin; White, Craig R; Marshall, Dustin J; Barneche, Diego R
2017-03-01
Accessing many fundamental questions in biology begins with empirical estimation of simple monotonic rates of underlying biological processes. Across a variety of disciplines, ranging from physiology to biogeochemistry, these rates are routinely estimated from non-linear and noisy time series data using linear regression and ad hoc manual truncation of non-linearities. Here, we introduce the R package LoLinR, a flexible toolkit to implement local linear regression techniques to objectively and reproducibly estimate monotonic biological rates from non-linear time series data, and demonstrate possible applications using metabolic rate data. LoLinR provides methods to easily and reliably estimate monotonic rates from time series data in a way that is statistically robust, facilitates reproducible research and is applicable to a wide variety of research disciplines in the biological sciences. © 2017. Published by The Company of Biologists Ltd.
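The sketch below illustrates the underlying idea of kernel-weighted local linear fits for estimating a local rate from a noisy trace; it is a generic Python illustration under assumed data, not the LoLinR (R) interface:

```python
import numpy as np

def local_linear_slope(t, y, t0, bandwidth):
    """Estimate the local slope of y(t) at t0 with a tricube-weighted linear fit."""
    u = (t - t0) / bandwidth
    w = np.where(np.abs(u) < 1, (1 - np.abs(u) ** 3) ** 3, 0.0)   # tricube kernel weights
    X = np.column_stack([np.ones_like(t), t - t0])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)              # weighted least squares
    return beta[1]                                                # slope at t0

rng = np.random.default_rng(2)
t = np.linspace(0, 30, 200)                                       # e.g. minutes
y = 8.0 - 0.05 * t + 0.002 * t**2 + rng.normal(0, 0.05, t.size)   # noisy, non-linear trace

print(f"local rate near t = 5 : {local_linear_slope(t, y, 5.0, 6.0):+.4f} per min")
print(f"local rate near t = 25: {local_linear_slope(t, y, 25.0, 6.0):+.4f} per min")
```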
Biological materials by design.
Qin, Zhao; Dimas, Leon; Adler, David; Bratzel, Graham; Buehler, Markus J
2014-02-19
In this topical review we discuss recent advances in the use of physical insight into the way biological materials function, to design novel engineered materials 'from scratch', or from the level of fundamental building blocks upwards and by using computational multiscale methods that link chemistry to material function. We present studies that connect advances in multiscale hierarchical material structuring with material synthesis and testing, review case studies of wood and other biological materials, and illustrate how engineered fiber composites and bulk materials are designed, modeled, and then synthesized and tested experimentally. The integration of experiment and simulation in multiscale design opens new avenues to explore the physics of materials from a fundamental perspective, and using complementary strengths from models and empirical techniques. Recent developments in this field illustrate a new paradigm by which complex material functionality is achieved through hierarchical structuring in spite of simple material constituents.
AAA gunner model based on observer theory. [predicting a gunner's tracking response]
NASA Technical Reports Server (NTRS)
Kou, R. S.; Glass, B. C.; Day, C. N.; Vikmanis, M. M.
1978-01-01
The Luenberger observer theory is used to develop a predictive model of a gunner's tracking response in antiaircraft artillery systems. This model is composed of an observer, a feedback controller and a remnant element. An important feature of the model is that the structure is simple, hence a computer simulation requires only a short execution time. A parameter identification program based on the least squares curve fitting method and the Gauss Newton gradient algorithm is developed to determine the parameter values of the gunner model. Thus, a systematic procedure exists for identifying model parameters for a given antiaircraft tracking task. Model predictions of tracking errors are compared with human tracking data obtained from manned simulation experiments. Model predictions are in excellent agreement with the empirical data for several flyby and maneuvering target trajectories.
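A minimal discrete-time Luenberger observer for a generic linear tracking model is sketched below to illustrate the structure such a gunner model builds on; the matrices and gain are hypothetical, not the identified parameters of the paper:

```python
import numpy as np

dt = 0.05
A = np.array([[1.0, dt], [0.0, 1.0]])     # simple position/velocity tracking dynamics
B = np.array([[0.0], [dt]])
C = np.array([[1.0, 0.0]])                # only position (tracking error) is measured
L = np.array([[0.6], [2.0]])              # hypothetical observer gain

def observer_step(x_hat, u, y):
    """One Luenberger observer update: predict, then correct with the measurement residual."""
    y_hat = C @ x_hat
    return A @ x_hat + B @ u + L @ (y - y_hat)

x_hat = np.zeros((2, 1))
u = np.array([[0.2]])                      # constant control input (for illustration)
for y_meas in (1.0, 0.9, 0.85, 0.7):       # hypothetical measured tracking errors
    x_hat = observer_step(x_hat, u, np.array([[y_meas]]))
print("estimated state [position, velocity]:", x_hat.ravel())
```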
Density of Jatropha curcas Seed Oil and its Methyl Esters: Measurement and Estimations
NASA Astrophysics Data System (ADS)
Veny, Harumi; Baroutian, Saeid; Aroua, Mohamed Kheireddine; Hasan, Masitah; Raman, Abdul Aziz; Sulaiman, Nik Meriam Nik
2009-04-01
Density data as a function of temperature have been measured for Jatropha curcas seed oil, as well as for biodiesel jatropha methyl esters, at temperatures from above their melting points to 90 °C. The data obtained were used to validate the method proposed by Spencer and Danner using a modified Rackett equation. The experimental and estimated density values using the modified Rackett equation gave almost identical values with average absolute percent deviations less than 0.03% for the jatropha oil and 0.04% for the jatropha methyl esters. The Janarthanan empirical equation was also employed to predict jatropha biodiesel densities. This equation performed equally well with average absolute percent deviations within 0.05%. Two simple linear equations for densities of jatropha oil and its methyl esters are also proposed in this study.
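A minimal sketch of the Spencer-Danner modified Rackett equation for the saturated liquid density of a pure compound is given below; the pseudo-critical constants are hypothetical placeholders, not the jatropha data of the study:

```python
R = 8.314  # J/(mol K)

def rackett_density(T, Tc, Pc, Zra, M):
    """Saturated liquid density from the Spencer-Danner modified Rackett equation.

    T, Tc in K; Pc in Pa; Zra is the Rackett compressibility factor; M in kg/mol.
    """
    Tr = T / Tc
    V = (R * Tc / Pc) * Zra ** (1.0 + (1.0 - Tr) ** (2.0 / 7.0))   # molar volume, m^3/mol
    return M / V                                                    # density, kg/m^3

# Purely illustrative (hypothetical) pseudo-critical constants for a fatty acid methyl ester
Tc, Pc, Zra, M = 700.0, 1.3e6, 0.23, 0.295
for T in (313.15, 333.15, 353.15):
    print(f"T = {T - 273.15:4.1f} C  rho = {rackett_density(T, Tc, Pc, Zra, M):6.1f} kg/m^3")
```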
Evaluation of Preduster in Cement Industry Based on Computational Fluid Dynamic
NASA Astrophysics Data System (ADS)
Septiani, E. L.; Widiyastuti, W.; Djafaar, A.; Ghozali, I.; Pribadi, H. M.
2017-10-01
Ash-laden hot air from the clinker in the cement industry is used to reduce the water content of coal; however, it may still contain a large amount of ash even after treatment by a preduster. This study investigated the performance of a preduster acting as a cyclone separator in the cement industry using the Computational Fluid Dynamics method. In general, the best-performing cyclone combines relatively high collection efficiency with a low pressure drop. The most accurate yet simple turbulence model, the Reynolds-Averaged Navier-Stokes (RANS) standard k-ε model, combined with a Lagrangian particle-tracking model, was used to solve the problem. The quantities evaluated in the simulation are the flow pattern in the cyclone, the outlet pressure, and the collection efficiency of the preduster. The applied model predicted well when compared with the most accurate empirical model and with the outlet pressure obtained from experimental measurement.
NASA Astrophysics Data System (ADS)
Ohmer, Marc; Liesch, Tanja; Goeppert, Nadine; Goldscheider, Nico
2017-11-01
The selection of the best possible method to interpolate a continuous groundwater surface from point data of groundwater levels is a controversial issue. In the present study four deterministic and five geostatistical interpolation methods (global polynomial interpolation, local polynomial interpolation, inverse distance weighting, radial basis function, simple-, ordinary-, universal-, empirical Bayesian and co-Kriging) and six error statistics (ME, MAE, MAPE, RMSE, RMSSE, Pearson R) were examined for a Jurassic karst aquifer and a Quaternary alluvial aquifer. We investigated how uncertainty from the chosen interpolation method may propagate into the calculation of the estimated vertical groundwater exchange between the aquifers. Furthermore, we validated the results with eco-hydrogeological data, including a comparison between calculated groundwater depths and the geographic locations of karst springs, wetlands and surface waters. These results show that calculated inter-aquifer exchange rates based on different interpolations of groundwater potentials may vary greatly depending on the chosen interpolation method (by a factor of more than 10). Therefore, the choice of an interpolation method should be made with care, taking different error measures as well as additional data for plausibility control into account. The most accurate results have been obtained with co-Kriging incorporating secondary data (e.g. topography, river levels).
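As a simple illustration of one of the compared methods, the sketch below performs inverse distance weighting at an unsampled location; the well coordinates and heads are hypothetical:

```python
import numpy as np

def idw(xy_obs, z_obs, xy_target, power=2.0):
    """Inverse distance weighting estimate at a target location."""
    d = np.linalg.norm(xy_obs - xy_target, axis=1)
    if np.any(d == 0):                       # target coincides with an observation well
        return z_obs[np.argmin(d)]
    w = 1.0 / d ** power
    return np.sum(w * z_obs) / np.sum(w)

# Hypothetical observation wells: coordinates (m) and groundwater heads (m a.s.l.)
wells = np.array([[0.0, 0.0], [500.0, 120.0], [250.0, 600.0], [800.0, 450.0]])
heads = np.array([431.2, 428.7, 433.5, 427.9])

print(f"interpolated head: {idw(wells, heads, np.array([400.0, 300.0])):.2f} m a.s.l.")
```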
A preliminary study of muscular artifact cancellation in single-channel EEG.
Chen, Xun; Liu, Aiping; Peng, Hu; Ward, Rabab K
2014-10-01
Electroencephalogram (EEG) recordings are often contaminated with muscular artifacts that strongly obscure the EEG signals and complicate their analysis. For the conventional case, where the EEG recordings are obtained simultaneously over many EEG channels, there exists a considerable range of methods for removing muscular artifacts. In recent years, there has been an increasing trend to use EEG information in ambulatory healthcare and related physiological signal monitoring systems. For practical reasons, a single EEG channel system must be used in these situations. Unfortunately, there exist few studies on muscular artifact cancellation in single-channel EEG recordings. To address this issue, in this preliminary study, we propose a simple, yet effective, method to achieve muscular artifact cancellation for the single-channel EEG case. This method is a combination of the ensemble empirical mode decomposition (EEMD) and the joint blind source separation (JBSS) techniques. We also conduct a study that compares and investigates all possible single-channel solutions and demonstrate the performance of these methods using numerical simulations and real-life applications. The proposed method is shown to significantly outperform all other methods. It can successfully remove muscular artifacts without altering the underlying EEG activity. It is thus a promising tool for use in ambulatory healthcare systems.
A Bayes linear Bayes method for estimation of correlated event rates.
Quigley, John; Wilson, Kevin J; Walls, Lesley; Bedford, Tim
2013-12-01
Typically, full Bayesian estimation of correlated event rates can be computationally challenging since estimators are intractable. When estimation of event rates represents one activity within a larger modeling process, there is an incentive to develop more efficient inference than provided by a full Bayesian model. We develop a new subjective inference method for correlated event rates based on a Bayes linear Bayes model under the assumption that events are generated from a homogeneous Poisson process. To reduce the elicitation burden we introduce homogenization factors to the model and, as an alternative to a subjective prior, an empirical method using the method of moments is developed. Inference under the new method is compared against estimates obtained under a full Bayesian model, which takes a multivariate gamma prior, where the predictive and posterior distributions are derived in terms of well-known functions. The mathematical properties of both models are presented. A simulation study shows that the Bayes linear Bayes inference method and the full Bayesian model provide equally reliable estimates. An illustrative example, motivated by a problem of estimating correlated event rates across different users in a simple supply chain, shows how ignoring the correlation leads to biased estimation of event rates. © 2013 Society for Risk Analysis.
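For orientation, the sketch below shows the conjugate gamma-Poisson update for a single event rate, which is the basic building block such models extend; it is not the authors' Bayes linear Bayes formulation, and the prior and data are hypothetical:

```python
# Conjugate update for a Poisson event rate with a Gamma(a, b) prior (b = rate parameter)
a_prior, b_prior = 2.0, 4.0        # hypothetical prior: mean rate a/b = 0.5 events per unit exposure

events, exposure = 7, 10.0         # hypothetical observed data
a_post = a_prior + events
b_post = b_prior + exposure

post_mean = a_post / b_post
post_var = a_post / b_post ** 2
print(f"posterior mean rate = {post_mean:.3f}, posterior sd = {post_var ** 0.5:.3f}")
```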
Simple versus complex models of trait evolution and stasis as a response to environmental change
NASA Astrophysics Data System (ADS)
Hunt, Gene; Hopkins, Melanie J.; Lidgard, Scott
2015-04-01
Previous analyses of evolutionary patterns, or modes, in fossil lineages have focused overwhelmingly on three simple models: stasis, random walks, and directional evolution. Here we use likelihood methods to fit an expanded set of evolutionary models to a large compilation of ancestor-descendant series of populations from the fossil record. In addition to the standard three models, we assess more complex models with punctuations and shifts from one evolutionary mode to another. As in previous studies, we find that stasis is common in the fossil record, as is a strict version of stasis that entails no real evolutionary changes. Incidence of directional evolution is relatively low (13%), but higher than in previous studies because our analytical approach can more sensitively detect noisy trends. Complex evolutionary models are often favored, overwhelmingly so for sequences comprising many samples. This finding is consistent with evolutionary dynamics that are, in reality, more complex than any of the models we consider. Furthermore, the timing of shifts in evolutionary dynamics varies among traits measured from the same series. Finally, we use our empirical collection of evolutionary sequences and a long and highly resolved proxy for global climate to inform simulations in which traits adaptively track temperature changes over time. When realistically calibrated, we find that this simple model can reproduce important aspects of our paleontological results. We conclude that observed paleontological patterns, including the prevalence of stasis, need not be inconsistent with adaptive evolution, even in the face of unstable physical environments.
Redundant correlation effect on personalized recommendation
NASA Astrophysics Data System (ADS)
Qiu, Tian; Han, Teng-Yue; Zhong, Li-Xin; Zhang, Zi-Ke; Chen, Guang
2014-02-01
The high-order redundant correlation effect is investigated for a hybrid algorithm of heat conduction and mass diffusion (HHM), through both heat conduction biased (HCB) and mass diffusion biased (MDB) correlation redundancy elimination processes. The HCB and MDB algorithms do not introduce any additional tunable parameters, but keep the simple character of the original HHM. Based on two empirical datasets, the Netflix and MovieLens, the HCB and MDB are found to show better recommendation accuracy for both the overall objects and the cold objects than the HHM algorithm. Our work suggests that properly eliminating the high-order redundant correlations can provide a simple and effective approach to accurate recommendation.
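The sketch below shows the widely used lambda-weighted hybrid of mass diffusion and heat conduction on a toy user-object bipartite network, the standard construction that HHM-type recommenders build on; it is not the HCB/MDB redundancy-elimination procedure of the paper, and the rating matrix is invented.

```python
# Hedged sketch: lambda-weighted hybrid of heat conduction and mass diffusion
# on a user-object bipartite network. This is the standard hybrid construction
# that HHM-type recommenders build on, not the HCB/MDB redundancy-elimination
# procedure of the paper; the tiny rating matrix is invented.
import numpy as np

A = np.array([[1, 1, 0, 0],      # A[u, o] = 1 if user u collected object o
              [1, 0, 1, 0],
              [0, 1, 1, 1]])
k_user = A.sum(axis=1)           # user degrees
k_obj = A.sum(axis=0)            # object degrees

def hybrid_scores(user, lam=0.5):
    """Scores for `user`; lam -> 1 is mass diffusion, lam -> 0 is heat conduction."""
    n_obj = A.shape[1]
    W = np.zeros((n_obj, n_obj))
    for a in range(n_obj):
        for b in range(n_obj):
            shared = np.sum(A[:, a] * A[:, b] / k_user)          # common users
            W[a, b] = shared / (k_obj[a] ** (1 - lam) * k_obj[b] ** lam)
    scores = W @ A[user]                                         # spread resources
    scores[A[user] == 1] = -np.inf                               # hide owned items
    return scores

print(hybrid_scores(user=0, lam=0.5))
```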
A study of a diffusive model of asset returns and an empirical analysis of financial markets
NASA Astrophysics Data System (ADS)
Alejandro Quinones, Angel Luis
A diffusive model for market dynamics is studied and the predictions of the model are compared to real financial markets. The model has a non-constant diffusion coefficient which depends both on the asset value and the time. A general solution for the distribution of returns is obtained and shown to match the results of computer simulations for two simple cases, piecewise linear and quadratic diffusion. The effects of discreteness in the market dynamics on the model are also studied. For the quadratic diffusion case, a type of phase transition leading to fat tails is observed as the discrete distribution approaches the continuum limit. It is also found that the model captures some of the empirical stylized facts observed in real markets, including fat-tails and scaling behavior in the distribution of returns. An analysis of empirical data for the EUR/USD currency exchange rate and the S&P 500 index is performed. Both markets show time scaling behavior consistent with a value of 1/2 for the Hurst exponent. Finally, the results show that the distribution of returns for the two markets is well fitted by the model, and the corresponding empirical diffusion coefficients are determined.
NASA Astrophysics Data System (ADS)
Boutillier, J.; Ehrhardt, L.; De Mezzo, S.; Deck, C.; Magnan, P.; Naz, P.; Willinger, R.
2018-03-01
With the increasing use of improvised explosive devices (IEDs), the need for better mitigation, either for building integrity or for personal security, increases in importance. Before focusing on the interaction of the shock wave with a target and the potential associated damage, knowledge must be acquired regarding the nature of the blast threat, i.e., the pressure-time history. This requirement motivates gaining further insight into the triple point (TP) path, in order to know precisely which regime the target will encounter (simple reflection or Mach reflection). Within this context, the purpose of this study is to evaluate three existing TP path empirical models, which in turn are used in other empirical models for the determination of the pressure profile. These three TP models are the empirical function of Kinney, the Unified Facilities Criteria (UFC) curves, and the model of the Natural Resources Defense Council (NRDC). As discrepancies are observed between these models, new experimental data were obtained to test their reliability and a new promising formulation is proposed for scaled heights of burst ranging from 24.6 to 172.9 cm/kg^{1/3}.
Megías-Alguacil, David; Fischer, Peter; Windhab, Erich J
2004-06-15
We present experimental investigations on droplet deformation under simple shear flow conditions, using a computer-controlled parallel band apparatus and an optical device which allows us to record the time dependence of the droplet shape. Several methods are applied to determine the interfacial tension from the observed shape and relaxation mechanism. Specific software developed in our laboratory allows the droplet to be fixed in a certain position for extended, effectively indefinite, times. This is an advantage over most other work done in this area, where only limited time is available. In our experiments, the transient deformation of sheared droplets can be observed to reach the steady state. Both the droplet and the continuous phase of the measured systems were Newtonian. Droplet deformation, orientation angle and retraction were studied and compared to several models. The interfacial tension of the different systems was calculated using the theories of Taylor, Rallison, and Hinch and Acrivos. The results obtained from the analysis of the droplet deformation were in very good agreement with drop detachment experiments of Feigl and co-workers. The study of the orientation angle shows qualitative agreement with the theory of Hinch and Acrivos but reveals larger quantitative discrepancies for several empirical fitting parameters of the model used. Analysis of the relaxation of sheared drops provided estimates of the interfacial tension that were in very good agreement with the steady-state measurements.
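For reference (the abstract does not quote the formulas), a minimal sketch of Taylor's small-deformation relation, with invented measurement values, shows how an interfacial tension is typically backed out of a measured steady deformation.

```python
# Hedged sketch: back out an interfacial tension from a measured steady
# deformation using Taylor's small-deformation relation
#   D = Ca * (19*lam + 16) / (16*(lam + 1)),   Ca = mu_m * gdot * R / sigma.
# All numbers below are invented for illustration.
mu_m = 1.0                      # matrix viscosity, Pa s
mu_d = 2.0                      # droplet viscosity, Pa s
gdot = 1.5                      # shear rate, 1/s
R = 50e-6                       # undeformed droplet radius, m
L, B = 62e-6, 41e-6             # measured long/short half-axes, m

D = (L - B) / (L + B)           # deformation parameter
lam = mu_d / mu_m               # viscosity ratio
sigma = mu_m * gdot * R * (19 * lam + 16) / (16 * (lam + 1) * D)
print(f"interfacial tension ~ {sigma * 1e3:.2f} mN/m")
```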
Zhao, Xueli; Arsenault, Andre; Lavoie, Kim L; Meloche, Bernard; Bacon, Simon L
2007-01-01
Forearm Endothelial Function (FEF) is a marker that has been shown to discriminate patients with cardiovascular disease (CVD). FEF has been assessed using several parameters: the Rate of Uptake Ratio (RUR), EWUR (Elbow-to-Wrist Uptake Ratio) and EWRUR (Elbow-to-Wrist Relative Uptake Ratio). However, the modeling functions of FEF require more robust models. The present study was designed to compare an empirical method with quantitative modeling techniques to better estimate the physiological parameters and understand the complex dynamic processes. The fitted time activity curves of the forearms, estimating blood and muscle components, were assessed using both an empirical method and a two-compartment model. Although correlational analyses suggested a good correlation between the methods for RUR (r=.90) and EWUR (r=.79), but not EWRUR (r=.34), Bland-Altman plots found poor agreement between the methods for all 3 parameters. These results indicate that there is a large discrepancy between the empirical and computational methods for FEF. Further work is needed to establish the physiological and mathematical validity of the 2 modeling methods.
ERIC Educational Resources Information Center
Slezak, Diego Fernandez; Sigman, Mariano
2012-01-01
The time spent making a decision and its quality define a widely studied trade-off. Some models suggest that the time spent is set to optimize reward, as verified empirically in simple-decision making experiments. However, in a more complex perspective compromising components of regulation focus, ambitions, fear, risk and social variables,…
Describing dengue epidemics: Insights from simple mechanistic models
NASA Astrophysics Data System (ADS)
Aguiar, Maíra; Stollenwerk, Nico; Kooi, Bob W.
2012-09-01
We present a set of nested models to be applied to dengue fever epidemiology. We perform a qualitative study in order to show how much complexity we really need to add to epidemiological models to be able to describe the fluctuations observed in empirical dengue hemorrhagic fever incidence data, offering a promising perspective on the inference of parameter values from dengue case notifications.
Rational and Empirical Play in the Simple Hot Potato Game
ERIC Educational Resources Information Center
Butts, Carter T.; Rode, David C.
2007-01-01
We define a "hot potato" to be a good that may be traded a finite number of times, but which becomes a bad if and when it can no longer be exchanged. We describe a game involving such goods, and show that non-acceptance is a unique subgame perfect Nash equilibrium for rational egoists. Contrastingly, experiments with human subjects show…
Wenchi Jin; Hong S. He; Frank R. Thompson
2016-01-01
Process-based forest ecosystem models vary from simple physiological and complex physiological to hybrid empirical-physiological models. Previous studies indicate that complex models provide the best prediction at plot scale with a temporal extent of less than 10 years; however, it is largely untested whether complex models outperform the other two types of models...
Indicators as Judgment Devices: An Empirical Study of Citizen Bibliometrics in Research Evaluation
ERIC Educational Resources Information Center
Hammarfelt, Björn; Rushforth, Alexander D.
2017-01-01
A researcher's number of publications has been a fundamental merit in the competition for academic positions since the late 18th century. Today, the simple counting of publications has been supplemented with a whole range of bibliometric indicators, which supposedly measure not only the volume of research but also its impact. In this study, we…
Guthrie, T H; Mahizhnan, P
1983-01-01
A patient with acute nonlymphocytic leukemia developed progressive lung infiltrates and unremitting fevers during a profound neutropenic state. Legionnaires' disease was diagnosed by simple immunologic studies and successfully treated with erythromycin. This index case alerts physicians to a treatable infection that would not normally be susceptible to the empiric antibiotic regimens given to neutropenic patients with fever.
Science support for the Earth radiation budget sensor on the Nimbus-7 spacecraft
NASA Technical Reports Server (NTRS)
Ingersoll, A. P.
1982-01-01
Experimental data supporting the Earth radiation budget sensor on the Nimbus-7 satellite are presented. The data deal with the empirical relations between radiative flux, cloudiness, and other meteorological parameters; the response of a zonal climate ice sheet model to the orbital perturbations during the quaternary ice ages; and a simple parameterization for ice sheet ablation rate.
An Empirical Test of Oklahoma's A-F School Grades
ERIC Educational Resources Information Center
Adams, Curt M.; Forsyth, Patrick B.; Ware, Jordan; Mwavita, Mwarumba; Barnes, Laura L.; Khojasteb, Jam
2016-01-01
Oklahoma is one of 16 states electing to use an A-F letter grade as an indicator of school quality. On the surface, letter grades are an attractive policy instrument for school improvement; they are seemingly clear, simple, and easy to interpret. Evidence, however, on the use of letter grades as an instrument to rank and improve schools is scant…
ERIC Educational Resources Information Center
Talbot, Robert M., III
2017-01-01
There is a clear need for valid and reliable instrumentation that measures teacher knowledge. However, the process of investigating and making a case for instrument validity is not a simple undertaking; rather, it is a complex endeavor. This paper presents the empirical case of one aspect of such an instrument validation effort. The particular…
Mark L. Messonnier; John C. Bergstrom; Chrisopher M. Cornwell; R. Jeff Teasley; H. Ken Cordell
2000-01-01
Simple nonresponse and selection biases that may occur in survey research such as contingent valuation applications are discussed and tested. Correction mechanisms for these types of biases are demonstrated. Results indicate the importance of testing and correcting for unit and item nonresponse bias in contingent valuation survey data. When sample nonresponse and...
Microscopic study reveals the singular origins of growth
NASA Astrophysics Data System (ADS)
Yaari, G.; Nowak, A.; Rakocy, K.; Solomon, S.
2008-04-01
Anderson [Science 177, 293 (1972)] proposed the concept of complexity in order to describe the emergence and growth of macroscopic collective patterns out of the simple interactions of many microscopic agents. In the physical sciences this paradigm was implemented systematically and confirmed repeatedly by successful confrontation with reality. In the social sciences, however, the possibilities to stage experiments to validate it are limited. During the 1990s, a series of dramatic political and economic events provided the opportunity to do so. We exploit the resulting empirical evidence to validate a simple agent-based alternative to the classical logistic dynamics. The post-liberalization empirical data from Poland confirm the theoretical prediction that the dynamics is dominated by singular rare events which ensure the resilience and adaptability of the system. We have shown that growth is led by a few singular “growth centers” (Fig. 1) that initially developed at a tremendous rate (Fig. 3), followed by a diffusion process to the rest of the country, leading to a positive growth rate uniform across the counties. In addition to the interdisciplinary unifying potential of our generic formal approach, the present work reveals the strong causal ties between the “softer” social conditions and their “hard” economic consequences.
2011-01-01
Background While many pandemic preparedness plans have promoted disease control efforts to lower and delay an epidemic peak, analytical methods for determining the required control effort and making statistical inferences have yet to be sought. As a first step to address this issue, we present a theoretical basis on which to assess the impact of an early intervention on the epidemic peak, employing a simple epidemic model. Methods We focus on estimating the impact of an early control effort (e.g. unsuccessful containment), assuming that the transmission rate abruptly increases when control is discontinued. We provide analytical expressions for the magnitude and time of the epidemic peak, employing approximate logistic and logarithmic-form solutions for the latter. Empirical influenza data (H1N1-2009) in Japan are analyzed to estimate the effect of the summer holiday period in lowering and delaying the peak in 2009. Results Our model estimates that the epidemic peak of the 2009 pandemic was delayed by 21 days due to the summer holiday period. The decline in the peak appears to be a nonlinear function of the control-associated reduction in the reproduction number. Peak delay is shown to critically depend on the fraction of initially immune individuals. Conclusions The proposed modeling approaches offer methodological avenues to assess empirical data and to objectively estimate the required control effort to lower and delay an epidemic peak. Analytical findings support a critical need to conduct a population-wide serological survey as a prior requirement for estimating the time of peak. PMID:21269441
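As a schematic companion to the analysis described (not the authors' approximate logistic and logarithmic-form expressions), a plain SIR simulation with a temporary reduction in transmission illustrates how an early, discontinued control effort shifts the epidemic peak; all parameter values are illustrative.

```python
# Hedged sketch: a plain SIR model in which the transmission rate is reduced
# during a temporary control period and restored afterwards, illustrating how
# an early, discontinued control effort shifts (and, depending on parameters,
# lowers) the epidemic peak. Values are illustrative, not the fitted
# H1N1-2009 quantities from the paper.
def sir_peak(beta0, gamma, control=(None, None, 1.0), days=365, dt=0.1, i0=1e-4):
    t_on, t_off, factor = control
    S, I = 1.0 - i0, i0
    peak_I, peak_t = I, 0.0
    for step in range(int(days / dt)):
        t = step * dt
        beta = beta0 * factor if (t_on is not None and t_on <= t < t_off) else beta0
        dS = -beta * S * I
        dI = beta * S * I - gamma * I
        S, I = S + dS * dt, I + dI * dt
        if I > peak_I:
            peak_I, peak_t = I, t
    return peak_t, peak_I

no_ctrl = sir_peak(beta0=0.5, gamma=1 / 3)
early_ctrl = sir_peak(beta0=0.5, gamma=1 / 3, control=(0.0, 40.0, 0.6))
print("peak (day, height) without control:", [round(v, 3) for v in no_ctrl])
print("peak (day, height) with early control:", [round(v, 3) for v in early_ctrl])
```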
Eysenbach, G.; Kohler, Ch.
2003-01-01
While health information is often said to be the most sought-after information on the web, empirical data on the actual frequency of health-related searches on the web are missing. In the present study we aimed to determine the prevalence of health-related searches on the web by analyzing search terms entered by people into popular search engines. We also made some preliminary attempts at qualitatively describing and classifying these searches. Occasional difficulties in determining what constitutes a “health-related” search led us to propose and validate a simple method to automatically classify a search string as “health-related”. This method is based on determining the proportion of pages on the web containing the search string and the word “health”, as a proportion of the total number of pages with the search string alone. Using human codings as the gold standard, we plotted a ROC curve and determined empirically that if this “co-occurrence rate” is larger than 35%, the search string can be said to be health-related (sensitivity: 85.2%, specificity 80.4%). The results of our “human” codings of search queries determined that about 4.5% of all searches are “health-related”. We estimate that globally a minimum of 6.75 million health-related searches are being conducted on the web every day, which is roughly the same number of searches as were conducted on the NLM Medlars system in the full year of 1996. PMID:14728167
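The decision rule described translates directly into a small helper function; the page counts passed in below are placeholders, since the abstract does not specify the search engine interface used to obtain them.

```python
# Sketch of the decision rule described: a query counts as "health-related"
# when pages containing both the query and the word "health" exceed 35% of
# the pages containing the query alone. The page counts are placeholders;
# the abstract does not specify how such counts were retrieved.
def is_health_related(pages_with_query_and_health: int,
                      pages_with_query: int,
                      threshold: float = 0.35) -> bool:
    if pages_with_query == 0:
        return False
    co_occurrence_rate = pages_with_query_and_health / pages_with_query
    return co_occurrence_rate > threshold

print(is_health_related(420_000, 900_000))   # about 0.47 -> True
print(is_health_related(50_000, 900_000))    # about 0.06 -> False
```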
Electrostatics of cysteine residues in proteins: parameterization and validation of a simple model.
Salsbury, Freddie R; Poole, Leslie B; Fetrow, Jacquelyn S
2012-11-01
One of the most popular and simple models for the calculation of pKa values from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKa values. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKa values; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKa values. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa values in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues. Copyright © 2012 Wiley Periodicals, Inc.
From damselflies to pterosaurs: how burst and sustainable flight performance scale with size.
Marden, J H
1994-04-01
Recent empirical data for short-burst lift and power production of flying animals indicate that mass-specific lift and power output scale independently (lift) or slightly positively (power) with increasing size. These results contradict previous theory, as well as simple observation, which argues for degradation of flight performance with increasing size. Here, empirical measures of lift and power during short-burst exertion are combined with empirically based estimates of maximum muscle power output in order to predict how burst and sustainable performance scale with body size. The resulting model is used to estimate performance of the largest extant flying birds and insects, along with the largest flying animals known from fossils. These estimates indicate that burst flight performance capacities of even the largest extinct fliers (estimated mass 250 kg) would allow takeoff from the ground; however, limitations on sustainable power output should constrain capacity for continuous flight at body sizes exceeding 0.003-1.0 kg, depending on relative wing length and flight muscle mass.
NASA Astrophysics Data System (ADS)
Chen, Dongyue; Lin, Jianhui; Li, Yanping
2018-06-01
Complementary ensemble empirical mode decomposition (CEEMD) has been developed for the mode-mixing problem in the empirical mode decomposition (EMD) method. Compared to the ensemble empirical mode decomposition (EEMD), the CEEMD method reduces residue noise in the signal reconstruction. Both CEEMD and EEMD need a sufficiently large ensemble number to reduce the residue noise, which incurs a high computational cost. Moreover, the selection of intrinsic mode functions (IMFs) for further analysis usually depends on experience. A modified CEEMD method and an IMF evaluation index are proposed with the aim of reducing the computational cost and selecting IMFs automatically. A simulated signal and in-service high-speed train gearbox vibration signals are employed to validate the proposed method in this paper. The results demonstrate that the modified CEEMD can decompose the signal efficiently with less computational cost, and the IMF evaluation index can select the meaningful IMFs automatically.
NASA Astrophysics Data System (ADS)
Chung, Jen-Kuang
2013-09-01
A stochastic method called the random vibration theory (Boore, 1983) has been used to estimate the peak ground motions caused by shallow moderate-to-large earthquakes in the Taiwan area. Adopting Brune's ω-square source spectrum, attenuation models for PGA and PGV were derived from path-dependent parameters which were empirically modeled from about one thousand accelerograms recorded at reference sites mostly located in a mountain area and which have been recognized as rock sites without soil amplification. Consequently, the predicted horizontal peak ground motions at the reference sites are generally comparable to those observed. A total of 11,915 accelerograms recorded from 735 free-field stations of the Taiwan Strong Motion Network (TSMN) were used to estimate the site factors by taking the motions from the predictive models as references. Results from soil sites reveal site amplification factors of approximately 2.0 ~ 3.5 for PGA and about 1.3 ~ 2.6 for PGV. Finally, as a result of amplitude corrections with those empirical site factors, about 75% of analyzed earthquakes are well constrained in ground motion predictions, having average misfits ranging from 0.30 to 0.50. In addition, two simple indices, R_0.57 and R_0.38, are proposed in this study to evaluate the validity of intensity map prediction for public information reports. The average percentages of qualified stations with peak acceleration residuals less than R_0.57 and R_0.38 can reach 75% and 54%, respectively, for most earthquakes. Such a performance would be good enough to produce a faithful intensity map for a moderate scenario event in the Taiwan region.
NASA Astrophysics Data System (ADS)
Ishtiaq, K. S.; Abdul-Aziz, O. I.
2014-12-01
We developed a scaling-based, simple empirical model for spatio-temporally robust prediction of the diurnal cycles of wetland net ecosystem exchange (NEE) by using an extended stochastic harmonic algorithm (ESHA). A reference-time observation from each diurnal cycle was utilized as the scaling parameter to normalize and collapse hourly observed NEE of different days into a single, dimensionless diurnal curve. The modeling concept was tested by parameterizing the unique diurnal curve and predicting hourly NEE of May to October (summer growing and fall seasons) between 2002 and 2012 for diverse wetland ecosystems, as available in the U.S. AmeriFLUX network. As an example, the Taylor Slough short hydroperiod marsh site in the Florida Everglades had data for four consecutive growing seasons from 2009 to 2012; results showed impressive modeling efficiency (coefficient of determination, R2 = 0.66) and accuracy (ratio of root-mean-square-error to the standard deviation of observations, RSR = 0.58). Model validation was performed with an independent year of NEE data, indicating equally impressive performance (R2 = 0.68, RSR = 0.57). The model included a parsimonious set of estimated parameters, which exhibited spatio-temporal robustness by collapsing onto narrow ranges. Model robustness was further investigated by analytically deriving and quantifying parameter sensitivity coefficients and a first-order uncertainty measure. The relatively robust, empirical NEE model can be applied for simulating continuous (e.g., hourly) NEE time-series from a single reference observation (or a set of limited observations) at different wetland sites of comparable hydro-climatology, biogeochemistry, and ecology. The method can also be used for a robust gap-filling of missing data in observed time-series of periodic ecohydrological variables for wetland or other ecosystems.
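A schematic of the scaling idea as described (normalize each day's hourly NEE by a reference-hour observation, pool the dimensionless curves, and rescale a new reference observation for prediction); this is not the authors' ESHA parameterization, and the data are synthetic.

```python
# Schematic of the scaling idea described: collapse daily NEE cycles onto one
# dimensionless curve via a reference-hour observation, then predict a new
# day's hourly cycle from its single reference measurement. This is not the
# authors' ESHA parameterization; all data below are synthetic.
import numpy as np

rng = np.random.default_rng(1)
hours = np.arange(24)
shape = -np.maximum(np.sin((hours - 6) / 12 * np.pi), 0)     # daytime CO2 uptake
nee = np.array([a * shape + 0.05 * rng.standard_normal(24)
                for a in rng.uniform(3, 8, size=30)])         # 30 synthetic days

ref_hour = 12                                                 # reference time
scaled = nee / nee[:, [ref_hour]]                             # dimensionless curves
template = scaled.mean(axis=0)                                # pooled diurnal curve

nee_ref_new = -5.2                                            # one new midday value
nee_pred = template * nee_ref_new                             # predicted hourly NEE
print(np.round(nee_pred, 2))
```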
Mandija, Stefano; Sommer, Iris E. C.; van den Berg, Cornelis A. T.; Neggers, Sebastiaan F. W.
2017-01-01
Background Despite the wide adoption of TMS, the spatial and temporal patterns of its neuronal effects are not well understood. Although progress has been made in predicting induced currents in the brain using realistic finite element models (FEM), there is little consensus on how the magnetic field of a typical TMS coil should be modeled. Empirical validation of such models is scarce and subject to several limitations. Methods We evaluate and empirically validate models of a figure-of-eight TMS coil that are commonly used in published modeling studies, of increasing complexity: a simple circular coil model; a coil with in-plane spiral winding turns; and finally one with stacked spiral winding turns. We assess the electric fields induced by all 3 coil models in the motor cortex using a computational FEM model. Biot-Savart models of discretized wires were used to approximate the 3 coil models of increasing complexity. We use a tailored MR-based phase mapping technique to obtain a full 3D validation of the incident magnetic field induced in a cylindrical phantom by our TMS coil. FEM-based simulations on a meshed 3D brain model consisting of five tissue types were performed, using two orthogonal coil orientations. Results Substantial differences in the induced currents are observed, both theoretically and empirically, between highly idealized coils and coils with correctly modeled spiral winding turns. The thickness of the coil winding turns affects the induced electric field only minimally and does not influence the predicted activation. Conclusion TMS coil models used in FEM simulations should include the in-plane coil geometry in order to make reliable predictions of the incident field. Modeling the in-plane coil geometry is important to correctly simulate the induced electric field and to make reliable predictions of neuronal activation. PMID:28640923
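As background to the coil models compared, the Biot-Savart contribution of a discretized winding can be summed segment by segment; the sketch below uses a single circular loop, an arbitrary current and an arbitrary field point, none of which are taken from the study.

```python
# Sketch: Biot-Savart field of a coil discretized into straight current
# segments, in the spirit of the discretized-wire coil models compared in the
# study. The single circular loop, its discretization, the 5 kA current and
# the field point are placeholders, not the study's coil geometry.
import numpy as np

MU0 = 4e-7 * np.pi

def biot_savart(points, seg_start, seg_end, current):
    """B field (tesla) at `points` from straight wire segments carrying `current` (A)."""
    B = np.zeros_like(points, dtype=float)
    for a, b in zip(seg_start, seg_end):
        dl = b - a                                  # segment vector
        r = points - 0.5 * (a + b)                  # from segment midpoint to points
        r_norm = np.linalg.norm(r, axis=1, keepdims=True)
        B += MU0 * current / (4 * np.pi) * np.cross(dl, r) / r_norm**3
    return B

theta = np.linspace(0, 2 * np.pi, 201)               # one winding, 200 segments
loop = np.c_[0.025 * np.cos(theta), 0.025 * np.sin(theta), np.zeros_like(theta)]
pts = np.array([[0.0, 0.0, 0.02]])                    # 2 cm above the coil plane
print(biot_savart(pts, loop[:-1], loop[1:], current=5e3))
```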
NASA Astrophysics Data System (ADS)
Zhang, Rui; Newhauser, Wayne D.
2009-03-01
In proton therapy, the radiological thickness of a material is commonly expressed in terms of water equivalent thickness (WET) or water equivalent ratio (WER). However, WET calculations have required either iterative numerical methods or approximate methods of unknown accuracy. The objective of this study was to develop a simple deterministic formula to calculate WET values with an accuracy of 1 mm for materials commonly used in proton radiation therapy. Several alternative formulas were derived in which the energy loss was calculated based on the Bragg-Kleeman rule (BK), the Bethe-Bloch equation (BB) or an empirical version of the Bethe-Bloch equation (EBB). Alternative approaches were developed for targets that were 'radiologically thin' or 'thick'. The accuracy of these methods was assessed by comparison to values from an iterative numerical method that utilized evaluated stopping power tables. We also tested the approximate formula given in the International Atomic Energy Agency's dosimetry code of practice (Technical Report Series No 398, 2000, IAEA, Vienna) and the stopping-power-ratio approximation. The results of these comparisons revealed that most methods were accurate for cases involving thin or low-Z targets. However, only the thick-target formulas provided accurate WET values for targets that were radiologically thick and contained high-Z material.
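For context, a commonly used thin-target approximation scales the physical thickness by the density and mass-stopping-power ratios; the sketch below is this generic relation with placeholder numbers, not the thick-target formulas derived in the paper.

```python
# Hedged sketch: the generic thin-target water-equivalent thickness scaling
#   WET ~ t_m * (rho_m / rho_w) * (Sbar_m / Sbar_w),
# with Sbar the mass stopping powers at the beam energy. This is not the set
# of thick-target formulas derived in the paper, and the numbers below are
# placeholders rather than evaluated stopping-power-table values.
def wet_thin(t_m_cm, rho_m, rho_w, S_m, S_w):
    """Thickness in cm, densities in g/cm^3, S = mass stopping power in MeV cm^2/g."""
    return t_m_cm * (rho_m / rho_w) * (S_m / S_w)

# Example: a 2 cm slab of a plastic-like material in a ~150 MeV proton beam.
print(f"WET ~ {wet_thin(2.0, rho_m=1.19, rho_w=1.0, S_m=5.3, S_w=5.4):.2f} cm of water")
```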
Chronic Fatigue Syndrome and Myalgic Encephalomyelitis: Toward An Empirical Case Definition
Jason, Leonard A.; Kot, Bobby; Sunnquist, Madison; Brown, Abigail; Evans, Meredyth; Jantke, Rachel; Williams, Yolonda; Furst, Jacob; Vernon, Suzanne D.
2015-01-01
Current case definitions of Myalgic Encephalomyelitis (ME) and chronic fatigue syndrome (CFS) have been based on consensus methods, but empirical methods could be used to identify core symptoms and thereby improve reliability. In the present study, several methods (i.e., continuous scores of symptoms, theoretically and empirically derived cutoff scores of symptoms) were used to identify the core symptoms best differentiating patients from controls. In addition, data mining with decision trees was conducted. Our study found a small number of core symptoms that have good sensitivity and specificity, and these included fatigue, post-exertional malaise, a neurocognitive symptom, and unrefreshing sleep. Outcomes from these analyses suggest that using empirically selected symptoms can help guide the creation of a more reliable case definition. PMID:26029488
Rudd, M David; Joiner, Thomas; Brown, Gregory K; Cukrowicz, Kelly; Jobes, David A; Silverman, Morton
2009-12-01
A response is offered to the critiques of both Cook and VandeCreek. Among the points emphasized are the simple realities of risk with suicidal patients, existing empirical research with informed consent in both clinical psychology and other health care areas, as well as the persistence of common myths in clinical practice with suicidal patients. Although empirical science provides a firm foundation to much of what is proposed, it is critical for practitioners to recognize and respond to the ethical demands for openness and transparency with high-risk clients in an effort to achieve shared responsibility in care. (PsycINFO Database Record (c) 2010 APA, all rights reserved).
From sparse to dense and from assortative to disassortative in online social networks
Li, Menghui; Guan, Shuguang; Wu, Chensheng; Gong, Xiaofeng; Li, Kun; Wu, Jinshan; Di, Zengru; Lai, Choy-Heng
2014-01-01
Inspired by the analysis of several empirical online social networks, we propose a simple reaction-diffusion-like coevolving model, in which individuals are activated to create links based on their states, influenced by local dynamics and their own intention. It is shown that the model can reproduce the remarkable properties observed in empirical online social networks; in particular, the assortative coefficients are neutral or negative, and the power law exponents γ are smaller than 2. Moreover, we demonstrate that, under appropriate conditions, the model network naturally makes transition(s) from assortative to disassortative, and from sparse to dense in their characteristics. The model is useful in understanding the formation and evolution of online social networks. PMID:24798703
An Empirical Model of the Variation of the Solar Lyman-α Spectral Irradiance
NASA Astrophysics Data System (ADS)
Kretzschmar, Matthieu; Snow, Martin; Curdt, Werner
2018-03-01
We propose a simple model that computes the spectral profile of the solar irradiance in the hydrogen Lyman alpha line, H Ly-α (121.567 nm), from 1947 to present. Such a model is relevant for the study of many astronomical environments, from planetary atmospheres to interplanetary medium. This empirical model is based on the SOlar Heliospheric Observatory/Solar Ultraviolet Measurement of Emitted Radiation observations of the Ly-α irradiance over solar cycle 23 and the Ly-α disk-integrated irradiance composite. The model reproduces the temporal variability of the spectral profile and matches the independent SOlar Radiation and Climate Experiment/SOLar-STellar Irradiance Comparison Experiment spectral observations from 2003 to 2007 with an accuracy better than 10%.
NASA Astrophysics Data System (ADS)
Klemz, Francis B.
Forging provides an elegant solution to the problem of producing complicated shapes from heated metal. This study attempts to relate some of the important parameters involved when considering simple upsetting, closed die forging and extrusion forging. A literature survey showed some of the empirical, graphical and statistical methods of load prediction together with analytical methods of estimating load and energy. Investigations of the effects of high strain rate and temperature on the stress-strain properties of materials are also evident. In the present study special equipment including an experimental drop hammer and various die-sets have been designed and manufactured. Instrumentation to measure load/time and displacement/time behaviour of the deformed metal has been incorporated and calibrated. A high speed camera was used to record the behaviour mode of test pieces used in the simple upsetting tests. Dynamic and quasi-static material properties for the test materials, lead and aluminium alloy, were measured using the drop-hammer and a compression-test machine. Analytically, two separate mathematical solutions have been developed: a numerical technique using a lumped-mass model for the analysis of simple upsetting and closed-die forging and, for extrusion forging, an analysis which equates the shear and compression energy requirements to the work done by the forging load. Cylindrical test pieces were used for all the experiments and both dry and lubricated test conditions were investigated. The static and dynamic tests provide data on load, energy and the profile of the deformed billet. In addition, for the extrusion forging, both single ended and double ended tests were conducted. Material dependency was also examined by a further series of tests on aluminium and copper. Comparison of the experimental and theoretical results was made, which shows clearly the effects of friction and high strain rate on load and energy requirements and the deformation mode of the billet. For the axisymmetric shapes considered, it was found that the load, energy requirement and profile could be predicted with reasonable accuracy.
Generalized fictitious methods for fluid-structure interactions: Analysis and simulations
NASA Astrophysics Data System (ADS)
Yu, Yue; Baek, Hyoungsu; Karniadakis, George Em
2013-07-01
We present a new fictitious pressure method for fluid-structure interaction (FSI) problems in incompressible flow by generalizing the fictitious mass and damping methods we published previously in [1]. The fictitious pressure method involves modification of the fluid solver whereas the fictitious mass and damping methods modify the structure solver. We analyze all fictitious methods for simplified problems and obtain explicit expressions for the optimal reduction factor (convergence rate index) at the FSI interface [2]. This analysis also demonstrates an apparent similarity of fictitious methods to the FSI approach based on Robin boundary conditions, which have been found to be very effective in FSI problems. We implement all methods, including the semi-implicit Robin based coupling method, in the context of spectral element discretization, which is more sensitive to temporal instabilities than low-order methods. However, the methods we present here are simple and general, and hence applicable to FSI based on any other spatial discretization. In numerical tests, we verify the selection of optimal values for the fictitious parameters for simplified problems and for vortex-induced vibrations (VIV) even at zero mass ratio ("for-ever-resonance"). We also develop an empirical a posteriori analysis for complex geometries and apply it to 3D patient-specific flexible brain arteries with aneurysms for very large deformations. We demonstrate that the fictitious pressure method enhances stability and convergence, and is comparable or better in most cases to the Robin approach or the other fictitious methods.
Preequating with Empirical Item Characteristic Curves: An Observed-Score Preequating Method
ERIC Educational Resources Information Center
Zu, Jiyun; Puhan, Gautam
2014-01-01
Preequating is in demand because it reduces score reporting time. In this article, we evaluated an observed-score preequating method: the empirical item characteristic curve (EICC) method, which makes preequating without item response theory (IRT) possible. EICC preequating results were compared with a criterion equating and with IRT true-score…
A Comparison of Two Scoring Methods for an Automated Speech Scoring System
ERIC Educational Resources Information Center
Xi, Xiaoming; Higgins, Derrick; Zechner, Klaus; Williamson, David
2012-01-01
This paper compares two alternative scoring methods--multiple regression and classification trees--for an automated speech scoring system used in a practice environment. The two methods were evaluated on two criteria: construct representation and empirical performance in predicting human scores. The empirical performance of the two scoring models…
From empirical Bayes to full Bayes : methods for analyzing traffic safety data.
DOT National Transportation Integrated Search
2004-10-24
Traffic safety engineers are among the early adopters of Bayesian statistical tools for : analyzing crash data. As in many other areas of application, empirical Bayes methods were : their first choice, perhaps because they represent an intuitively ap...
Udhayakumar, Ganesan; Sujatha, Chinnaswamy Manoharan; Ramakrishnan, Swaminathan
2013-01-01
Analysis of bone strength in radiographic images is an important component of estimation of bone quality in diseases such as osteoporosis. Conventional radiographic femur bone images are used to analyze their architecture using the bi-dimensional empirical mode decomposition method. Surface interpolation of the local maxima and minima points of an image is a crucial part of the bi-dimensional empirical mode decomposition method, and the choice of appropriate interpolation depends on the specific structure of the problem. In this work, two interpolation methods of bi-dimensional empirical mode decomposition are analyzed to characterize the trabecular femur bone architecture of radiographic images. The trabecular bone regions of normal and osteoporotic femur bone images (N = 40) recorded under standard conditions are used for this study. The compressive and tensile strength regions of the images are delineated using pre-processing procedures. The delineated images are decomposed into their corresponding intrinsic mode functions using interpolation methods such as multiquadric radial basis function and hierarchical B-spline techniques. Results show that bi-dimensional empirical mode decomposition analyses using both interpolations are able to represent architectural variations of femur bone radiographic images. As the strength of the bone depends on architectural variation in addition to bone mass, this study seems to be clinically useful.
xEMD procedures as a data-assisted filtering method
NASA Astrophysics Data System (ADS)
Machrowska, Anna; Jonak, Józef
2018-01-01
The article presents the possibility of using Empirical Mode Decomposition (EMD), Ensemble Empirical Mode Decomposition (EEMD), Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and Improved Complete Ensemble Empirical Mode Decomposition (ICEEMD) algorithms for mechanical system condition monitoring applications. Results are presented for the xEMD procedures applied to vibration signals from a system in different states of wear.
Probabilistic analysis of tsunami hazards
Geist, E.L.; Parsons, T.
2006-01-01
Determining the likelihood of a disaster is a key component of any comprehensive hazard assessment. This is particularly true for tsunamis, even though most tsunami hazard assessments have in the past relied on scenario or deterministic type models. We discuss probabilistic tsunami hazard analysis (PTHA) from the standpoint of integrating computational methods with empirical analysis of past tsunami runup. PTHA is derived from probabilistic seismic hazard analysis (PSHA), with the main difference being that PTHA must account for far-field sources. The computational methods rely on numerical tsunami propagation models rather than empirical attenuation relationships as in PSHA in determining ground motions. Because a number of source parameters affect local tsunami runup height, PTHA can become complex and computationally intensive. Empirical analysis can function in one of two ways, depending on the length and completeness of the tsunami catalog. For site-specific studies where there is sufficient tsunami runup data available, hazard curves can primarily be derived from empirical analysis, with computational methods used to highlight deficiencies in the tsunami catalog. For region-wide analyses and sites where there are little to no tsunami data, a computationally based method such as Monte Carlo simulation is the primary method to establish tsunami hazards. Two case studies that describe how computational and empirical methods can be integrated are presented for Acapulco, Mexico (site-specific) and the U.S. Pacific Northwest coastline (region-wide analysis).
Less can be more: How to make operations more flexible and robust with fewer resources
NASA Astrophysics Data System (ADS)
Haksöz, Çağrı; Katsikopoulos, Konstantinos; Gigerenzer, Gerd
2018-06-01
We review empirical evidence from practice and general theoretical conditions, under which simple rules of thumb can help to make operations flexible and robust. An operation is flexible when it responds adaptively to adverse events such as natural disasters; an operation is robust when it is less affected by adverse events in the first place. We illustrate the relationship between flexibility and robustness in the context of supply chain risk. In addition to increasing flexibility and robustness, simple rules simultaneously reduce the need for resources such as time, money, information, and computation. We illustrate the simple-rules approach with an easy-to-use graphical aid for diagnosing and managing supply chain risk. More generally, we recommend a four-step process for determining the amount of resources that decision makers should invest in so as to increase flexibility and robustness.
Ultrametric distribution of culture vectors in an extended Axelrod model of cultural dissemination.
Stivala, Alex; Robins, Garry; Kashima, Yoshihisa; Kirley, Michael
2014-05-02
The Axelrod model of cultural diffusion is an apparently simple model that is capable of complex behaviour. A recent work used a real-world dataset of opinions as initial conditions, demonstrating the effects of the ultrametric distribution of empirical opinion vectors in promoting cultural diversity in the model. Here we quantify the degree of ultrametricity of the initial culture vectors and investigate the effect of varying degrees of ultrametricity on the absorbing state of both a simple and extended model. Unlike the simple model, ultrametricity alone is not sufficient to sustain long-term diversity in the extended Axelrod model; rather, the initial conditions must also have sufficiently large variance in intervector distances. Further, we find that a scheme for evolving synthetic opinion vectors from cultural "prototypes" shows the same behaviour as real opinion data in maintaining cultural diversity in the extended model; whereas neutral evolution of cultural vectors does not.
An Inductive Logic Programming Approach to Validate Hexose Binding Biochemical Knowledge.
Nassif, Houssam; Al-Ali, Hassan; Khuri, Sawsan; Keirouz, Walid; Page, David
2010-01-01
Hexoses are simple sugars that play a key role in many cellular pathways, and in the regulation of development and disease mechanisms. Current protein-sugar computational models are based, at least partially, on prior biochemical findings and knowledge. They incorporate different parts of these findings in predictive black-box models. We investigate the empirical support for biochemical findings by comparing Inductive Logic Programming (ILP) induced rules to actual biochemical results. We mine the Protein Data Bank for a representative data set of hexose binding sites, non-hexose binding sites and surface grooves. We build an ILP model of hexose-binding sites and evaluate our results against several baseline machine learning classifiers. Our method achieves an accuracy similar to that of other black-box classifiers while providing insight into the discriminating process. In addition, it confirms wet-lab findings and reveals a previously unreported Trp-Glu amino acids dependency.
Improved color constancy in honey bees enabled by parallel visual projections from dorsal ocelli.
Garcia, Jair E; Hung, Yu-Shan; Greentree, Andrew D; Rosa, Marcello G P; Endler, John A; Dyer, Adrian G
2017-07-18
How can a pollinator, like the honey bee, perceive the same colors on visited flowers, despite continuous and rapid changes in ambient illumination and background color? A hundred years ago, von Kries proposed an elegant solution to this problem, color constancy, which is currently incorporated in many imaging and technological applications. However, empirical evidence on how this method can operate on animal brains remains tenuous. Our mathematical modeling proposes that the observed spectral tuning of simple ocellar photoreceptors in the honey bee allows for the necessary input for an optimal color constancy solution to most natural light environments. The model is fully supported by our detailed description of a neural pathway allowing for the integration of signals originating from the ocellar photoreceptors to the information processing regions in the bee brain. These findings reveal a neural implementation to the classic color constancy problem that can be easily translated into artificial color imaging systems.
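For orientation, the classic von Kries mechanism referenced here amounts to a diagonal rescaling of photoreceptor signals by the estimated illuminant response; a minimal sketch with invented channel values (not the bee-specific ocellar circuit modeled in the paper) is:

```python
# Hedged sketch of the classic von Kries idea referenced here: divide each
# photoreceptor channel by its response to the illuminant so that surface
# signals become approximately illumination-independent. Channel values are
# invented; this is not the bee-specific ocellar model of the paper.
import numpy as np

surface_under_light = np.array([0.42, 0.61, 0.30])   # receptor responses (UV, blue, green)
illuminant_response = np.array([0.70, 0.87, 0.50])   # responses to the illuminant itself

adapted = surface_under_light / illuminant_response  # von Kries diagonal scaling
print(np.round(adapted, 3))
```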
Flood mapping in ungauged basins using fully continuous hydrologic-hydraulic modeling
NASA Astrophysics Data System (ADS)
Grimaldi, Salvatore; Petroselli, Andrea; Arcangeletti, Ettore; Nardi, Fernando
2013-04-01
Summary
In this work, a fully continuous hydrologic-hydraulic modeling framework for flood mapping is introduced and tested. It is characterized by a simulation of a long rainfall time series at sub-daily resolution that feeds a continuous rainfall-runoff model producing a discharge time series that is directly given as an input to a bi-dimensional hydraulic model. The main advantage of the proposed approach is to avoid the use of the design hyetograph and the design hydrograph that constitute the main source of subjective analysis and uncertainty for standard methods. The proposed procedure is optimized for small and ungauged watersheds where empirical models are commonly applied. Results of a simple real case study confirm that this experimental fully continuous framework may pave the way for the implementation of a less subjective and potentially automated procedure for flood hazard mapping.
Accurate van der Waals coefficients from density functional theory
Tao, Jianmin; Perdew, John P.; Ruzsinszky, Adrienn
2012-01-01
The van der Waals interaction is a weak, long-range correlation, arising from quantum electronic charge fluctuations. This interaction affects many properties of materials. A simple and yet accurate estimate of this effect will facilitate computer simulation of complex molecular materials and drug design. Here we develop a fast approach for accurate evaluation of dynamic multipole polarizabilities and van der Waals (vdW) coefficients of all orders from the electron density and static multipole polarizabilities of each atom or other spherical object, without empirical fitting. Our dynamic polarizabilities (dipole, quadrupole, octupole, etc.) are exact in the zero- and high-frequency limits, and exact at all frequencies for a metallic sphere of uniform density. Our theory predicts dynamic multipole polarizabilities in excellent agreement with more expensive many-body methods, and yields therefrom vdW coefficients C6, C8, C10 for atom pairs with a mean absolute relative error of only 3%. PMID:22205765
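For reference, the leading dispersion coefficient whose errors the abstract reports is conventionally obtained from the Casimir-Polder integral over imaginary-frequency dipole polarizabilities; the abstract itself does not display the formula.

```latex
% Standard Casimir-Polder relation for the leading vdW coefficient between
% species A and B, with \alpha(i\omega) the dynamic dipole polarizability
% evaluated at imaginary frequency (not quoted explicitly in the abstract):
C_6^{AB} \;=\; \frac{3}{\pi}\int_0^{\infty} \alpha_A(i\omega)\,\alpha_B(i\omega)\, d\omega
```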
New interatomic potential for Mg–Al–Zn alloys with specific application to dilute Mg-based alloys
NASA Astrophysics Data System (ADS)
Dickel, Doyl E.; Baskes, Michael I.; Aslam, Imran; Barrett, Christopher D.
2018-06-01
Because of its very large c/a ratio, zinc has proven to be a difficult element to model using semi-empirical classical potentials. It has been shown, in particular, that for the modified embedded atom method (MEAM), a potential cannot simultaneously have an hcp ground state and a c/a ratio greater than ideal. As an alloying element, however, useful zinc potentials can be generated by relaxing the condition that hcp be the lowest energy structure. In this paper, we present a MEAM zinc potential, which gives accurate material properties for the pure state, as well as a MEAM ternary potential for the Mg–Al–Zn system which will allow the atomistic modeling of a wide class of alloys containing zinc. The effect of zinc in simple Mg–Zn alloys is demonstrated for this potential, and the results verify its accuracy in these systems.
High fidelity studies of exploding foil initiator bridges, Part 3: ALEGRA MHD simulations
NASA Astrophysics Data System (ADS)
Neal, William; Garasi, Christopher
2017-01-01
Simulations of high voltage detonators, such as Exploding Bridgewire (EBW) and Exploding Foil Initiators (EFI), have historically been simple, often empirical, one-dimensional models capable of predicting parameters such as current, voltage, and in the case of EFIs, flyer velocity. Experimental methods have correspondingly generally been limited to the same parameters. With the advent of complex, first principles magnetohydrodynamic codes such as ALEGRA and ALE-MHD, it is now possible to simulate these components in three dimensions, and predict a much greater range of parameters than before. A significant improvement in experimental capability was therefore required to ensure these simulations could be adequately verified. In this third paper of a three part study, the experimental results presented in part 2 are compared against 3-dimensional MHD simulations. This improved experimental capability, along with advanced simulations, offer an opportunity to gain a greater understanding of the processes behind the functioning of EBW and EFI detonators.
High fidelity studies of exploding foil initiator bridges, Part 2: Experimental results
NASA Astrophysics Data System (ADS)
Neal, William; Bowden, Mike
2017-01-01
Simulations of high voltage detonators, such as Exploding Bridgewire (EBW) and Exploding Foil Initiators (EFI), have historically been simple, often empirical, one-dimensional models capable of predicting parameters such as current, voltage, and in the case of EFIs, flyer velocity. Experimental methods have correspondingly generally been limited to the same parameters. With the advent of complex, first principles magnetohydrodynamic codes such as ALEGRA MHD, it is now possible to simulate these components in three dimensions and predict a greater range of parameters than before. A significant improvement in experimental capability was therefore required to ensure these simulations could be adequately verified. In this second paper of a three part study, data are presented from a flexible foil EFI header experiment. This study has shown that there is significant bridge expansion before the time of peak voltage and that heating within the bridge material is spatially affected by the microstructure of the metal foil.
With a Little Help from My Friends: Group Orientation by Larvae of a Coral Reef Fish
Irisson, Jean-Olivier; Paris, Claire B.; Leis, Jeffrey M.; Yerman, Michelle N.
2015-01-01
Theory and some empirical evidence suggest that groups of animals orient better than isolated individuals. We present the first test of this hypothesis for pelagic marine larvae, at the stage of settlement, when orientation is critical to find a habitat. We compare the in situ behaviour of individuals and groups of 10–12 Chromis atripectoralis (reef fish of the family Pomacentridae), off Lizard Island, Great Barrier Reef. Larvae are observed by divers or with a drifting image recording device. With both methods, groups orient cardinally while isolated individuals do not display significant orientation. Groups also swim on a 15% straighter course (i.e. are better at keeping a bearing) and 7% faster than individuals. A body of observations collected in this study suggest that enhanced group orientation emerges from simple group dynamics rather than from the presence of more skilful leaders. PMID:26625164
Zipf's word frequency law in natural language: a critical review and future directions.
Piantadosi, Steven T
2014-10-01
The frequency distribution of words has been a key object of study in statistical linguistics for the past 70 years. This distribution approximately follows a simple mathematical form known as Zipf's law. This article first shows that human language has a highly complex, reliable structure in the frequency distribution over and above this classic law, although prior data visualization methods have obscured this fact. A number of empirical phenomena related to word frequencies are then reviewed. These facts are chosen to be informative about the mechanisms giving rise to Zipf's law and are then used to evaluate many of the theoretical explanations of Zipf's law in language. No prior account straightforwardly explains all the basic facts or is supported with independent evaluation of its underlying assumptions. To make progress at understanding why language obeys Zipf's law, studies must seek evidence beyond the law itself, testing assumptions and evaluating novel predictions with new, independent data.
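Zipf's law as discussed here is the rank-frequency relation f(r) proportional to r^(-alpha) with alpha near 1; a small sketch that estimates alpha by a log-log least-squares fit on toy token counts follows (the corpus and fitting choice are purely illustrative).

```python
# Sketch: estimate the Zipf exponent alpha in f(r) ~ r**(-alpha) from word
# counts via a least-squares fit in log-log space. The tiny corpus is a toy;
# as the review stresses, serious estimates need large corpora and more
# careful estimators than a simple regression.
import numpy as np
from collections import Counter

text = ("the cat sat on the mat and the dog sat on the log "
        "the cat and the dog saw the mat and the log").split()
freqs = np.array(sorted(Counter(text).values(), reverse=True), dtype=float)
ranks = np.arange(1, len(freqs) + 1)

slope, intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)
print(f"estimated Zipf exponent: {-slope:.2f}")
```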
Rationally reduced libraries for combinatorial pathway optimization minimizing experimental effort.
Jeschek, Markus; Gerngross, Daniel; Panke, Sven
2016-03-31
Rational flux design in metabolic engineering approaches remains difficult since important pathway information is frequently not available. Therefore empirical methods are applied that randomly change absolute and relative pathway enzyme levels and subsequently screen for variants with improved performance. However, screening is often limited on the analytical side, generating a strong incentive to construct small but smart libraries. Here we introduce RedLibs (Reduced Libraries), an algorithm that allows for the rational design of smart combinatorial libraries for pathway optimization thereby minimizing the use of experimental resources. We demonstrate the utility of RedLibs for the design of ribosome-binding site libraries by in silico and in vivo screening with fluorescent proteins and perform a simple two-step optimization of the product selectivity in the branched multistep pathway for violacein biosynthesis, indicating a general applicability for the algorithm and the proposed heuristics. We expect that RedLibs will substantially simplify the refactoring of synthetic metabolic pathways.
Modified reactive tabu search for the symmetric traveling salesman problems
NASA Astrophysics Data System (ADS)
Lim, Yai-Fung; Hong, Pei-Yee; Ramli, Razamin; Khalid, Ruzelan
2013-09-01
Reactive tabu search (RTS) is an improved variant of tabu search (TS) that dynamically adjusts the tabu list size based on how the search is performing. RTS thus avoids a disadvantage of TS, namely the need to tune the tabu list size as a parameter. In this paper, we propose a modified RTS approach for solving symmetric traveling salesman problems (TSP). The tabu list size of the proposed algorithm depends on the number of iterations in which the solutions do not exceed the aspiration level, in order to achieve a good balance between diversification and intensification. The proposed algorithm was tested on seven chosen benchmarked problems of symmetric TSP. The performance of the proposed algorithm is compared with that of TS by using empirical testing, benchmark solutions and a simple probabilistic analysis in order to validate the quality of the solutions. The computational results and comparisons show that the proposed algorithm provides better-quality solutions than TS.
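For readers unfamiliar with the baseline, a compact tabu search for the symmetric TSP with a 2-opt neighborhood and a fixed tabu tenure is sketched below; the reactive (and the authors' modified) tenure-adaptation rule is not reproduced, and the random cities and parameter values are illustrative.

```python
# Hedged sketch: a plain tabu search for the symmetric TSP with a 2-opt
# neighborhood, a fixed tabu tenure and a best-so-far aspiration criterion.
# The reactive (and the paper's modified) tenure-adaptation rule is not
# reproduced; the random cities and parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(42)
cities = rng.random((20, 2))
D = np.linalg.norm(cities[:, None, :] - cities[None, :, :], axis=-1)

def tour_length(tour):
    return D[tour, np.roll(tour, -1)].sum()

def two_opt(tour, i, j):
    new = tour.copy()
    new[i:j + 1] = new[i:j + 1][::-1]    # reverse the segment between i and j
    return new

def tabu_search(n_iter=2000, tenure=15, sample=60):
    current = np.arange(len(cities))
    rng.shuffle(current)
    best, best_len = current.copy(), tour_length(current)
    tabu = {}                            # move (i, j) -> iteration until which it is tabu
    for it in range(n_iter):
        candidates = []
        for _ in range(sample):          # sample a subset of 2-opt moves
            i, j = sorted(rng.choice(len(cities), size=2, replace=False))
            cand = two_opt(current, i, j)
            cand_len = tour_length(cand)
            if tabu.get((i, j), -1) <= it or cand_len < best_len:   # aspiration
                candidates.append((cand_len, i, j, cand))
        if not candidates:
            continue
        cand_len, i, j, current = min(candidates, key=lambda c: c[0])
        tabu[(i, j)] = it + tenure
        if cand_len < best_len:
            best, best_len = current.copy(), cand_len
    return best, best_len

tour, length = tabu_search()
print(f"best tour length found: {length:.3f}")
```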
Colloquium: Hierarchy of scales in language dynamics
NASA Astrophysics Data System (ADS)
Blythe, Richard A.
2015-11-01
Methods and insights from statistical physics are finding an increasing variety of applications where one seeks to understand the emergent properties of a complex interacting system. One such area concerns the dynamics of language at a variety of levels of description, from the behaviour of individual agents learning simple artificial languages from each other, up to changes in the structure of languages shared by large groups of speakers over historical timescales. In this Colloquium, we survey a hierarchy of scales at which language and linguistic behaviour can be described, along with the main progress in understanding that has been made at each of them - much of which has come from the statistical physics community. We argue that future developments may arise by linking the different levels of the hierarchy together in a more coherent fashion, in particular where this allows more effective use of rich empirical data sets.
High Speed Jet Noise Prediction Using Large Eddy Simulation
NASA Technical Reports Server (NTRS)
Lele, Sanjiva K.
2002-01-01
Current methods for predicting the noise of high speed jets are largely empirical. These empirical methods are based on jet noise data gathered by varying primarily the jet flow speed and jet temperature for a fixed nozzle geometry. Efforts have been made to correlate the noise data of co-annular (multi-stream) jets and the changes associated with forward flight within these empirical correlations. But ultimately these empirical methods fail to provide suitable guidance in the selection of new, low-noise nozzle designs. This motivates the development of a new class of prediction methods based on computational simulations, in an attempt to remove the empiricism of present day noise predictions.
Dawson, Michael R W; Dupuis, Brian; Spetch, Marcia L; Kelly, Debbie M
2009-08-01
The matching law (Herrnstein 1961) states that response rates become proportional to reinforcement rates; this is related to the empirical phenomenon called probability matching (Vulkan 2000). Here, we show that a simple artificial neural network generates responses consistent with probability matching. This behavior was then used to create an operant procedure for network learning. We use the multiarmed bandit (Gittins 1989), a classic problem of choice behavior, to illustrate that operant training balances exploiting the bandit arm expected to pay off most frequently with exploring other arms. Perceptrons provide a medium for relating results from neural networks, genetic algorithms, animal learning, contingency theory, reinforcement learning, and theories of choice.
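A hedged toy example of probability matching on a two-armed bandit (this is a simple delta-rule agent, not the authors' perceptron or operant procedure; payoff rates and the learning rate are made up): choice probabilities end up tracking the arms' relative payoff rates rather than maximizing.

```python
# Toy probability-matching agent on a two-armed bandit (illustration only).
import numpy as np

rng = np.random.default_rng(0)
p_payoff = np.array([0.7, 0.3])   # true reward probability of each arm (assumed)
value = np.full(2, 0.5)           # running estimate of each arm's payoff rate
alpha = 0.05                      # learning rate
choices = []

for t in range(20000):
    probs = value / value.sum()                  # match choice odds to estimates
    arm = rng.choice(2, p=probs)
    reward = rng.random() < p_payoff[arm]
    value[arm] += alpha * (reward - value[arm])  # delta-rule update
    choices.append(arm)

print("choice frequencies:", np.bincount(choices) / len(choices))
print("payoff proportions:", p_payoff / p_payoff.sum())
```

Because the value estimates converge to the true payoff probabilities, the choice frequencies settle near the 0.7/0.3 split rather than on the better arm alone, which is the signature of probability matching.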
Real-time restoration of white-light confocal microscope optical sections
Balasubramanian, Madhusudhanan; Iyengar, S. Sitharama; Beuerman, Roger W.; Reynaud, Juan; Wolenski, Peter
2009-01-01
Confocal microscopes (CM) are routinely used for building 3-D images of microscopic structures. Nonideal imaging conditions in a white-light CM introduce additive noise and blur. The optical section images need to be restored prior to quantitative analysis. We present an adaptive noise filtering technique using the Karhunen–Loève expansion (KLE) by the method of snapshots, and a ringing metric to quantify the ringing artifacts introduced in images restored at various iterations of the iterative Lucy–Richardson deconvolution algorithm. The KLE provides a set of basis functions that comprise the optimal linear basis for an ensemble of empirical observations. We show that most of the noise in the scene can be removed by reconstructing the images using the KLE basis vector with the largest eigenvalue. The prefiltering scheme presented is faster and does not require prior knowledge about image noise. Optical sections processed using the KLE prefilter can be restored using a simple inverse restoration algorithm; thus, the methodology is suitable for real-time image restoration applications. The KLE image prefilter outperforms the temporal-average prefilter in restoring CM optical sections. The ringing metric developed uses simple binary morphological operations to quantify ringing artifacts and is consistent with the visual observation of ringing artifacts in the restored images. PMID:20186290
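A rough numerical sketch of the method-of-snapshots idea (array shapes, the 1-D toy "image" and the reconstruction from a single dominant mode are assumptions, not the paper's implementation):

```python
# Method-of-snapshots KLE/POD prefilter sketch: keep only the dominant mode.
import numpy as np

def kle_prefilter(frames):
    """frames: (n_snapshots, n_pixels) array of repeated noisy acquisitions."""
    A = frames - frames.mean(axis=0)     # remove the ensemble mean
    C = A @ A.T / A.shape[0]             # small n_snapshots x n_snapshots matrix
    w, V = np.linalg.eigh(C)             # eigenvalues in ascending order
    v1 = V[:, -1]                        # coefficients of the dominant mode
    phi1 = A.T @ v1                      # dominant spatial mode
    phi1 /= np.linalg.norm(phi1)
    coeffs = A @ phi1                    # projection of each snapshot
    return frames.mean(axis=0) + np.outer(coeffs, phi1)

# Toy usage: 16 noisy copies of a 1-D "image"
rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 3 * np.pi, 256))
frames = clean + 0.5 * rng.standard_normal((16, 256))
print(np.abs(kle_prefilter(frames) - clean).mean(), np.abs(frames - clean).mean())
```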
ERIC Educational Resources Information Center
Haberman, Shelby J.; von Davier, Matthias; Lee, Yi-Hsuan
2008-01-01
Multidimensional item response models can be based on multivariate normal ability distributions or on multivariate polytomous ability distributions. For the case of simple structure in which each item corresponds to a unique dimension of the ability vector, some applications of the two-parameter logistic model to empirical data are employed to…
Marcus V. Warwell; Gerald E. Rehfeldt; Nicholas L. Crookston
2006-01-01
The Random Forests multiple regression tree was used to develop an empirically-based bioclimate model for the distribution of Pinus albicaulis (whitebark pine) in western North America, latitudes 31° to 51° N and longitudes 102° to 125° W. Independent variables included 35 simple expressions of temperature and precipitation and their interactions....
Quantifying Confidence in Model Predictions for Hypersonic Aircraft Structures
2015-03-01
of isolating calibrations of models in the network, segmented and simultaneous calibration are compared using the Kullback–Leibler ... value of θ. While not all test statistics are as simple as measuring goodness or badness of fit, their directional interpretations tend to remain ... data quite well, qualitatively. Quantitative goodness-of-fit tests are problematic because they assume a true empirical CDF is being tested or
Two Sides of the Same Coin: U. S. "Residual" Inequality and the Gender Gap
ERIC Educational Resources Information Center
Bacolod, Marigee P.; Blum, Bernardo S.
2010-01-01
We show that the narrowing gender gap and the growth in earnings inequality are consistent with a simple model in which skills are heterogeneous, and the growth in skill prices has been particularly strong for skills with which women are well endowed. Empirical analysis of DOT, CPS, and NLSY79 data finds evidence to support this model. A large…
Deriving local demand for stumpage from estimates of regional supply and demand.
Kent P. Connaughton; Gerard A. Majerus; David H. Jackson
1989-01-01
The local (Forest-level or local-area) demand for stumpage can be derived from estimates of regional supply and demand. The derivation of local demand is justified when the local timber economy is similar to the regional timber economy; a simple regression of local on nonlocal prices can be used as an empirical test of similarity between local and regional economies....
Phase transition in tumor growth: I avascular development
NASA Astrophysics Data System (ADS)
Izquierdo-Kulich, E.; Rebelo, I.; Tejera, E.; Nieto-Villar, J. M.
2013-12-01
We propose a mechanism for avascular tumor growth based on a simple chemical network. The model exhibits logistic behavior and shows a "second order" phase transition. We prove the fractal origin of the empirical logistic and Gompertz constants and their relation to the mitosis and apoptosis rates. Finally, the thermodynamic framework developed demonstrates that the entropy production rate acts as a Lyapunov function during avascular tumor growth.
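For reference only, the two empirical growth laws the abstract refers to, with arbitrary parameter values (this is not the paper's chemical-network model):

```python
# Logistic and Gompertz growth curves with the same initial size and capacity.
import numpy as np

def logistic(t, N0, K, r):
    return K / (1 + (K / N0 - 1) * np.exp(-r * t))

def gompertz(t, N0, K, b):
    return K * np.exp(np.log(N0 / K) * np.exp(-b * t))

t = np.linspace(0, 30, 7)
print(np.round(logistic(t, 1e3, 1e6, 0.5)))   # sigmoidal approach to K
print(np.round(gompertz(t, 1e3, 1e6, 0.2)))   # faster early, slower late growth
```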
Entropy of mixing calculations for compound forming liquid alloys in the hard sphere system
NASA Astrophysics Data System (ADS)
Singh, P.; Khanna, K. N.
1984-06-01
It is shown that the semi-empirical model proposed in a previous paper for the evaluation of the entropy of mixing of simple liquid metal alloys leads to accurate results for compound forming liquid alloys. The procedure is similar to that described for a regular solution. Numerical applications are made to NaGa, KPb and KTl alloys.
NASA Technical Reports Server (NTRS)
Lebedeff, S. A.; Hameed, S.
1975-01-01
The problem investigated can be solved exactly in a simple manner if the equations are written in terms of a similarity variable. The exact solution is used to explore two questions of interest in the modelling of urban air pollution, taking into account the distribution of surface concentration downwind of an area source and the distribution of concentration with height.
ERIC Educational Resources Information Center
Ningling, Wei
2015-01-01
Specificity, as a dimension of cognitive construal, refers to the capacity of a speaker to describe an entity or a situation with different degrees of accuracy and detail (Langacker, 2008), which is linguistically reflected at the lexical and grammatical levels (Wen, 2012). Modifiers can extend a simple sentence into a long and complicated one (Weng, 2007),…
NASA Astrophysics Data System (ADS)
Peñaloza-Murillo, Marcos A.; Pasachoff, Jay M.
2015-04-01
We analyze mathematically air temperature measurements made near the ground by the Williams College expedition to observe the first total occultation of the Sun [TOS (commonly known as a total solar eclipse)] of the 21st century in Lusaka, Zambia, on the afternoon of June 21, 2001. To do so, we have revisited some earlier and contemporary methods to test their usefulness for this analysis. Two of these methods, based on a radiative scheme for solar radiation modeling and originally applied to a morning occultation, have successfully been combined to obtain the delay function for an afternoon occultation, via derivation of the so-called instantaneous temperature profiles. For this purpose, we have followed the suggestion given by the third of these previously applied methods to calculate this function, although by itself it failed to do so, at least for this occultation. The analysis has taken into account the limb-darkening, occultation and obscuration functions. The delay function obtained describes fairly well the lag between the variation of solar radiation and the delayed air temperature measured. In this investigation, a statistical study has also been carried out to obtain information on the convection activity produced during this event. For that purpose, the fluctuations generated by turbulence have been studied by analyzing variance and residuals. The results, indicating an irreversible steady decrease of this activity, are consistent with those published in other studies. Finally, the air temperature drop due to this event is well estimated by applying the empirical scheme given by the fourth of the previously applied methods, based on the daily temperature amplitude and the standardized middle time of the occultation. It is demonstrated, then, that with a simple set of air temperature measurements obtained during a solar occultation, along with some supplementary data, a simple mathematical analysis can be achieved by applying the four methods reviewed here.
A high-resolution conceptual model for diffuse organic micropollutant loads in streams
NASA Astrophysics Data System (ADS)
Stamm, Christian; Honti, Mark; Ghielmetti, Nico
2013-04-01
The ecological state of surface waters has become the dominant aspect in water quality assessments. Toxicity is a key determinant of the ecological state, but organic micropollutants (OMP) are seldom monitored with the same spatial and temporal frequency as, for example, nutrients, mainly due to the demanding analytical methods and costs. However, diffuse transport pathways are at least as complex for OMPs as for nutrients, and there are still significant knowledge gaps. Moreover, concentrations of the different compounds would need to be known with fairly high temporal resolution because acute toxicity can be as important as chronic toxicity. Fully detailed mechanistic models of diffuse OMP loads require an immense set of site-specific knowledge and are rarely applicable to catchments lacking exceptional monitoring coverage. Simple empirical methods are less demanding but usually work with more temporal aggregation and therefore have limited ability to support the estimation of the ecological state. This study presents a simple conceptual model that aims to simulate the concentrations of selected organic micropollutants with daily resolution at 11 locations in the stream network of a small catchment (46 km²). The prerequisites are a known hydrological and meteorological background (daily discharge, precipitation and air temperature time series), a land use map and some historic measurements of the desired compounds. The model is conceptual in the sense that all important diffuse transport pathways are simulated separately, but each with a simple empirical process rate. Consequently, some site-specific observations are required to calibrate the model, but afterwards the model can be used for forecasting and scenario analysis, as the calibrated process rates typically describe invariant properties of the catchment. We simulated 6 different OMPs from the categories of agricultural and urban pesticides and urban biocides. The application of agricultural pesticides was also simulated with the model using a heat-sum approach. Calibration was carried out with weekly aggregated samples covering the growing season in 2 years. The model reproduced the observed OMP concentrations with varying success. Compounds that are less persistent in the environment and thus have dominant temporal dynamics (pesticides with a short half-life) could in general be simulated better than the persistent ones. For the latter group, the relatively stable available stock meant that there were no clear seasonal dynamics, which revealed that transport processes are quite uncertain even when daily rainfall is used as the main driver. Nevertheless, the daily concentration distribution could still be simulated with higher accuracy than the individual peaks. Thus we can model the concentration-duration relationship at daily resolution in an acceptable way for each compound.
Airspace Dimension Assessment with nanoparticles reflects lung density as quantified by MRI
Jakobsson, Jonas K; Löndahl, Jakob; Olsson, Lars E; Diaz, Sandra; Zackrisson, Sophia; Wollmer, Per
2018-01-01
Background Airspace Dimension Assessment with inhaled nanoparticles is a novel method to determine distal airway morphology. This is the first empirical study using Airspace Dimension Assessment with nanoparticles (AiDA) to estimate distal airspace radius. The technology is relatively simple and potentially accessible in clinical outpatient settings. Method Nineteen never-smoking volunteers performed nanoparticle inhalation tests at multiple breath-hold times, and the difference in nanoparticle concentration between inhaled and exhaled gas was measured. An exponential decay curve was fitted to the concentration of recovered nanoparticles, and airspace dimensions were assessed from the half-life of the decay. Pulmonary tissue density was measured using magnetic resonance imaging (MRI). Results The distal airspace radius measured by AiDA correlated with lung tissue density as measured by MRI (ρ = −0.584; p = 0.0086). The linear intercept of the logarithm of the exponential decay curve correlated with forced expiratory volume in one second (FEV1) (ρ = 0.549; p = 0.0149). Conclusion The AiDA method shows potential to be developed into a tool to assess conditions involving changes in the distal airways, e.g., emphysema. The intercept may reflect airway properties; this finding should be investigated further.
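A schematic of the decay-fitting step, with fabricated numbers in place of study data (the exponential model and the half-life read-out are the only parts taken from the description above):

```python
# Fit an exponential decay to recovered-particle concentration vs breath-hold time.
import numpy as np
from scipy.optimize import curve_fit

t_hold = np.array([5.0, 7.0, 10.0, 15.0, 20.0])         # breath-hold times, s (made up)
recovered = np.array([0.62, 0.50, 0.37, 0.22, 0.14])     # exhaled/inhaled ratio (made up)

decay = lambda t, c0, k: c0 * np.exp(-k * t)
(c0, k), _ = curve_fit(decay, t_hold, recovered, p0=(1.0, 0.1))
print(f"half-life = {np.log(2) / k:.1f} s")              # used to infer airspace size
```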
Trends and associated uncertainty in the global mean temperature record
NASA Astrophysics Data System (ADS)
Poppick, A. N.; Moyer, E. J.; Stein, M.
2016-12-01
Physical models suggest that the Earth's mean temperature warms in response to changing CO2 concentrations (and hence increased radiative forcing); given physical uncertainties in this relationship, the historical temperature record is a source of empirical information about global warming. A persistent thread in many analyses of the historical temperature record, however, is the reliance on methods that appear to deemphasize both physical and statistical assumptions. Examples include regression models that treat time rather than radiative forcing as the relevant covariate, and time series methods that account for natural variability in nonparametric rather than parametric ways. We show here that methods that deemphasize assumptions can limit the scope of analysis and can lead to misleading inferences, particularly in the setting considered where the data record is relatively short and the scale of temporal correlation is relatively long. A proposed model that is simple but physically informed provides a more reliable estimate of trends and allows a broader array of questions to be addressed. In accounting for uncertainty, we also illustrate how parametric statistical models that are attuned to the important characteristics of natural variability can be more reliable than ostensibly more flexible approaches.
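A schematic of the contrast drawn above, on synthetic data (the forcing history, noise model and effective-sample-size correction are illustrative assumptions, not the authors' analysis): regress temperature on radiative forcing rather than on time, and account for AR(1) natural variability parametrically when quoting the uncertainty.

```python
# Trend estimation with a physically informed covariate and AR(1)-aware uncertainty.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1880, 2020)
forcing = 2.5 * ((years - 1880) / 140.0) ** 2           # toy forcing history, W/m^2
noise = np.zeros(len(years))
for i in range(1, len(years)):                          # AR(1) natural variability
    noise[i] = 0.6 * noise[i - 1] + 0.1 * rng.standard_normal()
temp = 0.5 * forcing + noise                            # synthetic temperature, K

X = np.column_stack([np.ones_like(forcing), forcing])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
resid = temp - X @ beta
rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]
n_eff = len(temp) * (1 - rho) / (1 + rho)               # effective sample size
sigma2 = resid @ resid / (len(temp) - 2)
se_naive = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
se_adj = se_naive * np.sqrt(len(temp) / n_eff)          # inflate SE for autocorrelation
print(f"sensitivity = {beta[1]:.2f} K per W/m^2, SE {se_naive:.3f} -> {se_adj:.3f}")
```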
An approach for modeling sediment budgets in supply-limited rivers
Wright, Scott A.; Topping, David J.; Rubin, David M.; Melis, Theodore S.
2010-01-01
Reliable predictions of sediment transport and river morphology in response to variations in natural and human-induced drivers are necessary for river engineering and management. Because engineering and management applications may span a wide range of space and time scales, a broad spectrum of modeling approaches has been developed, ranging from suspended-sediment "rating curves" to complex three-dimensional morphodynamic models. Suspended sediment rating curves are an attractive approach for evaluating changes in multi-year sediment budgets resulting from changes in flow regimes because they are simple to implement, computationally efficient, and the empirical parameters can be estimated from quantities that are commonly measured in the field (i.e., suspended sediment concentration and water discharge). However, the standard rating curve approach assumes a unique suspended sediment concentration for a given water discharge. This assumption is not valid in rivers where sediment supply varies enough to cause changes in particle size or changes in areal coverage of sediment on the bed; both of these changes cause variations in suspended sediment concentration for a given water discharge. More complex numerical models of hydraulics and morphodynamics have been developed to address such physical changes of the bed. This additional complexity comes at a cost in terms of computations as well as the type and amount of data required for model setup, calibration, and testing. Moreover, application of the resulting sediment-transport models may require bed-sediment boundary conditions that demand extensive (and expensive) field observations or, alternatively, the use of an additional model (subject to its own errors) merely to predict the bed-sediment boundary conditions for use by the transport model. In this paper we present a hybrid approach that combines aspects of the rating curve method and the more complex morphodynamic models. Our primary objective was to develop an approach complex enough to capture the processes related to sediment supply limitation but simple enough to allow for rapid calculations of multi-year sediment budgets. The approach relies on empirical relations between suspended sediment concentration and discharge, but on a particle-size-specific basis, and also tracks and incorporates the particle size distribution of the bed sediment. We have applied this approach to the Colorado River below Glen Canyon Dam (GCD), a reach that is particularly suited to such an approach because it is substantially sediment supply limited such that transport rates are strongly dependent on both water discharge and sediment supply. The results confirm the ability of the approach to simulate the effects of supply limitation, including periods of accumulation and bed fining as well as erosion and bed coarsening, using a very simple formulation. Although more empirical in nature than standard one-dimensional morphodynamic models, this alternative approach is attractive because its simplicity allows for rapid evaluation of multi-year sediment budgets under a range of flow regimes and sediment supply conditions, and also because it requires substantially less data for model setup and use.
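For orientation, a minimal sketch of the standard rating-curve ingredient the hybrid approach builds on, fitted to made-up numbers for a single grain-size class (the paper's model additionally tracks the bed grain-size distribution):

```python
# Fit the power-law rating curve C = a * Q^b by least squares in log-log space.
import numpy as np

Q = np.array([120., 250., 400., 800., 1500.])     # water discharge, m^3/s (made up)
C = np.array([15., 40., 70., 180., 420.])         # suspended sand conc., mg/L (made up)

b, log_a = np.polyfit(np.log(Q), np.log(C), 1)
a = np.exp(log_a)
print(f"C = {a:.3f} * Q^{b:.2f}")
print("predicted C at 1000 m^3/s:", a * 1000.0 ** b)
```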
Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O
2018-01-01
Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and hence, plotable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from a MGLMM, provide a good approximation and visual representation of these latent association analyses using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing outcomes that are interrelated with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage results in different mean and covariance parameter estimates from the maximum likelihood estimates generated by a MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases. Thus if computable, scatterplots of the conditionally independent empirical Bayes predictors from a MGLMM are always preferable to scatterplots of empirical Bayes predictors generated by separate models, unless the true association between outcomes is zero.
A. Bytnerowicz; R.F. Johnson; L. Zhang; G.D. Jenerette; M.E. Fenn; S.L. Schilling; I. Gonzalez-Fernandez
2015-01-01
The empirical inferential method (EIM) allows for spatially and temporally-dense estimates of atmospheric nitrogen (N) deposition to Mediterranean ecosystems. This method, set within a GIS platform, is based on ambient concentrations of NH3, NO, NO2 and HNO3; surface conductance of NH4...
A New Sample Size Formula for Regression.
ERIC Educational Resources Information Center
Brooks, Gordon P.; Barcikowski, Robert S.
The focus of this research was to determine the efficacy of a new method of selecting sample sizes for multiple linear regression. A Monte Carlo simulation was used to study both empirical predictive power rates and empirical statistical power rates of the new method and seven other methods: those of C. N. Park and A. L. Dudycha (1974); J. Cohen…
Testing a single regression coefficient in high dimensional linear models
Zhong, Ping-Shou; Li, Runze; Wang, Hansheng; Tsai, Chih-Ling
2017-01-01
In linear regression models with high dimensional data, the classical z-test (or t-test) for testing the significance of each single regression coefficient is no longer applicable. This is mainly because the number of covariates exceeds the sample size. In this paper, we propose a simple and novel alternative by introducing the Correlated Predictors Screening (CPS) method to control for predictors that are highly correlated with the target covariate. Accordingly, the classical ordinary least squares approach can be employed to estimate the regression coefficient associated with the target covariate. In addition, we demonstrate that the resulting estimator is consistent and asymptotically normal even if the random errors are heteroscedastic. This enables us to apply the z-test to assess the significance of each covariate. Based on the p-value obtained from testing the significance of each covariate, we further conduct multiple hypothesis testing by controlling the false discovery rate at the nominal level. Then, we show that the multiple hypothesis testing achieves consistent model selection. Simulation studies and empirical examples are presented to illustrate the finite sample performance and the usefulness of the proposed method, respectively. PMID:28663668
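A loose sketch of the screening-then-OLS idea described above (the screening-set size, the correlation screen and the toy data are assumptions, not the paper's exact procedure):

```python
# Screen the predictors most correlated with the target covariate, then z-test via OLS.
import numpy as np
from scipy import stats

def cps_test(y, X, j, n_keep=5):
    """Test the coefficient of column j when X has more columns than rows."""
    xj = X[:, j]
    others = np.delete(np.arange(X.shape[1]), j)
    corr = np.abs([np.corrcoef(xj, X[:, k])[0, 1] for k in others])
    keep = others[np.argsort(corr)[-n_keep:]]          # most correlated controls
    Z = np.column_stack([np.ones(len(y)), xj, X[:, keep]])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    cov = resid @ resid / (len(y) - Z.shape[1]) * np.linalg.inv(Z.T @ Z)
    z = beta[1] / np.sqrt(cov[1, 1])
    return z, 2 * stats.norm.sf(abs(z))

rng = np.random.default_rng(0)
n, p = 100, 500
X = rng.standard_normal((n, p))
y = 0.8 * X[:, 0] + rng.standard_normal(n)
print(cps_test(y, X, j=0))   # should reject; try j=1 for a null covariate
```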
Feller processes: the next generation in modeling. Brownian motion, Lévy processes and beyond.
Böttcher, Björn
2010-12-03
We present a simple construction method for Feller processes and a framework for the generation of sample paths of Feller processes. The construction is based on state space dependent mixing of Lévy processes. Brownian motion is one of the most frequently used continuous time Markov processes in applications. In recent years Lévy processes, of which Brownian motion is a special case, have also become increasingly popular. Lévy processes are spatially homogeneous, but empirical data often suggest the use of spatially inhomogeneous processes. Thus it seems necessary to go to the next level of generalization: Feller processes. These include Lévy processes, and in particular Brownian motion, as special cases but allow spatial inhomogeneities. Many properties of Feller processes are known, but proving their very existence is, in general, very technical. Moreover, an applicable framework for the generation of sample paths of a Feller process was missing. We explain, with practitioners in mind, how to overcome both of these obstacles. In particular, our simulation technique allows Monte Carlo methods to be applied to Feller processes.
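A very loose illustration of "state space dependent" behaviour (this is not the paper's construction or simulation framework; the coefficients and the Euler-type stepping are assumptions): a diffusion whose volatility and jump intensity depend on the current position.

```python
# Euler-type simulation of a spatially inhomogeneous jump-diffusion (illustration only).
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps = 0.001, 5000
x = np.zeros(n_steps + 1)

for i in range(n_steps):
    sigma = 0.5 + 0.3 * np.tanh(x[i])            # state-dependent volatility
    lam = 2.0 / (1.0 + x[i] ** 2)                # state-dependent jump intensity
    jump = rng.normal(0.0, 0.2) if rng.random() < lam * dt else 0.0
    x[i + 1] = x[i] + sigma * np.sqrt(dt) * rng.standard_normal() + jump

print(x[-1], np.abs(np.diff(x)).max())           # final state and largest increment
```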
Analysis of swarm behaviors based on an inversion of the fluctuation theorem.
Hamann, Heiko; Schmickl, Thomas; Crailsheim, Karl
2014-01-01
A grand challenge in the field of artificial life is to find a general theory of emergent self-organizing systems. In swarm systems most of the observed complexity is based on motion of simple entities. Similarly, statistical mechanics focuses on collective properties induced by the motion of many interacting particles. In this article we apply methods from statistical mechanics to swarm systems. We try to explain the emergent behavior of a simulated swarm by applying methods based on the fluctuation theorem. Empirical results indicate that swarms are able to produce negative entropy within an isolated subsystem due to frozen accidents. Individuals of a swarm are able to locally detect fluctuations of the global entropy measure and store them, if they are negative entropy productions. By accumulating these stored fluctuations over time the swarm as a whole is producing negative entropy and the system ends up in an ordered state. We claim that this indicates the existence of an inverted fluctuation theorem for emergent self-organizing dissipative systems. This approach bears the potential of general applicability.
Miguel, Elizabeth L M; Silva, Poliana L; Pliego, Josefredo R
2014-05-29
Methanol is a widely used solvent for chemical reactions and has solvation properties similar to those of water. However, the performance of continuum solvation models in this solvent has not been tested yet. In this report, we have investigated the performance of the SM8 and SMD models for pKa prediction of 26 carboxylic acids, 24 phenols, and 23 amines in methanol. The gas phase contribution was included at the X3LYP/TZVPP+diff//X3LYP/DZV+P(d) level. Using the proton exchange reaction with acetic acid, phenol, and ammonia as reference species leads to RMS errors in the range of 1.4 to 3.6 pKa units. This finding suggests that the performance of the continuum models for methanol is similar to that found for aqueous solvent. Application of a simple empirical correction through a linear equation leads to accurate pKa prediction, with uncertainty less than 0.8 units with the SM8 method. Testing with the less expensive PBE1PBE/6-311+G** method yields slightly improved results.
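A generic sketch of the kind of linear empirical correction mentioned above, with placeholder numbers rather than the paper's computed and experimental pKa values:

```python
# Linear empirical correction of computed pKa values against experimental ones.
import numpy as np

pka_calc = np.array([6.1, 7.9, 9.4, 11.2, 13.0])   # raw continuum-model values (made up)
pka_exp = np.array([5.0, 6.6, 8.1, 9.9, 11.5])     # experimental values (made up)

slope, intercept = np.polyfit(pka_calc, pka_exp, 1)
corrected = slope * 8.5 + intercept                # correct a new prediction of 8.5
print(f"pKa_corr = {slope:.2f} * pKa_calc + {intercept:.2f} -> {corrected:.2f}")
```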
NASA Astrophysics Data System (ADS)
Droghei, Riccardo; Salusti, Ettore
2013-04-01
Control of drilling parameters such as fluid pressure, mud weight and salt concentration is essential to avoid instabilities when drilling through shale sections. To investigate shale deformation, fundamental for deep oil drilling and hydraulic fracturing for gas extraction ("fracking"), a non-linear model of mechanical and chemo-poroelastic interactions among fluid, solute and the solid matrix is discussed here. The two equations of this model describe the isothermal evolution of fluid pressure and solute density in a fluid-saturated porous rock. Their solutions are quick non-linear Burgers-type solitary waves, potentially destructive for deep operations. In this analysis the effect of diffusion, which can play a particular role in fracking, is investigated. Then, following Civan (1998), both diffusive and shock waves are applied to the filtration of fine particles driven by such quick transients, their effect on the adjacent rocks, and the resulting time-delayed evolution. Notice that time delays in simple porous media dynamics have recently been analyzed using a fractional derivative approach. To make a tentative comparison of these two deeply different methods, in our model we insert fractional time derivatives, i.e. a kind of time-average of the fluid-rock interactions. The delaying effects of fine particle filtration are then compared with the time delays of the fractional model. All this can be seen as an empirical check of these fractional models.
NASA Astrophysics Data System (ADS)
Iwaki, A.; Fujiwara, H.
2012-12-01
Broadband ground motion computations of scenario earthquakes are often based on hybrid methods that combine a deterministic approach in the lower frequency band and a stochastic approach in the higher frequency band. Typical computation methods for low-frequency and high-frequency (LF and HF, respectively) ground motions are numerical simulations, such as finite-difference and finite-element methods based on a three-dimensional velocity structure model, and the stochastic Green's function method, respectively. In such hybrid methods, LF and HF wave fields are generated through two different methods that are completely independent of each other, and are combined at the matching frequency. However, LF and HF wave fields are essentially not independent as long as they are from the same event. In this study, we focus on the relation among acceleration envelopes at different frequency bands, and attempt to synthesize HF ground motion using the information extracted from LF ground motion, aiming to propose a new method for broadband strong motion prediction. Our study area is the Kanto area, Japan. We use the K-NET and KiK-net surface acceleration data and compute RMS envelopes in five frequency bands: 0.5-1.0 Hz, 1.0-2.0 Hz, 2.0-4.0 Hz, 4.0-8.0 Hz, and 8.0-16.0 Hz. Taking the ratio of the envelopes of adjacent bands, we find that the envelope ratios have stable shapes at each site. The empirical envelope-ratio characteristics are combined with the low-frequency envelope of the target earthquake to synthesize HF ground motion. We have applied the method to M5-class earthquakes and an M7 target earthquake that occurred in the vicinity of the Kanto area, and successfully reproduced the observed HF ground motion of the target earthquake. The method can be applied to broadband ground motion simulation for a scenario earthquake by combining numerically computed low-frequency (~1 Hz) ground motion with the empirical envelope-ratio characteristics to generate broadband ground motion. The strengths of the proposed method are that: 1) it is based on observed ground motion characteristics, 2) it takes full advantage of a precise velocity structure model, and 3) it is simple and easy to apply.
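A conceptual sketch of the envelope-ratio idea (filter design, window length, band edges and the synthetic record are assumptions, not the authors' processing): compute RMS envelopes in two adjacent bands, form their ratio, and scale the lower-band envelope by that empirical ratio to approximate the higher-band envelope.

```python
# Band-limited RMS envelopes and an empirical adjacent-band envelope ratio.
import numpy as np
from scipy.signal import butter, filtfilt

def rms_envelope(acc, fs, lo, hi, win=1.0):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    band = filtfilt(b, a, acc)
    n = int(win * fs)
    kernel = np.ones(n) / n
    return np.sqrt(np.convolve(band ** 2, kernel, mode="same"))

fs = 100.0
t = np.arange(0, 60, 1 / fs)
acc = np.exp(-((t - 20) / 8) ** 2) * np.random.default_rng(0).standard_normal(len(t))

env_lo = rms_envelope(acc, fs, 0.5, 1.0)
env_hi = rms_envelope(acc, fs, 1.0, 2.0)
ratio = np.median(env_hi / np.maximum(env_lo, 1e-12))   # empirical band ratio
env_hi_pred = ratio * env_lo                            # synthesized higher-band envelope
print(ratio, np.corrcoef(env_hi, env_hi_pred)[0, 1])
```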
Hertäg, Loreen; Hass, Joachim; Golovko, Tatiana; Durstewitz, Daniel
2012-01-01
For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard f-I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron's response under a wide range of mean-input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f-I current-clamp data, the model cells are tested on completely different sets of recordings obtained by fluctuating ("in vivo-like") input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model's generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a "high-throughput" model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data widely and easily available.
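As a stand-in illustration of fitting a closed-form rate expression to f-I data (the paper derives its expression from a reduced AdEx model; here the classic leaky integrate-and-fire formula is used instead, and all numbers are fabricated):

```python
# Fit f = 1 / (t_ref + tau * ln(I / (I - I_rh))) for I > I_rh to measured f-I points.
import numpy as np
from scipy.optimize import curve_fit

def lif_rate(I, t_ref, tau, I_rh):
    I = np.asarray(I, dtype=float)
    rate = np.zeros_like(I)
    supra = I > I_rh * 1.001                     # suprathreshold currents only
    rate[supra] = 1.0 / (t_ref + tau * np.log(I[supra] / (I[supra] - I_rh)))
    return rate

I_inj = np.array([0.10, 0.15, 0.20, 0.25, 0.30, 0.40])   # injected current, nA (made up)
rates = np.array([0.0, 4.2, 10.7, 15.7, 20.3, 29.0])     # firing rate, Hz (made up)

popt, _ = curve_fit(lif_rate, I_inj, rates, p0=(0.003, 0.05, 0.13),
                    bounds=([0, 0, 0], [0.02, 0.2, 0.149]))
print(dict(zip(["t_ref_s", "tau_s", "I_rh_nA"], np.round(popt, 4))))
```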
Harpold, Adrian A.; Burns, Douglas A.; Walter, M.T.; Steenhuis, Tammo S.
2013-01-01
Describing the distribution of aquatic habitats and the health of biological communities can be costly and time-consuming; therefore, simple, inexpensive methods to scale observations of aquatic biota to watersheds that lack data would be useful. In this study, we explored the potential of a simple “hydrogeomorphic” model to predict the effects of acid deposition on macroinvertebrate, fish, and diatom communities in 28 sub-watersheds of the 176-km² Neversink River basin in the Catskill Mountains of New York State. The empirical model was originally developed to predict stream-water acid neutralizing capacity (ANC) using the watershed slope and drainage density. Because ANC is known to be strongly related to aquatic biological communities in the Neversink, we speculated that the model might correlate well with biotic indicators of ANC response. The hydrogeomorphic model was strongly correlated to several measures of macroinvertebrate and fish community richness and density, but less strongly correlated to diatom acid tolerance. The model was also strongly correlated to biological communities in 18 sub-watersheds independent of the model development, with the linear correlation capturing the strongly acidic nature of small upland watersheds (2). Overall, we demonstrated the applicability of geospatial data sets and a simple hydrogeomorphic model for estimating aquatic biological communities in areas with stream-water acidification, allowing estimates where no direct field observations are available. Similar modeling approaches have the potential to complement or refine expensive and time-consuming measurements of aquatic biota populations and to aid in regional assessments of aquatic health.
Robust diffraction correction method for high-frequency ultrasonic tissue characterization
NASA Astrophysics Data System (ADS)
Raju, Balasundar
2004-05-01
The computation of quantitative ultrasonic parameters such as the attenuation or backscatter coefficient requires compensation for diffraction effects. In this work a simple and accurate diffraction correction method for skin characterization requiring only a single focal zone is developed. The advantage of this method is that the transducer need not be mechanically repositioned to collect data from several focal zones, thereby reducing imaging time and preventing motion artifacts. Data were first collected under controlled conditions from the skin of volunteers using a high-frequency system (center frequency=33 MHz, BW=28 MHz) at 19 focal zones through axial translation. Using these data, mean backscatter power spectra were computed as a function of the distance between the transducer and the tissue, which then served as empirical diffraction correction curves for subsequent data. The method was demonstrated on patients patch-tested for contact dermatitis. The computed attenuation coefficient slope was significantly (p<0.05) lower at the affected site (0.13+/-0.02 dB/mm/MHz) compared to nearby normal skin (0.2+/-0.05 dB/mm/MHz). The mean backscatter level was also significantly lower at the affected site (6.7+/-2.1 in arbitrary units) compared to normal skin (11.3+/-3.2). These results show that diffraction-corrected ultrasonic parameters can differentiate normal from affected skin tissue.
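A bare-bones sketch of an empirical diffraction correction of this kind (array shapes, the reference curve and the interpolation choice are assumptions, not the study's processing chain):

```python
# Normalize a measured power spectrum by an empirical reference curve at its distance.
import numpy as np

rng = np.random.default_rng(0)
ref_dist = np.linspace(8.0, 12.0, 19)                    # transducer-tissue distances, mm
# Reference power spectra acquired at known distances: (n_dist x n_freq), toy data.
ref_power = (1.0 / (1.0 + (ref_dist[:, None] - 10.0) ** 2)) * np.ones((19, 64)) \
            + 0.01 * rng.standard_normal((19, 64))

def diffraction_correct(power_spectrum, distance):
    # Interpolate the reference curve to the measurement distance, per frequency bin.
    ref_at_d = np.array([np.interp(distance, ref_dist, ref_power[:, k])
                         for k in range(ref_power.shape[1])])
    return power_spectrum / ref_at_d

measured = 0.8 / (1.0 + (10.6 - 10.0) ** 2) * np.ones(64)
print(diffraction_correct(measured, 10.6)[:4])           # ~0.8 across frequency bins
```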
Bias and Stability of Single Variable Classifiers for Feature Ranking and Selection
Fakhraei, Shobeir; Soltanian-Zadeh, Hamid; Fotouhi, Farshad
2014-01-01
Feature rankings are often used for supervised dimension reduction especially when discriminating power of each feature is of interest, dimensionality of dataset is extremely high, or computational power is limited to perform more complicated methods. In practice, it is recommended to start dimension reduction via simple methods such as feature rankings before applying more complex approaches. Single Variable Classifier (SVC) ranking is a feature ranking based on the predictive performance of a classifier built using only a single feature. While benefiting from capabilities of classifiers, this ranking method is not as computationally intensive as wrappers. In this paper, we report the results of an extensive study on the bias and stability of such feature ranking method. We study whether the classifiers influence the SVC rankings or the discriminative power of features themselves has a dominant impact on the final rankings. We show the common intuition of using the same classifier for feature ranking and final classification does not always result in the best prediction performance. We then study if heterogeneous classifiers ensemble approaches provide more unbiased rankings and if they improve final classification performance. Furthermore, we calculate an empirical prediction performance loss for using the same classifier in SVC feature ranking and final classification from the optimal choices. PMID:25177107
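A minimal sketch of SVC ranking (the classifier, scoring and dataset are arbitrary choices, not the paper's setup): score each feature by the cross-validated accuracy of a classifier trained on that feature alone, then rank the features by that score.

```python
# Single Variable Classifier (SVC) feature ranking with a one-feature classifier per score.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
scores = [cross_val_score(LogisticRegression(max_iter=1000), X[:, [j]], y, cv=5).mean()
          for j in range(X.shape[1])]
ranking = np.argsort(scores)[::-1]
print("top 5 features by SVC score:", ranking[:5])
```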
Improving Predictions of Multiple Binary Models in ILP
2014-01-01
Despite the success of ILP systems in learning first-order rules from small numbers of examples and complexly structured data in various domains, they struggle to deal with multiclass problems. In most cases they boil a multiclass problem down into multiple black-box binary problems following the one-versus-one or one-versus-rest binarisation techniques and learn a theory for each one. When evaluating the learned theories of multiple class problems, in the one-versus-rest paradigm particularly, there is a bias caused by the default rule toward the negative classes, leading to unrealistically high performance, besides the lack of prediction integrity between the theories. Here we discuss the problem of using the one-versus-rest binarisation technique when it comes to evaluating multiclass data and propose several methods to remedy this problem. We also illustrate the methods and highlight their link to binary trees and Formal Concept Analysis (FCA). Our methods allow learning of a simple, consistent, and reliable multiclass theory by combining the rules of the multiple one-versus-rest theories into one rule list or rule set theory. Empirical evaluation over a number of data sets shows that our proposed methods produce coherent and accurate rule models from the rules learned by the ILP system Aleph. PMID:24696657
NASA Astrophysics Data System (ADS)
Grimaldi, S.; Petroselli, A.; Romano, N.
2012-04-01
The Soil Conservation Service - Curve Number (SCS-CN) method is a popular rainfall-runoff model that is widely used to estimate direct runoff from small and ungauged basins. The SCS-CN is a simple and valuable approach for estimating the total stream-flow volume generated by a storm rainfall, but it was developed to be used with daily rainfall data. To overcome this drawback, we propose to include the Green-Ampt (GA) infiltration model in a mixed procedure, referred to as CN4GA (Curve Number for Green-Ampt), which aims to distribute in time the information provided by the SCS-CN method so as to provide estimates of sub-daily incremental rainfall excess. For a given storm, the computed SCS-CN total net rainfall amount is used to calibrate the soil hydraulic conductivity parameter of the Green-Ampt model. The proposed procedure was evaluated by analyzing 100 rainfall-runoff events observed in four small catchments of varying size. CN4GA appears to be an encouraging tool for predicting the net rainfall peak and duration values and has shown, at least for the test cases considered in this study, better agreement with observed hydrographs than the classic SCS-CN method.
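For reference, the standard SCS-CN runoff equation that CN4GA starts from (this is the textbook formula, not the Green-Ampt coupling itself):

```python
# Standard SCS-CN total storm runoff Q from rainfall P and curve number CN, in mm,
# with the usual initial abstraction Ia = 0.2 * S.
def scs_cn_runoff(P_mm, CN):
    S = 25400.0 / CN - 254.0          # potential maximum retention, mm
    Ia = 0.2 * S                      # initial abstraction, mm
    if P_mm <= Ia:
        return 0.0
    return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

print(scs_cn_runoff(60.0, 75))        # ~14.5 mm of direct runoff for a 60 mm storm
```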
Gordon, J.A.; Freedman, B.R.; Zuskov, A.; Iozzo, R.V.; Birk, D.E.; Soslowsky, L.J.
2015-01-01
Achilles tendons are a common source of pain and injury, and their pathology may originate from aberrant structure function relationships. Small leucine rich proteoglycans (SLRPs) influence mechanical and structural properties in a tendon-specific manner. However, their roles in the Achilles tendon have not been defined. The objective of this study was to evaluate the mechanical and structural differences observed in mouse Achilles tendons lacking class I SLRPs; either decorin or biglycan. In addition, empirical modeling techniques based on mechanical and image-based measures were employed. Achilles tendons from decorin-null (Dcn−/−) and biglycan-null (Bgn−/−) C57BL/6 female mice (N=102) were used. Each tendon underwent a dynamic mechanical testing protocol including simultaneous polarized light image capture to evaluate both structural and mechanical properties of each Achilles tendon. An empirical damage model was adapted for application to genetic variation and for use with image based structural properties to predict tendon dynamic mechanical properties. We found that Achilles tendons lacking decorin and biglycan had inferior mechanical and structural properties that were age dependent; and that simple empirical models, based on previously described damage models, were predictive of Achilles tendon dynamic modulus in both decorin- and biglycan-null mice. PMID:25888014