Sample records for simple empirical correlation

  1. Redundant correlation effect on personalized recommendation

    NASA Astrophysics Data System (ADS)

    Qiu, Tian; Han, Teng-Yue; Zhong, Li-Xin; Zhang, Zi-Ke; Chen, Guang

    2014-02-01

    The high-order redundant correlation effect is investigated for a hybrid algorithm of heat conduction and mass diffusion (HHM), through both heat conduction biased (HCB) and mass diffusion biased (MDB) correlation redundancy elimination processes. The HCB and MDB algorithms do not introduce any additional tunable parameters, but keep the simple character of the original HHM. Based on two empirical datasets, Netflix and MovieLens, the HCB and MDB algorithms are found to show better recommendation accuracy than the HHM algorithm for both the overall objects and the cold objects. Our work suggests that properly eliminating the high-order redundant correlations can provide a simple and effective approach to accurate recommendation.

  2. Simple, empirical approach to predict neutron capture cross sections from nuclear masses

    NASA Astrophysics Data System (ADS)

    Couture, A.; Casten, R. F.; Cakirli, R. B.

    2017-12-01

    Background: Neutron capture cross sections are essential to understanding the astrophysical s and r processes, the modeling of nuclear reactor design and performance, and a wide variety of nuclear forensics applications. Often, cross sections are needed for nuclei where experimental measurements are difficult. Enormous effort, over many decades, has gone into attempting to develop sophisticated statistical reaction models to predict these cross sections. Such work has met with some success but is often unable to reproduce measured cross sections to better than 40%, and has limited predictive power, with predictions from different models rapidly diverging by an order of magnitude a few nucleons from the last measurement. Purpose: To develop a new approach to predicting neutron capture cross sections over broad ranges of nuclei that accounts for their values where known and that has reliable predictive power, with small uncertainties, for many nuclei where they are unknown. Methods: Experimental neutron capture cross sections were compared to empirical mass observables in regions of similar structure. Results: We present an extremely simple method, based solely on empirical mass observables, that correlates neutron capture cross sections in the critical energy range from a few keV to a couple hundred keV. We show that regional cross sections in medium and heavy mass nuclei are compactly correlated with the two-neutron separation energy. These correlations are easily exploited to predict unknown cross sections, often converting the usual extrapolations into more reliable interpolations. The method almost always reproduces existing data to within 25%, and estimated uncertainties are below about 40% up to 10 nucleons beyond known data. Conclusions: Neutron capture cross sections display a surprisingly strong connection to the two-neutron separation energy, a nuclear structure property. The simple, empirical correlations uncovered provide model-independent predictions of neutron capture cross sections, extending far from stability, including for nuclei of the highest sensitivity to r-process nucleosynthesis.

  3. Power-Laws and Scaling in Finance: Empirical Evidence and Simple Models

    NASA Astrophysics Data System (ADS)

    Bouchaud, Jean-Philippe

    We discuss several models that may explain the origin of power-law distributions and power-law correlations in financial time series. From an empirical point of view, the exponents describing the tails of the price increments distribution and the decay of the volatility correlations are rather robust and suggest universality. However, many of the models that appear naturally (for example, to account for the distribution of wealth) contain some multiplicative noise, which generically leads to non-universal exponents. Recent progress in the empirical study of volatility suggests that it results from some sort of multiplicative cascade. A convincing `microscopic' (i.e. trader-based) model that explains this observation is, however, not yet available. We discuss a rather generic mechanism for long-ranged volatility correlations based on the idea that agents constantly switch between active and inactive strategies depending on their relative performance.

  4. Dynamic correlations at different time-scales with empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Nava, Noemi; Di Matteo, T.; Aste, Tomaso

    2018-07-01

    We introduce a simple approach which combines Empirical Mode Decomposition (EMD) and Pearson's cross-correlations over rolling windows to quantify dynamic dependency at different time-scales. The EMD is a tool to separate time series into implicit components which oscillate at different time-scales. We apply this decomposition to intraday time series of the following three financial indices: the S&P 500 (USA), the IPC (Mexico) and the VIX (volatility index USA), obtaining time-varying multidimensional cross-correlations at different time-scales. The correlations computed over a rolling window are compared across the three indices, across the components at different time-scales and across different time lags. We uncover a rich heterogeneity of interactions, which depends on the time-scale and has important lead-lag relations that could have practical use for portfolio management, risk estimation and investment decisions.
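
    The pipeline sketched below illustrates the approach described in this record: decompose two series into intrinsic mode functions (IMFs) with EMD, then compute Pearson correlations over rolling windows between IMFs at matching time-scales. This is a minimal sketch with synthetic signals; the PyEMD package (installed as EMD-signal), the window length and all other parameter choices are assumptions, not the authors' exact implementation.

    ```python
    # Sketch of the EMD + rolling-correlation pipeline described above.
    # Assumes the PyEMD package (pip install EMD-signal); data are synthetic.
    import numpy as np
    import pandas as pd
    from PyEMD import EMD

    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 2000)
    x = np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal(t.size)        # index A
    y = np.sin(2 * np.pi * t + 0.5) + 0.3 * rng.standard_normal(t.size)  # index B

    emd = EMD()
    imfs_x = emd.emd(x)  # rows: IMFs, fastest oscillations first (residue last)
    imfs_y = emd.emd(y)

    # Rolling Pearson correlation between IMFs at comparable time-scales.
    n = min(len(imfs_x), len(imfs_y))
    window = 200
    for k in range(n):
        corr = (pd.Series(imfs_x[k])
                  .rolling(window)
                  .corr(pd.Series(imfs_y[k])))
        print(f"IMF {k}: mean rolling corr = {corr.mean():.3f}")
    ```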

  5. Statistical validity of using ratio variables in human kinetics research.

    PubMed

    Liu, Yuanlong; Schutz, Robert W

    2003-09-01

    The purposes of this study were to investigate the validity of the simple ratio and three alternative deflation models and to examine how the variation of the numerator and denominator variables affects the reliability of a ratio variable. A simple ratio and three alternative deflation models were fitted to four empirical data sets, and common criteria were applied to determine the best model for deflation. Intraclass correlation was used to examine the component effect on the reliability of a ratio variable. The results indicate that the validity of a deflation model depends on the statistical characteristics of the particular component variables used, and an optimal deflation model for all ratio variables may not exist. Therefore, it is recommended that different models be fitted to each empirical data set to determine the best deflation model. It was found that the reliability of a simple ratio is affected by the coefficients of variation and the within- and between-trial correlations between the numerator and denominator variables. It is further recommended that researchers compute the reliability of the derived ratio scores and not assume that strong reliabilities in the numerator and denominator measures automatically lead to high reliability in the ratio measures.

  6. Sample Size Estimation in Cluster Randomized Educational Trials: An Empirical Bayes Approach

    ERIC Educational Resources Information Center

    Rotondi, Michael A.; Donner, Allan

    2009-01-01

    The educational field has now accumulated an extensive literature reporting on values of the intraclass correlation coefficient, a parameter essential to determining the required size of a planned cluster randomized trial. We propose here a simple simulation-based approach including all relevant information that can facilitate this task. An…
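
    The record is truncated, but the role the intraclass correlation coefficient plays in trial sizing can be illustrated with the standard design-effect calculation that such planning rests on. This is a minimal sketch of the textbook formula, not the authors' simulation-based empirical Bayes procedure; all numbers are hypothetical.

    ```python
    # Standard design-effect calculation underlying cluster-randomized trial
    # sizing; a minimal sketch, not the authors' empirical Bayes procedure.
    import math

    def clusters_per_arm(n_individual: float, m: int, icc: float) -> int:
        """Clusters per arm after inflating an individually randomized
        sample size n_individual by the design effect 1 + (m - 1) * icc,
        where m is the (common) cluster size."""
        deff = 1.0 + (m - 1) * icc
        return math.ceil(n_individual * deff / m)

    # Example: 128 subjects/arm needed under individual randomization,
    # clusters of 25 pupils, ICC = 0.02 (a value typical of the literature).
    print(clusters_per_arm(128, 25, 0.02))  # -> 8 clusters per arm
    ```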

  7. Constrained range expansion and climate change assessments

    Treesearch

    Yohay Carmel; Curtis H. Flather

    2006-01-01

    Modeling the future distribution of keystone species has proved to be an important approach to assessing the potential ecological consequences of climate change (Loehle and LeBlanc 1996; Hansen et al. 2001). Predictions of range shifts are typically based on empirical models derived from simple correlative relationships between climatic characteristics of occupied and...

  8. Rates of profit as correlated sums of random variables

    NASA Astrophysics Data System (ADS)

    Greenblatt, R. E.

    2013-10-01

    Profit realization is the dominant feature of market-based economic systems, determining their dynamics to a large extent. Rather than attaining an equilibrium, profit rates vary widely across firms, and the variation persists over time. Differing definitions of profit result in differing empirical distributions. To study the statistical properties of profit rates, I used data from a publicly available database for the US economy for 2009-2010 (Risk Management Association). For each of three profit rate measures, the sample space consists of 771 points. Each point represents aggregate data from a small number of US manufacturing firms of similar size and type (NAICS code of principal product). When comparing the empirical distributions of profit rates, significant ‘heavy tails’ were observed, corresponding principally to a number of firms with larger profit rates than would be expected from simple models. An apparently novel statistical model, a correlated sum of random variables, was used to model the data. In the case of operating and net profit rates, a number of firms show negative profits (losses), ruling out simple gamma or lognormal distributions as complete models for these data.

  9. Volatility and correlation-based systemic risk measures in the US market

    NASA Astrophysics Data System (ADS)

    Civitarese, Jamil

    2016-10-01

    This paper deals with the problem of how to use simple systemic risk measures to assess portfolio risk characteristics. Using three simple examples taken from previous literature, one based on raw and partial correlations, another based on the eigenvalue decomposition of the covariance matrix and the last one based on an eigenvalue entropy, a Granger-causation analysis revealed some of them are not always a good measure of risk in the S&P 500 and in the VIX. The measures selected do not Granger-cause the VIX index in all windows selected; therefore, in the sense of risk as volatility, the indicators are not always suitable. Nevertheless, their results towards returns are similar to previous works that accept them. A deeper analysis has shown that any symmetric measure based on eigenvalue decomposition of correlation matrices, however, is not useful as a measure of "correlation" risk. The empirical counterpart analysis of this proposition stated that negative correlations are usually small and, therefore, do not heavily distort the behavior of the indicator.

  10. ESTIMATION OF FUNCTIONALS OF SPARSE COVARIANCE MATRICES.

    PubMed

    Fan, Jianqing; Rigollet, Philippe; Wang, Weichen

    High-dimensional statistical tests often ignore correlations to gain simplicity and stability, leading to null distributions that depend on functionals of correlation matrices such as their Frobenius norm and other ℓr norms. Motivated by the computation of critical values of such tests, we investigate the difficulty of estimating functionals of sparse correlation matrices. Specifically, we show that simple plug-in procedures based on thresholded estimators of correlation matrices are sparsity-adaptive and minimax optimal over a large class of correlation matrices. Akin to previous results on functional estimation, the minimax rates exhibit an elbow phenomenon. Our results are further illustrated in simulated data as well as an empirical study of data arising in financial econometrics.
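
    A minimal sketch of the plug-in idea described in this abstract: hard-threshold the entries of the sample correlation matrix at a level of order sqrt(log(p)/n), then evaluate the functional on the thresholded estimate. The constant c and the off-diagonal Frobenius functional below are illustrative assumptions, not the paper's exact tuning.

    ```python
    # Plug-in estimate of a correlation-matrix functional from thresholded
    # sample correlations; a sketch of the idea, not the paper's exact method.
    import numpy as np

    def thresholded_frobenius(X: np.ndarray, c: float = 2.0) -> float:
        """Frobenius norm of the off-diagonal sample correlation matrix
        after hard-thresholding entries below c * sqrt(log(p)/n)."""
        n, p = X.shape
        R = np.corrcoef(X, rowvar=False)
        tau = c * np.sqrt(np.log(p) / n)          # standard sparsity threshold
        R_thr = np.where(np.abs(R) >= tau, R, 0.0)
        np.fill_diagonal(R_thr, 0.0)              # functional of off-diagonals
        return float(np.linalg.norm(R_thr, "fro"))

    rng = np.random.default_rng(1)
    X = rng.standard_normal((500, 50))            # null data: functional ~ 0
    print(thresholded_frobenius(X))
    ```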

  11. ESTIMATION OF FUNCTIONALS OF SPARSE COVARIANCE MATRICES

    PubMed Central

    Fan, Jianqing; Rigollet, Philippe; Wang, Weichen

    2016-01-01

    High-dimensional statistical tests often ignore correlations to gain simplicity and stability, leading to null distributions that depend on functionals of correlation matrices such as their Frobenius norm and other ℓr norms. Motivated by the computation of critical values of such tests, we investigate the difficulty of estimating functionals of sparse correlation matrices. Specifically, we show that simple plug-in procedures based on thresholded estimators of correlation matrices are sparsity-adaptive and minimax optimal over a large class of correlation matrices. Akin to previous results on functional estimation, the minimax rates exhibit an elbow phenomenon. Our results are further illustrated in simulated data as well as an empirical study of data arising in financial econometrics. PMID:26806986

  12. Are relationships between pollen-ovule ratio and pollen and seed size explained by sex allocation?

    PubMed

    Burd, Martin

    2011-10-01

    Positive correlations between pollen-ovule ratio and seed size, and negative correlations between pollen-ovule ratio and pollen grain size have been noted frequently in a wide variety of angiosperm taxa. These relationships are commonly explained as a consequence of sex allocation on the basis of a simple model proposed by Charnov. Indeed, the theoretical expectation from the model has been the basis for interest in the empirical pattern. However, the predicted relationship is a necessary consequence of the mathematics of the model, which therefore has little explanatory power, even though its predictions are consistent with empirical results. The evolution of pollen-ovule ratios is likely to depend on selective factors affecting mating system, pollen presentation and dispensing, patterns of pollen receipt, pollen tube competition, female mate choice through embryo abortion, as well as genetic covariances among pollen, ovule, and seed size and other reproductive traits. To the extent the empirical correlations involving pollen-ovule ratios are interesting, they will need explanation in terms of a suite of selective factors. They are not explained simply by sex allocation trade-offs.

  13. Macro-evolutionary studies of cultural diversity: a review of empirical studies of cultural transmission and cultural adaptation.

    PubMed

    Mace, Ruth; Jordan, Fiona M

    2011-02-12

    A growing body of theoretical and empirical research has examined cultural transmission and adaptive cultural behaviour at the individual, within-group level. However, relatively few studies have tried to examine proximate transmission or test ultimate adaptive hypotheses about behavioural or cultural diversity at a between-societies macro-level. In both the history of anthropology and in present-day work, a common approach to examining adaptive behaviour at the macro-level has been through correlating various cultural traits with features of ecology. We discuss some difficulties with simple ecological associations, and then review cultural phylogenetic studies that have attempted to go beyond correlations to understand the underlying cultural evolutionary processes. We conclude with an example of a phylogenetically controlled approach to understanding proximate transmission pathways in Austronesian cultural diversity.

  14. Macro-evolutionary studies of cultural diversity: a review of empirical studies of cultural transmission and cultural adaptation

    PubMed Central

    Mace, Ruth; Jordan, Fiona M.

    2011-01-01

    A growing body of theoretical and empirical research has examined cultural transmission and adaptive cultural behaviour at the individual, within-group level. However, relatively few studies have tried to examine proximate transmission or test ultimate adaptive hypotheses about behavioural or cultural diversity at a between-societies macro-level. In both the history of anthropology and in present-day work, a common approach to examining adaptive behaviour at the macro-level has been through correlating various cultural traits with features of ecology. We discuss some difficulties with simple ecological associations, and then review cultural phylogenetic studies that have attempted to go beyond correlations to understand the underlying cultural evolutionary processes. We conclude with an example of a phylogenetically controlled approach to understanding proximate transmission pathways in Austronesian cultural diversity. PMID:21199844

  15. Evaluation of Phytoavailability of Heavy Metals to Chinese Cabbage (Brassica chinensis L.) in Rural Soils

    PubMed Central

    Hseu, Zeng-Yei; Zehetner, Franz

    2014-01-01

    This study compared the extractability of Cd, Cu, Ni, Pb, and Zn by 8 extraction protocols for 22 representative rural soils in Taiwan and correlated the extractable amounts of the metals with their uptake by Chinese cabbage for developing an empirical model to predict metal phytoavailability based on soil properties. Chemical agents in these protocols included dilute acids, neutral salts, and chelating agents, in addition to water and the Rhizon soil solution sampler. The highest concentrations of extractable metals were observed in the HCl extraction and the lowest in the Rhizon sampling method. The linear correlation coefficients between extractable metals in soil pools and metals in shoots were higher than those in roots. Correlations between extractable metal concentrations and soil properties were variable; soil pH, clay content, total metal content, and extractable metal concentration were considered together to simulate their combined effects on crop uptake by an empirical model. This combination improved the correlations to different extents for different extraction methods, particularly for Pb, for which the extractable amounts with any extraction protocol did not correlate with crop uptake by simple correlation analysis. PMID:25295297
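
    An empirical phytoavailability model of the kind described here, in which soil pH, clay content and extractable metal concentration are considered together, can be sketched as an ordinary least-squares regression. All data and coefficients below are hypothetical placeholders, not the study's measurements.

    ```python
    # Minimal sketch of an empirical uptake model of the kind described
    # above: regress crop uptake on soil pH, clay content, and extractable
    # metal. All data below are hypothetical placeholders.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 22                                    # number of soils in the study
    pH = rng.uniform(4.5, 8.0, n)
    clay = rng.uniform(5, 45, n)              # percent
    extractable = rng.lognormal(0.0, 0.8, n)  # mg/kg, e.g. HCl-extractable Cd
    uptake = 0.5 * extractable - 0.2 * pH + 0.01 * clay + rng.normal(0, 0.2, n)

    # Ordinary least squares with an intercept column.
    A = np.column_stack([np.ones(n), pH, clay, extractable])
    coef, *_ = np.linalg.lstsq(A, uptake, rcond=None)
    pred = A @ coef
    r = np.corrcoef(pred, uptake)[0, 1]
    print(f"fitted coefficients: {coef.round(3)}, r = {r:.2f}")
    ```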

  16. Theory of Financial Risk and Derivative Pricing

    NASA Astrophysics Data System (ADS)

    Bouchaud, Jean-Philippe; Potters, Marc

    2009-01-01

    Foreword; Preface; 1. Probability theory: basic notions; 2. Maximum and addition of random variables; 3. Continuous time limit, Ito calculus and path integrals; 4. Analysis of empirical data; 5. Financial products and financial markets; 6. Statistics of real prices: basic results; 7. Non-linear correlations and volatility fluctuations; 8. Skewness and price-volatility correlations; 9. Cross-correlations; 10. Risk measures; 11. Extreme correlations and variety; 12. Optimal portfolios; 13. Futures and options: fundamental concepts; 14. Options: hedging and residual risk; 15. Options: the role of drift and correlations; 16. Options: the Black and Scholes model; 17. Options: some more specific problems; 18. Options: minimum variance Monte-Carlo; 19. The yield curve; 20. Simple mechanisms for anomalous price statistics; Index of most important symbols; Index.

  17. Theory of Financial Risk and Derivative Pricing - 2nd Edition

    NASA Astrophysics Data System (ADS)

    Bouchaud, Jean-Philippe; Potters, Marc

    2003-12-01

    Foreword; Preface; 1. Probability theory: basic notions; 2. Maximum and addition of random variables; 3. Continuous time limit, Ito calculus and path integrals; 4. Analysis of empirical data; 5. Financial products and financial markets; 6. Statistics of real prices: basic results; 7. Non-linear correlations and volatility fluctuations; 8. Skewness and price-volatility correlations; 9. Cross-correlations; 10. Risk measures; 11. Extreme correlations and variety; 12. Optimal portfolios; 13. Futures and options: fundamental concepts; 14. Options: hedging and residual risk; 15. Options: the role of drift and correlations; 16. Options: the Black and Scholes model; 17. Options: some more specific problems; 18. Options: minimum variance Monte-Carlo; 19. The yield curve; 20. Simple mechanisms for anomalous price statistics; Index of most important symbols; Index.

  18. Heat-transfer processes in air-cooled engine cylinders

    NASA Technical Reports Server (NTRS)

    Pinkel, Benjamin

    1938-01-01

    From a consideration of heat-transfer theory, semi-empirical expressions are set up for the transfer of heat from the combustion gases to the cylinder of an air-cooled engine and from the cylinder to the cooling air. Simple equations for the average head and barrel temperatures as functions of the important engine and cooling variables are obtained from these expressions. The expressions involve a few empirical constants, which may be readily determined from engine tests. Numerical values for these constants were obtained from single-cylinder engine tests for cylinders of the Pratt & Whitney 1535 and 1340-h engines. The equations provide a means of calculating the effect of the various engine and cooling variables on the cylinder temperatures and also of correlating the results of engine cooling tests. An example is given of the application of the equations to the correlation of cooling-test data obtained in flight.

  19. Wind-chill-equivalent temperatures: regarding the impact due to the variability of the environmental convective heat transfer coefficient.

    PubMed

    Shitzer, Avraham

    2006-03-01

    The wind-chill index (WCI), developed in Antarctica in the 1940s and recently updated by the weather services in the USA and Canada, expresses the enhancement of heat loss in cold climates from exposed body parts, e.g., face, due to wind. The index provides a simple and practical means for assessing the thermal effects of wind on humans outdoors. It is also used for indicating weather conditions that may pose adverse risks of freezing at subfreezing environmental temperatures. Values of the WCI depend on a number of parameters, i.e., temperatures, physical properties of the air, wind speed, etc., and on insolation and evaporation. This paper focuses on the effects of various empirical correlations used in the literature for calculating the convective heat transfer coefficients between humans and their environment. Insolation and evaporation are not included in the presentation. Large differences in calculated values among these correlations are demonstrated and quantified. Steady-state wind-chill-equivalent temperatures (WCETs) are estimated by a simple, one-dimensional heat-conducting hollow-cylindrical model using these empirical correlations. Partial comparison of these values with the published "new" WCETs is presented. The variability of the estimated WCETs, due to different correlations employed to calculate them, is clearly demonstrated. The results of this study clearly suggest the need for establishing a "gold standard" for estimating convective heat exchange between exposed body elements and the cold and windy environment. This should be done prior to the introduction and adoption of further modifications to WCETs and indices. Correlations to estimate the convective heat transfer coefficients between exposed body parts of humans in windy and cold environments influence the WCETs and need to be standardized.
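
    A minimal sketch of the one-dimensional hollow-cylinder calculation described in this abstract: compute the heat loss under windy conditions with a wind-dependent convective coefficient, then solve for the calm-air ambient temperature that produces the same loss. The power-law form of h(V) and all parameter values are illustrative assumptions, not one of the paper's correlations; the strong sensitivity of the result to the chosen h-correlation is exactly the paper's point.

    ```python
    # One-dimensional steady-state hollow-cylinder sketch of the WCET idea:
    # equate windy-condition heat loss to calm-condition loss and solve for
    # the equivalent calm-air temperature. The form h = C * V**0.6 and all
    # parameter values are illustrative assumptions; the output will not
    # reproduce published WCET tables.
    import math

    T_CORE = 35.0               # deg C, tissue temperature at inner radius
    R_IN, R_OUT = 0.014, 0.018  # m, hollow-cylinder radii (face-like element)
    K_WALL = 0.05               # W/(m K), effective wall conductivity (assumed)
    V_CALM = 1.34               # m/s, calm-air reference wind (walking speed)

    def h_conv(v: float, c: float = 8.3) -> float:
        """Assumed convective-coefficient correlation, W/(m^2 K)."""
        return c * v ** 0.6

    def wcet(t_env: float, v_wind: float) -> float:
        r_wall = math.log(R_OUT / R_IN) / (2 * math.pi * K_WALL)
        r_windy = r_wall + 1 / (h_conv(v_wind) * 2 * math.pi * R_OUT)
        q = (T_CORE - t_env) / r_windy             # W per metre of cylinder
        r_calm = r_wall + 1 / (h_conv(V_CALM) * 2 * math.pi * R_OUT)
        return T_CORE - q * r_calm                 # calm temp with equal loss

    print(f"WCET at -10 C, 10 m/s wind: {wcet(-10.0, 10.0):.1f} C")
    ```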

  20. Wind-chill-equivalent temperatures: regarding the impact due to the variability of the environmental convective heat transfer coefficient

    NASA Astrophysics Data System (ADS)

    Shitzer, Avraham

    2006-03-01

    The wind-chill index (WCI), developed in Antarctica in the 1940s and recently updated by the weather services in the USA and Canada, expresses the enhancement of heat loss in cold climates from exposed body parts, e.g., face, due to wind. The index provides a simple and practical means for assessing the thermal effects of wind on humans outdoors. It is also used for indicating weather conditions that may pose adverse risks of freezing at subfreezing environmental temperatures. Values of the WCI depend on a number of parameters, i.e., temperatures, physical properties of the air, wind speed, etc., and on insolation and evaporation. This paper focuses on the effects of various empirical correlations used in the literature for calculating the convective heat transfer coefficients between humans and their environment. Insolation and evaporation are not included in the presentation. Large differences in calculated values among these correlations are demonstrated and quantified. Steady-state wind-chill-equivalent temperatures (WCETs) are estimated by a simple, one-dimensional heat-conducting hollow-cylindrical model using these empirical correlations. Partial comparison of these values with the published “new” WCETs is presented. The variability of the estimated WCETs, due to different correlations employed to calculate them, is clearly demonstrated. The results of this study clearly suggest the need for establishing a “gold standard” for estimating convective heat exchange between exposed body elements and the cold and windy environment. This should be done prior to the introduction and adoption of further modifications to WCETs and indices. Correlations to estimate the convective heat transfer coefficients between exposed body parts of humans in windy and cold environments influence the WCETs and need to be standardized.

  1. λ-Repressor Oligomerization Kinetics at High Concentrations Using Fluorescence Correlation Spectroscopy in Zero-Mode Waveguides

    PubMed Central

    Samiee, K. T.; Foquet, M.; Guo, L.; Cox, E. C.; Craighead, H. G.

    2005-01-01

    Fluorescence correlation spectroscopy (FCS) has demonstrated its utility for measuring transport properties and kinetics at low fluorophore concentrations. In this article, we demonstrate that simple optical nanostructures, known as zero-mode waveguides, can be used to significantly reduce the FCS observation volume. This, in turn, allows FCS to be applied to solutions with significantly higher fluorophore concentrations. We derive an empirical FCS model accounting for one-dimensional diffusion in a finite tube with a simple exponential observation profile. This technique is used to measure the oligomerization of the bacteriophage λ repressor protein at micromolar concentrations. The results agree with previous studies utilizing conventional techniques. Additionally, we demonstrate that the zero-mode waveguides can be used to assay biological activity by measuring changes in diffusion constant as a result of ligand binding. PMID:15613638

  2. On the complex relationship between energy expenditure and longevity: Reconciling the contradictory empirical results with a simple theoretical model.

    PubMed

    Hou, Chen; Amunugama, Kaushalya

    2015-07-01

    The relationship between energy expenditure and longevity has been a central theme in aging studies. Empirical studies have yielded controversial results, which cannot be reconciled by existing theories. In this paper, we present a simple theoretical model based on first principles of energy conservation and allometric scaling laws. The model takes into consideration the energy tradeoffs between life history traits and the efficiency of the energy utilization, and offers quantitative and qualitative explanations for a set of seemingly contradictory empirical results. We show that oxidative metabolism can affect cellular damage and longevity in different ways in animals with different life histories and under different experimental conditions. Qualitative data and the linearity between energy expenditure, cellular damage, and lifespan assumed in previous studies are not sufficient to understand the complexity of the relationships. Our model provides a theoretical framework for quantitative analyses and predictions. The model is supported by a variety of empirical studies, including studies on the cellular damage profile during ontogeny; the intra- and inter-specific correlations between body mass, metabolic rate, and lifespan; and the effects on lifespan of (1) diet restriction and genetic modification of growth hormone, (2) the cold and exercise stresses, and (3) manipulations of antioxidants.

  3. Time series with tailored nonlinearities

    NASA Astrophysics Data System (ADS)

    Räth, C.; Laut, I.

    2015-10-01

    It is demonstrated how to generate time series with tailored nonlinearities by inducing well-defined constraints on the Fourier phases. Relations between the phase information of adjacent phases and (static and dynamic) measures of nonlinearity are established, and their origin is explained. By applying a set of simple constraints on the phases of an originally linear and uncorrelated Gaussian time series, the observed scaling behavior of the intensity distribution of empirical time series can be reproduced. The power law character of the intensity distributions being typical for, e.g., turbulence and financial data can thus be explained in terms of phase correlations.
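
    The construction can be sketched as follows: take a linear, uncorrelated Gaussian series, keep its Fourier amplitudes, and replace the phases by a sequence with correlations between adjacent phases. The specific coupling below (a slow random walk across frequency) is an illustrative choice, not the paper's exact constraint.

    ```python
    # Sketch of inducing nonlinearity via Fourier-phase constraints: start
    # from a linear Gaussian series, then impose correlations between
    # adjacent phases. The coupling used here is illustrative only.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 4096
    x = rng.standard_normal(n)          # linear, uncorrelated Gaussian series

    X = np.fft.rfft(x)
    amps = np.abs(X)                    # keep the amplitude spectrum

    # Constraint: let the phases perform a slow random walk across
    # frequency, so that adjacent phases are strongly correlated.
    phases = np.cumsum(0.1 * rng.standard_normal(amps.size))
    phases[0] = 0.0                     # keep DC and Nyquist bins real,
    phases[-1] = 0.0                    # as required for a real signal
    y = np.fft.irfft(amps * np.exp(1j * phases), n=n)

    # The amplitude spectrum (hence the linear autocorrelation) is
    # unchanged, while the phase structure alters higher-order statistics.
    print(np.allclose(np.abs(np.fft.rfft(y)), amps))  # -> True
    ```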

  4. New Approaches in Force-Limited Vibration Testing of Flight Hardware

    NASA Technical Reports Server (NTRS)

    Kolaini, Ali R.; Kern, Dennis L.

    2012-01-01

    To qualify flight hardware for random vibration environments, the following methods are used in the aerospace industry to limit the loads: (1) response limiting and notching, (2) the simple TDOF model, (3) semi-empirical force limits, (4) apparent mass, etc., and (5) the impedance method. In all these methods, attempts are made to remove conservatism due to the mismatch in impedances between the test and flight configurations of the hardware being qualified. The assumption is that the hardware interfaces have correlated responses. A new method that takes into account uncorrelated hardware interface responses is described in this presentation.

  5. Reconstructing the primordial spectrum of fluctuations of the universe from the observed nonlinear clustering of galaxies

    NASA Technical Reports Server (NTRS)

    Hamilton, A. J. S.; Matthews, Alex; Kumar, P.; Lu, Edward

    1991-01-01

    It was discovered that the nonlinear evolution of the two-point correlation function in N-body experiments of galaxy clustering with Omega = 1 appears to be described to good approximation by a simple general formula. The underlying form of the formula is physically motivated, but its detailed representation is obtained empirically by fitting to N-body experiments. In this paper, the formula is presented along with an inverse formula which converts a final, nonlinear correlation function into the initial linear correlation function. The inverse formula is applied to observational data from the CfA, IRAS, and APM galaxy surveys to recover the initial spectrum of fluctuations of the universe, assuming Omega = 1.

  6. The limitations of simple gene set enrichment analysis assuming gene independence.

    PubMed

    Tamayo, Pablo; Steinhardt, George; Liberzon, Arthur; Mesirov, Jill P

    2016-02-01

    Since its first publication in 2003, the Gene Set Enrichment Analysis method, based on the Kolmogorov-Smirnov statistic, has been heavily used, modified, and also questioned. Recently a simplified approach using a one-sample t-test score to assess enrichment and ignoring gene-gene correlations was proposed by Irizarry et al. 2009 as a serious contender. The argument criticizes Gene Set Enrichment Analysis's nonparametric nature and its use of an empirical null distribution as unnecessary and hard to compute. We refute these claims by careful consideration of the assumptions of the simplified method and its results, including a comparison with Gene Set Enrichment Analysis on a large benchmark set of 50 datasets. Our results provide strong empirical evidence that gene-gene correlations cannot be ignored, due to the significant variance inflation they produce on the enrichment scores, and should be taken into account when estimating gene set enrichment significance. In addition, we discuss the challenges that the complex correlation structure and multi-modality of gene sets pose more generally for gene set enrichment methods.
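
    The variance-inflation point can be demonstrated in a few lines: for n equicorrelated gene scores with average pairwise correlation ρ, the variance of the mean score is (σ²/n)(1 + (n − 1)ρ) rather than σ²/n. The simulation below is a generic illustration of that fact, not the paper's benchmark.

    ```python
    # Demonstration of the variance inflation that gene-gene correlation
    # induces in a mean-based enrichment score: for n equicorrelated genes,
    # Var(mean) = (sigma^2 / n) * (1 + (n - 1) * rho).
    import numpy as np

    rng = np.random.default_rng(4)
    n_genes, rho, n_sim = 50, 0.2, 20000

    # Equicorrelated Gaussian scores built from a shared factor.
    shared = rng.standard_normal((n_sim, 1))
    noise = rng.standard_normal((n_sim, n_genes))
    scores = np.sqrt(rho) * shared + np.sqrt(1 - rho) * noise

    set_score = scores.mean(axis=1)
    print(f"empirical Var(mean): {set_score.var():.4f}")
    print(f"independence theory: {1 / n_genes:.4f}")
    print(f"with inflation:      {(1 + (n_genes - 1) * rho) / n_genes:.4f}")
    ```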

  7. What can we learn about dispersion from the conformer surface of n-pentane?

    PubMed

    Martin, Jan M L

    2013-04-11

    In earlier work [Gruzman, D.; Karton, A.; Martin, J. M. L. J. Phys. Chem. A 2009, 113, 11974], we showed that conformer energies in alkanes (and other systems) are highly dispersion-driven and that uncorrected DFT functionals fail badly at reproducing them, while simple empirical dispersion corrections tend to overcorrect. To gain greater insight into the nature of the phenomenon, we have mapped the torsional surface of n-pentane to 10-degree resolution at the CCSD(T)-F12 level near the basis set limit. The data obtained have been decomposed by order of perturbation theory, excitation level, and same-spin vs opposite-spin character. A large number of approximate electronic structure methods have been considered, as well as several empirical dispersion corrections. Our chief conclusions are as follows: (a) the effect of dispersion is dominated by same-spin correlation (or triplet-pair correlation, from a different perspective); (b) singlet-pair correlation is important for the surface, but qualitatively very dissimilar to the dispersion component; (c) single and double excitations beyond third order are essentially unimportant for this surface; (d) connected triple excitations do play a role but are statistically very similar to the MP2 singlet-pair correlation; (e) the form of the damping function is crucial for good performance of empirical dispersion corrections; (f) at least in the lower-energy regions, SCS-MP2 and especially MP2.5 perform very well; (g) novel spin-component scaled double hybrid functionals such as DSD-PBEP86-D2 acquit themselves very well for this problem.

  8. Topology of correlation-based minimal spanning trees in real and model markets

    NASA Astrophysics Data System (ADS)

    Bonanno, Giovanni; Caldarelli, Guido; Lillo, Fabrizio; Mantegna, Rosario N.

    2003-10-01

    We compare the topological properties of the minimal spanning tree obtained from a large group of stocks traded at the New York Stock Exchange during a 12-year trading period with the one obtained from surrogated data simulated by using simple market models. We find that the empirical tree has features of a complex network that cannot be reproduced, even as a first approximation, by a random market model and by the widespread one-factor model.

  9. Investigation of pressure drop in capillary tube for mixed refrigerant Joule-Thomson cryocooler

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ardhapurkar, P. M.; Sridharan, Arunkumar; Atrey, M. D.

    2014-01-29

    A capillary tube is commonly used in small capacity refrigeration and air-conditioning systems. It is also a preferred expansion device in mixed refrigerant Joule-Thomson (MR J-T) cryocoolers, since it is inexpensive and simple in configuration. However, the flow inside a capillary tube is complex, since the flashing process that occurs in refrigeration and air-conditioning systems is metastable. A mixture of refrigerants such as nitrogen, methane, ethane, propane and iso-butane expands below its inversion temperature in the capillary tube of the MR J-T cryocooler and reaches cryogenic temperature. The mass flow rate of the refrigerant mixture circulating through the capillary tube depends on the pressure difference across it. There are many empirical correlations which predict the pressure drop across the capillary tube. However, they have not been tested for refrigerant mixtures or for the operating conditions of the cryocooler. The present paper assesses the existing empirical correlations for predicting the overall pressure drop across the capillary tube of the MR J-T cryocooler. The empirical correlations refer to homogeneous as well as separated flow models. Experiments are carried out to measure the overall pressure drop across the capillary tube for the cooler. Three different compositions of refrigerant mixture are used to study the pressure drop variations. The predicted overall pressure drop across the capillary tube is compared with the experimentally obtained value. The predictions obtained using the homogeneous model show better agreement with the experimental results than those from separated flow models.

  10. Hydrogeomorphology explains acidification-driven variation in aquatic biological communities in the Neversink Basin, USA

    USGS Publications Warehouse

    Harpold, Adrian A.; Burns, Douglas A.; Walter, M.T.; Steenhuis, Tammo S.

    2013-01-01

    Describing the distribution of aquatic habitats and the health of biological communities can be costly and time-consuming; therefore, simple, inexpensive methods to scale observations of aquatic biota to watersheds that lack data would be useful. In this study, we explored the potential of a simple “hydrogeomorphic” model to predict the effects of acid deposition on macroinvertebrate, fish, and diatom communities in 28 sub-watersheds of the 176-km2 Neversink River basin in the Catskill Mountains of New York State. The empirical model was originally developed to predict stream-water acid neutralizing capacity (ANC) using the watershed slope and drainage density. Because ANC is known to be strongly related to aquatic biological communities in the Neversink, we speculated that the model might correlate well with biotic indicators of ANC response. The hydrogeomorphic model was strongly correlated to several measures of macroinvertebrate and fish community richness and density, but less strongly correlated to diatom acid tolerance. The model was also strongly correlated to biological communities in 18 sub-watersheds independent of the model development, with the linear correlation capturing the strongly acidic nature of small upland watersheds (2). Overall, we demonstrated the applicability of geospatial data sets and a simple hydrogeomorphic model for estimating aquatic biological communities in areas with stream-water acidification, allowing estimates where no direct field observations are available. Similar modeling approaches have the potential to complement or refine expensive and time-consuming measurements of aquatic biota populations and to aid in regional assessments of aquatic health.

  11. An empirical investigation on different methods of economic growth rate forecast and its behavior from fifteen countries across five continents

    NASA Astrophysics Data System (ADS)

    Yin, Yip Chee; Hock-Eam, Lim

    2012-09-01

    Our empirical results show that GDP growth rate can be predicted more accurately in continents with fewer large economies than in smaller economies like Malaysia. This difficulty is very likely positively correlated with subsidy or social security policies. The stage of economic development and the level of competitiveness also appear to have interactive effects on forecast stability. These results are generally independent of the forecasting procedures. For countries with high stability in their economic growth, forecasting by model selection is better than model averaging. Overall, forecast weight averaging (FWA) is a better forecasting procedure in most countries. FWA also outperforms simple model averaging (SMA) and has the same forecasting ability as Bayesian model averaging (BMA) in almost all countries.
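
    The contrast between simple model averaging and weighted forecast combination can be sketched as below. The inverse-MSE weights stand in for FWA, whose exact weighting scheme is not given in this record, and all series are synthetic.

    ```python
    # Sketch contrasting simple model averaging (SMA) with a forecast
    # weight averaging (FWA) scheme. The inverse-MSE weights are an
    # illustrative choice; the paper's exact FWA weighting may differ.
    import numpy as np

    rng = np.random.default_rng(5)
    T = 80
    truth = 0.03 + 0.01 * rng.standard_normal(T)          # GDP growth path

    # Three hypothetical forecasting models with different error levels.
    forecasts = np.stack([
        truth + 0.005 * rng.standard_normal(T),
        truth + 0.015 * rng.standard_normal(T),
        truth + 0.030 * rng.standard_normal(T),
    ])

    train, test = slice(0, 60), slice(60, T)
    mse = ((forecasts[:, train] - truth[train]) ** 2).mean(axis=1)

    sma = forecasts[:, test].mean(axis=0)                 # equal weights
    w = (1 / mse) / (1 / mse).sum()                       # inverse-MSE weights
    fwa = (w[:, None] * forecasts[:, test]).sum(axis=0)

    for name, f in [("SMA", sma), ("FWA", fwa)]:
        rmse = np.sqrt(((f - truth[test]) ** 2).mean())
        print(name, f"test RMSE: {rmse:.4f}")
    ```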

  12. Atlas of susceptibility to pollution in marinas. Application to the Spanish coast.

    PubMed

    Gómez, Aina G; Ondiviela, Bárbara; Fernández, María; Juanes, José A

    2017-01-15

    An atlas of susceptibility to pollution of 320 Spanish marinas is provided. Susceptibility is assessed through a simple, fast and low-cost empirical method estimating the flushing capacity of marinas. The Complexity Tidal Range Index (CTRI) was selected among eleven empirical methods. The CTRI method was selected by means of statistical analyses because it contributes to explaining the system's variance, it is highly correlated with numerical model results, and it is sensitive to marinas' location and typology. The process of implementation on the Spanish coast confirmed its usefulness, versatility and adaptability as a tool for the environmental management of marinas worldwide. The atlas of susceptibility, assessed through CTRI values, is an appropriate instrument for prioritizing environmental and planning strategies at a regional scale.

  13. Effect of ionic radii on the Curie temperature in Ba1-x-ySrxCayTiO3 compounds.

    PubMed

    Berenov, A; Le Goupil, F; Alford, N

    2016-06-21

    A series of Ba1-x-ySrxCayTiO3 compounds were prepared with varying average ionic radii and cation disorder on A-site. All samples showed typical ferroelectric behavior. A simple empirical equation correlated Curie temperature, TC, with the values of ionic radii of A-site cations. This correlation was related to the distortion of TiO6 octahedra observed during neutron diffraction studies. The equation was used for the selection of compounds with predetermined values of TC. The effects of A-site ionic radii on the temperatures of phase transitions in Ba1-x-ySrxCayTiO3 were discussed.

  14. Parametrization of semiempirical models against ab initio crystal data: evaluation of lattice energies of nitrate salts.

    PubMed

    Beaucamp, Sylvain; Mathieu, Didier; Agafonov, Viatcheslav

    2005-09-01

    A method to estimate the lattice energies E(latt) of nitrate salts is put forward. First, E(latt) is approximated by its electrostatic component E(elec). Then, E(elec) is correlated with Mulliken atomic charges calculated on the species that make up the crystal, using a simple equation involving two empirical parameters. The latter are fitted against point charge estimates of E(elec) computed on available X-ray structures of nitrate crystals. The correlation thus obtained yields lattice energies within 0.5 kJ/g from point charge values. A further assessment of the method against experimental data suggests that the main source of error arises from the point charge approximation.

  15. The role of hip and chest radiographs in osteoporotic evaluation among south Indian women population: a comparative scenario with DXA.

    PubMed

    Kumar, D Ashok; Anburajan, M

    2014-05-01

    Osteoporosis is recognized as a worldwide skeletal disorder problem. In India, osteoporotic fractures among older and postmenopausal women have been a common issue. Bone mineral density (BMD) measurements gauged by dual-energy X-ray absorptiometry (DXA) are used in the diagnosis of osteoporosis. The aims were: (1) to evaluate osteoporosis in south Indian women by a radiogrammetric method in a comparative perspective with DXA; (2) to assess the capability of KJH; Anburajan's empirical formula in predicting total hip bone mineral density (T.BMD) against estimated Hologic T.BMD. In this cross-sectional design, 56 south Indian women were evaluated. These women were randomly selected from a health camp; patients with secondary bone diseases were excluded. The standard protocol was followed in acquiring BMD of the right proximal femur by DPX Prodigy (DXA scanner, GE-Lunar Corp., USA). The measured Lunar total hip BMD was converted into estimated Hologic total hip BMD. In addition, the studied population underwent chest and hip radiographic measurements. The combined cortical thickness of the clavicle was used in KJH; Anburajan's empirical formula to predict T.BMD, which was compared with estimated Hologic T.BMD by DXA. The correlation coefficients exhibited high significance. The combined cortical thickness of the clavicle and femur shaft of the total studied population was strongly correlated with DXA femur T.BMD measurements (r = 0.87, P < 0.01 and r = 0.45, P < 0.01), and it was also strongly correlated in the low bone mass group (r = 0.87, P < 0.01 and r = 0.67, P < 0.01). KJH; Anburajan's empirical formula shows significant correlation with estimated Hologic T.BMD (r = 0.88, P < 0.01) in the total studied population. The empirical formula was identified as the better tool for predicting osteoporosis in the total population and the old-aged population, with sensitivity (88.8 and 95.6 %), specificity (89.6 and 90.9 %), positive predictive value (88.8 and 95.6 %) and negative predictive value (89.6 and 90.9 %), respectively. The results suggest that the combined cortical thickness of the clavicle and femur shaft obtained by the radiogrammetric method is significantly correlated with DXA. Moreover, KJH; Anburajan's empirical formula is a more useful index than other simple radiogrammetry measurements in the evaluation of osteoporosis, based on economical and widely available digital radiographs.

  16. NO RELATIONSHIP BETWEEN INTELLIGENCE AND FACIAL ATTRACTIVENESS IN A LARGE, GENETICALLY INFORMATIVE SAMPLE

    PubMed Central

    Mitchem, Dorian G.; Zietsch, Brendan P.; Wright, Margaret J.; Martin, Nicholas G.; Hewitt, John K.; Keller, Matthew C.

    2015-01-01

    Theories in both evolutionary and social psychology suggest that a positive correlation should exist between facial attractiveness and general intelligence, and several empirical observations appear to corroborate this expectation. Using highly reliable measures of facial attractiveness and IQ in a large sample of identical and fraternal twins and their siblings, we found no evidence for a phenotypic correlation between these traits. Likewise, neither the genetic nor the environmental latent factor correlations were statistically significant. We supplemented our analyses of new data with a simple meta-analysis that found evidence of publication bias among past studies of the relationship between facial attractiveness and intelligence. In view of these results, we suggest that previously published reports may have overestimated the strength of the relationship and that the theoretical bases for the predicted attractiveness-intelligence correlation may need to be reconsidered. PMID:25937789

  17. Solar energy distribution over Egypt using cloudiness from Meteosat photos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosalam Shaltout, M.A.; Hassen, A.H.

    1990-01-01

    In Egypt, there are 10 ground stations for measuring the global solar radiation, and five stations for measuring the diffuse solar radiation. Every day at noon, the Meteorological Authority in Cairo receives three photographs of cloudiness over Egypt from the Meteosat satellite, one in the visible, and two in the infra-red bands (10.5-12.5 µm) and (5.7-7.1 µm). The monthly average cloudiness for 24 sites over Egypt was measured and calculated from Meteosat observations during the period 1985-1986. Correlation analysis between the cloudiness observed by Meteosat and the global solar radiation measured at the ground stations is carried out. It is found that the correlation coefficients are about 0.90 for the simple linear regression, and increase for the second and third degree regressions. Also, the correlation coefficients for cloudiness with the diffuse solar radiation are about 0.80 for the simple linear regression, and increase for the second and third degree regressions. Models and empirical relations for estimating the global and diffuse solar radiation from Meteosat cloudiness data over Egypt are deduced and tested. Seasonal maps of the global and diffuse radiation over Egypt are constructed.
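
    The regression comparison reported here (linear versus second- and third-degree fits of radiation on cloudiness) can be sketched with ordinary polynomial least squares. The data below are synthetic stand-ins, not the Meteosat measurements.

    ```python
    # Sketch of the regression comparison described above: fit first- to
    # third-degree polynomials of global radiation on cloudiness and
    # compare correlations. Values are synthetic, not the Meteosat data.
    import numpy as np

    rng = np.random.default_rng(6)
    cloudiness = rng.uniform(0, 1, 24)               # monthly mean, 24 sites
    radiation = (28 - 12 * cloudiness - 5 * cloudiness**2
                 + rng.normal(0, 1.0, 24))           # MJ/m^2/day, synthetic

    for deg in (1, 2, 3):
        coeffs = np.polyfit(cloudiness, radiation, deg)
        fitted = np.polyval(coeffs, cloudiness)
        r = np.corrcoef(fitted, radiation)[0, 1]
        print(f"degree {deg}: r = {r:.3f}")
    ```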

  18. A Simple and Robust Statistical Test for Detecting the Presence of Recombination

    PubMed Central

    Bruen, Trevor C.; Philippe, Hervé; Bryant, David

    2006-01-01

    Recombination is a powerful evolutionary force that merges historically distinct genotypes. But the extent of recombination within many organisms is unknown, and even determining its presence within a set of homologous sequences is a difficult question. Here we develop a new statistic, Φw, that can be used to test for recombination. We show through simulation that our test can discriminate effectively between the presence and absence of recombination, even in diverse situations such as exponential growth (star-like topologies) and patterns of substitution rate correlation. A number of other tests, Max χ2, NSS, a coalescent-based likelihood permutation test (from LDHat), and correlation of linkage disequilibrium (both r2 and |D′|) with distance, all tend to underestimate the presence of recombination under strong population growth. Moreover, both Max χ2 and NSS falsely infer the presence of recombination under a simple model of mutation rate correlation. Results on empirical data show that our test can be used to detect recombination between closely as well as distantly related samples, regardless of the suspected rate of recombination. The results suggest that Φw is one of the best approaches to distinguish recurrent mutation from recombination in a wide variety of circumstances. PMID:16489234

  19. Updates on Force Limiting Improvements

    NASA Technical Reports Server (NTRS)

    Kolaini, Ali R.; Scharton, Terry

    2013-01-01

    The following conventional force limiting methods, currently practiced in deriving force limiting specifications, assume one-dimensional translational source and load apparent masses: simple TDOF model; semi-empirical force limits; apparent mass, etc.; impedance method. In all these methods, attempts are made to remove conservatism due to the mismatch in impedances between the test and flight configurations of the hardware being qualified. Uncorrelated motion of the mounting points for components mounted on panels, and correlated but out-of-phase motions of the support structures, are important and should be considered in deriving force limiting specifications. In this presentation, "rock-n-roll" motions of components supported by panels are discussed, which lead to more realistic force limiting specifications.

  20. Public–private interaction in pharmaceutical research

    PubMed Central

    Cockburn, Iain; Henderson, Rebecca

    1996-01-01

    We empirically examine interaction between the public and private sectors in pharmaceutical research using qualitative data on the drug discovery process and quantitative data on the incidence of coauthorship between public and private institutions. We find evidence of significant reciprocal interaction, and reject a simple “linear” dichotomous model in which the public sector performs basic research and the private sector exploits it. Linkages to the public sector differ across firms, reflecting variation in internal incentives and policy choices, and the nature of these linkages correlates with their research performance. PMID:8917485

  1. Scaling laws between population and facility densities.

    PubMed

    Um, Jaegon; Son, Seung-Woo; Lee, Sung-Ik; Jeong, Hawoong; Kim, Beom Jun

    2009-08-25

    When a new facility like a grocery store, a school, or a fire station is planned, its location should ideally be determined by the necessities of people who live nearby. Empirically, it has been found that there exists a positive correlation between facility and population densities. In the present work, we investigate the ideal relation between the population and the facility densities within the framework of an economic mechanism governing microdynamics. In previous studies based on the global optimization of facility positions in minimizing the overall travel distance between people and facilities, it was shown that the facility density D and the population density ρ should follow a simple power law D ∝ ρ^(2/3). In our empirical analysis, on the other hand, the power-law exponent α in D ∝ ρ^α is not a fixed value but spreads in a broad range depending on facility types. To explain this discrepancy in α, we propose a model based on economic mechanisms that mimic the competitive balance between the profit of the facilities and the social opportunity cost for populations. Through our simple, microscopically driven model, we show that commercial facilities driven by the profit of the facilities have α = 1, whereas public facilities driven by the social opportunity cost have α = 2/3. We simulate this model to find the optimal positions of facilities on a real U.S. map and show that the results are consistent with the empirical data.
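
    Estimating the exponent α in D ∝ ρ^α from data reduces to a least-squares fit on log-log axes, as sketched below with synthetic densities in place of the U.S. facility data.

    ```python
    # Fitting the scaling exponent alpha in D = C * rho**alpha by least
    # squares on log-log axes, as in the empirical part of the study.
    # Densities below are synthetic placeholders, not the U.S. data.
    import numpy as np

    rng = np.random.default_rng(7)
    rho = np.exp(rng.uniform(np.log(10), np.log(1e4), 300))  # population dens.
    alpha_true = 2 / 3                                       # public facility
    D = rho ** alpha_true * np.exp(0.2 * rng.standard_normal(rho.size))

    slope, intercept = np.polyfit(np.log(rho), np.log(D), 1)
    print(f"estimated alpha = {slope:.3f} (expected ~{alpha_true:.3f})")
    ```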

  2. Ranking structures and rank-rank correlations of countries: The FIFA and UEFA cases

    NASA Astrophysics Data System (ADS)

    Ausloos, Marcel; Cloots, Rudi; Gadomski, Adam; Vitanov, Nikolay K.

    2014-04-01

    Ranking of agents competing with each other in complex systems may lead to paradoxes, depending on the pre-chosen measures. A discussion is presented of such rank-rank correlations, similar or not, based on the case of European countries ranked by UEFA and FIFA from different soccer competitions. The first question to be answered is whether a simple empirical law is obtained for such (self-)organizations of complex sociological systems under such different measuring schemes. It is found that the power law form is not the best description, contrary to many modern expectations. The stretched exponential is much more adequate. Moreover, it is found that the measuring rules lead to some inner structures in both cases.

  3. Glass-Transition Temperature of the β-Relaxation as the Major Predictive Parameter for Recrystallization of Neat Amorphous Drugs.

    PubMed

    Kissi, Eric Ofosu; Grohganz, Holger; Löbmann, Korbinian; Ruggiero, Michael T; Zeitler, J Axel; Rades, Thomas

    2018-03-15

    Recrystallization of amorphous drugs currently limits the simple approach of improving the solubility and bioavailability of poorly water-soluble drugs by amorphization of a crystalline form of the drug. In view of this, molecular mobility, i.e., the α-relaxation and β-relaxation processes with the associated transition temperatures Tgα and Tgβ, was investigated using dynamic mechanical analysis (DMA). The correlation between the transition temperatures and the onset of recrystallization was determined for nine amorphous drugs stored under dry conditions at a temperature of 296 K. From the results obtained, Tgα does not correlate with the onset of recrystallization under the experimental storage conditions. However, a clear correlation between Tgβ and the onset of recrystallization was observed. It is shown that at storage temperatures below Tgβ, amorphous nifedipine retains its amorphous form. On the basis of this correlation, an empirical relation is proposed for predicting the onset of recrystallization for drugs stored at 0% RH and 296 K.

  4. Correlated bursts and the role of memory range

    NASA Astrophysics Data System (ADS)

    Jo, Hang-Hyun; Perotti, Juan I.; Kaski, Kimmo; Kertész, János

    2015-08-01

    Inhomogeneous temporal processes in natural and social phenomena have been described by bursts that are rapidly occurring events within short time periods alternating with long periods of low activity. In addition to the analysis of heavy-tailed interevent time distributions, higher-order correlations between interevent times, called correlated bursts, have been studied only recently. As the underlying mechanism behind such correlated bursts is far from being fully understood, we devise a simple model for correlated bursts using a self-exciting point process with a variable range of memory. Whether a new event occurs is stochastically determined by a memory function that is the sum of decaying memories of past events. In order to incorporate the noise and/or limited memory capacity of systems, we apply two memory loss mechanisms: a fixed number or a variable number of memories. By analysis and numerical simulations, we find that too much memory effect may lead to a Poissonian process, implying that there exists an intermediate range of memory effect to generate correlated bursts comparable to empirical findings. Our conclusions provide a deeper understanding of how long-range memory affects correlated bursts.
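
    A minimal simulation of the model as described: the event probability at each step is a baseline plus a sum of decaying memories of past events, truncated to a fixed capacity of K events. The kernel shape and all parameter values are illustrative choices; the interevent memory coefficient printed at the end is one standard measure of correlated bursts.

    ```python
    # Minimal simulation of the self-exciting point process sketched above:
    # event probability = baseline + bounded sum of decaying memories of
    # the K most recent events. Kernel and parameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(8)
    T, K = 50000, 10                 # steps; fixed memory capacity
    mu, strength, decay = 0.01, 0.3, 0.5

    events = []                      # times of remembered past events
    interevent, last = [], None
    for t in range(T):
        memory = sum(strength * (1 + t - s) ** (-1 - decay) for s in events)
        p = min(1.0, mu + memory)
        if rng.random() < p:
            if last is not None:
                interevent.append(t - last)
            last = t
            events.append(t)
            events = events[-K:]     # memory loss: keep the K newest
        # (a variable-capacity rule would instead drop memories
        # stochastically; the fixed-K rule is used here for simplicity)

    tau = np.array(interevent)
    m = np.corrcoef(tau[:-1], tau[1:])[0, 1]
    print(f"{tau.size} events; interevent memory coefficient: {m:.3f}")
    ```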

  5. A semi-empirical model for the estimation of maximum horizontal displacement due to liquefaction-induced lateral spreading

    USGS Publications Warehouse

    Faris, Allison T.; Seed, Raymond B.; Kayen, Robert E.; Wu, Jiaer

    2006-01-01

    During the 1906 San Francisco Earthquake, liquefaction-induced lateral spreading and the resultant ground displacements damaged bridges, buried utilities and lifelines, conventional structures, and other developed works. This paper presents an improved engineering tool for the prediction of maximum displacement due to liquefaction-induced lateral spreading. A semi-empirical approach is employed, combining mechanistic understanding and data from laboratory testing with data and lessons from full-scale earthquake field case histories. The principle of the strain potential index, based primarily on correlation of cyclic simple shear laboratory testing results with in-situ Standard Penetration Test (SPT) results, is used to characterize the deformation potential of soils after they liquefy. A Bayesian probabilistic approach is adopted for development of the final predictive model, in order to take fullest advantage of the data available and to deal with the uncertainties intrinsic to the back-analyses of field case histories. A case history from the 1906 San Francisco Earthquake is used to demonstrate the ability of the resultant semi-empirical model to estimate maximum horizontal displacement due to liquefaction-induced lateral spreading.

  6. An improved empirical dynamic control system model of global mean sea level rise and surface temperature change

    NASA Astrophysics Data System (ADS)

    Wu, Qing; Luu, Quang-Hung; Tkalich, Pavel; Chen, Ge

    2018-04-01

    Having great impacts on human lives, global warming and the associated sea level rise are believed to be strongly linked to anthropogenic causes. A statistical approach offers a simple yet conceptually verifiable combination of remotely connected climate variables and indices, including sea level and surface temperature. We propose an improved statistical reconstruction model based on the empirical dynamic control system, taking into account climate variability and deriving parameters from Monte Carlo cross-validation random experiments. For the historical data from 1880 to 2001, our model yields higher correlations than other empirical dynamic models. The averaged root mean square errors are reduced in both reconstructed fields, namely, the global mean surface temperature (by 24-37%) and the global mean sea level (by 5-25%). Our model is also more robust, as it notably reduces the instability associated with varying initial values. These results suggest that the model not only significantly enhances the global mean reconstructions of temperature and sea level but may also have the potential to improve future projections.

  7. Exploring Empirical Rank-Frequency Distributions Longitudinally through a Simple Stochastic Process

    PubMed Central

    Finley, Benjamin J.; Kilkki, Kalevi

    2014-01-01

    The frequent appearance of empirical rank-frequency laws, such as Zipf’s law, in a wide range of domains reinforces the importance of understanding and modeling these laws and rank-frequency distributions in general. In this spirit, we utilize a simple stochastic cascade process to simulate several empirical rank-frequency distributions longitudinally. We focus especially on limiting the process’s complexity to increase accessibility for non-experts in mathematics. The process provides a good fit for many empirical distributions because the stochastic multiplicative nature of the process leads to an often observed concave rank-frequency distribution (on a log-log scale) and the finiteness of the cascade replicates real-world finite size effects. Furthermore, we show that repeated trials of the process can roughly simulate the longitudinal variation of empirical ranks. However, we find that the empirical variation is often less than the average simulated process variation, likely due to longitudinal dependencies in the empirical datasets. Finally, we discuss the process limitations and practical applications. PMID:24755621
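
    A minimal multiplicative-cascade generator of the kind the paper describes might look as follows; the branching factor, depth, and uniform weight distribution are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def cascade(total=1.0, depth=10, branch=2):
        """Split a unit mass repeatedly with random multiplicative weights;
        the leaf sizes play the role of word/item frequencies."""
        sizes = np.array([total])
        for _ in range(depth):
            w = rng.random((sizes.size, branch))
            w /= w.sum(axis=1, keepdims=True)   # weights of each split sum to 1
            sizes = (sizes[:, None] * w).ravel()
        return np.sort(sizes)[::-1]             # frequencies in rank order

    freqs = cascade()
    ranks = np.arange(1, freqs.size + 1)
    # On a log-log plot the curve is typically concave, mimicking empirical
    # Zipf-like distributions with finite-size cutoffs.
    slope_head = np.polyfit(np.log(ranks[:50]), np.log(freqs[:50]), 1)[0]
    slope_tail = np.polyfit(np.log(ranks[-200:]), np.log(freqs[-200:]), 1)[0]
    print(f"local log-log slope: head {slope_head:.2f}, tail {slope_tail:.2f}")
    ```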

  8. Exploring empirical rank-frequency distributions longitudinally through a simple stochastic process.

    PubMed

    Finley, Benjamin J; Kilkki, Kalevi

    2014-01-01

    The frequent appearance of empirical rank-frequency laws, such as Zipf's law, in a wide range of domains reinforces the importance of understanding and modeling these laws and rank-frequency distributions in general. In this spirit, we utilize a simple stochastic cascade process to simulate several empirical rank-frequency distributions longitudinally. We focus especially on limiting the process's complexity to increase accessibility for non-experts in mathematics. The process provides a good fit for many empirical distributions because the stochastic multiplicative nature of the process leads to an often observed concave rank-frequency distribution (on a log-log scale) and the finiteness of the cascade replicates real-world finite size effects. Furthermore, we show that repeated trials of the process can roughly simulate the longitudinal variation of empirical ranks. However, we find that the empirical variation is often less than the average simulated process variation, likely due to longitudinal dependencies in the empirical datasets. Finally, we discuss the process limitations and practical applications.

  9. Empirical analysis on future-cash arbitrage risk with portfolio VaR

    NASA Astrophysics Data System (ADS)

    Chen, Rongda; Li, Cong; Wang, Weijin; Wang, Ze

    2014-03-01

    This paper constructs a positive arbitrage position by substituting a Chinese Exchange Traded Fund (ETF) portfolio for the spot index and estimating the arbitrage-free interval of futures with the latest trade data. An improved Delta-normal method, which replaces the simple linear correlation coefficient with a tail dependence correlation coefficient, is then used to measure the VaR (Value-at-Risk) of the arbitrage position. Analysis of the VaR implies that the risk of futures-cash arbitrage is less than that of investing entirely in either the futures or the spot market. According to the component VaR and the marginal VaR, the futures position should be increased and the spot position decreased appropriately to minimize the VaR, i.e., to minimize risk subject to given revenues.
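
    A minimal sketch of a Delta-normal VaR computation for a two-leg futures/ETF position is shown below; the position sizes, volatilities, and the value of the tail dependence coefficient are illustrative assumptions, and the paper's estimation of that coefficient is not reproduced.

    ```python
    import numpy as np
    from scipy.stats import norm

    w = np.array([1.0, -1.0])         # long futures, short ETF portfolio (illustrative)
    sigma = np.array([0.015, 0.012])  # daily return volatilities (assumed)
    rho_tail = 0.85                   # tail dependence coefficient replacing Pearson rho

    cov = np.array([[sigma[0]**2, rho_tail * sigma[0] * sigma[1]],
                    [rho_tail * sigma[0] * sigma[1], sigma[1]**2]])
    z = norm.ppf(0.99)                # 99% confidence level
    sigma_p = np.sqrt(w @ cov @ w)
    var_portfolio = z * sigma_p       # portfolio VaR, as a fraction of notional

    # Marginal VaR: sensitivity of portfolio VaR to each position's weight;
    # component VaR: each position's additive contribution to portfolio VaR.
    marginal = z * (cov @ w) / sigma_p
    component = w * marginal
    print(var_portfolio, marginal, component)
    ```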

  10. Markov Decision Process Measurement Model.

    PubMed

    LaMar, Michelle M

    2018-03-01

    Within-task actions can provide additional information on student competencies but are challenging to model. This paper explores the potential of using a cognitive model for decision making, the Markov decision process, to provide a mapping between within-task actions and latent traits of interest. Psychometric properties of the model are explored, and simulation studies report on parameter recovery within the context of a simple strategy game. The model is then applied to empirical data from an educational game. Estimates from the model are found to correlate more strongly with posttest results than those from a partial-credit IRT model based on outcome data alone.

  11. Persistence and stochastic periodicity in the intensity dynamics of a fiber laser during the transition to optical turbulence

    NASA Astrophysics Data System (ADS)

    Carpi, Laura; Masoller, Cristina

    2018-02-01

    Many natural systems display transitions among different dynamical regimes, which are difficult to identify when the data are noisy and high dimensional. A technologically relevant example is a fiber laser, which can display complex dynamical behaviors that involve nonlinear interactions of millions of cavity modes. Here we study the laminar-turbulence transition that occurs when the laser pump power is increased. By applying various data analysis tools to empirical intensity time series we characterize their persistence and demonstrate that at the transition temporal correlations can be precisely represented by a surprisingly simple model.

  12. DUST CONTINUUM EMISSION AS A TRACER OF GAS MASS IN GALAXIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groves, Brent A.; Schinnerer, Eva; Walter, Fabian

    2015-01-20

    We use a sample of 36 galaxies from the KINGFISH (Herschel IR), HERACLES (IRAM CO), and THINGS (Very Large Array H I) surveys to study empirical relations between Herschel infrared (IR) luminosities and the total mass of the interstellar gas (H{sub 2} + H I). Such a comparison provides a simple empirical relationship without introducing the uncertainty of dust model fitting. We find tight correlations, and provide fits to these relations, between Herschel luminosities and the total gas mass integrated over entire galaxies, with the tightest, almost linear, correlation found for the longest wavelength data (SPIRE 500). However, we find that accounting for the gas-phase metallicity (affecting the dust to gas ratio) is crucial when applying these relations to low-mass, and presumably high-redshift, galaxies. The molecular (H{sub 2}) gas mass is found to be better correlated with the peak of the IR emission (e.g., PACS160), driven mostly by the correlation of stellar mass and mean dust temperature. When examining these relations as a function of galactocentric radius, we find the same correlations, albeit with a larger scatter, up to a radius of r ∼ 0.7 r {sub 25} (containing most of a galaxy's baryonic mass). However, beyond that radius, the same correlations no longer hold, with increasing gas (predominantly H I) mass relative to the infrared emission. The tight relations found for the bulk of the galaxy's baryonic content suggest that total gas masses of disk-like (non-merging/ULIRG) galaxies can be inferred from far-infrared continuum measurements in situations where only the latter are available, e.g., in ALMA continuum observations of high-redshift galaxies.

  13. On the predictability of land surface fluxes from meteorological variables

    NASA Astrophysics Data System (ADS)

    Haughton, Ned; Abramowitz, Gab; Pitman, Andy J.

    2018-01-01

    Previous research has shown that land surface models (LSMs) perform poorly when compared with relatively simple empirical models over a wide range of metrics and environments. Atmospheric driving data appear to provide information about land surface fluxes that LSMs are not fully utilising. Here, we further quantify the information available in the meteorological forcing data that are used by LSMs for predicting land surface fluxes, by interrogating FLUXNET data, and extending the benchmarking methodology used in previous experiments. We show that substantial performance improvement is possible for empirical models using meteorological data alone, with no explicit vegetation or soil properties, thus setting lower bounds on a priori expectations of LSM performance. The process also identifies key meteorological variables that provide predictive power. We provide an ensemble of empirical benchmarks that are simple to reproduce and provide a range of behaviours and predictive performance, acting as a baseline benchmark set for future studies. We reanalyse previously published LSM simulations and show that there is more diversity between LSMs than previously indicated, although it remains unclear why LSMs are broadly performing so much worse than simple empirical models.
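
    The following sketch shows the flavor of such an empirical benchmark: a linear model mapping meteorological drivers alone to a surface flux. The variable names mirror common FLUXNET conventions, but the data here are synthetic placeholders.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(2)
    n = 5000
    met = {  # synthetic stand-ins for tower forcing data
        "SWdown": rng.uniform(0, 1000, n),   # shortwave down, W/m^2
        "Tair":   rng.uniform(260, 310, n),  # air temperature, K
        "RH":     rng.uniform(10, 100, n),   # relative humidity, %
        "Wind":   rng.uniform(0.1, 10, n),   # wind speed, m/s
    }
    X = np.column_stack(list(met.values()))
    # Hypothetical "true" latent heat flux, for demonstration purposes only.
    Qle = 0.4 * met["SWdown"] - 0.5 * (met["RH"] - 50) + rng.normal(0, 20, n)

    bench = LinearRegression().fit(X, Qle)
    print("benchmark R^2 from met variables alone:", bench.score(X, Qle))
    # Any LSM evaluated on the same data should at least beat this baseline.
    ```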

  14. Empirical ionization fractions in the winds and the determination of mass-loss rates for early-type stars

    NASA Technical Reports Server (NTRS)

    Lamers, H. J. G. L. M.; Gathier, R.; Snow, T. P.

    1980-01-01

    From a study of the UV lines in the spectra of 25 stars from O4 to B1, the empirical relations between the mean density in the wind and the ionization fractions of O VI, N V, Si IV, and the excited C III (2p 3P0) level were derived. Using these empirical relations, a simple relation was derived between the mass-loss rate and the column density of any of these four ions. This relation can be used for a simple determination of the mass-loss rate of O4 to B1 stars.

  15. Volatility of linear and nonlinear time series

    NASA Astrophysics Data System (ADS)

    Kalisky, Tomer; Ashkenazy, Yosef; Havlin, Shlomo

    2005-07-01

    Previous studies indicated that nonlinear properties of Gaussian distributed time series with long-range correlations, u_i, can be detected and quantified by studying the correlations in the magnitude series |u_i|, the “volatility.” However, the origin of this empirical observation still remains unclear, and the exact relation between the correlations in u_i and the correlations in |u_i| is still unknown. Here we develop analytical relations between the scaling exponent of a linear series u_i and that of its magnitude series |u_i|. Moreover, we find that nonlinear time series exhibit stronger (or equal) correlations in the magnitude series compared with linear time series with the same two-point correlations. Based on these results we propose a simple model that generates multifractal time series by explicitly inserting long-range correlations in the magnitude series; the nonlinear multifractal time series is generated by multiplying a long-range correlated time series (representing the magnitude series) with an uncorrelated time series [representing the sign series sgn(u_i)]. We apply our techniques to daily deep ocean temperature records from the equatorial Pacific, the region of the El-Niño phenomenon, and find: (i) long-range correlations from several days to several years with 1/f power spectrum, (ii) significant nonlinear behavior as expressed by long-range correlations of the volatility series, and (iii) a broad multifractal spectrum.
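
    The construction described above is easy to reproduce: generate a long-range correlated series by Fourier filtering, take its magnitude, and multiply by an uncorrelated random sign series. The spectral exponent and series length below are illustrative choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def long_range_correlated(n, beta=0.8):
        """Gaussian series with power spectrum S(f) ~ f^(-beta) via Fourier filtering."""
        freqs = np.fft.rfftfreq(n)
        freqs[0] = freqs[1]                 # avoid division by zero at f = 0
        amplitude = freqs ** (-beta / 2.0)
        phases = rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)
        x = np.fft.irfft(amplitude * phases, n)
        return (x - x.mean()) / x.std()

    n = 2 ** 16
    magnitude = np.abs(long_range_correlated(n))  # long-range correlated |u_i|
    sign = rng.choice([-1.0, 1.0], size=n)        # uncorrelated sgn(u_i)
    u = magnitude * sign                          # nonlinear (multifractal-like) series

    # The signal itself is (nearly) uncorrelated, but its volatility |u_i| is not:
    print("lag-1 autocorr of u:  ", np.corrcoef(u[:-1], u[1:])[0, 1])
    print("lag-1 autocorr of |u|:", np.corrcoef(magnitude[:-1], magnitude[1:])[0, 1])
    ```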

  16. An empirical comparative study on biological age estimation algorithms with an application of Work Ability Index (WAI).

    PubMed

    Cho, Il Haeng; Park, Kyung S; Lim, Chang Joo

    2010-02-01

    In this study, we described the characteristics of five different biological age (BA) estimation algorithms, including (i) multiple linear regression, (ii) principal component analysis, and somewhat unique methods developed by (iii) Hochschild, (iv) Klemera and Doubal, and (v) a variant of Klemera and Doubal's method. The objective of this study is to find the most appropriate method of BA estimation by examining the association between the Work Ability Index (WAI) and the differences of each algorithm's estimates from chronological age (CA). The WAI was found to be a measure that reflects an individual's current health status rather than age-dependent deterioration as such. Experiments were conducted on 200 Korean male participants using a BA estimation system developed under the principles of being non-invasive, simple to operate, and based on human function. Using the empirical data, BA estimation as well as various analyses, including correlation analysis and discriminant function analysis, was performed. As a result, it was confirmed by the empirical data that Klemera and Doubal's method with uncorrelated variables from principal component analysis produces relatively reliable and acceptable BA estimates.

  17. Fluid mechanical scaling of impact craters in unconsolidated granular materials

    NASA Astrophysics Data System (ADS)

    Miranda, Colin S.; Dowling, David R.

    2015-11-01

    A single scaling law is proposed for the diameter of simple low- and high-speed impact craters in unconsolidated granular materials where spall is not apparent. The scaling law is based on the assumption that gravity- and shock-wave effects set crater size, and is formulated in terms of a dimensionless crater diameter, and an empirical combination of Froude and Mach numbers. The scaling law involves the kinetic energy and speed of the impactor, the acceleration of gravity, and the density and speed of sound in the target material. The size of the impactor enters the formulation but divides out of the final empirical result. The scaling law achieves a 98% correlation with available measurements from drop tests, ballistic tests, missile impacts, and centrifugally-enhanced gravity impacts for a variety of target materials (sand, alluvium, granulated sugar, and expanded perlite). The available measurements cover more than 10 orders of magnitude in impact energy. For subsonic and supersonic impacts, the crater diameter is found to scale with the 1/4- and 1/6-power, respectively, of the impactor kinetic energy with the exponent crossover occurring near a Mach number of unity. The final empirical formula provides insight into how impact energy partitioning depends on Mach number.

  18. Internal (Annular) and Compressible External (Flat Plate) Turbulent Flow Heat Transfer Correlations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dechant, Lawrence; Smith, Justin

    Here we provide a discussion regarding the applicability of a family of traditional heat-transfer-correlation-based models for several (unit level) heat transfer problems associated with flight heat transfer estimates and internal flow heat transfer associated with an experimental simulation design (Dobranich 2014). Variability between semi-empirical free-flight models suggests relative differences for heat transfer coefficients on the order of 10%, while the internal annular flow behavior is larger, with differences on the order of 20%. We emphasize that these expressions are strictly valid only for the geometries they have been derived for, e.g. the fully developed annular flow or simple external flow problems. Though the application of flat-plate skin friction estimates to cylindrical bodies is a traditional procedure to estimate skin friction and heat transfer, an over-prediction bias is often observed using these approximations for missile type bodies. As a correction for this over-estimate trend, we discuss a simple scaling reduction factor for flat-plate turbulent skin friction and heat transfer solutions (correlations) applied to blunt bodies of revolution at zero angle of attack. The method estimates the ratio between axisymmetric and 2-d stagnation point heat transfer skin friction and Stanton number solution expressions for sub-turbulent Reynolds numbers <1x10^4. This factor is assumed to also directly influence the flat plate results applied to the cylindrical portion of the flow, and the flat plate correlations are modified by this factor.

  19. [Are simple time lags responsible for cyclic variation of population density? : A comparison of laboratory population dynamics of Brachionus calyciflorus pallas (rotatoria) with computer simulations].

    PubMed

    Halbach, Udo; Burkhardt, Heinz Jürgen

    1972-09-01

    Laboratory populations of the rotifer Brachionus calyciflorus were cultured at different temperatures (25, 20, 15°C) but otherwise at constant conditions. The population densities showed relatively constant oscillations (Figs. 1 to 3A-C). Amplitudes and frequencies of the oscillations were positively correlated with temperature (Table 1). A test was made of whether the logistic growth function with a simple time lag is able to describe the population curves. There are strong similarities between the simulations (Figs. 1-3E) and the real population dynamics if minor adjustments of the empirically determined parameters are made. Therefore it is suggested that time lags are responsible for the observed oscillations. However, the actual time lags probably do not act in the simple manner of the model, because birth and death rates react with different time lags, and both parameters are dependent on individual age and population density. A more complex model, which incorporates these modifications, should lead to a more realistic description of the observed oscillations.
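
    The logistic growth function with a simple time lag referred to above is Hutchinson's delayed logistic equation; a minimal numerical sketch (with illustrative parameters, not the paper's fitted values) shows the sustained oscillations that arise when r·τ exceeds π/2.

    ```python
    import numpy as np

    def delayed_logistic(r=0.9, K=1000.0, tau=2.0, dt=0.01, t_max=100.0, n0=10.0):
        """Euler integration of dN/dt = r * N(t) * (1 - N(t - tau) / K)."""
        steps = int(t_max / dt)
        lag = int(tau / dt)
        n = np.full(steps, n0)
        for i in range(1, steps):
            n_lagged = n[i - 1 - lag] if i - 1 - lag >= 0 else n0
            n[i] = n[i - 1] + dt * r * n[i - 1] * (1.0 - n_lagged / K)
        return n

    n = delayed_logistic()
    # For r*tau > pi/2 the delayed logistic oscillates around K instead of
    # converging; here r*tau = 1.8, so sustained cycles appear.
    print("late-time min/max:", n[-2000:].min().round(1), n[-2000:].max().round(1))
    ```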

  20. Helicopter rotor and engine sizing for preliminary performance estimation

    NASA Technical Reports Server (NTRS)

    Talbot, P. D.; Bowles, J. V.; Lee, H. C.

    1986-01-01

    Methods are presented for estimating some of the more fundamental design variables of single-rotor helicopters (tip speed, blade area, disk loading, and installed power) based on design requirements (speed, weight, fuselage drag, and design hover ceiling). The well-known constraints of advancing-blade compressibility and retreating-blade stall are incorporated into the estimation process, based on an empirical interpretation of rotor performance data from large-scale wind-tunnel tests. Engine performance data are presented and correlated with a simple model usable for preliminary design. When approximate results are required quickly, these methods may be more convenient to use and provide more insight than large digital computer programs.

  1. The danger within: the role of genetic, behavioural and ecological factors in population persistence of colour polymorphic species.

    PubMed

    Bolton, Peri E; Rollins, Lee A; Griffith, Simon C

    2015-06-01

    Polymorphic species have been the focus of important work in evolutionary biology. It has been suggested that colour polymorphic species have specific evolutionary and population dynamics that enable them to persist through environmental changes better than less variable species. We suggest that recent empirical and theoretical work indicates that polymorphic species may be more vulnerable to extinction than previously thought. This vulnerability arises because these species often have a number of correlated sexual, behavioural, life history and ecological traits, which can have a simple genetic underpinning. When exacerbated by environmental change, these alternate strategies can lead to conflict between morphs at the genomic and population levels, which can directly or indirectly affect population and evolutionary dynamics. In this perspective, we identify a number of ways in which the nature of the correlated traits, their underpinning genetic architecture, and the inevitable interactions between colour morphs can result in a reduction in population fitness. The principles illustrated here apply to all kinds of discrete polymorphism (e.g. behavioural syndromes), but we focus primarily on colour polymorphism because it is well studied. We urge further empirical investigation of the genetic architecture and interactions in polymorphic species to elucidate the impact on population fitness.

  2. An energetic scale for equilibrium H/D fractionation factors illuminates hydrogen bond free energies in proteins

    PubMed Central

    Cao, Zheng; Bowie, James U

    2014-01-01

    Equilibrium H/D fractionation factors have been extensively employed to qualitatively assess hydrogen bond strengths in protein structure, enzyme active sites, and DNA. It remains unclear how fractionation factors correlate with hydrogen bond free energies, however. Here we develop an empirical relationship between fractionation factors and free energy, allowing for the simple and quantitative measurement of hydrogen bond free energies. Applying our empirical relationship to prior fractionation factor studies in proteins, we find: [1] Within the folded state, backbone hydrogen bonds are only marginally stronger on average in α-helices compared to β-sheets by ∼0.2 kcal/mol. [2] Charge-stabilized hydrogen bonds are stronger than neutral hydrogen bonds by ∼2 kcal/mol on average, and can be as strong as –7 kcal/mol. [3] Changes in a few hydrogen bonds during an enzyme catalytic cycle can stabilize an intermediate state by –4.2 kcal/mol. [4] Backbone hydrogen bonds can make a large overall contribution to the energetics of conformational changes, possibly playing an important role in directing conformational changes. [5] Backbone hydrogen bonding becomes more uniform overall upon ligand binding, which may facilitate participation of the entire protein structure in events at the active site. Our energetic scale provides a simple method for further exploration of hydrogen bond free energies. PMID:24501090

  3. Defying Intuition: Demonstrating the Importance of the Empirical Technique.

    ERIC Educational Resources Information Center

    Kohn, Art

    1992-01-01

    Describes a classroom activity featuring a simple stay-switch probability game. Contends that the exercise helps students see the importance of empirically validating beliefs. Includes full instructions for conducting and discussing the exercise. (CFR)

  4. Using empirical Bayes predictors from generalized linear mixed models to test and visualize associations among longitudinal outcomes.

    PubMed

    Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O

    2018-01-01

    Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and hence, plotable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from a MGLMM, provide a good approximation and visual representation of these latent association analyses using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing outcomes that are interrelated with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage results in different mean and covariance parameter estimates from the maximum likelihood estimates generated by a MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases. Thus if computable, scatterplots of the conditionally independent empirical Bayes predictors from a MGLMM are always preferable to scatterplots of empirical Bayes predictors generated by separate models, unless the true association between outcomes is zero.
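
    A minimal sketch of the two-stage route discussed above, using statsmodels mixed models on simulated data: separate random-intercept models are fitted for two correlated longitudinal outcomes, and their empirical Bayes predictors are associated in a second stage. The simulation settings are assumptions; a joint MGLMM would estimate the latent correlation directly.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(5)
    n_subj, n_obs = 100, 6
    # Latent subject-level intercepts with true correlation 0.6.
    b = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], n_subj)
    rows = []
    for i in range(n_subj):
        for t in range(n_obs):
            rows.append({"subject": i, "t": t,
                         "y1": 1.0 + b[i, 0] + 0.2 * t + rng.normal(0, 1),
                         "y2": -0.5 + b[i, 1] - 0.1 * t + rng.normal(0, 1)})
    df = pd.DataFrame(rows)

    eb = {}
    for y in ("y1", "y2"):
        fit = smf.mixedlm(f"{y} ~ t", df, groups=df["subject"]).fit()
        # Empirical Bayes prediction of each subject's random intercept.
        eb[y] = np.array([fit.random_effects[i].iloc[0] for i in range(n_subj)])

    # Second-stage association of EB predictors; shrinkage typically
    # attenuates the estimate somewhat relative to the true value of 0.6.
    print("correlation of EB intercepts:", np.corrcoef(eb["y1"], eb["y2"])[0, 1].round(2))
    ```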

  5. A global empirical system for probabilistic seasonal climate prediction

    NASA Astrophysics Data System (ADS)

    Eden, J. M.; van Oldenborgh, G. J.; Hawkins, E.; Suckling, E. B.

    2015-12-01

    Preparing for episodes with risks of anomalous weather a month to a year ahead is an important challenge for governments, non-governmental organisations, and private companies and is dependent on the availability of reliable forecasts. The majority of operational seasonal forecasts are made using process-based dynamical models, which are complex, computationally challenging and prone to biases. Empirical forecast approaches built on statistical models to represent physical processes offer an alternative to dynamical systems and can provide either a benchmark for comparison or independent supplementary forecasts. Here, we present a simple empirical system based on multiple linear regression for producing probabilistic forecasts of seasonal surface air temperature and precipitation across the globe. The global CO2-equivalent concentration is taken as the primary predictor; subsequent predictors, including large-scale modes of variability in the climate system and local-scale information, are selected on the basis of their physical relationship with the predictand. The focus given to the climate change signal as a source of skill and the probabilistic nature of the forecasts produced constitute a novel approach to global empirical prediction. Hindcasts for the period 1961-2013 are validated against observations using deterministic (correlation of seasonal means) and probabilistic (continuous rank probability skill scores) metrics. Good skill is found in many regions, particularly for surface air temperature and most notably in much of Europe during the spring and summer seasons. For precipitation, skill is generally limited to regions with known El Niño-Southern Oscillation (ENSO) teleconnections. The system is used in a quasi-operational framework to generate empirical seasonal forecasts on a monthly basis.
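
    A stripped-down version of such a regression-based hindcast is sketched below with synthetic stand-ins for the predictors and predictand; the real system uses observed CO2-equivalent concentrations, climate indices, and probabilistic skill scores rather than this toy leave-one-out correlation.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(4)
    years = np.arange(1961, 2014)
    co2eq = 320 + 1.8 * (years - 1961)    # smooth trend predictor (illustrative)
    enso = rng.normal(0, 1, years.size)   # stand-in for an observed ENSO index
    temp = 0.01 * co2eq + 0.15 * enso + rng.normal(0, 0.1, years.size)

    X = np.column_stack([co2eq, enso])
    # Leave-one-out hindcast, mimicking out-of-sample validation over 1961-2013.
    pred = np.empty_like(temp)
    for i in range(years.size):
        keep = np.arange(years.size) != i
        model = LinearRegression().fit(X[keep], temp[keep])
        pred[i] = model.predict(X[i:i + 1])[0]
    print("hindcast correlation:", np.corrcoef(pred, temp)[0, 1].round(2))
    ```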

  6. An empirical system for probabilistic seasonal climate prediction

    NASA Astrophysics Data System (ADS)

    Eden, Jonathan; van Oldenborgh, Geert Jan; Hawkins, Ed; Suckling, Emma

    2016-04-01

    Preparing for episodes with risks of anomalous weather a month to a year ahead is an important challenge for governments, non-governmental organisations, and private companies and is dependent on the availability of reliable forecasts. The majority of operational seasonal forecasts are made using process-based dynamical models, which are complex, computationally challenging and prone to biases. Empirical forecast approaches built on statistical models to represent physical processes offer an alternative to dynamical systems and can provide either a benchmark for comparison or independent supplementary forecasts. Here, we present a simple empirical system based on multiple linear regression for producing probabilistic forecasts of seasonal surface air temperature and precipitation across the globe. The global CO2-equivalent concentration is taken as the primary predictor; subsequent predictors, including large-scale modes of variability in the climate system and local-scale information, are selected on the basis of their physical relationship with the predictand. The focus given to the climate change signal as a source of skill and the probabilistic nature of the forecasts produced constitute a novel approach to global empirical prediction. Hindcasts for the period 1961-2013 are validated against observations using deterministic (correlation of seasonal means) and probabilistic (continuous rank probability skill scores) metrics. Good skill is found in many regions, particularly for surface air temperature and most notably in much of Europe during the spring and summer seasons. For precipitation, skill is generally limited to regions with known El Niño-Southern Oscillation (ENSO) teleconnections. The system is used in a quasi-operational framework to generate empirical seasonal forecasts on a monthly basis.

  7. Absolute Measurement of the Refractive Index of Water by a Mode-Locked Laser at 518 nm.

    PubMed

    Meng, Zhaopeng; Zhai, Xiaoyu; Wei, Jianguo; Wang, Zhiyang; Wu, Hanzhong

    2018-04-09

    In this paper, we demonstrate a method using a frequency comb, which can precisely measure the refractive index of water. We have developed a simple system, in which a Michelson interferometer is placed into a quartz-glass container with a low expansion coefficient, and for which compensation of the thermal expansion of the water container is not required. By scanning a mirror on a moving stage, a pair of cross-correlation patterns can be generated. We can obtain the length information via these cross-correlation patterns, with or without water in the container. The refractive index of water can be measured from the resulting lengths. Long-term experimental results show that our method can measure the refractive index of water with a high degree of accuracy: measurement uncertainty at the 10^-5 level has been achieved, compared with the values calculated by the empirical formula.

  8. Tracer concentration profiles measured in central London as part of the REPARTEE campaign

    NASA Astrophysics Data System (ADS)

    Martin, D.; Petersson, K. F.; White, I. R.; Henshaw, S. J.; Nickless, G.; Lovelock, A.; Barlow, J. F.; Dunbar, T.; Wood, C. R.; Shallcross, D. E.

    2011-01-01

    There have been relatively few tracer experiments carried out that have looked at vertical plume spread in urban areas. In this paper we present results from two tracer (cyclic perfluorocarbon) experiments carried out in 2006 and 2007 in central London centred on the BT Tower as part of the REPARTEE (Regent's Park and Tower Environmental Experiment) campaign. The height of the tower gives a unique opportunity to study vertical dispersion profiles and transport times in central London. Vertical gradients are contrasted with the relevant Pasquill stability classes. Estimates of lateral advection and vertical mixing times are made and compared with previous measurements. Data are then compared with a simple operational dispersion model and contrasted with data taken in central London as part of the DAPPLE campaign, which correlates dosage with non-dimensionalised distance from source. Such analyses illustrate the feasibility of using these empirical correlations over the prescribed distances in central London.

  9. Bonding of Alkali-Alkaline Earth Molecules in the Lowest Σ^+ States of Doublet and Quartet Multiplicity

    NASA Astrophysics Data System (ADS)

    Pototschnig, Johann V.; Hauser, Andreas W.; Ernst, Wolfgang E.

    2016-06-01

    In the present study, the ground state as well as the lowest ^4Σ^+ state was determined for 16 alkali-alkaline earth (AK-AKE) molecules. Multireference configuration interaction calculations were carried out in order to understand the bonding of these diatomic molecules. The correlations between molecular properties (dissociation energy, bond distance, electric dipole moment) and atomic properties (electronegativity, polarizability) are discussed. A correlation between the dissociation energy and the dipole moment of the lowest ^4Σ^+ state was observed, while the dipole moment of the lowest ^2Σ^+ state does not show such a simple dependency; in this case an empirical relation could be established. The class of AK-AKE molecules was selected for this investigation due to their possible applications in ultracold molecular physics. J. V. Pototschnig, A. W. Hauser and W. E. Ernst, Phys. Chem. Chem. Phys., 2016, 18, 5964-5973

  10. Absolute Measurement of the Refractive Index of Water by a Mode-Locked Laser at 518 nm

    PubMed Central

    Meng, Zhaopeng; Zhai, Xiaoyu; Wei, Jianguo; Wang, Zhiyang; Wu, Hanzhong

    2018-01-01

    In this paper, we demonstrate a method using a frequency comb, which can precisely measure the refractive index of water. We have developed a simple system, in which a Michelson interferometer is placed into a quartz-glass container with a low expansion coefficient, and for which compensation of the thermal expansion of the water container is not required. By scanning a mirror on a moving stage, a pair of cross-correlation patterns can be generated. We can obtain the length information via these cross-correlation patterns, with or without water in the container. The refractive index of water can be measured from the resulting lengths. Long-term experimental results show that our method can measure the refractive index of water with a high degree of accuracy: measurement uncertainty at the 10^-5 level has been achieved, compared with the values calculated by the empirical formula. PMID:29642518

  11. Low Speed and High Speed Correlation of SMART Active Flap Rotor Loads

    NASA Technical Reports Server (NTRS)

    Kottapalli, Sesi B. R.

    2010-01-01

    Measured, open loop and closed loop data from the SMART rotor test in the NASA Ames 40- by 80-Foot Wind Tunnel are compared with CAMRAD II calculations. One open loop high-speed case and four closed loop cases are considered. The closed loop cases include three high-speed cases and one low-speed case; two of the high-speed cases are a 2 deg, 5P flap deflection case and a test maximum-airspeed case. This study follows a recent, open loop correlation effort that used a simple correction factor for the airfoil pitching-moment Mach number. Compared to the earlier effort, the current open loop study considers more fundamental corrections based on advancing-blade aerodynamic conditions. The airfoil tables themselves have been studied, and selected modifications to the HH-06 section flap airfoil pitching-moment table are implemented. For the closed loop condition, the effect of the flap actuator is modeled by increased flap hinge stiffness. Overall, the open loop correlation is reasonable, thus confirming the basic correctness of the current semi-empirical modifications; the closed loop correlation is also reasonable considering that the current flap model is a first-generation model. Detailed correlation results are given in the paper.

  12. An empirical relationship between mesoscale carbon monoxide concentrations and vehicular emission rates : final report.

    DOT National Transportation Integrated Search

    1979-01-01

    Presented is a relatively simple empirical equation that reasonably approximates the relationship between mesoscale carbon monoxide (CO) concentrations, areal vehicular CO emission rates, and the meteorological factors of wind speed and mixing height...

  13. Effect of Non-Uniform Heat Generation on Thermionic Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schock, Alfred

    The penalty resulting from non-uniform heat generation in a thermionic reactor is examined. Operation at sub-optimum cesium pressure is shown to reduce this penalty, but at the risk of a condition analogous to burnout. For high pressure diodes, a simple empirical correlation between current, voltage and heat flux is developed and used to analyze the performance penalty associated with two different heat flux profiles, for series- and parallel-connected converters. The results demonstrate that series-connected converters require much finer power flattening than parallel converters. For example, a ±10% variation in heat generation across a series array can result in a 25 to 50% power penalty.

  14. An efficient reliable method to estimate the vaporization enthalpy of pure substances according to the normal boiling temperature and critical properties

    PubMed Central

    Mehmandoust, Babak; Sanjari, Ehsan; Vatani, Mostafa

    2013-01-01

    The heat of vaporization of a pure substance at its normal boiling temperature is a very important property in many chemical processes. In this work, a new empirical method was developed to predict the vaporization enthalpy of pure substances. This equation is a function of normal boiling temperature, critical temperature, and critical pressure. The presented model is simple to use and provides an improvement over the existing equations for 452 pure substances over a wide boiling range. The results showed that the proposed correlation is more accurate than the literature methods for pure substances in a wide boiling range (20.3–722 K). PMID:25685493
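
    The abstract does not give the new correlation's closed form, so as a concrete example of this class of estimators, here is the classical Riedel correlation, which uses the same three inputs (normal boiling temperature, critical temperature, critical pressure):

    ```python
    import math

    R = 8.314  # gas constant, J/(mol*K)

    def riedel_dhvap(tb_K, tc_K, pc_bar):
        """Riedel estimate of the enthalpy of vaporization at the normal boiling
        point, in J/mol (a well-known literature correlation of the same
        functional family as the one proposed in the paper)."""
        tbr = tb_K / tc_K  # reduced normal boiling temperature
        return 1.093 * R * tc_K * tbr * (math.log(pc_bar) - 1.013) / (0.930 - tbr)

    # Water: Tb = 373.15 K, Tc = 647.1 K, Pc = 220.6 bar -> about 42 kJ/mol,
    # versus a measured value at Tb of roughly 40.7 kJ/mol.
    print(f"{riedel_dhvap(373.15, 647.1, 220.6) / 1000:.1f} kJ/mol")
    ```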

  15. An efficient reliable method to estimate the vaporization enthalpy of pure substances according to the normal boiling temperature and critical properties.

    PubMed

    Mehmandoust, Babak; Sanjari, Ehsan; Vatani, Mostafa

    2014-03-01

    The heat of vaporization of a pure substance at its normal boiling temperature is a very important property in many chemical processes. In this work, a new empirical method was developed to predict the vaporization enthalpy of pure substances. This equation is a function of normal boiling temperature, critical temperature, and critical pressure. The presented model is simple to use and provides an improvement over the existing equations for 452 pure substances over a wide boiling range. The results showed that the proposed correlation is more accurate than the literature methods for pure substances in a wide boiling range (20.3-722 K).

  16. Cascading Walks Model for Human Mobility Patterns

    PubMed Central

    Han, Xiao-Pu; Wang, Xiang-Wen; Yan, Xiao-Yong; Wang, Bing-Hong

    2015-01-01

    Background Uncovering the mechanism behind the scaling laws and series of anomalies in human trajectories is of fundamental significance in understanding many spatio-temporal phenomena. Recently, several models, e.g. the explorations-returns model (Song et al., 2010) and the radiation model for intercity travels (Simini et al., 2012), have been proposed to study the origin of these anomalies and the prediction of human movements. However, an agent-based model that could reproduce most of the empirical observations without a priori assumptions is still lacking. Methodology/Principal Findings In this paper, considering the empirical findings on the correlations of move-lengths and staying times in human trips, we propose a simple model that is mainly based on cascading processes to capture human mobility patterns. In this model, each long-range movement activates a series of shorter movements that are organized by the law of localized explorations and preferential returns in a prescribed region. Conclusions/Significance Based on numerical simulations and analytical studies, we show more than five statistical characters that are consistent with the empirical observations, including several types of scaling anomalies and ultraslow diffusion properties, implying that the cascading processes associated with localized exploration and preferential returns are indeed a key to understanding human mobility activities. Moreover, the model shows both the diverse individual mobility and the aggregated scaling displacements, bridging the micro and macro patterns in human mobility. In summary, our model successfully explains most of the empirical findings and provides a deeper understanding of the emergence of human mobility patterns. PMID:25860140

  17. Cascading walks model for human mobility patterns.

    PubMed

    Han, Xiao-Pu; Wang, Xiang-Wen; Yan, Xiao-Yong; Wang, Bing-Hong

    2015-01-01

    Uncovering the mechanism behind the scaling laws and series of anomalies in human trajectories is of fundamental significance in understanding many spatio-temporal phenomena. Recently, several models, e.g. the explorations-returns model (Song et al., 2010) and the radiation model for intercity travels (Simini et al., 2012), have been proposed to study the origin of these anomalies and the prediction of human movements. However, an agent-based model that could reproduce most of the empirical observations without a priori assumptions is still lacking. In this paper, considering the empirical findings on the correlations of move-lengths and staying times in human trips, we propose a simple model that is mainly based on cascading processes to capture human mobility patterns. In this model, each long-range movement activates a series of shorter movements that are organized by the law of localized explorations and preferential returns in a prescribed region. Based on numerical simulations and analytical studies, we show more than five statistical characters that are consistent with the empirical observations, including several types of scaling anomalies and ultraslow diffusion properties, implying that the cascading processes associated with localized exploration and preferential returns are indeed a key to understanding human mobility activities. Moreover, the model shows both the diverse individual mobility and the aggregated scaling displacements, bridging the micro and macro patterns in human mobility. In summary, our model successfully explains most of the empirical findings and provides a deeper understanding of the emergence of human mobility patterns.

  18. Statistical fluctuations in pedestrian evacuation times and the effect of social contagion

    NASA Astrophysics Data System (ADS)

    Nicolas, Alexandre; Bouzat, Sebastián; Kuperman, Marcelo N.

    2016-08-01

    Mathematical models of pedestrian evacuation and the associated simulation software have become essential tools for the assessment of the safety of public facilities and buildings. While a variety of models is now available, their calibration and testing against empirical data are generally restricted to global averaged quantities; the statistics compiled from the time series of individual escapes ("microscopic" statistics) measured in recent experiments are thus overlooked. In the same spirit, much research has primarily focused on the average global evacuation time, whereas the whole distribution of evacuation times over some set of realizations should matter. In the present paper we propose and discuss the validity of a simple relation between this distribution and the microscopic statistics, which is theoretically valid in the absence of correlations. To this purpose, we develop a minimal cellular automaton, with features that afford a semiquantitative reproduction of the experimental microscopic statistics. We then introduce a process of social contagion of impatient behavior in the model and show that the simple relation under test may dramatically fail at high contagion strengths, the latter being responsible for the emergence of strong correlations in the system. We conclude with comments on the potential practical relevance for safety science of calculations based on microscopic statistics.
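
    The "simple relation" tested in the paper holds when successive escape gaps are uncorrelated; the sketch below illustrates the idea via the variance form of that statement (the gap distributions and the contagion stand-in are our assumptions, not the authors' cellular automaton).

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n_ped, n_runs = 80, 20_000

    # Uncorrelated case: i.i.d. lognormal gaps between successive escapes, so
    # the total evacuation time is the n-fold convolution of the gap
    # distribution and Var(T_total) = n * Var(gap).
    gaps = rng.lognormal(mean=0.0, sigma=0.5, size=(n_runs, n_ped))
    t_total_indep = gaps.sum(axis=1)

    # Correlated case: a common per-run "mood" factor couples all gaps in a
    # run, a crude stand-in for social contagion of impatient behaviour.
    mood = rng.lognormal(0.0, 0.3, size=(n_runs, 1))
    t_total_corr = (gaps * mood).sum(axis=1)

    predicted_var = n_ped * gaps.var()  # convolution (uncorrelated) prediction
    print("uncorrelated runs:", t_total_indep.var().round(1), "vs predicted", predicted_var.round(1))
    print("correlated runs:  ", t_total_corr.var().round(1), "(relation fails)")
    ```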

  19. Path integral for equities: Dynamic correlation and empirical analysis

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Cao, Yang; Lau, Ada; Tang, Pan

    2012-02-01

    This paper develops a model to describe the unequal-time correlation between the rates of return of different stocks. A non-trivial fourth-order derivative Lagrangian is defined to provide an unequal-time propagator, which can be fitted to market data. A calibration algorithm is designed to find the empirical parameters for this model, and different de-noising methods are used to capture the signals concealed in the rates of return. The detailed results of this Gaussian model show that different stocks can have strong correlations and that the empirical unequal-time correlator can be described by the model's propagator. This preliminary study provides a novel model for the correlator of different instruments at different times.

  20. Design of exchange-correlation functionals through the correlation factor approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavlíková Přecechtělová, Jana, E-mail: j.precechtelova@gmail.com, E-mail: Matthias.Ernzerhof@UMontreal.ca; Institut für Chemie, Theoretische Chemie / Quantenchemie, Sekr. C7, Technische Universität Berlin, Straße des 17. Juni 135, 10623 Berlin; Bahmann, Hilke

    The correlation factor model is developed in which the spherically averaged exchange-correlation hole of Kohn-Sham theory is factorized into an exchange hole model and a correlation factor. The exchange hole model reproduces the exact exchange energy per particle. The correlation factor is constructed in such a manner that the exchange-correlation energy correctly reduces to exact exchange in the high density and rapidly varying limits. Four different correlation factor models are presented which satisfy varying sets of physical constraints. Three models are free from empirical adjustments to experimental data, while one correlation factor model draws on one empirical parameter. The correlation factor models are derived in detail and the resulting exchange-correlation holes are analyzed. Furthermore, the exchange-correlation energies obtained from the correlation factor models are employed to calculate total energies, atomization energies, and barrier heights. It is shown that accurate, non-empirical functionals can be constructed building on exact exchange. Avenues for further improvements are outlined as well.

  1. Fine structure of spectral properties for random correlation matrices: An application to financial markets

    NASA Astrophysics Data System (ADS)

    Livan, Giacomo; Alfarano, Simone; Scalas, Enrico

    2011-07-01

    We study some properties of eigenvalue spectra of financial correlation matrices. In particular, we investigate the nature of the large eigenvalue bulks which are observed empirically, and which have often been regarded as a consequence of the supposedly large amount of noise contained in financial data. We challenge this common knowledge by acting on the empirical correlation matrices of two data sets with a filtering procedure which highlights some of the cluster structure they contain, and we analyze the consequences of such filtering on eigenvalue spectra. We show that empirically observed eigenvalue bulks emerge as superpositions of smaller structures, which in turn emerge as a consequence of cross correlations between stocks. We interpret and corroborate these findings in terms of factor models, and we compare empirical spectra to those predicted by random matrix theory for such models.
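
    The standard random-matrix baseline for such analyses is easy to reproduce: eigenvalues of the correlation matrix of purely random returns fall inside the Marchenko-Pastur bulk, so anything outside it reflects genuine structure. The matrix dimensions below are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    T, N = 2000, 400                   # time points, stocks (illustrative sizes)
    returns = rng.normal(size=(T, N))  # i.i.d. noise: no true cross correlations
    C = np.corrcoef(returns, rowvar=False)
    eigvals = np.linalg.eigvalsh(C)

    q = N / T
    lam_minus, lam_plus = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2
    outside = np.sum((eigvals < lam_minus) | (eigvals > lam_plus))
    print(f"MP bulk = [{lam_minus:.2f}, {lam_plus:.2f}], eigenvalues outside: {outside}")
    # Repeating this with real stock returns reveals a handful of large outliers
    # (market and sector modes); the paper's filtering probes structure inside the bulk.
    ```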

  2. EGG: hatching a mock Universe from empirical prescriptions⋆

    NASA Astrophysics Data System (ADS)

    Schreiber, C.; Elbaz, D.; Pannella, M.; Merlin, E.; Castellano, M.; Fontana, A.; Bourne, N.; Boutsia, K.; Cullen, F.; Dunlop, J.; Ferguson, H. C.; Michałowski, M. J.; Okumura, K.; Santini, P.; Shu, X. W.; Wang, T.; White, C.

    2017-06-01

    This paper introduces EGG, the Empirical Galaxy Generator, a tool designed within the ASTRODEEP collaboration to generate mock galaxy catalogs for deep fields with realistic fluxes and simple morphologies. The simulation procedure is based exclusively on empirical prescriptions - rather than first principles - to provide the most accurate match with current observations at 0

  3. Ising model with conserved magnetization on the human connectome: Implications on the relation structure-function in wakefulness and anesthesia

    NASA Astrophysics Data System (ADS)

    Stramaglia, S.; Pellicoro, M.; Angelini, L.; Amico, E.; Aerts, H.; Cortés, J. M.; Laureys, S.; Marinazzo, D.

    2017-04-01

    Dynamical models implemented on the large-scale architecture of the human brain may shed light on how function arises from the underlying structure. This is the case notably for simple abstract models, such as the Ising model. We compare the spin correlations of the Ising model and the empirical functional brain correlations, both at the single link level and at the modular level, and show that their match increases at the modular level in anesthesia, in line with recent results and theories. Moreover, we show that at the peak of the specific heat (the critical state), the spin correlations are minimally shaped by the underlying structural network, explaining how the best match between the structure and function is obtained at the onset of criticality, as previously observed. These findings confirm that brain dynamics under anesthesia shows a departure from criticality and could open the way to novel perspectives when the conserved magnetization is interpreted in terms of a homeostatic principle imposed on neural activity.

  4. The impact of fiscal austerity on suicide: on the empirics of a modern Greek tragedy.

    PubMed

    Antonakakis, Nikolaos; Collins, Alan

    2014-07-01

    Suicide rates in Greece (and other European countries) have been on a remarkable upward trend following the global recession of 2008 and the European sovereign debt crisis of 2009. However, recent investigations of the impact of the 2008 financial crisis on Greek suicide rates have restricted themselves to simple descriptive or correlation analyses. Controlling for various socio-economic effects, this study presents a statistically robust model to explain the influence of fiscal austerity measures and variations in macroeconomic performance on realised suicidality over the period 1968-2011. The responsiveness of suicide to levels of fiscal austerity is established as a means of providing policy guidance on the extent of suicide behaviour associated with different fiscal austerity measures. The results suggest (i) significant age and gender specificity in these effects on suicide rates and (ii) that remittances have suicide-reducing effects on the youth and female population. These empirical regularities potentially offer some guidance on the demographic targeting of suicide prevention measures and the case for 'economic' migration.

  5. AN EMPIRICAL INVESTIGATION OF THE EFFECTS OF NONNORMALITY UPON THE SAMPLING DISTRIBUTION OF THE PRODUCT MOMENT CORRELATION COEFFICIENT.

    ERIC Educational Resources Information Center

    HJELM, HOWARD; NORRIS, RAYMOND C.

    The study empirically determined the effects of nonnormality upon some sampling distributions of the product moment correlation coefficient (PMCC). Sampling distributions of the PMCC were obtained by drawing numerous samples from control and experimental populations having various degrees of nonnormality and by calculating correlation coefficients…

  6. Using Paperclips to Explain Empirical Formulas to Students

    ERIC Educational Resources Information Center

    Nassiff, Peter; Czerwinski, Wendy A.

    2014-01-01

    Early in their chemistry education, students learn to do empirical formula calculations by rote without an understanding of the historical context behind them or the reason why their calculations work. In these activities, students use paperclip "atoms", construct a series of simple compounds representing real molecules, and discover,…

  7. A Bayes linear Bayes method for estimation of correlated event rates.

    PubMed

    Quigley, John; Wilson, Kevin J; Walls, Lesley; Bedford, Tim

    2013-12-01

    Typically, full Bayesian estimation of correlated event rates can be computationally challenging since estimators are intractable. When estimation of event rates represents one activity within a larger modeling process, there is an incentive to develop more efficient inference than provided by a full Bayesian model. We develop a new subjective inference method for correlated event rates based on a Bayes linear Bayes model under the assumption that events are generated from a homogeneous Poisson process. To reduce the elicitation burden we introduce homogenization factors to the model and, as an alternative to a subjective prior, an empirical method using the method of moments is developed. Inference under the new method is compared against estimates obtained under a full Bayesian model, which takes a multivariate gamma prior, where the predictive and posterior distributions are derived in terms of well-known functions. The mathematical properties of both models are presented. A simulation study shows that the Bayes linear Bayes inference method and the full Bayesian model provide equally reliable estimates. An illustrative example, motivated by a problem of estimating correlated event rates across different users in a simple supply chain, shows how ignoring the correlation leads to biased estimation of event rates.
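
    A minimal sketch of the method-of-moments route for a single-rate gamma-Poisson model is given below; the homogenization factors and cross-rate correlation structure of the full Bayes linear Bayes model are not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    true_rates = rng.gamma(shape=3.0, scale=0.5, size=50)  # per-unit event rates
    exposure = 10.0                                        # observation time per unit
    counts = rng.poisson(true_rates * exposure)

    m, v = counts.mean(), counts.var(ddof=1)
    # Marginal moments of the gamma-Poisson (negative binomial) model:
    # E[X] = t*alpha*theta and Var[X] = E[X] + t*theta*E[X], so theta and
    # alpha follow from the observed mean and variance of the counts.
    theta_hat = max(v - m, 1e-9) / (m * exposure)          # gamma scale
    alpha_hat = m / (exposure * theta_hat)                 # gamma shape

    # Posterior mean rate for each unit under the fitted gamma prior
    # (conjugate update), shrinking noisy counts toward the pooled rate.
    post_rate = (alpha_hat + counts) / (1.0 / theta_hat + exposure)
    print("pooled rate:", (m / exposure).round(2),
          "| first 5 shrunk rates:", post_rate[:5].round(2))
    ```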

  8. Singlet-triplet splittings from the virial theorem and single-particle excitation energies

    NASA Astrophysics Data System (ADS)

    Becke, Axel D.

    2018-01-01

    The zeroth-order (uncorrelated) singlet-triplet energy difference in single-particle excited configurations is 2Kif, where Kif is the Coulomb self-energy of the product of the transition orbitals. Here we present a non-empirical, virial-theorem argument that the correlated singlet-triplet energy difference should be half of this, namely, Kif. This incredibly simple result gives vertical HOMO-LUMO excitation energies in small-molecule benchmarks as good as the popular TD-B3LYP time-dependent approach to excited states. For linear acenes and nonlinear polycyclic aromatic hydrocarbons, the performance is significantly better than TD-B3LYP. In addition to the virial theorem, the derivation borrows intuitive pair-density concepts from density-functional theory.

  9. Association between volume and momentum of online searches and real-world collective unrest

    NASA Astrophysics Data System (ADS)

    Qi, Hong; Manrique, Pedro; Johnson, Daniela; Restrepo, Elvira; Johnson, Neil F.

    A fundamental idea from physics is that macroscopic transitions can occur as a result of an escalation in the correlated activity of a many-body system's constituent particles. Here we apply this idea in an interdisciplinary setting, whereby the particles are individuals, their correlated activity involves online search activity surrounding the topics of social unrest, and the macroscopic phenomena being measured are real-world protests. Our empirical study covers countries in Latin America during 2011-2014 using datasets assembled from multiple sources by subject matter experts. We find specifically that the volume and momentum of searches on Google Trends surrounding mass protest language can detect - and may even pre-empt - the macroscopic on-street activity. Not only can this simple open-source solution prove an invaluable aid for monitoring civil order, our study also serves to strengthen the increasing literature in the physics community aimed at understanding the collective dynamics of interacting populations of living objects across the life sciences.

  10. The quotient of normal random variables and application to asset price fat tails

    NASA Astrophysics Data System (ADS)

    Caginalp, Carey; Caginalp, Gunduz

    2018-06-01

    The quotient of random variables with normal distributions is examined and proven to have power law decay, with density f(x) ≃ f0 x^-2, with the coefficient depending on the means and variances of the numerator and denominator and their correlation. We also obtain the conditional probability densities for each of the four quadrants given by the signs of the numerator and denominator for arbitrary correlation ρ ∈ [-1, 1). For ρ = -1 we obtain a particularly simple closed form solution for all x ∈ R. The results are applied to a basic issue in economics and finance, namely the density of relative price changes. Classical finance stipulates a normal distribution of relative price changes, though empirical studies suggest a power law at the tail end. By considering the supply and demand in a basic price change model, we prove that the relative price change has density that decays with an x^-2 power law. Various parameter limits are established.
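
    The x^-2 tail is easy to verify by simulation: if f(x) ≃ f0 x^-2 in both tails, then s·P(|Q| > s) approaches a constant. The means, variances, and correlation below are arbitrary illustrations.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    n = 2_000_000
    rho = 0.3
    mu = np.array([1.0, 2.0])
    cov = np.array([[1.0, rho], [rho, 1.0]])
    x, y = rng.multivariate_normal(mu, cov, size=n).T
    q = x / y  # quotient of correlated normals

    # If f(x) ~ f0 * x^-2, then P(|Q| > s) ~ 2*f0 / s, so s * P(|Q| > s) -> const.
    for s in (10, 30, 100, 300):
        tail = np.mean(np.abs(q) > s)
        print(f"s={s:4d}  s*P(|Q|>s) = {s * tail:.4f}")
    ```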

  11. How accurate is the Pearson r-from-Z approximation? A Monte Carlo simulation study.

    PubMed

    Hittner, James B; May, Kim

    2012-01-01

    The Pearson r-from-Z approximation estimates the sample correlation (as an effect size measure) from the ratio of two quantities: the standard normal deviate equivalent (Z-score) corresponding to a one-tailed p-value divided by the square root of the total (pooled) sample size. The formula has utility in meta-analytic work when reports of research contain minimal statistical information. Although simple to implement, the accuracy of the Pearson r-from-Z approximation has not been empirically evaluated. To address this omission, we performed a series of Monte Carlo simulations. Results indicated that in some cases the formula did accurately estimate the sample correlation. However, when sample size was very small (N = 10) and effect sizes were small to small-moderate (ds of 0.1 and 0.3), the Pearson r-from-Z approximation was very inaccurate. Detailed figures that provide guidance as to when the Pearson r-from-Z formula will likely yield valid inferences are presented.
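
    The approximation itself is a one-liner; a minimal sketch (the function name is ours), using scipy only to convert the one-tailed p-value into its standard normal deviate:

      from scipy.stats import norm

      def r_from_z(p_one_tailed: float, n_total: int) -> float:
          """Pearson r-from-Z: the normal deviate of a one-tailed p-value
          divided by the square root of the total (pooled) sample size."""
          return norm.isf(p_one_tailed) / n_total ** 0.5

      # Example: p = .025 one-tailed, N = 40  ->  r ~ 1.96 / sqrt(40) ~ 0.31
      print(round(r_from_z(0.025, 40), 2))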

  12. The Empirical Attitude, Material Practice and Design Activities

    ERIC Educational Resources Information Center

    Apedoe, Xornam; Ford, Michael

    2010-01-01

    This article is an argument about something that is both important and severely underemphasized in most current science curricula. The empirical attitude, fundamental to science since Galileo, is a habit of mind that motivates an active search for feedback on our ideas from the material world. Although more simple views of science manifest the…

  13. Semi-empirical master curve concept describing the rate capability of lithium insertion electrodes

    NASA Astrophysics Data System (ADS)

    Heubner, C.; Seeba, J.; Liebmann, T.; Nickol, A.; Börner, S.; Fritsch, M.; Nikolowski, K.; Wolter, M.; Schneider, M.; Michaelis, A.

    2018-03-01

    A simple semi-empirical master curve concept, describing the rate capability of porous insertion electrodes for lithium-ion batteries, is proposed. The model is based on the evaluation of the time constants of lithium diffusion in the liquid electrolyte and the solid active material. This theoretical approach is successfully verified by comprehensive experimental investigations of the rate capability of a large number of porous insertion electrodes with various active materials and design parameters. It turns out that the rate capability of all investigated electrodes follows a simple master curve governed by the time constant of the rate-limiting process. We demonstrate that the master curve concept can be used to determine optimum design criteria meeting specific requirements in terms of maximum gravimetric capacity for a desired rate capability. The model further reveals practical limits of the electrode design, confirming the empirically well-known and inevitable tradeoff between energy and power density.
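
    A minimal sketch of the time-constant comparison underlying such a master curve, assuming the usual diffusion scaling τ ≈ L²/D for the electrolyte (electrode-thickness scale) and the active material (particle-radius scale); all numbers are illustrative placeholders, not values from the paper:

      # Characteristic diffusion time constants, tau = L^2 / D (illustrative).
      L_electrode = 70e-6   # electrode thickness, m
      r_particle = 5e-6     # active-material particle radius, m
      D_liquid = 3e-10      # effective electrolyte diffusivity, m^2/s
      D_solid = 1e-14       # solid-state Li diffusivity, m^2/s

      tau_liquid = L_electrode**2 / D_liquid
      tau_solid = r_particle**2 / D_solid

      # The larger time constant marks the rate-limiting process that would
      # govern the master curve.
      limiting = "solid diffusion" if tau_solid > tau_liquid else "liquid diffusion"
      print("tau_liquid = %.1f s, tau_solid = %.1f s -> %s"
            % (tau_liquid, tau_solid, limiting))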

  14. Piaget's epistemic subject and science education: Epistemological vs. psychological issues

    NASA Astrophysics Data System (ADS)

    Kitchener, Richard F.

    1993-06-01

    Many individuals claim that Piaget's theory of cognitive development is empirically false or substantially disconfirmed by empirical research. Although there is substance to such a claim, any such conclusion must address three increasingly problematic issues about the possibility of providing an empirical test of Piaget's genetic epistemology: (1) the empirical underdetermination of theory by empirical evidence, (2) the empirical difficulty of testing competence-type explanations, and (3) the difficulty of empirically testing epistemic norms. This is especially true of a central epistemic construct in Piaget's theory — the epistemic subject. To illustrate how similar problems of empirical testability arise in the physical sciences, I briefly examine the case of Galileo and the correlative difficulty of empirically testing Galileo's laws. I then point out some important epistemological similarities between Galileo and Piaget together with correlative changes needed in science studies methodology. I conclude that many psychologists and science educators have failed to appreciate the difficulty of falsifying Piaget's theory because they have tacitly adopted a philosophy of science at odds with the paradigm-case of Galileo.

  15. β-Lactam antibiotics. Spectroscopy and molecular orbital (MO) calculations . Part I: IR studies of complexation in penicillin-transition metal ion systems and semi-empirical PM3 calculations on simple model compounds

    NASA Astrophysics Data System (ADS)

    Kupka, Teobald

    1997-12-01

    IR studies were performed to determine possible transition metal ion binding sites of penicillin. The observed changes in spectral position and shape of characteristic IR bands of cloxacillin in the presence of transition metal ions (both in solutions and in the solid state) indicate formation of M-L complexes with engagement of -COO⁻ and/or -CONH- functional groups. The small shift of νCO towards higher frequencies rules out direct M-L interaction via the β-lactam carbonyl. PM3 calculations on simple model compounds (substituted formamide, cyclic ketones, lactams and substituted monocyclic β-lactams) have been performed. All structures were fully optimized, and the calculated bond lengths, angles, heats of formation and CO stretching frequencies were discussed to determine the β-lactam binding sites and to explain its susceptibility towards nucleophilic attack (hydrolysis in vitro) and biological activity. The relative changes of calculated values were critically compared with available experimental data, and some correlation between structural parameters and in vivo activity was shown.

  16. Keep it simple - A case study of model development in the context of the Dynamic Stocks and Flows (DSF) task

    NASA Astrophysics Data System (ADS)

    Halbrügge, Marc

    2010-12-01

    This paper describes the creation of a cognitive model submitted to the ‘Dynamic Stocks and Flows’ (DSF) modeling challenge. This challenge aims at comparing computational cognitive models for human behavior during an open-ended control task. Participants in the modeling competition were provided with a simulation environment and training data for benchmarking their models, while the actual specification of the competition task was withheld. To meet this challenge, the cognitive model described here was designed and optimized for generalizability. Only two simple assumptions about human problem solving were used to explain the empirical findings of the training data. In-depth analysis of the data set prior to the development of the model led to the dismissal of correlations or other parametric statistics as goodness-of-fit indicators. A new statistical measurement based on rank orders and sequence matching techniques is proposed instead. This measurement, when applied to the human sample, also identifies clusters of subjects that use different strategies for the task. The acceptability of the fits achieved by the model is verified using permutation tests.

  17. Learned helplessness and learned prevalence: exploring the causal relations among perceived controllability, reward prevalence, and exploration.

    PubMed

    Teodorescu, Kinneret; Erev, Ido

    2014-10-01

    Exposure to uncontrollable outcomes has been found to trigger learned helplessness, a state in which the agent, because of lack of exploration, fails to take advantage of regained control. Although the implications of this phenomenon have been widely studied, its underlying cause remains undetermined. One can learn not to explore because the environment is uncontrollable, because the average reinforcement for exploring is low, or because rewards for exploring are rare. In the current research, we tested a simple experimental paradigm that contrasts the predictions of these three contributors and offers a unified psychological mechanism that underlies the observed phenomena. Our results demonstrate that learned helplessness is not correlated with either the perceived controllability of one's environment or the average reward, which suggests that reward prevalence is a better predictor of exploratory behavior than the other two factors. A simple computational model in which exploration decisions were based on small samples of past experiences captured the empirical phenomena while also providing a cognitive basis for feelings of uncontrollability. © The Author(s) 2014.
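
    A hedged sketch of the kind of small-sample decision model the abstract describes: each choice between a safe option and exploration is based on a small random sample of the agent's own past payoffs, so rare exploration rewards are usually missed even when their mean value is competitive (all payoff values and parameters are hypothetical):

      import random

      random.seed(1)
      K = 3                                  # decisions rely on small samples
      explore_mem, safe_mem = [0.0], [1.0]   # experience stores per action
      n_explore = 0

      for trial in range(500):
          est_explore = sum(random.choices(explore_mem, k=K)) / K
          est_safe = sum(random.choices(safe_mem, k=K)) / K
          if est_explore > est_safe:
              # Rare but large reward: prevalence 0.05, magnitude 10.
              explore_mem.append(10.0 if random.random() < 0.05 else 0.0)
              n_explore += 1
          else:
              safe_mem.append(1.0)

      # Low reward prevalence starves exploration of reinforcing experiences.
      print("exploration rate: %.2f" % (n_explore / 500))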

  18. Galaxy–galaxy lensing estimators and their covariance properties

    DOE PAGES

    Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uros; ...

    2017-07-21

    Here, we study the covariance properties of real space correlation function estimators – primarily galaxy–shear correlations, or galaxy–galaxy lensing – using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by the shape noise and lens clustering, that empirically estimated covariances (jackknife and standard deviation across mocks) are consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.
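
    The random-point subtraction argued for above is simple to express. A schematic sketch with synthetic positions and pure shape noise (real estimators bin in radius and apply per-source weights):

      import numpy as np

      def stacked_signal(points, src_xy, src_gt, r_max=1.0):
          """Mean shear-like signal of sources within r_max of each stacking point."""
          total, count = 0.0, 0
          for p in points:
              sel = np.hypot(src_xy[:, 0] - p[0], src_xy[:, 1] - p[1]) < r_max
              total += src_gt[sel].sum()
              count += sel.sum()
          return total / max(count, 1)

      rng = np.random.default_rng(0)
      lenses = rng.uniform(0, 10, (200, 2))    # hypothetical lens positions
      randoms = rng.uniform(0, 10, (200, 2))   # random points in the same mask
      sources = rng.uniform(0, 10, (5000, 2))
      gt = rng.normal(0.0, 0.3, 5000)          # shape noise only, no true signal

      # Measure around lenses, subtract the same measurement around randoms.
      print(stacked_signal(lenses, sources, gt) - stacked_signal(randoms, sources, gt))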

  19. Modified bathroom scale and balance assessment: a comparison with clinical tests.

    PubMed

    Duchêne, Jacques; Hewson, David; Rumeau, Pierre

    2016-01-01

    Frailty and detection of fall risk are major issues in preventive gerontology. A simple tool frequently used in daily life, a bathroom scale (balance quality tester: BQT), was modified to obtain information on the balance of 84 outpatients consulting at a geriatric clinic. The results computed from the BQT were compared to the values of three geriatric tests that are widely used either to detect a fall risk or frailty (timed get up and go: TUG; 10 m walking speed: WS; walking time: WT; one-leg stand: OS). The BQT calculates four parameters that are then scored and weighted, thus creating an overall indicator of balance quality. Raw data, partial scores and the global score were compared with the results of the three geriatric tests. The WT values had the highest correlation with BQT raw data (r = 0.55), while TUG (r = 0.53) and WS (r = 0.56) had the highest correlation with BQT partial scores. ROC curves for OS cut-off values (4 and 5 s) were produced, with the best results obtained for a 5 s cut-off, both with the partial scores combined using Fisher's combination (specificity 85 %: <0.11, sensitivity 85 %: >0.48), and with the empirical score (specificity 85 %: <7, sensitivity 85 %: >8). A BQT empirical score of less than seven can detect fall risk in a community dwelling population.

  1. Galaxy-galaxy lensing estimators and their covariance properties

    NASA Astrophysics Data System (ADS)

    Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uroš; Slosar, Anže; Vazquez Gonzalez, Jose

    2017-11-01

    We study the covariance properties of real space correlation function estimators - primarily galaxy-shear correlations, or galaxy-galaxy lensing - using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by the shape noise and lens clustering, that empirically estimated covariances (jackknife and standard deviation across mocks) are consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.

  2. Children's Criteria for Representational Adequacy in the Perception of Simple Sonic Stimuli

    ERIC Educational Resources Information Center

    Verschaffel, Lieven; Reybrouck, Mark; Jans, Christine; Van Dooren, Wim

    2010-01-01

    This study investigates children's metarepresentational competence with regard to listening to and making sense of simple sonic stimuli. Using diSessa's (2003) work on metarepresentational competence in mathematics and sciences as theoretical and empirical background, it aims to assess children's criteria for representational adequacy of graphical…

  3. A Simple Estimation Method for Aggregate Government Outsourcing

    ERIC Educational Resources Information Center

    Minicucci, Stephen; Donahue, John D.

    2004-01-01

    The scholarly and popular debate on the delegation to the private sector of governmental tasks rests on an inadequate empirical foundation, as no systematic data are collected on direct versus indirect service delivery. We offer a simple method for approximating levels of service outsourcing, based on relatively straightforward combinations of and…

  4. Towards a universal method for calculating hydration free energies: a 3D reference interaction site model with partial molar volume correction.

    PubMed

    Palmer, David S; Frolov, Andrey I; Ratkova, Ekaterina L; Fedorov, Maxim V

    2010-12-15

    We report a simple universal method to systematically improve the accuracy of hydration free energies calculated using an integral equation theory of molecular liquids, the 3D reference interaction site model. A strong linear correlation is observed between the difference of the experimental and (uncorrected) calculated hydration free energies and the calculated partial molar volume for a data set of 185 neutral organic molecules from different chemical classes. By using the partial molar volume as a linear empirical correction to the calculated hydration free energy, we obtain predictions of hydration free energies in excellent agreement with experiment (R = 0.94, σ = 0.99 kcal mol⁻¹ for a test set of 120 organic molecules).
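
    The correction amounts to fitting the residual against experiment as a linear function of the partial molar volume and adding it back. A minimal sketch with hypothetical numbers (not the paper's data set):

      import numpy as np

      dg_rism = np.array([-3.1, 0.8, -6.4, -1.9, 2.3])    # uncorrected, kcal/mol
      pmv = np.array([120.0, 180.0, 95.0, 150.0, 210.0])  # partial molar volume, A^3
      dg_exp = np.array([-5.0, -2.1, -7.9, -4.3, -0.8])   # experiment, kcal/mol

      # Fit (dG_exp - dG_rism) ~ a * V + b, then correct: dG_rism + a * V + b.
      a, b = np.polyfit(pmv, dg_exp - dg_rism, 1)
      dg_corrected = dg_rism + a * pmv + b
      print("a = %.4f, b = %.2f" % (a, b))
      print("corrected:", np.round(dg_corrected, 2))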

  5. CROSS-DISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Worldwide Marine Transportation Network: Efficiency and Container Throughput

    NASA Astrophysics Data System (ADS)

    Deng, Wei-Bing; Guo, Long; Li, Wei; Cai, Xu

    2009-11-01

    Through empirical analysis of the global structure of the Worldwide Marine Transportation Network (WMTN), we find that the WMTN, a small-world network, exhibits an exponential-like degree distribution. We then investigate the efficiency of the WMTN by employing a simple definition. Compared with many other transportation networks, the WMTN possesses relatively low efficiency. Furthermore, by exploring the relationship between the topological structure and the container throughput, we find that strong correlations exist among the container throughput, the degree and the clustering coefficient. Also, considering the navigational process of a ship travelling along a real shipping line, we find that the weight of a seaport is proportional to the total probability contributed by all the passing shipping lines.

  6. The dynamics of correlated novelties.

    PubMed

    Tria, F; Loreto, V; Servedio, V D P; Strogatz, S H

    2014-07-31

    Novelties are a familiar part of daily life. They are also fundamental to the evolution of biological systems, human society, and technology. By opening new possibilities, one novelty can pave the way for others in a process that Kauffman has called "expanding the adjacent possible". The dynamics of correlated novelties, however, have yet to be quantified empirically or modeled mathematically. Here we propose a simple mathematical model that mimics the process of exploring a physical, biological, or conceptual space that enlarges whenever a novelty occurs. The model, a generalization of Polya's urn, predicts statistical laws for the rate at which novelties happen (Heaps' law) and for the probability distribution on the space explored (Zipf's law), as well as signatures of the process by which one novelty sets the stage for another. We test these predictions on four data sets of human activity: the edit events of Wikipedia pages, the emergence of tags in annotation systems, the sequence of words in texts, and listening to new songs in online music catalogues. By quantifying the dynamics of correlated novelties, our results provide a starting point for a deeper understanding of the adjacent possible and its role in biological, cultural, and technological evolution.
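
    A compact simulation of an urn-with-triggering scheme of the kind described (parameter names are ours; reinforcement ρ, triggering ν): every first draw of a colour injects ν brand-new colours, enlarging the adjacent possible, and the count of distinct colours then grows sublinearly, as Heaps' law predicts.

      import math
      import random

      random.seed(0)
      RHO, NU = 4, 3                 # reinforcement and triggering parameters
      urn, next_colour = [0, 1, 2], 3
      seen, novelty_times = set(), []

      for t in range(1, 20001):
          ball = random.choice(urn)
          urn.extend([ball] * RHO)                 # reinforce the drawn colour
          if ball not in seen:                     # a novelty occurs:
              seen.add(ball)
              novelty_times.append(t)
              urn.extend(range(next_colour, next_colour + NU))  # new possibilities
              next_colour += NU

      # Heaps' law: distinct colours D(t) ~ t^beta with beta < 1.
      d1 = sum(1 for s in novelty_times if s <= 1000)
      d2 = len(novelty_times)
      print("Heaps exponent ~ %.2f" % (math.log(d2 / d1) / math.log(20000 / 1000)))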

  7. MEASUREMENT OF WIND SPEED FROM COOLING LAKE THERMAL IMAGERY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garrett, A.; Kurzeja, R.; Villa-Aleman, E.

    2009-01-20

    The Savannah River National Laboratory (SRNL) collected thermal imagery and ground truth data at two commercial power plant cooling lakes to investigate the applicability of laboratory empirical correlations between surface heat flux and wind speed, and statistics derived from thermal imagery. SRNL demonstrated in a previous paper [1] that a linear relationship exists between the standard deviation of image temperature and surface heat flux. In this paper, SRNL will show that the skewness of the temperature distribution derived from cooling lake thermal images correlates with instantaneous wind speed measured at the same location. SRNL collected thermal imagery, surface meteorology and water temperatures from helicopters and boats at the Comanche Peak and H. B. Robinson nuclear power plant cooling lakes. SRNL found that decreasing skewness correlated with increasing wind speed, as was the case for the laboratory experiments. Simple linear and orthogonal regression models both explained about 50% of the variance in the skewness - wind speed plots. A nonlinear (logistic) regression model produced a better fit to the data, apparently because the thermal convection and resulting skewness are related to wind speed in a highly nonlinear way in nearly calm and in windy conditions.
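
    A hedged sketch of fitting a sigmoidal skewness-wind relation of the reported kind on synthetic data; the functional form, parameter names, and numbers are placeholders, not those of the SRNL study:

      import numpy as np
      from scipy.optimize import curve_fit

      def logistic(u, s_hi, s_lo, u0, k):
          """Skewness falls from s_hi (calm) to s_lo (windy) around speed u0."""
          return s_lo + (s_hi - s_lo) / (1.0 + np.exp(k * (u - u0)))

      rng = np.random.default_rng(0)
      wind = rng.uniform(0.0, 10.0, 80)                         # m/s, synthetic
      skew = logistic(wind, 1.5, -0.5, 3.0, 1.2) + rng.normal(0, 0.15, 80)

      popt, _ = curve_fit(logistic, wind, skew, p0=(1.0, 0.0, 4.0, 1.0))
      print("fitted (s_hi, s_lo, u0, k):", np.round(popt, 2))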

  8. The dynamics of correlated novelties

    NASA Astrophysics Data System (ADS)

    Tria, F.; Loreto, V.; Servedio, V. D. P.; Strogatz, S. H.

    2014-07-01

    Novelties are a familiar part of daily life. They are also fundamental to the evolution of biological systems, human society, and technology. By opening new possibilities, one novelty can pave the way for others in a process that Kauffman has called ``expanding the adjacent possible''. The dynamics of correlated novelties, however, have yet to be quantified empirically or modeled mathematically. Here we propose a simple mathematical model that mimics the process of exploring a physical, biological, or conceptual space that enlarges whenever a novelty occurs. The model, a generalization of Polya's urn, predicts statistical laws for the rate at which novelties happen (Heaps' law) and for the probability distribution on the space explored (Zipf's law), as well as signatures of the process by which one novelty sets the stage for another. We test these predictions on four data sets of human activity: the edit events of Wikipedia pages, the emergence of tags in annotation systems, the sequence of words in texts, and listening to new songs in online music catalogues. By quantifying the dynamics of correlated novelties, our results provide a starting point for a deeper understanding of the adjacent possible and its role in biological, cultural, and technological evolution.

  10. Empirical Bayes Estimation of Semi-parametric Hierarchical Mixture Models for Unbiased Characterization of Polygenic Disease Architectures

    PubMed Central

    Nishino, Jo; Kochi, Yuta; Shigemizu, Daichi; Kato, Mamoru; Ikari, Katsunori; Ochi, Hidenori; Noma, Hisashi; Matsui, Kota; Morizono, Takashi; Boroevich, Keith A.; Tsunoda, Tatsuhiko; Matsui, Shigeyuki

    2018-01-01

    Genome-wide association studies (GWAS) suggest that the genetic architecture of complex diseases consists of unexpectedly numerous variants with small effect sizes. However, the polygenic architectures of many diseases have not been well characterized due to lack of simple and fast methods for unbiased estimation of the underlying proportion of disease-associated variants and their effect-size distribution. Applying empirical Bayes estimation of semi-parametric hierarchical mixture models to GWAS summary statistics, we confirmed that schizophrenia was extremely polygenic [~40% of independent genome-wide SNPs are risk variants, most within odds ratio (OR) = 1.03], whereas rheumatoid arthritis was less polygenic (~4 to 8% risk variants, a significant portion reaching OR = 1.05 to 1.1). For rheumatoid arthritis, stratified estimations revealed that expression quantitative trait loci in blood explained a large genetic variance, and low- and high-frequency derived alleles were prone to be risk and protective, respectively, suggesting a predominance of deleterious-risk and advantageous-protective mutations. Despite genetic correlation, effect-size distributions for schizophrenia and bipolar disorder differed across allele frequency. These analyses distinguished disease polygenic architectures and provided clues for etiological differences in complex diseases. PMID:29740473

  11. Mongolians core gut microbiota and its correlation with seasonal dietary changes.

    PubMed

    Zhang, Jiachao; Guo, Zhuang; Lim, Angela An Qi; Zheng, Yi; Koh, Eileen Y; Ho, Danliang; Qiao, Jianmin; Huo, Dongxue; Hou, Qiangchuan; Huang, Weiqiang; Wang, Lifeng; Javzandulam, Chimedsuren; Narangerel, Choijilsuren; Jirimutu; Menghebilige; Lee, Yuan-Kun; Zhang, Heping

    2014-05-16

    Historically, the Mongol Empire ranks among the world's largest contiguous empires, and the Mongolians developed their unique lifestyle and diet over thousands of years. In this study, the intestinal microbiota of Mongolians residing in Ulan Bator, TUW province and the Khentii pasturing area were studied using 454 pyrosequencing and q-PCR technology. We explored the impacts of lifestyle and seasonal dietary changes on the Mongolians' gut microbes. At the phylum level, the Mongolians' gut populations were marked by a dominance of Bacteroidetes (55.56%) and a low Firmicutes to Bacteroidetes ratio (0.71). Analysis based on the operational taxonomic unit (OTU) level revealed that the Mongolian core intestinal microbiota comprised the genera Prevotella, Bacteroides, Faecalibacterium, Ruminococcus, Subdoligranulum and Coprococcus. Urbanisation and lifestyle may have modified the compositions of the gut microbiota of Mongolians from Ulan Bator, TUW and Khentii. Based on a food frequency questionnaire, we found that the dietary structure was diverse and stable throughout the year in Ulan Bator and TUW, but was simple and varied during the year in Khentii. Accordingly, seasonal effects on intestinal microbiota were more distinct in Khentii residents than in TUW or Ulan Bator residents.

  12. This Ad is for You: Targeting and the Effect of Alcohol Advertising on Youth Drinking.

    PubMed

    Molloy, Eamon

    2016-02-01

    Endogenous targeting of alcohol advertisements presents a challenge for empirically identifying a causal effect of advertising on drinking. Drinkers prefer particular media; firms recognize this and target alcohol advertising at those media. This paper overcomes this challenge by utilizing novel data with detailed individual measures of media viewing and alcohol consumption and three separate empirical techniques, which represent significant improvements over previous methods. First, controls for the average audience characteristics of the media an individual views account for attributes of magazines and television programs that alcohol firms may consider when deciding where to target advertising. A second specification directly controls for each television program and magazine a person views. The third method exploits variation in advertising exposure due to a 2003 change in an industry-wide rule that governs where firms may advertise. Although the unconditional correlation between advertising and drinking by youth (ages 18-24) is strong, models that include simple controls for targeting imply, at most, a modest advertising effect. Although the coefficients are estimated less precisely, estimates from models including more rigorous controls for targeting indicate no significant effect of advertising on youth drinking. Copyright © 2015 John Wiley & Sons, Ltd.

  13. Inferring the microscopic surface energy of protein-protein interfaces from mutation data.

    PubMed

    Moal, Iain H; Dapkūnas, Justas; Fernández-Recio, Juan

    2015-04-01

    Mutations at protein-protein recognition sites alter binding strength by altering the chemical nature of the interacting surfaces. We present a simple surface energy model, parameterized with empirical ΔΔG values, yielding mean energies of -48 cal mol⁻¹ Å⁻² for interactions between hydrophobic surfaces, -51 to -80 cal mol⁻¹ Å⁻² for surfaces of complementary charge, and 66-83 cal mol⁻¹ Å⁻² for electrostatically repelling surfaces, relative to the aqueous phase. This places the mean energy of hydrophobic surface burial at -24 cal mol⁻¹ Å⁻². Despite neglecting configurational entropy and intramolecular changes, the model correlates with empirical binding free energies of a functionally diverse set of rigid-body interactions (r = 0.66). When used to rerank docking poses, it can place near-native solutions in the top 10 for 37% of the complexes evaluated, and 82% in the top 100. The method shows that hydrophobic burial is the driving force for protein association, accounting for 50-95% of the cohesive energy. The model is available open-source from http://life.bsc.es/pid/web/surface_energy/ and via the CCharpPPI web server http://life.bsc.es/pid/ccharppi/. © 2015 Wiley Periodicals, Inc.
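
    In the spirit of the abstract, the score can be written as an area-weighted sum over interface surface-pair classes (a schematic statement using the quoted mean coefficients, in cal mol⁻¹ Å⁻²):

      \Delta G_{\mathrm{surf}} \;\approx\; \sum_{k} \gamma_k A_k, \qquad
      \gamma_{\mathrm{hphob}} \approx -48, \quad
      \gamma_{\mathrm{charge\ compl.}} \approx -51 \ \text{to} \ -80, \quad
      \gamma_{\mathrm{charge\ repel.}} \approx +66 \ \text{to} \ +83,

    where A_k is the buried interface area assigned to surface-pair class k.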

  15. A consistent hierarchy of generalized kinetic equation approximations to the master equation applied to surface catalysis.

    PubMed

    Herschlag, Gregory J; Mitran, Sorin; Lin, Guang

    2015-06-21

    We develop a hierarchy of approximations to the master equation for systems that exhibit translational invariance and finite-range spatial correlation. Each approximation within the hierarchy is a set of ordinary differential equations that considers spatial correlations of varying lattice distance; the assumption is that the full system will have finite spatial correlations and thus the behavior of the models within the hierarchy will approach that of the full system. We provide evidence of this convergence in the context of one- and two-dimensional numerical examples. Lower levels within the hierarchy that consider shorter spatial correlations are shown to be up to three orders of magnitude faster than traditional kinetic Monte Carlo methods (KMC) for one-dimensional systems, while predicting similar system dynamics and steady states as KMC methods. We then test the hierarchy on a two-dimensional model for the oxidation of CO on RuO2(110), showing that low-order truncations of the hierarchy efficiently capture the essential system dynamics. By considering sequences of models in the hierarchy that account for longer spatial correlations, successive model predictions may be used to establish empirical approximations of error estimates. The hierarchy may be thought of as a class of generalized phenomenological kinetic models, since each element of the hierarchy approximates the master equation and the lowest level in the hierarchy is identical to a simple existing phenomenological kinetic model.
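
    As a concrete anchor for the bottom of such a hierarchy, the sketch below is a plain mean-field (correlation-free) kinetic model for a generic adsorption-desorption-reaction process; it is purely illustrative, not the paper's RuO2(110) model, and higher levels of the hierarchy would add ODEs for pair, triplet, ... occupation probabilities.

      import numpy as np
      from scipy.integrate import solve_ivp

      k_ads, k_des, k_rxn = 1.0, 0.2, 0.5   # illustrative rate constants

      def rhs(t, y):
          # Mean-field truncation: coverages evolve ignoring spatial correlations.
          a, b = y
          empty = 1.0 - a - b
          da = k_ads * empty - k_des * a - k_rxn * a * b
          db = k_ads * empty - k_des * b - k_rxn * a * b
          return [da, db]

      sol = solve_ivp(rhs, (0.0, 50.0), [0.0, 0.0], rtol=1e-8)
      print("steady-state coverages:", np.round(sol.y[:, -1], 3))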

  16. Unidimensional factor models imply weaker partial correlations than zero-order correlations.

    PubMed

    van Bork, Riet; Grasman, Raoul P P P; Waldorp, Lourens J

    2018-06-01

    In this paper we present a new implication of the unidimensional factor model. We prove that the partial correlation between two observed variables that load on one factor given any subset of other observed variables that load on this factor lies between zero and the zero-order correlation between these two observed variables. We implement this result in an empirical bootstrap test that rejects the unidimensional factor model when partial correlations are identified that are either stronger than the zero-order correlation or have a different sign than the zero-order correlation. We demonstrate the use of the test in an empirical data example with data consisting of fourteen items that measure extraversion.
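
    The tested implication can be stated directly: for standardized indicators of a unidimensional factor model with loadings λ and uncorrelated unique parts, and any conditioning subset C of the other indicators,

      0 \;\le\; \frac{\rho_{ij\,\cdot\,C}}{\rho_{ij}} \;\le\; 1,
      \qquad \rho_{ij} = \lambda_i \lambda_j ,

    so a sample partial correlation that exceeds the zero-order correlation in magnitude, or reverses its sign, counts as evidence against unidimensionality.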

  17. Methodology for the study of the boiling crisis in a nuclear fuel bundle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crecy, F. de; Juhel, D.

    1995-09-01

    The boiling crisis is one of the phenomena limiting the available power from a nuclear power plant. It has been widely studied for decades, and numerous data, models, correlations or tables are now available in the literature. If we now try to obtain a general view of previous work in this field, we may note that there are several ways of tackling the subject. The mechanistic models try to model the two-phase flow topology and the interaction between different sublayers, and must be validated by comparison with basic experiments, such as DEBORA, where we try to obtain some detailed information on the two-phase flow pattern in a pure and simple geometry. This allows us to obtain better knowledge of the so-called "intrinsic effect". These models are not yet acceptable for nuclear use. As the geometry of the rod bundles and grids has a tremendous importance for the Critical Heat Flux (CHF), it is mandatory to have more precise results for a given fuel rod bundle in a restricted range of parameters: this leads to the empirical approach, using empirical CHF predictors (tables, correlations, splines, etc.). One of the key points of such a method is obtaining local thermohydraulic values, that is to say the evaluation of the so-called "mixing effect". This is done by a subchannel analysis code or equivalent, which can be qualified on two kinds of experiments: overall flow measurements in a subchannel, such as HYDROMEL in single-phase flow or GRAZIELLA in two-phase flow, or detailed measurements inside a subchannel, such as AGATE. Nevertheless, the final qualification of a specific nuclear fuel, i.e. the synthesis of these mechanistic and empirical approaches, intrinsic and mixing effects, etc., must be achieved on a global test such as OMEGA. This is the strategy used in France by CEA and its partners FRAMATOME and EdF.

  18. Comparison of all atom, continuum, and linear fitting empirical models for charge screening effect of aqueous medium surrounding a protein molecule

    NASA Astrophysics Data System (ADS)

    Takahashi, Takuya; Sugiura, Junnnosuke; Nagayama, Kuniaki

    2002-05-01

    To investigate the role hydration plays in the electrostatic interactions of proteins, the time-averaged electrostatic potential of the B1 domain of protein G in an aqueous solution was calculated with full atomic molecular dynamics simulations that explicitly consider every atom (i.e., an all atom model). This all atom calculated potential was compared with the potential obtained from an electrostatic continuum model calculation. In both cases, the charge-screening effect was fairly well formulated with an effective relative dielectric constant which increased linearly with increasing charge-charge distance. This simulated linear dependence agrees with the experimentally determined linear relation proposed by Pickersgill. Cut-off approximations for Coulomb interactions failed to reproduce this linear relation. The correlation between the all atom model and the continuum model was found to be better than the correlation of the linear fitting model with either. This confirms that the continuum model is better at treating the complicated shapes of protein conformations than the simple linear fitting empirical model. We also tried a sigmoid fitting empirical model in addition to the linear one. When the weights of all data were treated equally, the sigmoid model, which requires two fitting parameters, fit the results of both the all atom and the continuum models less accurately than the linear model, which requires only one fitting parameter. When potential values are chosen as weighting factors, the fitting error of the sigmoid model became smaller, and the slope of both linear fitting curves became smaller. This suggests that the screening effect of an aqueous medium at short range, where potential values are relatively large, is smaller than that expected from the linear fitting curve, whose slope is almost 4. To investigate the linear increase of the effective relative dielectric constant, the Poisson equation for a low-dielectric sphere in a high-dielectric medium was solved, and charges distributed near the molecular surface were shown to lead to the apparent linearity.
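
    The distance-dependent screening described above amounts to replacing the vacuum Coulomb interaction with an effective dielectric that grows linearly with separation (a schematic statement; the slope of roughly 4 per Å is the linear-fit value mentioned in the abstract):

      U_{ij}(r) \;=\; \frac{q_i q_j}{4\pi\varepsilon_0\,\varepsilon_{\mathrm{eff}}(r)\,r},
      \qquad \varepsilon_{\mathrm{eff}}(r) \;\approx\; a\,r, \quad a \approx 4\ \text{\AA}^{-1}.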

  19. Aeroacoustic Prediction Codes

    NASA Technical Reports Server (NTRS)

    Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)

    2000-01-01

    This report describes work performed on Contract NAS3-27720AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise, and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semi-empirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor-noise correlation model was developed from engine acoustic test results. This work provided several insights on potential approaches to reducing aircraft engine noise. Code development is described in this report, and those insights are discussed.

  20. Complex dynamics and empirical evidence (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Delli Gatti, Domenico; Gaffeo, Edoardo; Giulioni, Gianfranco; Gallegati, Mauro; Kirman, Alan; Palestrini, Antonio; Russo, Alberto

    2005-05-01

    Standard macroeconomics, based on a reductionist approach centered on the representative agent, is badly equipped to explain the empirical evidence where heterogeneity and industrial dynamics are the rule. In this paper we show that a simple agent-based model of heterogeneous financially fragile agents is able to replicate a large number of scaling type stylized facts with a remarkable degree of statistical precision.

  1. Time Preferences, Mental Health and Treatment Utilization.

    PubMed

    Eisenberg, Daniel; Druss, Benjamin G

    2015-09-01

    In all countries of the world, fewer than half of people with mental disorders receive treatment. This treatment gap is commonly attributed to factors such as consumers' limited knowledge, negative attitudes, and financial constraints. In the context of other health behaviors, such as diet and exercise, behavioral economists have emphasized time preferences and procrastination as additional barriers. These factors might also be relevant to mental health. We examine conceptually and empirically how lack of help-seeking for mental health conditions might be related to time preferences and procrastination. Our conceptual discussion explores how the interrelationships between time preferences and mental health treatment utilization could fit into basic microeconomic theory. The empirical analysis uses survey data of student populations from 12 colleges and universities in 2011 (the Healthy Minds Study, N=8,806). Using standard brief measures of discounting, procrastination, and mental health (depression and anxiety symptoms), we examine the conditional correlations between indicators of present-orientation (discount rate and procrastination) and mental health symptoms. The conceptual discussion reveals a number of potential relationships that would be useful to examine empirically. In the empirical analysis depression is significantly associated with procrastination and discounting. Treatment utilization is significantly associated with procrastination but not discounting. The empirical results are generally consistent with the idea that depression increases present orientation (reduces future orientation), as measured by discounting and procrastination. These analyses have notable limitations that will require further examination in future research: the measures are simple and brief, and the estimates may be biased from true causal effects because of omitted variables and reverse causality. There are several possibilities for future research, including: (i) observational, longitudinal studies with detailed data on mental health, time preferences, and help-seeking; (ii) experimental studies that examine immediate or short-term responses and connections between these variables; (iii) randomized trials of mental health therapies that include outcome measures of time preferences and procrastination; and, (iv) intervention studies that test strategies to influence help-seeking by addressing time preferences and present orientation.

  2. Routine OGTT: a robust model including incretin effect for precise identification of insulin sensitivity and secretion in a single individual.

    PubMed

    De Gaetano, Andrea; Panunzi, Simona; Matone, Alice; Samson, Adeline; Vrbikova, Jana; Bendlova, Bela; Pacini, Giovanni

    2013-01-01

    In order to provide a method for precise identification of insulin sensitivity from clinical Oral Glucose Tolerance Test (OGTT) observations, a relatively simple mathematical model (Simple Interdependent glucose/insulin MOdel, SIMO) for the OGTT, which coherently incorporates commonly accepted physiological assumptions (incretin effect and saturating glucose-driven insulin secretion), has been developed. OGTT data from 78 patients in five different glucose tolerance groups were analyzed: normal glucose tolerance (NGT), impaired glucose tolerance (IGT), impaired fasting glucose (IFG), IFG+IGT, and Type 2 Diabetes Mellitus (T2DM). A comparison with the 2011 Salinari (COntinuous GI tract MOdel, COMO) and the 2002 Dalla Man (Dalla Man MOdel, DMMO) models was made, with particular attention to the insulin sensitivity indices ISCOMO, ISDMMO and kxgi (the insulin sensitivity index for SIMO). ANOVA on kxgi values across groups was significant overall (P<0.001), and post-hoc comparisons highlighted the presence of three different groups: NGT (8.62×10⁻⁵ ± 9.36×10⁻⁵ min⁻¹pM⁻¹), IFG (5.30×10⁻⁵ ± 5.18×10⁻⁵) and combined IGT, IFG+IGT and T2DM (2.09×10⁻⁵ ± 1.95×10⁻⁵, 2.38×10⁻⁵ ± 2.28×10⁻⁵ and 2.38×10⁻⁵ ± 2.09×10⁻⁵, respectively). No significance was obtained when comparing ISCOMO or ISDMMO across groups. Moreover, kxgi presented the lowest sample average coefficient of variation over the five groups (25.43%), with average CVs for ISCOMO and ISDMMO of 70.32% and 57.75% respectively; kxgi also presented the strongest correlations with all considered empirical measures of insulin sensitivity. While COMO and DMMO appear over-parameterized for fitting single-subject clinical OGTT data, SIMO provides a robust, precise, physiologically plausible estimate of insulin sensitivity, with which habitual empirical insulin sensitivity indices correlate well. The kxgi index, reflecting insulin secretion dependency on glycemia, also significantly differentiates clinically diverse subject groups. The SIMO model may therefore be of value for the quantification of glucose homeostasis from clinical OGTT data.

  3. Deducing Electronic Unit Internal Response During a Vibration Test Using a Lumped Parameter Modeling Approach

    NASA Technical Reports Server (NTRS)

    Van Dyke, Michael B.

    2014-01-01

    During random vibration testing of electronic boxes there is often a desire to know the dynamic response of certain internal printed wiring boards (PWBs) for the purpose of monitoring the response of sensitive hardware or for post-test forensic analysis in support of anomaly investigation. Due to restrictions on internally mounted accelerometers for most flight hardware there is usually no means to empirically observe the internal dynamics of the unit, so one must resort to crude and highly uncertain approximations. One common practice is to apply Miles Equation, which does not account for the coupled response of the board in the chassis, resulting in significant over- or under-prediction. This paper explores the application of simple multiple-degree-of-freedom lumped parameter modeling to predict the coupled random vibration response of the PWBs in their fundamental modes of vibration. A simple tool using this approach could be used during or following a random vibration test to interpret vibration test data from a single external chassis measurement to deduce internal board dynamics by means of a rapid correlation analysis. Such a tool might also be useful in early design stages as a supplemental analysis to a more detailed finite element analysis to quickly prototype and analyze the dynamics of various design iterations. After developing the theoretical basis, a lumped parameter modeling approach is applied to an electronic unit for which both external and internal test vibration response measurements are available for direct comparison. Reasonable correlation of the results demonstrates the potential viability of such an approach. Further development of the preliminary approach presented in this paper will involve correlation with detailed finite element models and additional relevant test data.
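
    A minimal sketch of the approach: a two-degree-of-freedom chassis-board lumped model whose board-to-base transmissibility peaks at the coupled modes; all masses, stiffnesses, and damping values are hypothetical placeholders.

      import numpy as np

      m1, m2 = 5.0, 0.2          # chassis and board masses, kg (hypothetical)
      f1, f2 = 80.0, 250.0       # uncoupled natural frequencies, Hz
      k1 = m1 * (2 * np.pi * f1) ** 2
      k2 = m2 * (2 * np.pi * f2) ** 2
      c1 = 0.05 * 2 * np.sqrt(k1 * m1)   # ~5% viscous damping
      c2 = 0.05 * 2 * np.sqrt(k2 * m2)

      freqs = np.linspace(10, 500, 2000)
      tr = []
      for f in freqs:
          w = 2 * np.pi * f
          # Dynamic stiffness for absolute responses X1 (chassis), X2 (board)
          # under unit base motion entering through the mount (k1, c1).
          K = np.array([[k1 + k2 - m1 * w**2 + 1j * w * (c1 + c2), -(k2 + 1j * w * c2)],
                        [-(k2 + 1j * w * c2), k2 - m2 * w**2 + 1j * w * c2]])
          F = np.array([k1 + 1j * w * c1, 0.0])
          tr.append(abs(np.linalg.solve(K, F)[1]))  # board transmissibility

      i = int(np.argmax(tr))
      print("board response peaks near %.0f Hz, |T| = %.1f" % (freqs[i], tr[i]))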

  4. The Philosophy, Theoretical Bases, and Implementation of the AHAAH Model for Evaluation of Hazard from Exposure to Intense Sounds

    DTIC Science & Technology

    2018-04-01

    empirical, external energy-damage correlation methods for evaluating hearing damage risk associated with impulsive noise exposure. AHAAH applies the...is validated against the measured results of human exposures to impulsive sounds, and unlike wholly empirical correlation approaches, AHAAH’s...a measured level (LAEQ8 of 85 dB). The approach in MIL-STD-1474E is very different. Previous standards tried to find a correlation between some

  5. Empirical likelihood-based tests for stochastic ordering

    PubMed Central

    BARMI, HAMMOU EL; MCKEAGUE, IAN W.

    2013-01-01

    This paper develops an empirical likelihood approach to testing for the presence of stochastic ordering among univariate distributions based on independent random samples from each distribution. The proposed test statistic is formed by integrating a localized empirical likelihood statistic with respect to the empirical distribution of the pooled sample. The asymptotic null distribution of this test statistic is found to have a simple distribution-free representation in terms of standard Brownian bridge processes. The approach is used to compare the lengths of rule of Roman Emperors over various historical periods, including the “decline and fall” phase of the empire. In a simulation study, the power of the proposed test is found to improve substantially upon that of a competing test due to El Barmi and Mukerjee. PMID:23874142

  6. Cultural Validity of the Minnesota Multiphasic Personality Inventory-2 Empirical Correlates: Is This the Best We Can Do?

    ERIC Educational Resources Information Center

    Hill, Jill S.; Robbins, Rockey R.; Pace, Terry M.

    2012-01-01

    This article critically reviews empirical correlates of the Minnesota Multiphasic Personality Inventory-2 (MMPI-2; Butcher, Dahlstrom, Graham, Tellegen, & Kaemmer, 1989), based on several validation studies conducted with different racial, ethnic, and cultural groups. A major critique of the reviewed MMPI-2 studies was focused on the use of…

  7. A simple approximation for the current-voltage characteristics of high-power, relativistic diodes

    DOE PAGES

    Ekdahl, Carl

    2016-06-10

    A simple approximation for the current-voltage characteristics of a relativistic electron diode is presented. The approximation is accurate from non-relativistic through relativistic electron energies. Although it is empirically developed, it has many of the fundamental properties of the exact diode solutions. Lastly, the approximation is simple enough to be remembered and worked out on almost any pocket calculator, so it has proven to be quite useful on the laboratory floor.
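
    For orientation only (the paper's formula is not reproduced here), any such approximation must reduce in the non-relativistic limit to the Child-Langmuir space-charge-limited current density for a planar gap of width d at voltage V:

      J \;=\; \frac{4\varepsilon_0}{9}\,\sqrt{\frac{2e}{m_e}}\;\frac{V^{3/2}}{d^{2}} .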

  8. Influence of stochastic geometric imperfections on the load-carrying behaviour of thin-walled structures using constrained random fields

    NASA Astrophysics Data System (ADS)

    Lauterbach, S.; Fina, M.; Wagner, W.

    2018-04-01

    Since structural engineering requires highly developed and optimized structures, the thickness dependency is one of the most controversially debated topics. This paper deals with stability analysis of lightweight thin structures combined with arbitrary geometrical imperfections. Generally known design guidelines only consider imperfections for simple shapes and loading, whereas for complex structures the lower-bound design philosophy still holds. Herein, uncertainties are considered with an empirical knockdown factor representing a lower bound of existing measurements. To fully understand and predict expected bearable loads, numerical investigations are essential, including geometrical imperfections. These are implemented into a stand-alone program code with a stochastic approach to compute random fields as geometric imperfections that are applied to nodes of the finite element mesh of selected structural examples. The stochastic approach uses the Karhunen-Loève expansion for the random field discretization. For this approach, the so-called correlation length l_c controls the random field in a powerful way. This parameter has a major influence on the buckling shape, and also on the stability load. First, the impact of the correlation length is studied for simple structures. Second, since most structures for engineering devices are more complex and combined structures, these are intensively discussed with the focus on constrained random fields for e.g. flange-web-intersections. Specific constraints for those random fields are pointed out with regard to the finite element model. Further, geometrical imperfections vanish where the structure is supported.
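
    A minimal sketch of a discretized Karhunen-Loève imperfection field on a 1-D line of node coordinates, using an exponential covariance with correlation length l_c (illustrative only, not the paper's implementation):

      import numpy as np

      rng = np.random.default_rng(0)
      x = np.linspace(0.0, 1.0, 200)   # node coordinates along the structure
      l_c, sigma = 0.2, 1.0            # correlation length, standard deviation

      # Exponential covariance C(x, x') = sigma^2 * exp(-|x - x'| / l_c).
      C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / l_c)

      # Discrete Karhunen-Loeve expansion: keep the M largest eigenpairs.
      vals, vecs = np.linalg.eigh(C)
      order = np.argsort(vals)[::-1][:20]
      vals, vecs = vals[order], vecs[:, order]

      # One realization of the imperfection field to apply at the nodes.
      field = vecs @ (np.sqrt(vals) * rng.standard_normal(len(vals)))
      print("sample std of this realization: %.2f" % field.std())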

  9. A Simple Computer-Aided Three-Dimensional Molecular Modeling for the Octant Rule

    ERIC Educational Resources Information Center

    Kang, Yinan; Kang, Fu-An

    2011-01-01

    The Moffitt-Woodward-Moscowitz-Klyne-Djerassi octant rule is one of the most successful empirical rules in organic chemistry. However, the lack of a simple effective modeling method for the octant rule in the past 50 years has posed constant difficulties for researchers, teachers, and students, particularly the young generations, to learn and…

  10. A Simple and Effective Program to Increase Faculty Knowledge of and Referrals to Counseling Centers

    ERIC Educational Resources Information Center

    Nolan, Susan A.; Pace, Kristi A.; Iannelli, Richard J.; Palma, Thomas V.; Pakalns, Gail P.

    2006-01-01

    The authors describe a simple, cost-effective, and empirically supported program to increase faculty referrals of students to counseling centers (CCs). Incoming faculty members at 3 universities received a mailing and personal telephone call from a CC staff member. Faculty assigned to the outreach program had greater knowledge of and rates of…

  11. Essays on pricing electricity and electricity derivatives in deregulated markets

    NASA Astrophysics Data System (ADS)

    Popova, Julia

    2008-10-01

    This dissertation is composed of four essays on the behavior of wholesale electricity prices and their derivatives. The first essay provides an empirical model that takes into account the spatial features of a transmission network on the electricity market. The spatial structure of the transmission grid plays a key role in determining electricity prices, but it has not been incorporated into previous empirical models. The econometric model in this essay incorporates a simple representation of the transmission system into a spatial panel data model of electricity prices, and also accounts for the effect of dynamic transmission system constraints on electricity market integration. Empirical results using PJM data confirm the existence of spatial patterns in electricity prices and show that spatial correlation diminishes as transmission lines become more congested. The second essay develops and empirically tests a model of the influence of natural gas storage inventories on the electricity forward premium. I link a model of the effect of gas storage constraints on the higher moments of the distribution of electricity prices to a model of the effect of those moments on the forward premium. Empirical results using PJM data support the model's predictions that gas storage inventories sharply reduce the electricity forward premium when demand for electricity is high and space-heating demand for gas is low. The third essay examines the efficiency of PJM electricity markets. A market is efficient if prices reflect all relevant information, so that prices follow a random walk. The hypothesis of random walk is examined using empirical tests, including the Portmanteau, Augmented Dickey-Fuller, KPSS, and multiple variance ratio tests. The results are mixed though evidence of some level of market efficiency is found. The last essay investigates the possibility that previous researchers have drawn spurious conclusions based on classical unit root tests incorrectly applied to wholesale electricity prices. It is well known that electricity prices exhibit both cyclicity and high volatility which varies through time. Results indicate that heterogeneity in unconditional variance---which is not detected by classical unit root tests---may contribute to the appearance of non-stationarity.

  12. Simple growth patterns can create complex trajectories for the ontogeny of constitutive chemical defences in seaweeds.

    PubMed

    Paul, Nicholas A; Svensson, Carl Johan; de Nys, Rocky; Steinberg, Peter D

    2014-01-01

    All of the theory and most of the data on the ecology and evolution of chemical defences derive from terrestrial plants, which have considerable capacity for internal movement of resources. In contrast, most macroalgae (seaweeds) have no or very limited capacity for resource translocation, meaning that trade-offs between growth and defence, for example, should be localised rather than systemic. This may change the predictions of chemical defence theories for seaweeds. We developed a model that mimicked the simple growth pattern of the red seaweed Asparagopsis armata, which is composed of repeating clusters of somatic cells and cells which contain deterrent secondary chemicals (gland cells). To do this we created a distinct growth curve for the somatic cells and another for the gland cells using empirical data. The somatic growth function was linked to the growth function for defence via differential-equation modelling, which effectively generated a trade-off between growth and defence as these neighbouring cells develop. By treating growth and defence as separate functions we were also able to model a trade-off in growth of 2-3% under most circumstances. However, we found contrasting evidence for this trade-off in the empirical relationships between growth and defence, depending on the light level under which the alga was cultured. After developing a model that incorporated both branching and cell division rates, we formally demonstrated that positive correlations between growth and defence are predicted in many circumstances and also that allocation costs, if they exist, will be constrained by the intrinsic growth patterns of the seaweed. Growth patterns could therefore explain the contrasting evidence for costs of constitutive chemical defence in many studies, highlighting the need to consider the fundamental biology and ontogeny of organisms when assessing allocation theories for defence.

  13. Transport Properties of Complex Oxides: New Ideas and Insights from Theory and Simulation

    NASA Astrophysics Data System (ADS)

    Benedek, Nicole

    Complex oxides are one of the largest and most technologically important materials families. The ABO3 perovskite oxides in particular display an unparalleled variety of physical properties. The microscopic origin of these properties (how they arise from the structure of the material) is often complicated, but in many systems previous research has identified simple guidelines or `rules of thumb' that link structure and chemistry to the physics of interest. For example, the tolerance factor is a simple empirical measure that relates the composition of a perovskite to its tendency to adopt a distorted structure. First-principles calculations have shown that the tendency towards ferroelectricity increases systematically as the tolerance factor of the perovskite decreases. Can we uncover a similar set of simple guidelines to yield new insights into the ionic and thermal transport properties of perovskites? I will discuss recent research from my group on the link between crystal structure and chemistry, soft phonons and ionic transport in a family of layered perovskite oxides, the Ln2NiO4+δ Ruddlesden-Popper phases. In particular, we show how the lattice dynamical properties of these materials (their tendency to undergo certain structural distortions) can be correlated with oxide ion transport properties. Ultimately, we seek new ways to understand the microscopic origins of complex transport processes and to develop first-principles-based design rules for new materials based on our understanding.

  14. Climate Prediction for Brazil's Nordeste: Performance of Empirical and Numerical Modeling Methods.

    NASA Astrophysics Data System (ADS)

    Moura, Antonio Divino; Hastenrath, Stefan

    2004-07-01

    Comparisons of performance of climate forecast methods require consistency in the predictand and a long common reference period. For Brazil's Nordeste, empirical methods developed at the University of Wisconsin use preseason (October–January) rainfall and January indices of the fields of meridional wind component and sea surface temperature (SST) in the tropical Atlantic and the equatorial Pacific as input to stepwise multiple regression and neural networking. These are used to predict the March–June rainfall at a network of 27 stations. An experiment at the International Research Institute for Climate Prediction, Columbia University, with a numerical model (ECHAM4.5) used global SST information through February to predict the March–June rainfall at three grid points in the Nordeste. The predictands for the empirical and numerical model forecasts are correlated at +0.96, and the period common to the independent portion of record of the empirical prediction and the numerical modeling is 1968–99. Over this period, predicted versus observed rainfall are evaluated in terms of correlation, root-mean-square error, absolute error, and bias. Performance is high for both approaches. Numerical modeling produces a correlation of +0.68, moderate errors, and strong negative bias. For the empirical methods, errors and bias are small, and correlations of +0.73 and +0.82 are reached between predicted and observed rainfall.


  15. SOURCE PULSE ENHANCEMENT BY DECONVOLUTION OF AN EMPIRICAL GREEN'S FUNCTION.

    USGS Publications Warehouse

    Mueller, Charles S.

    1985-01-01

    Observations of the earthquake source-time function are enhanced if path, recording-site, and instrument complexities can be removed from seismograms. Assuming that a small earthquake has a simple source, its seismogram can be treated as an empirical Green's function and deconvolved from the seismogram of a larger and/or more complex earthquake by spectral division. When the deconvolution is well posed, the quotient spectrum represents the apparent source-time function of the larger event. This study shows that with high-quality locally recorded earthquake data it is feasible to Fourier transform the quotient and obtain a useful result in the time domain. In practice, the deconvolution can be stabilized by one of several simple techniques. Application of the method is given.
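
    The spectral division itself is a few lines of numerical code. Below is a minimal sketch with a water-level floor on the denominator spectrum, one simple stabilization technique of the kind the abstract mentions; the floor level is an illustrative choice.

      # Empirical Green's function deconvolution by stabilized spectral division.
      import numpy as np

      def egf_deconvolve(big, small, water_level=0.01):
          """Apparent source-time function of the larger event.

          big, small  : seismograms of the large and small (EGF) events
          water_level : floor on |EGF spectrum|, as a fraction of its peak
          """
          n = len(big)
          B = np.fft.rfft(big, n)
          S = np.fft.rfft(small, n)
          floor = water_level * np.abs(S).max()
          # Raise small spectral amplitudes to the floor, keeping the phase.
          S_stab = np.where(np.abs(S) < floor, floor * np.exp(1j * np.angle(S)), S)
          return np.fft.irfft(B / S_stab, n)   # back to the time domain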

  16. Testing a single regression coefficient in high dimensional linear models

    PubMed Central

    Lan, Wei; Zhong, Ping-Shou; Li, Runze; Wang, Hansheng; Tsai, Chih-Ling

    2017-01-01

    In linear regression models with high dimensional data, the classical z-test (or t-test) for testing the significance of each single regression coefficient is no longer applicable. This is mainly because the number of covariates exceeds the sample size. In this paper, we propose a simple and novel alternative by introducing the Correlated Predictors Screening (CPS) method to control for predictors that are highly correlated with the target covariate. Accordingly, the classical ordinary least squares approach can be employed to estimate the regression coefficient associated with the target covariate. In addition, we demonstrate that the resulting estimator is consistent and asymptotically normal even if the random errors are heteroscedastic. This enables us to apply the z-test to assess the significance of each covariate. Based on the p-value obtained from testing the significance of each covariate, we further conduct multiple hypothesis testing by controlling the false discovery rate at the nominal level. Then, we show that the multiple hypothesis testing achieves consistent model selection. Simulation studies and empirical examples are presented to illustrate the finite sample performance and the usefulness of the proposed method, respectively. PMID:28663668
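
    A schematic rendering of the CPS idea is sketched below, assuming it reduces to: regress on the target covariate plus its m most correlated peers, use heteroscedasticity-robust standard errors for the z-test, and control the FDR with Benjamini-Hochberg across covariates. The screening size m and the simulated data are placeholders, not the authors' choices.

      import numpy as np
      import statsmodels.api as sm
      from statsmodels.stats.multitest import multipletests

      def cps_pvalue(X, y, j, m=10):
          """p-value for covariate j after screening its m most correlated peers."""
          others = [k for k in range(X.shape[1]) if k != j]
          corr = np.abs([np.corrcoef(X[:, j], X[:, k])[0, 1] for k in others])
          keep = [others[i] for i in np.argsort(corr)[::-1][:m]]
          design = sm.add_constant(X[:, [j] + keep])
          fit = sm.OLS(y, design).fit(cov_type="HC0")  # robust to heteroscedasticity
          return fit.pvalues[1]                        # column 1 is the target

      rng = np.random.default_rng(1)
      X = rng.normal(size=(100, 300))                  # p = 300 > n = 100
      y = X[:, 0] + rng.normal(size=100)               # only covariate 0 matters
      pvals = [cps_pvalue(X, y, j) for j in range(X.shape[1])]
      selected = multipletests(pvals, alpha=0.05, method="fdr_bh")[0]
      print("covariates kept:", np.flatnonzero(selected))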

  17. Testing a single regression coefficient in high dimensional linear models.

    PubMed

    Lan, Wei; Zhong, Ping-Shou; Li, Runze; Wang, Hansheng; Tsai, Chih-Ling

    2016-11-01

    In linear regression models with high dimensional data, the classical z -test (or t -test) for testing the significance of each single regression coefficient is no longer applicable. This is mainly because the number of covariates exceeds the sample size. In this paper, we propose a simple and novel alternative by introducing the Correlated Predictors Screening (CPS) method to control for predictors that are highly correlated with the target covariate. Accordingly, the classical ordinary least squares approach can be employed to estimate the regression coefficient associated with the target covariate. In addition, we demonstrate that the resulting estimator is consistent and asymptotically normal even if the random errors are heteroscedastic. This enables us to apply the z -test to assess the significance of each covariate. Based on the p -value obtained from testing the significance of each covariate, we further conduct multiple hypothesis testing by controlling the false discovery rate at the nominal level. Then, we show that the multiple hypothesis testing achieves consistent model selection. Simulation studies and empirical examples are presented to illustrate the finite sample performance and the usefulness of the proposed method, respectively.

  18. Colloid Transport in Saturated Porous Media: Elimination of Attachment Efficiency in a New Colloid Transport Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Landkamer, Lee L.; Harvey, Ronald W.; Scheibe, Timothy D.

    A new colloid transport model is introduced that is conceptually simple but captures the essential features of complicated attachment and detachment behavior of colloids when conditions of secondary minimum attachment exist. This model eliminates the empirical concept of collision efficiency; the attachment rate is computed directly from colloid filtration theory. Also, a new paradigm for colloid detachment based on colloid population heterogeneity is introduced. Assuming the dispersion coefficient can be estimated from tracer behavior, this model has only two fitting parameters: (1) the fraction of colloids that attach irreversibly and (2) the rate at which reversibly attached colloids leave the surface. These two parameters were correlated to physical parameters that control colloid transport such as the depth of the secondary minimum and pore water velocity. Given this correlation, the model serves as a heuristic tool for exploring the influence of physical parameters such as surface potential and fluid velocity on colloid transport. This model can be extended to heterogeneous systems characterized by both primary and secondary minimum deposition by simply increasing the fraction of colloids that attach irreversibly.

  19. A simple model for the dependence on local detonation speed of the product entropy

    NASA Astrophysics Data System (ADS)

    Hetherington, David C.; Whitworth, Nicholas J.

    2012-03-01

    The generation of a burn time field as a pre-processing step ahead of a hydrocode calculation has mostly been upgraded in the explosives modelling community from the historical model of single-speed programmed burn to DSD/WBL (Detonation Shock Dynamics / Whitham-Bdzil-Lambourn). The problem with this advance is that the previously conventional approach to the hydrodynamic stage of the model results in the entropy of the detonation products (s) having the wrong correlation with detonation speed (D). Instead of being higher where D is lower, the conventional method leads to s being lower where D is lower, resulting in a completely fictitious enhancement of available energy where the burn is degraded! A technique is described which removes this deficiency of the historical model when used with a DSD-generated burn time field. By treating the conventional JWL equation as a semi-empirical expression for the local expansion isentrope, and constraining the local parameter set for consistency with D, it is possible to obtain the two desirable outcomes that the model of the detonation wave is internally consistent, and s is realistically correlated with D.

  20. A Simple Model for the Dependence on Local Detonation Speed (D) of the Product Entropy (S)

    NASA Astrophysics Data System (ADS)

    Hetherington, David

    2011-06-01

    The generation of a burn time field as a pre-processing step ahead of a hydrocode calculation has been mostly upgraded in the explosives modelling community from the historical model of single-speed programmed burn to DSD. However, with this advance has come the problem that the previously conventional approach to the hydrodynamic stage of the model results in S having the wrong correlation with D. Instead of being higher where the detonation speed is lower, i.e. where reaction occurs at lower compression, the conventional method leads to S being lower where D is lower, resulting in a completely fictitious enhancement of available energy where the burn is degraded! A technique is described which removes this deficiency of the historical model when used with a DSD-generated burn time field. By treating the conventional JWL equation as a semi-empirical expression for the local expansion isentrope, and constraining the local parameter set for consistency with D, it is possible to obtain the two desirable outcomes that the model of the detonation wave is internally consistent, and S is realistically correlated with D.

  1. An Empirical Correction Method for Improving off-Axes Response Prediction in Component Type Flight Mechanics Helicopter Models

    NASA Technical Reports Server (NTRS)

    Mansur, M. Hossein; Tischler, Mark B.

    1997-01-01

    Historically, component-type flight mechanics simulation models of helicopters have been unable to satisfactorily predict the off-axes responses, i.e., the roll response to pitch stick input and the pitch response to roll stick input. In the study presented here, simple first-order low-pass filtering of the elemental lift and drag forces was considered as a means of improving the correlation. The method was applied to a blade-element model of the AH-64 Apache, and responses of the modified model were compared with flight data in hover and forward flight. Results indicate that significant improvement in the off-axes responses can be achieved in hover. In forward flight, however, the best correlation in the longitudinal and lateral off-axes responses required different values of the filter time constant for each axis. A compromise value was selected and was shown to result in good overall improvement in the off-axes responses. The paper describes both the method and the model used for its implementation, and presents results obtained at hover and in forward flight.

  2. GNSS Signal Authentication Via Power and Distortion Monitoring

    NASA Astrophysics Data System (ADS)

    Wesson, Kyle D.; Gross, Jason N.; Humphreys, Todd E.; Evans, Brian L.

    2018-04-01

    We propose a simple low-cost technique that enables civil Global Positioning System (GPS) receivers and other civil global navigation satellite system (GNSS) receivers to reliably detect carry-off spoofing and jamming. The technique, which we call the Power-Distortion detector, classifies received signals as interference-free, multipath-afflicted, spoofed, or jammed according to observations of received power and correlation function distortion. It does not depend on external hardware or a network connection and can be readily implemented on many receivers via a firmware update. Crucially, the detector can with high probability distinguish low-power spoofing from ordinary multipath. In testing against over 25 high-quality empirical data sets yielding over 900,000 separate detection tests, the detector correctly alarms on all malicious spoofing or jamming attacks while maintaining a <0.6% single-channel false alarm rate.

  3. Development of a scaled-down aerobic fermentation model for scale-up in recombinant protein vaccine manufacturing.

    PubMed

    Farrell, Patrick; Sun, Jacob; Gao, Meg; Sun, Hong; Pattara, Ben; Zeiser, Arno; D'Amore, Tony

    2012-08-17

    A simple approach to the development of an aerobic scaled-down fermentation model is presented to obtain more consistent process performance during the scale-up of recombinant protein manufacture. Using a constant volumetric oxygen mass transfer coefficient (kLa) as the criterion for the scale-down process, the scaled-down model can be "tuned" to match the kLa of any larger-scale target by varying the impeller rotational speed. This approach is demonstrated for a protein vaccine candidate expressed in recombinant Escherichia coli, where process performance is shown to be consistent among 2-L, 20-L, and 200-L scales. An empirical correlation for kLa has also been employed to extrapolate to larger manufacturing scales.
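
    The "tuning" step can be posed as a one-dimensional root-finding problem, as sketched below. The Van't Riet-type power law and all of its coefficients (a, alpha, beta, Np), as well as the vessel numbers, are illustrative placeholders rather than the paper's fitted correlation.

      import numpy as np
      from scipy.optimize import brentq

      rho = 1000.0                       # broth density, kg/m^3
      Np, D, V = 5.0, 0.07, 0.002        # power number, impeller dia. (m), volume (m^3)
      u_s = 0.005                        # superficial gas velocity, m/s
      a, alpha, beta = 0.02, 0.7, 0.2    # placeholder correlation constants

      def kla(N):
          """kLa (1/s) from an assumed power-per-volume correlation."""
          P = Np * rho * N**3 * D**5             # ungassed impeller power, W
          return a * (P / V)**alpha * u_s**beta

      kla_target = 0.05   # value measured (or correlated) at the larger scale
      N_match = brentq(lambda N: kla(N) - kla_target, 1.0, 50.0)
      print(f"set small-scale impeller speed to {N_match:.1f} rev/s")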

  4. Potential energy surfaces and reaction dynamics of polyatomic molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Yan-Tyng

    A simple empirical valence bond (EVB) model approach is suggested for constructing global potential energy surfaces for reactions of polyatomic molecular systems. This approach produces smooth and continuous potential surfaces which can be directly utilized in a dynamical study. Two types of reactions are of special interest, the unimolecular dissociation and the unimolecular isomerization. For the first type, the molecular dissociation dynamics of formaldehyde on the ground electronic surface is investigated through classical trajectory calculations on EVB surfaces. The product state distributions and vector correlations obtained from this study suggest behaviors very similar to those seen in the experiments. The intramolecular hydrogen atom transfer in the formic acid dimer is an example of the isomerization reaction. High level ab initio quantum chemistry calculations are performed to obtain optimized equilibrium and transition state dimer geometries and also the harmonic frequencies.

  5. The study and development of the empirical correlations equation of natural convection heat transfer on vertical rectangular sub-channels

    NASA Astrophysics Data System (ADS)

    Kamajaya, Ketut; Umar, Efrizon; Sudjatmi, K. S.

    2012-06-01

    This study focused on natural convection heat transfer using a vertical rectangular sub-channel and water as the coolant fluid. For this study, heater pipes instrumented with thermocouples were fabricated; each heater carries five thermocouples along its length. Each heater is 2.54 cm in diameter and 45 cm in length, and the distance between heater centres (the pitch) is 29.5 cm. The test equipment is fitted with a primary cooling system, a secondary cooling system and a heat exchanger. The purpose of this study is to obtain new empirical correlation equations for the vertical rectangular sub-channel, especially for natural convection heat transfer within a bundle of vertical cylinders in a rectangular sub-channel arrangement. The empirical correlation equation can support the thermo-hydraulic analysis of research nuclear reactors that utilize cylindrical fuel rods, and can also be used in designing baffle-free vertical shell-and-tube heat exchangers. The resulting empirical correlation for the natural convection heat transfer coefficient in the rectangular arrangement is Nu = 6.3357 (Ra·Dh/x)^0.0740.
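
    Turning the reported correlation into a local heat transfer coefficient is then a one-line calculation; the thermal conductivity and operating point below are illustrative values, not the paper's.

      def h_natural_convection(Ra, Dh, x, k_fluid=0.63):
          """Heat transfer coefficient (W/m^2.K) from Nu = 6.3357 (Ra*Dh/x)^0.0740.

          Ra: Rayleigh number, Dh: hydraulic diameter (m), x: axial position (m),
          k_fluid: coolant thermal conductivity (W/m.K), here water near 40 C.
          """
          Nu = 6.3357 * (Ra * Dh / x) ** 0.0740
          return Nu * k_fluid / Dh

      print(h_natural_convection(Ra=1e8, Dh=0.03, x=0.2))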

  6. The use of mechanistic descriptions of algal growth and zooplankton grazing in an estuarine eutrophication model

    NASA Astrophysics Data System (ADS)

    Baird, M. E.; Walker, S. J.; Wallace, B. B.; Webster, I. T.; Parslow, J. S.

    2003-03-01

    A simple model of estuarine eutrophication is built on biomechanical (or mechanistic) descriptions of a number of the key ecological processes in estuaries. Mechanistically described processes include the nutrient uptake and light capture of planktonic and benthic autotrophs, and the encounter rates of planktonic predators and prey. Other more complex processes, such as sediment biogeochemistry, detrital processes and phosphate dynamics, are modelled using empirical descriptions from the Port Phillip Bay Environmental Study (PPBES) ecological model. A comparison is made between the mechanistically determined rates of ecological processes and the analogous empirically determined rates in the PPBES ecological model. The rates generally agree, with a few significant exceptions. Model simulations were run at a range of estuarine depths and nutrient loads, with outputs presented as the annually averaged biomass of autotrophs. The simulations followed a simple conceptual model of eutrophication, suggesting a simple biomechanical understanding of estuarine processes can provide a predictive tool for ecological processes in a wide range of estuarine ecosystems.

  7. An Empirical Bayes Approach to Spatial Analysis

    NASA Technical Reports Server (NTRS)

    Morris, C. N.; Kostal, H.

    1983-01-01

    Multi-channel LANDSAT data are collected in several passes over agricultural areas during the growing season. How empirical Bayes modeling can be used to develop crop identification and discrimination techniques that account for spatial correlation in such data is considered. The approach models the unobservable parameters and the data separately, hoping to take advantage of the fact that the bulk of spatial correlation lies in the parameter process. The problem is then framed in terms of estimating posterior probabilities of crop types for each spatial area. Some empirical Bayes spatial estimation methods are used to estimate the logits of these probabilities.

  8. Machine learning predictions of molecular properties: Accurate many-body potentials and nonlocality in chemical space

    DOE PAGES

    Hansen, Katja; Biegler, Franziska; Ramakrishnan, Raghunathan; ...

    2015-06-04

    Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. The same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies.
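
    The Bag of Bonds vectorization can be sketched as follows: off-diagonal Coulomb-matrix entries Z_i Z_j / |R_i - R_j| are grouped into "bags" by element pair, sorted in descending order within each bag, and zero-padded to a fixed per-bag length. The sketch below omits the diagonal "atom" bags of the full representation, and the bag sizes are placeholders that must cover every element pair in the dataset.

      import itertools
      import numpy as np

      def bag_of_bonds(Z, R, bag_sizes):
          """Z: atomic numbers, R: (n, 3) coordinates, bag_sizes: dict mapping
          sorted element pairs, e.g. (1, 6) for C-H, to a fixed bag length."""
          bags = {pair: [] for pair in bag_sizes}
          for i, j in itertools.combinations(range(len(Z)), 2):
              pair = tuple(sorted((Z[i], Z[j])))
              bags[pair].append(Z[i] * Z[j] / np.linalg.norm(R[i] - R[j]))
          vec = []
          for pair in sorted(bag_sizes):               # fixed bag order
              entries = sorted(bags[pair], reverse=True)[:bag_sizes[pair]]
              vec.extend(entries + [0.0] * (bag_sizes[pair] - len(entries)))
          return np.array(vec)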

  9. Molecular-dynamics simulation of mutual diffusion in nonideal liquid mixtures

    NASA Astrophysics Data System (ADS)

    Rowley, R. L.; Stoker, J. M.; Giles, N. F.

    1991-05-01

    The mutual-diffusion coefficients, D_12, of n-hexane, n-heptane, and n-octane in chloroform were modeled using equilibrium molecular-dynamics (MD) simulations of simple Lennard-Jones (LJ) fluids. Pure-component LJ parameters were obtained by comparison of simulations to experimental self-diffusion coefficients. While values of “effective” LJ parameters are not expected to simulate accurately diverse thermophysical properties over a wide range of conditions, it was recently shown that effective parameters obtained from pure self-diffusion coefficients can accurately model mutual diffusion in ideal, liquid mixtures. In this work, similar simulations are used to model diffusion in nonideal mixtures. The same combining rules used in the previous study for the cross-interaction parameters were found to be adequate to represent the composition dependence of D_12. The effect of alkane chain length on D_12 is also correctly predicted by the simulations. A commonly used assumption in empirical correlations of D_12, that its kinetic portion is a simple, compositional average of the intradiffusion coefficients, is inconsistent with the simulation results. In fact, the value of the kinetic portion of D_12 was often outside the range of values bracketed by the two intradiffusion coefficients for the nonideal system modeled here.
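
    The combining rules are not spelled out in the abstract; the Lorentz-Berthelot rules are the common default for LJ cross-interaction parameters and illustrate the idea (in LaTeX):

      \sigma_{12} = \tfrac{1}{2}\left(\sigma_{11} + \sigma_{22}\right),
      \qquad
      \varepsilon_{12} = \sqrt{\varepsilon_{11}\,\varepsilon_{22}}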

  10. Machine Learning Predictions of Molecular Properties: Accurate Many-Body Potentials and Nonlocality in Chemical Space

    PubMed Central

    2015-01-01

    Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. In addition, the same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies. PMID:26113956

  11. Noncontact methods for measuring water-surface elevations and velocities in rivers: Implications for depth and discharge extraction

    USGS Publications Warehouse

    Nelson, Jonathan M.; Kinzel, Paul J.; McDonald, Richard R.; Schmeeckle, Mark

    2016-01-01

    Recently developed optical and videographic methods for measuring water-surface properties in a noninvasive manner hold great promise for extracting river hydraulic and bathymetric information. This paper describes such a technique, concentrating on the method of infrared videography for measuring surface velocities and both acoustic (laboratory-based) and laser-scanning (field-based) techniques for measuring water-surface elevations. In ideal laboratory situations with simple flows, appropriate spatial and temporal averaging results in accurate water-surface elevations and water-surface velocities. In test cases, this accuracy is sufficient to allow direct inversion of the governing equations of motion to produce estimates of depth and discharge. Unlike other optical techniques for determining local depth that rely on transmissivity of the water column (bathymetric lidar, multi/hyperspectral correlation), this method uses only water-surface information, so even deep and/or turbid flows can be investigated. However, significant errors arise in areas of nonhydrostatic spatial accelerations, such as those associated with flow over bedforms or other relatively steep obstacles. Using laboratory measurements for test cases, the cause of these errors is examined and both a simple semi-empirical method and computational results are presented that can potentially reduce bathymetric inversion errors.

  12. A simple analytical method for determining the atmospheric dispersion of upward-directed high velocity releases

    NASA Astrophysics Data System (ADS)

    Palazzi, E.

    The evaluation of the atmospheric dispersion of a cloud arising from a sudden release of flammable or toxic materials is an essential tool for properly designing flares, vents and other safety devices, and for quantifying the potential risk related to existing ones or arising from the various kinds of accidents which can occur in chemical plants. Among the methods developed to treat the important case of upward-directed jets, Hoehne's procedure for determining the behaviour and extent of the flammability zone is extensively utilized, particularly in petrochemical plants. In a previous study, a substantial simplification of this procedure was achieved by correlating the experimental data with an empirical formula, yielding a mathematical description of the boundaries of the flammable cloud. Following a theoretical approach, a more general model is developed in the present work, applicable to the various kinds of design problems and/or risk evaluations regarding upward-directed releases from high velocity sources. It is also demonstrated that the model gives conservative results if applied outside the range of Hoehne's experimental conditions. Moreover, with simple modifications, the same approach could easily be applied to the atmospheric dispersion of releases directed in any way.

  13. Empirical correlations for axial dispersion coefficient and Peclet number in fixed-bed columns.

    PubMed

    Rastegar, Seyed Omid; Gu, Tingyue

    2017-03-24

    In this work, a new correlation for the axial dispersion coefficient was obtained using experimental data in the literature for axial dispersion in fixed-bed columns packed with particles. The Chung and Wen correlation and the De Ligny correlation are two popular empirical correlations; however, the former lacks the molecular diffusion term and the latter does not consider bed voidage. The new axial dispersion coefficient correlation in this work was based on additional experimental data in the literature and considers both molecular diffusion and bed voidage, making it more comprehensive and accurate. The Peclet number correlation derived from the new axial dispersion coefficient correlation on average leads to 12% lower Peclet number values than the Chung and Wen correlation, and in many cases much lower values than the De Ligny correlation.
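
    The generic two-mechanism structure behind such correlations, a molecular diffusion term plus a flow-driven (mechanical) term with the bed voidage entering both, can be sketched as below; gamma1 and gamma2 are placeholder coefficients, not the fitted values from the paper.

      def axial_dispersion(u, d_p, D_m, eps, gamma1=0.7, gamma2=0.5):
          """Axial dispersion coefficient D_ax (m^2/s).

          u: superficial velocity (m/s), d_p: particle diameter (m),
          D_m: molecular diffusivity (m^2/s), eps: bed voidage."""
          v = u / eps                        # interstitial velocity
          return gamma1 * D_m + gamma2 * d_p * v

      def peclet(u, d_p, D_m, eps, L):
          """Column Peclet number Pe = v L / D_ax used in fixed-bed models."""
          return (u / eps) * L / axial_dispersion(u, d_p, D_m, eps)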

  14. Understanding the determinants of volatility clustering in terms of stationary Markovian processes

    NASA Astrophysics Data System (ADS)

    Miccichè, S.

    2016-11-01

    Volatility is a key variable in the modeling of financial markets. The most striking feature of volatility is that it is a long-range correlated stochastic variable, i.e. its autocorrelation function decays like a power law τ^(−β) for large time lags. In the present work we investigate the determinants of this feature, starting from the empirical observation that the exponent β of a certain stock's volatility is a linear function of the average correlation of that stock's volatility with all other volatilities. We propose a simple approach consisting in diagonalizing the cross-correlation matrix of volatilities and investigating whether or not the diagonalized volatilities still keep some of the original volatility stylized facts. The diagonalized volatilities turn out to share with the original volatilities both the power-law decay of the probability density function and the power-law decay of the autocorrelation function. This indicates that volatility clustering is already present in the diagonalized, uncorrelated volatilities. We therefore present a parsimonious univariate model based on a non-linear Langevin equation that reproduces these two stylized facts of volatility well. The model helps us understand that the main source of volatility clustering, once volatilities have been diagonalized, is that the economic forces driving volatility can be modeled in terms of a Smoluchowski potential with logarithmic tails.
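
    An Euler-Maruyama sketch of such a Langevin model is given below. The specific potential (a logarithmic tail plus a short-range repulsion that keeps the volatility positive) and all parameters are illustrative assumptions, not the paper's calibrated model; a log-tailed potential of this kind yields a stationary density with a power-law tail.

      import numpy as np

      def simulate_volatility(n=100_000, dt=0.01, b=2.0, sigma=1.0, v0=1.0, seed=0):
          """Integrate dv = -V'(v) dt + sigma dW for V(v) = b*log(v) + 1/v."""
          rng = np.random.default_rng(seed)
          v = np.empty(n)
          v[0] = v0
          for t in range(1, n):
              drift = -b / v[t - 1] + 1.0 / v[t - 1]**2   # -dV/dv
              v[t] = v[t - 1] + drift * dt + sigma * np.sqrt(dt) * rng.normal()
              v[t] = max(v[t], 1e-6)                      # numerical floor
          return v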

  15. Deviation of Long-Period Tides from Equilibrium: Kinematics and Geostrophy

    NASA Technical Reports Server (NTRS)

    Egbert, Gary D.; Ray, Richard D.

    2003-01-01

    New empirical estimates of the long-period fortnightly (Mf) tide obtained from TOPEX/Poseidon (T/P) altimeter data confirm significant basin-scale deviations from equilibrium. Elevations in the low-latitude Pacific have reduced amplitude and lag those in the Atlantic by 30 deg or more. These interbasin amplitude and phase variations are robust features that are reproduced by numerical solutions of the shallow-water equations, even for a constant-depth ocean with schematic interconnected rectangular basins. A simplified analytical model for cooscillating connected basins also reproduces the principal features observed in the empirical solutions. This simple model is largely kinematic. Zonally averaged elevations within a simple closed basin would be nearly in equilibrium with the gravitational potential, except for a constant offset required to conserve mass. With connected basins these offsets are mostly eliminated by interbasin mass flux. Because of rotation, this flux occurs mostly in a narrow boundary layer across the mouth and at the western edge of each basin, and geostrophic balance in this zone supports small residual offsets (and phase shifts) between basins. The simple model predicts that this effect should decrease roughly linearly with frequency, a result that is confirmed by numerical modeling and empirical T/P estimates of the monthly (Mm) tidal constituent. This model also explains some aspects of the anomalous nonisostatic response of the ocean to atmospheric pressure forcing at periods of around 5 days.

  16. Empirical State Error Covariance Matrix for Batch Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joe

    2015-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted batch least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. This empirical error covariance matrix may be calculated as a side computation for each unique batch solution. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.
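
    One simple way to realize the idea is a sandwich-type computation driven by the post-fit residuals, sketched below; this is an illustration of the concept under that assumption, not the paper's exact reinterpretation of the batch equations.

      import numpy as np

      def batch_wls(A, W, y):
          """A: (m, n) design, W: (m, m) weight matrix, y: (m,) observations."""
          N = A.T @ W @ A                          # normal matrix
          G = np.linalg.solve(N, A.T @ W)          # batch least squares gain
          x_hat = G @ y
          r = y - A @ x_hat                        # post-fit residuals
          P_formal = np.linalg.inv(N)              # theoretical covariance
          P_empirical = G @ np.diag(r**2) @ G.T    # residual-driven covariance
          return x_hat, P_formal, P_empirical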

  17. Constraints on the nuclear equation of state from nuclear masses and radii in a Thomas-Fermi meta-modeling approach

    NASA Astrophysics Data System (ADS)

    Chatterjee, D.; Gulminelli, F.; Raduta, Ad. R.; Margueron, J.

    2017-12-01

    The question of correlations among empirical equation of state (EoS) parameters constrained by nuclear observables is addressed in a Thomas-Fermi meta-modeling approach. A recently proposed meta-modeling for the nuclear EoS in nuclear matter is augmented with a single finite size term to produce a minimal unified EoS functional able to describe the smooth part of the nuclear ground state properties. This meta-model can reproduce the predictions of a large variety of models, and interpolate continuously between them. An analytical approximation to the full Thomas-Fermi integrals is further proposed giving a fully analytical meta-model for nuclear masses. The parameter space is sampled and filtered through the constraint of nuclear mass reproduction with Bayesian statistical tools. We show that this simple analytical meta-modeling has a predictive power on masses, radii, and skins comparable to full Hartree-Fock or extended Thomas-Fermi calculations with realistic energy functionals. The covariance analysis on the posterior distribution shows that no physical correlation is present between the different EoS parameters. Concerning nuclear observables, a strong correlation between the slope of the symmetry energy and the neutron skin is observed, in agreement with previous studies.

  18. Objective Prediction of Hearing Aid Benefit Across Listener Groups Using Machine Learning: Speech Recognition Performance With Binaural Noise-Reduction Algorithms.

    PubMed

    Schädler, Marc R; Warzybok, Anna; Kollmeier, Birger

    2018-01-01

    The simulation framework for auditory discrimination experiments (FADE) was adopted and validated to predict the individual speech-in-noise recognition performance of listeners with normal and impaired hearing with and without a given hearing-aid algorithm. FADE uses a simple automatic speech recognizer (ASR) to estimate the lowest achievable speech reception thresholds (SRTs) from simulated speech recognition experiments in an objective way, independent from any empirical reference data. Empirical data from the literature were used to evaluate the model in terms of predicted SRTs and benefits in SRT with the German matrix sentence recognition test when using eight single- and multichannel binaural noise-reduction algorithms. To allow individual predictions of SRTs in binaural conditions, the model was extended with a simple better ear approach and individualized by taking audiograms into account. In a realistic binaural cafeteria condition, FADE explained about 90% of the variance of the empirical SRTs for a group of normal-hearing listeners and predicted the corresponding benefits with a root-mean-square prediction error of 0.6 dB. This highlights the potential of the approach for the objective assessment of benefits in SRT without prior knowledge about the empirical data. The predictions for the group of listeners with impaired hearing explained 75% of the empirical variance, while the individual predictions explained less than 25%. Possibly, additional individual factors should be considered for more accurate predictions with impaired hearing. A competing talker condition clearly showed one limitation of current ASR technology, as the empirical performance with SRTs lower than -20 dB could not be predicted.

  19. Objective Prediction of Hearing Aid Benefit Across Listener Groups Using Machine Learning: Speech Recognition Performance With Binaural Noise-Reduction Algorithms

    PubMed Central

    Schädler, Marc R.; Warzybok, Anna; Kollmeier, Birger

    2018-01-01

    The simulation framework for auditory discrimination experiments (FADE) was adopted and validated to predict the individual speech-in-noise recognition performance of listeners with normal and impaired hearing with and without a given hearing-aid algorithm. FADE uses a simple automatic speech recognizer (ASR) to estimate the lowest achievable speech reception thresholds (SRTs) from simulated speech recognition experiments in an objective way, independent from any empirical reference data. Empirical data from the literature were used to evaluate the model in terms of predicted SRTs and benefits in SRT with the German matrix sentence recognition test when using eight single- and multichannel binaural noise-reduction algorithms. To allow individual predictions of SRTs in binaural conditions, the model was extended with a simple better ear approach and individualized by taking audiograms into account. In a realistic binaural cafeteria condition, FADE explained about 90% of the variance of the empirical SRTs for a group of normal-hearing listeners and predicted the corresponding benefits with a root-mean-square prediction error of 0.6 dB. This highlights the potential of the approach for the objective assessment of benefits in SRT without prior knowledge about the empirical data. The predictions for the group of listeners with impaired hearing explained 75% of the empirical variance, while the individual predictions explained less than 25%. Possibly, additional individual factors should be considered for more accurate predictions with impaired hearing. A competing talker condition clearly showed one limitation of current ASR technology, as the empirical performance with SRTs lower than −20 dB could not be predicted. PMID:29692200

  20. The dynamics of adapting, unregulated populations and a modified fundamental theorem.

    PubMed

    O'Dwyer, James P

    2013-01-06

    A population in a novel environment will accumulate adaptive mutations over time, and the dynamics of this process depend on the underlying fitness landscape: the fitness of and mutational distance between possible genotypes in the population. Despite its fundamental importance for understanding the evolution of a population, inferring this landscape from empirical data has been problematic. We develop a theoretical framework to describe the adaptation of a stochastic, asexual, unregulated, polymorphic population undergoing beneficial, neutral and deleterious mutations on a correlated fitness landscape. We generate quantitative predictions for the change in the mean fitness and within-population variance in fitness over time, and find a simple, analytical relationship between the distribution of fitness effects arising from a single mutation, and the change in mean population fitness over time: a variant of Fisher's 'fundamental theorem' which explicitly depends on the form of the landscape. Our framework can therefore be thought of in three ways: (i) as a set of theoretical predictions for adaptation in an exponentially growing phase, with applications in pathogen populations, tumours or other unregulated populations; (ii) as an analytically tractable problem to potentially guide theoretical analysis of regulated populations; and (iii) as a basis for developing empirical methods to infer general features of a fitness landscape.
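
    For orientation, the classical statement of the theorem and the mutation-augmented form it is commonly modified into read (in LaTeX; the paper's variant additionally depends on the correlation structure of the landscape, which this standard form does not capture):

      \frac{d\bar{w}}{dt} = \operatorname{Var}(w)
      \qquad\longrightarrow\qquad
      \frac{d\bar{w}}{dt} = \operatorname{Var}(w) + U \int s\,\rho(s)\,ds

    where U is the genomic mutation rate and \rho(s) the distribution of fitness effects of a single new mutation.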

  1. Is the Critical Shields Stress for Incipient Sediment Motion Dependent on Bed Slope in Natural Channels? No.

    NASA Astrophysics Data System (ADS)

    Phillips, C. B.; Jerolmack, D. J.

    2017-12-01

    Understanding when coarse sediment begins to move in a river is essential for linking rivers to the evolution of mountainous landscapes. Unfortunately, the threshold of surface particle motion is notoriously difficult to measure in the field. However, recent studies have shown that the threshold of surface motion is empirically correlated with channel slope, a property that is easy to measure and readily available from the literature. These studies thoroughly examined the mechanistic underpinnings behind the observed correlation and produced suitably complex models. Those models are difficult to implement for natural rivers using widely available data, and thus others have treated the empirical regression between slope and the threshold of motion as a predictive model. We note that none of the authors of the original studies exploring this correlation suggested their empirical regressions be used in a predictive fashion; nevertheless, these regressions between slope and the threshold of motion have found their way into numerous recent studies, engendering potentially spurious conclusions. We demonstrate that there are two significant problems with using these empirical equations for prediction: (1) the empirical regressions are based on a limited sampling of the phase space of bed-load rivers, and (2) the empirical measurements of bankfull and critical shear stresses are paired. The upshot of these problems is that the empirical relations' predictive capacity is limited to field sites drawn from the same region of the bed-load river phase space, and that the paired nature of the data introduces a spurious correlation when considering the ratio of bankfull to critical shear stress. Using a large compilation of bed-load river hydraulic geometry data, we demonstrate that the variation within independently measured values of the threshold of motion changes systematically with bankfull Shields stress and not channel slope. Additionally, using several recent datasets, we highlight the potential pitfalls that one can encounter when using simplistic empirical regressions to predict the threshold of motion, showing that while these concerns could be construed as subtle, the resulting implications can be substantial.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Wei; Lei, Wei-Hua; Wang, Ding-Xiong

    Recently, two empirical correlations related to the minimum variability timescale (MTS) of the light curves have been discovered in gamma-ray bursts (GRBs). One is the anti-correlation between the MTS and the Lorentz factor Γ, and the other is the anti-correlation between the MTS and the gamma-ray luminosity L_γ. Both correlations might be used to explore the activity of the central engine of GRBs. In this paper, we try to understand these empirical correlations by combining two popular black hole central engine models (namely, the Blandford-Znajek (BZ) mechanism and the neutrino-dominated accretion flow (NDAF)). By taking the MTS as the timescale of viscous instability of the NDAF, we find that these correlations favor the scenario in which the jet is driven by the BZ mechanism.

  3. Digit Reversal in Children's Writing: A Simple Theory and Its Empirical Validation

    ERIC Educational Resources Information Center

    Fischer, Jean-Paul

    2013-01-01

    This article presents a simple theory according to which the left-right reversal of single digits by 5- and 6-year-old children is mainly due to the application of an implicit right-writing or -orienting rule. A number of nontrivial predictions can be drawn from this theory. First, left-oriented digits (1, 2, 3, 7, and 9) will be reversed more…

  4. Evidence for the Early Emergence of the Simple View of Reading in a Transparent Orthography

    ERIC Educational Resources Information Center

    Kendeou, Panayiota; Papadopoulos, Timothy C.; Kotzapoulou, Marianna

    2013-01-01

    The main aim of the present study was to empirically test the emergence of the Simple View of Reading (SVR) in a transparent orthography, and specifically in Greek. To do so, we examined whether the constituent components of the SVR could be identified in young, Greek-speaking children even before the beginning of formal reading instruction. Our…

  5. Model Estimation Using Ridge Regression with the Variance Normalization Criterion. Interim Report No. 2. The Education and Inequality in Canada Project.

    ERIC Educational Resources Information Center

    Lee, Wan-Fung; Bulcock, Jeffrey Wilson

    The purposes of this study are: (1) to demonstrate the superiority of simple ridge regression over ordinary least squares regression through theoretical argument and empirical example; (2) to modify ridge regression through use of the variance normalization criterion; and (3) to demonstrate the superiority of simple ridge regression based on the…
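
    The comparison argued for in point (1) is easy to make concrete: with near-collinear predictors, closed-form OLS coefficients become unstable while ridge coefficients stay bounded. The sketch below uses a fixed ridge constant k and synthetic data; the report's variance normalization criterion for choosing k is not reproduced.

      import numpy as np

      def ols(X, y):
          return np.linalg.solve(X.T @ X, X.T @ y)

      def ridge(X, y, k):
          return np.linalg.solve(X.T @ X + k * np.eye(X.shape[1]), X.T @ y)

      rng = np.random.default_rng(0)
      x1 = rng.normal(size=200)
      X = np.column_stack([x1, x1 + 1e-3 * rng.normal(size=200)])  # collinear
      y = X @ np.array([1.0, 1.0]) + rng.normal(size=200)
      print("OLS:  ", ols(X, y))           # typically wildly inflated
      print("ridge:", ridge(X, y, k=1.0))  # stays near the stable total effect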

  6. Optimum wall impedance for spinning modes: A correlation with mode cut-off ratio

    NASA Technical Reports Server (NTRS)

    Rice, E. J.

    1978-01-01

    A correlating equation relating the optimum acoustic impedance for the wall lining of a circular duct to the acoustic mode cut-off ratio is presented. The optimum impedance was correlated with cut-off ratio because the cut-off ratio appears to be the fundamental parameter governing the propagation of sound in the duct: modes with similar cut-off ratios respond in a similar way to the acoustic liner. The correlation is a semi-empirical expression developed from an empirical modification of an equation originally derived from sound propagation theory in a thin boundary layer. This correlating equation forms part of a simplified liner design method, based upon modal cut-off ratio, for multimodal noise propagation.

  7. Random matrix theory analysis of cross-correlations in the US stock market: Evidence from Pearson’s correlation coefficient and detrended cross-correlation coefficient

    NASA Astrophysics Data System (ADS)

    Wang, Gang-Jin; Xie, Chi; Chen, Shou; Yang, Jiao-Jiao; Yang, Ming-Yan

    2013-09-01

    In this study, we first build two empirical cross-correlation matrices in the US stock market by two different methods, namely Pearson’s correlation coefficient and the detrended cross-correlation coefficient (DCCA coefficient). Then, combining the two matrices with the method of random matrix theory (RMT), we mainly investigate the statistical properties of cross-correlations in the US stock market. We choose the daily closing prices of 462 constituent stocks of the S&P 500 index as the research objects and select the sample data from January 3, 2005 to August 31, 2012. In the empirical analysis, we examine the statistical properties of the cross-correlation coefficients, the distribution of eigenvalues, the distribution of eigenvector components, and the inverse participation ratio. From the two methods, we find some new results on the cross-correlations in the US stock market, which differ from the conclusions reached by previous studies. The empirical cross-correlation matrices constructed from the DCCA coefficient show several interesting properties at different time scales in the US stock market, which are useful for risk management and optimal portfolio selection, especially for diversifying an asset portfolio. Finding the theoretical eigenvalue distribution of a completely random matrix R for the DCCA coefficient will be interesting and meaningful future work, because it does not obey the Marčenko-Pastur distribution.
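
    The DCCA coefficient itself is short to compute. The sketch below implements the usual definition rho_DCCA(n) = F^2_DCCA(n) / (F_DFA,x(n) F_DFA,y(n)) at a single time scale n, with linear detrending in non-overlapping boxes; box-handling details vary across papers, and this is one common choice.

      import numpy as np

      def dcca_coefficient(x, y, n):
          """Detrended cross-correlation coefficient of two equal-length series
          at window size n; lies in [-1, 1] like an ordinary correlation."""
          X = np.cumsum(x - np.mean(x))          # integrated profiles
          Y = np.cumsum(y - np.mean(y))
          f_xy, f_xx, f_yy = [], [], []
          t = np.arange(n + 1)
          for s in range(0, len(X) - n, n):
              seg_x, seg_y = X[s:s + n + 1], Y[s:s + n + 1]
              rx = seg_x - np.polyval(np.polyfit(t, seg_x, 1), t)  # detrend
              ry = seg_y - np.polyval(np.polyfit(t, seg_y, 1), t)
              f_xy.append(np.mean(rx * ry))
              f_xx.append(np.mean(rx * rx))
              f_yy.append(np.mean(ry * ry))
          return np.mean(f_xy) / np.sqrt(np.mean(f_xx) * np.mean(f_yy))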

  8. The diskmass survey. VIII. On the relationship between disk stability and star formation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Westfall, Kyle B.; Verheijen, Marc A. W.; Andersen, David R.

    2014-04-10

    We study the relationship between the stability level of late-type galaxy disks and their star-formation activity using integral-field gaseous and stellar kinematic data. Specifically, we compare the two-component (gas+stars) stability parameter of Romeo and Wiegert (Q_RW), incorporating stellar kinematic data for the first time, with the star-formation rate estimated from 21 cm continuum emission. We determine the stability level of each disk probabilistically using a Bayesian analysis of our data and a simple dynamical model. Our method incorporates the shape of the stellar velocity ellipsoid (SVE) and yields robust SVE measurements for over 90% of our sample. Averaging over this subsample, we find a meridional SVE shape of σ_z/σ_R = 0.51 (+0.36, −0.25) and, at 1.5 disk scale lengths, a stability parameter of Q_RW = 2.0 ± 0.9. We also find that the disk-averaged star-formation-rate surface density (Σ̇_{e,*}) is correlated with the disk-averaged gas and stellar mass surface densities (Σ_{e,g} and Σ_{e,*}) and anti-correlated with Q_RW. We show that an anti-correlation between Σ̇_{e,*} and Q_RW can be predicted using empirical scaling relations, such that this outcome is consistent with well-established statistical properties of star-forming galaxies. Interestingly, Σ̇_{e,*} is not correlated with the gas-only or star-only Toomre parameters, demonstrating the merit of calculating a multi-component stability parameter when comparing to star-formation activity. Finally, our results are consistent with the Ostriker et al. model of self-regulated star formation, which predicts Σ̇_{e,*}/Σ_{e,g} ∝ Σ_{e,*}^{1/2}. Based on this and other theoretical expectations, we discuss the possibility of a physical link between disk stability level and star-formation rate in light of our empirical results.

  9. An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.

  10. In-pile measurement of the thermal conductivity of irradiated metallic fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauer, T.H.; Holland, J.W.

    Transient test data and posttest measurements from recent in-pile overpower transient experiments are used for an in situ determination of metallic fuel thermal conductivity. For test pins that undergo melting but remain intact, a technique is described that relates fuel thermal conductivity to peak pin power during the transient and a posttest measured melt radius. Conductivity estimates and their uncertainty are made for a database of four irradiated Integral Fast Reactor-type metal fuel pins of relatively low burnup (<3 at.%). In the assessment of results, averages and trends of measured fuel thermal conductivity are correlated to local burnup. Emphasis is placed on the changes of conductivity that take place with burnup-induced swelling and sodium logging. Measurements are used to validate simple empirically based analytical models that describe thermal conductivity of porous media and that are recommended for general thermal analyses of irradiated metallic fuel.

  11. Space debris characterization in support of a satellite breakup model

    NASA Technical Reports Server (NTRS)

    Fortson, Bryan H.; Winter, James E.; Allahdadi, Firooz A.

    1992-01-01

    The Space Kinetic Impact and Debris Branch began an ambitious program to construct a fully analytical model of the breakup of a satellite under hypervelocity impact. In order to provide empirical data with which to substantiate the model, debris from hypervelocity experiments conducted in a controlled laboratory environment was characterized to provide information on its mass, velocity, and ballistic coefficient distributions. Data on the debris were collected in one master data file, and a simple FORTRAN program allows users to describe the debris from any subset of these experiments that may be of interest to them. A statistical analysis was performed, allowing users to determine the precision of the velocity measurements for the data. Attempts are being made to include and correlate other laboratory data, as well as those data obtained from the explosion or collision of spacecraft in low earth orbit.

  12. The Logic of Fashion Cycles

    PubMed Central

    Acerbi, Alberto; Ghirlanda, Stefano; Enquist, Magnus

    2012-01-01

    Many cultural traits exhibit volatile dynamics, commonly dubbed fashions or fads. Here we show that realistic fashion-like dynamics emerge spontaneously if individuals can copy others' preferences for cultural traits as well as traits themselves. We demonstrate this dynamics in simple mathematical models of the diffusion, and subsequent abandonment, of a single cultural trait which individuals may or may not prefer. We then simulate the coevolution between many cultural traits and the associated preferences, reproducing power-law frequency distributions of cultural traits (most traits are adopted by few individuals for a short time, and very few by many for a long time), as well as correlations between the rate of increase and the rate of decrease of traits (traits that increase rapidly in popularity are also abandoned quickly and vice versa). We also establish that alternative theories, that fashions result from individuals signaling their social status, or from individuals randomly copying each other, do not satisfactorily reproduce these empirical observations. PMID:22412887
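
    The core mechanism is easy to simulate. The sketch below is a minimal, illustrative rendering rather than the authors' exact model or parameters: at each step an agent copies another agent's preference, or, if it holds the preference, copies the trait itself; agents without the preference drop the trait. Plotting the recorded trait frequency typically shows it rising while the preference is common and collapsing once the preference drifts away.

      import numpy as np

      rng = np.random.default_rng(2)
      N, steps = 1000, 200_000
      trait = np.zeros(N, dtype=bool)
      prefer = np.zeros(N, dtype=bool)
      trait[:10] = True                  # a few seed adopters...
      prefer[:300] = True                # ...and an initially common preference

      history = []
      for _ in range(steps):
          i, j = rng.integers(N, size=2)           # copier and model
          if rng.random() < 0.5:
              prefer[i] = prefer[j]                # copy the preference
          elif prefer[i]:
              trait[i] = trait[j]                  # adopt the trait if liked
          else:
              trait[i] = False                     # abandon it if not
          history.append(trait.mean())             # rise-and-fall trajectory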

  13. The logic of fashion cycles.

    PubMed

    Acerbi, Alberto; Ghirlanda, Stefano; Enquist, Magnus

    2012-01-01

    Many cultural traits exhibit volatile dynamics, commonly dubbed fashions or fads. Here we show that realistic fashion-like dynamics emerge spontaneously if individuals can copy others' preferences for cultural traits as well as traits themselves. We demonstrate these dynamics in simple mathematical models of the diffusion, and subsequent abandonment, of a single cultural trait which individuals may or may not prefer. We then simulate the coevolution between many cultural traits and the associated preferences, reproducing power-law frequency distributions of cultural traits (most traits are adopted by few individuals for a short time, and very few by many for a long time), as well as correlations between the rate of increase and the rate of decrease of traits (traits that increase rapidly in popularity are also abandoned quickly and vice versa). We also establish that alternative theories, that fashions result from individuals signaling their social status, or from individuals randomly copying each other, do not satisfactorily reproduce these empirical observations.

  14. Accurate van der Waals coefficients from density functional theory

    PubMed Central

    Tao, Jianmin; Perdew, John P.; Ruzsinszky, Adrienn

    2012-01-01

    The van der Waals interaction is a weak, long-range correlation, arising from quantum electronic charge fluctuations. This interaction affects many properties of materials. A simple and yet accurate estimate of this effect will facilitate computer simulation of complex molecular materials and drug design. Here we develop a fast approach for accurate evaluation of dynamic multipole polarizabilities and van der Waals (vdW) coefficients of all orders from the electron density and static multipole polarizabilities of each atom or other spherical object, without empirical fitting. Our dynamic polarizabilities (dipole, quadrupole, octupole, etc.) are exact in the zero- and high-frequency limits, and exact at all frequencies for a metallic sphere of uniform density. Our theory predicts dynamic multipole polarizabilities in excellent agreement with more expensive many-body methods, and yields therefrom vdW coefficients C6, C8, C10 for atom pairs with a mean absolute relative error of only 3%. PMID:22205765
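
    For reference, the lowest-order (dipole-dipole) vdW coefficient computed by such approaches is given by the standard Casimir-Polder integral over dynamic dipole polarizabilities evaluated at imaginary frequency:

      C_6^{AB} = \frac{3}{\pi}\int_0^{\infty} \alpha_A(i\omega)\,\alpha_B(i\omega)\,\mathrm{d}\omega

    Higher-order coefficients (C8, C10) involve the corresponding higher multipole polarizabilities in analogous integrals.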

  15. Pyro-Synthesis of Functional Nanocrystals

    PubMed Central

    Gim, Jihyeon; Mathew, Vinod; Lim, Jinsub; Song, Jinju; Baek, Sora; Kang, Jungwon; Ahn, Docheon; Song, Sun-Ju; Yoon, Hyeonseok; Kim, Jaekook

    2012-01-01

    Despite nanomaterials with unique properties playing a vital role in scientific and technological advancements of various fields including chemical and electrochemical applications, the scope for exploration of nano-scale applications is still wide open. The intimate correlation between material properties and synthesis in combination with the urgency to enhance the empirical understanding of nanomaterials demand the evolution of new strategies to promising materials. Herein we introduce a rapid pyro-synthesis that produces highly crystalline functional nanomaterials under reaction times of a few seconds in open-air conditions. The versatile technique may facilitate the development of a variety of nanomaterials and, in particular, carbon-coated metal phosphates with appreciable physico-chemical properties benefiting energy storage applications. The present strategy may present opportunities to develop “design rules” not only to produce nanomaterials for various applications but also to realize cost-effective and simple nanomaterial production beyond lab-scale limitations. PMID:23230511

  16. Influence of the impact energy on the pattern of blood drip stains

    NASA Astrophysics Data System (ADS)

    Smith, F. R.; Nicloux, C.; Brutin, D.

    2018-01-01

    The maximum spreading diameter of complex fluid droplets has been extensively studied and explained by numerous physical models. This research focuses therefore on a different aspect, the bulging outer rim observed after evaporation on the final dried pattern of blood droplets. A correlation is found between the inner diameter, the maximum outer diameter, and the impact speed. This shows how the drying mechanism of a blood drip stain is influenced by the impact energy, which induces a larger spreading diameter and thus a different redistribution of red blood cells inside the droplet. An empirical relation is established between the final dried pattern of a passive bloodstain and its impact speed, yielding a possible forensic application. Indeed, being able to relate accurately the energy of the drop with its final pattern would give a clue to investigators, as currently no such simple and accurate tool exists.

  17. Spin Entanglement Witness for Quantum Gravity.

    PubMed

    Bose, Sougato; Mazumdar, Anupam; Morley, Gavin W; Ulbricht, Hendrik; Toroš, Marko; Paternostro, Mauro; Geraci, Andrew A; Barker, Peter F; Kim, M S; Milburn, Gerard

    2017-12-15

    Understanding gravity in the framework of quantum mechanics is one of the great challenges in modern physics. However, the lack of empirical evidence has led to a debate on whether gravity is a quantum entity. Despite varied proposed probes for quantum gravity, it is fair to say that there are no feasible ideas yet to test its quantum coherent behavior directly in a laboratory experiment. Here, we introduce an idea for such a test based on the principle that two objects cannot be entangled without a quantum mediator. We show that despite the weakness of gravity, the phase evolution induced by the gravitational interaction of two micron-size test masses in adjacent matter-wave interferometers can detectably entangle them even when they are placed far enough apart to keep Casimir-Polder forces at bay. We provide a prescription for witnessing this entanglement, which certifies gravity as a quantum coherent mediator, through simple spin correlation measurements.
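
    Schematically, the witnessing prescription reduces to measuring two-spin correlators and checking a bound that separable states cannot violate; a witness of the form used in such proposals (notation assumed here, with sigma the embedded spin operators of the two masses) is

      \mathcal{W} = \bigl|\langle \sigma_x^{(1)} \otimes \sigma_z^{(2)}\rangle + \langle \sigma_y^{(1)} \otimes \sigma_y^{(2)}\rangle\bigr|, \qquad \mathcal{W} > 1 \;\Rightarrow\; \text{entanglement.}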

  18. Pyro-synthesis of functional nanocrystals.

    PubMed

    Gim, Jihyeon; Mathew, Vinod; Lim, Jinsub; Song, Jinju; Baek, Sora; Kang, Jungwon; Ahn, Docheon; Song, Sun-Ju; Yoon, Hyeonseok; Kim, Jaekook

    2012-01-01

    Despite nanomaterials with unique properties playing a vital role in scientific and technological advancements of various fields including chemical and electrochemical applications, the scope for exploration of nano-scale applications is still wide open. The intimate correlation between material properties and synthesis in combination with the urgency to enhance the empirical understanding of nanomaterials demand the evolution of new strategies to promising materials. Herein we introduce a rapid pyro-synthesis that produces highly crystalline functional nanomaterials under reaction times of a few seconds in open-air conditions. The versatile technique may facilitate the development of a variety of nanomaterials and, in particular, carbon-coated metal phosphates with appreciable physico-chemical properties benefiting energy storage applications. The present strategy may present opportunities to develop "design rules" not only to produce nanomaterials for various applications but also to realize cost-effective and simple nanomaterial production beyond lab-scale limitations.

  19. Spin Entanglement Witness for Quantum Gravity

    NASA Astrophysics Data System (ADS)

    Bose, Sougato; Mazumdar, Anupam; Morley, Gavin W.; Ulbricht, Hendrik; Toroš, Marko; Paternostro, Mauro; Geraci, Andrew A.; Barker, Peter F.; Kim, M. S.; Milburn, Gerard

    2017-12-01

    Understanding gravity in the framework of quantum mechanics is one of the great challenges in modern physics. However, the lack of empirical evidence has led to a debate on whether gravity is a quantum entity. Despite varied proposed probes for quantum gravity, it is fair to say that there are no feasible ideas yet to test its quantum coherent behavior directly in a laboratory experiment. Here, we introduce an idea for such a test based on the principle that two objects cannot be entangled without a quantum mediator. We show that despite the weakness of gravity, the phase evolution induced by the gravitational interaction of two micron-size test masses in adjacent matter-wave interferometers can detectably entangle them even when they are placed far enough apart to keep Casimir-Polder forces at bay. We provide a prescription for witnessing this entanglement, which certifies gravity as a quantum coherent mediator, through simple spin correlation measurements.

  20. Why Psychology Cannot be an Empirical Science.

    PubMed

    Smedslund, Jan

    2016-06-01

    The current empirical paradigm for psychological research is criticized because it ignores the irreversibility of psychological processes, the infinite number of influential factors, the pseudo-empirical nature of many hypotheses, and the methodological implications of social interactivity. An additional point is that the differences and correlations usually found are much too small to be useful in psychological practice and in daily life. Together, these criticisms imply that an objective, accumulative, empirical and theoretical science of psychology is an impossible project.

  1. An empirical, graphical, and analytical study of the relationship between vegetation indices. [derived from LANDSAT data

    NASA Technical Reports Server (NTRS)

    Lautenschlager, L.; Perry, C. R., Jr. (Principal Investigator)

    1981-01-01

    The development of formulae for the reduction of multispectral scanner measurements to a single value (vegetation index) for predicting and assessing vegetative characteristics is addressed. The origin, motivation, and derivation of some four dozen vegetation indices are summarized. Empirical, graphical, and analytical techniques are used to investigate the relationships among the various indices. It is concluded that many vegetative indices are very similar, some being simple algebraic transforms of others.

  2. Dependency structure and scaling properties of financial time series are related

    PubMed Central

    Morales, Raffaello; Di Matteo, T.; Aste, Tomaso

    2014-01-01

    We report evidence of a deep interplay between cross-correlations hierarchical properties and multifractality of New York Stock Exchange daily stock returns. The degree of multifractality displayed by different stocks is found to be positively correlated to their depth in the hierarchy of cross-correlations. We propose a dynamical model that reproduces this observation along with an array of other empirical properties. The structure of this model is such that the hierarchical structure of heterogeneous risks plays a crucial role in the time evolution of the correlation matrix, providing an interpretation to the mechanism behind the interplay between cross-correlation and multifractality in financial markets, where the degree of multifractality of stocks is associated to their hierarchical positioning in the cross-correlation structure. Empirical observations reported in this paper present a new perspective towards the merging of univariate multi scaling and multivariate cross-correlation properties of financial time series. PMID:24699417

  3. Dependency structure and scaling properties of financial time series are related

    NASA Astrophysics Data System (ADS)

    Morales, Raffaello; Di Matteo, T.; Aste, Tomaso

    2014-04-01

    We report evidence of a deep interplay between cross-correlations hierarchical properties and multifractality of New York Stock Exchange daily stock returns. The degree of multifractality displayed by different stocks is found to be positively correlated to their depth in the hierarchy of cross-correlations. We propose a dynamical model that reproduces this observation along with an array of other empirical properties. The structure of this model is such that the hierarchical structure of heterogeneous risks plays a crucial role in the time evolution of the correlation matrix, providing an interpretation to the mechanism behind the interplay between cross-correlation and multifractality in financial markets, where the degree of multifractality of stocks is associated to their hierarchical positioning in the cross-correlation structure. Empirical observations reported in this paper present a new perspective towards the merging of univariate multi scaling and multivariate cross-correlation properties of financial time series.

  4. A structural-phenomenological typology of mind-matter correlations.

    PubMed

    Atmanspacher, Harald; Fach, Wolfgang

    2013-04-01

    We present a typology of mind-matter correlations embedded in a dual-aspect monist framework as proposed by Pauli and Jung. They conjectured a picture in which the mental and the material arise as two complementary aspects of one underlying psychophysically neutral reality to which they cannot be reduced and to which direct empirical access is impossible. This picture suggests structural, persistent, reproducible mind-matter correlations by splitting the underlying reality into aspects. In addition, it suggests induced, occasional, evasive mind-matter correlations above and below, respectively, those stable baseline correlations. Two significant roles for the concept of meaning in this framework are elucidated. Finally, it is shown that the obtained typology is in perfect agreement with an empirically based classification of the phenomenology of mind-matter correlations as observed in exceptional human experiences. © 2013, The Society of Analytical Psychology.

  5. Model of Pressure Distribution in Vortex Flow Controls

    NASA Astrophysics Data System (ADS)

    Mielczarek, Szymon; Sawicki, Jerzy M.

    2015-06-01

    Vortex valves belong to the category of hydrodynamic flow controls. They are important and theoretically interesting devices, so complex from a hydraulic point of view that, probably for this reason, no rational concept of their operation has been proposed so far. In consequence, the functioning of vortex valves is described by CFD methods (computer-aided simulation of technical objects) or by means of simple empirical relations (using a discharge coefficient or a hydraulic loss coefficient). Such a rational model of the considered device is proposed in this paper. It has a simple algebraic form, but is well grounded physically. The basic quantitative relationship which describes the valve operation, i.e. the dependence between the flow discharge and the circumferential pressure head caused by the rotation, has been verified empirically. Conformity between the calculated and measured parameters of the device supports acceptance of the proposed concept.

  6. Black swans or dragon-kings? A simple test for deviations from the power law

    NASA Astrophysics Data System (ADS)

    Janczura, J.; Weron, R.

    2012-05-01

    We develop a simple test for deviations from power-law tails (in fact, from the tails of any distribution). We use this test, which is based on the asymptotic properties of the empirical distribution function, to answer the question whether great natural disasters, financial crashes or electricity price spikes should be classified as dragon-kings or `only' as black swans.
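
    A sketch in the same spirit (not the authors' exact statistic): fit the tail by maximum likelihood, then measure the largest deviation of the empirical tail CDF from the fitted one. The x_min threshold and the sample below are illustrative.

      import numpy as np

      def tail_deviation(data, x_min):
          tail = np.sort(data[data >= x_min])
          n = len(tail)
          alpha = 1.0 + n / np.sum(np.log(tail / x_min))   # Hill/MLE tail exponent
          f_fit = 1.0 - (tail / x_min) ** (1.0 - alpha)    # fitted tail CDF
          f_emp = np.arange(1, n + 1) / n                  # empirical tail CDF
          return alpha, np.max(np.abs(f_emp - f_fit))

      rng = np.random.default_rng(0)
      x = rng.pareto(2.5, 10_000) + 1.0                    # exact power-law sample
      alpha, d = tail_deviation(x, x_min=1.5)
      print(alpha, d)   # small d: consistent with a power law; large d flags a deviation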

  7. The Role of Word Recognition, Oral Reading Fluency and Listening Comprehension in the Simple View of Reading: A Study in an Intermediate Depth Orthography

    ERIC Educational Resources Information Center

    Cadime, Irene; Rodrigues, Bruna; Santos, Sandra; Viana, Fernanda Leopoldina; Chaves-Sousa, Séli; do Céu Cosme, Maria; Ribeiro, Iolanda

    2017-01-01

    Empirical research has provided evidence for the simple view of reading across a variety of orthographies, but the role of oral reading fluency in the model is unclear. Moreover, the relative weight of listening comprehension, oral reading fluency and word recognition in reading comprehension seems to vary across orthographies and schooling years.…

  8. High Speed Jet Noise Prediction Using Large Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Lele, Sanjiva K.

    2002-01-01

    Current methods for predicting the noise of high speed jets are largely empirical. These empirical methods are based on jet noise data gathered by varying primarily the jet flow speed and jet temperature for a fixed nozzle geometry. Efforts have been made to incorporate the noise data of co-annular (multi-stream) jets, and the changes associated with forward flight, within these empirical correlations. But ultimately these empirical methods fail to provide suitable guidance in the selection of new, low-noise nozzle designs. This motivates the development of a new class of prediction methods which are based on computational simulations, in an attempt to remove the empiricism of present day noise predictions.

  9. Estimating and Identifying Unspecified Correlation Structure for Longitudinal Data

    PubMed Central

    Hu, Jianhua; Wang, Peng; Qu, Annie

    2014-01-01

    Identifying correlation structure is important to achieving estimation efficiency in analyzing longitudinal data, and is also crucial for drawing valid statistical inference for large size clustered data. In this paper, we propose a nonparametric method to estimate the correlation structure, which is applicable for discrete longitudinal data. We utilize eigenvector-based basis matrices to approximate the inverse of the empirical correlation matrix and determine the number of basis matrices via model selection. A penalized objective function based on the difference between the empirical and model approximation of the correlation matrices is adopted to select an informative structure for the correlation matrix. The eigenvector representation of the correlation estimation is capable of reducing the risk of model misspecification, and also provides useful information on the specific within-cluster correlation pattern of the data. We show that the proposed method possesses the oracle property and selects the true correlation structure consistently. The proposed method is illustrated through simulations and two data examples on air pollution and sonar signal studies. PMID:26361433

  10. Statistical microeconomics and commodity prices: theory and empirical results.

    PubMed

    Baaquie, Belal E

    2016-01-13

    A review is made of the statistical generalization of microeconomics by Baaquie (Baaquie 2013 Phys. A 392, 4400-4416. (doi:10.1016/j.physa.2013.05.008)), where the market price of every traded commodity, at each instant of time, is considered to be an independent random variable. The dynamics of commodity market prices is given by the unequal time correlation function and is modelled by the Feynman path integral based on an action functional. The correlation functions of the model are defined using the path integral. The existence of the action functional for commodity prices that was postulated to exist in Baaquie (Baaquie 2013 Phys. A 392, 4400-4416. (doi:10.1016/j.physa.2013.05.008)) has been empirically ascertained in Baaquie et al. (Baaquie et al. 2015 Phys. A 428, 19-37. (doi:10.1016/j.physa.2015.02.030)). The model's action functionals for different commodities have been empirically determined and calibrated using the unequal time correlation functions of the market commodity prices using a perturbation expansion (Baaquie et al. 2015 Phys. A 428, 19-37. (doi:10.1016/j.physa.2015.02.030)). Nine commodities drawn from the energy, metal and grain sectors are empirically studied and their auto-correlation for up to 300 days is described by the model to an accuracy of R² > 0.90, using only six parameters. © 2015 The Author(s).

  11. Improved Design of Tunnel Supports : Executive Summary

    DOT National Transportation Integrated Search

    1979-12-01

    This report focuses on improvement of design methodologies related to the ground-structure interaction in tunneling. The design methods range from simple analytical and empirical methods to sophisticated finite element techniques as well as an evalua...

  12. Variety and volatility in financial markets

    NASA Astrophysics Data System (ADS)

    Lillo, Fabrizio; Mantegna, Rosario N.

    2000-11-01

    We study the price dynamics of stocks traded in a financial market by considering the statistical properties of both a single time series and an ensemble of stocks traded simultaneously. We use the n stocks traded on the New York Stock Exchange to form a statistical ensemble of daily stock returns. For each trading day of our database, we study the ensemble return distribution. We find that a typical ensemble return distribution exists in most of the trading days with the exception of crash and rally days and of the days following these extreme events. We analyze each ensemble return distribution by extracting its first two central moments. We observe that these moments fluctuate in time and are stochastic processes, themselves. We characterize the statistical properties of ensemble return distribution central moments by investigating their probability density functions and temporal correlation properties. In general, time-averaged and portfolio-averaged price returns have different statistical properties. We infer from these differences information about the relative strength of correlation between stocks and between different trading days. Last, we compare our empirical results with those predicted by the single-index model and we conclude that this simple model cannot explain the statistical properties of the second moment of the ensemble return distribution.
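
    The ensemble quantities involved are easy to reproduce. The Python sketch below computes the daily cross-sectional mean and dispersion ("variety") on synthetic returns, since the NYSE dataset itself is not included here; the heavy-tailed toy data are an assumption.

      import numpy as np

      # rows are trading days, columns are stocks (toy data)
      rng = np.random.default_rng(42)
      returns = rng.standard_t(df=4, size=(2500, 500)) * 0.01

      mu_t = returns.mean(axis=1)       # ensemble mean return of day t
      variety_t = returns.std(axis=1)   # cross-sectional dispersion ("variety") of day t

      # lag-1 temporal autocorrelation of the variety
      v = variety_t - variety_t.mean()
      print(np.dot(v[:-1], v[1:]) / np.dot(v, v))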

  13. TS-Chemscore, a Target-Specific Scoring Function, Significantly Improves the Performance of Scoring in Virtual Screening.

    PubMed

    Wang, Wen-Jing; Huang, Qi; Zou, Jun; Li, Lin-Li; Yang, Sheng-Yong

    2015-07-01

    Most of the scoring functions currently used in structure-based drug design belong to the class of 'universal' scoring functions, which often give a poor correlation between calculated scores and experimental binding affinities. In this investigation, we proposed a simple strategy to construct target-specific scoring functions based on known 'universal' scoring functions. This strategy was applied to Chemscore, a widely used empirical scoring function, which led to a new scoring function, termed TS-Chemscore. TS-Chemscore was validated on 14 protein targets, which cover a wide range of biological target categories. The results showed that TS-Chemscore significantly improved the correlation between the calculated scores and experimental binding affinities compared with the original Chemscore. TS-Chemscore was then applied in virtual screening to retrieve novel JAK3 and YopH inhibitors. The top 30 compounds for each target were selected for experimental validation. Six active compounds for JAK3 and four for YopH were obtained; none of these compounds appeared in the top-30 lists ranked by the original Chemscore. Collectively, TS-Chemscore established in this study showed a better performance in virtual screening than its counterpart Chemscore. © 2014 John Wiley & Sons A/S.

  14. Linear free energy relationships of the 1H and 13C NMR chemical shifts of 3-methylene-2-substituted-1,4-pentadienes

    NASA Astrophysics Data System (ADS)

    Valentić, Nataša V.; Vitnik, Željko; Kozhushkov, Sergei I.; de Meijere, Armin; Ušćumlić, Gordana S.; Juranić, Ivan O.

    2005-06-01

    Linear free energy relationships (LFER) were applied to the 1H and 13C NMR chemical shifts (δN, N = 1H and 13C, respectively) in the unsaturated backbone of the cross-conjugated trienes 3-methylene-2-substituted-1,4-pentadienes. The NMR data were correlated using five different LFER models, based on the mono, dual and triple substituent parameter (MSP, DSP and TSP, respectively) treatments. The simple and extended Hammett equations, and the three postulated unconventional LFER models obtained by adaptation of the latter, were used. The geometry data, which are needed in Karplus-type and McConnell-type analysis, were obtained using semi-empirical MNDO-PM3 calculations. In correlating the data, the TSP approach was more successful than the MSP and DSP approaches. The fact that the calculated molecular geometries allow accurate prediction of the NMR data confirms the validity of the unconventional LFER models used. These results suggest the s-cis conformation of the cross-conjugated triene as the preferred one. The postulated unconventional DSP and TSP equations enable the assessment of electronic substituent effects in the presence of other interfering influences.

  15. Airspace Dimension Assessment with nanoparticles reflects lung density as quantified by MRI

    PubMed Central

    Jakobsson, Jonas K; Löndahl, Jakob; Olsson, Lars E; Diaz, Sandra; Zackrisson, Sophia; Wollmer, Per

    2018-01-01

    Background: Airspace Dimension Assessment with inhaled nanoparticles is a novel method to determine distal airway morphology. This is the first empirical study using Airspace Dimension Assessment with nanoparticles (AiDA) to estimate distal airspace radius. The technology is relatively simple and potentially accessible in clinical outpatient settings. Method: Nineteen never-smoking volunteers performed nanoparticle inhalation tests at multiple breath-hold times, and the difference in nanoparticle concentration of inhaled and exhaled gas was measured. An exponential decay curve was fitted to the concentration of recovered nanoparticles, and airspace dimensions were assessed from the half-life of the decay. Pulmonary tissue density was measured using magnetic resonance imaging (MRI). Results: The distal airspace radius measured by AiDA correlated with lung tissue density as measured by MRI (ρ = -0.584; p = 0.0086). The linear intercept of the logarithm of the exponential decay curve correlated with forced expiratory volume in one second (FEV1) (ρ = 0.549; p = 0.0149). Conclusion: The AiDA method shows potential to be developed into a tool to assess conditions involving changes in distal airways, e.g. emphysema. The intercept may reflect airway properties; this finding should be further investigated.

  16. An empirical relationship for homogenization in single-phase binary alloy systems

    NASA Technical Reports Server (NTRS)

    Unnam, J.; Tenney, D. R.; Stein, B. A.

    1979-01-01

    A semiempirical formula is developed for describing the extent of interaction between constituents in single-phase binary alloy systems with planar, cylindrical, or spherical interfaces. The formula contains two parameters that are functions of mean concentration and interface geometry of the couple. The empirical solution is simple, easy to use, and does not involve sequential calculations, thereby allowing quick estimation of the extent of interactions without lengthy calculations. Results obtained with this formula are in good agreement with those from a finite-difference analysis.

  17. Malthusian dynamics in a diverging Europe: Northern Italy, 1650-1881.

    PubMed

    Fernihough, Alan

    2013-02-01

    Recent empirical research questions the validity of using Malthusian theory in preindustrial England. Using real wage and vital rate data for the years 1650-1881, I provide empirical estimates for a different region: Northern Italy. The empirical methodology is theoretically underpinned by a simple Malthusian model, in which population, real wages, and vital rates are determined endogenously. My findings strongly support the existence of a Malthusian economy wherein population growth decreased living standards, which in turn influenced vital rates. However, these results also demonstrate how the system is best characterized as one of weak homeostasis. Furthermore, there is no evidence of Boserupian effects given that increases in population failed to spur any sustained technological progress.

  18. An evaluation of rise time characterization and prediction methods

    NASA Technical Reports Server (NTRS)

    Robinson, Leick D.

    1994-01-01

    One common method of extrapolating sonic boom waveforms from aircraft to ground is to calculate the nonlinear distortion and then add a rise time to each shock by a simple empirical rule. One common rule is the '3 over P' rule, which calculates the rise time in milliseconds as three divided by the shock amplitude in psf. This rule was compared with the results of ZEPHYRUS, a comprehensive algorithm which calculates sonic boom propagation and extrapolation with the combined effects of nonlinearity, attenuation, dispersion, geometric spreading, and refraction in a stratified atmosphere. It is shown that the simple empirical rule considerably overestimates the rise time. In addition, the empirical rule does not account for variations in the rise time due to humidity variation or propagation history. It is also demonstrated that the rise time is only an approximate indicator of perceived loudness. Three waveforms with identical characteristics (shock placement, amplitude, and rise time), but with different shock shapes, are shown to give different calculated loudness. This paper is based in part on work performed at the Applied Research Laboratories, the University of Texas at Austin, and supported by NASA Langley.
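
    The quoted empirical rule is simple enough to state as code (amplitude in psf, rise time in milliseconds):

      def rise_time_ms(shock_amplitude_psf):
          """Empirical '3 over P' rule quoted above: rise time in
          milliseconds equals three divided by the shock overpressure in psf."""
          return 3.0 / shock_amplitude_psf

      print(rise_time_ms(1.5))   # a 1.5 psf shock -> 2 ms rise time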

  19. Review of Thawing Time Prediction Models Depending on Process Conditions and Product Characteristics

    PubMed Central

    Kluza, Franciszek; Spiess, Walter E. L.; Kozłowicz, Katarzyna

    2016-01-01

    Summary: Determining thawing times of frozen foods is a challenging problem, as the thermophysical properties of the product change during thawing. A number of calculation models and solutions have been developed. The proposed solutions range from relatively simple analytical equations based on a number of assumptions to a group of empirical approaches that sometimes require complex calculations. In this paper analytical, empirical and graphical models are presented and critically reviewed. The conditions of solution, limitations and possible applications of the models are discussed. The graphical and semi-graphical models are derived from numerical methods. Using numerical methods is not always possible, as running the calculations takes time and the specialized software and equipment are not always cheap. For these reasons, the application of analytical-empirical models is more useful for engineering. It is demonstrated that there is no simple, accurate and feasible analytical method for thawing time prediction. Consequently, simplified methods are needed for thawing time estimation of agricultural and food products. The review reveals the need for further improvement of the existing solutions or development of new ones that will enable accurate determination of thawing time within a wide range of practical conditions of heat transfer during processing. PMID:27904387

  20. Estimating trends in the global mean temperature record

    NASA Astrophysics Data System (ADS)

    Poppick, Andrew; Moyer, Elisabeth J.; Stein, Michael L.

    2017-06-01

    Given uncertainties in physical theory and numerical climate simulations, the historical temperature record is often used as a source of empirical information about climate change. Many historical trend analyses appear to de-emphasize physical and statistical assumptions: examples include regression models that treat time rather than radiative forcing as the relevant covariate, and time series methods that account for internal variability in nonparametric rather than parametric ways. However, given a limited data record and the presence of internal variability, estimating radiatively forced temperature trends in the historical record necessarily requires some assumptions. Ostensibly empirical methods can also involve an inherent conflict in assumptions: they require data records that are short enough for naive trend models to be applicable, but long enough for long-timescale internal variability to be accounted for. In the context of global mean temperatures, empirical methods that appear to de-emphasize assumptions can therefore produce misleading inferences, because the trend over the twentieth century is complex and the scale of temporal correlation is long relative to the length of the data record. We illustrate here how a simple but physically motivated trend model can provide better-fitting and more broadly applicable trend estimates and can allow for a wider array of questions to be addressed. In particular, the model allows one to distinguish, within a single statistical framework, between uncertainties in the shorter-term vs. longer-term response to radiative forcing, with implications not only on historical trends but also on uncertainties in future projections. We also investigate the consequence on inferred uncertainties of the choice of a statistical description of internal variability. While nonparametric methods may seem to avoid making explicit assumptions, we demonstrate how even misspecified parametric statistical methods, if attuned to the important characteristics of internal variability, can result in more accurate uncertainty statements about trends.
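
    As a concrete illustration of the contrast drawn here, the sketch below fits a forcing-based trend with parametric AR(1) internal variability via Cochrane-Orcutt iteration. It is a minimal sketch assuming a single forcing covariate and toy data, not the authors' full model.

      import numpy as np

      def fit_trend(temp, forcing, n_iter=10):
          X = np.column_stack([np.ones_like(forcing), forcing])
          beta = np.linalg.lstsq(X, temp, rcond=None)[0]
          rho = 0.0
          for _ in range(n_iter):
              resid = temp - X @ beta
              rho = np.dot(resid[:-1], resid[1:]) / np.dot(resid[:-1], resid[:-1])
              # quasi-difference to whiten the AR(1) noise, then re-fit
              y_star = temp[1:] - rho * temp[:-1]
              X_star = X[1:] - rho * X[:-1]
              beta = np.linalg.lstsq(X_star, y_star, rcond=None)[0]
          return beta, rho   # beta[1]: sensitivity of temperature to forcing

      rng = np.random.default_rng(0)
      forcing = np.linspace(0.0, 2.5, 150)          # toy forcing history (W/m^2)
      noise = np.zeros(150)
      for t in range(1, 150):                       # AR(1) internal variability
          noise[t] = 0.6 * noise[t - 1] + rng.normal(0.0, 0.1)
      temp = 0.5 * forcing + noise                  # toy temperatures (K)
      print(fit_trend(temp, forcing))               # recovers ~0.5 and rho ~0.6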

  1. Estimation of the simple correlation coefficient.

    PubMed

    Shieh, Gwowen

    2010-11-01

    This article investigates some unfamiliar properties of the Pearson product-moment correlation coefficient for the estimation of the simple correlation coefficient. Although Pearson's r is biased, except in limited situations, and a minimum variance unbiased estimator has been proposed in the literature, researchers routinely employ the sample correlation coefficient in their practical applications because of its simplicity and popularity. In order to support such practice, this study examines the mean squared errors of r and several prominent formulas. The results reveal specific situations in which the sample correlation coefficient performs better than the unbiased and nearly unbiased estimators, facilitating the recommendation of r as an effect size index for the strength of linear association between two variables. In addition, related issues of estimating the squared simple correlation coefficient are also considered.
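
    For context, the most common nearly unbiased alternative to the sample r is the first-order Olkin-Pratt-type correction, quoted here from the general literature (not from this article):

      def approx_unbiased_r(r, n):
          """Approximately unbiased estimator of the population correlation:
          first-order Olkin-Pratt-type correction to the sample r for
          sample size n (a standard approximation, stated as context)."""
          return r * (1.0 + (1.0 - r * r) / (2.0 * (n - 3.0)))

      print(approx_unbiased_r(0.30, 20))   # ~0.308: the correction inflates small-n r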

  2. Semi-empirical formulation of multiple scattering for the Gaussian beam model of heavy charged particles stopping in tissue-like matter.

    PubMed

    Kanematsu, Nobuyuki

    2009-03-07

    Dose calculation for radiotherapy with protons and heavier ions deals with a large volume of path integrals involving a scattering power of body tissue. This work provides a simple model for such demanding applications. There is an approximate linearity between RMS end-point displacement and range of incident particles in water, empirically found in measurements and detailed calculations. This fact was translated into a simple linear formula, from which the scattering power that is only inversely proportional to the residual range was derived. The simplicity enabled the analytical formulation for ions stopping in water, which was designed to be equivalent with the extended Highland model and agreed with measurements within 2% or 0.02 cm in RMS displacement. The simplicity will also improve the efficiency of numerical path integrals in the presence of heterogeneity.

  3. Atheists and Agnostics Are More Reflective than Religious Believers: Four Empirical Studies and a Meta-Analysis.

    PubMed

    Pennycook, Gordon; Ross, Robert M; Koehler, Derek J; Fugelsang, Jonathan A

    2016-01-01

    Individual differences in the mere willingness to think analytically have been shown to predict religious disbelief. Recently, however, it has been argued that analytic thinkers are not actually less religious; rather, the putative association may be a result of religiosity typically being measured after analytic thinking (an order effect). In light of this possibility, we report four studies in which a negative correlation between religious belief and performance on analytic thinking measures is found when religious belief is measured in a separate session. We also performed a meta-analysis on all previously published studies on the topic along with our four new studies (N = 15,078, k = 31), focusing specifically on the association between performance on the Cognitive Reflection Test (the most widely used individual difference measure of analytic thinking) and religious belief. This meta-analysis revealed an overall negative correlation (r) of -.18, 95% CI [-.21, -.16]. Although this correlation is modest, self-identified atheists (N = 133) scored 18.7% higher than religiously affiliated individuals (N = 597) on a composite measure of analytic thinking administered across our four new studies (d = .72). Our results indicate that the association between analytic thinking and religious disbelief is not caused by a simple order effect. There is good evidence that atheists and agnostics are more reflective than religious believers.

  4. Atheists and Agnostics Are More Reflective than Religious Believers: Four Empirical Studies and a Meta-Analysis

    PubMed Central

    Pennycook, Gordon; Ross, Robert M.; Koehler, Derek J.; Fugelsang, Jonathan A.

    2016-01-01

    Individual differences in the mere willingness to think analytically have been shown to predict religious disbelief. Recently, however, it has been argued that analytic thinkers are not actually less religious; rather, the putative association may be a result of religiosity typically being measured after analytic thinking (an order effect). In light of this possibility, we report four studies in which a negative correlation between religious belief and performance on analytic thinking measures is found when religious belief is measured in a separate session. We also performed a meta-analysis on all previously published studies on the topic along with our four new studies (N = 15,078, k = 31), focusing specifically on the association between performance on the Cognitive Reflection Test (the most widely used individual difference measure of analytic thinking) and religious belief. This meta-analysis revealed an overall negative correlation (r) of -.18, 95% CI [-.21, -.16]. Although this correlation is modest, self-identified atheists (N = 133) scored 18.7% higher than religiously affiliated individuals (N = 597) on a composite measure of analytic thinking administered across our four new studies (d = .72). Our results indicate that the association between analytic thinking and religious disbelief is not caused by a simple order effect. There is good evidence that atheists and agnostics are more reflective than religious believers. PMID:27054566

  5. Experimental investigation of liquid-liquid system drop size distribution in Taylor-Couette flow and its application in the CFD simulation

    NASA Astrophysics Data System (ADS)

    Farzad, Reza; Puttinger, Stefan; Pirker, Stefan; Schneiderbauer, Simon

    Liquid-liquid systems are widely used in several industries, such as the food, pharmaceutical, cosmetic, chemical and petroleum industries. The drop size distribution (DSD) plays a key role as it strongly affects the overall mass and heat transfer in liquid-liquid systems. To understand the underlying mechanisms, single-drop breakup experiments have been done by several researchers in Taylor-Couette flow; however, most of those studies concentrate on the laminar flow regime and, therefore, there is not a sufficient amount of data for turbulent flows. The well-defined pattern of the Taylor-Couette flow enables the possibility to investigate the DSD as a function of local fluid dynamic properties, such as the shear rate, in contrast to more complex devices such as stirred tank reactors. This paper deals with the experimental investigation of liquid-liquid DSD in Taylor-Couette flow. From high-speed camera images and image processing, we found a simple correlation for the Sauter mean diameter as a function of the local shear. It is shown that this correlation holds for different oil-in-water emulsions. Finally, this empirical correlation for the DSD is used as input data for a CFD simulation to compute the local breakup of individual droplets in a stirred tank reactor.
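
    Correlations of this kind (Sauter mean diameter versus local shear) are typically fitted as a power law on log-log axes; a minimal Python sketch with hypothetical data, not the paper's measured correlation:

      import numpy as np

      # hypothetical measurements: shear rate (1/s) and Sauter diameter (microns)
      shear = np.array([50.0, 100.0, 200.0, 400.0])
      d32 = np.array([210.0, 150.0, 110.0, 80.0])

      # fit d32 = a * shear**b by least squares in log-log space
      b, log_a = np.polyfit(np.log(shear), np.log(d32), 1)
      a = np.exp(log_a)
      print(a, b)   # droplets shrink with shear, so b < 0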

  6. Two Simple Rules for Improving the Accuracy of Empiric Treatment of Multidrug-Resistant Urinary Tract Infections.

    PubMed

    Linsenmeyer, Katherine; Strymish, Judith; Gupta, Kalpana

    2015-12-01

    The emergence of multidrug-resistant (MDR) uropathogens is making the treatment of urinary tract infections (UTIs) more challenging. We sought to evaluate the accuracy of empiric therapy for MDR UTIs and the utility of prior culture data in improving the accuracy of the therapy chosen. The electronic health records from three U.S. Department of Veterans Affairs facilities were retrospectively reviewed for the treatments used for MDR UTIs over 4 years. An MDR UTI was defined as an infection caused by a uropathogen resistant to three or more classes of drugs and identified by a clinician to require therapy. Previous data on culture results, antimicrobial use, and outcomes were captured from records from inpatient and outpatient settings. Among 126 patient episodes of MDR UTIs, the choices of empiric therapy against the index pathogen were accurate in 66 (52%) episodes. For the 95 patient episodes for which prior microbiologic data were available, when empiric therapy was concordant with the prior microbiologic data, the rate of accuracy of the treatment against the uropathogen improved from 32% to 76% (odds ratio, 6.9; 95% confidence interval, 2.7 to 17.1; P < 0.001). Genitourinary tract (GU)-directed agents (nitrofurantoin or sulfa agents) were equally as likely as broad-spectrum agents to be accurate (P = 0.3). Choosing an agent concordant with previous microbiologic data significantly increased the chance of accuracy of therapy for MDR UTIs, even if the previous uropathogen was a different species. Also, GU-directed or broad-spectrum therapy choices were equally likely to be accurate. The accuracy of empiric therapy could be improved by the use of these simple rules. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  7. Empirical correlations of the performance of vapor-anode PX-series AMTEC cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, L.; Merrill, J.M.; Mayberry, C.

    Power systems based on AMTEC technology will be used for future NASA missions, including a Pluto-Express (PX) or Europa mission planned for approximately year 2004. AMTEC technology may also be used as an alternative to photovoltaic-based power systems for future Air Force missions. An extensive development program of Alkali-Metal Thermal-to-Electric Conversion (AMTEC) technology has been underway at the Vehicle Technologies Branch of the Air Force Research Laboratory (AFRL) in Albuquerque, New Mexico since 1992. Under this program, numerical modeling and experimental investigations of the performance of the various multi-BASE tube, vapor-anode AMTEC cells have been and are being performed. Vacuum testing of AMTEC cells at AFRL determines the effects of changing the hot and cold end temperatures, T_hot and T_cold, and the applied external load, R_ext, on the cell electric power output, current-voltage characteristics, and conversion efficiency. Test results have traditionally been used to provide feedback to cell designers, and to validate numerical models. The current work utilizes the test data to develop empirical correlations for cell output performance under various working conditions. Because the empirical correlations are developed directly from the experimental data, uncertainties arising from material properties that must be used in numerical modeling can be avoided. Empirical correlations of recent vapor-anode PX-series AMTEC cells have been developed. Based on AMTEC theory and the experimental data, the cell output power (as well as voltage and current) was correlated as a function of three parameters (T_hot, T_cold, and R_ext) for a given cell. Correlations were developed for different cells (PX-3C, PX-3A, PX-G3, and PX-5A), and were in good agreement with experimental data for these cells. Use of these correlations can greatly reduce the testing required to determine the electrical performance of a given type of AMTEC cell over a wide range of operating conditions.

  8. Correlated pay-offs are key to cooperation

    PubMed Central

    Frommen, Joachim G.; Riehl, Christina

    2016-01-01

    The general belief that cooperation and altruism in social groups result primarily from kin selection has recently been challenged, not least because results from cooperatively breeding insects and vertebrates have shown that groups may be composed mainly of non-relatives. This allows testing predictions of reciprocity theory without the confounding effect of relatedness. Here, we review complementary and alternative evolutionary mechanisms to kin selection theory and provide empirical examples of cooperative behaviour among unrelated individuals in a wide range of taxa. In particular, we focus on the different forms of reciprocity and on their underlying decision rules, asking about evolutionary stability, the conditions selecting for reciprocity and the factors constraining reciprocal cooperation. We find that neither the cognitive requirements of reciprocal cooperation nor the often sequential nature of interactions are insuperable stumbling blocks for the evolution of reciprocity. We argue that simple decision rules such as ‘help anyone if helped by someone’ should get more attention in future research, because empirical studies show that animals apply such rules, and theoretical models find that they can create stable levels of cooperation under a wide range of conditions. Owing to its simplicity, behaviour based on such a heuristic may in fact be ubiquitous. Finally, we argue that the evolution of exchange and trading of service and commodities among social partners needs greater scientific focus. PMID:26729924

  9. A simple model for calculating air pollution within street canyons

    NASA Astrophysics Data System (ADS)

    Venegas, Laura E.; Mazzeo, Nicolás A.; Dezzutti, Mariana C.

    2014-04-01

    This paper introduces the Semi-Empirical Urban Street (SEUS) model. SEUS is a simple mathematical model based on the scaling of air pollution concentration inside street canyons, employing the emission rate, the width of the canyon, the dispersive velocity scale and the background concentration. The dispersive velocity scale depends on turbulent motions related to wind and traffic. The parameterisations of these turbulent motions include two dimensionless empirical parameters. Functional forms of these parameters have been obtained from full-scale data measured in street canyons in four European cities. The sensitivity of the SEUS model is studied analytically. Results show that relative errors in the evaluation of the two dimensionless empirical parameters have less influence on model uncertainties than uncertainties in other input variables. The model estimates NO2 concentrations using a simple photochemistry scheme. SEUS is applied to estimate NOx and NO2 hourly concentrations in an irregular and busy street canyon in the city of Buenos Aires. The statistical evaluation of results shows that there is good agreement between estimated and observed hourly concentrations (e.g. fractional biases are -10.3% for NOx and +7.8% for NO2). The agreement between the estimated and observed values has also been analysed in terms of its dependence on wind speed and direction. The model shows a better performance for wind speeds >2 m s-1 than for lower wind speeds, and for leeward situations than for others. No significant discrepancies have been found between the results of the proposed model and those of a widely used operational dispersion model (OSPM), both using the same input information.
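
    The scaling structure described above can be sketched compactly. The following Python function is a schematic of that structure only, with placeholder coefficients a and b standing in for the two calibrated dimensionless parameters; it is not the published SEUS parameterisation.

      import math

      def street_canyon_conc(Q, W, u_wind, v_traffic, C_b, a=0.1, b=0.05):
          """Q: emission rate per unit street length; W: canyon width;
          u_wind: rooftop wind speed; v_traffic: traffic-induced velocity
          scale; C_b: urban background concentration. a, b: placeholder
          values for the two empirical parameters (assumed, not calibrated)."""
          # wind- and traffic-induced turbulence combined in quadrature
          sigma = math.sqrt((a * u_wind) ** 2 + (b * v_traffic) ** 2)
          return Q / (W * sigma) + C_b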

  10. 40 CFR Appendix C to Part 75 - Missing Data Estimation Procedures

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... certification of a parametric, empirical, or process simulation method or model for calculating substitute data... available process simulation methods and models. 1.2 Petition Requirements Continuously monitor, determine... desulfurization, a corresponding empirical correlation or process simulation parametric method using appropriate...

  11. Lutetium oxyorthosilicate (LSO) intrinsic activity correction and minimal detectable target activity study for SPECT imaging with a LSO-based animal PET scanner

    NASA Astrophysics Data System (ADS)

    Yao, Rutao; Ma, Tianyu; Shao, Yiping

    2008-08-01

    This work is part of a feasibility study to develop SPECT imaging capability on a lutetium oxyorthosilicate (LSO) based animal PET system. The SPECT acquisition was enabled by inserting a collimator assembly inside the detector ring and acquiring data in singles mode. The same LSO detectors were used for both PET and SPECT imaging. The intrinsic radioactivity of 176Lu in the LSO crystals, however, contaminates the SPECT data, and can generate image artifacts and introduce quantification error. The objectives of this study were to evaluate the effectiveness of a LSO background subtraction method, and to estimate the minimal detectable target activity (MDTA) of the image object for SPECT imaging. For LSO background correction, the LSO contribution in an image study was estimated based on a pre-measured long LSO background scan and subtracted prior to the image reconstruction. The MDTA was estimated in two ways. The empirical MDTA (eMDTA) was estimated by screening the tomographic images at different activity levels. The calculated MDTA (cMDTA) was estimated by applying a formula based on a modified Currie equation to an average projection dataset. Two simulated and two experimental phantoms with different object activity distributions and levels were used in this study. The results showed that the LSO background adds concentric ring artifacts to the reconstructed image, and that the simple subtraction method can effectively remove these artifacts; the effect of the correction was more visible when the object activity level was near or above the eMDTA. For the four phantoms studied, the cMDTA was consistently about five times the corresponding eMDTA. In summary, we implemented a simple LSO background subtraction method and demonstrated its effectiveness. The projection-based calculation formula yielded MDTA results that closely correlate with those obtained empirically and may have predictive value for imaging applications.
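
    For orientation, a Currie-type minimal detectable activity has the familiar closed form below. The paper applies a modified Currie equation to an average projection dataset; this sketch is the textbook single-measurement version, with assumed efficiency and counting-time values.

      import math

      def minimal_detectable_activity(background_counts, efficiency, t_seconds):
          """Textbook Currie detection limit converted to activity (Bq);
          efficiency and t_seconds are assumed instrument parameters."""
          L_D = 2.71 + 4.65 * math.sqrt(background_counts)   # Currie detection limit (counts)
          return L_D / (efficiency * t_seconds)

      print(minimal_detectable_activity(1e4, efficiency=0.02, t_seconds=600.0))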

  12. Phase correlation of foreign exchange time series

    NASA Astrophysics Data System (ADS)

    Wu, Ming-Chya

    2007-03-01

    Correlation of foreign exchange rates in currency markets is investigated based on empirical data for the USD/DEM and USD/JPY exchange rates over the period from February 1, 1986 to December 31, 1996. The return series of the exchange rates is first decomposed into a number of intrinsic mode functions (IMFs) by the empirical mode decomposition method. The instantaneous phases of the resultant IMFs, calculated by the Hilbert transform, are then used to characterize the behaviors of pricing transmissions, and the correlation is probed by measuring the phase differences between two IMFs of the same order. From the distribution of phase differences, our results show explicitly that the correlations are stronger on the daily time scale than on longer time scales. A comparison of the periods 1986-1989 and 1990-1993 indicates that the two exchange rates were more correlated in the former period than in the latter. This result is consistent with the observations from the cross-correlation calculation.
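
    The phase-difference measurement is straightforward once the IMFs are in hand. The sketch below assumes the empirical mode decomposition has already been done and uses synthetic same-order IMFs; only the Hilbert-phase step is shown.

      import numpy as np
      from scipy.signal import hilbert

      # synthetic stand-ins for two same-order IMFs (one phase-shifted copy)
      t = np.linspace(0.0, 10.0, 2000)
      imf_a = np.sin(2 * np.pi * 1.0 * t)
      imf_b = np.sin(2 * np.pi * 1.0 * t + 0.3)

      # instantaneous phase from the analytic signal
      phase_a = np.unwrap(np.angle(hilbert(imf_a)))
      phase_b = np.unwrap(np.angle(hilbert(imf_b)))
      dphi = phase_a - phase_b

      # a narrow distribution of dphi indicates strong phase correlation
      print(dphi.mean(), dphi.std())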

  13. A Mathematical Model of a Simple Amplifier Using a Ferroelectric Transistor

    NASA Technical Reports Server (NTRS)

    Sayyah, Rana; Hunt, Mitchell; MacLeod, Todd C.; Ho, Fat D.

    2009-01-01

    This paper presents a mathematical model characterizing the behavior of a simple amplifier using a FeFET. The model is based on empirical data and incorporates several variables that affect the output, including frequency, load resistance, and gate-to-source voltage. Since the amplifier is the basis of many circuit configurations, a mathematical model that describes the behavior of a FeFET-based amplifier will help in the integration of FeFETs into many other circuits.

  14. Development of apparent viscosity test for hot-poured crack sealants.

    DOT National Transportation Integrated Search

    2008-11-01

    Current crack sealant specifications focus on utilizing simple empirical tests such as penetration, resilience, flow, and bonding to cement concrete briquettes (ASTM D3405) to measure the ability of the material to resist cohesive and adhesion ...

  15. Selecting informative subsets of sparse supermatrices increases the chance to find correct trees.

    PubMed

    Misof, Bernhard; Meyer, Benjamin; von Reumont, Björn Marcus; Kück, Patrick; Misof, Katharina; Meusemann, Karen

    2013-12-03

    Character matrices with extensive missing data are frequently used in phylogenomics, with potentially detrimental effects on the accuracy and robustness of tree inference. Therefore, many investigators select taxa and genes with high data coverage. Drawbacks of these selections are their exclusive reliance on data coverage without consideration of actual signal in the data, which might thus not deliver optimal data matrices in terms of potential phylogenetic signal. In order to circumvent this problem, we have developed a heuristic, implemented in a software tool called mare, which (1) assesses the information content of genes in supermatrices using a measure of potential signal combined with data coverage, and (2) reduces supermatrices with a simple hill climbing procedure to submatrices with high total information content. We conducted simulation studies using matrices of 50 taxa × 50 genes with heterogeneous phylogenetic signal among genes and data coverage between 10-30%. Under these conditions, Maximum Likelihood (ML) tree reconstructions failed to recover correct trees. A selection of a data subset with the herein proposed approach increased the chance to recover correct partial trees more than 10-fold. The selection of data subsets with the proposed simple hill climbing procedure performed well whether the information content or just simple presence/absence information of genes was considered. We also applied our approach to an empirical data set addressing questions of vertebrate systematics. With this empirical dataset, selecting a data subset with high information content that supported a tree with high average bootstrap support was most successful when the information content of genes was considered. Our analyses of simulated and empirical data demonstrate that sparse supermatrices can be reduced on a formal basis, outperforming the usually used simple selections of taxa and genes with high data coverage.
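
    The hill-climbing reduction can be caricatured in a few lines. The sketch below is a toy version of the idea, not mare's actual scoring: greedily drop the taxon or gene whose removal most increases the mean per-cell information, given an assumed taxa x genes matrix of signal scores (0 = missing data).

      import numpy as np

      def reduce_matrix(info, min_taxa=4, min_genes=1):
          taxa = list(range(info.shape[0]))
          genes = list(range(info.shape[1]))

          def score(t, g):
              sub = info[np.ix_(t, g)]
              return sub.sum() / sub.size          # mean information per cell

          improved = True
          while improved:
              improved = False
              best = score(taxa, genes)
              for axis, items, floor in ((0, taxa, min_taxa), (1, genes, min_genes)):
                  if len(items) <= floor:
                      continue
                  for x in list(items):
                      trial = [i for i in items if i != x]
                      s = score(trial, genes) if axis == 0 else score(taxa, trial)
                      if s > best:                 # removing x improves the submatrix
                          best, drop, drop_axis = s, x, axis
                          improved = True
              if improved:
                  (taxa if drop_axis == 0 else genes).remove(drop)
          return taxa, genes

      rng = np.random.default_rng(0)
      info = rng.random((20, 30)) * (rng.random((20, 30)) > 0.7)   # ~70% missing
      taxa, genes = reduce_matrix(info)
      print(len(taxa), len(genes))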

  16. A Web Application For Visualizing Empirical Models of the Space-Atmosphere Interface Region: AtModWeb

    NASA Astrophysics Data System (ADS)

    Knipp, D.; Kilcommons, L. M.; Damas, M. C.

    2015-12-01

    We have created a simple and user-friendly web application to visualize output from empirical atmospheric models that describe the lower atmosphere and the Space-Atmosphere Interface Region (SAIR). The Atmospheric Model Web Explorer (AtModWeb) is a lightweight, multi-user, Python-driven application which uses standard web technology (jQuery, HTML5, CSS3) to give an in-browser interface that can produce plots of modeled quantities such as temperature and individual species and total densities of neutral and ionized upper-atmosphere. Output may be displayed as: 1) a contour plot over a map projection, 2) a pseudo-color plot (heatmap) which allows visualization of a variable as a function of two spatial coordinates, or 3) a simple line plot of one spatial coordinate versus any number of desired model output variables. The application is designed around an abstraction of an empirical atmospheric model, essentially treating the model code as a black box, which makes it simple to add additional models without modifying the main body of the application. Currently implemented are the Naval Research Laboratory NRLMSISE00 model for neutral atmosphere and the International Reference Ionosphere (IRI). These models are relevant to the Low Earth Orbit environment and the SAIR. The interface is simple and usable, allowing users (students and experts) to specify time and location, and choose between historical (i.e. the values for the given date) or manual specification of whichever solar or geomagnetic activity drivers are required by the model. We present a number of use-case examples from research and education: 1) How does atmospheric density between the surface and 1000 km vary with time of day, season and solar cycle?; 2) How do ionospheric layers change with the solar cycle?; 3) How does the composition of the SAIR vary between day and night at a fixed altitude?

  17. Quenched Large Deviations for Simple Random Walks on Percolation Clusters Including Long-Range Correlations

    NASA Astrophysics Data System (ADS)

    Berger, Noam; Mukherjee, Chiranjib; Okamura, Kazuki

    2018-03-01

    We prove a quenched large deviation principle (LDP) for a simple random walk on a supercritical percolation cluster (SRWPC) on {Z^d} ({d ≥ 2}). The models of interest include classical Bernoulli bond and site percolation as well as models that exhibit long-range correlations, such as the random cluster model, random interlacements and the vacant set of random interlacements (for {d ≥ 3}), and the level sets of the Gaussian free field ({d ≥ 3}). Inspired by the methods developed by Kosygina et al. (Commun Pure Appl Math 59:1489-1521, 2006) for proving quenched LDP for elliptic diffusions with a random drift, and by Yilmaz (Commun Pure Appl Math 62(8):1033-1075, 2009) and Rosenbluth (Quenched large deviations for multidimensional random walks in a random environment: a variational formula. Ph.D. thesis, NYU, arXiv:0804.1444v1) for similar results regarding elliptic random walks in random environment, we take the point of view of the moving particle and prove a large deviation principle for the quenched distribution of the pair empirical measures of the environment Markov chain in the non-elliptic case of SRWPC. Via a contraction principle, this reduces easily to a quenched LDP for the distribution of the mean velocity of the random walk, and both rate functions admit explicit variational formulas. The main difficulty in our setup lies in the inherent non-ellipticity as well as the lack of translation-invariance stemming from conditioning on the fact that the origin belongs to the infinite cluster. We develop a unifying approach for proving quenched large deviations for SRWPC based on exploiting coercivity properties of the relative entropies in the context of convex variational analysis, combined with input from ergodic theory and invoking geometric properties of the supercritical percolation cluster.

  18. Quenched Large Deviations for Simple Random Walks on Percolation Clusters Including Long-Range Correlations

    NASA Astrophysics Data System (ADS)

    Berger, Noam; Mukherjee, Chiranjib; Okamura, Kazuki

    2017-12-01

    We prove a quenched large deviation principle (LDP) for a simple random walk on a supercritical percolation cluster (SRWPC) on {Z^d} ({d ≥ 2}). The models of interest include classical Bernoulli bond and site percolation as well as models that exhibit long-range correlations, such as the random cluster model, random interlacements and the vacant set of random interlacements (for {d ≥ 3}), and the level sets of the Gaussian free field ({d ≥ 3}). Inspired by the methods developed by Kosygina et al. (Commun Pure Appl Math 59:1489-1521, 2006) for proving quenched LDP for elliptic diffusions with a random drift, and by Yilmaz (Commun Pure Appl Math 62(8):1033-1075, 2009) and Rosenbluth (Quenched large deviations for multidimensional random walks in a random environment: a variational formula. Ph.D. thesis, NYU, arXiv:0804.1444v1) for similar results regarding elliptic random walks in random environment, we take the point of view of the moving particle and prove a large deviation principle for the quenched distribution of the pair empirical measures of the environment Markov chain in the non-elliptic case of SRWPC. Via a contraction principle, this reduces easily to a quenched LDP for the distribution of the mean velocity of the random walk, and both rate functions admit explicit variational formulas. The main difficulty in our setup lies in the inherent non-ellipticity as well as the lack of translation-invariance stemming from conditioning on the fact that the origin belongs to the infinite cluster. We develop a unifying approach for proving quenched large deviations for SRWPC based on exploiting coercivity properties of the relative entropies in the context of convex variational analysis, combined with input from ergodic theory and invoking geometric properties of the supercritical percolation cluster.

  19. How children perceive fractals: Hierarchical self-similarity and cognitive development

    PubMed Central

    Martins, Maurício Dias; Laaha, Sabine; Freiberger, Eva Maria; Choi, Soonja; Fitch, W. Tecumseh

    2014-01-01

    The ability to understand and generate hierarchical structures is a crucial component of human cognition, available in language, music, mathematics and problem solving. Recursion is a particularly useful mechanism for generating complex hierarchies by means of self-embedding rules. In the visual domain, fractals are recursive structures in which simple transformation rules generate hierarchies of infinite depth. Research on how children acquire these rules can provide valuable insight into the cognitive requirements and learning constraints of recursion. Here, we used fractals to investigate the acquisition of recursion in the visual domain, and probed for correlations with grammar comprehension and general intelligence. We compared second (n = 26) and fourth graders (n = 26) in their ability to represent two types of rules for generating hierarchical structures: Recursive rules, on the one hand, which generate new hierarchical levels; and iterative rules, on the other hand, which merely insert items within hierarchies without generating new levels. We found that the majority of fourth graders, but not second graders, were able to represent both recursive and iterative rules. This difference was partially accounted for by second graders’ impairment in detecting hierarchical mistakes, and correlated with between-grade differences in grammar comprehension tasks. Empirically, recursion and iteration also differed in at least one crucial aspect: While the ability to learn recursive rules seemed to depend on the previous acquisition of simple iterative representations, the opposite was not true, i.e., children were able to acquire iterative rules before they acquired recursive representations. These results suggest that the acquisition of recursion in vision follows learning constraints similar to the acquisition of recursion in language, and that both domains share cognitive resources involved in hierarchical processing. PMID:24955884

  20. Statistical analysis on multifractal detrended cross-correlation coefficient for return interval by oriented percolation

    NASA Astrophysics Data System (ADS)

    Deng, Wei; Wang, Jun

    2015-06-01

    We investigate and quantify the multifractal detrended cross-correlation of return interval series for Chinese stock markets and a proposed price model; the price model is established by oriented percolation. The return interval describes the waiting time between two successive price volatilities that are above some threshold, and the present work is an attempt to quantify the level of multifractal detrended cross-correlation of the return intervals. Further, the concept of the MF-DCCA coefficient of return intervals is introduced, and the corresponding empirical research is performed. The empirical results show that the return intervals of the SSE and SZSE are weakly positively multifractal power-law cross-correlated and exhibit the fluctuation patterns of MF-DCCA coefficients. Similar behavior of the return intervals of the price model is also demonstrated.
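
    For readers unfamiliar with the DCCA coefficient family, the sketch below computes the mono-fractal (q = 2) DCCA cross-correlation coefficient that MF-DCCA generalizes across moments: build profiles of the two series, detrend them box-by-box, and normalize the detrended covariance by the two detrended variances. Box handling is simplified (non-overlapping windows, linear detrending), and the synthetic data are illustrative.

```python
# Minimal sketch of the DCCA cross-correlation coefficient (q = 2 case).
import numpy as np

def detrended_cov(x, y, s):
    """Mean covariance of linearly detrended profiles in boxes of size s."""
    X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())   # profiles
    n_boxes = len(x) // s
    t = np.arange(s)
    cov = 0.0
    for b in range(n_boxes):
        xs, ys = X[b*s:(b+1)*s], Y[b*s:(b+1)*s]
        rx = xs - np.polyval(np.polyfit(t, xs, 1), t)         # detrend box
        ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
        cov += np.mean(rx * ry)
    return cov / n_boxes

def rho_dcca(x, y, s):
    f2xy = detrended_cov(x, y, s)
    f2x, f2y = detrended_cov(x, x, s), detrended_cov(y, y, s)
    return f2xy / np.sqrt(f2x * f2y)

rng = np.random.default_rng(1)
common = rng.standard_normal(4096)                 # shared driver
x = common + 0.5 * rng.standard_normal(4096)
y = common + 0.5 * rng.standard_normal(4096)
print([round(rho_dcca(x, y, s), 3) for s in (16, 64, 256)])  # ~0.8 at each scale
```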

  1. Cross-correlations between the US monetary policy, US dollar index and crude oil market

    NASA Astrophysics Data System (ADS)

    Sun, Xinxin; Lu, Xinsheng; Yue, Gongzheng; Li, Jianfeng

    2017-02-01

    This paper investigates the cross-correlations between US monetary policy, the US dollar index and the WTI crude oil market, using a dataset covering the period from February 4, 1994 to February 29, 2016. Our study contributes to the literature by examining the effect of US monetary policy on the US dollar index and WTI crude oil through the MF-DCCA approach. The empirical results show that the cross-correlations between the three sets of time series exhibit strong multifractal features, with the strength of multifractality increasing over the sample period. Employing a rolling window analysis, our empirical results show that US monetary policy operations have a clear influence on the cross-correlated behavior of the three time series covered by this study.

  2. Movement patterns of Tenebrio beetles demonstrate empirically that correlated-random-walks have similitude with a Lévy walk.

    PubMed

    Reynolds, Andy M; Leprêtre, Lisa; Bohan, David A

    2013-11-07

    Correlated random walks are the dominant conceptual framework for modelling and interpreting organism movement patterns. Recent years have witnessed a stream of high profile publications reporting that many organisms perform Lévy walks; movement patterns that seemingly stand apart from the correlated random walk paradigm because they are discrete and scale-free rather than continuous and scale-finite. Our new study of the movement patterns of Tenebrio molitor beetles in unchanging, featureless arenas provides the first empirical support for a remarkable and deep theoretical synthesis that unites correlated random walks and Lévy walks. It demonstrates that the two models are complementary rather than competing descriptions of movement pattern data and shows that correlated random walks are a part of the Lévy walk family. It follows from this that vast numbers of Lévy walkers could be hiding in plain sight.

  3. The Analysis of a Diet for the Human Being and the Companion Animal using Big Data in 2016

    PubMed Central

    Kang, Hye Won

    2017-01-01

    The purpose of this study was to investigate the diet tendencies of humans and companion animals using big data analysis. The keyword data on human and companion animal diets were collected from the portal site Naver from January 1, 2016 until December 31, 2016, and the collected data were analyzed by simple frequency analysis, N-gram analysis, keyword network analysis and seasonality analysis. For humans, the word exercise had the highest frequency in the simple frequency analysis, whereas diet menu appeared most frequently in the N-gram analysis. For companion animals, the term dog had the highest frequency in the simple frequency analysis, whereas diet method was most frequent in the N-gram analysis. Keyword network analysis for humans indicated 4 groups: diet group, exercise group, commercial diet food group, and commercial diet program group. However, the keyword network analysis for companion animals indicated 3 groups: diet group, exercise group, and professional medical help group. The analysis of seasonality showed that interest in diet for both humans and companion animals increased steadily from February 2016 and reached its peak in July. In conclusion, the diets of humans and companion animals showed similar tendencies, particularly a higher preference for dietary control over other methods. The diets of companion animals are determined by the choices of their owners, as diet methods that are effective for owners are usually applied to the companion animals. Therefore, empirical demonstration of whether a correlation in obesity between human beings and companion animals exists is necessary. PMID:29124046

  4. The Analysis of a Diet for the Human Being and the Companion Animal using Big Data in 2016.

    PubMed

    Jung, Eun-Jin; Kim, Young-Suk; Choi, Jung-Wa; Kang, Hye Won; Chang, Un-Jae

    2017-10-01

    The purpose of this study was to investigate the diet tendencies of humans and companion animals using big data analysis. The keyword data on human and companion animal diets were collected from the portal site Naver from January 1, 2016 until December 31, 2016, and the collected data were analyzed by simple frequency analysis, N-gram analysis, keyword network analysis and seasonality analysis. For humans, the word exercise had the highest frequency in the simple frequency analysis, whereas diet menu appeared most frequently in the N-gram analysis. For companion animals, the term dog had the highest frequency in the simple frequency analysis, whereas diet method was most frequent in the N-gram analysis. Keyword network analysis for humans indicated 4 groups: diet group, exercise group, commercial diet food group, and commercial diet program group. However, the keyword network analysis for companion animals indicated 3 groups: diet group, exercise group, and professional medical help group. The analysis of seasonality showed that interest in diet for both humans and companion animals increased steadily from February 2016 and reached its peak in July. In conclusion, the diets of humans and companion animals showed similar tendencies, particularly a higher preference for dietary control over other methods. The diets of companion animals are determined by the choices of their owners, as diet methods that are effective for owners are usually applied to the companion animals. Therefore, empirical demonstration of whether a correlation in obesity between human beings and companion animals exists is necessary.

  5. Empirical correlates for the Minnesota Multiphasic Personality Inventory-2-Restructured Form in a German inpatient sample.

    PubMed

    Moultrie, Josefine K; Engel, Rolf R

    2017-10-01

    We identified empirical correlates for the 42 substantive scales of the German-language version of the Minnesota Multiphasic Personality Inventory (MMPI)-2-Restructured Form (MMPI-2-RF): Higher Order, Restructured Clinical, Specific Problem, Interest, and revised Personality Psychopathology Five scales. We collected external validity data by means of a 177-item chart review form in a sample of 488 psychiatric inpatients of a German university hospital. We structured our findings along the interpretational guidelines for the MMPI-2-RF and compared them with the validity data published in the tables of the MMPI-2-RF Technical Manual. Our results show significant correlations between MMPI-2-RF scales and conceptually relevant criteria. Most of the results were in line with U.S. validation studies. Some of the differences could be attributed to sample compositions. For most of the scales, construct validity coefficients were acceptable. Taken together, this study adds to the growing body of research on empirical correlates of the MMPI-2-RF scales in a new sample. The study suggests that the interpretations given in the MMPI-2-RF manual may be generalizable to the German-language MMPI-2-RF. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  6. Correlation of refrigerant mass flow rate through adiabatic capillary tubes using mixture refrigerant carbondioxide and ethane for low temperature applications

    NASA Astrophysics Data System (ADS)

    Nasruddin, Syaka, Darwin R. B.; Alhamid, M. Idrus

    2012-06-01

    Various binary mixtures of carbon dioxide and hydrocarbons, especially propane or ethane, are presented in this paper as alternative natural refrigerants to chlorofluorocarbons (CFCs) or hydrofluorocarbons (HFCs). They are environmentally friendly, with an ozone depletion potential (ODP) of zero and a global warming potential (GWP) smaller than 20. Capillary tube performance for alternative HFC and HC refrigerants and for mixed refrigerants has been widely studied. However, studies that discuss the performance of the capillary tube with a mixture of natural refrigerants, in particular an azeotropic mixture of carbon dioxide and ethane, are still lacking. An empirical correlation method for determining the mass flow rate and pipe length plays an important role in the design of capillary tubes for industrial refrigeration. Based on the variables that affect the refrigerant mass flow rate in the capillary tube, the Buckingham Pi theorem was used to formulate eight non-dimensional parameters, which were developed into an empirical correlation equation. Furthermore, non-linear regression analysis was used to determine the coefficients and exponents of this empirical correlation from a database of experimental results.
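
    The regression step described above can be illustrated with a short fit of a power-law correlation between dimensionless groups. The two groups and the synthetic data below are placeholders, not the paper's actual eight Buckingham-Pi parameters or measurements; the point is only the shape of the workflow (assume a product-of-powers form, fit coefficients and exponents by non-linear least squares).

```python
# Sketch: fitting m_dot* = a * Pi1^b * Pi2^c to (synthetic) data.
import numpy as np
from scipy.optimize import curve_fit

def correlation(pi, a, b, c):
    pi1, pi2 = pi
    return a * pi1**b * pi2**c

rng = np.random.default_rng(2)
pi1 = rng.uniform(1e3, 1e5, 200)       # e.g. a Reynolds-number-like group
pi2 = rng.uniform(0.5, 5.0, 200)       # e.g. a subcooling-related group
true = correlation((pi1, pi2), 0.02, 0.55, -0.3)
mdot = true * (1 + 0.05 * rng.standard_normal(200))   # 5% measurement scatter

(a, b, c), _ = curve_fit(correlation, (pi1, pi2), mdot, p0=(0.01, 0.5, -0.5))
print(f"a={a:.4f}, b={b:.3f}, c={c:.3f}")   # recovers ~ (0.02, 0.55, -0.3)
```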

  7. Local Structure Theory for Cellular Automata.

    NASA Astrophysics Data System (ADS)

    Gutowitz, Howard Andrew

    The local structure theory (LST) is a generalization of the mean field theory for cellular automata (CA). The mean field theory makes the assumption that iterative application of the rule does not introduce correlations between the states of cells in different positions. This assumption allows the derivation of a simple formula for the limit density of each possible state of a cell. The most striking feature of CA is that they may well generate correlations between the states of cells as they evolve. The LST takes the generation of correlation explicitly into account. It thus has the potential to describe statistical characteristics in detail. The basic assumption of the LST is that though correlation may be generated by CA evolution, this correlation decays with distance. This assumption allows the derivation of formulas for the estimation of the probability of large blocks of states in terms of smaller blocks of states. Given the probabilities of blocks of size n, probabilities may be assigned to blocks of arbitrary size such that these probability assignments satisfy the Kolmogorov consistency conditions and hence may be used to define a measure on the set of all possible (infinite) configurations. Measures defined in this way are called finite (or n-) block measures. A function called the scramble operator of order n maps a measure to an approximating n-block measure. The action of a CA on configurations induces an action on measures on the set of all configurations. The scramble operator is combined with the CA map on measure to form the local structure operator (LSO). The LSO of order n maps the set of n-block measures into itself. It is hypothesised that the LSO applied to n-block measures approximates the rule itself on general measures, and does so increasingly well as n increases. The fundamental advantage of the LSO is that its action is explicitly computable from a finite system of rational recursion equations. Empirical study of a number of CA rules demonstrates the potential of the LST to describe the statistical features of CA. The behavior of some simple rules is derived analytically. Other rules have more complex, chaotic behavior. Even for these rules, the LST yields an accurate portrait of both small and large time statistics.
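
    As a minimal illustration of the order-1 level that the LST generalizes, the sketch below iterates the mean-field density map for an elementary (radius-1, binary) CA: assuming independent cell states at density p, the next-step density is the total probability of all neighborhoods that the rule maps to 1. Higher-order LST would track block probabilities instead; this is just the correlation-free baseline.

```python
# Mean-field density map for an elementary CA (the LST's order-1 case).
import numpy as np

def mean_field_map(rule, p):
    """Probability a cell is 1 after one step, given density p and
    assuming independent neighborhood cells (the mean-field assumption)."""
    out = 0.0
    for n in range(8):                              # all 3-cell neighborhoods
        bits = [(n >> k) & 1 for k in (2, 1, 0)]    # left, center, right
        if (rule >> n) & 1:                         # neighborhood maps to 1
            out += np.prod([p if b else 1 - p for b in bits])
    return out

# Iterate to the mean-field fixed point for rule 22 (f(p) = 3p(1-p)^2).
p = 0.3
for _ in range(100):
    p = mean_field_map(22, p)
print(round(p, 4))   # ~0.4226, the mean-field limit density for rule 22
```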

  8. An empirical Bayes approach to analyzing recurring animal surveys

    USGS Publications Warehouse

    Johnson, D.H.

    1989-01-01

    Recurring estimates of the size of animal populations are often required by biologists or wildlife managers. Because of cost or other constraints, estimates frequently lack the accuracy desired but cannot readily be improved by additional sampling. This report proposes a statistical method employing empirical Bayes (EB) estimators as alternatives to those customarily used to estimate population size, and evaluates them by a subsampling experiment on waterfowl surveys. EB estimates, especially a simple limited-translation version, were more accurate and provided shorter confidence intervals with greater coverage probabilities than customary estimates.
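
    A hedged sketch of a limited-translation empirical Bayes estimator in the spirit described above: shrink each survey estimate toward the group mean, but never move it more than d standard errors from the raw value. The parametric-EB shrinkage form below is an assumption for illustration; the report's exact estimator may differ.

```python
# Limited-translation EB shrinkage for a set of survey estimates.
import numpy as np

def limited_translation_eb(x, se, d=1.0):
    """x: raw population estimates; se: their standard errors."""
    grand = np.mean(x)
    between = max(np.var(x, ddof=1) - np.mean(se**2), 0.0)  # method of moments
    shrink = se**2 / (se**2 + between + 1e-12)              # per-survey factor
    eb = x + shrink * (grand - x)                           # usual EB estimate
    cap = d * se                                            # translation limit
    return np.clip(eb, x - cap, x + cap)                    # never move > d*se

x = np.array([1200., 950., 1430., 800., 1100.])   # hypothetical counts
se = np.array([150., 200., 180., 220., 160.])
print(limited_translation_eb(x, se).round(0))
```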

  9. Forecasting runoff from Pennsylvania landscapes

    USDA-ARS?s Scientific Manuscript database

    Identifying sites prone to surface runoff has been a cornerstone of conservation and nutrient management programs, relying upon site assessment tools that support strategic, as opposed to operational, decision making. We sought to develop simple, empirical models to represent two highly different me...

  10. Shear in high strength concrete bridge girders : technical report.

    DOT National Transportation Integrated Search

    2013-04-01

    Prestressed Concrete (PC) I-girders are used extensively as the primary superstructure components in Texas highway bridges. A simple semi-empirical equation was developed at the University of Houston (UH) to predict the shear strength of PC I-girde...

  11. An Empirical State Error Covariance Matrix for Batch State Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix that fully accounts for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next, it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it follows directly how to determine the empirical state error covariance matrix. This matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty. Also, in its most straightforward form, the technique only requires supplemental calculations to be added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations, and mixed degrees of freedom for an observation set are allowed. As with any simple empirical sample variance problem, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal (variance) terms of the error covariance matrix have a particularly simple form to associate with either a multiple-degree-of-freedom chi-square distribution (more approximate) or with a gamma distribution (less approximate). The off-diagonal (covariance) terms of the matrix are less clear in their statistical behavior, but they still lend themselves to standard confidence interval error analysis; the distributional forms associated with them are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained through use of the proposed technique are presented. The example consists of a simple, two-observer triangulation problem with range-only measurements. Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).
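
    A hedged sketch of the central idea: scale the theoretical WLS covariance (AᵀWA)⁻¹ by the average weighted residual variance, so that the actual residuals (whatever their source) enter the covariance. This is an interpretation of the abstract, not the paper's exact algebra; in particular the averaging convention (dividing by m) is an assumption.

```python
# WLS estimate with theoretical vs. residual-scaled empirical covariance.
import numpy as np

def wls_with_empirical_cov(A, W, y):
    """A: design matrix (m x n); W: weight matrix (m x m); y: observations."""
    N = A.T @ W @ A
    xhat = np.linalg.solve(N, A.T @ W @ y)         # WLS state estimate
    r = y - A @ xhat                               # measurement residuals
    m, n = A.shape
    avg_wrv = (r @ W @ r) / m                      # average form of the index
    P_theory = np.linalg.inv(N)                    # maps assumed obs errors only
    P_empirical = avg_wrv * P_theory               # rescaled by actual residuals
    return xhat, P_theory, P_empirical

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 3))
x_true = np.array([1.0, -2.0, 0.5])
sigma_assumed, sigma_actual = 0.1, 0.3             # mismodeled noise level
y = A @ x_true + sigma_actual * rng.standard_normal(40)
W = np.eye(40) / sigma_assumed**2
xhat, Pt, Pe = wls_with_empirical_cov(A, W, y)
print(np.sqrt(np.diag(Pt)), np.sqrt(np.diag(Pe)))  # empirical sigmas ~3x larger
```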

  12. Use of Empirical Estimates of Shrinkage in Multiple Regression: A Caution.

    ERIC Educational Resources Information Center

    Kromrey, Jeffrey D.; Hines, Constance V.

    1995-01-01

    The accuracy of four empirical techniques to estimate shrinkage in multiple regression was studied through Monte Carlo simulation. None of the techniques provided unbiased estimates of the population squared multiple correlation coefficient, but the normalized jackknife and bootstrap techniques demonstrated marginally acceptable performance with…

  13. Cooling tower plume - model and experiment

    NASA Astrophysics Data System (ADS)

    Cizek, Jan; Gemperle, Jiri; Strob, Miroslav; Nozicka, Jiri

    The paper describes a simple model of the so-called steam plume, which in many cases forms during the operation of the evaporative cooling systems of power plants or large technological units. The model is based on semi-empirical equations that describe the behaviour of a mixture of two gases in the case of a free jet stream. In the conclusion of the paper, a simple experiment is presented through which the results of the designed model will be validated in subsequent work.

  14. Type II Supernova Energetics and Comparison of Light Curves to Shock-cooling Models

    NASA Astrophysics Data System (ADS)

    Rubin, Adam; Gal-Yam, Avishay; De Cia, Annalisa; Horesh, Assaf; Khazov, Danny; Ofek, Eran O.; Kulkarni, S. R.; Arcavi, Iair; Manulis, Ilan; Yaron, Ofer; Vreeswijk, Paul; Kasliwal, Mansi M.; Ben-Ami, Sagi; Perley, Daniel A.; Cao, Yi; Cenko, S. Bradley; Rebbapragada, Umaa D.; Woźniak, P. R.; Filippenko, Alexei V.; Clubb, K. I.; Nugent, Peter E.; Pan, Y.-C.; Badenes, C.; Howell, D. Andrew; Valenti, Stefano; Sand, David; Sollerman, J.; Johansson, Joel; Leonard, Douglas C.; Horst, J. Chuck; Armen, Stephen F.; Fedrow, Joseph M.; Quimby, Robert M.; Mazzali, Paulo; Pian, Elena; Sternberg, Assaf; Matheson, Thomas; Sullivan, M.; Maguire, K.; Lazarevic, Sanja

    2016-03-01

    During the first few days after explosion, Type II supernovae (SNe) are dominated by relatively simple physics. Theoretical predictions regarding early-time SN light curves in the ultraviolet (UV) and optical bands are thus quite robust. We present, for the first time, a sample of 57 R-band SN II light curves that are well-monitored during their rise, with >5 detections during the first 10 days after discovery, and a well-constrained time of explosion to within 1-3 days. We show that the energy per unit mass (E/M) can be deduced to roughly a factor of five by comparing early-time optical data to the 2011 model of Rabinak & Waxman, while the progenitor radius cannot be determined based on R-band data alone. We find that SN II explosion energies span a range of E/M = (0.2-20) × 10^51 erg/(10 M_⊙), and have a mean energy per unit mass of ⟨E/M⟩ = 0.85 × 10^51 erg/(10 M_⊙), corrected for Malmquist bias. Assuming a small spread in progenitor masses, this indicates a large intrinsic diversity in explosion energy. Moreover, E/M is positively correlated with the amount of 56Ni produced in the explosion, as predicted by some recent models of core-collapse SNe. We further present several empirical correlations. The peak magnitude is correlated with the decline rate (Δm_15), the decline rate is weakly correlated with the rise time, and the rise time is not significantly correlated with the peak magnitude. Faster declining SNe are more luminous and have longer rise times. This limits the possible power sources for such events.

  15. Type II supernova energetics and comparison of light curves to shock-cooling models

    DOE PAGES

    Rubin, Adam; Gal-Yam, Avishay; De Cia, Annalisa; ...

    2016-03-16

    During the first few days after explosion, Type II supernovae (SNe) are dominated by relatively simple physics. Theoretical predictions regarding early-time SN light curves in the ultraviolet (UV) and optical bands are thus quite robust. We present, for the first time, a sample of 57 R-band SN II light curves that are well-monitored during their rise, with >5 detections during the first 10 days after discovery, and a well-constrained time of explosion to within 1-3 days. We show that the energy per unit mass (E/M) can be deduced to roughly a factor of five by comparing early-time optical data to the 2011 model of Rabinak & Waxman, while the progenitor radius cannot be determined based on R-band data alone. We find that SN II explosion energies span a range of E/M = (0.2-20) × 10^51 erg/(10 M_⊙), and have a mean energy per unit mass of ⟨E/M⟩ = 0.85 × 10^51 erg/(10 M_⊙), corrected for Malmquist bias. Assuming a small spread in progenitor masses, this indicates a large intrinsic diversity in explosion energy. Moreover, E/M is positively correlated with the amount of 56Ni produced in the explosion, as predicted by some recent models of core-collapse SNe. We further present several empirical correlations. The peak magnitude is correlated with the decline rate (Δm_15), the decline rate is weakly correlated with the rise time, and the rise time is not significantly correlated with the peak magnitude. Faster declining SNe are more luminous and have longer rise times. Lastly, this limits the possible power sources for such events.

  16. Type II Supernova Energetics and Comparison of Light Curves to Shock-Cooling Models

    NASA Technical Reports Server (NTRS)

    Rubin, Adam; Gal-Yam, Avishay; Cia, Annalisa De; Horesh, Assaf; Khazov, Danny; Ofek, Eran O.; Kulkarni, S. R.; Arcavi, Iair; Manulis, Ilan; Cenko, S. Bradley

    2016-01-01

    During the first few days after explosion, Type II supernovae (SNe) are dominated by relatively simple physics. Theoretical predictions regarding early-time SN light curves in the ultraviolet (UV) and optical bands are thus quite robust. We present, for the first time, a sample of 57 R-band SN II light curves that are well-monitored during their rise, with greater than 5 detections during the first 10 days after discovery, and a well-constrained time of explosion to within 1-3 days. We show that the energy per unit mass (E/M) can be deduced to roughly a factor of five by comparing early-time optical data to the 2011 model of Rabinak & Waxman, while the progenitor radius cannot be determined based on R-band data alone. We find that SN II explosion energies span a range of E/M = (0.2-20) × 10^51 erg/(10 M_⊙), and have a mean energy per unit mass of ⟨E/M⟩ = 0.85 × 10^51 erg/(10 M_⊙), corrected for Malmquist bias. Assuming a small spread in progenitor masses, this indicates a large intrinsic diversity in explosion energy. Moreover, E/M is positively correlated with the amount of Ni-56 produced in the explosion, as predicted by some recent models of core-collapse SNe. We further present several empirical correlations. The peak magnitude is correlated with the decline rate (Δm_15), the decline rate is weakly correlated with the rise time, and the rise time is not significantly correlated with the peak magnitude. Faster declining SNe are more luminous and have longer rise times. This limits the possible power sources for such events.

  17. A framework for studying transient dynamics of population projection matrix models.

    PubMed

    Stott, Iain; Townley, Stuart; Hodgson, David James

    2011-09-01

    Empirical models are central to effective conservation and population management, and should be predictive of real-world dynamics. Available modelling methods are diverse, but analysis usually focuses on long-term dynamics that are unable to describe the complicated short-term time series that can arise even from simple models following ecological disturbances or perturbations. Recent interest in such transient dynamics has led to diverse methodologies for their quantification in density-independent, time-invariant population projection matrix (PPM) models, but the fragmented nature of this literature has stifled the widespread analysis of transients. We review the literature on transient analyses of linear PPM models and synthesise a coherent framework. We promote the use of standardised indices, and categorise indices according to their focus on either convergence times or transient population density, and on either transient bounds or case-specific transient dynamics. We use a large database of empirical PPM models to explore relationships between indices of transient dynamics. This analysis promotes the use of population inertia as a simple, versatile and informative predictor of transient population density, but criticises the utility of established indices of convergence times. Our findings should guide further development of analyses of transient population dynamics using PPMs or other empirical modelling techniques. © 2011 Blackwell Publishing Ltd/CNRS.
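
    Two of the standardized indices this literature promotes can be computed directly from an eigendecomposition of the PPM: the damping ratio (a convergence-time index) and population inertia (the transient-density index favored above). The inertia formula below follows the form commonly given in this literature; the matrix and initial structure are made up for illustration.

```python
# Damping ratio and population inertia for a PPM A and initial vector n0.
import numpy as np

def transient_indices(A, n0):
    vals, W = np.linalg.eig(A)
    i = int(np.argmax(np.abs(vals)))                 # dominant eigenvalue
    lam1 = np.abs(vals[i])
    lam2 = np.sort(np.abs(vals))[-2]                 # subdominant magnitude
    w = np.abs(np.real(W[:, i])); w = w / w.sum()    # stable stage structure
    lv, V = np.linalg.eig(A.T)
    j = int(np.argmax(np.abs(lv)))
    v = np.abs(np.real(V[:, j]))                     # reproductive values
    damping = lam1 / lam2                            # >1; larger = faster decay
    inertia = (v @ n0) / ((v @ w) * n0.sum())        # w sums to one
    return damping, inertia

A = np.array([[0.0, 1.5, 2.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.4, 0.0]])       # made-up Leslie-type PPM
n0 = np.array([0.0, 0.0, 1.0])        # biased initial stage structure
d, P = transient_indices(A, n0)
print(round(d, 3), round(P, 3))       # inertia != 1 signals transient density
```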

  18. Price-volume multifractal analysis and its application in Chinese stock markets

    NASA Astrophysics Data System (ADS)

    Yuan, Ying; Zhuang, Xin-tian; Liu, Zhi-ying

    2012-06-01

    Empirical research on Chinese stock markets is conducted using statistical tools. First, the multifractality of the stock price return series r_t = ln(P_{t+1}) − ln(P_t) and the trading volume variation series v_t = ln(V_{t+1}) − ln(V_t) is confirmed using multifractal detrended fluctuation analysis. Furthermore, a multifractal detrended cross-correlation analysis between stock price return and trading volume variation in Chinese stock markets is also conducted; the cross relationship between them is shown to be multifractal as well. Second, the cross-correlation between stock price P_t and trading volume V_t is studied empirically using the cross-correlation function and detrended cross-correlation analysis. It is found that both the Shanghai and Shenzhen stock markets show pronounced long-range cross-correlations between stock price and trading volume. Third, a composite index R based on price and trading volume is introduced. Compared with the stock price return series r_t and the trading volume variation series v_t, the R variation series not only retains the characteristics of the original series but also demonstrates the relative correlation between stock price and trading volume. Finally, we analyze the multifractal characteristics of the R variation series before and after three financial events in China (namely, Price Limits, the Reform of Non-tradable Shares and the financial crisis in 2008) over the whole sample period to study the changes in stock market fluctuation and financial risk. The empirical results verify the validity of R.

  19. Deriving simple empirical relationships between aerodynamic and optical aerosol measurements and their application

    USDA-ARS?s Scientific Manuscript database

    Different measurement techniques for aerosol characterization and quantification either directly or indirectly measure different aerosol properties (i.e. count, mass, speciation, etc.). Comparisons and combinations of multiple measurement techniques sampling the same aerosol can provide insight into...

  20. Characterization of Louisiana asphalt mixtures using simple performance tests and MEPDG : tech summary.

    DOT National Transportation Integrated Search

    2014-04-01

    The Federal Highway Administration's 1995-1997 National Pavement Design Review found that nearly 80 percent of states use the 1972, 1986, or 1993 AASHTO Design Guides. These design guides rely on empirical relationships between paving ma...

  1. Regression Analysis by Example. 5th Edition

    ERIC Educational Resources Information Center

    Chatterjee, Samprit; Hadi, Ali S.

    2012-01-01

    Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. "Regression Analysis by Example, Fifth Edition" has been expanded and thoroughly…

  2. U.S. ENVIRONMENTAL PROTECTION AGENCY'S LANDFILL GAS EMISSION MODEL (LANDGEM)

    EPA Science Inventory

    The paper discusses EPA's available software for estimating landfill gas emissions. This software is based on a first-order decomposition rate equation using empirical data from U.S. landfills. The software provides a relatively simple approach to estimating landfill gas emissi...

  3. ASSESSMENT OF SPATIAL AUTOCORRELATION IN EMPIRICAL MODELS IN ECOLOGY

    EPA Science Inventory

    Statistically assessing ecological models is inherently difficult because data are autocorrelated and this autocorrelation varies in an unknown fashion. At a simple level, the linking of a single species to a habitat type is a straightforward analysis. With some investigation int...

  4. Feynman perturbation expansion for the price of coupon bond options and swaptions in quantum finance. II. Empirical

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Liang, Cui

    2007-01-01

    The quantum finance pricing formulas for coupon bond options and swaptions derived by Baaquie [Phys. Rev. E 75, 016703 (2006)] are reviewed. We empirically study the swaption market and propose an efficient computational procedure for analyzing the data. Empirical results of the swaption price, volatility, and swaption correlation are compared with the predictions of quantum finance. The quantum finance model generates the market swaption price to over 90% accuracy.

  5. Feynman perturbation expansion for the price of coupon bond options and swaptions in quantum finance. II. Empirical.

    PubMed

    Baaquie, Belal E; Liang, Cui

    2007-01-01

    The quantum finance pricing formulas for coupon bond options and swaptions derived by Baaquie [Phys. Rev. E 75, 016703 (2006)] are reviewed. We empirically study the swaption market and propose an efficient computational procedure for analyzing the data. Empirical results of the swaption price, volatility, and swaption correlation are compared with the predictions of quantum finance. The quantum finance model generates the market swaption price to over 90% accuracy.

  6. The conceptual and empirical relationship between gambling, investing, and speculation

    PubMed Central

    Arthur, Jennifer N.; Williams, Robert J.; Delfabbro, Paul H.

    2016-01-01

    Background and aims To review the conceptual and empirical relationship between gambling, investing, and speculation. Methods An analysis of the attributes differentiating these constructs as well as identification of all articles speaking to their empirical relationship. Results Gambling differs from investment on many different attributes and should be seen as conceptually distinct. On the other hand, speculation is conceptually intermediate between gambling and investment, with a few of its attributes being investment-like, some being gambling-like, and several being neither clearly gambling-like nor investment-like. Empirically, gamblers, investors, and speculators have similar cognitive, motivational, and personality attributes, with this relationship being particularly strong for gambling and speculation. Population levels of gambling activity also tend to be correlated with population levels of financial speculation. At an individual level, speculation has a particularly strong empirical relationship to gambling, as speculators appear to be heavily involved in traditional forms of gambling and problematic speculation is strongly correlated with problematic gambling. Discussion and conclusions Investment is distinct from gambling, but speculation and gambling have conceptual overlap and a strong empirical relationship. It is recommended that financial speculation be routinely included when assessing gambling involvement, and there needs to be greater recognition and study of financial speculation both as a contributor to problem gambling and as an additional form of behavioral addiction in its own right. PMID:27929350

  7. Semi-empirical long-term cycle life model coupled with an electrolyte depletion function for large-format graphite/LiFePO4 lithium-ion batteries

    NASA Astrophysics Data System (ADS)

    Park, Joonam; Appiah, Williams Agyei; Byun, Seoungwoo; Jin, Dahee; Ryou, Myung-Hyun; Lee, Yong Min

    2017-10-01

    To overcome the limitation of simple empirical cycle life models based on only equivalent circuits, we attempt to couple a conventional empirical capacity loss model with Newman's porous composite electrode model, which contains both electrochemical reaction kinetics and material/charge balances. In addition, an electrolyte depletion function is newly introduced to simulate a sudden capacity drop at the end of cycling, which is frequently observed in real lithium-ion batteries (LIBs). When simulated electrochemical properties are compared with experimental data obtained with 20 Ah-level graphite/LiFePO4 LIB cells, our semi-empirical model is sufficiently accurate to predict a voltage profile having a low standard deviation of 0.0035 V, even at 5C. Additionally, our model can provide broad cycle life color maps under different c-rate and depth-of-discharge operating conditions. Thus, this semi-empirical model with an electrolyte depletion function will be a promising platform to predict long-term cycle lives of large-format LIB cells under various operating conditions.
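
    The structure described above (a gradual empirical fade law plus a term producing a sudden end-of-life drop) can be sketched compactly. The Arrhenius power-law fade in charge throughput and the sigmoidal depletion term below mimic that structure only; all parameter values are hypothetical and not fitted to the paper's 20 Ah graphite/LiFePO4 cells.

```python
# Illustrative semi-empirical capacity fade + electrolyte-depletion drop.
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def capacity_retention(ah, T=298.15, B=0.02, Ea=31500.0, z=0.55,
                       ah_dep=8000.0, k_dep=0.002):
    """Fractional capacity vs cumulative charge throughput ah (in Ah)."""
    q_loss = B * np.exp(-Ea / (R * T)) * ah**z * 1e4          # gradual fade
    depletion = 1.0 / (1.0 + np.exp(-k_dep * (ah - ah_dep)))  # sudden drop
    return np.clip(1.0 - q_loss - 0.5 * depletion, 0.0, 1.0)

ah = np.linspace(0, 12000, 7)
print(np.round(capacity_retention(ah), 3))   # slow fade, then steep decline
```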

  8. Multiscale Characterization of PM2.5 in Southern Taiwan based on Noise-assisted Multivariate Empirical Mode Decomposition and Time-dependent Intrinsic Correlation

    NASA Astrophysics Data System (ADS)

    Hsiao, Y. R.; Tsai, C.

    2017-12-01

    As the WHO Air Quality Guideline indicates, ambient air pollution exposes the world's population to the threat of fatal illness (e.g., heart disease, lung cancer, asthma), raising concerns about air pollution sources and related factors. This study presents a novel approach to investigating the multiscale variations of PM2.5 in southern Taiwan over the past decade, together with four meteorological influencing factors (temperature, relative humidity, precipitation and wind speed), based on the Noise-Assisted Multivariate Empirical Mode Decomposition (NAMEMD) algorithm, Hilbert Spectral Analysis (HSA) and the Time-Dependent Intrinsic Correlation (TDIC) method. The NAMEMD algorithm is a fully data-driven approach designed for nonlinear and nonstationary multivariate signals, and is performed to decompose multivariate signals into a collection of channels of Intrinsic Mode Functions (IMFs). The TDIC method is an EMD-based method that uses a set of sliding window sizes to quantify localized correlation coefficients for multiscale signals. With the alignment property and quasi-dyadic filter bank of the NAMEMD algorithm, one is able to produce the same number of IMFs for all variables and estimate the cross-correlation more accurately. The performance of the spectral representation of the NAMEMD-HSA method is compared with Complementary Ensemble Empirical Mode Decomposition/Hilbert Spectral Analysis (CEEMD-HSA) and Wavelet Analysis. The NAMEMD-based TDIC analysis is then compared with CEEMD-based TDIC analysis and traditional correlation analysis.
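
    The core of the TDIC idea can be sketched with a fixed-size sliding window: after decomposing two signals into matched IMFs, compute a local Pearson correlation in a window centered at each time. The full method chooses window sizes adaptively from the IMFs' instantaneous periods; the EMD step is omitted here, with x and y standing in for one matched IMF pair.

```python
# Sliding-window (time-dependent) correlation, a simplified TDIC.
import numpy as np

def tdic(x, y, half_window):
    n = len(x)
    rho = np.full(n, np.nan)
    for t in range(half_window, n - half_window):
        xs = x[t - half_window : t + half_window + 1]
        ys = y[t - half_window : t + half_window + 1]
        rho[t] = np.corrcoef(xs, ys)[0, 1]     # local correlation at time t
    return rho

t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t)
y = np.sin(t + np.linspace(0, np.pi, 2000))    # phase drifts from 0 to pi
rho = tdic(x, y, half_window=100)
print(np.nanmin(rho).round(2), np.nanmax(rho).round(2))  # ~ -1.0 ... +1.0
```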

  9. Rationalizing the photophysical properties of BODIPY laser dyes via aromaticity and electron-donor-based structural perturbations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waddell, Paul G.; Liu, Xiaogang; Zhao, Teng

    2015-05-01

    The absorption and fluorescence properties of six boron dipyrromethene (BODIPY) laser dyes with simple non-aromatic substituents are rationalized by relating them to observable structural perturbations within the molecules of the dyes. An empirical relationship involving the structure and the optical properties is derived using a combination of single-crystal X-ray diffraction data, quantum chemical calculations and electronic constants: i.e. the tendency of the pyrrole bond lengths towards aromaticity and the UV-vis absorption and fluorescence wavelengths correlating with the electron-donor properties of the substituents. The effect of molecular conformation on the solid-state optical properties of the dyes is also discussed. The findings in this study also demonstrate the usefulness and limitations of using crystal structure data to develop structure-property relationships in this class of optical materials, contributing to the growing effort to design optoelectronic materials with tunable properties via molecular engineering.

  10. Scrutinizing a Survey-Based Measure of Science and Mathematics Teacher Knowledge: Relationship to Observations of Teaching Practice

    NASA Astrophysics Data System (ADS)

    Talbot, Robert M.

    2017-12-01

    There is a clear need for valid and reliable instrumentation that measures teacher knowledge. However, the process of investigating and making a case for instrument validity is not a simple undertaking; rather, it is a complex endeavor. This paper presents the empirical case of one aspect of such an instrument validation effort. The particular instrument under scrutiny was developed in order to determine the effect of a teacher education program on novice science and mathematics teachers' strategic knowledge (SK). The relationship between novice science and mathematics teachers' SK as measured by a survey and their SK as inferred from observations of practice using a widely used observation protocol is the subject of this paper. Moderate correlations between parts of the observation-based construct and the SK construct were observed. However, the main finding of this work is that the context in which the measurement is made (in situ observations vs. ex situ survey) is an essential factor in establishing the validity of the measurement itself.

  11. Noise from Supersonic Coaxial Jets. Part 1; Mean Flow Predictions

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Morris, Philip J.

    1997-01-01

    Recent theories for supersonic jet noise have used an instability wave noise generation model to predict radiated noise. This model requires a known mean flow that has typically been described by simple analytic functions for single jet mean flows. The mean flow of supersonic coaxial jets is not described easily in terms of analytic functions. To provide these profiles at all axial locations, a numerical scheme is developed to calculate the mean flow properties of a coaxial jet. The Reynolds-averaged, compressible, parabolic boundary layer equations are solved using a mixing length turbulence model. Empirical correlations are developed to account for the effects of velocity and temperature ratios and Mach number on the shear layer spreading. Both normal velocity profile and inverted velocity profile coaxial jets are considered. The mixing length model is modified in each case to obtain reasonable results when the two stream jet merges into a single fully developed jet. The mean flow calculations show both good qualitative and quantitative agreement with measurements in single and coaxial jet flows.

  12. A numerical method for computing unsteady 2-D boundary layer flows

    NASA Technical Reports Server (NTRS)

    Krainer, Andreas

    1988-01-01

    A numerical method for computing unsteady two-dimensional boundary layers in incompressible laminar and turbulent flows is described and applied to a single airfoil changing its incidence angle in time. The solution procedure adopts a first order panel method with a simple wake model to solve for the inviscid part of the flow, and an implicit finite difference method for the viscous part of the flow. Both procedures integrate in time in a step-by-step fashion, in the course of which each step involves the solution of the elliptic Laplace equation and the solution of the parabolic boundary layer equations. The Reynolds shear stress term of the boundary layer equations is modeled by an algebraic eddy viscosity closure. The location of transition is predicted by an empirical data correlation originating from Michel. Since transition and turbulence modeling are key factors in the prediction of viscous flows, their accuracy has a dominant influence on the overall results.

  13. Short-ranged memory model with preferential growth

    NASA Astrophysics Data System (ADS)

    Schaigorodsky, Ana L.; Perotti, Juan I.; Almeira, Nahuel; Billoni, Orlando V.

    2018-02-01

    In this work we introduce a variant of the Yule-Simon model for preferential growth by incorporating a finite kernel to model the effects of bounded memory. We characterize the properties of the model combining analytical arguments with extensive numerical simulations. In particular, we analyze the lifetime and popularity distributions by mapping the model dynamics to corresponding Markov chains and branching processes, respectively. These distributions follow power laws with well-defined exponents that are within the range of the empirical data reported in ecologies. Interestingly, by varying the innovation rate, this simple out-of-equilibrium model exhibits many of the characteristics of a continuous phase transition and, around the critical point, it generates time series with power-law popularity, lifetime and interevent time distributions, and nontrivial temporal correlations, such as a bursty dynamics in analogy with the activity of solar flares. Our results suggest that an appropriate balance between innovation and oblivion rates could provide an explanatory framework for many of the properties commonly observed in many complex systems.
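
    The process described above can be simulated in a few lines: with probability alpha a new item is innovated; otherwise an item is copied from only the last M entries (the finite memory kernel). The parameter values below are illustrative, not the paper's, and the printout simply shows the resulting heavy-tailed popularity distribution.

```python
# Yule-Simon-type preferential growth with a bounded memory window.
import random
from collections import Counter

def bounded_memory_yule_simon(steps=50_000, alpha=0.05, M=500, seed=4):
    rng = random.Random(seed)
    seq, next_id = [], 0
    for _ in range(steps):
        if rng.random() < alpha or not seq:
            seq.append(next_id)                   # innovation: brand-new item
            next_id += 1
        else:
            seq.append(rng.choice(seq[-M:]))      # copy within memory window
    return seq

popularity = Counter(bounded_memory_yule_simon()).values()
counts = Counter(popularity)      # how many items reached each popularity
for k in sorted(counts)[:5]:
    print(k, counts[k])           # many rare items, few very popular ones
```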

  14. Short-ranged memory model with preferential growth.

    PubMed

    Schaigorodsky, Ana L; Perotti, Juan I; Almeira, Nahuel; Billoni, Orlando V

    2018-02-01

    In this work we introduce a variant of the Yule-Simon model for preferential growth by incorporating a finite kernel to model the effects of bounded memory. We characterize the properties of the model combining analytical arguments with extensive numerical simulations. In particular, we analyze the lifetime and popularity distributions by mapping the model dynamics to corresponding Markov chains and branching processes, respectively. These distributions follow power laws with well-defined exponents that are within the range of the empirical data reported in ecologies. Interestingly, by varying the innovation rate, this simple out-of-equilibrium model exhibits many of the characteristics of a continuous phase transition and, around the critical point, it generates time series with power-law popularity, lifetime and interevent time distributions, and nontrivial temporal correlations, such as a bursty dynamics in analogy with the activity of solar flares. Our results suggest that an appropriate balance between innovation and oblivion rates could provide an explanatory framework for many of the properties commonly observed in many complex systems.

  15. Stylized facts in social networks: Community-based static modeling

    NASA Astrophysics Data System (ADS)

    Jo, Hang-Hyun; Murase, Yohsuke; Török, János; Kertész, János; Kaski, Kimmo

    2018-06-01

    Past analyses of social network datasets have enabled empirical findings on a number of aspects of human society, commonly featured as stylized facts of social networks, such as broad distributions of network quantities, the existence of communities, assortative mixing, and intensity-topology correlations. Since the understanding of the structure of these complex social networks is far from complete, more comprehensive datasets and modeling of the stylized facts are needed for deeper insight into human society. Although the existing dynamical and static models can generate some stylized facts, here we take an alternative approach by devising a community-based static model with heterogeneous community sizes, in which larger communities have smaller link density and weight. With these few assumptions we are able to generate realistic social networks that show most stylized facts for a wide range of parameters, as demonstrated numerically and analytically. Since our community-based static model is simple to implement and easily scalable, it can be used as a reference system, benchmark, or testbed for further applications.

  16. Assessment of Simple Models for Molecular Simulation of Ethylene Carbonate and Propylene Carbonate as Solvents for Electrolyte Solutions.

    PubMed

    Chaudhari, Mangesh I; Muralidharan, Ajay; Pratt, Lawrence R; Rempe, Susan B

    2018-02-12

    Progress in understanding liquid ethylene carbonate (EC) and propylene carbonate (PC) on the basis of molecular simulation, emphasizing simple models of interatomic forces, is reviewed. Results on the bulk liquids are examined from the perspective of anticipated applications to materials for electrical energy storage devices. Preliminary results on electrochemical double-layer capacitors based on carbon nanotube forests and on model solid-electrolyte interphase (SEI) layers of lithium ion batteries are considered as examples. The basic results discussed suggest that an empirically parameterized, non-polarizable force field can reproduce experimental structural, thermodynamic, and dielectric properties of EC and PC liquids with acceptable accuracy. More sophisticated force fields might include molecular polarizability and Buckingham-model description of inter-atomic overlap repulsions as extensions to Lennard-Jones models of van der Waals interactions. Simple approaches should be similarly successful for applications to organic molecular ions in EC/PC solutions, but the important case of Li⁺ deserves special attention because of the particularly strong interactions of that small ion with neighboring solvent molecules. To treat the Li⁺ ions in liquid EC/PC solutions, we identify interaction models defined by empirically scaled partial charges for ion-solvent interactions. The empirical adjustments use more basic inputs, electronic structure calculations and ab initio molecular dynamics simulations, and also experimental results on Li⁺ thermodynamics and transport in EC/PC solutions. Application of such models to the mechanism of Li⁺ transport in glassy SEI models emphasizes the advantage of long time-scale molecular dynamics studies of these non-equilibrium materials.

  17. An empirical method for approximating stream baseflow time series using groundwater table fluctuations

    NASA Astrophysics Data System (ADS)

    Meshgi, Ali; Schmitter, Petra; Babovic, Vladan; Chui, Ting Fong May

    2014-11-01

    Developing reliable methods to estimate stream baseflow has been a subject of interest due to its importance in catchment response and sustainable watershed management. However, to date, in the absence of complex numerical models, baseflow is most commonly estimated using statistically derived empirical approaches that do not directly incorporate physically-meaningful information. On the other hand, Artificial Intelligence (AI) tools such as Genetic Programming (GP) offer unique capabilities to reduce the complexities of hydrological systems without losing relevant physical information. This study presents a simple-to-use empirical equation to estimate baseflow time series using GP so that minimal data is required and physical information is preserved. A groundwater numerical model was first adopted to simulate baseflow for a small semi-urban catchment (0.043 km2) located in Singapore. GP was then used to derive an empirical equation relating baseflow time series to time series of groundwater table fluctuations, which are relatively easily measured and are physically related to baseflow generation. The equation was then generalized for approximating baseflow in other catchments and validated for a larger vegetation-dominated basin located in the US (24 km2). Overall, this study used GP to propose a simple-to-use equation to predict baseflow time series based on only three parameters: minimum daily baseflow of the entire period, area of the catchment and groundwater table fluctuations. It serves as an alternative approach for baseflow estimation in un-gauged systems when only groundwater table and soil information is available, and is thus complementary to other methods that require discharge measurements.
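
    The paper's GP-derived equation is not reproduced in the abstract, so the functional form below is invented purely to show how such a three-parameter empirical predictor would be used, taking the three stated inputs (minimum daily baseflow of the period, catchment area, and groundwater table fluctuations). Treat it as a hypothetical stand-in, not the study's equation.

```python
# Hypothetical GP-style baseflow predictor (form and coefficients invented).
import numpy as np

def baseflow_estimate(q_min, area_km2, gw_rise_m, a=1.0, b=0.8):
    """Baseline plus a fluctuation-driven term; a, b are illustrative."""
    return q_min + a * area_km2 * np.maximum(gw_rise_m, 0.0) ** b

gw = np.array([0.00, 0.05, 0.20, 0.10, 0.00])   # daily water-table rise (m)
print(baseflow_estimate(q_min=0.002, area_km2=0.043, gw_rise_m=gw))
```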

  18. Understanding hind limb lameness signs in horses using simple rigid body mechanics.

    PubMed

    Starke, S D; May, S A; Pfau, T

    2015-09-18

    Hind limb lameness detection in horses relies on the identification of movement asymmetry, which can be based on multiple pelvic landmarks. This study explains the poorly understood relationship between hind limb lameness pointers related to the tubera coxae and sacrum, based on experimental data in the context of a simple rigid-body model. Vertical displacement of the tubera coxae and sacrum was quantified experimentally in 107 horses with varying lameness degrees. A geometrical rigid-body model of pelvis movement during lameness was created in Matlab. Several asymmetry measures were calculated and contrasted. Results showed that model predictions for tubera coxae asymmetry during lameness matched experimental observations closely. Asymmetry for sacrum and comparative tubera coxae movement showed a strong association both empirically (R² ≥ 0.92) and theoretically. We did not find empirical or theoretical evidence for a systematic, pronounced adaptation in the pelvic rotation pattern with increasing lameness. The model showed that the overall range of movement between the tubera coxae does not allow the appreciation of asymmetry changes beyond mild lameness. When evaluating movement relative to the stride cycle, we did find empirical evidence that asymmetry is slightly more visible when comparing tubera coxae amplitudes rather than sacrum amplitudes, although variation exists for mild lameness. In conclusion, the rigidity of the equine pelvis results in tightly linked movement trajectories of different pelvic landmarks. The model allows the explanation of empirical observations in the context of the underlying mechanics, helping to identify potentially limited assessment choices when evaluating gait. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Variance-based selection may explain general mating patterns in social insects.

    PubMed

    Rueppell, Olav; Johnson, Nels; Rychtár, Jan

    2008-06-23

    Female mating frequency is one of the key parameters of social insect evolution. Several hypotheses have been suggested to explain multiple mating and considerable empirical research has led to conflicting results. Building on several earlier analyses, we present a simple general model that links the number of queen matings to variance in colony performance and this variance to average colony fitness. The model predicts selection for multiple mating if the average colony succeeds in a focal task, and selection for single mating if the average colony fails, irrespective of the proximate mechanism that links genetic diversity to colony fitness. Empirical support comes from interspecific comparisons, e.g. between the bee genera Apis and Bombus, and from data on several ant species, but more comprehensive empirical tests are needed.
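
    The core argument can be sketched in a few lines: if multiple mating mainly reduces among-colony variance in performance, its benefit flips sign around the success threshold. A minimal simulation, with all numbers purely illustrative:

        import numpy as np

        def success_prob(mu, sigma, n_matings, threshold, n=100_000, seed=1):
            """P(colony performance exceeds threshold); multiple mating
            reduces among-colony variance by averaging over patrilines."""
            rng = np.random.default_rng(seed)
            perf = rng.normal(mu, sigma / np.sqrt(n_matings), size=n)
            return (perf > threshold).mean()

        # Average colony succeeds (mu > threshold): multiple mating wins
        print(success_prob(1.2, 1.0, 1, 1.0), success_prob(1.2, 1.0, 10, 1.0))
        # Average colony fails (mu < threshold): single mating wins
        print(success_prob(0.8, 1.0, 1, 1.0), success_prob(0.8, 1.0, 10, 1.0))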

  20. An Empirical State Error Covariance Matrix Orbit Determination Example

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. Then it follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether or not the source is anticipated. It is expected that the empirical error covariance matrix will give a better statistical representation of the state error in poorly modeled systems or when sensor performance is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the current problem being studied, a truth model making use of gravity with spherical, J2 and J4 terms plus a standard exponential type atmosphere with simple diurnal and random walk components is used. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation. These scenarios are: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements, with simple normally distributed measurement errors. The sensors are assumed to have full horizon-to-horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses of the data involve only total vectors; no investigation of specific orbital elements is undertaken. The total vector analyses examine the chi-square values of the error in the difference between the estimated state and the true modeled state using both the empirical and theoretical error covariance matrices for each scenario.
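
    A minimal sketch of the underlying idea, assuming a standard batch weighted-least-squares setting (the paper's exact formulation is more detailed): scale the theoretical covariance by the average weighted residual variance, so that unmodeled errors showing up in the residuals inflate the covariance estimate. H, W and the residual vector below are illustrative stand-ins.

        import numpy as np

        def wls_covariances(H, W, residuals):
            """Theoretical WLS state error covariance and an empirical
            variant scaled by the average weighted residual variance
            (a sketch of the idea, not the paper's exact algorithm)."""
            P_theory = np.linalg.inv(H.T @ W @ H)
            m = len(residuals)
            j_avg = (residuals @ W @ residuals) / m   # average performance index
            return P_theory, j_avg * P_theory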

  1. Prediction of Very High Reynolds Number Compressible Skin Friction

    NASA Technical Reports Server (NTRS)

    Carlson, John R.

    1998-01-01

    Flat plate skin friction calculations over a range of Mach numbers from 0.4 to 3.5 at Reynolds numbers from 16 million to 492 million using a Navier-Stokes method with advanced turbulence modeling are compared with incompressible skin friction coefficient correlations. The semi-empirical correlation theories of van Driest; Cope; Winkler and Cha; and Sommer and Short T' are used to transform the predicted skin friction coefficients of solutions using two algebraic Reynolds stress turbulence models in the Navier-Stokes method PAB3D. In general, the predicted skin friction coefficients scaled well with each reference temperature theory, though overall the theory by Sommer and Short appeared to best collapse the predicted coefficients. At the lower Reynolds numbers of 3 to 30 million, both the Girimaji and the Shih, Zhu and Lumley turbulence models predicted skin-friction coefficients within 2% of the semi-empirical correlation skin friction coefficients. At the higher Reynolds numbers of 100 to 500 million, the turbulence models by Shih, Zhu and Lumley and Girimaji predicted coefficients that were 6% less and 10% greater, respectively, than the semi-empirical coefficients.
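
    For readers unfamiliar with reference temperature methods, the sketch below shows one common form of the Sommer and Short T' transformation applied to a standard incompressible turbulent skin-friction law; the property relations (Sutherland viscosity for air, ideal gas at constant pressure) and the incompressible law are illustrative choices, not the paper's exact procedure:

        def sommer_short_t_prime(T_e, M_e, T_w):
            """One common form of the Sommer & Short T' reference
            temperature for turbulent boundary layers."""
            return T_e * (1.0 + 0.035 * M_e**2 + 0.45 * (T_w / T_e - 1.0))

        def mu_air(T):
            """Sutherland viscosity for air, kg/(m s)."""
            return 1.458e-6 * T**1.5 / (T + 110.4)

        def cf_compressible(Re_x, T_e, M_e, T_w):
            """Local turbulent flat-plate skin friction via the reference
            temperature method: evaluate an incompressible law with
            properties taken at T', then rescale by the density ratio."""
            T_p = sommer_short_t_prime(T_e, M_e, T_w)
            rho_ratio = T_e / T_p                        # rho'/rho_e
            Re_p = Re_x * rho_ratio * mu_air(T_e) / mu_air(T_p)
            cf_inc = 0.0592 * Re_p**-0.2                 # incompressible law
            return cf_inc * rho_ratio

        print(cf_compressible(Re_x=1e8, T_e=220.0, M_e=2.0, T_w=330.0))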

  2. Methods for assessing long-term mean pathogen count in drinking water and risk management implications.

    PubMed

    Englehardt, James D; Ashbolt, Nicholas J; Loewenstine, Chad; Gadzinski, Erik R; Ayenu-Prah, Albert Y

    2012-06-01

    Recently, pathogen counts in drinking and source waters were shown theoretically to have the discrete Weibull (DW) or closely related discrete growth distribution (DGD). The result was demonstrated against nine short-term and three simulated long-term water quality datasets. These distributions are highly skewed, such that available datasets seldom represent the rare but important high-count events, making estimation of the long-term mean difficult. In the current work the methods, and data record length, required to assess long-term mean microbial count were evaluated by simulation of representative DW and DGD waterborne pathogen count distributions. Also, microbial count data were analyzed spectrally for correlation and cycles. In general, longer data records were required for more highly skewed distributions, conceptually associated with more highly treated water. In particular, 500-1,000 random samples were required for reliable assessment of the population mean to within ±10%, though 50-100 samples produced an estimate within one log (45% below). A simple correlated first-order model was shown to produce count series with a 1/f signal, and such periodicity over many scales was shown in empirical microbial count data, for consideration in sampling. A tiered management strategy is recommended, including a plan for rapid response to unusual levels of routinely-monitored water quality indicators.
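
    A brief sketch of why long records are needed: discrete Weibull counts can be sampled by inverse transform from P(X >= x) = q^(x^beta), and the running mean of a highly skewed case stabilizes slowly. The parameter values below are hypothetical, chosen only to illustrate the skewness:

        import numpy as np

        def discrete_weibull(q, beta, size, rng):
            """Inverse-transform sampling of the discrete Weibull:
            P(X >= x) = q**(x**beta), for x = 0, 1, 2, ..."""
            u = rng.random(size)
            return np.ceil((np.log(u) / np.log(q)) ** (1.0 / beta)).astype(int) - 1

        rng = np.random.default_rng(0)
        counts = discrete_weibull(q=0.5, beta=0.3, size=1000, rng=rng)
        # Running mean shows how slowly the estimate stabilizes
        running_mean = np.cumsum(counts) / np.arange(1, counts.size + 1)
        print(running_mean[[49, 99, 499, 999]])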

  3. Mining the SDSS-MOC Database for Main-Belt Asteroid Solar Phase Behavior.

    NASA Astrophysics Data System (ADS)

    Truong, Thien-Tin; Hicks, M. D.

    2010-10-01

    The 4th Release of the Sloan Digital Sky Survey Moving Object Catalog (SDSS-MOC) contains 471569 moving object detections from 519 observing runs obtained up to March 2007. Of these, 220101 observations were linked with 104449 known small bodies, with 2150 asteroids sampled at least 10 times. It is our goal to mine this database in order to extract solar phase curve information for a large number of main-belt asteroids of different dynamical and taxonomic classes. We found that a simple linear phase curve fit allowed us to reject data contaminated by intrinsic rotational lightcurves and other effects. As expected, a running mean of solar phase coefficient is strongly correlated with orbital elements, with the inner main-belt dominated by bright S-type asteroids and transitioning to darker C and D-type asteroids with steeper solar phase slopes. We shall fit the empirical H-G model to our 2150 multi-sampled asteroids and correlate these parameters with spectral type derived from the SDSS colors and position within the asteroid belt. Our data should also allow us to constrain solar phase reddening for a variety of taxonomic classes. We shall discuss errors induced by the standard "g=0.15" assumption made in absolute magnitude determination, which may slightly affect number-size distribution models.
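
    The simple linear phase-curve fit mentioned above amounts to a straight-line regression of reduced magnitude on solar phase angle; the observations below are made-up numbers for one hypothetical asteroid:

        import numpy as np

        # Reduced magnitudes at several solar phase angles (illustrative)
        alpha_deg = np.array([2.1, 5.3, 8.7, 12.4, 15.9])
        m_red = np.array([14.02, 14.11, 14.25, 14.38, 14.52])
        beta, H = np.polyfit(alpha_deg, m_red, 1)   # slope (mag/deg), intercept
        print(f"H = {H:.2f}, phase coefficient = {beta:.3f} mag/deg")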

  4. Phylogenetic analysis reveals positive correlations between adaptations to diverse hosts in a group of pathogen-like herbivores.

    PubMed

    Peterson, Daniel A; Hardy, Nate B; Morse, Geoffrey E; Stocks, Ian C; Okusu, Akiko; Normark, Benjamin B

    2015-10-01

    A jack of all trades can be master of none: this intuitive idea underlies most theoretical models of host-use evolution in plant-feeding insects, yet empirical support for trade-offs in performance on distinct host plants is weak. Trade-offs may influence the long-term evolution of host use while being difficult to detect in extant populations, but host-use evolution may also be driven by adaptations for generalism. Here we used host-use data from insect collection records to parameterize a phylogenetic model of host-use evolution in armored scale insects, a large family of plant-feeding insects with a simple, pathogen-like life history. We found that a model incorporating positive correlations between evolutionary changes in host performance best fit the observed patterns of diaspidid presence and absence on nearly all focal host taxa, suggesting that adaptations to particular hosts also enhance performance on other hosts. In contrast to the widely invoked trade-off model, we advocate a "toolbox" model of host-use evolution in which armored scale insects accumulate a set of independent genetic tools, each of which is under selection for a single function but may be useful on multiple hosts. © 2015 The Author(s).

  5. Geopressure modeling from petrophysical data: An example from East Kalimantan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herkommer, M.A.

    1994-07-01

    Localized models of abnormal formation pressure (geopressure) are important economic and safety tools frequently used for well planning and drilling operations. Simplified computer-based procedures have been developed that permit these models to be developed more rapidly and with greater accuracy. These techniques are broadly applicable to basins throughout the world where abnormal formation pressures occur. An example from the Attaka field of East Kalimantan, southeast Asia, shows how geopressure models are developed. Using petrophysical and engineering data, empirical correlations between observed pressure and petrophysical logs can be created by computer-assisted data-fitting techniques. These correlations serve as the basis for models of the geopressure. By performing repeated analyses on wells at various locations, contour maps on the top of abnormal geopressure can be created. Methods that are simple in their development and application make the task of geopressure estimation less formidable to the geologist and petroleum engineer. Further, more accurate estimates can significantly improve drilling speeds while reducing the incidence of stuck pipe, kicks, and blowouts. In general, geopressure estimates are used in all phases of drilling operations: to develop mud plans and specify equipment ratings, to assist in the recognition of geopressured formations and determination of mud weights, and to improve predictions at offset locations and geologically comparable areas.

  6. Potential Improvements to Remote Primary Productivity Estimation in the Southern California Current System

    NASA Astrophysics Data System (ADS)

    Jacox, M.; Edwards, C. A.; Kahru, M.; Rudnick, D. L.; Kudela, R. M.

    2012-12-01

    A 26-year record of depth-integrated primary productivity (PP) in the Southern California Current System (SCCS) is analyzed with the goal of improving satellite-based estimates of net primary productivity. The ratio of integrated primary productivity to surface chlorophyll correlates strongly with surface chlorophyll concentration (chl0). However, chl0 does not correlate with chlorophyll-specific productivity, and appears to be a proxy for vertical phytoplankton distribution rather than phytoplankton physiology. Modest improvements in PP model performance are achieved by tuning existing algorithms for the SCCS, particularly by empirical parameterization of photosynthetic efficiency in the Vertically Generalized Production Model. Much larger improvements are enabled by improving the accuracy of subsurface chlorophyll and light profiles. In a simple vertically resolved production model, substitution of in situ surface data for remote sensing estimates offers only marginal improvements in model r² and total log10 root mean squared difference, while inclusion of in situ chlorophyll and light profiles improves these metrics significantly. Autonomous underwater gliders, capable of measuring subsurface fluorescence on long-term, long-range deployments, significantly improve PP model fidelity in the SCCS. We suggest their use (and that of other autonomous profilers such as Argo floats) in conjunction with satellites as a way forward for improved PP estimation in coastal upwelling systems.

  7. Statistical analysis of co-occurrence patterns in microbial presence-absence datasets.

    PubMed

    Mainali, Kumar P; Bewick, Sharon; Thielen, Peter; Mehoke, Thomas; Breitwieser, Florian P; Paudel, Shishir; Adhikari, Arjun; Wolfe, Joshua; Slud, Eric V; Karig, David; Fagan, William F

    2017-01-01

    Drawing on a long history in macroecology, correlation analysis of microbiome datasets is becoming a common practice for identifying relationships or shared ecological niches among bacterial taxa. However, many of the statistical issues that plague such analyses in macroscale communities remain unresolved for microbial communities. Here, we discuss problems in the analysis of microbial species correlations based on presence-absence data. We focus on presence-absence data because this information is more readily obtainable from sequencing studies, especially for whole-genome sequencing, where abundance estimation is still in its infancy. First, we show how Pearson's correlation coefficient (r) and Jaccard's index (J)-two of the most common metrics for correlation analysis of presence-absence data-can contradict each other when applied to a typical microbiome dataset. In our dataset, for example, 14% of species-pairs predicted to be significantly correlated by r were not predicted to be significantly correlated using J, while 37.4% of species-pairs predicted to be significantly correlated by J were not predicted to be significantly correlated using r. Mismatch was particularly common among species-pairs with at least one rare species (<10% prevalence), explaining why r and J might differ more strongly in microbiome datasets, where there are large numbers of rare taxa. Indeed 74% of all species-pairs in our study had at least one rare species. Next, we show how Pearson's correlation coefficient can result in artificial inflation of positive taxon relationships and how this is a particular problem for microbiome studies. We then illustrate how Jaccard's index of similarity (J) can yield improvements over Pearson's correlation coefficient. However, the standard null model for Jaccard's index is flawed, and thus introduces its own set of spurious conclusions. We thus identify a better null model based on a hypergeometric distribution, which appropriately corrects for species prevalence. This model is available from recent statistics literature, and can be used for evaluating the significance of any value of an empirically observed Jaccard's index. The resulting simple, yet effective method for handling correlation analysis of microbial presence-absence datasets provides a robust means of testing and finding relationships and/or shared environmental responses among microbial taxa.
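
    A sketch of the prevalence-corrected test described above, assuming presence/absence vectors over N samples: the co-occurrence count of two taxa is referred to a hypergeometric null, alongside the Jaccard index itself:

        import numpy as np
        from scipy.stats import hypergeom

        def jaccard_test(x, y):
            """Jaccard index for two presence/absence vectors plus
            hypergeometric p-values for the observed co-occurrence count
            (the prevalence-corrected null discussed in the abstract)."""
            x, y = np.asarray(x, bool), np.asarray(y, bool)
            N, a, b = x.size, x.sum(), y.sum()
            k = (x & y).sum()
            J = k / (a + b - k) if (a + b - k) else 0.0
            p_enrich = hypergeom.sf(k - 1, N, a, b)    # P(K >= k) under null
            p_deplete = hypergeom.cdf(k, N, a, b)      # P(K <= k) under null
            return J, p_enrich, p_deplete

        rng = np.random.default_rng(0)
        print(jaccard_test(rng.random(100) < 0.1, rng.random(100) < 0.3))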

  8. A simple threshold rule is sufficient to explain sophisticated collective decision-making.

    PubMed

    Robinson, Elva J H; Franks, Nigel R; Ellis, Samuel; Okuda, Saki; Marshall, James A R

    2011-01-01

    Decision-making animals can use slow-but-accurate strategies, such as making multiple comparisons, or opt for simpler, faster strategies to find a 'good enough' option. Social animals make collective decisions about many group behaviours including foraging and migration. The key to the collective choice lies with individual behaviour. We present a case study of a collective decision-making process (house-hunting ants, Temnothorax albipennis), in which a previously proposed decision strategy involved both quality-dependent hesitancy and direct comparisons of nests by scouts. An alternative possible decision strategy is that scouting ants use a very simple quality-dependent threshold rule to decide whether to recruit nest-mates to a new site or search for alternatives. We use analytical and simulation modelling to demonstrate that this simple rule is sufficient to explain empirical patterns from three studies of collective decision-making in ants, and can account parsimoniously for apparent comparison by individuals and apparent hesitancy (recruitment latency) effects, when available nests differ strongly in quality. This highlights the need to carefully design experiments to detect individual comparison. We present empirical data strongly suggesting that best-of-n comparison is not used by individual ants, although individual sequential comparisons are not ruled out. However, by using a simple threshold rule, decision-making groups are able to effectively compare options, without relying on any form of direct comparison of alternatives by individuals. This parsimonious mechanism could promote collective rationality in group decision-making.
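
    A minimal simulation of such a threshold rule (all parameters illustrative): scouts assess single nests with perceptual noise and recruit only above a quality threshold, and the better nest tends to accumulate the most recruiters without any individual comparing two nests:

        import numpy as np

        def threshold_choice(qualities, threshold, noise, n_scouts=100, seed=2):
            """Each scout assesses one random nest with perceptual noise
            and recruits iff perceived quality exceeds the threshold; the
            colony 'chooses' the nest gathering the most recruiters."""
            rng = np.random.default_rng(seed)
            nests = rng.integers(len(qualities), size=n_scouts)
            perceived = qualities[nests] + rng.normal(0, noise, n_scouts)
            return np.bincount(nests[perceived > threshold],
                               minlength=len(qualities))

        # Three nests of increasing quality; the best one usually wins
        print(threshold_choice(np.array([0.4, 0.6, 0.9]),
                               threshold=0.5, noise=0.1))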

  9. A simple model of bipartite cooperation for ecological and organizational networks.

    PubMed

    Saavedra, Serguei; Reed-Tsochas, Felix; Uzzi, Brian

    2009-01-22

    In theoretical ecology, simple stochastic models that satisfy two basic conditions about the distribution of niche values and feeding ranges have proved successful in reproducing the overall structural properties of real food webs, using species richness and connectance as the only input parameters. Recently, more detailed models have incorporated higher levels of constraint in order to reproduce the actual links observed in real food webs. Here, building on previous stochastic models of consumer-resource interactions between species, we propose a highly parsimonious model that can reproduce the overall bipartite structure of cooperative partner-partner interactions, as exemplified by plant-animal mutualistic networks. Our stochastic model of bipartite cooperation uses simple specialization and interaction rules, and only requires three empirical input parameters. We test the bipartite cooperation model on ten large pollination data sets that have been compiled in the literature, and find that it successfully replicates the degree distribution, nestedness and modularity of the empirical networks. These properties are regarded as key to understanding cooperation in mutualistic networks. We also apply our model to an extensive data set of two classes of company engaged in joint production in the garment industry. Using the same metrics, we find that the network of manufacturer-contractor interactions exhibits similar structural patterns to plant-animal pollination networks. This surprising correspondence between ecological and organizational networks suggests that the simple rules of cooperation that generate bipartite networks may be generic, and could prove relevant in many different domains, ranging from biological systems to human society.

  10. DUTIR at TREC 2009: Chemical IR Track

    DTIC Science & Technology

    2009-11-01

    We set the Dirichlet prior empirically at 1,500 as recommended in [2]. For example, Topic 15 “Betaines for peripheral arterial disease” is...converted into the following Indri query: #combine( betaines for peripheral arterial disease ) which produces results rank-equivalent to a simple query

  11. Empirical Studies of Patterning

    ERIC Educational Resources Information Center

    Pasnak, Robert

    2017-01-01

    Young children have been taught simple sequences of alternating shapes and colors, referred to as "patterning", for the past half century in the hope that their understanding of pre-algebra and their mathematics achievement would be improved. The evidence that such patterning instruction actually improves children's academic achievement…

  12. Contrast Analysis: A Tutorial

    ERIC Educational Resources Information Center

    Haans, Antal

    2018-01-01

    Contrast analysis is a relatively simple but effective statistical method for testing theoretical predictions about differences between group means against the empirical data. Despite its advantages, contrast analysis is hardly used to date, perhaps because it is not implemented in a convenient manner in many statistical software packages. This…
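
    For concreteness, a planned contrast can be tested in a few lines against the pooled within-group error of a one-way ANOVA; the data below are made up:

        import numpy as np
        from scipy import stats

        def contrast_test(groups, weights):
            """t-test of a planned contrast (weights summing to zero)
            against the pooled within-group error of a one-way ANOVA."""
            w = np.asarray(weights, float)
            means = np.array([np.mean(g) for g in groups])
            ns = np.array([len(g) for g in groups])
            df = ns.sum() - len(groups)
            mse = sum(((np.asarray(g) - m) ** 2).sum()
                      for g, m in zip(groups, means)) / df
            psi = w @ means                           # contrast estimate
            se = np.sqrt(mse * (w ** 2 / ns).sum())
            t = psi / se
            return t, 2 * stats.t.sf(abs(t), df)      # t and two-sided p

        # Does group 3 exceed the average of groups 1 and 2?
        print(contrast_test([[3.1, 2.9, 3.4], [3.0, 3.3, 2.8], [4.0, 4.2, 3.9]],
                            [-0.5, -0.5, 1.0]))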

  13. Digit reversal in children's writing: a simple theory and its empirical validation.

    PubMed

    Fischer, Jean-Paul

    2013-06-01

    This article presents a simple theory according to which the left-right reversal of single digits by 5- and 6-year-old children is mainly due to the application of an implicit right-writing or -orienting rule. A number of nontrivial predictions can be drawn from this theory. First, left-oriented digits (1, 2, 3, 7, and 9) will be reversed more frequently than the other asymmetrical digits (4, 5, and 6). Second, for some pairs of digits, the correct writing of the preceding digit will statistically predict the reversal of the current digit and vice versa. Third, writing hand will have little effect on the frequency of reversals, and the relative frequencies with which children reverse the asymmetrical digits will be similar regardless of children's preferred writing hand. Fourth, children who reverse the left-oriented digits the most are also those who reverse the other asymmetrical digits the least. An empirical study involving 367 5- and 6-year-olds confirmed these predictions. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. Fault identification of rotor-bearing system based on ensemble empirical mode decomposition and self-zero space projection analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Fan; Zhu, Zhencai; Li, Wei; Zhou, Gongbo; Chen, Guoan

    2014-07-01

    Accurately identifying faults in rotor-bearing systems by analyzing vibration signals, which are nonlinear and nonstationary, is challenging. To address this issue, a new approach based on ensemble empirical mode decomposition (EEMD) and self-zero space projection analysis is proposed in this paper. This method seeks to identify faults appearing in a rotor-bearing system using simple algebraic calculations and projection analyses. First, EEMD is applied to decompose the collected vibration signals into a set of intrinsic mode functions (IMFs) from which features are extracted. Second, these extracted features under various mechanical health conditions are used to design a self-zero space matrix according to space projection analysis. Finally, the so-called projection indicators are calculated to identify the rotor-bearing system's faults with simple decision logic. Experiments are implemented to test the reliability and effectiveness of the proposed approach. The results show that this approach can accurately identify faults in rotor-bearing systems.
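
    A sketch of the decomposition step using the third-party PyEMD package (naming that library is an assumption about tooling, not the authors' implementation; the signal is synthetic):

        import numpy as np
        from PyEMD import EEMD   # pip package "EMD-signal"

        rng = np.random.default_rng(0)
        t = np.linspace(0, 1, 1000)
        sig = np.sin(2 * np.pi * 25 * t) + 0.5 * rng.standard_normal(t.size)

        eemd = EEMD(trials=100, noise_width=0.2)   # ensemble size, noise scale
        imfs = eemd.eemd(sig, t)                   # intrinsic mode functions
        print(imfs.shape)                          # (n_imfs, n_samples)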

  15. A Simple Principled Approach for Modeling and Understanding Uniform Color Metrics

    PubMed Central

    Smet, Kevin A.G.; Webster, Michael A.; Whitehead, Lorne A.

    2016-01-01

    An important goal in characterizing human color vision is to order color percepts in a way that captures their similarities and differences. This has resulted in the continuing evolution of “uniform color spaces,” in which the distances within the space represent the perceptual differences between the stimuli. While these metrics are now very successful in predicting how color percepts are scaled, they do so in largely empirical, ad hoc ways, with limited reference to actual mechanisms of color vision. In this article our aim is to instead begin with general and plausible assumptions about color coding, and then develop a model of color appearance that explicitly incorporates them. We show that many of the features of empirically-defined color order systems (such as those of Munsell, Pantone, NCS, and others) as well as many of the basic phenomena of color perception, emerge naturally from fairly simple principles of color information encoding in the visual system and how it can be optimized for the spectral characteristics of the environment. PMID:26974939

  16. Measures against mechanical noise from large wind turbines: A design guide

    NASA Astrophysics Data System (ADS)

    Ljunggren, Sten; Johansson, Melker

    1991-06-01

    The noise generated by the machinery of the two Swedish prototypes contains pure tones, which are very important with respect to the environmental impact. The results of noise measurements carried out at these turbines are discussed; they are meant to serve as a guide for predicting and controlling the noise around a large wind turbine during the design stage. The design targets are discussed, stressing the importance of the audibility of pure tones and not only the annoyance; a simple criterion is cited. The main noise source is the gearbox, and a simple empirical expression for the sound power level is shown to give good agreement with the measurement results. The influence of the gearbox design on the noise is discussed in some detail. Formulas for the prediction of the airborne sound transmission to the ground outside the nacelle are presented, together with a number of empirical data on the sound reduction indices for single and double constructions. The structure-borne noise transmission is discussed.

  17. Social attention with real versus reel stimuli: toward an empirical approach to concerns about ecological validity

    PubMed Central

    Risko, Evan F.; Laidlaw, Kaitlin E. W.; Freeth, Megan; Foulsham, Tom; Kingstone, Alan

    2012-01-01

    Cognitive neuroscientists often study social cognition by using simple but socially relevant stimuli, such as schematic faces or images of other people. Whilst this research is valuable, important aspects of genuine social encounters are absent from these studies, a fact that has recently drawn criticism. In the present review we argue for an empirical approach to the determination of the equivalence of different social stimuli. This approach involves the systematic comparison of different types of social stimuli ranging in their approximation to a real social interaction. In garnering support for this cognitive ethological approach, we focus on recent research in social attention that has involved stimuli ranging from simple schematic faces to real social interactions. We highlight both meaningful similarities and differences in various social attentional phenomena across these different types of social stimuli thus validating the utility of the research initiative. Furthermore, we argue that exploring these similarities and differences will provide new insights into social cognition and social neuroscience. PMID:22654747

  18. Measurement of thermal conductivity and thermal diffusivity using a thermoelectric module

    NASA Astrophysics Data System (ADS)

    Beltrán-Pitarch, Braulio; Márquez-García, Lourdes; Min, Gao; García-Cañadas, Jorge

    2017-04-01

    A proof of concept of using a thermoelectric module to measure both the thermal conductivity and thermal diffusivity of bulk disc samples at room temperature is demonstrated. The method involves the calculation of the integral area from an impedance spectrum, which empirically correlates with the thermal properties of the sample through an exponential relationship. This relationship was obtained employing different reference materials. The impedance spectroscopy measurements are performed in a very simple setup, comprising a thermoelectric module which is soldered at its bottom side to a Cu block (heat sink) and thermally connected with the sample at its top side employing thermal grease. Random and systematic errors of the method were calculated for the thermal conductivity (18.6% and 10.9%, respectively) and thermal diffusivity (14.2% and 14.7%, respectively) employing a BCR724 standard reference material. Although the errors are somewhat high, the technique could be useful in its current state for screening purposes or high-throughput measurements. This method establishes a new application for thermoelectric modules as thermal property sensors. It involves the use of a very simple setup in conjunction with a frequency response analyzer, which provides a low-cost alternative to most of the apparatus currently available on the market. In addition, impedance analyzers are reliable and widely available equipment, which facilitates the sometimes difficult access to thermal conductivity measurements.
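
    A heavily hedged sketch of the integral-area idea: integrate the (negative) imaginary part of the impedance over the real part, then map the area to thermal conductivity through an exponential calibration. The constants a and b are placeholders for a fit against reference materials, not values from the paper:

        import numpy as np

        def thermal_conductivity_from_spectrum(z_re, z_im, a=1.0, b=-0.5):
            """Integral area of the impedance spectrum mapped through a
            hypothetical exponential calibration k = a * exp(b * area)."""
            area = np.trapz(-np.asarray(z_im), np.asarray(z_re))
            return a * np.exp(b * area)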

  19. Crises and Collective Socio-Economic Phenomena: Simple Models and Challenges

    NASA Astrophysics Data System (ADS)

    Bouchaud, Jean-Philippe

    2013-05-01

    Financial and economic history is strewn with bubbles and crashes, booms and busts, crises and upheavals of all sorts. Understanding the origin of these events is arguably one of the most important problems in economic theory. In this paper, we review recent efforts to include heterogeneities and interactions in models of decision. We argue that the so-called Random Field Ising model (RFIM) provides a unifying framework to account for many collective socio-economic phenomena that lead to sudden ruptures and crises. We discuss different models that can capture potentially destabilizing self-referential feedback loops, induced either by herding, i.e. reference to peers, or trending, i.e. reference to the past, and that account for some of the phenomenology missing in the standard models. We discuss some empirically testable predictions of these models, for example robust signatures of RFIM-like herding effects, or the logarithmic decay of spatial correlations of voting patterns. One of the most striking results, inspired by statistical physics methods, is that Adam Smith's invisible hand can fail badly at solving simple coordination problems. We also insist on the issue of time-scales, which can be extremely long in some cases and prevent socially optimal equilibria from being reached. As a theoretical challenge, the study of so-called "detailed-balance" violating decision rules is needed to decide whether conclusions based on current models (that all assume detailed balance) are indeed robust and generic.
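
    A minimal sketch of RFIM-style decision dynamics under the interpretation above (idiosyncratic private fields plus imitation of the average choice); the coupling J, the external field h and all values are illustrative:

        import numpy as np

        def rfim_dynamics(N=1000, J=1.2, h=0.0, steps=50, seed=0):
            """Agents repeatedly set s_i = sign(f_i + J*m + h), where f_i
            is a heterogeneous private opinion and m the average choice;
            strong imitation (large J) produces abrupt collective shifts."""
            rng = np.random.default_rng(seed)
            f = rng.standard_normal(N)      # idiosyncratic random fields
            s = np.sign(f)
            for _ in range(steps):
                m = s.mean()
                s = np.sign(f + J * m + h)
            return s.mean()

        # A small change in the external field can flip the whole population
        print(rfim_dynamics(h=0.05), rfim_dynamics(h=-0.05))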

  20. An empirical description of the dispersion of 5th and 95th percentiles in worldwide anthropometric data applied to estimating accommodation with unknown correlation values.

    PubMed

    Albin, Thomas J; Vink, Peter

    2015-01-01

    Anthropometric data are assumed to have a Gaussian (Normal) distribution, but if they are non-Gaussian, accommodation estimates are affected. When data are limited, users may choose to combine anthropometric elements by Combining Percentiles (CP) (adding or subtracting), despite known adverse effects. This study examined whether global anthropometric data are Gaussian distributed. It compared the Median Correlation Method (MCM) of combining anthropometric elements with unknown correlations to CP, to determine whether MCM provides better estimates of percentile values and accommodation. Percentile values of 604 male and female anthropometric data drawn from seven countries worldwide were expressed as standard scores. The standard scores were tested to determine whether they were consistent with a Gaussian distribution. Empirical multipliers for determining percentile values were developed. In a test case, five anthropometric elements descriptive of seating were combined in addition and subtraction models. Percentile values were estimated for each model by CP, MCM with Gaussian distributed data, or MCM with empirically distributed data. The 5th and 95th percentile values of a dataset of global anthropometric data are shown to be asymmetrically distributed. MCM with empirical multipliers gave more accurate estimates of 5th and 95th percentile values. Anthropometric data are not Gaussian distributed. The MCM method is more accurate than adding or subtracting percentiles.
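
    The variance algebra behind combining two correlated elements is simple; in the sketch below a Gaussian quantile multiplier is used, the assumed correlation r stands in for the median correlation, and the paper's empirical (asymmetric) multipliers would replace the Gaussian z for non-Gaussian data. All numbers are illustrative:

        import numpy as np
        from scipy.stats import norm

        def combine_percentile(means, sds, r, p=0.95, z=None):
            """Approximate percentile p of the sum of two anthropometric
            elements, using an assumed correlation r between them."""
            z = norm.ppf(p) if z is None else z
            s = np.sqrt(sds[0]**2 + sds[1]**2 + 2 * r * sds[0] * sds[1])
            return means[0] + means[1] + z * s

        # Versus naive percentile addition, which overestimates unless r = 1
        print(combine_percentile([60.0, 25.0], [3.0, 2.0], r=0.4))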

  1. Jet Aeroacoustics: Noise Generation Mechanism and Prediction

    NASA Technical Reports Server (NTRS)

    Tam, Christopher

    1998-01-01

    This report covers the third year research effort of the project. The research work focussed on the fine scale mixing noise of both subsonic and supersonic jets and the effects of nozzle geometry and tabs on subsonic jet noise. In publication 1, a new semi-empirical theory of jet mixing noise from fine scale turbulence is developed. By an analogy to gas kinetic theory, it is shown that the source of noise is related to the time fluctuations of the turbulence kinetic energy. Starting from the Reynolds Averaged Navier-Stokes equations, a formula for the radiated noise is derived. An empirical model of the space-time correlation function of the turbulence kinetic energy is adopted. The form of the model is in good agreement with the space-time two-point velocity correlation function measured by Davies and coworkers. The parameters of the correlation are related to the parameters of the k-epsilon turbulence model; thus the theory is self-contained. Extensive comparisons between the noise spectra computed from the theory and experimental measurements have been carried out. The parameters include jet Mach numbers from 0.3 to 2.0 and temperature ratios from 1.0 to 4.8. Excellent agreement is found in the spectrum shape, noise intensity and directivity. It is envisaged that the theory would supersede all semi-empirical and totally empirical jet noise prediction methods in current use.

  2. Electronic structure and glass forming ability in early and late transition metal alloys

    NASA Astrophysics Data System (ADS)

    Babić, E.; Ristić, R.; Figueroa, I. A.; Pajić, D.; Skoko, Ž.; Zadro, K.

    2018-03-01

    A correlation between the change in magnetic susceptibility (Δχexp) upon crystallisation of Cu-Zr and Cu-Hf metallic glasses (MGs) and their glass forming ability (GFA), observed recently, is found to apply to Cu-Ti and Zr-Ni alloys, too. In particular, a small Δχexp, which reflects similar electronic structures (ES) of glassy and corresponding crystalline alloys, corresponds to high GFA. Here, we studied Δχexp for five Cu-Ti and four Cu-Zr and Ni-Zr MGs. The fully crystalline final state of all alloys was verified from X-ray diffraction patterns. The variation of GFA with composition in Cu-Ti, Cu-Zr and Cu-Hf MGs was established from the variation of the corresponding critical casting thickness, dc. Due to the absence of data for dc in Ni-Zr MGs, their GFA was described using empirical criteria, such as the reduced glass transition temperature. A very good correlation between Δχexp and dc (and/or other criteria for GFA) was observed for all alloys studied. The correlation between the ES and GFA showed up best for Cu-Zr and NiZr2 alloys, where direct data for the change in ES (ΔES) upon crystallisation are available. The applicability of the Δχexp (ΔES) criterion for high GFA (which provides a simple way to select compositions with high GFA) to other metal-metal MGs (including ternary and multicomponent bulk MGs) is briefly discussed.

  3. Prediction of shear wave velocity using empirical correlations and artificial intelligence methods

    NASA Astrophysics Data System (ADS)

    Maleki, Shahoo; Moradzadeh, Ali; Riabi, Reza Ghavami; Gholami, Raoof; Sadeghzadeh, Farhad

    2014-06-01

    A good understanding of the mechanical properties of rock formations is essential during the development and production phases of a hydrocarbon reservoir. Conventionally, these properties are estimated from petrophysical logs, with compression and shear sonic data being the main input to the correlations. However, in many cases shear sonic data are not acquired during well logging, often for cost-saving reasons. In such cases, shear wave velocity is estimated using available empirical correlations or artificial intelligence methods proposed during the last few decades. In this paper, petrophysical logs corresponding to a well drilled in the southern part of Iran were used to estimate the shear wave velocity using empirical correlations as well as two robust artificial intelligence methods known as Support Vector Regression (SVR) and Back-Propagation Neural Network (BPNN). Although the results obtained by SVR seem to be reliable, the estimated values are not very precise, and considering the importance of shear sonic data as the input to different models, this study suggests acquiring shear sonic data during well logging. It is important to note that the benefits of having reliable shear sonic data for the estimation of rock formation mechanical properties will compensate for the possible additional costs of acquiring a shear log.

  4. Retaining Early Childhood Education Workers: A Review of the Empirical Literature

    ERIC Educational Resources Information Center

    Totenhagen, Casey J.; Hawkins, Stacy Ann; Casper, Deborah M.; Bosch, Leslie A.; Hawkey, Kyle R.; Borden, Lynne M.

    2016-01-01

    Low retention in the child care workforce is a persistent challenge that has been associated with negative outcomes for children, staff, and centers. This article reviews the empirical literature, identifying common correlates or predictors of retention for child care workers. Searches were conducted using several databases, and articles that…

  5. Empirical data and the variance-covariance matrix for the 1969 Smithsonian Standard Earth (2)

    NASA Technical Reports Server (NTRS)

    Gaposchkin, E. M.

    1972-01-01

    The empirical data used in the 1969 Smithsonian Standard Earth (2) are presented. The variance-covariance matrix, or the normal equations, used for correlation analysis, are considered. The format and contents of the matrix, available on magnetic tape, are described and a sample printout is given.

  6. Tracer kinetics of forearm endothelial function: comparison of an empirical method and a quantitative modeling technique.

    PubMed

    Zhao, Xueli; Arsenault, Andre; Lavoie, Kim L; Meloche, Bernard; Bacon, Simon L

    2007-01-01

    Forearm Endothelial Function (FEF) is a marker that has been shown to discriminate patients with cardiovascular disease (CVD). FEF has been assessed using several parameters: the Rate of Uptake Ratio (RUR), EWUR (Elbow-to-Wrist Uptake Ratio) and EWRUR (Elbow-to-Wrist Relative Uptake Ratio). However, modeling FEF requires more robust models. The present study was designed to compare an empirical method with quantitative modeling techniques to better estimate the physiological parameters and understand the complex dynamic processes. The fitted time activity curves of the forearms, estimating blood and muscle components, were assessed using both an empirical method and a two-compartment model. Correlational analyses suggested a good correlation between the methods for RUR (r=.90) and EWUR (r=.79), but not for EWRUR (r=.34); however, Bland-Altman plots found poor agreement between the methods for all 3 parameters. These results indicate that there is a large discrepancy between the empirical and computational methods for FEF. Further work is needed to establish the physiological and mathematical validity of the 2 modeling methods.

  7. Stable distribution and long-range correlation of Brent crude oil market

    NASA Astrophysics Data System (ADS)

    Yuan, Ying; Zhuang, Xin-tian; Jin, Xiu; Huang, Wei-qiang

    2014-11-01

    An empirical study of stable distribution and long-range correlation in the Brent crude oil market is presented. First, it is found that the empirical distribution of Brent crude oil returns can be fitted well by a stable distribution, which is significantly different from a normal distribution. Second, detrended fluctuation analysis of the Brent crude oil returns shows that there is long-range correlation in the returns, implying that there are patterns or trends in returns that persist over time. Third, the detrended fluctuation analysis also shows that after the 2008 financial crisis the Brent crude oil market became more persistent, implying that the crisis increased the frequency and strength of the interdependence and correlations between the financial time series. All of these findings may be used to improve current fractal theories.
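
    Detrended fluctuation analysis itself is compact; below is a minimal first-order DFA, where a fitted exponent above 0.5 indicates the persistence discussed in the abstract (white noise gives roughly 0.5):

        import numpy as np

        def dfa(x, scales):
            """First-order DFA: slope of log F(n) vs log n."""
            y = np.cumsum(x - np.mean(x))            # integrated profile
            F = []
            for n in scales:
                rms = []
                for i in range(len(y) // n):
                    seg = y[i*n:(i+1)*n]
                    t = np.arange(n)
                    trend = np.polyval(np.polyfit(t, seg, 1), t)
                    rms.append(np.mean((seg - trend) ** 2))
                F.append(np.sqrt(np.mean(rms)))      # fluctuation at scale n
            return np.polyfit(np.log(scales), np.log(F), 1)[0]

        rng = np.random.default_rng(0)
        print(dfa(rng.standard_normal(10_000), [16, 32, 64, 128, 256]))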

  8. On the galaxy-halo connection in the EAGLE simulation

    NASA Astrophysics Data System (ADS)

    Desmond, Harry; Mao, Yao-Yuan; Wechsler, Risa H.; Crain, Robert A.; Schaye, Joop

    2017-10-01

    Empirical models of galaxy formation require assumptions about the correlations between galaxy and halo properties. These may be calibrated against observations or inferred from physical models such as hydrodynamical simulations. In this Letter, we use the EAGLE simulation to investigate the correlation of galaxy size with halo properties. We motivate this analysis by noting that the common assumption of angular momentum partition between baryons and dark matter in rotationally supported galaxies overpredicts both the spread in the stellar mass-size relation and the anticorrelation of size and velocity residuals, indicating a problem with the galaxy-halo connection it implies. We find the EAGLE galaxy population to perform significantly better on both statistics, and trace this success to the weakness of the correlations of galaxy size with halo mass, concentration and spin at fixed stellar mass. Using these correlations in empirical models will enable fine-grained aspects of galaxy scalings to be matched.

  9. Rapid correction of electron microprobe data for multicomponent metallic systems

    NASA Technical Reports Server (NTRS)

    Gupta, K. P.; Sivakumar, R.

    1973-01-01

    This paper describes an empirical relation for the correction of electron microprobe data for multicomponent metallic systems. It evaluates the empirical correction parameter, a, for each element in a binary alloy system using a modification of Colby's MAGIC III computer program, and outlines a simple and quick way of correcting the probe data. This technique has been tested on a number of multicomponent metallic systems, and the agreement with the results obtained using theoretical expressions is found to be excellent. Limitations and suitability of this relation are discussed, and a model calculation is also presented in the Appendix.
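
    A sketch in the spirit of empirical alpha-factor correction schemes common in the microprobe literature (the paper's exact relation and parameters come from its modified MAGIC III program; the factor matrix and k-ratios below are hypothetical):

        import numpy as np

        def alpha_correction(k_ratios, alpha, iters=20):
            """Iterate C_i = K_i * sum_j(alpha_ij * C_j), renormalising
            each pass so the concentrations describe a full analysis."""
            K = np.asarray(k_ratios, float)
            C = K / K.sum()
            for _ in range(iters):
                C = K * (alpha @ C)
                C /= C.sum()
            return C

        # Hypothetical two-element system: measured k-ratios and factors
        K = np.array([0.45, 0.52])
        A = np.array([[1.00, 0.12],
                      [0.08, 1.00]])
        print(alpha_correction(K, A))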

  10. A Simple PB/LIE Free Energy Function Accurately Predicts the Peptide Binding Specificity of the Tiam1 PDZ Domain.

    PubMed

    Panel, Nicolas; Sun, Young Joo; Fuentes, Ernesto J; Simonson, Thomas

    2017-01-01

    PDZ domains generally bind short amino acid sequences at the C-terminus of target proteins, and short peptides can be used as inhibitors or model ligands. Here, we used experimental binding assays and molecular dynamics simulations to characterize 51 complexes involving the Tiam1 PDZ domain and to test the performance of a semi-empirical free energy function. The free energy function combined a Poisson-Boltzmann (PB) continuum electrostatic term, a van der Waals interaction energy, and a surface area term. Each term was empirically weighted, giving a Linear Interaction Energy or "PB/LIE" free energy. The model yielded a mean unsigned deviation of 0.43 kcal/mol and a Pearson correlation of 0.64 between experimental and computed free energies, which was superior to a Null model that assumes all complexes have the same affinity. Analyses of the models support several experimental observations that indicate the orientation of the α2 helix is a critical determinant for peptide specificity. The models were also used to predict binding free energies for nine new variants, corresponding to point mutants of the Syndecan1 and Caspr4 peptides. The predictions did not reveal improved binding; however, they suggest that an unnatural amino acid could be used to increase protease resistance and peptide lifetimes in vivo. The overall performance of the model should allow its use in the design of new PDZ ligands in the future.
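
    The PB/LIE weighting step reduces to an ordinary least-squares fit of three energy terms (plus an offset) to measured affinities; the sketch below uses random placeholder numbers in place of the 51 complexes' computed terms and experimental values:

        import numpy as np

        rng = np.random.default_rng(0)
        terms = rng.normal(size=(51, 3))               # [E_PB, E_vdW, SASA]
        dG_exp = terms @ [0.2, 0.4, 0.1] + rng.normal(0, 0.4, 51)

        X = np.column_stack([terms, np.ones(51)])      # add constant offset
        weights, *_ = np.linalg.lstsq(X, dG_exp, rcond=None)
        dG_pred = X @ weights
        r = np.corrcoef(dG_exp, dG_pred)[0, 1]         # Pearson correlation
        print(weights, r)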

  11. Role of Demographic Dynamics and Conflict in the Population-Area Relationship for Human Languages

    PubMed Central

    Manrubia, Susanna C.; Axelsen, Jacob B.; Zanette, Damián H.

    2012-01-01

    Many patterns displayed by the distribution of human linguistic groups are similar to the ecological organization described for biological species. It remains a challenge to identify simple and meaningful processes that describe these patterns. The population size distribution of human linguistic groups, for example, is well fitted by a log-normal distribution that may arise from stochastic demographic processes. As we show in this contribution, the distribution of the area size of home ranges of those groups also agrees with a log-normal function. Further, size and area are significantly correlated: the number of speakers S and the area A spanned by linguistic groups follow the allometric relation A ∝ S^γ, with an exponent γ varying across different world regions. The empirical evidence presented leads to the hypothesis that the distributions of S and A, and their mutual dependence, rely on demographic dynamics and on the result of conflicts over territory due to group growth. To substantiate this point, we introduce a two-variable stochastic multiplicative model whose analytical solution recovers the empirical observations. Applied to different world regions, the model reveals that the retreat in home range is sublinear with respect to the decrease in population size, and that the population-area exponent γ grows with the typical strength of conflicts. While the shape of the population size and area distributions, and their allometric relation, seem unavoidable outcomes of demography and inter-group contact, the precise value of γ could give insight on the cultural organization of those human groups in the last thousand years. PMID:22815726
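
    Estimating such an allometric exponent is a one-line log-log regression; the synthetic data below stand in for the empirical group sizes and areas:

        import numpy as np

        def allometric_exponent(population, area):
            """Fit A ∝ S^gamma by least squares in log-log space."""
            gamma, log_c = np.polyfit(np.log(population), np.log(area), 1)
            return gamma, np.exp(log_c)

        rng = np.random.default_rng(0)
        S = rng.lognormal(8, 2, 500)                     # synthetic group sizes
        A = 0.5 * S**0.8 * rng.lognormal(0, 0.3, 500)    # synthetic areas
        print(allometric_exponent(S, A))                 # gamma close to 0.8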

  12. Livestock Helminths in a Changing Climate: Approaches and Restrictions to Meaningful Predictions

    PubMed Central

    Fox, Naomi J.; Marion, Glenn; Davidson, Ross S.; White, Piran C. L.; Hutchings, Michael R.

    2012-01-01

    Simple Summary: Parasitic helminths represent one of the most pervasive challenges to livestock, and their intensity and distribution will be influenced by climate change. There is a need for long-term predictions to identify potential risks and highlight opportunities for control. We explore the approaches to modelling future helminth risk to livestock under climate change. One of the limitations to model creation is the lack of purpose-driven data collection. We also conclude that models need to include a broad view of the livestock system to generate meaningful predictions.

    Abstract: Climate change is a driving force for livestock parasite risk. This is especially true for helminths including the nematodes Haemonchus contortus, Teladorsagia circumcincta, Nematodirus battus, and the trematode Fasciola hepatica, since survival and development of free-living stages is chiefly affected by temperature and moisture. The paucity of long term predictions of helminth risk under climate change has driven us to explore optimal modelling approaches and identify current bottlenecks to generating meaningful predictions. We classify approaches as correlative or mechanistic, exploring their strengths and limitations. Climate is one aspect of a complex system and, at the farm level, husbandry has a dominant influence on helminth transmission. Continuing environmental change will necessitate the adoption of mitigation and adaptation strategies in husbandry. Long term predictive models need to have the architecture to incorporate these changes. Ultimately, an optimal modelling approach is likely to combine mechanistic processes and physiological thresholds with correlative bioclimatic modelling, incorporating changes in livestock husbandry and disease control. Irrespective of approach, the principal limitation to parasite predictions is the availability of active surveillance data and empirical data on physiological responses to climate variables. By combining improved empirical data and refined models with a broad view of the livestock system, robust projections of helminth risk can be developed. PMID:26486780

  13. A commentary on perception-action relationships in spatial display instruments

    NASA Technical Reports Server (NTRS)

    Shebilske, Wayne L.

    1989-01-01

    Transfer of information across disciplines is promoted, while basic and applied researchers are cautioned about the danger of assuming simple relationships between stimulus information, perceptual impressions, and performance, including pattern recognition and sensorimotor skills. A theoretical and empirical foundation for predicting those relationships was developed.

  14. Education and Work

    ERIC Educational Resources Information Center

    Trostel, Philip; Walker, Ian

    2006-01-01

    This paper examines the relationship between the incentives to work and to invest in human capital through education in a lifecycle optimizing model. These incentives are shown to be mutually reinforcing in a simple stylized model. This theoretical prediction is investigated empirically using three large micro datasets covering a broad range of…

  15. Evaporation and transpiration

    Treesearch

    Robert R. Ziemer

    1979-01-01

    For years, the principal objective of evapotranspiration research has been to calculate the loss of water under varying conditions of climate, soil, and vegetation. The early simple empirical methods have generally been replaced by more detailed models which more closely represent the physical and biological processes involved. Monteith's modification of the...

  16. Literature review: simple test method for possible use in predicting the fatigue of asphaltic concrete.

    DOT National Transportation Integrated Search

    1975-01-01

    It has been recognized for many years that fatigue is one of many mechanisms by which asphaltic concrete pavements fail. Experience and empirical design procedures such as those developed by Marshall and Hveem have enabled engineers to design-mixture...

  17. Data mining in forecasting PVT correlations of crude oil systems based on Type-1 fuzzy logic inference systems

    NASA Astrophysics Data System (ADS)

    El-Sebakhy, Emad A.

    2009-09-01

    Pressure-volume-temperature (PVT) properties are very important in reservoir engineering computations. There are many approaches for predicting various PVT properties based on empirical correlations and statistical regression models. Over the last decade, researchers have utilized neural networks to develop more accurate PVT correlations. These achievements have opened the door for data mining techniques to play a major role in the oil and gas industry. Unfortunately, the developed neural network correlations are often limited, and global correlations are usually less accurate compared to local correlations. Recently, adaptive neuro-fuzzy inference systems have been proposed as a new intelligence framework for both prediction and classification based on a fuzzy clustering optimization criterion and ranking. This paper proposes neuro-fuzzy inference systems for estimating PVT properties of crude oil systems. This framework is an efficient hybrid machine learning scheme for modeling the kind of uncertainty associated with vagueness and imprecision. We briefly describe the learning steps and the use of the Takagi-Sugeno-Kang model and the Gustafson-Kessel clustering algorithm with K detected clusters from the given database. The approach has featured in a wide range of medical, power control system, and business applications, often with promising results. A comparative study is carried out to compare the performance of this new framework with the most popular modeling techniques, such as neural networks, nonlinear regression, and empirical correlation algorithms. The results show that neuro-fuzzy systems are accurate, reliable, and outperform most of the existing forecasting techniques. Future work can be achieved by using neuro-fuzzy systems for clustering 3D seismic data, identification of lithofacies types, and other reservoir characterization tasks.
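
    To make the inference step concrete, a minimal first-order Takagi-Sugeno-Kang predictor is sketched below; in a real identification the rule centers, widths and consequent coefficients would come from Gustafson-Kessel clustering and least squares, whereas here they are placeholders:

        import numpy as np

        def tsk_predict(x, centers, sigmas, coefs):
            """First-order TSK inference: Gaussian rule activations
            weight local linear models y_r = a_r . x + b_r."""
            x = np.atleast_2d(x)                               # (n, d)
            d2 = ((x[:, None, :] - centers) / sigmas) ** 2     # (n, r, d)
            w = np.exp(-0.5 * d2.sum(axis=2))                  # firing strengths
            w /= w.sum(axis=1, keepdims=True)                  # normalise
            y_local = x @ coefs[:, :-1].T + coefs[:, -1]       # (n, r)
            return (w * y_local).sum(axis=1)

        centers = np.array([[0.0], [1.0]])                     # rule centers
        sigmas = np.array([[0.5], [0.5]])                      # rule widths
        coefs = np.array([[1.0, 0.0], [-1.0, 2.0]])            # [slope, intercept]
        print(tsk_predict([[0.2], [0.8]], centers, sigmas, coefs))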

  18. Pleiotropy of cardiometabolic syndrome with obesity-related anthropometric traits determined using empirically derived kinships from the Busselton Health Study.

    PubMed

    Cadby, Gemma; Melton, Phillip E; McCarthy, Nina S; Almeida, Marcio; Williams-Blangero, Sarah; Curran, Joanne E; VandeBerg, John L; Hui, Jennie; Beilby, John; Musk, A W; James, Alan L; Hung, Joseph; Blangero, John; Moses, Eric K

    2018-01-01

    Over two billion adults are overweight or obese and therefore at an increased risk of cardiometabolic syndrome (CMS). Obesity-related anthropometric traits genetically correlated with CMS may provide insight into CMS aetiology. The aim of this study was to utilise an empirically derived genetic relatedness matrix to calculate heritabilities and genetic correlations between CMS and anthropometric traits to determine whether they share genetic risk factors (pleiotropy). We used genome-wide single nucleotide polymorphism (SNP) data on 4671 Busselton Health Study participants. Exploiting both known and unknown relatedness, empirical kinship probabilities were estimated using these SNP data. General linear mixed models implemented in SOLAR were used to estimate narrow-sense heritabilities (h²) and genetic correlations (rg) between 15 anthropometric and 9 CMS traits. Anthropometric traits were adjusted by body mass index (BMI) to determine whether the observed genetic correlation was independent of obesity. After adjustment for multiple testing, all CMS and anthropometric traits were significantly heritable (h² range 0.18-0.57). We identified 50 significant genetic correlations (rg range: -0.37 to 0.75) between CMS and anthropometric traits. Five genetic correlations remained significant after adjustment for BMI [high density lipoprotein cholesterol (HDL-C) and waist-hip ratio; triglycerides and waist-hip ratio; triglycerides and waist-height ratio; non-HDL-C and waist-height ratio; insulin and iliac skinfold thickness]. This study provides evidence for the presence of potentially pleiotropic genes that affect both anthropometric and CMS traits, independently of obesity.

  20. A simple marriage model for the power-law behaviour in the frequency distributions of family names

    NASA Astrophysics Data System (ADS)

    Wu, Hao-Yun; Chou, Chung-I.; Tseng, Jie-Jun

    2011-01-01

    In many countries, the frequency distributions of family names are found to decay as a power law with an exponent ranging from 1.0 to 2.2. In this work, we propose a simple marriage model which can reproduce this power-law behaviour. Our model, based on the evolution of families, consists of the growth of big families and the formation of new families. Preliminary results from the model show that the name distributions are in good agreement with empirical data from Taiwan and Norway.
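
    A minimal Yule-Simon-style sketch in the spirit of the model (growth of big families plus formation of new families; the paper's actual marriage dynamics differ):

```python
import random
from collections import Counter

def simulate_names(steps=100_000, p_new=0.02, seed=1):
    """Toy family-name dynamics: with probability p_new a new family
    name is founded; otherwise a child takes the name of a uniformly
    chosen individual, so big families grow preferentially."""
    rng = random.Random(seed)
    population = [0]        # family id of each individual
    next_family = 1
    for _ in range(steps):
        if rng.random() < p_new:
            population.append(next_family)  # formation of a new family
            next_family += 1
        else:
            population.append(rng.choice(population))
    return Counter(population)

family_sizes = sorted(simulate_names().values(), reverse=True)
print(family_sizes[:10])    # heavy-tailed family-size distribution
```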

  1. [A simple testing installation for the production of aerosols with constant bacteria-contaminated concentrations].

    PubMed

    Herbst, M; Lehmhus, H; Oldenburg, B; Orlowski, C; Ohgke, H

    1983-04-01

    A simple experimental setup for the production and investigation of bacterially contaminated solid-state aerosols with constant concentration is described. The setup consists mainly of a fluidized-bed particle generator within a chamber modified for formaldehyde disinfection. The specific conditions needed to produce a defined concentration of particles and microorganisms must be determined empirically. In a first application, the aerosol sizing of an Andersen sampler is investigated. The findings of Andersen (1) are confirmed under our experimental conditions.

  2. Competition in health insurance markets: limitations of current measures for policy analysis.

    PubMed

    Scanlon, Dennis P; Chernew, Michael; Swaminathan, Shailender; Lee, Woolton

    2006-12-01

    Health care reform proposals often rely on increased competition in health insurance markets to drive improved performance in health care costs, access, and quality. We examine a range of data issues related to the measures of health insurance competition used in empirical studies published from 1994-2004. The literature relies exclusively on market structure and penetration variables to measure competition. While these measures are correlated, the degree of correlation is modest, suggesting that choice of measure could influence empirical results. Moreover, certain measurement issues such as the lack of data on PPO enrollment, the treatment of small firms, and omitted market characteristics also could affect the conclusions in empirical studies. Importantly, other types of measures related to competition (e.g., the availability of information on price and outcomes, degree of entry barriers, etc.) are important from both a theoretical and policy perspective, but their impact on market outcomes has not been widely studied.

  3. Experimental investigation of heat transfer coefficient of mini-channel PCHE (printed circuit heat exchanger)

    NASA Astrophysics Data System (ADS)

    Kwon, Dohoon; Jin, Lingxue; Jung, WooSeok; Jeong, Sangkwon

    2018-06-01

    The heat transfer coefficient of a mini-channel printed circuit heat exchanger (PCHE) with a counter-flow configuration is investigated. The PCHE used in the experiments has two layers (10 channels per layer) and a hydraulic diameter of 1.83 mm. Experiments are conducted under various cryogenic heat transfer conditions: single-phase, boiling, and condensation heat transfer. Heat transfer coefficients from each experiment are presented and compared with established correlations. For the single-phase experiment, an empirical correlation in the form of a modified Dittus-Boelter correlation is proposed, which predicts the experimental results within 5% error over the Reynolds number range from 8500 to 17,000. In the boiling experiment, film boiling occurred dominantly due to the large temperature difference between the hot-side and cold-side fluids; an empirical correlation is proposed that predicts the experimental results within 20% error over the Reynolds number range from 2100 to 2500. For the condensation experiment, an empirical correlation in the form of a modified Akers correlation is proposed, which predicts the experimental results within 10% error over the Reynolds number range from 3100 to 6200.
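
    For reference, a sketch of the classic Dittus-Boelter form that the single-phase correlation modifies; the coefficients below are the textbook values, not the paper's fitted ones:

```python
def htc_dittus_boelter(Re, Pr, k, D_h, heating=True):
    """Classic Dittus-Boelter correlation for turbulent single-phase
    flow in smooth channels: Nu = 0.023 Re^0.8 Pr^n, with n = 0.4 for
    heating and 0.3 for cooling; h = Nu * k / D_h."""
    n = 0.4 if heating else 0.3
    Nu = 0.023 * Re**0.8 * Pr**n
    return Nu * k / D_h     # heat transfer coefficient, W/(m^2 K)

# Example: helium-like gas in a 1.83 mm hydraulic-diameter channel
print(htc_dittus_boelter(Re=10_000, Pr=0.68, k=0.15, D_h=1.83e-3))
```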

  4. Prediction of friction coefficients for gases

    NASA Technical Reports Server (NTRS)

    Taylor, M. F.

    1969-01-01

    Empirical relations are used to correlate laminar and turbulent friction coefficients for gases with large variations in physical properties flowing through smooth tubes. These relations have been used to correlate friction coefficients for hydrogen, helium, nitrogen, carbon dioxide, and air.
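
    A sketch of the baseline smooth-tube friction factor relations that such property-corrected correlations start from (the property-ratio corrections themselves are not given in the abstract and are omitted here):

```python
def darcy_friction_factor(Re):
    """Baseline smooth-tube Darcy friction factor: laminar f = 64/Re;
    turbulent Blasius f = 0.316 Re^-0.25 (roughly 4e3 < Re < 1e5)."""
    if Re < 2300.0:
        return 64.0 / Re
    return 0.316 * Re**-0.25

print(darcy_friction_factor(50_000))
```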

  5. Semi-empirical correlation for binary interaction parameters of the Peng-Robinson equation of state with the van der Waals mixing rules for the prediction of high-pressure vapor-liquid equilibrium.

    PubMed

    Fateen, Seif-Eddeen K; Khalil, Menna M; Elnabawy, Ahmed O

    2013-03-01

    The Peng-Robinson equation of state is widely used with the classical van der Waals mixing rules to predict vapor-liquid equilibria for systems containing hydrocarbons and related compounds. This model requires good values of the binary interaction parameter kij. In this work, we developed a semi-empirical correlation for kij partly based on the Huron-Vidal mixing rules. We obtained values for the adjustable parameters of the developed formula for over 60 binary systems and over 10 categories of components. The predictions of the new equation system were slightly better than those of the constant-kij model in most cases, and for 10 systems the predictions were considerably improved with the new correlation.
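
    The classical mixing rules in question, sketched for a kij matrix of given values (the paper's semi-empirical kij formula itself is not reproduced here):

```python
import numpy as np

def vdw_mixing(a, b, x, kij):
    """Classical van der Waals one-fluid mixing rules for a cubic EOS:
    a_mix = sum_ij x_i x_j sqrt(a_i a_j) (1 - k_ij),
    b_mix = sum_i x_i b_i."""
    a, b, x = np.asarray(a, float), np.asarray(b, float), np.asarray(x, float)
    aij = np.sqrt(np.outer(a, a)) * (1.0 - np.asarray(kij, float))
    return x @ aij @ x, x @ b

# Illustrative binary with a single interaction parameter k12
a_mix, b_mix = vdw_mixing(a=[2.3, 1.1], b=[2.7e-5, 4.1e-5],
                          x=[0.4, 0.6], kij=[[0.0, 0.05], [0.05, 0.0]])
print(a_mix, b_mix)
```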

  6. Kolmogorov-Smirnov test for spatially correlated data

    USGS Publications Warehouse

    Olea, R.A.; Pawlowsky-Glahn, V.

    2009-01-01

    The Kolmogorov-Smirnov test is a convenient method for investigating whether two underlying univariate probability distributions can be regarded as indistinguishable from each other or whether an underlying probability distribution differs from a hypothesized distribution. Application of the test requires that the sample be unbiased and the outcomes be independent and identically distributed, conditions that are violated to varying degrees by spatially continuous attributes, such as topographical elevation. A generalized form of the bootstrap method is used here for the purpose of modeling the distribution of the statistic D of the Kolmogorov-Smirnov test. The innovation is in the resampling, which in the traditional formulation of bootstrap is done by drawing from the empirical sample with replacement, presuming independence. The generalization consists of preparing resamplings with the same spatial correlation as the empirical sample. This is accomplished by reading the value of unconditional stochastic realizations at the sampling locations, realizations that are generated by simulated annealing. The new approach was tested on two empirical samples taken from an exhaustive sample closely following a lognormal distribution. One sample was a regular, unbiased sample while the other was a clustered, preferential sample that had to be preprocessed. Our results show that the p-value for the spatially correlated case is always larger than the p-value of the statistic in the absence of spatial correlation, in agreement with the fact that the information content of an uncorrelated sample is larger than that of a spatially correlated sample of the same size.
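
    A sketch of the bootstrap step under the classical independence assumption; the paper's innovation replaces the resampling lines with geostatistically simulated realizations that carry the sample's spatial correlation:

```python
import numpy as np
from scipy.stats import ks_2samp

def bootstrap_ks_pvalue(x, y, n_boot=2000, seed=0):
    """Bootstrap null distribution of the two-sample KS statistic D,
    resampling the pooled sample with replacement (i.i.d. assumption)."""
    rng = np.random.default_rng(seed)
    d_obs = ks_2samp(x, y).statistic
    pooled = np.concatenate([x, y])
    exceed = 0
    for _ in range(n_boot):
        xb = rng.choice(pooled, size=len(x), replace=True)
        yb = rng.choice(pooled, size=len(y), replace=True)
        exceed += ks_2samp(xb, yb).statistic >= d_obs
    return exceed / n_boot
```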

  7. An Empirical Study of the Influence of the Concept of "Job-Hunting" on Graduates' Employment

    ERIC Educational Resources Information Center

    Chen, Chengwen; Hu, Guiying

    2008-01-01

    The concept of job-hunting is an important factor affecting university students' employment. This empirical study shows that while hunting for a job, graduates witness negative correlation between their expectation of the nature of work and the demand for occupational types and the accessibility to a post and monthly income; positive correlation…

  8. Solar-terrestrial predictions proceedings. Volume 4: Prediction of terrestrial effects of solar activity

    NASA Technical Reports Server (NTRS)

    Donnelly, R. E. (Editor)

    1980-01-01

    Papers about the prediction of ionospheric and radio propagation conditions, based primarily on empirical or statistical relations, are discussed. Predictions of sporadic E, spread F, and scintillations generally involve statistical or empirical predictions. The correlation between solar activity and terrestrial seismic activity and the possible relation between solar activity and biological effects are also discussed.

  9. Modeling runoff and microbial overland transport with KINEROS2/STWIR model: Accuracy and uncertainty as affected by source of infiltration parameters

    EPA Science Inventory

    Infiltration is important to modeling the overland transport of microorganisms in environmental waters. In watershed- and hillslope scale-models, infiltration is commonly described by simple equations relating infiltration rate to soil saturated conductivity and by empirical para...

  10. Resin characterization

    Treesearch

    Robert L. Geimer; Robert A. Follensbee; Alfred W. Christiansen; James A. Koutsky; George E. Myers

    1990-01-01

    Currently, thermosetting adhesives are characterized by physical and chemical features such as viscosity, solids content, pH, and molecular distribution, and by their reaction in simple gel tests. Synthesis of a new resin for a particular application is usually accompanied by a series of empirical laboratory and plant trials. The purpose of the research outlined in this...

  11. A Deterministic Annealing Approach to Clustering AIRS Data

    NASA Technical Reports Server (NTRS)

    Guillaume, Alexandre; Braverman, Amy; Ruzmaikin, Alexander

    2012-01-01

    We will examine the validity of means and standard deviations as a basis for climate data products. We will explore the conditions under which these two simple statistics are inadequate summaries of the underlying empirical probability distributions by contrasting them with a nonparametric method, the Deterministic Annealing technique.

  12. Decision Making and Confidence Given Uncertain Advice

    ERIC Educational Resources Information Center

    Lee, Michael D.; Dry, Matthew J.

    2006-01-01

    We study human decision making in a simple forced-choice task that manipulates the frequency and accuracy of available information. Empirically, we find that people make decisions consistent with the advice provided, but that their subjective confidence in their decisions shows 2 interesting properties. First, people's confidence does not depend…

  13. "Molecular Clock" Analogs: A Relative Rates Exercise

    ERIC Educational Resources Information Center

    Wares, John P.

    2008-01-01

    Although molecular clock theory is a commonly discussed facet of evolutionary biology, undergraduates are rarely presented with the underlying information of how this theory is examined relative to empirical data. Here a simple contextual exercise is presented that not only provides insight into molecular clocks, but is also a useful exercise for…

  14. Using Signs to Facilitate Vocabulary in Children with Language Delays

    ERIC Educational Resources Information Center

    Lederer, Susan Hendler; Battaglia, Dana

    2015-01-01

    The purpose of this article is to explore recommended practices in choosing and using key word signs (i.e., simple single-word gestures for communication) to facilitate first spoken words in hearing children with language delays. Developmental, theoretical, and empirical supports for this practice are discussed. Practical recommendations for…

  15. Comparing an annual and daily time-step model for predicting field-scale phosphorus loss

    USDA-ARS?s Scientific Manuscript database

    Numerous models exist for describing phosphorus (P) losses from agricultural fields. The complexity of these models varies considerably ranging from simple empirically-based annual time-step models to more complex process-based daily time step models. While better accuracy is often assumed with more...

  16. A Positive Stigma for Child Labor?

    ERIC Educational Resources Information Center

    Patrinos, Harry Anthony; Shafiq, M. Najeeb

    2008-01-01

    We introduce a simple empirical model that assumes a positive stigma (or norm) towards child labor that is common in some developing countries. We then illustrate our positive stigma model using data from Guatemala. Controlling for several child- and household-level characteristics, we use two instruments for measuring stigma: a child's indigenous…

  17. Three Essays on Estimating Causal Treatment Effects

    ERIC Educational Resources Information Center

    Deutsch, Jonah

    2013-01-01

    This dissertation is composed of three distinct chapters, each of which addresses issues of estimating treatment effects. The first chapter empirically tests the Value-Added (VA) model using school lotteries. The second chapter, co-authored with Michael Wood, considers properties of inverse probability weighting (IPW) in simple treatment effect…

  18. Synthesis and Analysis of Copper Hydroxy Double Salts

    ERIC Educational Resources Information Center

    Brigandi, Laura M.; Leber, Phyllis A.; Yoder, Claude H.

    2005-01-01

    A project involving the synthesis of several naturally occurring copper double salts using simple aqueous conditions is reported. The ions present in the compound are analyzed using colorimetric, gravimetric, and gas-analysis techniques appropriate for the first-year laboratory and from the percent composition, the empirical formula of each…

  19. Language Switching in the Production of Phrases

    ERIC Educational Resources Information Center

    Tarlowski, Andrzej; Wodniecka, Zofia; Marzecova, Anna

    2013-01-01

    The language switching task has provided a useful insight into how bilinguals produce language. So far, however, the studies using this method have been limited to lexical access. The present study provides empirical evidence on language switching in the production of simple grammar structures. In the reported experiment, Polish-English unbalanced…

  20. Measuring the Impact of Education on Productivity. Working Paper #261.

    ERIC Educational Resources Information Center

    Plant, Mark; Welch, Finis

    A theoretical and conceptual analysis of techniques used to measure education's contribution to productivity is followed by a discussion of the empirical measures implemented by various researchers. Standard methods of growth accounting make sense for simple measurement of factor contributions where outputs are well measured and when factor growth…

  1. SIMPLE EMPIRICAL RISK RELATIONSHIPS BETWEEN FISH ASSEMBLAGES, HABITAT AND WATER QUALITY IN OHIO

    EPA Science Inventory

    To assess the condition of its streams, fish, habitat and water quality data were collected from 1980 to 1998 by the Ohio Environmental Protection Agency. These data were sorted into 190 time/locations by basin, river mile and year. Eighteen fish community variables and 24 habi...

  2. Field Theory in Cultural Capital Studies of Educational Attainment

    ERIC Educational Resources Information Center

    Krarup, Troels; Munk, Martin D.

    2016-01-01

    This article argues that there is a double problem in international research in cultural capital and educational attainment: an empirical problem, since few new insights have been gained within recent years; and a theoretical problem, since cultural capital is seen as a simple hypothesis about certain isolated individual resources, disregarding…

  3. Roles of Engineering Correlations in Hypersonic Entry Boundary Layer Transition Prediction

    NASA Technical Reports Server (NTRS)

    Campbell, Charles H.; King, Rudolph A.; Kergerise, Michael A.; Berry, Scott A.; Horvath, Thomas J.

    2010-01-01

    Efforts to design and operate hypersonic entry vehicles are constrained by many considerations that involve all aspects of an entry vehicle system. One of the more significant physical phenomena that affect entry trajectory and thermal protection system design is the occurrence of boundary layer transition from a laminar to a turbulent state. During the Space Shuttle Return To Flight activity following the loss of Columbia and her crew of seven, NASA's entry aerothermodynamics community implemented an engineering-correlation-based framework for the prediction of boundary layer transition on the Orbiter. The methodology for this implementation relies upon the framework of correlation techniques that have been in use for several decades. What makes the Orbiter boundary layer transition correlation implementation unique is that a statistically significant data set was acquired in multiple ground test facilities, that flight data exist to assist in establishing a better correlation, and that the framework was founded upon state-of-the-art chemical nonequilibrium Navier-Stokes flow field simulations. The basic tenets that guided the formulation and implementation of the Orbiter Return To Flight boundary layer transition prediction capability will be reviewed as a recommended format for future empirical correlation efforts. The validity of this approach has since been demonstrated by very favorable comparison with recent entry flight testing performed with the Orbiter Discovery, which will be graphically summarized. These flight data can provide a means to validate discrete protuberance engineering correlation approaches as well as high fidelity prediction methods to higher confidence. The results of these Orbiter engineering and flight test activities only serve to reinforce the essential role that engineering correlations currently play in the design and operation of entry vehicles. The framework of information related to the Orbiter empirical boundary layer transition prediction capability will be utilized to establish a fresh perspective on this role, to illustrate how quantitative statistical evaluations of empirical correlations can and should be used to assess accuracy, and to discuss what the authors perceive as a recent heightened interest in the application of high fidelity numerical modeling of boundary layer transition. Concrete results will also be developed related to empirical boundary layer transition onset correlations. This will include assessment of the discrete protuberance boundary layer transition onset data assembled for the Orbiter configuration during post-Columbia Return To Flight. Assessment of these data will conclude that momentum thickness Reynolds number based correlations have superior coefficients and uncertainty in comparison to roughness height based Reynolds numbers, aka Re(sub k) or Re(sub kk). In addition, linear regression results from roughness height Reynolds number based correlations will be evaluated, leading to a hypothesis that non-continuum effects play a role in the processes associated with incipient boundary layer transition on discrete protuberances.
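
    A schematic sketch of a transition-onset criterion of the momentum-thickness form discussed above; the constants are placeholders for illustration, not the calibrated Orbiter values:

```python
def protuberance_transition(re_theta, m_e, k, delta, C=27.0, m=1.0):
    """Schematic Re_theta/M_e onset criterion for a discrete
    protuberance of height k in a boundary layer of thickness delta:
    predict transition when (Re_theta / M_e) * (k / delta)**m >= C.
    C and m are hypothetical placeholder constants."""
    return (re_theta / m_e) * (k / delta) ** m >= C
```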

  4. Formulation, Implementation and Validation of a Two-Fluid model in a Fuel Cell CFD Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jain, Kunal; Cole, J. Vernon; Kumar, Sanjiv

    2008-12-01

    Water management is one of the main challenges in PEM Fuel Cells. While water is essential for membrane electrical conductivity, excess liquid water leads to flooding of catalyst layers. Despite the fact that accurate prediction of two-phase transport is key for optimal water management, understanding of the two-phase transport in fuel cells is relatively poor. Wang et al. have studied the two-phase transport in the channel and diffusion layer separately using a multiphase mixture model. The model fails to accurately predict saturation values for high humidity inlet streams. Nguyen et al. developed a two-dimensional, two-phase, isothermal, isobaric, steady state model of the catalyst and gas diffusion layers. The model neglects any liquid in the channel. Djilali et al. developed a three-dimensional two-phase multicomponent model. The model is an improvement over previous models, but neglects drag between the liquid and the gas phases in the channel. In this work, we present a comprehensive two-fluid model relevant to fuel cells. Models for two-phase transport through the Channel, Gas Diffusion Layer (GDL) and Channel-GDL interface are discussed. In the channel, the gas and liquid pressures are assumed to be the same. The surface tension effects in the channel are incorporated using the continuum surface force (CSF) model. The force at the surface is expressed as a volumetric body force and added as a source to the momentum equation. In the GDL, the gas and liquid are assumed to be at different pressures. The difference in the pressures (capillary pressure) is calculated using an empirical correlation. At the Channel-GDL interface, the wall adhesion effects need to be taken into account. SIMPLE-type methods recast the continuity equation into a pressure-correction equation, the solution of which then provides corrections for velocities and pressures. However, in the two-fluid model, the presence of two phasic continuity equations gives more freedom and more complications. A general approach would be to form a mixture continuity equation by linearly combining the phasic continuity equations using appropriate weighting factors. Analogous to the mixture equation for pressure correction, a difference equation is used for the volume/phase fraction by taking the difference between the phasic continuity equations. The relative advantages of the above mentioned algorithmic variants for computing pressure correction and volume fractions are discussed and quantitatively assessed. Preliminary model validation is done for each component of the fuel cell. The two-phase transport in the channel is validated using empirical correlations. Transport in the GDL is validated against results obtained from LBM and VOF simulation techniques. The Channel-GDL interface transport will be validated against experiment and an empirical correlation of droplet detachment at the interface.
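
    A sketch of the kind of empirical capillary-pressure correlation commonly used for gas diffusion layers in the fuel cell literature, the Leverett J-function with Udell's polynomial (assumed here for illustration; the abstract does not name the specific correlation used):

```python
import math

def capillary_pressure(s_liq, sigma, theta_deg, porosity, permeability):
    """Leverett J-function capillary pressure,
    p_c = sigma * cos(theta) * sqrt(porosity / K) * J(s).
    s_liq is the liquid saturation; the (1 - s) / s switch handles
    hydrophilic vs hydrophobic media (contact angle below/above 90 deg)."""
    u = 1.0 - s_liq if theta_deg < 90.0 else s_liq
    J = 1.417 * u - 2.120 * u**2 + 1.263 * u**3
    return (sigma * math.cos(math.radians(theta_deg))
            * math.sqrt(porosity / permeability) * J)

# Example: hydrophobic GDL-like medium
print(capillary_pressure(s_liq=0.3, sigma=0.0625, theta_deg=110.0,
                         porosity=0.6, permeability=1e-12))
```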

  5. Non-Normality and Testing that a Correlation Equals Zero

    ERIC Educational Resources Information Center

    Levy, Kenneth J.

    1977-01-01

    The importance of the assumption of normality for testing that a bivariate normal correlation equals zero is examined. Both empirical and theoretical evidence suggest that such tests are robust with respect to violation of the normality assumption. (Author/JKS)
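
    The classical test in question rejects a zero correlation with a t-statistic; a minimal sketch:

```python
import math
from scipy import stats

def test_r_zero(r, n):
    """Classical test that a bivariate normal correlation equals zero:
    t = r * sqrt(n - 2) / sqrt(1 - r^2) with n - 2 degrees of freedom."""
    t = r * math.sqrt(n - 2) / math.sqrt(1.0 - r * r)
    p = 2.0 * stats.t.sf(abs(t), df=n - 2)   # two-sided p-value
    return t, p

print(test_r_zero(r=0.35, n=30))
```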

  6. Empirical source strength correlations for rans-based acoustic analogy methods

    NASA Astrophysics Data System (ADS)

    Kube-McDowell, Matthew Tyndall

    JeNo is a jet noise prediction code based on an acoustic analogy method developed by Mani, Gliebe, Balsa, and Khavaran. Using the flow predictions from a standard Reynolds-averaged Navier-Stokes computational fluid dynamics solver, JeNo predicts the overall sound pressure level and angular spectra for high-speed hot jets over a range of observer angles, with a processing time suitable for rapid design purposes. JeNo models the noise from hot jets as a combination of two types of noise sources: quadrupole sources dependent on velocity fluctuations, which represent the major noise of turbulent mixing, and dipole sources dependent on enthalpy fluctuations, which represent the effects of thermal variation. These two sources are modeled by JeNo as propagating independently into the far field, with no cross-correlation at the observer location. However, high-fidelity computational fluid dynamics solutions demonstrate that this assumption is false. In this thesis, the theory, assumptions, and limitations of the JeNo code are briefly discussed, and a modification to the acoustic analogy method is proposed in which the cross-correlation of the two primary noise sources is allowed to vary with the speed of the jet and the observer location. As a proof-of-concept implementation, an empirical correlation correction function is derived from comparisons between JeNo's noise predictions and a set of experimental measurements taken for the Air Force Aero-Propulsion Laboratory. The empirical correlation correction is then applied to JeNo's predictions for a separate data set of hot jets tested at NASA's Glenn Research Center. Metrics are derived to measure the qualitative and quantitative performance of JeNo's acoustic predictions, and the empirical correction is shown to provide a quantitative improvement in the noise prediction at low observer angles with no freestream flow, and a qualitative improvement in the presence of freestream flow. However, the results also demonstrate that there are underlying flaws in JeNo's ability to predict the behavior of a hot jet's acoustic signature at certain rear observer angles, and that this correlation correction is not able to correct these flaws.

  7. Measuring and modeling correlations in multiplex networks.

    PubMed

    Nicosia, Vincenzo; Latora, Vito

    2015-09-01

    The interactions among the elementary components of many complex systems can be qualitatively different. Such systems are therefore naturally described in terms of multiplex or multilayer networks, i.e., networks where each layer stands for a different type of interaction between the same set of nodes. There is today a growing interest in understanding when and why a description in terms of a multiplex network is necessary and more informative than a single-layer projection. Here we contribute to this debate by presenting a comprehensive study of correlations in multiplex networks. Correlations in node properties, especially degree-degree correlations, have been thoroughly studied in single-layer networks. Here we extend this idea to investigate and characterize correlations between the different layers of a multiplex network. Such correlations are intrinsically multiplex, and we first study them empirically by constructing and analyzing several multiplex networks from the real world. In particular, we introduce various measures to characterize correlations in the activity of the nodes and in their degree at the different layers and between activities and degrees. We show that real-world networks indeed exhibit nontrivial multiplex correlations. For instance, we find cases where two layers of the same multiplex network are positively correlated in terms of node degrees, while two other layers are negatively correlated. We then focus on constructing synthetic multiplex networks, proposing a series of models to reproduce the correlations observed empirically and/or to assess their relevance.
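
    One of the simplest interlayer measures of this kind is the rank correlation between a node's degrees in two layers; a minimal sketch with a toy dict-of-sets representation:

```python
from scipy.stats import spearmanr

def interlayer_degree_correlation(layer_a, layer_b, nodes):
    """Rank correlation between the degrees of the same nodes in two
    layers of a multiplex network; layers map node -> neighbour set."""
    deg_a = [len(layer_a.get(v, ())) for v in nodes]
    deg_b = [len(layer_b.get(v, ())) for v in nodes]
    return spearmanr(deg_a, deg_b)

layer1 = {1: {2, 3}, 2: {1}, 3: {1}, 4: set()}
layer2 = {1: {4}, 2: {3, 4}, 3: {2}, 4: {1, 2}}
print(interlayer_degree_correlation(layer1, layer2, nodes=[1, 2, 3, 4]))
```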

  8. Comparison of ACCENT 2000 Shuttle Plume Data with SIMPLE Model Predictions

    NASA Astrophysics Data System (ADS)

    Swaminathan, P. K.; Taylor, J. C.; Ross, M. N.; Zittel, P. F.; Lloyd, S. A.

    2001-12-01

    The JHU/APL Stratospheric IMpact of PLume Effluents (SIMPLE) model was employed to analyze the trace species in situ composition data collected during the ACCENT 2000 intercepts of the space shuttle Space Transportation System (STS) rocket plume as a function of time and radial location within the cold plume. The SIMPLE model is initialized using predictions for species depositions calculated with an afterburning model based on standard TDK/SPP nozzle and SPF plume flowfield codes with an expanded chemical kinetic scheme. The time-dependent ambient stratospheric chemistry is fully coupled to the plume species evolution, whose transport is based on empirically derived diffusion. Model/data comparisons are encouraging, capturing observed local ozone recovery times as well as the overall morphology of the chlorine chemistry.

  9. Hybrid BEM/empirical approach for scattering of correlated sources in rocket noise prediction

    NASA Astrophysics Data System (ADS)

    Barbarino, Mattia; Adamo, Francesco P.; Bianco, Davide; Bartoccini, Daniele

    2017-09-01

    Empirical models such as the Eldred standard model are commonly used for rocket noise prediction. Such models directly provide a definition of the Sound Pressure Level through the quadratic pressure term of uncorrelated sources. In this paper, an improvement of the Eldred standard model has been formulated. This new formulation contains an explicit expression for the acoustic pressure of each noise source, in terms of amplitude and phase, in order to investigate the sources' correlation effects and to propagate them through a wave equation. In particular, the correlation effects between adjacent and non-adjacent sources have been modeled and analyzed. The noise prediction obtained with the revised Eldred-based model has then been used to formulate an empirical/BEM (Boundary Element Method) hybrid approach that allows an evaluation of the scattering effects. In the framework of the European Space Agency funded program VECEP (VEga Consolidation and Evolution Programme), these models have been applied to the prediction of the aeroacoustic loads of the VEGA (Vettore Europeo di Generazione Avanzata - Advanced Generation European Carrier Rocket) launch vehicle at lift-off, and the results have been compared with experimental data.

  10. Improved RMR Rock Mass Classification Using Artificial Intelligence Algorithms

    NASA Astrophysics Data System (ADS)

    Gholami, Raoof; Rasouli, Vamegh; Alimoradi, Andisheh

    2013-09-01

    Rock mass classification systems such as rock mass rating (RMR) are very reliable means to provide information about the quality of rocks surrounding a structure as well as to propose suitable support systems for unstable regions. Many correlations have been proposed to relate measured quantities such as wave velocity to rock mass classification systems to limit the associated time and cost of conducting the sampling and mechanical tests conventionally used to calculate RMR values. However, these empirical correlations have been found to be unreliable, as they usually overestimate or underestimate the RMR value. The aim of this paper is to compare the results of RMR classification obtained from the use of empirical correlations versus machine-learning methodologies based on artificial intelligence algorithms. The proposed methods were verified based on two case studies located in northern Iran. Relevance vector regression (RVR) and support vector regression (SVR), as two robust machine-learning methodologies, were used to predict the RMR for tunnel host rocks. RMR values already obtained by sampling and site investigation at one tunnel were taken into account as the output of the artificial networks during training and testing phases. The results reveal that use of empirical correlations overestimates the predicted RMR values. RVR and SVR, however, showed more reliable results, and are therefore suggested for use in RMR classification for design purposes of rock structures.
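
    A minimal sketch of the machine-learning route, mapping field-measured quantities to RMR with support vector regression; the feature set and data below are illustrative stand-ins, not the paper's:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Synthetic stand-in data: P-wave velocity (km/s) and depth (m) as
# inputs, RMR as target (the paper's exact inputs are not listed).
vp = rng.uniform(2.0, 6.0, 80)
depth = rng.uniform(50.0, 400.0, 80)
rmr = 12.0 * vp + 0.02 * depth + rng.normal(0.0, 3.0, 80)

X = np.column_stack([vp, depth])
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X, rmr)                       # training on sampled stations
print(model.predict([[4.5, 200.0]]))    # predicted RMR at a new station
```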

  11. ESTIMATION OF CHEMICAL TOXICITY TO WILDLIFE SPECIES USING INTERSPECIES CORRELATION MODELS

    EPA Science Inventory

    Ecological risks to wildlife are typically assessed using toxicity data for relataively few species and with limited understanding of differences in species sensitivity to contaminants. Empirical interspecies correlation models were derived from LD50 values for 49 wildlife speci...

  12. Equation of state for dense nucleonic matter from metamodeling. I. Foundational aspects

    NASA Astrophysics Data System (ADS)

    Margueron, Jérôme; Hoffmann Casali, Rudiney; Gulminelli, Francesca

    2018-02-01

    Metamodeling for the nucleonic equation of state (EOS), inspired by a Taylor expansion around the saturation density of symmetric nuclear matter, is proposed and parameterized in terms of the empirical parameters. The present knowledge of nuclear empirical parameters is first reviewed in order to estimate their average values and associated uncertainties, thereby defining the parameter space of the metamodeling. They are divided into isoscalar and isovector types, and ordered according to their power in the density expansion. The goodness of the metamodeling is analyzed against the predictions of the original models. In addition, since no correlation among the empirical parameters is assumed a priori, all arbitrary density dependences can be explored, which might not be accessible in existing functionals. Spurious correlations due to the assumed functional form are also removed. This meta-EOS allows direct relations between the uncertainties on the empirical parameters and the density dependence of the nuclear equation of state and its derivatives, and the mapping between the two can be done with standard Bayesian techniques. A sensitivity analysis shows that the most influential empirical parameters are the isovector parameters L_sym and K_sym, and that laboratory constraints at supersaturation densities are essential to reduce the present uncertainties. The present metamodeling for the EOS for nuclear matter is proposed for further applications in neutron stars and supernova matter.
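
    The expansion underlying the metamodel can be sketched, to second order in the asymmetry delta and fourth order in the density parameter x (the paper adds further corrections, e.g. at low density):

```latex
\begin{aligned}
e(n,\delta) &\simeq e_{\mathrm{sat}}(n) + e_{\mathrm{sym}}(n)\,\delta^{2},
\qquad
\delta = \frac{n_n - n_p}{n},
\qquad
x = \frac{n - n_{\mathrm{sat}}}{3\,n_{\mathrm{sat}}},\\[4pt]
e_{\mathrm{sat}}(x) &= E_{\mathrm{sat}} + \tfrac{1}{2}K_{\mathrm{sat}}x^{2}
 + \tfrac{1}{6}Q_{\mathrm{sat}}x^{3} + \tfrac{1}{24}Z_{\mathrm{sat}}x^{4},\\[4pt]
e_{\mathrm{sym}}(x) &= E_{\mathrm{sym}} + L_{\mathrm{sym}}x
 + \tfrac{1}{2}K_{\mathrm{sym}}x^{2} + \tfrac{1}{6}Q_{\mathrm{sym}}x^{3}
 + \tfrac{1}{24}Z_{\mathrm{sym}}x^{4}.
\end{aligned}
```

    The isoscalar coefficients (E_sat, K_sat, Q_sat, Z_sat) and isovector coefficients (E_sym, L_sym, K_sym, Q_sym, Z_sym) are the empirical parameters whose estimated averages and uncertainties define the parameter space.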

  13. Possible biomechanical origins of the long-range correlations in stride intervals of walking

    NASA Astrophysics Data System (ADS)

    Gates, Deanna H.; Su, Jimmy L.; Dingwell, Jonathan B.

    2007-07-01

    When humans walk, the time duration of each stride varies from one stride to the next. These temporal fluctuations exhibit long-range correlations. It has been suggested that these correlations stem from higher nervous system centers in the brain that control gait cycle timing. Existing proposed models of this phenomenon have focused on neurophysiological mechanisms that might give rise to these long-range correlations, and generally ignored potential alternative mechanical explanations. We hypothesized that a simple mechanical system could also generate similar long-range correlations in stride times. We modified a very simple passive dynamic model of bipedal walking to incorporate forward propulsion through an impulsive force applied to the trailing leg at each push-off. Push-off forces were varied from step to step by incorporating both “sensory” and “motor” noise terms that were regulated by a simple proportional feedback controller. We generated 400 simulations of walking, with different combinations of sensory noise, motor noise, and feedback gain. The stride time data from each simulation were analyzed using detrended fluctuation analysis to compute a scaling exponent, α. This exponent quantified how each stride interval was correlated with previous and subsequent stride intervals over different time scales. For different variations of the noise terms and feedback gain, we obtained short-range correlations (α<0.5), uncorrelated time series (α=0.5), long-range correlations (0.5<α<1.0), or Brownian motion (α>1.0). Our results indicate that a simple biomechanical model of walking can generate long-range correlations and thus perhaps these correlations are not a complex result of higher level neuronal control, as has been previously suggested.
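
    A minimal detrended fluctuation analysis sketch of the kind used to compute the scaling exponent alpha from a stride-interval series:

```python
import numpy as np

def dfa_alpha(x):
    """Detrended fluctuation analysis: integrate the mean-removed
    series, split into windows of size s, remove a linear trend per
    window, and fit the slope of log F(s) versus log s."""
    x = np.asarray(x, float)
    y = np.cumsum(x - x.mean())               # integrated profile
    n = len(y)
    scales = np.unique(np.logspace(np.log10(4), np.log10(n // 4), 20).astype(int))
    F = []
    for s in scales:
        segs = y[: (n // s) * s].reshape(-1, s)
        t = np.arange(s)
        res = [seg - np.polyval(np.polyfit(t, seg, 1), t) for seg in segs]
        F.append(np.sqrt(np.mean(np.square(res))))
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

print(dfa_alpha(np.random.default_rng(0).normal(size=4096)))  # ~0.5
```

    White noise yields alpha near 0.5, while 0.5 < alpha < 1.0 signals long-range correlations, matching the regimes listed above.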

  14. On the Time Evolution of Gamma-Ray Burst Pulses: A Self-Consistent Description.

    PubMed

    Ryde; Svensson

    2000-01-20

    For the first time, the consequences of combining two well-established empirical relations that describe different aspects of the spectral evolution of observed gamma-ray burst (GRB) pulses are explored. These empirical relations are (1) the hardness-intensity correlation and (2) the hardness-photon fluence correlation. From these we find a self-consistent, quantitative, and compact description for the temporal evolution of pulse decay phases within a GRB light curve. In particular, we show that in the case in which the two empirical relations are both valid, the instantaneous photon flux (intensity) must behave as 1/(1 + t/tau), where tau is a time constant that can be expressed in terms of the parameters of the two empirical relations. The time evolution is fully defined by two initial constants and two parameters. We study a complete sample of 83 bright GRB pulses observed by the Compton Gamma-Ray Observatory and identify a major subgroup of GRB pulses (approximately 45%) which satisfy the spectral-temporal behavior described above. In particular, the decay phase follows a reciprocal law in time. It is unclear what physics causes such a decay phase.
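
    A sketch of how the reciprocal decay follows when the two relations are written in their common forms, the hardness-intensity correlation as a power law and the hardness-fluence correlation as an exponential (notation ours):

```latex
N(t) = N_0\left[\frac{E(t)}{E_0}\right]^{\gamma},
\qquad
E(t) = E_0\, e^{-\Phi(t)/\Phi_0},
\qquad
\Phi(t) = \int_0^t N(t')\, dt' .
```

    Substituting the second relation into the first gives d\Phi/dt = N_0 exp(-gamma Phi/Phi_0), which integrates to

```latex
N(t) = \frac{N_0}{1 + t/\tau},
\qquad
\tau = \frac{\Phi_0}{\gamma N_0},
```

    recovering the reciprocal decay, with tau fixed by the parameters of the two empirical relations.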

  15. A Simple and Reliable Method of Design for Standalone Photovoltaic Systems

    NASA Astrophysics Data System (ADS)

    Srinivasarao, Mantri; Sudha, K. Rama; Bhanu, C. V. K.

    2017-06-01

    Standalone photovoltaic (SAPV) systems are seen as a promising method of electrifying areas of the developing world that lack power grid infrastructure. Proliferation of these systems requires a design procedure that is simple and reliable and that exhibits good performance over the system's lifetime. The proposed methodology uses simple empirical formulae and easily available parameters to design SAPV systems, that is, array size with energy storage. After arriving at the different array sizes (areas), performance curves are obtained for the optimal design of an SAPV system with a high degree of reliability in terms of autonomy at a specified value of the loss of load probability (LOLP). Based on the array-to-load ratio (ALR) and the levelized energy cost (LEC) through life cycle cost (LCC) analysis, it is shown that the proposed methodology gives better performance, requires simple data, and is more reliable when compared with a conventional design using monthly average daily load and insolation.

  16. Improved estimation of subject-level functional connectivity using full and partial correlation with empirical Bayes shrinkage.

    PubMed

    Mejia, Amanda F; Nebel, Mary Beth; Barber, Anita D; Choe, Ann S; Pekar, James J; Caffo, Brian S; Lindquist, Martin A

    2018-05-15

    Reliability of subject-level resting-state functional connectivity (FC) is determined in part by the statistical techniques employed in its estimation. Methods that pool information across subjects to inform estimation of subject-level effects (e.g., Bayesian approaches) have been shown to enhance reliability of subject-level FC. However, fully Bayesian approaches are computationally demanding, while empirical Bayesian approaches typically rely on using repeated measures to estimate the variance components in the model. Here, we avoid the need for repeated measures by proposing a novel measurement error model for FC describing the different sources of variance and error, which we use to perform empirical Bayes shrinkage of subject-level FC towards the group average. In addition, since the traditional intra-class correlation coefficient (ICC) is inappropriate for biased estimates, we propose a new reliability measure denoted the mean squared error intra-class correlation coefficient (ICC_MSE) to properly assess the reliability of the resulting (biased) estimates. We apply the proposed techniques to test-retest resting-state fMRI data on 461 subjects from the Human Connectome Project to estimate connectivity between 100 regions identified through independent components analysis (ICA). We consider both correlation and partial correlation as the measure of FC and assess the benefit of shrinkage for each measure, as well as the effects of scan duration. We find that shrinkage estimates of subject-level FC exhibit substantially greater reliability than traditional estimates across various scan durations, even for the most reliable connections and regardless of connectivity measure. Additionally, we find partial correlation reliability to be highly sensitive to the choice of penalty term, and to be generally worse than that of full correlations except for certain connections and a narrow range of penalty values. This suggests that the penalty needs to be chosen carefully when using partial correlations. Copyright © 2018. Published by Elsevier Inc.
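
    A minimal sketch of the shrinkage step: each subject's Fisher-transformed connectivity is pulled toward the group mean with a weight equal to the noise share of the total variance (the variance components are taken as given here; the paper estimates them from its measurement-error model):

```python
import numpy as np

def eb_shrink(z_subj, var_noise, var_between):
    """Shrink each subject's Fisher-z connectivity toward the group
    mean, with per-connection weight
    lambda = var_noise / (var_noise + var_between)."""
    z_subj = np.asarray(z_subj, float)        # subjects x connections
    z_group = z_subj.mean(axis=0)
    lam = var_noise / (var_noise + var_between)
    return lam * z_group + (1.0 - lam) * z_subj

z = np.array([[0.6, 0.1], [0.4, 0.3], [0.5, -0.1]])
print(eb_shrink(z, var_noise=np.array([0.02, 0.05]),
                var_between=np.array([0.01, 0.01])))
```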

  17. True and apparent scaling: The proximity of the Markov-switching multifractal model to long-range dependence

    NASA Astrophysics Data System (ADS)

    Liu, Ruipeng; Di Matteo, T.; Lux, Thomas

    2007-09-01

    In this paper, we consider daily financial data of a collection of different stock market indices, exchange rates, and interest rates, and we analyze their multi-scaling properties by estimating a simple specification of the Markov-switching multifractal (MSM) model. In order to see how well the estimated model captures the temporal dependence of the data, we estimate and compare the scaling exponents H(q) (for q=1,2) for both empirical data and simulated data of the MSM model. In most cases the multifractal model appears to generate ‘apparent’ long memory in agreement with the empirical scaling laws.
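
    A minimal structure-function sketch for estimating the scaling exponents H(q) compared in the paper:

```python
import numpy as np

def generalized_hurst(x, q=2, max_tau=19):
    """Structure-function estimate of H(q):
    E|x(t+tau) - x(t)|^q ~ tau^(q H(q)); fit the log-log slope and
    divide by q."""
    x = np.asarray(x, float)
    taus = np.arange(1, max_tau + 1)
    m = [np.mean(np.abs(x[tau:] - x[:-tau]) ** q) for tau in taus]
    return np.polyfit(np.log(taus), np.log(m), 1)[0] / q

walk = np.cumsum(np.random.default_rng(0).normal(size=10_000))
print(generalized_hurst(walk, q=1), generalized_hurst(walk, q=2))  # ~0.5
```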

  18. Re-visions of rationality?

    PubMed

    Newell, Ben R

    2005-01-01

    The appeal of simple algorithms that take account of both the constraints of human cognitive capacity and the structure of environments has been an enduring theme in cognitive science. A novel version of such a boundedly rational perspective views the mind as containing an 'adaptive toolbox' of specialized cognitive heuristics suited to different problems. Although intuitively appealing, when this version was proposed, empirical evidence for the use of such heuristics was scant. I argue that in the light of empirical studies carried out since then, it is time this 'vision of rationality' was revised. An alternative view based on integrative models rather than collections of heuristics is proposed.

  19. An Empirical Research on the Correlation between Human Capital and Career Success of Knowledge Workers in Enterprise

    NASA Astrophysics Data System (ADS)

    Guo, Wenchen; Xiao, Hongjun; Yang, Xi

    Human capital plays an important part in the employability of knowledge workers; it is also an important intangible asset of a company. This paper explores the correlation between human capital and the career success of knowledge workers. Based on a literature review, we identified a measuring tool for career success and modified it further; human capital was measured with a self-developed scale of high reliability and validity. After exploratory factor analysis, we suggest that human capital comprises four dimensions (education, work experience, learning ability, and training) and career success comprises three dimensions (perceived internal competitiveness of the organization, perceived external competitiveness of the organization, and career satisfaction). The results of the empirical analysis indicate that there is a positive correlation between human capital and career success, and that human capital is an excellent predictor of career success beyond demographic variables.

  20. Empirical Bayes method for reducing false discovery rates of correlation matrices with block diagonal structure.

    PubMed

    Pacini, Clare; Ajioka, James W; Micklem, Gos

    2017-04-12

    Correlation matrices are important in inferring relationships and networks between regulatory or signalling elements in biological systems. With currently available technology, sample sizes for experiments are typically small, meaning that these correlations can be difficult to estimate. At a genome-wide scale, estimation of correlation matrices can also be computationally demanding. We develop an empirical Bayes approach to improve covariance estimates for gene expression, where we assume the covariance matrix takes a block diagonal form. Our method shows lower false discovery rates than existing methods on simulated data. Applied to a real data set from Bacillus subtilis, we demonstrate its ability to detect known regulatory units and interactions between them. We demonstrate that, compared to existing methods, our method is able to find significant covariances and also to control false discovery rates, even when the sample size is small (n=10). The method can be used to find potential regulatory networks, and it may also be used as a pre-processing step for methods that calculate, for example, partial correlations, so enabling the inference of the causal and hierarchical structure of the networks.

  1. Molecular interactions in nanocellulose assembly

    NASA Astrophysics Data System (ADS)

    Nishiyama, Yoshiharu

    2017-12-01

    The contribution of hydrogen bonds and the London dispersion force to the cohesion of cellulose is discussed in the light of the structure, spectroscopic data, empirical molecular-modelling parameters and thermodynamic data of analogue molecules. The hydrogen bond of cellulose is mainly electrostatic, and the stabilization energy for each hydrogen bond in cellulose is estimated to be between 17 and 30 kJ mol⁻¹. On average, hydroxyl groups of cellulose form hydrogen bonds comparable to those of other simple alcohols. The London dispersion interaction may be estimated from empirical attraction terms in molecular modelling by simple integration over all components. Although this interaction extends to relatively large distances in colloidal systems, the short-range interaction is dominant for the cohesion of cellulose and is equivalent to a compression of 3 GPa. Trends in the heat of vaporization of alkyl alcohols and alkanes suggest a stabilization by such hydroxyl group hydrogen bonding of the order of 24 kJ mol⁻¹, whereas the London dispersion force contributes about 0.41 kJ mol⁻¹ Da⁻¹. The simple arithmetic sum of these energies is consistent with the experimental enthalpy of sublimation of small sugars, where the main part of the cohesive energy comes from hydrogen bonds. For cellulose, because of the reduced number of hydroxyl groups, the London dispersion force provides the main contribution to intermolecular cohesion. This article is part of a discussion meeting issue 'New horizons for cellulose nanotechnology'.

  2. Empirical Bayes Estimation of Coalescence Times from Nucleotide Sequence Data.

    PubMed

    King, Leandra; Wakeley, John

    2016-09-01

    We demonstrate the advantages of using information at many unlinked loci to better calibrate estimates of the time to the most recent common ancestor (TMRCA) at a given locus. To this end, we apply a simple empirical Bayes method to estimate the TMRCA. This method is both asymptotically optimal, in the sense that the estimator converges to the true value when the number of unlinked loci for which we have information is large, and has the advantage of not making any assumptions about demographic history. The algorithm works as follows: we first split the sample at each locus into inferred left and right clades to obtain many estimates of the TMRCA, which we can average to obtain an initial estimate of the TMRCA. We then use nucleotide sequence data from other unlinked loci to form an empirical distribution that we can use to improve this initial estimate. Copyright © 2016 by the Genetics Society of America.

  3. On the galaxy–halo connection in the EAGLE simulation

    DOE PAGES

    Desmond, Harry; Mao, Yao-Yuan; Wechsler, Risa H.; ...

    2017-06-13

    Empirical models of galaxy formation require assumptions about the correlations between galaxy and halo properties. These may be calibrated against observations or inferred from physical models such as hydrodynamical simulations. In this Letter, we use the EAGLE simulation to investigate the correlation of galaxy size with halo properties. We motivate this analysis by noting that the common assumption of angular momentum partition between baryons and dark matter in rotationally supported galaxies overpredicts both the spread in the stellar mass–size relation and the anticorrelation of size and velocity residuals, indicating a problem with the galaxy–halo connection it implies. We find the EAGLE galaxy population to perform significantly better on both statistics, and trace this success to the weakness of the correlations of galaxy size with halo mass, concentration and spin at fixed stellar mass. Hence, using these correlations in empirical models will enable fine-grained aspects of galaxy scalings to be matched.

  4. Empirical Correlations for the Solubility of Pressurant Gases in Cryogenic Propellants

    NASA Technical Reports Server (NTRS)

    Zimmerli, Gregory A.; Asipauskas, Marius; VanDresar, Neil T.

    2010-01-01

    We have analyzed data published by others reporting the solubility of helium in liquid hydrogen, oxygen, and methane, and of nitrogen in liquid oxygen, to develop empirical correlations for the mole fraction of these pressurant gases in the liquid phase as a function of temperature and pressure. The data, compiled and provided by NIST, are from a variety of sources and cover a large range of liquid temperatures and pressures. The correlations were developed to yield accurate estimates of the mole fraction of the pressurant gas in the cryogenic liquid at temperatures and pressures of interest to the propulsion community, yet they are applicable over a much wider range. The mole fraction solubility of helium in all these liquids is less than 0.3% at the temperatures and pressures used in propulsion systems. When nitrogen is used as a pressurant for liquid oxygen, substantial contamination can result, though the diffusion into the liquid is slow.

  5. Implication of correlations among some common stability statistics - a Monte Carlo simulation.

    PubMed

    Piepho, H P

    1995-03-01

    Stability analysis of multilocation trials is often based on a mixed two-way model. Two stability measures in frequent use are the environmental variance (S_i^2) and the ecovalence (W_i). Under the two-way model the rank orders of the expected values of these two statistics are identical for a given set of genotypes. By contrast, empirical rank correlations among these measures are consistently low. This suggests that the two-way mixed model may not be appropriate for describing real data. To check this hypothesis, a Monte Carlo simulation was conducted. It revealed that the low empirical rank correlation among S_i^2 and W_i is most likely due to sampling errors. It is concluded that the observed low rank correlation does not invalidate the two-way model. The paper also discusses tests for homogeneity of S_i^2 as well as implications of the two-way model for the classification of stability statistics.
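
    For reference, the standard definitions of the two statistics, with x_ij the performance of genotype i in environment j across E environments (our notation):

```latex
S_i^{2} = \frac{1}{E-1}\sum_{j=1}^{E}\bigl(x_{ij}-\bar{x}_{i\cdot}\bigr)^{2},
\qquad
W_i = \sum_{j=1}^{E}\bigl(x_{ij}-\bar{x}_{i\cdot}-\bar{x}_{\cdot j}+\bar{x}_{\cdot\cdot}\bigr)^{2}.
```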

  6. A Simple Syllogism-Solving Test: Empirical Findings and Implications for "g" Research

    ERIC Educational Resources Information Center

    Shikishima, Chizuru; Yamagata, Shinji; Hiraishi, Kai; Sugimoto, Yutaro; Murayama, Kou; Ando, Juko

    2011-01-01

    It has been reported that the ability to solve syllogisms is highly "g"-loaded. In the present study, using a self-administered shortened version of a syllogism-solving test, the "BAROCO Short," we examined whether robust findings generated by previous research regarding IQ scores were also applicable to "BAROCO…

  7. Bayesian Analysis of Longitudinal Data Using Growth Curve Models

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Hamagami, Fumiaki; Wang, Lijuan Lijuan; Nesselroade, John R.; Grimm, Kevin J.

    2007-01-01

    Bayesian methods for analyzing longitudinal data in social and behavioral research are recommended for their ability to incorporate prior information in estimating simple and complex models. We first summarize the basics of Bayesian methods before presenting an empirical example in which we fit a latent basis growth curve model to achievement data…

  8. Modeling Smoke Plume-Rise and Dispersion from Southern United States Prescribed Burns with Daysmoke

    Treesearch

    G L Achtemeier; S L Goodrick; Y Liu; F Garcia-Menendez; Y Hu; M. Odman

    2011-01-01

    We present Daysmoke, an empirical-statistical plume rise and dispersion model for simulating smoke from prescribed burns. Prescribed fires are characterized by complex plume structure including multiple-core updrafts which makes modeling with simple plume models difficult. Daysmoke accounts for plume structure in a three-dimensional veering/sheering atmospheric...

  9. Social Trust and the Growth of Schooling

    ERIC Educational Resources Information Center

    Bjornskov, Christian

    2009-01-01

    The paper develops a simple model to examine how social trust might affect the growth of schooling through lowering transaction costs associated with employing educated individuals. In a sample of 52 countries, the paper thereafter provides empirical evidence that trust has led to faster growth of schooling in the period 1960-2000. The findings…

  10. The Polarization of Light and Malus' Law Using Smartphones

    ERIC Educational Resources Information Center

    Monteiro, Martín; Stari, Cecilia; Cabeza, Cecilia; Marti, Arturo C.

    2017-01-01

    Originally an empirical law, nowadays Malus' law is seen as a key experiment to demonstrate the transverse nature of electromagnetic waves, as well as the intrinsic connection between optics and electromagnetism. In this work, a simple and inexpensive setup is proposed to quantitatively verify the nature of polarized light. A flat computer screen…
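
    The law being verified is I(theta) = I0 cos^2(theta); a minimal sketch of fitting light-sensor readings to it (the readings below are synthetic placeholders):

```python
import numpy as np
from scipy.optimize import curve_fit

def malus(theta_deg, I0, theta0):
    """Malus' law: I = I0 * cos^2(theta - theta0)."""
    return I0 * np.cos(np.radians(theta_deg - theta0)) ** 2

angles = np.arange(0.0, 181.0, 15.0)           # analyzer angle, degrees
readings = malus(angles, 850.0, 5.0) \
    + np.random.default_rng(1).normal(0.0, 10.0, angles.size)
popt, _ = curve_fit(malus, angles, readings, p0=[800.0, 0.0])
print(popt)   # fitted intensity scale and polarizer offset angle
```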

  11. Another View: In Defense of Vigor over Rigor in Classroom Demonstrations

    ERIC Educational Resources Information Center

    Dunn, Dana S.

    2008-01-01

    Scholarship of teaching and learning (SoTL) demands greater empirical rigor on the part of authors and the editorial process than ever before. Although admirable and important, I worry that this increasing rigor will limit opportunities and outlets for a form of pedagogical vigor--the publication of simple, experiential, but empirically…

  12. A Cognitive Framework in Teaching English Simple Present

    ERIC Educational Resources Information Center

    Tian, Cong

    2015-01-01

    A Cognitive Grammar (CG) analysis of linguistic constructions has been claimed to be beneficial to second language teaching. However, little empirical research has been done to support this claim. In this study, two intact classes of Chinese senior high school students were given a 45-minute review lesson on the usages of the English simple…

  13. Validation of a simple distributed sediment delivery approach in selected sub-basins of the River Inn catchment area

    NASA Astrophysics Data System (ADS)

    Reid, Lucas; Kittlaus, Steffen; Scherer, Ulrike

    2015-04-01

    For large areas without highly detailed data, the empirical Universal Soil Loss Equation (USLE) is widely used to quantify soil loss. The difficulty, though, usually lies in quantifying the actual sediment influx into the rivers. Because the USLE provides long-term mean soil loss rates, it is often combined with spatially lumped models to estimate the sediment delivery ratio (SDR). Spatially lumped approaches, however, become problematic in large catchment areas whose geographical properties vary widely. In this study we developed a simple but spatially distributed approach to quantify the sediment delivery ratio by considering the characteristics of the flow paths in the catchments. The sediment delivery ratio was determined using an empirical approach that considers the slope, morphology, and land use properties along the flow path as an estimate of the travel time of the eroded particles. The model was tested against suspended solids measurements in selected sub-basins of the River Inn catchment area in Germany and Austria, ranging from the high alpine south to the Molasse basin in the north.
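
    The building blocks are simple: USLE soil loss times a delivery ratio gives the sediment influx; a minimal sketch with user-supplied factor values:

```python
def usle_sediment_influx(R, K, LS, C, P, sdr):
    """Long-term mean annual soil loss from the USLE,
    A = R * K * LS * C * P (rainfall erosivity, soil erodibility,
    slope length/steepness, cover, support practice), multiplied by
    a sediment delivery ratio sdr in [0, 1], which the study derives
    from flow-path characteristics."""
    A = R * K * LS * C * P
    return A * sdr
```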

  15. Oil price and exchange rate co-movements in Asian countries: Detrended cross-correlation approach

    NASA Astrophysics Data System (ADS)

    Hussain, Muntazir; Zebende, Gilney Figueira; Bashir, Usman; Donghong, Ding

    2017-01-01

    Most empirical literature investigates the relation between oil prices and exchange rates through different models. These models measure the relationship on two time scales (long and short term) and often fail to observe the co-movement of the variables at other time scales. We apply a detrended cross-correlation approach (DCCA) to investigate the co-movements of the oil price and exchange rate in 12 Asian countries, determining the co-movements at different time scales. Both the exchange rate and oil price time series exhibit unit roots, which makes their correlation and cross-correlation difficult to measure: results become spurious when periodic trends or unit roots are present. The DCCA approach measures the cross-correlation at different time scales while controlling for the unit root problem. Our empirical results support the co-movement of oil prices and exchange rates, indicating a weak negative cross-correlation between the two for most Asian countries in our sample. The results have important monetary, fiscal, inflationary, and trade policy implications for these countries.
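
    For readers unfamiliar with DCCA, the following is a minimal numpy sketch of the detrended cross-correlation coefficient rho_DCCA(n): both profiles are integrated, detrended box by box, and the detrended covariance is normalized by the two DFA fluctuation functions. The synthetic series, box sizes, and linear detrending order are illustrative, not the paper's data.

    ```python
    import numpy as np

    def dcca_rho(x, y, scales, order=1):
        """Detrended cross-correlation coefficient rho_DCCA(n) at each scale n."""
        x = np.asarray(x, float)
        y = np.asarray(y, float)
        X = np.cumsum(x - x.mean())          # integrated profiles
        Y = np.cumsum(y - y.mean())
        rhos = []
        for n in scales:
            n_boxes = len(X) // n            # non-overlapping boxes of length n
            f2xx = f2yy = f2xy = 0.0
            t = np.arange(n)
            for b in range(n_boxes):
                sl = slice(b * n, (b + 1) * n)
                # detrend each box with a polynomial fit
                rx = X[sl] - np.polyval(np.polyfit(t, X[sl], order), t)
                ry = Y[sl] - np.polyval(np.polyfit(t, Y[sl], order), t)
                f2xx += (rx * rx).mean()
                f2yy += (ry * ry).mean()
                f2xy += (rx * ry).mean()
            rhos.append(f2xy / np.sqrt(f2xx * f2yy))
        return np.array(rhos)

    # toy example: two series sharing a common component with negative coupling
    rng = np.random.default_rng(0)
    common = rng.normal(size=5000)
    oil = common + rng.normal(size=5000)
    fx  = -0.3 * common + rng.normal(size=5000)   # weak negative coupling
    print(dcca_rho(oil, fx, scales=[16, 32, 64, 128]))
    ```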

  16. Demographic, social, and economic effects on Mexican causes of death in 1990.

    PubMed

    Pick, J B; Butler, E W

    1998-01-01

    This study examined spatial geographic patterns of cause of death and 28 demographic and socioeconomic influences on causes of death for 31 Mexican states plus the Federal District for 1990. Mortality data were obtained from the state death registration system and are age standardized. The 28 socioeconomic variables were obtained from Census records. Analysis included 2 submodels: one with all 28 socioeconomic variables in a stepwise regression, and one with each of the 4 groups of factors. The conceptual model is based on epidemiological transition theory and empirical findings. There are 4 stages in mortality decline. Effects are grouped as demographic, sociocultural, economic prosperity, and housing, health, and crime factors. Findings indicate that cancer and cardiovascular disease were strongly correlated and consistently high in border areas as well as the Federal District and Jalisco. Respiratory mortality had higher values in the Federal District, Puebla, and surrounding states, as well as Jalisco. In simple correlations, the standardized total mortality rate was inversely associated only with underemployment. Cause-specific mortality rates were each associated with particular factors: respiratory mortality was linked with the manufacturing work force, while cardiovascular and cancer mortality were associated with socioeconomic factors. In submodel I, cause-specific mortality was predicted by crowding, housing characteristics, marriage and divorce, and manufacturing work force. In submodel II, economic group factors had the strongest model fits, explaining 33-60% of the variance (R²). Hypothesized effects were only partially validated.

  17. Rate correlation for condensation of pure vapor on turbulent, subcooled liquid

    NASA Technical Reports Server (NTRS)

    Brown, J. Steven; Khoo, Boo Cheong; Sonin, Ain A.

    1990-01-01

    An empirical correlation is presented for the condensation of pure vapor on a subcooled, turbulent liquid with a shear-free interface. The correlation expresses the dependence of the condensation rate on fluid properties, on the liquid-side turbulence (which is imposed from below), and on the effects of buoyancy in the interfacial thermal layer. The correlation is derived from experiments with steam and water, but under conditions which simulate typical cryogenic fluids.

  18. Correlation of published data on the solubility of methane in H₂O-NaCl solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coco, L.T.; Johnson, A.E. Jr.; Bebout, D.G.

    1981-01-01

    A new correlation of the available published data for the solubility of methane in water was developed, based on fundamental thermodynamic relationships. An empirical relationship for the salting-out coefficient of NaCl for methane solubility in water was determined as a function of temperature. Root mean square and average deviations for the new correlation, the Haas correlation, and the revised Blount equation are compared.

  19. Correlations by the entrainment theory of thermodynamic effects for developed cavitation in venturis and comparisons with ogive data

    NASA Technical Reports Server (NTRS)

    Billet, M. L.; Holl, J. W.; Weir, D. S.

    1975-01-01

    A semi-empirical entrainment theory was employed to correlate the measured temperature depression, ΔT, in a developed cavity for a venturi. This theory correlates ΔT in terms of the dimensionless numbers of Nusselt, Reynolds, Froude, Weber and Peclet, and dimensionless cavity length, L/D. These correlations are then compared with similar correlations for zero and quarter caliber ogives. In addition, cavitation number data for both limited and developed cavitation in venturis are presented.

  20. Prediction of Partition Coefficients of Organic Compounds between SPME/PDMS and Aqueous Solution

    PubMed Central

    Chao, Keh-Ping; Lu, Yu-Ting; Yang, Hsiu-Wen

    2014-01-01

    Polydimethylsiloxane (PDMS) is commonly used as the coated polymer in the solid phase microextraction (SPME) technique. In this study, the partition coefficients of organic compounds between SPME/PDMS and the aqueous solution were compiled from literature sources. The correlation analysis for partition coefficients was conducted to interpret the effect of their physicochemical properties and descriptors on the partitioning process. The PDMS-water partition coefficients were significantly correlated to the polarizability of organic compounds (r = 0.977, p < 0.05). An empirical model, consisting of the polarizability, the molecular connectivity index, and an indicator variable, was developed to appropriately predict the partition coefficients of 61 organic compounds for the training set. The predictive ability of the empirical model was demonstrated by using it on a test set of 26 chemicals not included in the training set. The empirical model, which uses straightforwardly calculated molecular descriptors to estimate the PDMS-water partition coefficient, will contribute to practical applications of the SPME technique. PMID:24534804
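
    The paper's fitted coefficients are not reproduced in the abstract, so the sketch below only illustrates the form of such a model: an ordinary least-squares fit of log K against polarizability, a molecular connectivity index, and an indicator variable, on hypothetical descriptor values.

    ```python
    import numpy as np

    # Hypothetical training data: columns are polarizability (A^3), a first-order
    # molecular connectivity index, and an indicator variable (e.g. 1 for a
    # chemical class needing a correction term) -- the real descriptor values
    # come from the compiled literature dataset, not from here.
    X = np.array([[10.2, 2.5, 0], [12.8, 3.0, 0], [14.1, 3.4, 1],
                  [ 9.0, 2.1, 0], [16.3, 4.0, 1], [11.5, 2.8, 0]])
    logK = np.array([2.1, 2.6, 2.3, 1.8, 2.9, 2.4])   # hypothetical log K values

    A = np.column_stack([np.ones(len(X)), X])          # add intercept column
    coef, *_ = np.linalg.lstsq(A, logK, rcond=None)    # ordinary least squares
    pred = A @ coef
    ss_res = ((logK - pred) ** 2).sum()
    ss_tot = ((logK - logK.mean()) ** 2).sum()
    print("coefficients:", coef)
    print("R^2:", 1 - ss_res / ss_tot)
    ```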

  1. Empirical microeconomics action functionals

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Du, Xin; Tanputraman, Winson

    2015-06-01

    A statistical generalization of microeconomics has been made in Baaquie (2013), where the market price of every traded commodity, at each instant of time, is considered to be an independent random variable. The dynamics of commodity market prices is modeled by an action functional-and the focus of this paper is to empirically determine the action functionals for different commodities. The correlation functions of the model are defined using a Feynman path integral. The model is calibrated using the unequal time correlation of the market commodity prices as well as their cubic and quartic moments using a perturbation expansion. The consistency of the perturbation expansion is verified by a numerical evaluation of the path integral. Nine commodities drawn from the energy, metal and grain sectors are studied and their market behavior is described by the model to an accuracy of over 90% using only six parameters. The paper empirically establishes the existence of the action functional for commodity prices that was postulated to exist in Baaquie (2013).

  2. An Empirical Investigation of the Proposition that 'School Is Work': A Comparison of Personality-Performance Correlations in School and Work Settings

    ERIC Educational Resources Information Center

    Lounsbury, John W.; Gibson, Lucy W.; Sundstrom, Eric; Wilburn, Denise; Loveland, James M.

    2004-01-01

    An empirical test of Munson and Rubenstein's (1992) assertion that 'school is work' compared a sample of students in a high school with a sample of workers in a manufacturing plant in the same metropolitan area. Data from both samples included scores on six personality traits--Conscientiousness, Agreeableness, Openness, Emotional Stability,…

  3. Internalized Heterosexism: Measurement, Psychosocial Correlates, and Research Directions

    ERIC Educational Resources Information Center

    Szymanski, Dawn M.; Kashubeck-West, Susan; Meyer, Jill

    2008-01-01

    This article provides an integrated critical review of the literature on internalized heterosexism/internalized homophobia (IH), its measurement, and its psychosocial correlates. It describes the psychometric properties of six published measures used to operationalize the construct of IH. It also critically reviews empirical studies on correlates…

  4. Drag and stability characteristics of a variety of reefed and unreefed parachute configurations at Mach 1.80 with an empirical correlation for supersonic Mach numbers

    NASA Technical Reports Server (NTRS)

    Couch, L. M.

    1975-01-01

    An investigation was conducted at Mach 1.80 in the Langley 4-foot supersonic pressure tunnel to determine the effects of variation in reefing ratio and geometric porosity on the drag and stability characteristics of four basic canopy types deployed in the wake of a cone-cylinder forebody. The basic designs included cross, hemisflo, disk-gap-band, and extended-skirt canopies; however, modular cross and standard flat canopies and a ballute were also investigated. An empirical correlation was determined which provides a fair estimation of the drag coefficients in transonic and supersonic flow for parachutes of specified geometric porosity and reefing ratio.

  5. Electrostatics of cysteine residues in proteins: Parameterization and validation of a simple model

    PubMed Central

    Salsbury, Freddie R.; Poole, Leslie B.; Fetrow, Jacquelyn S.

    2013-01-01

    One of the most popular and simple models for the calculation of pKas from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKas. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKas; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKas. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues. PMID:22777874

  6. Evaluation of an empirical monitor output estimation in carbon ion radiotherapy.

    PubMed

    Matsumura, Akihiko; Yusa, Ken; Kanai, Tatsuaki; Mizota, Manabu; Ohno, Tatsuya; Nakano, Takashi

    2015-09-01

    A conventional broad beam method is applied to carbon ion radiotherapy at Gunma University Heavy Ion Medical Center. According to this method, accelerated carbon ions are scattered by various beam line devices to form a 3D dose distribution. The physical dose per monitor unit (d/MU) at the isocenter, therefore, depends on beam line parameters and should be calibrated by a measurement in clinical practice. This study aims to develop a calculation algorithm for d/MU using beam line parameters. Two major factors, the range shifter dependence and the field aperture effect, are measured with a PinPoint chamber in a water phantom, an identical setup to that used for monitor calibration in clinical practice. An empirical monitor calibration method based on the measurement results is developed using a simple algorithm utilizing a linear function and a double Gaussian pencil beam distribution to express the range shifter dependence and the field aperture effect. The range shifter dependence and the field aperture effect are evaluated to have errors of 0.2% and 0.5%, respectively. The proposed method has successfully estimated d/MU with a difference of less than 1% with respect to the measurement results. Taking the measurement deviation of about 0.3% into account, this result is sufficiently accurate for clinical applications. An empirical procedure to estimate d/MU with a simple algorithm is established in this research. This procedure frees beam time for more treatments, quality assurance, and other research endeavors.
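
    A sketch of how such a two-factor algorithm could be composed; the linear range-shifter coefficients and the double-Gaussian kernel parameters below are assumptions for illustration, not the published calibration.

    ```python
    import numpy as np

    def field_aperture_factor(radius_mm, w=0.9, sigma1=3.0, sigma2=15.0):
        """Fraction of a double-Gaussian lateral dose kernel collimated inside
        a circular aperture of the given radius (kernel parameters assumed)."""
        g = lambda s: 1.0 - np.exp(-radius_mm**2 / (2.0 * s**2))
        return w * g(sigma1) + (1.0 - w) * g(sigma2)

    def dose_per_mu(rs_thickness_mm, field_radius_mm, a=1.0e-3, b=0.85):
        """d/MU = (linear range-shifter dependence) x (field aperture factor);
        the slope a and offset b are illustrative, not the published fit."""
        return (b - a * rs_thickness_mm) * field_aperture_factor(field_radius_mm)

    print(dose_per_mu(rs_thickness_mm=30.0, field_radius_mm=50.0))
    ```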

  7. Correlation and simple linear regression.

    PubMed

    Zou, Kelly H; Tuncali, Kemal; Silverman, Stuart G

    2003-06-01

    In this tutorial article, the concepts of correlation and regression are reviewed and demonstrated. The authors review and compare two correlation coefficients, the Pearson correlation coefficient and the Spearman rho, for measuring linear and nonlinear relationships between two continuous variables. In the case of measuring the linear relationship between a predictor and an outcome variable, simple linear regression analysis is conducted. These statistical concepts are illustrated by using a data set from published literature to assess a computed tomography-guided interventional technique. These statistical methods are important for exploring the relationships between variables and can be applied to many radiologic studies.
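
    The two coefficients and the regression described in the tutorial map directly onto scipy.stats; a minimal example on synthetic data:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    x = rng.uniform(0, 10, 50)                 # predictor
    y = 2.0 * x + rng.normal(0, 2, 50)         # outcome with linear trend + noise

    r, p_r = stats.pearsonr(x, y)              # linear association
    rho, p_rho = stats.spearmanr(x, y)         # monotonic (rank) association
    fit = stats.linregress(x, y)               # simple linear regression

    print(f"Pearson r = {r:.3f} (p = {p_r:.3g})")
    print(f"Spearman rho = {rho:.3f} (p = {p_rho:.3g})")
    print(f"y = {fit.slope:.2f} x + {fit.intercept:.2f}, R^2 = {fit.rvalue**2:.3f}")
    ```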

  8. EIT Noise Resonance Power Broadening: a probe for coherence dynamics

    NASA Astrophysics Data System (ADS)

    Crescimanno, Michael; O'Leary, Shannon; Snider, Charles

    2012-06-01

    EIT noise correlation spectroscopy holds promise as a simple, robust method for performing high resolution spectroscopy used in devices as diverse as magnetometers and clocks. One useful feature of these noise correlation resonances is that they do not power broaden with the EIT window. We report on measurements of the eventual power broadening (at higher optical powers) of these resonances and a simple, quantitative theoretical model that relates the observed power broadening slope with processes such as two-photon detuning gradients and coherence diffusion. These processes reduce the ground state coherence relative to that of a homogeneous system, and thus the power broadening slope of the EIT noise correlation resonance may be a simple, useful probe for coherence dynamics.

  9. Statistical analysis of co-occurrence patterns in microbial presence-absence datasets

    PubMed Central

    Bewick, Sharon; Thielen, Peter; Mehoke, Thomas; Breitwieser, Florian P.; Paudel, Shishir; Adhikari, Arjun; Wolfe, Joshua; Slud, Eric V.; Karig, David; Fagan, William F.

    2017-01-01

    Drawing on a long history in macroecology, correlation analysis of microbiome datasets is becoming a common practice for identifying relationships or shared ecological niches among bacterial taxa. However, many of the statistical issues that plague such analyses in macroscale communities remain unresolved for microbial communities. Here, we discuss problems in the analysis of microbial species correlations based on presence-absence data. We focus on presence-absence data because this information is more readily obtainable from sequencing studies, especially for whole-genome sequencing, where abundance estimation is still in its infancy. First, we show how Pearson’s correlation coefficient (r) and Jaccard’s index (J)–two of the most common metrics for correlation analysis of presence-absence data–can contradict each other when applied to a typical microbiome dataset. In our dataset, for example, 14% of species-pairs predicted to be significantly correlated by r were not predicted to be significantly correlated using J, while 37.4% of species-pairs predicted to be significantly correlated by J were not predicted to be significantly correlated using r. Mismatch was particularly common among species-pairs with at least one rare species (<10% prevalence), explaining why r and J might differ more strongly in microbiome datasets, where there are large numbers of rare taxa. Indeed 74% of all species-pairs in our study had at least one rare species. Next, we show how Pearson’s correlation coefficient can result in artificial inflation of positive taxon relationships and how this is a particular problem for microbiome studies. We then illustrate how Jaccard’s index of similarity (J) can yield improvements over Pearson’s correlation coefficient. However, the standard null model for Jaccard’s index is flawed, and thus introduces its own set of spurious conclusions. We thus identify a better null model based on a hypergeometric distribution, which appropriately corrects for species prevalence. This model is available from recent statistics literature, and can be used for evaluating the significance of any value of an empirically observed Jaccard’s index. The resulting simple, yet effective method for handling correlation analysis of microbial presence-absence datasets provides a robust means of testing and finding relationships and/or shared environmental responses among microbial taxa. PMID:29145425
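
    A minimal sketch of the recommended procedure: compute Jaccard's index for one species pair and evaluate its significance against a hypergeometric null that conditions on each species' prevalence (scipy's hypergeom provides the tail probability).

    ```python
    import numpy as np
    from scipy.stats import hypergeom

    def jaccard_and_pvalue(a, b):
        """Jaccard index of two presence-absence vectors, plus the probability,
        under a hypergeometric null fixing each species' prevalence, of seeing
        at least as many co-occurrences."""
        a = np.asarray(a, bool)
        b = np.asarray(b, bool)
        N = len(a)                       # number of samples
        na, nb = a.sum(), b.sum()        # prevalence of each species
        k = (a & b).sum()                # observed co-occurrences
        J = k / (a | b).sum()
        # P(X >= k) when nb samples are drawn without replacement from N,
        # of which na are "occupied" by species A
        p = hypergeom.sf(k - 1, N, na, nb)
        return J, p

    rng = np.random.default_rng(2)
    x = rng.random(100) < 0.08           # a rare species (8% prevalence)
    y = rng.random(100) < 0.60           # a common species
    print(jaccard_and_pvalue(x, y))
    ```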

  10. Adsorption and co-adsorption of diclofenac and Cu(II) on calcareous soils.

    PubMed

    Graouer-Bacart, Mareen; Sayen, Stéphanie; Guillon, Emmanuel

    2016-02-01

    Pharmaceuticals are emerging contaminants and their presence in different compartments of the environment has been detected in many countries. In this study, laboratory batch experiments were conducted to characterize the adsorption of diclofenac, a widely used non-steroidal anti-inflammatory drug, on six calcareous soils. The adsorption of diclofenac was relatively low, which may lead to a risk of groundwater contamination and plant uptake. A correlation between the soil-water distribution coefficient Kd and soil characteristics has been highlighted. Indeed, diclofenac adsorption as a function of soil organic matter content (% OM) and Rt=% CaCO3/% OM was successfully described through a simple empirical model, indicating the importance of considering the inhibiting effect of CaCO3 on OM retention properties for a better assessment of diclofenac fate in the specific case of calcareous soils. The simultaneous co-adsorption of diclofenac and copper - a ubiquitous pollutant in the environment - at the water/soil interface, was also investigated. It appeared quite unexpectedly that copper did not have a significant influence on diclofenac retention. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Parametric study of beam refraction problems across laser anemometer windows

    NASA Technical Reports Server (NTRS)

    Owen, A. K.

    1986-01-01

    The experimenter is often required to view flows through a window with a different index of refraction than either the medium being observed or the medium that the laser anemometer is immersed in. The refraction that occurs at the window surfaces may lead to undesirable changes in probe volume position or beam crossing angle and can lead to partial or complete beam uncrossing. This report describes the results of a parametric study of this problem using a ray tracing technique to predict these changes. The windows studied were a flat plate and a simple cylinder. For the flat-plate study: (1) surface thickness, (2) beam crossing angle, (3) bisecting line - surface normal angle, and (4) incoming beam plane surface orientation were varied. For the cylindrical window additional parameters were also varied: (1) probe volume immersion, (2) probe volume off-radial position, and (3) probe volume position out of the R-theta plane of the lens. A number of empirical correlations were deduced to aid the interested reader in determining the movement, uncrossing, and change in crossing angle for a particular situation.

  12. A parametric study of the beam refraction problems across laser anemometer windows

    NASA Technical Reports Server (NTRS)

    Owen, Albert K.

    1986-01-01

    The experimenter is often required to view flows through a window with a different index of refraction than either the medium being observed or the medium that the laser anemometer is immersed in. The refraction that occurs at the window surfaces may lead to undesirable changes in probe volume position or beam crossing angle and can lead to partial or complete beam uncrossing. This report describes the results of a parametric study of this problem using a ray tracing technique to predict these changes. The windows studied were a flat plate and a simple cylinder. For the flat-plate study: (1) surface thickness, (2) beam crossing angle, (3) bisecting line - surface normal angle, and (4) incoming beam plane surface orientation were varied. For the cylindrical window additional parameters were also varied: (1) probe volume immersion, (2) probe volume off-radial position, and (3) probe volume position out of the r-theta plane of the lens. A number of empirical correlations were deduced to aid the reader in determining the movement, uncrossing, and change in crossing angle for a particular situation.

  13. Benford analysis of quantum critical phenomena: First digit provides high finite-size scaling exponent while first two and further are not much better

    NASA Astrophysics Data System (ADS)

    Bera, Anindita; Mishra, Utkarsh; Singha Roy, Sudipto; Biswas, Anindya; Sen(De), Aditi; Sen, Ujjwal

    2018-06-01

    Benford's law is an empirical edict stating that the lower digits appear more often than higher ones as the first few significant digits in statistics of natural phenomena and mathematical tables. A marked proportion of such analyses is restricted to the first significant digit. We employ violation of Benford's law, up to the first four significant digits, for investigating magnetization and correlation data of paradigmatic quantum many-body systems to detect cooperative phenomena, focusing on the finite-size scaling exponents thereof. We find that for the transverse field quantum XY model, behavior of the very first significant digit of an observable, at an arbitrary point of the parameter space, is enough to capture the quantum phase transition in the model with a relatively high scaling exponent. A higher number of significant digits does not provide an appreciable further advantage, in particular, in terms of an increase in scaling exponents. Since the first significant digit of a physical quantity is relatively simple to obtain in experiments, the results have potential implications for laboratory observations in noisy environments.
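
    As a side illustration of the law itself (not of the quantum-critical scaling analysis in the paper), the first-digit frequencies of a dataset can be compared against Benford's prediction P(d) = log10(1 + 1/d):

    ```python
    import numpy as np

    def first_digit_deviation(data):
        """Compare the empirical first-significant-digit frequencies of a
        dataset against Benford's law, P(d) = log10(1 + 1/d)."""
        data = np.abs(np.asarray(data, float))
        data = data[data > 0]
        # first significant digit: shift each value into [1, 10)
        first = (data / 10.0 ** np.floor(np.log10(data))).astype(int)
        observed = np.bincount(first, minlength=10)[1:10] / len(first)
        benford = np.log10(1.0 + 1.0 / np.arange(1, 10))
        return observed, benford, np.abs(observed - benford).sum()

    # lognormal samples follow Benford's law closely
    samples = np.random.default_rng(3).lognormal(0, 2, 100000)
    obs, exp, dev = first_digit_deviation(samples)
    print(np.round(obs, 3), np.round(exp, 3), dev)
    ```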

  14. Human figure drawings in the evaluation of severe adolescent suicidal behavior.

    PubMed

    Zalsman, G; Netanel, R; Fischel, T; Freudenstein, O; Landau, E; Orbach, I; Weizman, A; Pfeffer, C R; Apter, A

    2000-08-01

    To evaluate the reliability of using certain indicators derived from human figure drawings to distinguish between suicidal and nonsuicidal adolescents. Ninety consecutive admissions to an adolescent inpatient unit were assessed. Thirty-nine patients were admitted because of suicidal behavior and 51 for other reasons. All subjects were given the Human Figure Drawing (HFD) test. HFD was evaluated according to the method of Pfeffer and Richman, and the degree of suicidal behavior was rated by the Child Suicide Potential Scale. The internal reliability was satisfactory. HFD indicators correlated significantly with quantitative measures of suicidal behavior; of these indicators specifically, overall impression of the evaluator enabled the prediction of suicidal behavior and the distinction between suicidal and nonsuicidal inpatients (p < .001). A group of graphic indicators derived from a discriminant analysis formed a function, which was able to identify 84.6% of the suicidal and 76.6% of the nonsuicidal adolescents correctly. Many of the items had a regressive quality. The HFD is an example of a simple projective test that may have empirical reliability. It may be useful for the assessment of severe suicidal behavior in adolescents.

  15. Unwinding the hairball graph: Pruning algorithms for weighted complex networks

    NASA Astrophysics Data System (ADS)

    Dianati, Navid

    2016-01-01

    Empirical networks of weighted dyadic relations often contain "noisy" edges that alter the global characteristics of the network and obfuscate the most important structures therein. Graph pruning is the process of identifying the most significant edges according to a generative null model and extracting the subgraph consisting of those edges. Here, we focus on integer-weighted graphs commonly arising when weights count the occurrences of an "event" relating the nodes. We introduce a simple and intuitive null model related to the configuration model of network generation and derive two significance filters from it: the marginal likelihood filter (MLF) and the global likelihood filter (GLF). The former is a fast algorithm assigning a significance score to each edge based on the marginal distribution of edge weights, whereas the latter is an ensemble approach which takes into account the correlations among edges. We apply these filters to the network of air traffic volume between US airports and recover a geographically faithful representation of the graph. Furthermore, compared with thresholding based on edge weight, we show that our filters extract a larger and significantly sparser giant component.
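
    A sketch of a marginal significance filter in the spirit of the MLF, assuming the simplified configuration-model null p_ij = s_i s_j / (2T^2) for the probability that one of the T unit events connects nodes i and j; the published filter may differ in details.

    ```python
    import numpy as np
    from scipy.stats import binom

    def marginal_likelihood_filter(weights, alpha=0.01):
        """Score each integer-weighted edge against a configuration-model-style
        null: each of the T unit events independently connects (i, j) with
        probability p_ij ~ s_i * s_j / (2 T^2), where s_i is node strength.
        `weights` is a dict {(i, j): w}."""
        T = sum(weights.values())                      # total event count
        strength = {}
        for (i, j), w in weights.items():
            strength[i] = strength.get(i, 0) + w
            strength[j] = strength.get(j, 0) + w
        significant = {}
        for (i, j), w in weights.items():
            p_ij = strength[i] * strength[j] / (2.0 * T * T)
            pval = binom.sf(w - 1, T, p_ij)            # P(weight >= w) under null
            if pval < alpha:
                significant[(i, j)] = w
        return significant

    edges = {(0, 1): 40, (0, 2): 3, (1, 2): 2, (2, 3): 25, (3, 4): 1}
    print(marginal_likelihood_filter(edges, alpha=0.05))
    ```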

  16. Numerical study of blast characteristics from detonation of homogeneous explosives

    NASA Astrophysics Data System (ADS)

    Balakrishnan, Kaushik; Genin, Franklin; Nance, Doug V.; Menon, Suresh

    2010-04-01

    A new robust numerical methodology is used to investigate the propagation of blast waves from homogeneous explosives. The gas-phase governing equations are solved using a hybrid solver that combines a higher-order shock capturing scheme with a low-dissipation central scheme. Explosives of interest include Nitromethane, Trinitrotoluene, and High-Melting Explosive. The shock overpressure and total impulse are estimated at different radial locations and compared for the different explosives. An empirical scaling correlation is presented for the shock overpressure, incident positive phase pressure impulse, and total impulse. The role of hydrodynamic instabilities in the blast effects of explosives is also investigated in three dimensions, and significant mixing between the detonation products and air is observed. This mixing results in afterburn, which is found to augment the impulse characteristics of explosives. Furthermore, the impulse characteristics are also observed to be three-dimensional in the region of the mixing layer. This paper highlights that while some blast features can be successfully predicted from simple one-dimensional studies, the growth of hydrodynamic instabilities and the impulsive loading of homogeneous explosives require robust three-dimensional investigation.

  17. Minimum-dissipation scalar transport model for large-eddy simulation of turbulent flows

    NASA Astrophysics Data System (ADS)

    Abkar, Mahdi; Bae, Hyun J.; Moin, Parviz

    2016-08-01

    Minimum-dissipation models are a simple alternative to the Smagorinsky-type approaches to parametrize the subfilter turbulent fluxes in large-eddy simulation. A recently derived model of this type for the subfilter stress tensor is the anisotropic minimum-dissipation (AMD) model [Rozema et al., Phys. Fluids 27, 085107 (2015), 10.1063/1.4928700], which has many desirable properties. It is more cost effective than the dynamic Smagorinsky model, it appropriately switches off in laminar and transitional flows, and it is consistent with the exact subfilter stress tensor on both isotropic and anisotropic grids. In this study, an extension of this approach to modeling the subfilter scalar flux is proposed. The performance of the AMD model is tested in the simulation of a high-Reynolds-number rough-wall boundary-layer flow with a constant and uniform surface scalar flux. The simulation results obtained from the AMD model show good agreement with well-established empirical correlations and theoretical predictions of the resolved flow statistics. In particular, the AMD model is capable of accurately predicting the expected surface-layer similarity profiles and power spectra for both velocity and scalar concentration.

  18. Dependence of two-proton radioactivity on nuclear pairing models

    NASA Astrophysics Data System (ADS)

    Oishi, Tomohiro; Kortelainen, Markus; Pastore, Alessandro

    2017-10-01

    Sensitivity of two-proton emitting decay to nuclear pairing correlation is discussed within a time-dependent three-body model. We focus on the 6Be nucleus assuming α +p +p configuration, and its decay process is described as a time evolution of the three-body resonance state. For a proton-proton subsystem, a schematic density-dependent contact (SDDC) pairing model is employed. From the time-dependent calculation, we observed the exponential decay rule of a two-proton emission. It is shown that the density dependence does not play a major role in determining the decay width, which can be controlled only by the asymptotic strength of the pairing interaction. This asymptotic pairing sensitivity can be understood in terms of the dynamics of the wave function driven by the three-body Hamiltonian, by monitoring the time-dependent density distribution. With this simple SDDC pairing model, there remains an impossible trinity problem: it cannot simultaneously reproduce the empirical Q value, decay width, and the nucleon-nucleon scattering length. This problem suggests that a further sophistication of the theoretical pairing model is necessary, utilizing the two-proton radioactivity data as the reference quantities.

  19. Changes in the Molar Ellipticities of HEWL Observed by Circular Dichroism and Quantitated by Time Resolved Fluorescence Anisotropy Under Crystallizing Conditions

    NASA Technical Reports Server (NTRS)

    Sumida, John

    2002-01-01

    Fluid models for simple colloids predict that as the protein concentration is increased, crystallization should occur at some sufficiently high concentration regardless of the strength of attraction. However, empirical measurements do not fully support this assertion. Measurements of the second virial coefficient (B22) indicate that protein crystallization occurs only over a discrete range of solution parameters. Furthermore, observations of a strong correlation between protein solubility and B22 have led to an ongoing debate regarding the relationship between the two. Experimental work in our lab, using Hen Egg White Lysozyme (HEWL), previously revealed that the rotational anisotropy of the protein under crystallizing conditions changes systematically with pH, ionic strength and temperature. These observations are now supported by recent work revealing that small changes in the molar ellipticity also occur systematically with changes in ionic strength and temperature. This work demonstrates that under crystallization conditions, the protein native state is characterized by a conformational heterogeneity that may prove fundamental to the relationship between protein crystallization and protein solubility.

  20. Autonomous change of behavior for environmental context: An intermittent search model with misunderstanding search pattern

    NASA Astrophysics Data System (ADS)

    Murakami, Hisashi; Gunji, Yukio-Pegio

    2017-07-01

    Foraging patterns have long been predicted to adapt optimally to environmental conditions, but empirical evidence for this has been found only in recent years. This evidence suggests that the search strategy of animals is open to change, allowing animals to respond flexibly to their environment. In this study, we began with a simple computational model that possesses the principal features of an intermittent strategy, i.e., careful local searches separated by longer relocation steps. An agent in the model follows a rule to switch between the two phases, but it can misunderstand this rule, i.e., the agent follows an ambiguous switching rule. Thanks to this ambiguity, the agent's foraging strategy can change continuously. First, we demonstrate that our model can exhibit an optimal change of strategy from Brownian-type to Lévy-type depending on prey density, and we investigate the distribution of time intervals for switching between the phases. Moreover, we show that the model can display higher search efficiency than a correlated random walk.

  1. COUSCOus: improved protein contact prediction using an empirical Bayes covariance estimator.

    PubMed

    Rawi, Reda; Mall, Raghvendra; Kunji, Khalid; El Anbari, Mohammed; Aupetit, Michael; Ullah, Ehsan; Bensmail, Halima

    2016-12-15

    The post-genomic era with its wealth of sequences gave rise to a broad range of protein residue-residue contact detecting methods. Although various coevolution methods such as PSICOV, DCA and plmDCA provide correct contact predictions, they do not completely overlap. Hence, new approaches and improvements of existing methods are needed to motivate further development and progress in the field. We present a new contact detecting method, COUSCOus, by combining the best-performing shrinkage approach, the empirical Bayes covariance estimator, with GLasso. Using the original PSICOV benchmark dataset, COUSCOus achieves mean accuracies of 0.74, 0.62 and 0.55 for the top L/10 predicted long, medium and short range contacts, respectively. In addition, COUSCOus attains mean areas under the precision-recall curves of 0.25, 0.29 and 0.30 for long, medium and short contacts and outperforms PSICOV. We also observed that COUSCOus outperforms PSICOV with respect to the Matthews correlation coefficient on the full list of residue contacts. Furthermore, COUSCOus achieves on average 10% more gain in prediction accuracy compared to PSICOV on an independent test set composed of CASP11 protein targets. Finally, we showed that when using a simple random forest meta-classifier, by combining contact detecting techniques and sequence derived features, PSICOV predictions should be replaced by the more accurate COUSCOus predictions. We conclude that the consideration of superior covariance shrinkage approaches will boost several research fields that apply the GLasso procedure, amongst the presented one of residue-residue contact prediction as well as fields such as gene network reconstruction.
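
    A sketch of the shrinkage-plus-GLasso pipeline using scikit-learn on stand-in data, with Ledoit-Wolf as a stand-in for the empirical Bayes covariance estimator that COUSCOus plugs into this slot.

    ```python
    import numpy as np
    from sklearn.covariance import LedoitWolf, graphical_lasso

    # Stand-in data: rows are observations, columns are variables (for contact
    # prediction these would be a numerical encoding of alignment columns).
    rng = np.random.default_rng(8)
    X = rng.normal(size=(200, 30))
    X[:, 1] += 0.8 * X[:, 0]                  # plant one true dependency

    shrunk = LedoitWolf().fit(X).covariance_  # well-conditioned covariance
    # sparse inverse covariance from the shrunk estimate
    _, precision = graphical_lasso(shrunk, alpha=0.1)
    # strong off-diagonal precision entries mark candidate couplings ("contacts")
    print("coupling(0,1):", precision[0, 1], "   null pair:", precision[5, 17])
    ```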

  2. Bernoulli-Langevin Wind Speed Model for Simulation of Storm Events

    NASA Astrophysics Data System (ADS)

    Fürstenau, Norbert; Mittendorf, Monika

    2016-12-01

    We present a simple nonlinear dynamics Langevin model for predicting the instationary wind speed profile during storm events typically accompanying extreme low-pressure situations. It is based on a second-degree Bernoulli equation with δ-correlated Gaussian noise and may complement stationary stochastic wind models. Transitions between increasing and decreasing wind speed, and between the (quasi) stationary normal-wind and storm states, are induced by the sign change of the controlling time-dependent rate parameter k(t). This approach corresponds to the simplified nonlinear laser dynamics for the incoherent-to-coherent transition of light emission, which can be understood through a phase transition analogy within equilibrium thermodynamics [H. Haken, Synergetics, 3rd ed., Springer, Berlin, Heidelberg, New York 1983/2004]. Evidence for the nonlinear dynamics two-state approach is generated by fitting two historical wind speed profiles (low-pressure situations "Xaver" and "Christian", 2013), taken from Meteorological Terminal Air Report weather data, with a logistic approximation (i.e. constant rate coefficients k) to the solution of our dynamical model using a sum of sigmoid functions. The analytical solution of our dynamical two-state Bernoulli equation, as obtained with a sinusoidal rate ansatz k(t) of period T (= storm duration), exhibits reasonable agreement with the logistic fit to the empirical data. Noise parameter estimates of speed fluctuations are derived from empirical fit residuals and by means of a stationary solution of the corresponding Fokker-Planck equation. Numerical simulations with the Bernoulli-Langevin equation demonstrate the potential for stochastic wind speed profile modeling and predictive filtering under extreme storm events, suggested for applications in anticipative air traffic management.
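
    A minimal Euler-Maruyama sketch of such a Bernoulli-Langevin equation, dv = (k(t)v - g v^2)dt + sigma dW with a sinusoidal rate k(t); all parameter values are illustrative, not fits to the weather data.

    ```python
    import numpy as np

    def simulate_storm(T=48.0, dt=0.01, v0=5.0, k0=0.4, g=0.015, sigma=0.5, seed=4):
        """Euler-Maruyama integration of  dv = (k(t) v - g v^2) dt + sigma dW,
        with a sinusoidal rate k(t) = k0 sin(2 pi t / T) switching between growth
        and decay over one storm of duration T (parameter values illustrative)."""
        rng = np.random.default_rng(seed)
        n = int(T / dt)
        t = np.arange(n) * dt
        v = np.empty(n)
        v[0] = v0
        for i in range(1, n):
            k = k0 * np.sin(2.0 * np.pi * t[i - 1] / T)
            drift = k * v[i - 1] - g * v[i - 1] ** 2
            step = drift * dt + sigma * np.sqrt(dt) * rng.normal()
            v[i] = max(v[i - 1] + step, 0.0)          # wind speed stays nonnegative
        return t, v

    t, v = simulate_storm()
    print(f"peak wind speed {v.max():.1f} at t = {t[v.argmax()]:.1f} h")
    ```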

  3. A Prototype Physical Database for Passive Microwave Retrievals of Precipitation over the US Southern Great Plains

    NASA Technical Reports Server (NTRS)

    Ringerud, S.; Kummerow, C. D.; Peters-Lidard, C. D.

    2015-01-01

    An accurate understanding of the instantaneous, dynamic land surface emissivity is necessary for a physically based, multi-channel passive microwave precipitation retrieval scheme over land. In an effort to assess the feasibility of the physical approach for land surfaces, a semi-empirical emissivity model is applied for calculation of the surface component in a test area of the US Southern Great Plains. A physical emissivity model, using land surface model data as input, is used to calculate emissivity at the 10 GHz frequency, combining contributions from the underlying soil and vegetation layers, including the dielectric and roughness effects of each medium. An empirical technique is then applied, based upon a robust set of observed channel covariances, extending the emissivity calculations to all channels. For calculation of the hydrometeor contribution, reflectivity profiles from the Tropical Rainfall Measurement Mission Precipitation Radar (TRMM PR) are utilized along with coincident brightness temperatures (Tbs) from the TRMM Microwave Imager (TMI), and cloud-resolving model profiles. Ice profiles are modified to be consistent with the higher frequency microwave Tbs. The resulting modeled top-of-the-atmosphere Tbs show correlations with observations of 0.9, biases of 1 K or less, root-mean-square errors on the order of 5 K, and improved agreement over the use of climatological emissivity values. The synthesis of these models and data sets leads to the creation of a simple prototype Tb database that includes both dynamic surface and atmospheric information physically consistent with the land surface model, emissivity model, and atmospheric information.

  4. Model averaging in linkage analysis.

    PubMed

    Matthysse, Steven

    2006-06-05

    Methods for genetic linkage analysis are traditionally divided into "model-dependent" and "model-independent," but there may be a useful place for an intermediate class, in which a broad range of possible models is considered as a parametric family. It is possible to average over model space with an empirical Bayes prior that weights models according to their goodness of fit to epidemiologic data, such as the frequency of the disease in the population and in first-degree relatives (and correlations with other traits in the pleiotropic case). For averaging over high-dimensional spaces, Markov chain Monte Carlo (MCMC) has great appeal, but it has a near-fatal flaw: it is not possible, in most cases, to provide rigorous sufficient conditions to permit the user safely to conclude that the chain has converged. A way of overcoming the convergence problem, if not of solving it, rests on a simple application of the principle of detailed balance. If the starting point of the chain has the equilibrium distribution, so will every subsequent point. The first point is chosen according to the target distribution by rejection sampling, and subsequent points by an MCMC process that has the target distribution as its equilibrium distribution. Model averaging with an empirical Bayes prior requires rapid estimation of likelihoods at many points in parameter space. Symbolic polynomials are constructed before the random walk over parameter space begins, to make the actual likelihood computations at each step of the random walk very fast. Power analysis in an illustrative case is described. (c) 2006 Wiley-Liss, Inc.
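
    The detailed-balance trick is easy to demonstrate on a toy one-dimensional target (not the linkage likelihood itself): draw the first point exactly by rejection sampling, then run a Metropolis chain whose equilibrium distribution is the same target, so every subsequent point is also exactly distributed.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    p = lambda x: np.exp(-x**4)                    # unnormalized target density

    def rejection_start():
        """Draw one exact sample from p using a N(0,1) proposal; the envelope
        1.1 * exp(-x^2/2) dominates exp(-x^4) everywhere."""
        while True:
            x = rng.normal()
            if rng.random() <= p(x) / (1.1 * np.exp(-x**2 / 2)):
                return x

    x = rejection_start()                          # chain starts in equilibrium
    chain = [x]
    for _ in range(10000):                         # Metropolis keeps it there
        prop = x + rng.normal(0, 0.5)              # symmetric random-walk proposal
        if rng.random() <= min(1.0, p(prop) / p(x)):
            x = prop
        chain.append(x)
    print("mean, var:", np.mean(chain), np.var(chain))
    ```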

  5. Helicity and nuclear β decay correlations

    NASA Astrophysics Data System (ADS)

    Hong, Ran; Sternberg, Matthew G.; Garcia, Alejandro

    2017-01-01

    We present simple derivations of nuclear β-decay correlations with an emphasis on the special role of helicity. This topic provides a good opportunity to teach students about helicity and chirality in particle physics with exercises that use simple aspects of quantum mechanics. In addition, this paper serves as an introduction to nuclear β-decay correlations from both a theoretical and experimental perspective. This article can be used to introduce students to ongoing experiments searching for hints of new physics in the low-energy precision frontier.

  6. Multifractality, efficiency analysis of Chinese stock market and its cross-correlation with WTI crude oil price

    NASA Astrophysics Data System (ADS)

    Zhuang, Xiaoyang; Wei, Yu; Ma, Feng

    2015-07-01

    In this paper, the multifractality and efficiency degrees of ten important Chinese sectoral indices are evaluated using MF-DFA and generalized Hurst exponents. The study also scrutinizes the dynamics of the efficiency of the Chinese sectoral stock market using a rolling window approach. The overall empirical findings reveal that all sectoral indices of the Chinese stock market exhibit different degrees of multifractality. The different efficiency measures agree that the 300 Materials index is the least efficient, although they differ slightly on the most efficient one. The 300 Information Technology, 300 Telecommunication Services and 300 Health Care indices are comparatively efficient. We also investigate the cross-correlations between the ten sectoral indices and the WTI crude oil price based on multifractal detrended cross-correlation analysis. Finally, some relevant discussions and implications of the empirical results are presented.

  7. Droplet breakup in accelerating gas flows. Part 2: Secondary atomization

    NASA Technical Reports Server (NTRS)

    Zajac, L. J.

    1973-01-01

    An experimental investigation was conducted to determine the effects of an accelerating gas flow on the atomization characteristics of liquid sprays. The sprays were produced by impinging two liquid jets; the liquid was molten wax and the gas was nitrogen. The use of molten wax allowed a quantitative measure of the resulting dropsize distribution. The results of this study indicate that a significant amount of droplet breakup will occur as a result of the action of the gas on the liquid droplets. Empirical correlations are presented in terms of the parameters found to affect the mass median dropsize most significantly: the orifice diameter, the liquid injection velocity, and the maximum gas velocity. An empirical correlation for the normalized dropsize distribution is also presented. These correlations are in a form that may be readily incorporated into existing combustion model computer codes for calculating rocket engine combustion performance.

  8. Analysis of Vibration and Noise of Construction Machinery Based on Ensemble Empirical Mode Decomposition and Spectral Correlation Analysis Method

    NASA Astrophysics Data System (ADS)

    Chen, Yuebiao; Zhou, Yiqi; Yu, Gang; Lu, Dan

    In order to analyze the effect of engine vibration on cab noise of construction machinery in multiple frequency bands, a new method based on ensemble empirical mode decomposition (EEMD) and spectral correlation analysis is proposed. First, the intrinsic mode functions (IMFs) of the vibration and noise signals are obtained by the EEMD method, and the IMFs occupying the same frequency bands are selected. Second, the spectral correlation coefficients between the selected IMFs are calculated, identifying the main frequency bands in which engine vibration has a significant impact on cab noise. Third, the dominant frequencies are picked out and analyzed by spectral analysis. The study shows that the main frequency bands and dominant frequencies in which engine vibration seriously affects cab noise can be identified effectively by the proposed method, which provides effective guidance for noise reduction of construction machinery.
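
    Assuming the IMFs have already been extracted by EEMD upstream (e.g. with an EMD package), the spectral correlation step can be sketched as the Pearson correlation of the Welch power spectra of two same-band IMFs.

    ```python
    import numpy as np
    from scipy.signal import welch

    def band_spectral_correlation(imf_vib, imf_noise, fs=1000.0):
        """Spectral correlation coefficient between two IMFs occupying the same
        frequency band: Pearson correlation of their Welch power spectra.
        (The EEMD decomposition itself is assumed done upstream; this sketch
        covers only the correlation step.)"""
        _, Pv = welch(imf_vib, fs=fs, nperseg=256)
        _, Pn = welch(imf_noise, fs=fs, nperseg=256)
        return np.corrcoef(Pv, Pn)[0, 1]

    # toy IMF pair sharing a 120 Hz component
    t = np.arange(0, 2, 1e-3)
    rng = np.random.default_rng(9)
    vib   = np.sin(2 * np.pi * 120 * t) + 0.3 * rng.normal(size=t.size)
    noise = 0.5 * np.sin(2 * np.pi * 120 * t + 0.8) + 0.3 * rng.normal(size=t.size)
    print("spectral correlation:", band_spectral_correlation(vib, noise))
    ```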

  9. A simple empirical model for the clarification-thickening process in wastewater treatment plants.

    PubMed

    Zhang, Y K; Wang, H C; Qi, L; Liu, G H; He, Z J; Fan, H T

    2015-01-01

    In wastewater treatment plants (WWTPs), activated sludge is thickened in secondary settling tanks and recycled into the biological reactor to maintain enough biomass for wastewater treatment. Accurately estimating the activated sludge concentration in the lower portion of the secondary clarifiers is of great importance for evaluating and controlling the sludge recycled ratio, ensuring smooth and efficient operation of the WWTP. By dividing the overall activated sludge-thickening curve into a hindered zone and a compression zone, an empirical model describing activated sludge thickening in the compression zone was obtained by empirical regression. This empirical model was developed through experiments conducted using sludge from five WWTPs, and validated by the measured data from a sixth WWTP, which fit the model well (R² = 0.98, p < 0.001). The model requires application of only one parameter, the sludge volume index (SVI), which is readily incorporated into routine analysis. By combining this model with the conservation of mass equation, an empirical model for compression settling was also developed. Finally, the effects of denitrification and addition of a polymer were also analysed because of their effect on sludge thickening, which can be useful for WWTP operation, e.g., improving wastewater treatment or the proper use of the polymer.

  10. An Empirical Examination of the Anomie Theory of Drug Use.

    ERIC Educational Resources Information Center

    Dull, R. Thomas

    1983-01-01

    Investigated the relationship between anomie theory, as measured by Srole's Anomie Scale, and self-admitted drug use in an adult population (N=1,449). Bivariate cross-comparison correlations indicated anomie was significantly correlated with several drug variables, but these associations were extremely weak and of little explanatory value.…

  11. Physical Activity and Psychological Correlates during an After-School Running Club

    ERIC Educational Resources Information Center

    Kahan, David; McKenzie, Thomas L.

    2018-01-01

    Background: After-school programs (ASPs) have the potential to contribute to moderate-to-vigorous physical activity (MVPA), but there is limited empirical evidence to guide their development and implementation. Purpose: This study assessed the replication of an elementary school running program and identified psychological correlates of children's…

  12. Combining DSMC Simulations and ROSINA/COPS Data of Comet 67P/Churyumov-Gerasimenko to Develop a Realistic Empirical Coma Model and to Determine Accurate Production Rates

    NASA Astrophysics Data System (ADS)

    Hansen, K. C.; Fougere, N.; Bieler, A. M.; Altwegg, K.; Combi, M. R.; Gombosi, T. I.; Huang, Z.; Rubin, M.; Tenishev, V.; Toth, G.; Tzou, C. Y.

    2015-12-01

    We have previously published results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model and its characterization of the neutral coma of comet 67P/Churyumov-Gerasimenko through detailed comparison with data collected by the ROSINA/COPS (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis/COmet Pressure Sensor) instrument aboard the Rosetta spacecraft [Bieler, 2015]. Results from these DSMC models have been used to create an empirical model of the near-comet coma (<200 km) of comet 67P. The empirical model characterizes the neutral coma in a comet-centered, sun-fixed reference frame as a function of heliocentric distance, radial distance from the comet, local time and declination. The model is a significant improvement over simpler empirical models, such as the Haser model. While the DSMC results are a more accurate representation of the coma at any given time, the advantage of a mean-state empirical model is its ease and speed of use. One use of such an empirical model is in the calculation of the total cometary coma production rate from the ROSINA/COPS data. The COPS data are in situ measurements of gas density and velocity along the Rosetta spacecraft track. Converting the measured neutral density into a production rate requires knowledge of the neutral gas distribution in the coma. Our empirical model provides this information and therefore allows us to correct for the spacecraft location and calculate a production rate as a function of heliocentric distance. We will present the full empirical model as well as the calculated neutral production rate for the period of August 2014 - August 2015 (perihelion).
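
    For reference, the simple Haser model that the empirical model improves upon reduces to a one-line density formula; the parameter values below are illustrative for a water coma, not the 67P fit.

    ```python
    import numpy as np

    def haser_density(r_km, Q=1e26, v_kms=0.7, tau_s=1e5):
        """Spherically symmetric Haser model: local number density (m^-3) at
        cometocentric distance r for production rate Q (molecules/s), constant
        outflow speed v, and photodissociation lifetime tau. The empirical model
        in the record adds dependence on local time, declination, and
        heliocentric distance on top of this."""
        r = r_km * 1e3                    # m
        v = v_kms * 1e3                   # m/s
        scale = v * tau_s                 # dissociation scale length (m)
        return Q / (4.0 * np.pi * v * r**2) * np.exp(-r / scale)

    for r in (10.0, 50.0, 200.0):         # km, inside the <200 km region modeled
        print(f"n({r:6.1f} km) = {haser_density(r):.3e} m^-3")
    ```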

  13. [Is it possible a bioethics based on the experimental evidence?].

    PubMed

    Pastor, Luis Miguel

    2013-01-01

    For years, various criticisms have been raised against principlist bioethics. One proposed alternative is to introduce empirical evidence into bioethical discourse to make it less formal, less theoretical, and closer to reality. In this paper we first analyze, in summary form, several alternative proposals for an empirical bioethics. Some are strongly naturalistic, while others aim to provide empirical data only to correct or improve bioethical work. Most do not favor maintaining a complete separation between facts and values, between what is and what ought to be. With different nuances, these proposals of moderate naturalism make normative ethical judgments depend on social opinion, resulting in a certain social naturalism. Against these proposals, we argue that to build a bioethics that relates empirical facts to ethical duties, we must rediscover the empirical reality of human action. Only from it, and in particular from the discernment that practical reason carries out when judging the object of an action, is it possible to integrate merely descriptive facts with prescriptive ethical judgments. In conclusion, we think it is not possible to turn bioethics into a kind of empirical science, as this would be contrary to natural reason and lead to a sort of scientific reductionism. At the same time, we believe that empirical data are important in the development of bioethics and can enhance and improve the innate ability of human reason to discern the good. From this discernment, a bioethics could be developed from the perspective of the ethical agents themselves, avoiding the extremes of an excessive normative rationalism while accepting empirical data without falling into simple pragmatism.

  14. Confirmatory factor analysis of the Child Oral Health Impact Profile (Korean version).

    PubMed

    Cho, Young Il; Lee, Soonmook; Patton, Lauren L; Kim, Hae-Young

    2016-04-01

    Empirical support for the factor structure of the Child Oral Health Impact Profile (COHIP) has not been fully established. The purposes of this study were to evaluate the factor structure of the Korean version of the COHIP (COHIP-K) empirically using confirmatory factor analysis (CFA) based on the theoretical framework and then to assess whether any of the factors in the structure could be grouped into a simpler single second-order factor. Data were collected through self-reported COHIP-K responses from a representative community sample of 2,236 Korean children, 8-15 yr of age. Because a large inter-factor correlation of 0.92 was estimated in the original five-factor structure, the two strongly correlated factors were combined into one factor, resulting in a four-factor structure. The revised four-factor model showed a reasonable fit with appropriate inter-factor correlations. Additionally, the second-order model with four sub-factors was reasonable with sufficient fit and showed equal fit to the revised four-factor model. A cross-validation procedure confirmed the appropriateness of the findings. Our analysis empirically supported a four-factor structure of COHIP-K, a summarized second-order model, and the use of an integrated summary COHIP score. © 2016 Eur J Oral Sci.

  15. Role of local network oscillations in resting-state functional connectivity.

    PubMed

    Cabral, Joana; Hugues, Etienne; Sporns, Olaf; Deco, Gustavo

    2011-07-01

    Spatio-temporally organized low-frequency fluctuations (<0.1 Hz), observed in BOLD fMRI signal during rest, suggest the existence of underlying network dynamics that emerge spontaneously from intrinsic brain processes. Furthermore, significant correlations between distinct anatomical regions-or functional connectivity (FC)-have led to the identification of several widely distributed resting-state networks (RSNs). This slow dynamics seems to be highly structured by anatomical connectivity but the mechanism behind it and its relationship with neural activity, particularly in the gamma frequency range, remains largely unknown. Indeed, direct measurements of neuronal activity have revealed similar large-scale correlations, particularly in slow power fluctuations of local field potential gamma frequency range oscillations. To address these questions, we investigated neural dynamics in a large-scale model of the human brain's neural activity. A key ingredient of the model was a structural brain network defined by empirically derived long-range brain connectivity together with the corresponding conduction delays. A neural population, assumed to spontaneously oscillate in the gamma frequency range, was placed at each network node. When these oscillatory units are integrated in the network, they behave as weakly coupled oscillators. The time-delayed interaction between nodes is described by the Kuramoto model of phase oscillators, a biologically-based model of coupled oscillatory systems. For a realistic setting of axonal conduction speed, we show that time-delayed network interaction leads to the emergence of slow neural activity fluctuations, whose patterns correlate significantly with the empirically measured FC. The best agreement of the simulated FC with the empirically measured FC is found for a set of parameters where subsets of nodes tend to synchronize although the network is not globally synchronized. Inside such clusters, the simulated BOLD signal between nodes is found to be correlated, instantiating the empirically observed RSNs. Between clusters, patterns of positive and negative correlations are observed, as described in experimental studies. These results are found to be robust with respect to a biologically plausible range of model parameters. In conclusion, our model suggests how resting-state neural activity can originate from the interplay between the local neural dynamics and the large-scale structure of the brain. Copyright © 2011 Elsevier Inc. All rights reserved.
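
    A toy Euler integration of the model class described here (delayed Kuramoto phase oscillators on a weighted network); the network, delays, and parameter values are illustrative, not the empirically derived connectome.

    ```python
    import numpy as np

    def delayed_kuramoto(C, D, K=5.0, f=40.0, speed=6.0, T=2.0, dt=1e-4, seed=5):
        """Kuramoto phase oscillators coupled through a weight matrix C with
        conduction delays D/speed (D in mm, speed in m/s), each node oscillating
        near f Hz (gamma band). A minimal Euler sketch of the model class used
        in the record; parameter values are illustrative."""
        rng = np.random.default_rng(seed)
        N = len(C)
        delays = np.maximum((D * 1e-3 / speed / dt).astype(int), 1)  # in steps
        n = int(T / dt)
        theta = np.zeros((n, N))
        theta[0] = rng.uniform(0, 2 * np.pi, N)
        omega = 2 * np.pi * f
        for t in range(1, n):
            dth = np.full(N, omega)
            for i in range(N):
                for j in range(N):
                    if C[i, j] > 0:
                        # phase of j as seen by i, one conduction delay ago
                        past = theta[max(t - delays[i, j], 0), j]
                        dth[i] += K * C[i, j] * np.sin(past - theta[t - 1, i])
            theta[t] = theta[t - 1] + dt * dth
        return theta

    # toy 4-node network with random weights and 20-80 mm fibre lengths
    rng = np.random.default_rng(6)
    C = rng.uniform(0, 0.2, (4, 4))
    np.fill_diagonal(C, 0)
    D = rng.uniform(20, 80, (4, 4))
    theta = delayed_kuramoto(C, D)
    print("final order parameter:", abs(np.exp(1j * theta[-1]).mean()))
    ```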

  16. Improving Global Models of Remotely Sensed Ocean Chlorophyll Content Using Partial Least Squares and Geographically Weighted Regression

    NASA Astrophysics Data System (ADS)

    Gholizadeh, H.; Robeson, S. M.

    2015-12-01

    Empirical models have been widely used to estimate global chlorophyll content from remotely sensed data. Here, we focus on the standard NASA empirical models that use blue-green band ratios. These band ratio ocean color (OC) algorithms are in the form of fourth-order polynomials, and the parameters of these polynomials (i.e. coefficients) are estimated from the NASA bio-Optical Marine Algorithm Data set (NOMAD). Most of the points in this data set have been sampled from tropical and temperate regions. However, polynomial coefficients obtained from this data set are used to estimate chlorophyll content in all ocean regions with different properties such as sea-surface temperature, salinity, and downwelling/upwelling patterns. Further, the polynomial terms in these models are highly correlated. In sum, the limitations of these empirical models are as follows: 1) the independent variables within the empirical models, in their current form, are correlated (multicollinear), and 2) current algorithms are global approaches and are based on the spatial stationarity assumption, so they are independent of location. The multicollinearity problem is resolved using partial least squares (PLS). PLS, which transforms the data into a set of independent components, can be considered a combined form of principal component regression (PCR) and multiple regression. Geographically weighted regression (GWR) is also used to investigate the validity of the spatial stationarity assumption. GWR solves a regression model over each sample point by using the observations within its neighbourhood. PLS results show that the empirical method underestimates chlorophyll content in high latitudes, including the Southern Ocean region, when compared to PLS (see Figure 1). Cluster analysis of GWR coefficients also shows that the spatial stationarity assumption in empirical models is not likely a valid assumption.
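
    A small scikit-learn sketch of why PLS suits this setting: the polynomial terms of a band ratio are collinear by construction, and PLS regresses on a few orthogonal latent components instead. The data and coefficients below are synthetic stand-ins, not the NOMAD fit.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(7)
    n = 500
    # hypothetical stand-in for a log band ratio; its polynomial terms
    # x, x^2, x^3, x^4 are, by construction, highly collinear
    x = rng.normal(0, 0.3, n)
    X = np.column_stack([x, x**2, x**3, x**4])
    log_chl = 0.3 - 2.9 * x + 1.7 * x**2 - 0.6 * x**3 + rng.normal(0, 0.1, n)

    pls = PLSRegression(n_components=2)     # few orthogonal latent components
    pls.fit(X, log_chl)
    print("R^2 on training data:", pls.score(X, log_chl))
    ```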

  17. The Effect of Error Correlation on Interfactor Correlation in Psychometric Measurement

    ERIC Educational Resources Information Center

    Westfall, Peter H.; Henning, Kevin S. S.; Howell, Roy D.

    2012-01-01

    This article shows how interfactor correlation is affected by error correlations. Theoretical and practical justifications for error correlations are given, and a new equivalence class of models is presented to explain the relationship between interfactor correlation and error correlations. The class allows simple, parsimonious modeling of error…

  18. Working Memory and Intelligence Are Highly Related Constructs, but Why?

    ERIC Educational Resources Information Center

    Colom, Roberto; Abad, Francisco J.; Quiroga, M. Angeles; Shih, Pei Chun; Flores-Mendoza, Carmen

    2008-01-01

    Working memory and the general factor of intelligence (g) are highly related constructs. However, we still don't know why. Some models support the central role of simple short-term storage, whereas others appeal to executive functions like the control of attention. Nevertheless, the available empirical evidence does not suffice to get an answer,…

  19. Writing, Emotion, and the Brain: What Graduate School Taught Me about Healing.

    ERIC Educational Resources Information Center

    Brand, Alice G.

    The trajectory of an English professor's scholarly interests has always involved emotion. From the simple question she asked herself during her graduate study (what do we feel?), she moved to the beneficial psychological effects of writing, then onto empirically identifying the emotions involved in writing, to discussions of social emotions, to an…

  20. Clausius-Clapeyron Equation and Saturation Vapour Pressure: Simple Theory Reconciled with Practice

    ERIC Educational Resources Information Center

    Koutsoyiannis, Demetris

    2012-01-01

    While the Clausius-Clapeyron equation is very important as it determines the saturation vapour pressure, in practice it is replaced by empirical, typically Magnus-type, equations which are more accurate. It is shown that the reduced accuracy reflects an inconsistent assumption that the latent heat of vaporization is constant. Not only is this…
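    For a concrete comparison, the sketch below contrasts the constant-latent-heat integration of the Clausius-Clapeyron equation with a Magnus-type formula. The physical constants are standard textbook values; the specific Magnus coefficients are one common published choice, not necessarily those discussed in the article.

        import numpy as np

        Rv = 461.5            # gas constant for water vapour [J kg^-1 K^-1]
        L = 2.501e6           # latent heat of vaporization at 0 degC [J kg^-1]
        e0, T0 = 6.112, 273.15  # saturation pressure [hPa] at the triple point

        def es_clausius(T_c):
            """Clausius-Clapeyron with L held constant (the inconsistent assumption)."""
            T = T_c + 273.15
            return e0 * np.exp(L / Rv * (1.0 / T0 - 1.0 / T))

        def es_magnus(T_c):
            """Magnus-type empirical formula (one common coefficient set)."""
            return 6.112 * np.exp(17.62 * T_c / (243.12 + T_c))

        # The two agree at 0 degC and drift apart toward the extremes
        for t in (-20.0, 0.0, 20.0, 40.0):
            print(f"{t:6.1f} C  CC: {es_clausius(t):8.3f} hPa"
                  f"  Magnus: {es_magnus(t):8.3f} hPa")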

  1. Stress Management Strategies of Secondary School Teachers in Nigeria. Short Report

    ERIC Educational Resources Information Center

    Arikewuyo, M. Olalekan

    2004-01-01

    The study provides empirical evidence for the management of stress by teachers of secondary schools in Nigeria. A total of 3466 teachers, drawn from secondary schools in Ogun State of Nigeria, returned their questionnaires for the study. Data were analysed using simple percentages and chi-square tests. The findings indicate that teachers frequently use…

  2. A simple model for pollen-parent fecundity distributions in bee-pollinated forage legume polycrosses

    USDA-ARS?s Scientific Manuscript database

    Random mating or panmixis is a fundamental assumption in quantitative genetic theory. Random mating is sometimes thought to occur in actual fact although a large body of empirical work shows that this is often not the case in nature. Models have been developed to model many non-random mating phenome...

  3. Fire spread characteristics determined in the laboratory

    Treesearch

    Richard C. Rothermel; Hal E. Anderson

    1966-01-01

    Fuel beds of ponderosa pine needles and white pine needles were burned under controlled environmental conditions to determine the effects of fuel moisture and windspeed upon the rate of fire spread. Empirical formulas are presented to show the effect of these parameters. A discussion of rate of spread and some simple experiments show how fuel may be preheated before...

  4. Electronic Structure in Pi Systems: Part I. Huckel Theory with Electron Repulsion.

    ERIC Educational Resources Information Center

    Fox, Marye Anne; Matsen, F. A.

    1985-01-01

    Pi-CI theory is a simple, semi-empirical procedure which (like Huckel theory) treats pi and pseudo-pi orbitals; in addition, electron repulsion is explicitly included and molecular configurations are mixed. Results obtained from application of pi-CI to ethylene are superior to those of either the Huckel molecular orbital or valence bond theories. (JN)

  5. Representative equations for the thermodynamic and transport properties of fluids near the gas-liquid critical point

    NASA Technical Reports Server (NTRS)

    Sengers, J. V.; Basu, R. S.; Sengers, J. M. H. L.

    1981-01-01

    A survey is presented of representative equations for various thermophysical properties of fluids in the critical region. Representative equations for the transport properties are included. Semi-empirical modifications of the theoretically predicted asymptotic critical behavior that yield simple and practical representations of the fluid properties in the critical region are emphasized.

  6. Predicting the Total Abundance of Resident Salmonids within the Willamette River Basin, Oregon - a Macroecological Modeling Approach

    EPA Science Inventory

    I present a simple, macroecological model of fish abundance that was used to estimate the total number of non-migratory salmonids within the Willamette River Basin (western Oregon). The model begins with empirical point estimates of net primary production (NPP in g C/m2) in fore...

  7. An Empirical Method for Determining 234U Percentage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miko, David K.

    2015-11-02

    When isotopic information for uranium is provided, the concentration of 234U is frequently neglected. Often the isotopic content is given as a percentage of 235U with the assumption that the remainder consists of 238U. In certain applications, such as heat output, the concentration of 234U can be a significant contributing factor. For situations where only the 235U and 238U values are given, a simple way to calculate the 234U component would be beneficial. The approach taken here is empirical. A series of uranium standards with varying enrichments were analyzed. The 234U and 235U data were fit using a second-order polynomial.
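    A minimal sketch of the stated approach, fitting 234U content as a second-order polynomial in 235U enrichment. The calibration points below are hypothetical placeholders, since the report's standards data are not reproduced here.

        import numpy as np

        # Hypothetical calibration pairs (wt% 235U, wt% 234U); stand-ins for
        # the uranium standards analyzed in the report.
        u235 = np.array([0.7, 1.5, 3.0, 5.0, 10.0, 20.0])
        u234 = np.array([0.005, 0.012, 0.026, 0.045, 0.09, 0.19])

        coef = np.polyfit(u235, u234, deg=2)   # second-order polynomial fit
        predict_u234 = np.poly1d(coef)

        print("fit coefficients (a2, a1, a0):", coef)
        print("estimated 234U at 4.5 wt% 235U:", predict_u234(4.5))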

  8. [Mobbing: a meta-analysis and integrative model of its antecedents and consequences].

    PubMed

    Topa Cantisano, Gabriela; Depolo, Marco; Morales Domínguez, J Francisco

    2007-02-01

    Although mobbing has been extensively studied, empirical research has not led to firm conclusions regarding its antecedents and consequences at both the personal and organizational levels. An extensive literature search yielded 86 empirical studies with 93 samples. The correlation matrix obtained through meta-analytic techniques was used to test a structural equation model. Results supported hypotheses regarding organizational environmental factors as main predictors of mobbing.

  9. Neural correlates of the difference between working memory speed and simple sensorimotor speed: an fMRI study.

    PubMed

    Takeuchi, Hikaru; Sugiura, Motoaki; Sassa, Yuko; Sekiguchi, Atsushi; Yomogida, Yukihito; Taki, Yasuyuki; Kawashima, Ryuta

    2012-01-01

    The difference between the speed of simple cognitive processes and the speed of complex cognitive processes has various psychological correlates. However, the neural correlates of this difference have not yet been investigated. In this study, we focused on working memory (WM) for typical complex cognitive processes. Functional magnetic resonance imaging data were acquired during the performance of an N-back task, which is a measure of WM for typical complex cognitive processes. In our N-back task, task speed and memory load were varied to identify the neural correlates responsible for the difference between the speed of simple cognitive processes (estimated from the 0-back task) and the speed of WM. Our findings showed that this difference was characterized by the increased activation in the right dorsolateral prefrontal cortex (DLPFC) and the increased functional interaction between the right DLPFC and right superior parietal lobe. Furthermore, the local gray matter volume of the right DLPFC was correlated with participants' accuracy during fast WM tasks, which in turn correlated with a psychometric measure of participants' intelligence. Our findings indicate that the right DLPFC and its related network are responsible for the execution of the fast cognitive processes involved in WM. Identified neural bases may underlie the psychometric differences between the speed with which subjects perform simple cognitive tasks and the speed with which subjects perform more complex cognitive tasks, and explain the previous traditional psychological findings.

  10. Complex Sentence Comprehension and Working Memory in Children With Specific Language Impairment

    PubMed Central

    Montgomery, James W.; Evans, Julia L.

    2015-01-01

    Purpose: This study investigated the association of 2 mechanisms of working memory (phonological short-term memory [PSTM], attentional resource capacity/allocation) with the sentence comprehension of school-age children with specific language impairment (SLI) and 2 groups of control children. Method: Twenty-four children with SLI, 18 age-matched (CA) children, and 16 language- and memory-matched (LMM) children completed a nonword repetition task (PSTM), the competing language processing task (CLPT; resource capacity/allocation), and a sentence comprehension task comprising complex and simple sentences. Results: (1) The SLI group performed worse than the CA group on each memory task; (2) all 3 groups showed comparable simple sentence comprehension, but for complex sentences, the SLI and LMM groups performed worse than the CA group; (3) for the SLI group, (a) CLPT correlated with complex sentence comprehension, and (b) nonword repetition correlated with simple sentence comprehension; (4) for CA children, neither memory variable correlated with either sentence type; and (5) for LMM children, only CLPT correlated with complex sentences. Conclusions: Comprehension of both complex and simple grammar by school-age children with SLI is a mentally demanding activity, requiring significant working memory resources. PMID:18723601

  11. Re-evaluating the link between brain size and behavioural ecology in primates.

    PubMed

    Powell, Lauren E; Isler, Karin; Barton, Robert A

    2017-10-25

    Comparative studies have identified a wide range of behavioural and ecological correlates of relative brain size, with results differing between taxonomic groups, and even within them. In primates for example, recent studies contradict one another over whether social or ecological factors are critical. A basic assumption of such studies is that with sufficiently large samples and appropriate analysis, robust correlations indicative of selection pressures on cognition will emerge. We carried out a comprehensive re-examination of correlates of primate brain size using two large comparative datasets and phylogenetic comparative methods. We found evidence in both datasets for associations between brain size and ecological variables (home range size, diet and activity period), but little evidence for an effect of social group size, a correlation which has previously formed the empirical basis of the Social Brain Hypothesis. However, reflecting divergent results in the literature, our results exhibited instability across datasets, even when they were matched for species composition and predictor variables. We identify several potential empirical and theoretical difficulties underlying this instability and suggest that these issues raise doubts about inferring cognitive selection pressures from behavioural correlates of brain size. © 2017 The Author(s).

  12. Empirical prediction of peak pressure levels in anthropogenic impulsive noise. Part I: Airgun arrays signals.

    PubMed

    Galindo-Romero, Marta; Lippert, Tristan; Gavrilov, Alexander

    2015-12-01

    This paper presents an empirical linear equation to predict peak pressure level of anthropogenic impulsive signals based on its correlation with the sound exposure level. The regression coefficients are shown to be weakly dependent on the environmental characteristics but governed by the source type and parameters. The equation can be applied to values of the sound exposure level predicted with a numerical model, which provides a significant improvement in the prediction of the peak pressure level. Part I presents the analysis for airgun arrays signals, and Part II considers the application of the empirical equation to offshore impact piling noise.
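    A minimal sketch of the idea, assuming synthetic data: regress peak pressure level on sound exposure level, then apply the fitted line to an SEL value of the kind a propagation model would supply. The coefficients and units here are illustrative, not those of the paper.

        import numpy as np

        rng = np.random.default_rng(2)
        sel = rng.uniform(140.0, 190.0, 200)   # measured SEL [dB re 1 uPa^2 s]
        # Synthetic peak levels with a linear dependence on SEL plus scatter
        lpk = 1.1 * sel + 25.0 + rng.normal(0, 1.5, 200)

        slope, intercept = np.polyfit(sel, lpk, 1)   # empirical linear equation
        print(f"Lpk ~= {slope:.2f} * SEL + {intercept:.1f}")

        sel_modelled = 172.0   # e.g. SEL predicted by a propagation model
        print("predicted Lpk:", slope * sel_modelled + intercept)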

  13. Developmental Associations between Short-Term Variability and Long-Term Changes: Intraindividual Correlation of Positive and Negative Affect in Daily Life and Cognitive Aging

    ERIC Educational Resources Information Center

    Hülür, Gizem; Hoppmann, Christiane A.; Ram, Nilam; Gerstorf, Denis

    2015-01-01

    Conceptual notions and empirical evidence suggest that the intraindividual correlation (iCorr) of positive affect (PA) and negative affect (NA) is a meaningful characteristic of affective functioning. PA and NA are typically negatively correlated within-person. Previous research has found that the iCorr of PA and NA is relatively stable over time…

  14. Limits of the memory coefficient in measuring correlated bursts

    NASA Astrophysics Data System (ADS)

    Jo, Hang-Hyun; Hiraoka, Takayuki

    2018-03-01

    Temporal inhomogeneities in event sequences of natural and social phenomena have been characterized in terms of interevent times and correlations between interevent times. The inhomogeneities of interevent times have been extensively studied, while the correlations between interevent times, often called correlated bursts, are far from being fully understood. For measuring the correlated bursts, two relevant approaches were suggested, i.e., memory coefficient and burst size distribution. Here a burst size denotes the number of events in a bursty train detected for a given time window. Empirical analyses have revealed that the larger memory coefficient tends to be associated with the heavier tail of the burst size distribution. In particular, empirical findings in human activities appear inconsistent, such that the memory coefficient is close to 0, while burst size distributions follow a power law. In order to comprehend these observations, by assuming the conditional independence between consecutive interevent times, we derive the analytical form of the memory coefficient as a function of parameters describing interevent time and burst size distributions. Our analytical result can explain the general tendency of the larger memory coefficient being associated with the heavier tail of burst size distribution. We also find that the apparently inconsistent observations in human activities are compatible with each other, indicating that the memory coefficient has limits to measure the correlated bursts.
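    The following sketch computes the two quantities discussed above for a synthetic event sequence: the memory coefficient (the Pearson correlation between consecutive interevent times, in the Goh-Barabási style) and burst sizes for a given detection window. The heavy-tailed interevent-time generator is an assumption for illustration.

        import numpy as np

        def memory_coefficient(iet):
            # Pearson correlation between consecutive interevent times
            a, b = iet[:-1], iet[1:]
            return ((a - a.mean()) * (b - b.mean())).mean() / (a.std() * b.std())

        def burst_sizes(iet, window):
            # Consecutive events whose interevent time is at most the time
            # window belong to the same bursty train.
            sizes, current = [], 1
            for tau in iet:
                if tau <= window:
                    current += 1
                else:
                    sizes.append(current)
                    current = 1
            sizes.append(current)
            return np.array(sizes)

        rng = np.random.default_rng(3)
        iet = rng.pareto(2.0, 10000) + 1.0       # heavy-tailed interevent times
        print("M =", memory_coefficient(iet))    # near 0 for i.i.d. times
        print("largest burst:", burst_sizes(iet, window=1.5).max())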

  15. A simple method for the extraction and identification of light density microplastics from soil.

    PubMed

    Zhang, Shaoliang; Yang, Xiaomei; Gertsen, Hennie; Peters, Piet; Salánki, Tamás; Geissen, Violette

    2018-03-01

    This article introduces a simple and cost-saving method developed to extract, distinguish and quantify light density microplastics of polyethylene (PE) and polypropylene (PP) in soil. A floatation method using distilled water was used to extract the light density microplastics from soil samples. Microplastics and impurities were identified using a heating method (3-5 s at 130°C). The number and size of particles were determined using a camera (Leica DFC 425) connected to a microscope (Leica wild M3C, Type S, simple light, 6.4×). Quantification of the microplastics was conducted using a developed model. Results showed that the floatation method was effective in extracting microplastics from soils, with recovery rates of approximately 90%. After being exposed to heat, the microplastics in the soil samples melted and were transformed into circular transparent particles, while other impurities, such as organic matter and silicates, were not changed by the heat. Regression analysis of microplastic weight against particle volume (calculated using ImageJ software) after heating showed the best fit (y = 1.14x + 0.46, R² = 99%, p < 0.001). Recovery rates based on the empirical model method were >80%. Results from field samples collected from North-western China show that our method of repetitive floatation and heating can be used to extract, distinguish and quantify light density polyethylene microplastics in soils. Microplastic mass can be evaluated using the empirical model. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Empirical analysis of web-based user-object bipartite networks

    NASA Astrophysics Data System (ADS)

    Shang, Ming-Sheng; Lü, Linyuan; Zhang, Yi-Cheng; Zhou, Tao

    2010-05-01

    Understanding the structure and evolution of web-based user-object networks is a significant task since they play a crucial role in e-commerce nowadays. This letter reports an empirical analysis of two large-scale web sites, audioscrobbler.com and del.icio.us, where users are connected with music groups and bookmarks, respectively. The degree distributions and degree-degree correlations for both users and objects are reported. We propose a new index, named collaborative similarity, to quantify the diversity of tastes based on collaborative selection. Accordingly, the correlation between degree and selection diversity is investigated. We report some novel phenomena well characterizing the selection mechanism of web users and outline the relevance of these phenomena to the information recommendation problem.

  17. Sexual orientation beliefs: their relationship to anti-gay attitudes and biological determinist arguments.

    PubMed

    Hegarty, P; Pratto, F

    2001-01-01

    Previous studies which have measured beliefs about sexual orientation with either a single item, or a one-dimensional scale are discussed. In the present study beliefs were observed to vary along two dimensions: the "immutability" of sexual orientation and the "fundamentality" of a categorization of persons as heterosexuals and homosexuals. While conceptually related, these two dimensions were empirically distinct on several counts. They were negatively correlated with each other. Condemning attitudes toward lesbians and gay men were correlated positively with fundamentality but negatively with immutability. Immutability, but not fundamentality, affected the assimilation of a biological determinist argument. The relationship between sexual orientation beliefs and anti-gay prejudice is discussed and suggestions for empirical studies of sexual orientation beliefs are presented.

  18. Heterogeneity of Purkinje cell simple spike-complex spike interactions: zebrin- and non-zebrin-related variations.

    PubMed

    Tang, Tianyu; Xiao, Jianqiang; Suh, Colleen Y; Burroughs, Amelia; Cerminara, Nadia L; Jia, Linjia; Marshall, Sarah P; Wise, Andrew K; Apps, Richard; Sugihara, Izumi; Lang, Eric J

    2017-08-01

    Cerebellar Purkinje cells (PCs) generate two types of action potentials, simple and complex spikes. Although they are generated by distinct mechanisms, interactions between the two spike types exist. Zebrin staining produces alternating positive and negative stripes of PCs across most of the cerebellar cortex. Thus, here we compared simple spike-complex spike interactions both within and across zebrin populations. Simple spike activity undergoes a complex modulation preceding and following a complex spike. The amplitudes of the pre- and post-complex spike modulation phases were correlated across PCs. On average, the modulation was larger for PCs in zebrin positive regions. Correlations between aspects of the complex spike waveform and simple spike activity were found, some of which varied between zebrin positive and negative PCs. The implications of the results are discussed with regard to hypotheses that complex spikes are triggered by rises in simple spike activity for either motor learning or homeostatic functions. Purkinje cells (PCs) generate two types of action potentials, called simple and complex spikes (SSs and CSs). We first investigated the CS-associated modulation of SS activity and its relationship to the zebrin status of the PC. The modulation pattern consisted of a pre-CS rise in SS activity, and then, following the CS, a pause, a rebound, and finally a late inhibition of SS activity for both zebrin positive (Z+) and negative (Z-) cells, though the amplitudes of the phases were larger in Z+ cells. Moreover, across PCs, the amplitude of the pre-CS rise was correlated with that of the late inhibitory phase of the modulation. In contrast, correlations between modulation phases across CSs of individual PCs were generally weak. Next, the relationship between CS spikelets and SS activity was investigated. The number of spikelets/CS correlated with the average SS firing rate only for Z+ cells. In contrast, correlations across CSs between spikelet numbers and the amplitudes of the SS modulation phases were generally weak. Division of spikelets into likely axonally propagated and non-propagated groups (based on their interspikelet interval) showed that the correlation of spikelet number with SS firing rate primarily reflected a relationship with non-propagated spikelets. In sum, the results show both zebrin-related and non-zebrin-related physiological heterogeneity in SS-CS interactions among PCs, which suggests that the cerebellar cortex is more functionally diverse than is assumed by standard theories of cerebellar function. © 2017 The Authors. The Journal of Physiology © 2017 The Physiological Society.

  19. The Nature of Procrastination: A Meta-Analytic and Theoretical Review of Quintessential Self-Regulatory Failure

    ERIC Educational Resources Information Center

    Steel, Piers

    2007-01-01

    Procrastination is a prevalent and pernicious form of self-regulatory failure that is not entirely understood. Hence, the relevant conceptual, theoretical, and empirical work is reviewed, drawing upon correlational, experimental, and qualitative findings. A meta-analysis of procrastination's possible causes and effects, based on 691 correlations,…

  20. Introducing Scale Analysis by Way of a Pendulum

    ERIC Educational Resources Information Center

    Lira, Ignacio

    2007-01-01

    Empirical correlations are a practical means of providing approximate answers to problems in physics whose exact solution is otherwise difficult to obtain. The correlations relate quantities that are deemed to be important in the physical situation to which they apply, and can be derived from experimental data by means of dimensional and/or scale…

  1. Success Avoidant Motivation and Behavior; Its Development Correlates and Situational Determinants. Final Report.

    ERIC Educational Resources Information Center

    Horner, Matina S.

    This paper reports on a successful attempt to understand success avoidant motivation and behavior by the development of an empirically sophisticated scoring system of success avoidant motivation and the observation of its behavioral correlates and situational determinants. Like most of the work on achievement motivation, the study was carried out…

  2. Large-Scale Studies on the Transferability of General Problem-Solving Skills and the Pedagogic Potential of Physics

    ERIC Educational Resources Information Center

    Mashood, K. K.; Singh, Vijay A.

    2013-01-01

    Research suggests that problem-solving skills are transferable across domains. This claim, however, needs further empirical substantiation. We suggest correlation studies as a methodology for making preliminary inferences about transfer. The correlation of the physics performance of students with their performance in chemistry and mathematics in…

  3. Correlates of the MMPI-2-RF in a College Setting

    ERIC Educational Resources Information Center

    Forbey, Johnathan D.; Lee, Tayla T. C.; Handel, Richard W.

    2010-01-01

    The current study examined empirical correlates of scores on Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF; A. Tellegen & Y. S. Ben-Porath, 2008; Y. S. Ben-Porath & A. Tellegen, 2008) scales in a college setting. The MMPI-2-RF and six criterion measures (assessing anger, assertiveness, sex roles, cognitive…

  4. Non-empirical exchange-correlation parameterizations based on exact conditions from correlated orbital theory.

    PubMed

    Haiduke, Roberto Luiz A; Bartlett, Rodney J

    2018-05-14

    Some of the exact conditions provided by the correlated orbital theory are employed to propose new non-empirical parameterizations for exchange-correlation functionals from Density Functional Theory (DFT). This reparameterization process is based on range-separated functionals with 100% exact exchange for long-range interelectronic interactions. The functionals developed here, CAM-QTP-02 and LC-QTP, show mitigated self-interaction error, correctly predict vertical ionization potentials as the negative of eigenvalues for occupied orbitals, and provide nice excitation energies, even for challenging charge-transfer excited states. Moreover, some improvements are observed for reaction barrier heights with respect to the other functionals belonging to the quantum theory project (QTP) family. Finally, the most important achievement of these new functionals is an excellent description of vertical electron affinities (EAs) of atoms and molecules as the negative of appropriate virtual orbital eigenvalues. In this case, the mean absolute deviations for EAs in molecules are smaller than 0.10 eV, showing that physical interpretation can indeed be ascribed to some unoccupied orbitals from DFT.

  5. Non-empirical exchange-correlation parameterizations based on exact conditions from correlated orbital theory

    NASA Astrophysics Data System (ADS)

    Haiduke, Roberto Luiz A.; Bartlett, Rodney J.

    2018-05-01

    Some of the exact conditions provided by the correlated orbital theory are employed to propose new non-empirical parameterizations for exchange-correlation functionals from Density Functional Theory (DFT). This reparameterization process is based on range-separated functionals with 100% exact exchange for long-range interelectronic interactions. The functionals developed here, CAM-QTP-02 and LC-QTP, show mitigated self-interaction error, correctly predict vertical ionization potentials as the negative of eigenvalues for occupied orbitals, and provide nice excitation energies, even for challenging charge-transfer excited states. Moreover, some improvements are observed for reaction barrier heights with respect to the other functionals belonging to the quantum theory project (QTP) family. Finally, the most important achievement of these new functionals is an excellent description of vertical electron affinities (EAs) of atoms and molecules as the negative of appropriate virtual orbital eigenvalues. In this case, the mean absolute deviations for EAs in molecules are smaller than 0.10 eV, showing that physical interpretation can indeed be ascribed to some unoccupied orbitals from DFT.

  6. Disorders without borders: current and future directions in the meta-structure of mental disorders.

    PubMed

    Carragher, Natacha; Krueger, Robert F; Eaton, Nicholas R; Slade, Tim

    2015-03-01

    Classification is the cornerstone of clinical diagnostic practice and research. However, the extant psychiatric classification systems are not well supported by research evidence. In particular, extensive comorbidity among putatively distinct disorders flags an urgent need for fundamental changes in how we conceptualize psychopathology. Over the past decade, research has coalesced on an empirically based model that suggests many common mental disorders are structured according to two correlated latent dimensions: internalizing and externalizing. We review and discuss the development of a dimensional-spectrum model which organizes mental disorders in an empirically based manner. We also touch upon changes in the DSM-5 and put forward recommendations for future research endeavors. Our review highlights substantial empirical support for the empirically based internalizing-externalizing model of psychopathology, which provides a parsimonious means of addressing comorbidity. As future research goals, we suggest that the field would benefit from: expanding the meta-structure of psychopathology to include additional disorders, development of empirically based thresholds, inclusion of a developmental perspective, and intertwining genomic and neuroscience dimensions with the empirical structure of psychopathology.

  7. Causes of coal-miner absenteeism. Information Circular/1987

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peters, R.H.; Randolph, R.F.

    The Bureau of Mines report describes several significant problems associated with absenteeism among underground coal miners. The vast empirical literature on employee absenteeism is reviewed, and a conceptual model of the factors that cause absenteeism among miners is presented. Portions of the model were empirically tested by performing correlational and multiple regression analyses on data collected from a group of 64 underground coal miners. The results of these tests are presented and discussed.

  8. GPS-Derived Precipitable Water Compared with the Air Force Weather Agency’s MM5 Model Output

    DTIC Science & Technology

    2002-03-26

    and fewer than 100 sensors are available throughout Europe. While the receiver density is currently comparable to the upper-air sounding network... profiles from 38 upper-air sites throughout Europe. Based on these empirical formulae and simplifications, Bevis (1992) determined that the error... Alaska using Bevis' (1992) empirical correlation based on 8718 radiosonde calculations over 2 years. Other studies have been conducted in Europe and

  9. Perceived sexual harassment at work: meta-analysis and structural model of antecedents and consequences.

    PubMed

    Topa Cantisano, Gabriela; Morales Domínguez, J F; Depolo, Marco

    2008-05-01

    Although sexual harassment has been extensively studied, empirical research has not led to firm conclusions about its antecedents and consequences at both the personal and organizational levels. An extensive literature search yielded 42 empirical studies with 60 samples. The correlation matrix obtained through meta-analytic techniques was used to test a structural equation model. Results supported the hypotheses regarding organizational environmental factors as main predictors of harassment.

  10. Do foreign exchange and equity markets co-move in Latin American region? Detrended cross-correlation approach

    NASA Astrophysics Data System (ADS)

    Bashir, Usman; Yu, Yugang; Hussain, Muntazir; Zebende, Gilney F.

    2016-11-01

    This paper investigates the dynamics of the relationship between foreign exchange markets and stock markets through time-varying co-movements. To this end, we analyzed monthly time series for Latin American countries over the period 1991 to 2015. Furthermore, we apply Granger causality to verify the direction of causality between foreign exchange and stock markets, and the detrended cross-correlation coefficient (ρDCCA) to detect co-movements at different time scales. Our empirical results suggest a positive cross-correlation between exchange rates and stock prices for all Latin American countries. The findings reveal two clear patterns of correlation. First, Brazil and Argentina show positive correlation in both short and long time frames. Second, the remaining countries are negatively correlated at shorter time scales, gradually moving to positive. This paper contributes to the field in three ways. First, we verified the co-movements of exchange rates and stock prices, which were rarely discussed in previous empirical studies. Second, the ρDCCA coefficient is a robust and powerful methodology for measuring cross-correlation when dealing with non-stationary time series. Third, most previous studies employed one or two time scales using co-integration and vector autoregressive approaches, so little is known about co-movements at varying time scales between foreign exchange and stock markets; the ρDCCA coefficient facilitates this understanding.
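    A simplified sketch of the ρDCCA coefficient used above: the detrended covariance of the two integrated series divided by the product of their DFA fluctuation functions, evaluated box-wise with linear detrending. The synthetic series with a shared component are placeholders for the exchange-rate and stock-index data.

        import numpy as np

        def _detrended_residuals(profile, n):
            # Split the profile into boxes of length n and remove a linear
            # trend from each box.
            n_boxes = len(profile) // n
            res = []
            x = np.arange(n)
            for b in range(n_boxes):
                seg = profile[b * n:(b + 1) * n]
                trend = np.polyval(np.polyfit(x, seg, 1), x)
                res.append(seg - trend)
            return np.concatenate(res)

        def rho_dcca(x, y, n):
            px, py = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())  # profiles
            rx = _detrended_residuals(px, n)
            ry = _detrended_residuals(py, n)
            f2_xy = (rx * ry).mean()          # detrended covariance
            f_x = np.sqrt((rx ** 2).mean())   # DFA fluctuation of x
            f_y = np.sqrt((ry ** 2).mean())   # DFA fluctuation of y
            return f2_xy / (f_x * f_y)

        rng = np.random.default_rng(4)
        common = rng.normal(size=3000)
        fx = common + rng.normal(size=3000)      # e.g. exchange-rate returns
        stock = common + rng.normal(size=3000)   # e.g. stock-index returns
        for n in (8, 32, 128):                   # different time scales
            print(f"rho_DCCA(n={n}) = {rho_dcca(fx, stock, n):.3f}")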

  11. Quantitative genetic versions of Hamilton's rule with empirical applications

    PubMed Central

    McGlothlin, Joel W.; Wolf, Jason B.; Brodie, Edmund D.; Moore, Allen J.

    2014-01-01

    Hamilton's theory of inclusive fitness revolutionized our understanding of the evolution of social interactions. Surprisingly, an incorporation of Hamilton's perspective into the quantitative genetic theory of phenotypic evolution has been slow, despite the popularity of quantitative genetics in evolutionary studies. Here, we discuss several versions of Hamilton's rule for social evolution from a quantitative genetic perspective, emphasizing its utility in empirical applications. Although evolutionary quantitative genetics offers methods to measure each of the critical parameters of Hamilton's rule, empirical work has lagged behind theory. In particular, we lack studies of selection on altruistic traits in the wild. Fitness costs and benefits of altruism can be estimated using a simple extension of phenotypic selection analysis that incorporates the traits of social interactants. We also discuss the importance of considering the genetic influence of the social environment, or indirect genetic effects (IGEs), in the context of Hamilton's rule. Research in social evolution has generated an extensive body of empirical work focusing—with good reason—almost solely on relatedness. We argue that quantifying the roles of social and non-social components of selection and IGEs, in addition to relatedness, is now timely and should provide unique additional insights into social evolution. PMID:24686930
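    As a compact reference for the rule discussed above, one classic phenotypic form and one selection-gradient form can be written as follows. The notation (β_N and β_S for the nonsocial and social selection gradients, r for relatedness) is a gloss on this literature, not a quotation from the paper.

        % Classic form: relatedness-weighted benefit must exceed cost
        rb - c > 0

        % Selection-gradient form for phenotypic selection analyses that
        % include the traits of social interactants (notation assumed here):
        % \beta_N = nonsocial (direct) selection gradient,
        % \beta_S = social selection gradient, r = relatedness
        \beta_N + r\,\beta_S > 0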

  12. A study of a diffusive model of asset returns and an empirical analysis of financial markets

    NASA Astrophysics Data System (ADS)

    Alejandro Quinones, Angel Luis

    A diffusive model for market dynamics is studied and the predictions of the model are compared to real financial markets. The model has a non-constant diffusion coefficient which depends both on the asset value and the time. A general solution for the distribution of returns is obtained and shown to match the results of computer simulations for two simple cases, piecewise linear and quadratic diffusion. The effects of discreteness in the market dynamics on the model are also studied. For the quadratic diffusion case, a type of phase transition leading to fat tails is observed as the discrete distribution approaches the continuum limit. It is also found that the model captures some of the empirical stylized facts observed in real markets, including fat-tails and scaling behavior in the distribution of returns. An analysis of empirical data for the EUR/USD currency exchange rate and the S&P 500 index is performed. Both markets show time scaling behavior consistent with a value of 1/2 for the Hurst exponent. Finally, the results show that the distribution of returns for the two markets is well fitted by the model, and the corresponding empirical diffusion coefficients are determined.
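    A minimal sketch of a diffusive model of this family, assuming a piecewise-linear, value-dependent diffusion coefficient plus weak mean reversion (both assumptions for illustration; the thesis studies piecewise-linear and quadratic diffusion in detail): Euler-Maruyama integration followed by a crude fat-tail check.

        import numpy as np

        rng = np.random.default_rng(8)
        dt, steps = 1e-3, 200000
        a, b, kappa = 0.5, 1.0, 1.0   # diffusion parameters, mean reversion

        x, xs = 0.0, np.empty(steps)
        for t in range(steps):
            sigma = a + b * abs(x)    # value-dependent diffusion coefficient
            x += -kappa * x * dt + sigma * np.sqrt(dt) * rng.normal()
            xs[t] = x

        r = np.diff(xs[::100])        # coarse-grained "returns"
        kurt = ((r - r.mean()) ** 4).mean() / r.var() ** 2 - 3
        print("excess kurtosis:", round(kurt, 2))   # > 0 signals fat tails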

  13. Towards Validation of an Adaptive Flight Control Simulation Using Statistical Emulation

    NASA Technical Reports Server (NTRS)

    He, Yuning; Lee, Herbert K. H.; Davies, Misty D.

    2012-01-01

    Traditional validation of flight control systems is based primarily upon empirical testing. Empirical testing is sufficient for simple systems in which (a) the behavior is approximately linear and (b) humans are in-the-loop and responsible for off-nominal flight regimes. A different possible concept of operation is to use adaptive flight control systems with online learning neural networks (OLNNs) in combination with a human pilot for off-nominal flight behavior (such as when a plane has been damaged). Validating these systems is difficult because the controller is changing during the flight in a nonlinear way, and because the pilot and the control system have the potential to co-adapt in adverse ways; traditional empirical methods are unlikely to provide any guarantees in this case. Additionally, the time it takes to find unsafe regions within the flight envelope using empirical testing means that the time between adaptive controller design iterations is large. This paper describes a new concept for validating adaptive control systems using methods based on Bayesian statistics. This validation framework allows the analyst to build nonlinear models with modal behavior, and to have an uncertainty estimate for the difference between the behaviors of the model and system under test.

  14. Evaluation of the existing triple point path models with new experimental data: proposal of an original empirical formulation

    NASA Astrophysics Data System (ADS)

    Boutillier, J.; Ehrhardt, L.; De Mezzo, S.; Deck, C.; Magnan, P.; Naz, P.; Willinger, R.

    2018-03-01

    With the increasing use of improvised explosive devices (IEDs), the need for better mitigation, either for building integrity or for personal security, increases in importance. Before focusing on the interaction of the shock wave with a target and the potential associated damage, knowledge must be acquired regarding the nature of the blast threat, i.e., the pressure-time history. This requirement motivates gaining further insight into the triple point (TP) path, in order to know precisely which regime the target will encounter (simple reflection or Mach reflection). Within this context, the purpose of this study is to evaluate three existing TP path empirical models, which in turn are used in other empirical models for the determination of the pressure profile. These three TP models are the empirical function of Kinney, the Unified Facilities Criteria (UFC) curves, and the model of the Natural Resources Defense Council (NRDC). As discrepancies are observed between these models, new experimental data were obtained to test their reliability and a new promising formulation is proposed for scaled heights of burst ranging from 24.6-172.9 cm/kg^{1/3}.

  15. Statistical Mechanics of the US Supreme Court

    NASA Astrophysics Data System (ADS)

    Lee, Edward D.; Broedersz, Chase P.; Bialek, William

    2015-07-01

    We build simple models for the distribution of voting patterns in a group, using the Supreme Court of the United States as an example. The maximum entropy model consistent with the observed pairwise correlations among justices' votes, an Ising spin glass, agrees quantitatively with the data. While all correlations (perhaps surprisingly) are positive, the effective pairwise interactions in the spin glass model have both signs, recovering the intuition that ideologically opposite justices negatively influence each other. Despite the competing interactions, a strong tendency toward unanimity emerges from the model, organizing the voting patterns in a relatively simple "energy landscape." Besides unanimity, other energy minima in this landscape, or maxima in probability, correspond to prototypical voting states, such as the ideological split or a tightly correlated, conservative core. The model correctly predicts the correlation of justices with the majority and gives us a measure of their influence on the majority decision. These results suggest that simple models, grounded in statistical physics, can capture essential features of collective decision making quantitatively, even in a complex political context.
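    The following sketch fits a pairwise maximum entropy (Ising) model to synthetic votes for a nine-member "court" by exact enumeration of the 2^9 states and gradient ascent on the likelihood. The vote generator and learning settings are assumptions, not the Court data or the authors' procedure; with such data the fitted couplings typically take both signs even though the raw correlations are positive.

        import numpy as np
        from itertools import product

        rng = np.random.default_rng(5)
        n = 9
        ideology = np.linspace(-1, 1, n)
        common = rng.normal(size=(5000, 1))   # shared pull toward unanimity
        votes = np.sign(1.5 * common + ideology + rng.normal(size=(5000, n)))

        m_data = votes.mean(axis=0)               # <s_i> from data
        c_data = votes.T @ votes / len(votes)     # <s_i s_j> from data

        # All 2^n vote patterns (s_i = +/-1), enumerable for small n
        states = np.array(list(product([-1, 1], repeat=n)))
        h = np.zeros(n)
        J = np.zeros((n, n))

        for _ in range(2000):   # gradient ascent on the log-likelihood
            E = states @ h + np.einsum('ki,ij,kj->k', states, J, states)
            p = np.exp(E); p /= p.sum()
            m_model = p @ states
            c_model = states.T @ (states * p[:, None])
            h += 0.1 * (m_data - m_model)
            J += 0.1 * np.triu(c_data - c_model, k=1)  # couplings, both signs

        print("max |<s_i> error|:", np.abs(m_data - m_model).max())
        print("J range:", J.min().round(3), J.max().round(3))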

  16. Do Not Fear Your Opponent: Suboptimal Changes of a Prevention Strategy when Facing Stronger Opponents

    ERIC Educational Resources Information Center

    Slezak, Diego Fernandez; Sigman, Mariano

    2012-01-01

    The time spent making a decision and its quality define a widely studied trade-off. Some models suggest that the time spent is set to optimize reward, as verified empirically in simple-decision making experiments. However, in a more complex perspective compromising components of regulation focus, ambitions, fear, risk and social variables,…

  17. Describing dengue epidemics: Insights from simple mechanistic models

    NASA Astrophysics Data System (ADS)

    Aguiar, Maíra; Stollenwerk, Nico; Kooi, Bob W.

    2012-09-01

    We present a set of nested models to be applied to dengue fever epidemiology. We perform a qualitative study in order to show how much complexity we really need to add into epidemiological models to be able to describe the fluctuations observed in empirical dengue hemorrhagic fever incidence data offering a promising perspective on inference of parameter values from dengue case notifications.

  18. Rational and Empirical Play in the Simple Hot Potato Game

    ERIC Educational Resources Information Center

    Butts, Carter T.; Rode, David C.

    2007-01-01

    We define a "hot potato" to be a good that may be traded a finite number of times, but which becomes a bad if and when it can no longer be exchanged. We describe a game involving such goods, and show that non-acceptance is a unique subgame perfect Nash equilibrium for rational egoists. Contrastingly, experiments with human subjects show…

  19. Are more complex physiological models of forest ecosystems better choices for plot and regional predictions?

    Treesearch

    Wenchi Jin; Hong S. He; Frank R. Thompson

    2016-01-01

    Process-based forest ecosystem models vary from simple physiological, complex physiological, to hybrid empirical-physiological models. Previous studies indicate that complex models provide the best prediction at plot scale with a temporal extent of less than 10 years, however, it is largely untested as to whether complex models outperform the other two types of models...

  20. Postmodeling Sensitivity Analysis to Detect the Effect of Missing Data Mechanisms

    ERIC Educational Resources Information Center

    Jamshidian, Mortaza; Mata, Matthew

    2008-01-01

    Incomplete or missing data is a common problem in almost all areas of empirical research. It is well known that simple and ad hoc methods such as complete case analysis or mean imputation can lead to biased and/or inefficient estimates. The method of maximum likelihood works well; however, when the missing data mechanism is not one of missing…

  1. Indicators as Judgment Devices: An Empirical Study of Citizen Bibliometrics in Research Evaluation

    ERIC Educational Resources Information Center

    Hammarfelt, Björn; Rushforth, Alexander D.

    2017-01-01

    A researcher's number of publications has been a fundamental merit in the competition for academic positions since the late 18th century. Today, the simple counting of publications has been supplemented with a whole range of bibliometric indicators, which supposedly not only measures the volume of research but also its impact. In this study, we…

  2. Brief communication: Legionnaire's disease successfully treated in acute myelocytic leukemia during severe neutropenia.

    PubMed

    Guthrie, T H; Mahizhnan, P

    1983-01-01

    A patient with acute nonlymphocytic leukemia developed progressive lung infiltrates and unremitting fevers during a profound neutropenic state. Legionnaire's disease was diagnosed by simple immunologic studies and successfully treated with erythromycin. This index case alerts physicians to a treatable infection that would not normally be susceptible to the empiric antibiotic regimens given to febrile neutropenic patients.

  3. Science support for the Earth radiation budget sensor on the Nimbus-7 spacecraft

    NASA Technical Reports Server (NTRS)

    Ingersoll, A. P.

    1982-01-01

    Experimental data supporting the Earth radiation budget sensor on the Nimbus 7 satellite are given. The data deal with the empirical relations between radiative flux, cloudiness, and other meteorological parameters; the response of a zonal climate ice sheet model to the orbital perturbations during the quaternary ice ages; and a simple parameterization for ice sheet ablation rate.

  4. Measuring water and sediment discharge from a road plot with a settling basin and tipping bucket

    Treesearch

    Thomas A. Black; Charles H. Luce

    2013-01-01

    A simple empirical method quantifies water and sediment production from a forest road surface, and is well suited for calibration and validation of road sediment models. To apply this quantitative method, the hydrologic technician installs bordered plots on existing typical road segments and measures coarse sediment production in a settling tank. When a tipping bucket...

  5. An Empirical Test of Oklahoma's A-F School Grades

    ERIC Educational Resources Information Center

    Adams, Curt M.; Forsyth, Patrick B.; Ware, Jordan; Mwavita, Mwarumba; Barnes, Laura L.; Khojasteb, Jam

    2016-01-01

    Oklahoma is one of 16 states electing to use an A-F letter grade as an indicator of school quality. On the surface, letter grades are an attractive policy instrument for school improvement; they are seemingly clear, simple, and easy to interpret. Evidence, however, on the use of letter grades as an instrument to rank and improve schools is scant…

  6. Mathematics Curriculum Based Measurement to Predict State Test Performance: A Comparison of Measures and Methods

    ERIC Educational Resources Information Center

    Stevens, Olinger; Leigh, Erika

    2012-01-01

    Scope and Method of Study: The purpose of the study is to use an empirical approach to identify a simple, economical, efficient, and technically adequate performance measure that teachers can use to assess student growth in mathematics. The current study has been designed to expand the body of research for math CBM to further examine technical…

  7. Scrutinizing A Survey-Based Measure of Science and Mathematics Teacher Knowledge: Relationship to Observations of Teaching Practice

    ERIC Educational Resources Information Center

    Talbot, Robert M., III

    2017-01-01

    There is a clear need for valid and reliable instrumentation that measures teacher knowledge. However, the process of investigating and making a case for instrument validity is not a simple undertaking; rather, it is a complex endeavor. This paper presents the empirical case of one aspect of such an instrument validation effort. The particular…

  8. Survey Response-Related Biases in Contingent Valuation: Concepts, Remedies, and Empirical Application to Valuing Aquatic Plant Management

    Treesearch

    Mark L. Messonnier; John C. Bergstrom; Chrisopher M. Cornwell; R. Jeff Teasley; H. Ken Cordell

    2000-01-01

    Simple nonresponse and selection biases that may occur in survey research such as contingent valuation applications are discussed and tested. Correction mechanisms for these types of biases are demonstrated. Results indicate the importance of testing and correcting for unit and item nonresponse bias in contingent valuation survey data. When sample nonresponse and...

  9. Microscopic study reveals the singular origins of growth

    NASA Astrophysics Data System (ADS)

    Yaari, G.; Nowak, A.; Rakocy, K.; Solomon, S.

    2008-04-01

    Anderson [Science 177, 293 (1972)] proposed the concept of complexity in order to describe the emergence and growth of macroscopic collective patterns out of the simple interactions of many microscopic agents. In the physical sciences this paradigm was implemented systematically and confirmed repeatedly by successful confrontation with reality. In the social sciences, however, the possibilities to stage experiments to validate it are limited. During the 1990s, a series of dramatic political and economic events provided the opportunity to do so. We exploit the resulting empirical evidence to validate a simple agent-based alternative to the classical logistic dynamics. The post-liberalization empirical data from Poland confirm the theoretical prediction that the dynamics is dominated by singular rare events which ensure the resilience and adaptability of the system. We have shown that growth is led by a few singular "growth centers" (Fig. 1) that initially developed at a tremendous rate (Fig. 3), followed by a diffusion process to the rest of the country, leading to a positive growth rate uniform across the counties. In addition to the interdisciplinary unifying potential of our generic formal approach, the present work reveals the strong causal ties between the "softer" social conditions and their "hard" economic consequences.

  10. Estimating tuberculosis incidence from primary survey data: a mathematical modeling approach.

    PubMed

    Pandey, S; Chadha, V K; Laxminarayan, R; Arinaminpathy, N

    2017-04-01

    There is an urgent need for improved estimations of the burden of tuberculosis (TB). Our aim was to develop a new quantitative method based on mathematical modelling and to demonstrate its application to TB in India. We developed a simple model of TB transmission dynamics to estimate the annual incidence of TB disease from the annual risk of tuberculous infection and the prevalence of smear-positive TB. We first compared model estimates of annual infections per smear-positive TB case with previous empirical estimates from China, Korea and the Philippines. We then applied the model to estimate TB incidence in India, stratified by urban and rural settings. The model estimates show agreement with previous empirical estimates. Applied to India, the model suggests an annual incidence of smear-positive TB of 89.8 per 100 000 population (95%CI 56.8-156.3). Results show differences in urban and rural TB: while an urban TB case infects more individuals per year, a rural TB case remains infectious for appreciably longer, suggesting the need for interventions tailored to these different settings. Simple models of TB transmission, in conjunction with the necessary data, can offer approaches to burden estimation that complement those currently being used.

  11. Lab and Pore-Scale Study of Low Permeable Soils Diffusional Tortuosity

    NASA Astrophysics Data System (ADS)

    Lekhov, V.; Pozdniakov, S. P.; Denisova, L.

    2016-12-01

    Diffusion plays an important role in contaminant spreading in low permeable units. The effective diffusion coefficient of a saturated porous medium depends on this coefficient in water, the porosity, and a structural parameter of the porous space: the tortuosity. Theoretical models of the relationship between porosity and diffusional tortuosity are usually derived for conceptual granular models of media filled by solid particles of simple geometry. These models usually do not represent soils with complex microstructure. Empirical models, such as Archie's law, based on experimental electrical conductivity data, are mostly useful for practical applications. Such models contain empirical parameters that should be defined experimentally for a given soil type. In this work, we compared tortuosity values obtained in lab-scale diffusional experiments and pore-scale diffusion simulation for the studied soil microstructure, and we examined the relationship between tortuosity and porosity. Samples for the study were taken from borehole cores of a low-permeable silt-clay formation. Using samples of 50 cm³ we performed lab-scale diffusional experiments and estimated the lab-scale tortuosity. Next, using these samples, we studied the microstructure with an X-ray microtomograph. Imaging was performed on undisturbed microsamples of size 1.5³ mm³ at ×300 resolution (1024³ voxels). After binarization of each obtained 3-D structure, a spatial correlation analysis was performed. This analysis showed that the spatial correlation scale of the indicator variogram is considerably smaller than the microsample length. We then numerically solved the Laplace equation with binary coefficients for each microsample. The total number of simulations on the finite-difference grid of 175³ cells was 3500. As a result, effective diffusion coefficient, tortuosity, and porosity values were obtained for all studied microsamples. The results were analyzed as a graph of tortuosity versus porosity. The six experimental tortuosity values agree well with the pore-scale simulations, falling in the general pattern of nonlinear decrease of tortuosity with decreasing porosity. Fitting this graph with an Archie-type model, we found exponent values in the range between 1.8 and 2.4. This work was supported by RFBR via grant 14-05-00409.
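    A minimal sketch of fitting an Archie-type power law to tortuosity-porosity pairs such as those produced by the pore-scale simulations. Conventions for tortuosity vary; here the tortuosity factor is taken as tau = D_eff / (phi * D_w), so that Archie's law D_eff / D_w = phi^m gives tau = phi^(m-1), which decreases with decreasing porosity. The data below are synthetic placeholders.

        import numpy as np

        rng = np.random.default_rng(6)
        m_true = 2.1
        phi = rng.uniform(0.05, 0.4, 40)   # porosity
        # Tortuosity factor with multiplicative noise
        tau = phi ** (m_true - 1) * rng.lognormal(0, 0.05, 40)

        slope, _ = np.polyfit(np.log(phi), np.log(tau), 1)   # log-log fit
        print("fitted Archie exponent m =", round(slope + 1, 2))  # ~1.8-2.4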

  12. Habitual instigation and habitual execution: Definition, measurement, and effects on behaviour frequency.

    PubMed

    Gardner, Benjamin; Phillips, L Alison; Judah, Gaby

    2016-09-01

    'Habit' is a process whereby situational cues generate behaviour automatically, via activation of learned cue-behaviour associations. This article presents a conceptual and empirical rationale for distinguishing between two manifestations of habit in health behaviour: triggering selection and initiation of an action ('habitual instigation'), or automating progression through the subactions required to complete an action ('habitual execution'). We propose that habitual instigation accounts for habit-action relationships, and is the manifestation captured by the Self-Report Habit Index (SRHI), the dominant measure in health psychology. Design: conceptual analysis and prospective survey. Student participants (N = 229) completed measures of intentions, the original, non-specific SRHI, an instigation-specific SRHI variant, an execution-specific variant, and, 1 week later, behaviour, in three health domains (flossing, snacking, and breakfast consumption). Effects of habitual instigation and execution on behaviour were modelled using regression analyses, with simple slope analysis to test habit-intention interactions. Relationships between instigation, execution, and non-specific SRHI variants were assessed via correlations and factor analyses. The instigation-SRHI was uniformly more predictive of behaviour frequency than the execution-SRHI and corresponded more closely with the original SRHI in correlation and factor analyses. Further experimental work is needed to separate the impact of the two habit manifestations more rigorously. Nonetheless, the findings qualify calls for habit-based interventions by suggesting that behaviour maintenance may be better served by habitual instigation and that disrupting habitual behaviour may depend on overriding habits of instigation. Greater precision of measurement may help to minimize confusion between habitual instigation and execution. Statement of contribution. What is already known on this subject? Habit is often used to understand, explain, and change health behaviour. Making behaviour habitual has been proposed as a means of maintaining behaviour change. Concerns have been raised about the extent to which health behaviour can be habitual. What does this study add? A conceptual and empirical rationale for discerning habitually instigated and habitually executed behaviour. Results show habit-behaviour effects are mostly attributable to habitual instigation, not execution. The most common habit measure, the Self-Report Habit Index, measures habitual instigation, not execution. © 2016 The British Psychological Society.

  13. Electrostatics of cysteine residues in proteins: parameterization and validation of a simple model.

    PubMed

    Salsbury, Freddie R; Poole, Leslie B; Fetrow, Jacquelyn S

    2012-11-01

    One of the most popular and simple models for the calculation of pKa values from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKa values. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKa values; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKa values. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa values in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues. Copyright © 2012 Wiley Periodicals, Inc.

  14. From damselflies to pterosaurs: how burst and sustainable flight performance scale with size.

    PubMed

    Marden, J H

    1994-04-01

    Recent empirical data for short-burst lift and power production of flying animals indicate that mass-specific lift and power output scale independently (lift) or slightly positively (power) with increasing size. These results contradict previous theory, as well as simple observation, which argues for degradation of flight performance with increasing size. Here, empirical measures of lift and power during short-burst exertion are combined with empirically based estimates of maximum muscle power output in order to predict how burst and sustainable performance scale with body size. The resulting model is used to estimate performance of the largest extant flying birds and insects, along with the largest flying animals known from fossils. These estimates indicate that burst flight performance capacities of even the largest extinct fliers (estimated mass 250 kg) would allow takeoff from the ground; however, limitations on sustainable power output should constrain capacity for continuous flight at body sizes exceeding 0.003-1.0 kg, depending on relative wing length and flight muscle mass.

  15. Distribution of the two-sample t-test statistic following blinded sample size re-estimation.

    PubMed

    Lu, Kaifeng

    2016-05-01

    We consider the blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for the evaluation of the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of the standardized non-inferiority margin for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
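
    A minimal Monte Carlo sketch of the rejection probability under blinded re-estimation, assuming equal allocation, a lumped one-sample variance estimate at the interim, and an unadjusted final t-test; the pilot size, effect, and power target are illustrative, not the paper's settings:

      import numpy as np
      from scipy import stats

      def rejection_prob(delta, sigma=1.0, n1=40, target_power=0.9,
                         alpha=0.025, n_sim=10000, seed=0):
          """Empirical probability of one-sided rejection with a two-sample
          t-test after blinded sample size re-estimation."""
          rng = np.random.default_rng(seed)
          z = stats.norm.ppf(1 - alpha) + stats.norm.ppf(target_power)
          rejections = 0
          for _ in range(n_sim):
              a = rng.normal(delta, sigma, n1 // 2)
              b = rng.normal(0.0, sigma, n1 // 2)
              # Blinded (one-sample) variance: pool all data, ignore arm labels
              s2_blinded = np.var(np.concatenate([a, b]), ddof=1)
              n_per_arm = max(n1 // 2, int(np.ceil(2 * s2_blinded * (z / delta) ** 2)))
              a2 = rng.normal(delta, sigma, n_per_arm - n1 // 2)
              b2 = rng.normal(0.0, sigma, n_per_arm - n1 // 2)
              t, p = stats.ttest_ind(np.concatenate([a, a2]),
                                     np.concatenate([b, b2]), equal_var=True)
              rejections += (t > 0) and (p / 2 < alpha)  # one-sided test
          return rejections / n_sim

      print(rejection_prob(delta=0.5))  # empirical power near the 0.9 target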

  16. Rule induction performance in amnestic mild cognitive impairment and Alzheimer's dementia: examining the role of simple and biconditional rule learning processes.

    PubMed

    Oosterman, Joukje M; Heringa, Sophie M; Kessels, Roy P C; Biessels, Geert Jan; Koek, Huiberdina L; Maes, Joseph H R; van den Berg, Esther

    2017-04-01

    Rule induction tests such as the Wisconsin Card Sorting Test require executive control processes, but also the learning and memorization of simple stimulus-response rules. In this study, we examined the contribution of diminished learning and memorization of simple rules to complex rule induction test performance in patients with amnestic mild cognitive impairment (aMCI) or Alzheimer's dementia (AD). Twenty-six aMCI patients, 39 AD patients, and 32 control participants were included. A task was used in which the memory load and the complexity of the rules were independently manipulated. This task consisted of three conditions: a simple two-rule learning condition (Condition 1), a simple four-rule learning condition (inducing an increase in memory load; Condition 2), and a complex biconditional four-rule learning condition (inducing an increase in complexity and, hence, executive control load; Condition 3). Performance of AD patients declined disproportionately when the number of simple rules that had to be memorized increased (from Condition 1 to 2). An additional increment in complexity (from Condition 2 to 3) did not, however, disproportionately affect performance of the patients. Performance of the aMCI patients did not differ from that of the control participants. In the patient group, correlation analysis showed that memory performance correlated with Condition 1 performance, whereas executive task performance correlated with Condition 2 performance. These results indicate that the reduced learning and memorization of underlying task rules explains a significant part of the diminished complex rule induction performance commonly reported in AD, although results from the correlation analysis suggest involvement of executive control functions as well. Taken together, these findings suggest that care is needed when interpreting rule induction task performance in terms of executive function deficits in these patients.

  17. Content, Social, and Metacognitive Statements: An Empirical Study Comparing Human-Human and Human-Computer Tutorial Dialogue

    DTIC Science & Technology

    2010-01-01

    for each participant using the formula gain = (posttest − pretest)/(1 − pretest). … The summary of language statistics … differences also affect which factors are correlated with learning gain and user satisfaction. We argue that ITS designers should pay particular attention to strategies for dealing …

  18. On the wrong inference of long-range correlations in climate data; the case of the solar and volcanic forcing over the Tropical Pacific

    NASA Astrophysics Data System (ADS)

    Varotsos, Costas A.; Efstathiou, Maria N.

    2017-05-01

    A substantial weakness of several climate studies on long-range dependence is that they conclude long-term memory of climate conditions without considering it necessary to establish power-law scaling and to reject a simple exponential decay of the autocorrelation function. We herewith show one paradigmatic case in which strong long-range dependence could be wrongly inferred from incomplete data analysis. We first apply the DFA method to the solar and volcanic forcing time series over the tropical Pacific during the past 1000 years; the results show a statistically significant straight-line fit to the fluctuation function in a log-log representation, with slope higher than 0.5, which may wrongly be taken as an indication of persistent long-range correlations in the time series. We argue that long-range dependence cannot be concluded from this straight-line fit alone; it requires the fulfilment of two additional prerequisites, i.e., rejecting an exponential decay of the autocorrelation function and establishing the power-law scaling. In fact, investigation of the validity of these prerequisites showed that a DFA exponent higher than 0.5 does not justify the existence of persistent long-range correlations in the temporal evolution of the solar and volcanic forcing during the last millennium. In other words, we show that empirical analyses based on these two prerequisites must not be considered a panacea for a direct proof of scaling, but only as evidence that the scaling hypothesis is plausible. We also discuss the scaling behaviour of the solar and volcanic forcing data based on the Haar tool, which has recently proved its ability to reliably detect the existence of scaling in climate series.
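
    For reference, a compact sketch of the first-order DFA fluctuation function discussed above; the slope of log F(n) versus log n is the DFA exponent, whose value above 0.5 the authors caution against over-interpreting (series and window sizes are illustrative):

      import numpy as np

      def dfa(x, window_sizes):
          """First-order detrended fluctuation analysis of a 1-D series."""
          y = np.cumsum(x - np.mean(x))          # integrated profile
          F = []
          for n in window_sizes:
              n_windows = len(y) // n
              resid = []
              for k in range(n_windows):
                  seg = y[k * n:(k + 1) * n]
                  t = np.arange(n)
                  trend = np.polyval(np.polyfit(t, seg, 1), t)  # linear detrend
                  resid.append(np.mean((seg - trend) ** 2))
              F.append(np.sqrt(np.mean(resid)))
          return np.array(F)

      rng = np.random.default_rng(1)
      x = rng.normal(size=4096)                  # white noise: expect slope ~0.5
      sizes = np.array([8, 16, 32, 64, 128, 256])
      F = dfa(x, sizes)
      alpha = np.polyfit(np.log(sizes), np.log(F), 1)[0]
      print(round(alpha, 2))                     # ~0.5 for uncorrelated noise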

  19. Counting or Chunking?

    PubMed Central

    Spotorno, Nicola; McMillan, Corey T.; Powers, John P.; Clark, Robin; Grossman, Murray

    2014-01-01

    A growing body of empirical data shows that the ability to manipulate quantities in a precise and efficient fashion is rooted in cognitive mechanisms devoted to specific aspects of number processing. The Analog number system (ANS) has a reasonable representation of quantities up to about 4, and represents larger quantities on the basis of a numerical ratio between quantities. In order to represent the precise cardinality of a number, the ANS may be supported by external algorithms such as language, leading to a "Precise Number System". In the setting of limited language, other number-related systems can appear. For example, the Parallel Individuation system (PIS) supports a "chunking mechanism" that clusters units of larger numerosities into smaller subsets. In the present study we investigated number processing in non-aphasic patients with Corticobasal Syndrome (CBS) and Posterior Cortical Atrophy (PCA), two neurodegenerative conditions associated with progressive parietal atrophy, by assessing the properties of the ANS associated with smaller and larger numerosities and the chunking property of the PIS. The results revealed that CBS/PCA patients are impaired in simple calculations (e.g., addition and subtraction) and that their performance strongly correlates with the size of the numbers involved in these calculations, revealing a clear magnitude effect. This magnitude effect correlated with gray matter atrophy in parietal regions. Moreover, a numeral-dots transcoding task showed that CBS/PCA patients are able to take advantage of clustering in the spatial distribution of the dots of the array. The relative advantage associated with chunking compared to a random spatial distribution correlated with both parietal and prefrontal regions. These results shed light on the properties of systems for representing number knowledge in non-aphasic patients with CBS and PCA. PMID:25278132

  20. A Simplified Approach for Simultaneous Measurements of Wavefront Velocity and Curvature in the Heart Using Activation Times.

    PubMed

    Mazeh, Nachaat; Haines, David E; Kay, Matthew W; Roth, Bradley J

    2013-12-01

    The velocity and curvature of a wave front are important factors governing the propagation of electrical activity through cardiac tissue, particularly during heart arrhythmias of clinical importance such as fibrillation. Presently, no simple computational model exists to determine these values simultaneously. The proposed model uses the arrival times at four or five sites to determine the wave front speed (v), direction (θ), and radius of curvature (ROC, r0). If the arrival times are measured, then v, θ, and r0 can be found from differences in arrival times and the distance between these sites. During isotropic conduction, we found good correlation between measured values of the ROC r0 and the distance from the unipolar stimulus (r = 0.9043, p < 0.0001). The conduction velocity (m/s) was correlated (r = 0.998, p < 0.0001) between our method (mean = 0.2403, SD = 0.0533) and an empirical method (mean = 0.2352, SD = 0.0560). The model was applied to a condition of anisotropy and a complex case of reentry with a high-voltage extra stimulus. Again, results show good correlation between our simplified approach and established methods for multiple wavefront morphologies. In conclusion, insignificant measurement errors were observed between this simplified approach and an approach that was more computationally demanding. Accuracy was maintained when ε (= b/r0, the ratio of recording-site spacing to the wave front's ROC) was between 0.001 and 0.5. The present simplified model can be applied to a variety of clinical conditions to predict behavior of planar, elliptical, and reentrant wave fronts. It may be used to study the genesis and propagation of rotors in human arrhythmias and could lead to rotor mapping using low-density endocardial recording electrodes.
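
    The simplest special case of such an arrival-time scheme is a locally plane wave, for which a slowness vector can be solved from three or more sites by least squares; a sketch under that simplifying assumption (the paper's curved-front fit additionally recovers r0):

      import numpy as np

      def plane_wave_fit(xy, t):
          """Fit arrival times t (s) at electrode positions xy (m, shape [n, 2])
          to t = t0 + s . x, where s is the slowness vector (s/m).
          Returns speed (m/s) and propagation direction theta (rad)."""
          A = np.column_stack([np.ones(len(t)), xy])
          t0, sx, sy = np.linalg.lstsq(A, t, rcond=None)[0]
          speed = 1.0 / np.hypot(sx, sy)
          theta = np.arctan2(sy, sx)
          return speed, theta

      # Four sites on a 1 mm grid; synthetic plane wave at 0.24 m/s, 30 degrees
      xy = np.array([[0, 0], [1e-3, 0], [0, 1e-3], [1e-3, 1e-3]])
      v_true, th = 0.24, np.radians(30)
      t = xy @ np.array([np.cos(th), np.sin(th)]) / v_true
      print(plane_wave_fit(xy, t))   # ~ (0.24, 0.524)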

  1. Downstream aggradation owing to lava dome extrusion and rainfall runoff at Volcán Santiaguito, Guatemala

    USGS Publications Warehouse

    Harris, Andrew J. L.; Vallance, James W.; Kimberly, Paul; Rose, William I.; Matías, Otoniel; Bunzendahl, Elly; Flynn, Luke P.; Garbeil, Harold

    2006-01-01

    Persistent lava extrusion at the Santiaguito dome complex (Guatemala) results in continuous lahar activity and river bed aggradation downstream of the volcano. We present a simple method that uses vegetation indices extracted from Landsat Thematic Mapper (TM) data to map impacted zones. Application of this technique to a time series of 21 TM images acquired between 1987 and 2000 allows us to map, measure, and track temporal and spatial variations in the area of lahar impact and river aggradation. In the proximal zone of the fluvial system, these data show a positive correlation between extrusion rate at Santiaguito (E), aggradation area 12 months later (Aprox), and rainfall during the intervening 12 months (Rain12): Aprox = 3.92 + 0.50 E + 0.31 ln(Rain12) (r2 = 0.79). This describes a situation in which an increase in sediment supply (extrusion rate) and/or a means to mobilize this sediment (rainfall) results in an increase in lahar activity (aggraded area). Across the medial zone, we find a positive correlation between extrusion rate and/or area of proximal aggradation and medial aggradation area (Amed): Amed = 18.84 - 0.05 Aprox - 6.15 Rain12 (r2 = 0.85). Here the correlation between rainfall and aggradation area is negative. This describes a situation in which increased sediment supply results in an increase in lahar activity but, because it is the zone of transport, an increase in rainfall serves to increase the transport efficiency of rivers flowing through this zone. Thus, increased rainfall flushes the medial zone of sediment. These quantitative data allow us to empirically define the links between sediment supply and mobilization in this fluvial system and to derive predictive relationships that use rainfall and extrusion rates to estimate aggradation area 12 months hence.
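
    As a worked use of the fitted relationships, the two regression equations quoted above can be applied directly; a small sketch using the paper's coefficients (the input values below are illustrative, and units follow the source):

      import math

      def proximal_aggradation_area(E, rain12):
          """A_prox = 3.92 + 0.50*E + 0.31*ln(Rain12), per the TM-derived fit."""
          return 3.92 + 0.50 * E + 0.31 * math.log(rain12)

      def medial_aggradation_area(a_prox, rain12):
          """A_med = 18.84 - 0.05*A_prox - 6.15*Rain12: rainfall flushes
          the medial (transport) zone, hence the negative rainfall term."""
          return 18.84 - 0.05 * a_prox - 6.15 * rain12

      # Illustrative inputs only; the source does not tabulate example values.
      a_prox = proximal_aggradation_area(E=1.2, rain12=2.0)
      print(a_prox, medial_aggradation_area(a_prox, rain12=2.0))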

  2. The London handicap scale: a re-evaluation of its validity using standard scoring and simple summation.

    PubMed

    Jenkinson, C; Mant, J; Carter, J; Wade, D; Winner, S

    2000-03-01

    To assess the validity of the London handicap scale (LHS) using a simple unweighted scoring system compared with traditional weighted scoring, 323 patients admitted to hospital with acute stroke were followed up by interview 6 months after their stroke as part of a trial looking at the impact of a family support organiser. Outcome measures included the six-item LHS, the Dartmouth COOP charts, the Frenchay activities index, the Barthel index, and the hospital anxiety and depression scale. Patients' handicap scores were calculated both using the standard procedure (with weighting) for the LHS, and using a simple summation procedure without weighting (U-LHS). Construct validity of both LHS and U-LHS was assessed by testing their correlations with the other outcome measures. Cronbach's alpha for the LHS was 0.83. The U-LHS was highly correlated with the LHS (r = 0.98). Correlation of the U-LHS with the other outcome measures gave very similar results to correlation of the LHS with these measures. Simple summation scoring of the LHS does not lead to any change in the measurement properties of the instrument compared with standard weighted scoring. Unweighted scores are easier to calculate and interpret, so it is recommended that these are used.

  3. The realities of risk, the nature of hope, and the role of science: A response to Cook and VandeCreek.

    PubMed

    Rudd, M David; Joiner, Thomas; Brown, Gregory K; Cukrowicz, Kelly; Jobes, David A; Silverman, Morton

    2009-12-01

    A response is offered to the critiques of both Cook and VandeCreek. Among the points emphasized are the simple realities of risk with suicidal patients, existing empirical research with informed consent in both clinical psychology and other health care areas, as well as the persistence of common myths in clinical practice with suicidal patients. Although empirical science provides a firm foundation to much of what is proposed, it is critical for practitioners to recognize and respond to the ethical demands for openness and transparency with high-risk clients in an effort to achieve shared responsibility in care. (PsycINFO Database Record (c) 2010 APA, all rights reserved).

  4. From sparse to dense and from assortative to disassortative in online social networks

    PubMed Central

    Li, Menghui; Guan, Shuguang; Wu, Chensheng; Gong, Xiaofeng; Li, Kun; Wu, Jinshan; Di, Zengru; Lai, Choy-Heng

    2014-01-01

    Inspired by the analysis of several empirical online social networks, we propose a simple reaction-diffusion-like coevolving model, in which individuals are activated to create links based on their states, influenced by local dynamics and their own intention. It is shown that the model can reproduce the remarkable properties observed in empirical online social networks; in particular, the assortative coefficients are neutral or negative, and the power law exponents γ are smaller than 2. Moreover, we demonstrate that, under appropriate conditions, the model network naturally makes transition(s) from assortative to disassortative, and from sparse to dense in their characteristics. The model is useful in understanding the formation and evolution of online social networks. PMID:24798703

  5. From sparse to dense and from assortative to disassortative in online social networks.

    PubMed

    Li, Menghui; Guan, Shuguang; Wu, Chensheng; Gong, Xiaofeng; Li, Kun; Wu, Jinshan; Di, Zengru; Lai, Choy-Heng

    2014-05-06

    Inspired by the analysis of several empirical online social networks, we propose a simple reaction-diffusion-like coevolving model, in which individuals are activated to create links based on their states, influenced by local dynamics and their own intention. It is shown that the model can reproduce the remarkable properties observed in empirical online social networks; in particular, the assortative coefficients are neutral or negative, and the power law exponents γ are smaller than 2. Moreover, we demonstrate that, under appropriate conditions, the model network naturally makes transition(s) from assortative to disassortative, and from sparse to dense in their characteristics. The model is useful in understanding the formation and evolution of online social networks.

  6. An Empirical Model of the Variation of the Solar Lyman-α Spectral Irradiance

    NASA Astrophysics Data System (ADS)

    Kretzschmar, Matthieu; Snow, Martin; Curdt, Werner

    2018-03-01

    We propose a simple model that computes the spectral profile of the solar irradiance in the hydrogen Lyman alpha line, H Ly-α (121.567 nm), from 1947 to present. Such a model is relevant for the study of many astronomical environments, from planetary atmospheres to interplanetary medium. This empirical model is based on the SOlar Heliospheric Observatory/Solar Ultraviolet Measurement of Emitted Radiation observations of the Ly-α irradiance over solar cycle 23 and the Ly-α disk-integrated irradiance composite. The model reproduces the temporal variability of the spectral profile and matches the independent SOlar Radiation and Climate Experiment/SOLar-STellar Irradiance Comparison Experiment spectral observations from 2003 to 2007 with an accuracy better than 10%.

  7. Childhood Traumatic Grief: A Multi-Site Empirical Examination of the Construct and Its Correlates

    ERIC Educational Resources Information Center

    Brown, Elissa J.; Amaya-Jackson, Lisa; Cohen, Judith; Handel, Stephanie; De Bocanegra, Heike Thiel; Zatta, Eileen; Goodman, Robin F.; Mannarino, Anthony

    2008-01-01

    This study evaluated the construct of childhood traumatic grief (CTG) and its correlates through a multi-site assessment of 132 bereaved children and adolescents. Youth completed a new measure of the characteristics, attributions, and reactions to exposure to death (CARED), as well as measures of CTG, posttraumatic stress disorder (PTSD),…

  8. Prevalence and Socio-Demographic Correlates of Psychological Distress among Students at an Australian University

    ERIC Educational Resources Information Center

    Larcombe, Wendy; Finch, Sue; Sore, Rachel; Murray, Christina M.; Kentish, Sandra; Mulder, Raoul A.; Lee-Stecum, Parshia; Baik, Chi; Tokatlidis, Orania; Williams, David A.

    2016-01-01

    This research contributes to the empirical literature on university student mental well-being by investigating the prevalence and socio-demographic correlates of severe levels of psychological distress. More than 5000 students at a metropolitan Australian university participated in an anonymous online survey in 2013 that included the short form of…

  9. Correlates of Conduct Problems and Depression Comorbidity in Elementary School Boys and Girls Receiving Special Educational Services

    ERIC Educational Resources Information Center

    Poirier, Martine; Déry, Michèle; Toupin, Jean; Verlaan, Pierrette; Lemelin, Jean-Pascal; Jagiellowicz, Jadzia

    2015-01-01

    There is limited empirical research on the correlates of conduct problems (CP) and depression comorbidity during childhood. This study investigated 479 elementary school children (48.2% girls). It compared children with comorbidity to children with CP only, depression only, and control children on individual, academic, social, and family…

  10. Visual Skills and Chinese Reading Acquisition: A Meta-Analysis of Correlation Evidence

    ERIC Educational Resources Information Center

    Yang, Ling-Yan; Guo, Jian-Peng; Richman, Lynn C.; Schmidt, Frank L.; Gerken, Kathryn C.; Ding, Yi

    2013-01-01

    This paper used meta-analysis to synthesize the relation between visual skills and Chinese reading acquisition based on the empirical results from 34 studies published from 1991 to 2011. We obtained 234 correlation coefficients from 64 independent samples, with a total of 5,395 participants. The meta-analysis revealed that visual skills as a…

  11. Exponential Correlation of IQ and the Wealth of Nations

    ERIC Educational Resources Information Center

    Dickerson, Richard E.

    2006-01-01

    Plots of mean IQ and per capita real Gross Domestic Product for groups of 81 and 185 nations, as collected by Lynn and Vanhanen, are best fitted by an exponential function of the form GDP = a * 10^(b*IQ), where a and b are empirical constants. Exponential fitting yields markedly higher correlation coefficients than either linear or…
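
    A law of this form is linear in log space, log10(GDP) = log10(a) + b*IQ, so a and b can be estimated by ordinary least squares on log-transformed GDP; a sketch with synthetic data (the Lynn-Vanhanen table itself is not reproduced here):

      import numpy as np

      def fit_exponential(iq, gdp):
          """Fit GDP = a * 10**(b*IQ) by least squares on log10(GDP)."""
          b, log10_a = np.polyfit(iq, np.log10(gdp), 1)
          return 10 ** log10_a, b

      rng = np.random.default_rng(0)
      iq = rng.uniform(70, 105, 80)                    # synthetic national means
      gdp = 2.0 * 10 ** (0.02 * iq) * rng.lognormal(0, 0.2, 80)
      a, b = fit_exponential(iq, gdp)
      print(round(a, 2), round(b, 3))                  # recovers ~ (2.0, 0.02)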

  12. Classical Item Analysis Using Latent Variable Modeling: A Note on a Direct Evaluation Procedure

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2011-01-01

    A directly applicable latent variable modeling procedure for classical item analysis is outlined. The method allows one to point and interval estimate item difficulty, item correlations, and item-total correlations for composites consisting of categorical items. The approach is readily employed in empirical research and as a by-product permits…

  13. Correlates of parent-youth discordance about youth-witnessed violence: a brief report.

    PubMed

    Lewis, Terri; Thompson, Richard; Kotch, Jonathan B; Proctor, Laura J; Litrownik, Alan J; English, Diana J; Runyan, Desmond K; Wiley, Tisha R; Dubowitz, Howard

    2013-01-01

    Studies have consistently demonstrated a lack of agreement between youth and parent reports regarding youth-witnessed violence (YWV). However, little empirical investigation has been conducted on the correlates of disagreement. Concordance between youth and parents about YWV was examined in 766 parent-youth dyads from the Longitudinal Studies of Child Abuse and Neglect (LONGSCAN). Results showed that significantly more youth (42%) than parents (15%) reported YWV. Among the dyads in which at least one informant reported YWV (N = 344), we assessed whether youth delinquency, parental monitoring, parent-child relationship quality, history of child maltreatment, income, and parental depression were predictive of parent-youth concordance. Findings indicated that youth engagement in delinquent activities was higher in the groups in which the youth reported violence exposure. More empirical study is needed to assess correlates of agreement in high-risk youth to better inform associations found between exposures and outcomes, as well as practice and policy for violence-exposed youth.

  14. Why do generic drugs fail to achieve an adequate market share in Greece? Empirical findings and policy suggestions.

    PubMed

    Balasopoulos, T; Charonis, A; Athanasakis, K; Kyriopoulos, J; Pavi, E

    2017-03-01

    Since 2010, the memoranda of understanding were implemented in Greece as a measure of fiscal adjustment. Public pharmaceutical expenditure was one of the main focuses of this implementation. Numerous policies targeted on pharma spending reduced the pharmaceutical budget by 60.5%. Yet generics' penetration in Greece remained among the lowest in OECD countries. This study aims to highlight the factors that affect the perceptions of the population on generic drugs and to suggest effective policy measures. The empirical analysis is based on a national cross-sectional survey conducted on a sample of 2003 individuals, representative of the general population. Two ordinal logistic regression models were constructed in order to identify the determinants that affect the respondents' beliefs about the safety and the effectiveness of generic drugs. The empirical findings showed a positive and statistically significant correlation with income, bill-payment difficulties, safety and effectiveness of drugs, prescription and dispensing preferences, and views toward pharmaceutical companies. Also, age and trust toward the medical community have a positive and statistically significant correlation with the perception of the safety of generic drugs. Policy interventions are suggested on the basis of the empirical results in three major categories: (a) information campaigns, (b) incentives for doctors and pharmacists, and (c) strengthening the bioequivalence control framework and the dissemination of results. Copyright © 2017 Elsevier B.V. All rights reserved.
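
    A minimal sketch of the kind of ordinal logistic regression described, with an ordered belief outcome and statsmodels' OrderedModel; the predictors, coding, and effect sizes are illustrative assumptions, not the survey's:

      import numpy as np
      import pandas as pd
      from statsmodels.miscmodels.ordinal_model import OrderedModel

      rng = np.random.default_rng(0)
      n = 2000
      df = pd.DataFrame({
          "income": rng.normal(0, 1, n),
          "trust_medical": rng.normal(0, 1, n),
          "age": rng.normal(0, 1, n),
      })
      # Latent belief rises with income and trust (synthetic ground truth)
      latent = (0.5 * df.income + 0.4 * df.trust_medical + 0.2 * df.age
                + rng.logistic(0, 1, n))
      df["safety_belief"] = pd.cut(latent, [-np.inf, -1, 1, np.inf],
                                   labels=["low", "mid", "high"], ordered=True)

      model = OrderedModel(df["safety_belief"],
                           df[["income", "trust_medical", "age"]],
                           distr="logit")
      print(model.fit(method="bfgs", disp=False).summary())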

  15. Merger-driven evolution of the effective stellar initial mass function of massive early-type galaxies

    NASA Astrophysics Data System (ADS)

    Sonnenfeld, Alessandro; Nipoti, Carlo; Treu, Tommaso

    2017-02-01

    The stellar initial mass function (IMF) of early-type galaxies is the combination of the IMF of the stellar population formed in situ and that of accreted stellar populations. Using as an observable the effective IMF αIMF, defined as the ratio between the true stellar mass of a galaxy and the stellar mass inferred assuming a Salpeter IMF, we present a theoretical model for its evolution as a result of dry mergers. We use a simple dry-merger evolution model, based on cosmological N-body simulations, together with empirically motivated prescriptions for the IMF, to make predictions on how the effective IMF of massive early-type galaxies changes from z = 2 to z = 0. We find that the IMF normalization of individual galaxies becomes lighter with time. At fixed velocity dispersion, αIMF is predicted to be constant with redshift. Current dynamical constraints on the evolution of the IMF are in slight tension with this prediction, even though systematic uncertainties, including the effect of radial gradients in the IMF, prevent a conclusive statement. The correlation of αIMF with stellar mass becomes shallower with time, while the correlation between αIMF and velocity dispersion is mostly preserved by dry mergers. We also find that dry mergers can mix the dependence of the IMF on stellar mass and velocity dispersion, making it challenging to infer from z = 0 observations of global galactic properties which quantity is originally coupled with the IMF.

  16. Cyclic voltammetry deposition of copper nanostructure on MWCNTs modified pencil graphite electrode: An ultra-sensitive hydrazine sensor.

    PubMed

    Heydari, Hamid; Gholivand, Mohammad B; Abdolmaleki, Abbas

    2016-09-01

    In this study, copper (Cu) nanostructures (CuNS) were electrochemically deposited on a film of multiwall carbon nanotubes (MWCNTs) modified pencil graphite electrode (MWCNTs/PGE) by the cyclic voltammetry method to fabricate a CuNS-MWCNTs composite sensor (CuNS-MWCNT/PGE) for hydrazine detection. Scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDX) were used for the characterization of CuNS on the MWCNTs matrix. The composite of CuNS-MWCNTs was characterized with cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS). The preliminary studies showed that the proposed sensor has a synergistic electrocatalytic activity for the oxidation of hydrazine in phosphate buffer. The catalytic currents of square wave voltammetry had a linear correlation with the hydrazine concentration in the range of 0.1 to 800 μM with a low detection limit of 70 nM. Moreover, the amperometric oxidation current exhibited a linear correlation with hydrazine concentration in the range of 50-800 μM with a detection limit of 4.3 μM. The proposed electrode was used for the determination of hydrazine in real samples and the results were promising. Empirical results also indicated that the sensor had good reproducibility and long-term stability, and that its response to hydrazine was free from interferences. Moreover, the proposed sensor benefits from simple preparation, low cost, and outstanding sensitivity, selectivity, and reproducibility for hydrazine determination. Copyright © 2016. Published by Elsevier B.V.

  17. Record statistics of financial time series and geometric random walks

    NASA Astrophysics Data System (ADS)

    Sabir, Behlool; Santhanam, M. S.

    2014-09-01

    The study of record statistics of correlated series in physics, such as random walks, is gaining momentum, and several analytical results have been obtained in the past few years. In this work, we study the record statistics of correlated empirical data for which random walk models have relevance. We obtain results for the record statistics of select stock market data and the geometric random walk, primarily through simulations. We show that the distribution of the age of records is a power law with the exponent α lying in the range 1.5 ≤ α ≤ 1.8. Further, the longest record ages follow the Fréchet distribution of extreme value theory. The record statistics of geometric random walk series are in good agreement with those obtained from empirical stock data.
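
    A record is an entry exceeding all previous entries; a short simulation sketch of record ages for a geometric random walk, the reference process used above (step size and series length are illustrative):

      import numpy as np

      def record_ages(series):
          """Return the ages (waiting times) between successive records."""
          record_times = [0]
          for i in range(1, len(series)):
              if series[i] > series[record_times[-1]]:   # new running maximum
                  record_times.append(i)
          return np.diff(record_times)

      rng = np.random.default_rng(0)
      # Geometric random walk: multiplicative Gaussian steps
      log_price = np.cumsum(rng.normal(0.0, 0.01, 10000))
      prices = np.exp(log_price)
      ages = record_ages(prices)
      print(len(ages) + 1, "records; longest age:", ages.max())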

  18. Using brain stimulation to disentangle neural correlates of conscious vision

    PubMed Central

    de Graaf, Tom A.; Sack, Alexander T.

    2014-01-01

    Research into the neural correlates of consciousness (NCCs) has blossomed, due to the advent of new and increasingly sophisticated brain research tools. Neuroimaging has uncovered a variety of brain processes that relate to conscious perception, obtained in a range of experimental paradigms. But methods such as functional magnetic resonance imaging or electroencephalography do not always afford inference on the functional role these brain processes play in conscious vision. Such empirical NCCs could reflect neural prerequisites, neural consequences, or neural substrates of a conscious experience. Here, we take a closer look at the use of non-invasive brain stimulation (NIBS) techniques in this context. We discuss and review how NIBS methodology can enlighten our understanding of brain mechanisms underlying conscious vision by disentangling the empirical NCCs. PMID:25295015

  19. Chronic nutrient enrichment increases prevalence and severity of coral disease and bleaching.

    PubMed

    Vega Thurber, Rebecca L; Burkepile, Deron E; Fuchs, Corinne; Shantz, Andrew A; McMinds, Ryan; Zaneveld, Jesse R

    2014-02-01

    Nutrient loading is one of the strongest drivers of marine habitat degradation. Yet, the link between nutrients and disease epizootics in marine organisms is often tenuous and supported only by correlative data. Here, we present experimental evidence that chronic nutrient exposure leads to increases in both disease prevalence and severity and coral bleaching in scleractinian corals, the major habitat-forming organisms in tropical reefs. Over 3 years, from June 2009 to June 2012, we continuously exposed areas of a coral reef to elevated levels of nitrogen and phosphorus. At the termination of the enrichment, we surveyed over 1200 scleractinian corals for signs of disease or bleaching. Siderastrea siderea corals within enrichment plots had a twofold increase in both the prevalence and severity of disease compared with corals in unenriched control plots. In addition, elevated nutrient loading increased coral bleaching; Agaricia spp. of corals exposed to nutrients suffered a 3.5-fold increase in bleaching frequency relative to control corals, providing empirical support for a hypothesized link between nutrient loading and bleaching-induced coral declines. However, 1 year later, after nutrient enrichment had been terminated for 10 months, there were no differences in coral disease or coral bleaching prevalence between the previously enriched and control treatments. Given that our experimental enrichments were well within the ranges of ambient nutrient concentrations found on many degraded reefs worldwide, these data provide strong empirical support to the idea that coastal nutrient loading is one of the major factors contributing to the increasing levels of both coral disease and coral bleaching. Yet, these data also suggest that simple improvements to water quality may be an effective way to mitigate some coral disease epizootics and the corresponding loss of coral cover in the future. © 2013 John Wiley & Sons Ltd.

  20. Global Langevin model of multidimensional biomolecular dynamics.

    PubMed

    Schaudinnus, Norbert; Lickert, Benjamin; Biswas, Mithun; Stock, Gerhard

    2016-11-14

    Molecular dynamics simulations of biomolecular processes are often discussed in terms of diffusive motion on a low-dimensional free energy landscape F(x). To provide a theoretical basis for this interpretation, one may invoke the system-bath ansatz à la Zwanzig. That is, by assuming a time scale separation between the slow motion along the system coordinate x and the fast fluctuations of the bath, a memory-free Langevin equation can be derived that describes the system's motion on the free energy landscape F(x), which is damped by a friction field and driven by a stochastic force that is related to the friction via the fluctuation-dissipation theorem. While the theoretical formulation of Zwanzig typically assumes a highly idealized form of the bath Hamiltonian and the system-bath coupling, one would like to extend the approach to realistic data-based biomolecular systems. Here a practical method is proposed to construct an analytically defined global model of structural dynamics. Given a molecular dynamics simulation and adequate collective coordinates, the approach employs an "empirical valence bond"-type model which is suitable to represent multidimensional free energy landscapes as well as an approximate description of the friction field. Adopting alanine dipeptide and a three-dimensional model of heptaalanine as simple examples, the resulting Langevin model is shown to reproduce the results of the underlying all-atom simulations. Because the Langevin equation can also be shown to satisfy the underlying assumptions of the theory (such as a delta-correlated Gaussian-distributed noise), the global model provides a correct, albeit empirical, realization of Zwanzig's formulation. As an application, the model can be used to investigate the dependence of the system on parameter changes and to predict the effect of site-selective mutations on the dynamics.

  1. Global Langevin model of multidimensional biomolecular dynamics

    NASA Astrophysics Data System (ADS)

    Schaudinnus, Norbert; Lickert, Benjamin; Biswas, Mithun; Stock, Gerhard

    2016-11-01

    Molecular dynamics simulations of biomolecular processes are often discussed in terms of diffusive motion on a low-dimensional free energy landscape F(x). To provide a theoretical basis for this interpretation, one may invoke the system-bath ansatz à la Zwanzig. That is, by assuming a time scale separation between the slow motion along the system coordinate x and the fast fluctuations of the bath, a memory-free Langevin equation can be derived that describes the system's motion on the free energy landscape F(x), which is damped by a friction field and driven by a stochastic force that is related to the friction via the fluctuation-dissipation theorem. While the theoretical formulation of Zwanzig typically assumes a highly idealized form of the bath Hamiltonian and the system-bath coupling, one would like to extend the approach to realistic data-based biomolecular systems. Here a practical method is proposed to construct an analytically defined global model of structural dynamics. Given a molecular dynamics simulation and adequate collective coordinates, the approach employs an "empirical valence bond"-type model which is suitable to represent multidimensional free energy landscapes as well as an approximate description of the friction field. Adopting alanine dipeptide and a three-dimensional model of heptaalanine as simple examples, the resulting Langevin model is shown to reproduce the results of the underlying all-atom simulations. Because the Langevin equation can also be shown to satisfy the underlying assumptions of the theory (such as a delta-correlated Gaussian-distributed noise), the global model provides a correct, albeit empirical, realization of Zwanzig's formulation. As an application, the model can be used to investigate the dependence of the system on parameter changes and to predict the effect of site-selective mutations on the dynamics.
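
    A minimal numerical realization of such a one-dimensional overdamped Langevin model, integrating dx = -F'(x)/γ dt + sqrt(2 kT/γ) dW by Euler-Maruyama on a double-well landscape (the landscape, friction, and units are illustrative, not the paper's data-derived fields):

      import numpy as np

      def simulate_langevin(n_steps=200000, dt=1e-3, gamma=1.0, kT=1.0, x0=-1.0):
          """Overdamped Langevin dynamics on F(x) = (x^2 - 1)^2 (double well)."""
          rng = np.random.default_rng(0)
          grad_F = lambda x: 4.0 * x * (x * x - 1.0)   # F'(x)
          noise_amp = np.sqrt(2.0 * kT * dt / gamma)
          x = np.empty(n_steps)
          x[0] = x0
          for i in range(1, n_steps):
              x[i] = (x[i - 1] - grad_F(x[i - 1]) * dt / gamma
                      + noise_amp * rng.standard_normal())
          return x

      traj = simulate_langevin()
      # With kT comparable to the barrier, the trajectory hops between the
      # wells at x = -1 and x = +1; occupancy of each well approaches 1/2.
      print(np.mean(traj > 0))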

  2. Fluid dynamics of moving fish in a two-dimensional multiparticle collision dynamics model

    NASA Astrophysics Data System (ADS)

    Reid, Daniel A. P.; Hildenbrandt, H.; Padding, J. T.; Hemelrijk, C. K.

    2012-02-01

    The fluid dynamics of animal locomotion, such as that of an undulating fish, are of great interest to both biologists and engineers. However, experimentally studying these fluid dynamics is difficult and time consuming. Model studies can be of great help because of their simpler and more detailed analysis. Their insights may guide empirical work. In particular, the recently introduced multiparticle collision dynamics method may be suitable for the study of moving organisms because it is computationally fast, simple to implement, and has a continuous representation of space. As regards the study of hydrodynamics of moving organisms, the method has only been applied at low Reynolds numbers (below 120) for soft, permeable bodies and static fishlike shapes. In the present paper we use it to study the hydrodynamics of an undulating fish at Reynolds numbers 1100-1500, after confirming its performance for a moving insect wing at Reynolds number 75. We measure (1) drag, thrust, and lift forces, (2) swimming efficiency and spatial structure of the wake, and (3) distribution of forces along the fish body. We confirm the resemblance between the simulated undulating fish and empirical data. In contrast to theoretical predictions, our model shows that for steadily undulating fish, thrust is produced by the rear 2/3 of the body and that the slip ratio U/V (with U the forward swimming speed and V the rearward speed of the body wave) correlates negatively (instead of positively) with the actual Froude efficiency of swimming. Besides, we show that the common practice of modeling individuals while constraining their sideways acceleration causes them to resemble unconstrained fish with a higher tailbeat frequency.

  3. Characterizing and modelling river channel migration rates at a regional scale: Case study of south-east France.

    PubMed

    Alber, Adrien; Piégay, Hervé

    2017-11-01

    An increased awareness by river managers of the importance of river channel migration to sediment dynamics, habitat complexity and other ecosystem functions has led to an advance in the science and practice of identifying, protecting or restoring specific erodible corridors across which rivers are free to migrate. One current challenge is the application of these watershed-specific goals at regional planning scales (e.g., the European Water Framework Directive). This study provides a GIS-based spatial analysis of channel migration rates at the regional scale. As a case study, 99 reaches were sampled in the French part of the Rhône Basin and nearby tributaries of the Mediterranean Sea (111,300 km2). We explored the spatial correlation between the channel migration rate and a set of simple variables (e.g., watershed area, channel slope, stream power, active channel width). We found that the spatial variability of the channel migration rates was primarily explained by the gross stream power (R2 = 0.48) and, more surprisingly, by the active channel width scaled by the watershed area. The relationship between the absolute migration rate and the gross stream power is generally consistent with the published empirical models for freely meandering rivers, whereas it is less significant for the multi-thread reaches. The discussion focuses on methodological constraints of regional-scale modelling of migration rates and on the interpretation of the empirical models. We hypothesize that the active channel width scaled by the watershed area is a surrogate for the sediment supply, which may be a more critical factor than the bank resistance for explaining the regional-scale variability of the migration rates. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Estimated correlation matrices and portfolio optimization

    NASA Astrophysics Data System (ADS)

    Pafka, Szilárd; Kondor, Imre

    2004-11-01

    Correlations of returns on various assets play a central role in financial theory and also in many practical applications. From a theoretical point of view, the main interest lies in the proper description of the structure and dynamics of correlations, whereas for the practitioner the emphasis is on the ability of the models to provide adequate inputs for the numerous portfolio and risk management procedures used in the financial industry. The theory of portfolios, initiated by Markowitz, has suffered from the “curse of dimensions” from the very outset. Over the past decades a large number of different techniques have been developed to tackle this problem and reduce the effective dimension of large bank portfolios, but the efficiency and reliability of these procedures are extremely hard to assess or compare. In this paper, we propose a model (simulation)-based approach which can be used for the systematical testing of all these dimensional reduction techniques. To illustrate the usefulness of our framework, we develop several toy models that display some of the main characteristic features of empirical correlations and generate artificial time series from them. Then, we regard these time series as empirical data and reconstruct the corresponding correlation matrices which will inevitably contain a certain amount of noise, due to the finiteness of the time series. Next, we apply several correlation matrix estimators and dimension reduction techniques introduced in the literature and/or applied in practice. As in our artificial world the only source of error is the finite length of the time series and, in addition, the “true” model, hence also the “true” correlation matrix, are precisely known, therefore in sharp contrast with empirical studies, we can precisely compare the performance of the various noise reduction techniques. One of our recurrent observations is that the recently introduced filtering technique based on random matrix theory performs consistently well in all the investigated cases. Based on this experience, we believe that our simulation-based approach can also be useful for the systematic investigation of several related problems of current interest in finance.
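
    A compact sketch of the random-matrix-based filtering mentioned above: eigenvalues of the sample correlation matrix falling below the Marchenko-Pastur upper edge (1 + sqrt(N/T))^2 are treated as noise and flattened to their mean (an illustrative implementation, not necessarily the paper's exact recipe):

      import numpy as np

      def rmt_filter(returns):
          """Filter a correlation matrix estimated from a T x N return matrix."""
          T, N = returns.shape
          C = np.corrcoef(returns, rowvar=False)
          lam, V = np.linalg.eigh(C)
          lam_max = (1 + np.sqrt(N / T)) ** 2      # Marchenko-Pastur upper edge
          noise = lam < lam_max
          lam_f = lam.copy()
          lam_f[noise] = lam[noise].mean()          # flatten the noise bulk
          C_f = (V * lam_f) @ V.T                   # rebuild V diag(lam_f) V^T
          d = np.sqrt(np.diag(C_f))
          return C_f / np.outer(d, d)               # restore unit diagonal

      rng = np.random.default_rng(0)
      returns = rng.standard_normal((500, 100))     # pure-noise test case
      C_filtered = rmt_filter(returns)
      print(np.allclose(np.diag(C_filtered), 1.0))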

  5. Less can be more: How to make operations more flexible and robust with fewer resources

    NASA Astrophysics Data System (ADS)

    Haksöz, Çağrı; Katsikopoulos, Konstantinos; Gigerenzer, Gerd

    2018-06-01

    We review empirical evidence from practice and general theoretical conditions under which simple rules of thumb can help to make operations flexible and robust. An operation is flexible when it responds adaptively to adverse events such as natural disasters; an operation is robust when it is less affected by adverse events in the first place. We illustrate the relationship between flexibility and robustness in the context of supply chain risk. In addition to increasing flexibility and robustness, simple rules simultaneously reduce the need for resources such as time, money, information, and computation. We illustrate the simple-rules approach with an easy-to-use graphical aid for diagnosing and managing supply chain risk. More generally, we recommend a four-step process for determining the amount of resources that decision makers should invest in so as to increase flexibility and robustness.

  6. Ultrametric distribution of culture vectors in an extended Axelrod model of cultural dissemination.

    PubMed

    Stivala, Alex; Robins, Garry; Kashima, Yoshihisa; Kirley, Michael

    2014-05-02

    The Axelrod model of cultural diffusion is an apparently simple model that is capable of complex behaviour. A recent work used a real-world dataset of opinions as initial conditions, demonstrating the effects of the ultrametric distribution of empirical opinion vectors in promoting cultural diversity in the model. Here we quantify the degree of ultrametricity of the initial culture vectors and investigate the effect of varying degrees of ultrametricity on the absorbing state of both a simple and extended model. Unlike the simple model, ultrametricity alone is not sufficient to sustain long-term diversity in the extended Axelrod model; rather, the initial conditions must also have sufficiently large variance in intervector distances. Further, we find that a scheme for evolving synthetic opinion vectors from cultural "prototypes" shows the same behaviour as real opinion data in maintaining cultural diversity in the extended model; whereas neutral evolution of cultural vectors does not.

  7. Ultrametric distribution of culture vectors in an extended Axelrod model of cultural dissemination

    NASA Astrophysics Data System (ADS)

    Stivala, Alex; Robins, Garry; Kashima, Yoshihisa; Kirley, Michael

    2014-05-01

    The Axelrod model of cultural diffusion is an apparently simple model that is capable of complex behaviour. A recent work used a real-world dataset of opinions as initial conditions, demonstrating the effects of the ultrametric distribution of empirical opinion vectors in promoting cultural diversity in the model. Here we quantify the degree of ultrametricity of the initial culture vectors and investigate the effect of varying degrees of ultrametricity on the absorbing state of both a simple and extended model. Unlike the simple model, ultrametricity alone is not sufficient to sustain long-term diversity in the extended Axelrod model; rather, the initial conditions must also have sufficiently large variance in intervector distances. Further, we find that a scheme for evolving synthetic opinion vectors from cultural ``prototypes'' shows the same behaviour as real opinion data in maintaining cultural diversity in the extended model; whereas neutral evolution of cultural vectors does not.

  8. Simple artificial neural networks that match probability and exploit and explore when confronting a multiarmed bandit.

    PubMed

    Dawson, Michael R W; Dupuis, Brian; Spetch, Marcia L; Kelly, Debbie M

    2009-08-01

    The matching law (Herrnstein 1961) states that response rates become proportional to reinforcement rates; this is related to the empirical phenomenon called probability matching (Vulkan 2000). Here, we show that a simple artificial neural network generates responses consistent with probability matching. This behavior was then used to create an operant procedure for network learning. We use the multiarmed bandit (Gittins 1989), a classic problem of choice behavior, to illustrate that operant training balances exploiting the bandit arm expected to pay off most frequently with exploring other arms. Perceptrons provide a medium for relating results from neural networks, genetic algorithms, animal learning, contingency theory, reinforcement learning, and theories of choice.
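
    A small sketch of probability matching on a multiarmed bandit: an agent whose choice probabilities track running reward-rate estimates ends up selecting each arm roughly in proportion to its payoff probability, the signature of matching (a simple estimator-based agent, not the paper's perceptron architecture):

      import numpy as np

      def matching_agent(arm_probs, n_trials=50000, seed=0):
          """Choose arms in proportion to running reward-rate estimates."""
          rng = np.random.default_rng(seed)
          n_arms = len(arm_probs)
          rewards = np.ones(n_arms)     # optimistic initial estimates
          pulls = np.ones(n_arms)       # ensure early exploration
          choices = np.zeros(n_arms)
          for _ in range(n_trials):
              rates = rewards / pulls
              p = rates / rates.sum()           # probability matching rule
              arm = rng.choice(n_arms, p=p)
              reward = rng.random() < arm_probs[arm]
              pulls[arm] += 1
              rewards[arm] += reward
              choices[arm] += 1
          return choices / n_trials

      print(matching_agent([0.2, 0.4, 0.8]))
      # choice frequencies ~ [0.14, 0.29, 0.57], close to the 0.2:0.4:0.8 ratio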

  9. Summary of methods for calculating dynamic lateral stability and response and for estimating aerodynamic stability derivatives

    NASA Technical Reports Server (NTRS)

    Campbell, John P; Mckinney, Marion O

    1952-01-01

    A summary of methods for making dynamic lateral stability and response calculations and for estimating the aerodynamic stability derivatives required for use in these calculations is presented. The processes of performing calculations of the time histories of lateral motions, of the period and damping of these motions, and of the lateral stability boundaries are presented as a series of simple straightforward steps. Existing methods for estimating the stability derivatives are summarized and, in some cases, simple new empirical formulas are presented. Detailed estimation methods are presented for low-subsonic-speed conditions but only a brief discussion and a list of references are given for transonic and supersonic speed conditions.

  10. Directions for Optimization of Photosynthetic Carbon Fixation: RuBisCO's Efficiency May Not Be So Constrained After All.

    PubMed

    Cummins, Peter L; Kannappan, Babu; Gready, Jill E

    2018-01-01

    The ubiquitous enzyme Ribulose 1,5-bisphosphate carboxylase-oxygenase (RuBisCO) fixes atmospheric carbon dioxide within the Calvin-Benson cycle that is utilized by most photosynthetic organisms. Despite this central role, RuBisCO is surprisingly inefficient, with both a very slow turnover rate to products and impaired substrate specificity, features that have long been an enigma, as one would assume its efficiency to be under strong evolutionary pressure. RuBisCO's substrate specificity is compromised because it catalyzes a side-fixation reaction with atmospheric oxygen; empirical kinetic results show a trend toward a tradeoff between relative specificity and low catalytic turnover rate. Although the dominant hypothesis has been that the active-site chemistry constrains the enzyme's evolution, a more recent study on RuBisCO stability and adaptability has implicated competing selection pressures. Elucidating these constraints is crucial for directing future research on improving photosynthesis, as the current literature casts doubt on the potential effectiveness of site-directed mutagenesis to improve RuBisCO's efficiency. Here we use regression analysis to quantify the relationships between kinetic parameters obtained from empirical data sets spanning a wide evolutionary range of RuBisCOs. Most significantly, we found that the rate constant for dissociation of CO2 from the enzyme complex was much higher than previous estimates and comparable with the corresponding catalytic rate constant. Observed trends between relative specificity and turnover rate can be expressed as the product of negative and positive correlation factors. This provides an explanation in simple kinetic terms of both the natural variation of relative specificity and that obtained in reported site-directed mutagenesis results. We demonstrate that the kinetic behaviour shows a less rather than more constrained RuBisCO, consistent with growing empirical evidence of higher variability in relative specificity. In summary, our analysis supports an explanation for the origin of the tradeoff between specificity and turnover as due to competition between protein stability and activity, rather than constraints between rate constants imposed by the underlying chemistry. Our analysis suggests that simultaneous improvement in both specificity and turnover rate of RuBisCO is possible.

  11. Empirical Relationships Among Magnitude and Surface Rupture Characteristics of Strike-Slip Faults: Effect of Fault (System) Geometry and Observation Location, Derived From Numerical Modeling

    NASA Astrophysics Data System (ADS)

    Zielke, O.; Arrowsmith, J.

    2007-12-01

    In order to determine the magnitude of prehistoric earthquakes, surface rupture length and average and maximum surface displacement are utilized, assuming that an earthquake of a specific size will cause surface features of correlated size. The well-known Wells and Coppersmith (1994) paper and other studies defined empirical relationships between these and other parameters, based on historic events with independently known magnitude and rupture characteristics. However, these relationships show relatively large standard deviations, and they are based only on a small number of events. To improve these first-order empirical relationships, the observation location relative to the rupture extent within the regional tectonic framework should be accounted for. This however cannot be done based on natural seismicity because of the limited size of datasets on large earthquakes. We have developed the numerical model FIMozFric, based on derivations by Okada (1992), to create synthetic seismic records for a given fault or fault system under the influence of either slip or stress boundary conditions. Our model features (A) the introduction of an upper and lower aseismic zone, (B) a simple Coulomb friction law, (C) bulk parameters simulating fault heterogeneity, and (D) a fault interaction algorithm handling the large number of fault patches (typically 5,000-10,000). The joint implementation of these features produces well-behaved synthetic seismic catalogs and realistic relationships among magnitude and surface rupture characteristics, well within the error of the results by Wells and Coppersmith (1994). Furthermore, we use the synthetic seismic records to show that the relationships between magnitude and rupture characteristics are a function of the observation location within the regional tectonic framework. The model presented here can provide paleoseismologists with a tool to improve magnitude estimates from surface rupture characteristics, by incorporating the regional and local structural context which can be determined in the field: assuming a paleoseismologist measures the offset along a fault caused by an earthquake, our model can be used to determine the probability distribution of magnitudes capable of producing the observed offset, accounting for the regional tectonic setting and observation location.
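
    For context, the first-order empirical relationships referred to take the form M = a + b*log10(L), with L the surface rupture length in km; a sketch whose default coefficients are the commonly quoted Wells and Coppersmith (1994) all-slip-type values (treat the exact values as an assumption to be checked against the original tables):

      import math

      def magnitude_from_rupture_length(L_km, a=5.08, b=1.16):
          """Moment magnitude from surface rupture length (km) via
          M = a + b*log10(L); defaults are the commonly quoted
          Wells & Coppersmith (1994) all-slip-type values (assumption)."""
          return a + b * math.log10(L_km)

      for L in (10, 50, 100):
          print(L, "km ->", round(magnitude_from_rupture_length(L), 2))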

  12. Empirical Modeling of Plant Gas Fluxes in Controlled Environments

    NASA Technical Reports Server (NTRS)

    Cornett, Jessie David

    1994-01-01

    As humans extend their reach beyond the earth, bioregenerative life support systems must replace the resupply and physical/chemical systems now used. The Controlled Ecological Life Support System (CELSS) will utilize plants to recycle the carbon dioxide (CO2) and excrement produced by humans and return oxygen (O2), purified water and food. CELSS design requires knowledge of gas flux levels for net photosynthesis (PSn), dark respiration (Rd) and evapotranspiration (ET). Full season gas flux data regarding these processes for wheat (Triticum aestivum), soybean (Glycine max) and rice (Oryza sativa) from published sources were used to develop empirical models. Univariate models relating crop age (days after planting) and gas flux were fit by simple regression. Models are either high order (5th to 8th) or more complex polynomials whose curves describe crop development characteristics. The models provide good estimates of gas flux maxima, but are of limited utility. To broaden the applicability, data were transformed to dimensionless or correlation formats and, again, fit by regression. Polynomials, similar to those in the initial effort, were selected as the most appropriate models. These models indicate that, within a cultivar, gas flux patterns appear remarkably similar prior to maximum flux, but exhibit considerable variation beyond this point. This suggests that more broadly applicable models of plant gas flux are feasible, but univariate models defining gas flux as a function of crop age are too simplistic. Multivariate models using CO2 and crop age were fit for PSn and Rd by multiple regression. In each case, the selected model is a subset of a full third order model with all possible interactions. These models are improvements over the univariate models because they incorporate more than the single factor, crop age, as the primary variable governing gas flux. They are still limited, however, by their reliance on the other environmental conditions under which the original data were collected. Three-dimensional plots representing the response surface of each model are included. Suitability of using empirical models to generate engineering design estimates is discussed. Recommendations for the use of more complex multivariate models to increase versatility are included.

  13. POGO-FAN: Remarkable Empirical Indicators for the Local Chemical Production of Smog-Ozone and NOx-Sensitivity of Air Parcels

    NASA Astrophysics Data System (ADS)

    Chatfield, R. B.; Browell, E. V.; Brune, W. H.; Crawford, J. H.; Esswein, R.; Fried, A.; Olson, J. R.; Shetter, R. E.; Singh, H. B.

    2006-12-01

    We propose and evaluate two related and surprisingly simple empirical estimators for the local chemical production term for photochemical ozone; each uses two moderate-technology chemical measurements and a measurement of ultraviolet light. We nickname the techniques POGO-FAN: Production of Ozone by Gauging Oxidation: Formaldehyde and NO. (1) A non-linear function of a single three-factor index variable, j(HCHO→radicals)·[HCHO]·[NO], seems to provide a good estimator of the largest single term in the production of smog ozone, the HOO+NO term, over a very wide range of situations. (2) By considering empirical contour plots summarizing isopleths of HOO+NO, using j(HCHO→radicals)·[HCHO] and [NO] separately as coordinates, we provide a slightly more complex two-dimensional indicator of smog ozone production that additionally allows an estimate of the NOx-sensitivity or NOx-saturation (i.e., VOC-sensitivity) of sampled air parcels. Approximately 85% to more than 90% of the variance is explained. The correspondence to "EKMA" contour plots, which estimate afternoon ozone from morning mixes of organics and NOx, is not coincidental. We utilize a broad set of urban-plume, regionally polluted, and cleaner NASA DC-8 PBL samples from the Intercontinental Transport Experiment-North America (INTEX-NA), in which each of the variables was measured, to help establish our relationship. The estimator is described in terms of asymptotic smog photochemistry theory; primarily, this theory suggests appropriate statistical approaches that can capture some of the complex interrelations of the lower-tropospheric smog mix through correlation of reactive mixture components. HCHO is not only an important source of HOO radicals; more importantly, it serves as a "gauge" of all photochemical processing of volatile organic compounds. It probably captures information related to coincident VOC sources of various compounds and parallels in photochemical processing. Constrained modeling of observed atmospheric concentrations suggests that ozone production from the HOO+NO reaction and from other peroxy radical ozone formation reactions (ROO+NO), and thus total ozone production, are closely related. Additionally, modeling allows us to follow ozone production and NOx-sensitivity throughout the varying photolytic cycle.
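
    A minimal Python sketch of the index-variable idea follows. All numbers are hypothetical, and the power-law fit is one plausible stand-in for the non-linear function of the index described above, not the authors' exact estimator.

    ```python
    import numpy as np

    def pogo_fan_index(j_hcho, hcho, no):
        """Three-factor index variable: j(HCHO->radicals) * [HCHO] * [NO]."""
        return j_hcho * hcho * no

    # Hypothetical samples: photolysis rate (1/s) and mixing ratios (pptv).
    j = np.array([2.1e-5, 3.0e-5, 1.5e-5, 2.7e-5])
    hcho = np.array([1800.0, 3500.0, 900.0, 2600.0])
    no = np.array([120.0, 400.0, 60.0, 250.0])
    p_o3 = np.array([0.8, 3.9, 0.25, 2.4])  # "observed" HOO+NO term, ppbv/h

    x = pogo_fan_index(j, hcho, no)
    # Fit a non-linear (here power-law) dependence in log-log space.
    b, log_a = np.polyfit(np.log(x), np.log(p_o3), 1)
    print(f"P(O3) ~ {np.exp(log_a):.3g} * index^{b:.2f}")
    ```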

  14. Local normalization: Uncovering correlations in non-stationary financial time series

    NASA Astrophysics Data System (ADS)

    Schäfer, Rudi; Guhr, Thomas

    2010-09-01

    The measurement of correlations between financial time series is of vital importance for risk management. In this paper we address an estimation error that stems from the non-stationarity of the time series. We put forward a method to rid the time series of local trends and variable volatility, while preserving cross-correlations. We test this method in a Monte Carlo simulation, and apply it to empirical data for the S&P 500 stocks.
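
    Here is a minimal Python sketch of this kind of local normalization, assuming a trailing rolling window; the window length and the toy data are illustrative, not the paper's exact prescription.

    ```python
    import numpy as np
    import pandas as pd

    def locally_normalize(returns: pd.Series, window: int = 13) -> pd.Series:
        """Subtract the local mean and divide by the local standard
        deviation, estimated over a trailing window, to strip local
        trends and variable volatility."""
        local_mean = returns.rolling(window).mean()
        local_std = returns.rolling(window).std()
        return (returns - local_mean) / local_std

    # Toy example: two correlated return series with time-varying volatility.
    rng = np.random.default_rng(0)
    common = rng.normal(size=500)
    vol = np.exp(np.sin(np.linspace(0, 10, 500)))  # slowly varying volatility
    r1 = pd.Series(vol * (common + rng.normal(size=500)))
    r2 = pd.Series(vol * (common + rng.normal(size=500)))

    print(f"raw corr:        {r1.corr(r2):.3f}")
    print(f"normalized corr: {locally_normalize(r1).corr(locally_normalize(r2)):.3f}")
    ```

    The point of the construction is that the rolling rescaling is applied to each series separately, so the cross-correlation between them is left intact while local trends and volatility clusters are removed.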

  15. The Interpersonal Adaptiveness of Dispositional Guilt and Shame: A Meta-Analytic Investigation.

    PubMed

    Tignor, Stefanie M; Colvin, C Randall

    2017-06-01

    Despite decades of empirical research, conclusions regarding the adaptiveness of dispositional guilt and shame are mixed. We use meta-analysis to summarize the empirical literature and clarify these ambiguities. Specifically, we evaluate how guilt and shame are uniquely related to pro-social orientation and, in doing so, highlight the substantial yet under-acknowledged impact of researchers' methodological choices. A series of meta-analyses was conducted investigating the relationship between dispositional guilt (or shame) and pro-social orientation. Two main methodological moderators of interest were tested: test format (scenario vs. checklist) and statistical analysis (semi-partial vs. zero-order correlations). Among studies employing zero-order correlations, dispositional guilt was positively correlated with pro-social orientation (k = 63, Mr = .13, p < .001), whereas dispositional shame was negatively correlated (k = 47, Mr = -.05, p = .07). Test format was a significant moderator for guilt studies only, with scenario measures producing significantly stronger effects. Semi-partial correlations resulted in significantly stronger effects among guilt and shame studies. Although dispositional guilt and shame are differentially related to pro-social orientation, such relationships depend largely on the methodological choices of the researcher, particularly in the case of guilt. Implications for the study of these traits are discussed. © 2016 Wiley Periodicals, Inc.
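
    For readers unfamiliar with how study-level correlations such as the Mr values above are aggregated, here is a minimal Python sketch of the standard fixed-effect Fisher-z approach; the correlations and sample sizes are hypothetical, and this is not the authors' full moderator analysis.

    ```python
    import numpy as np

    def meta_mean_correlation(r, n):
        """Fixed-effect mean correlation via the Fisher z transform:
        z = arctanh(r), inverse-variance weights w = n - 3."""
        r, n = np.asarray(r, float), np.asarray(n, float)
        z = np.arctanh(r)
        w = n - 3.0
        z_bar = np.sum(w * z) / np.sum(w)
        return np.tanh(z_bar)

    # Hypothetical study-level correlations and sample sizes.
    print(f"Mr = {meta_mean_correlation([0.20, 0.10, 0.15, 0.05],
                                        [120, 250, 80, 300]):.3f}")
    ```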

  16. Socio-demographic and academic correlates of clinical reasoning in a dental school in South Africa.

    PubMed

    Postma, T C; White, J G

    2017-02-01

    There are no empirical studies describing factors that may influence the development of integrated clinical reasoning skills in dental education. Hence, this study examines outcomes of clinical reasoning in relation to differences in instructional design and student factors. Progress test scores, including diagnostic and treatment planning scores, of fourth- and fifth-year dental students (2009-2011) at the University of Pretoria, South Africa, served as the outcome measures in stepwise linear regression analyses. These scores were correlated with the instructional design (lecture-based teaching and learning, LBTL = 0, or case-based teaching and learning, CBTL = 1), students' grades in Oral Biology, indicators of socio-economic status (SES), and gender. CBTL showed an independent association with progress test scores. Oral Biology scores correlated with diagnostic component scores. Diagnostic component scores correlated with treatment planning scores in the fourth year of study but not in the fifth year. SES correlated with progress test scores in year five only, while gender showed no correlation. The empirical evidence gathered in this study supports scaffolded inductive teaching and learning methods for developing clinical reasoning skills. Knowledge of Oral Biology and reading skills may be important attributes to develop to ensure that students are able to reason accurately in a clinical setting. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
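
    A minimal Python sketch of the kind of regression described above, using synthetic student records; a single OLS fit stands in for the stepwise procedure, and all variable names and coefficients are invented for illustration.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Hypothetical student records: instructional design coded LBTL=0 /
    # CBTL=1, Oral Biology grade, SES indicator, and progress test score.
    rng = np.random.default_rng(1)
    n = 200
    cbtl = rng.integers(0, 2, n)
    oral_bio = rng.normal(60, 10, n)
    ses = rng.integers(0, 2, n)
    score = 30 + 5 * cbtl + 0.4 * oral_bio + 2 * ses + rng.normal(0, 8, n)

    # Regress progress test scores on the predictors.
    X = sm.add_constant(np.column_stack([cbtl, oral_bio, ses]))
    fit = sm.OLS(score, X).fit()
    print(fit.params)  # intercept, CBTL effect, Oral Biology slope, SES effect
    ```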

  17. Comparison of Multidimensional Item Response Models: Multivariate Normal Ability Distributions versus Multivariate Polytomous Ability Distributions. Research Report. ETS RR-08-45

    ERIC Educational Resources Information Center

    Haberman, Shelby J.; von Davier, Matthias; Lee, Yi-Hsuan

    2008-01-01

    Multidimensional item response models can be based on multivariate normal ability distributions or on multivariate polytomous ability distributions. For the case of simple structure in which each item corresponds to a unique dimension of the ability vector, some applications of the two-parameter logistic model to empirical data are employed to…

  18. Modeling contemporary climate profiles of whitebark pine (Pinus albicaulis) and predicting responses to global warming

    Treesearch

    Marcus V. Warwell; Gerald E. Rehfeldt; Nicholas L. Crookston

    2006-01-01

    The Random Forests multiple regression tree was used to develop an empirically based bioclimate model for the distribution of Pinus albicaulis (whitebark pine) in western North America, latitudes 31° to 51° N and longitudes 102° to 125° W. Independent variables included 35 simple expressions of temperature and precipitation and their interactions....
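
    A minimal Python sketch of a Random Forests presence/absence model in this spirit; the two climate predictors and the toy climate envelope are illustrative stand-ins, far simpler than the 35-variable model described above.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical presence/absence records with two climate predictors
    # (mean annual temperature in degC, annual precipitation in mm).
    rng = np.random.default_rng(0)
    temp = rng.uniform(-5, 15, 400)
    precip = rng.uniform(200, 2000, 400)
    present = ((temp < 5) & (precip > 600)).astype(int)  # toy climate envelope

    X = np.column_stack([temp, precip])
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, present)

    # Probability of presence at a new site with MAT 2 degC, 900 mm precip.
    print(f"P(presence) = {clf.predict_proba([[2.0, 900.0]])[0, 1]:.2f}")
    ```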

  19. Quantifying Confidence in Model Predictions for Hypersonic Aircraft Structures

    DTIC Science & Technology

    2015-03-01

    of isolating calibrations of models in the network, segmented and simultaneous calibration are compared using the Kullback-Leibler ...value of θ. While not all test statistics are as simple as measuring goodness or badness of fit, their directional interpretations tend to remain...data quite well, qualitatively. Quantitative goodness-of-fit tests are problematic because they assume a true empirical CDF is being tested or

  20. Two Sides of the Same Coin: U. S. "Residual" Inequality and the Gender Gap

    ERIC Educational Resources Information Center

    Bacolod, Marigee P.; Blum, Bernardo S.

    2010-01-01

    We show that the narrowing gender gap and the growth in earnings inequality are consistent with a simple model in which skills are heterogeneous, and the growth in skill prices has been particularly strong for skills with which women are well endowed. Empirical analysis of DOT, CPS, and NLSY79 data finds evidence to support this model. A large…
