Sample records for empirical correlation methods

  1. Climate Prediction for Brazil's Nordeste: Performance of Empirical and Numerical Modeling Methods.

    NASA Astrophysics Data System (ADS)

    Moura, Antonio Divino; Hastenrath, Stefan

    2004-07-01

    Comparisons of performance of climate forecast methods require consistency in the predictand and a long common reference period. For Brazil's Nordeste, empirical methods developed at the University of Wisconsin use preseason (October–January) rainfall and January indices of the fields of meridional wind component and sea surface temperature (SST) in the tropical Atlantic and the equatorial Pacific as input to stepwise multiple regression and neural networking. These are used to predict the March–June rainfall at a network of 27 stations. An experiment at the International Research Institute for Climate Prediction, Columbia University, with a numerical model (ECHAM4.5) used global SST information through February to predict the March–June rainfall at three grid points in the Nordeste. The predictands for the empirical and numerical model forecasts are correlated at +0.96, and the period common to the independent portion of record of the empirical prediction and the numerical modeling is 1968–99. Over this period, predicted versus observed rainfall are evaluated in terms of correlation, root-mean-square error, absolute error, and bias. Performance is high for both approaches. Numerical modeling produces a correlation of +0.68, moderate errors, and a strong negative bias. For the empirical methods, errors and bias are small, and correlations of +0.73 and +0.82 are reached between predicted and observed rainfall.
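
    The verification metrics named in this record (correlation, root-mean-square error, absolute error, bias) are standard; a minimal sketch of how they might be computed, with synthetic data standing in for the station rainfall and a hypothetical helper function, not the authors' code:

```python
import numpy as np

def forecast_skill(predicted, observed):
    """Verification metrics from the record: correlation, RMSE, mean
    absolute error, and bias (hypothetical helper, not the paper's code)."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    corr = np.corrcoef(predicted, observed)[0, 1]
    rmse = np.sqrt(np.mean((predicted - observed) ** 2))
    mae = np.mean(np.abs(predicted - observed))
    bias = np.mean(predicted - observed)  # negative => systematic underprediction
    return {"corr": corr, "rmse": rmse, "mae": mae, "bias": bias}

# Synthetic March-June rainfall anomalies over a 32-year verification period
rng = np.random.default_rng(0)
obs = rng.normal(size=32)
pred = 0.8 * obs + rng.normal(scale=0.4, size=32)
print(forecast_skill(pred, obs))
```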


  2. High Speed Jet Noise Prediction Using Large Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Lele, Sanjiva K.

    2002-01-01

    Current methods for predicting the noise of high speed jets are largely empirical. These empirical methods are based on jet noise data gathered by varying primarily the jet flow speed and jet temperature for a fixed nozzle geometry. Efforts have been made to fold the noise data of co-annular (multi-stream) jets, and the changes associated with forward flight, into these empirical correlations. But ultimately these empirical methods fail to provide suitable guidance in the selection of new, low-noise nozzle designs. This motivates the development of a new class of prediction methods based on computational simulations, in an attempt to remove the empiricism of present-day noise predictions.

  3. 40 CFR Appendix C to Part 75 - Missing Data Estimation Procedures

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... certification of a parametric, empirical, or process simulation method or model for calculating substitute data... available process simulation methods and models. 1.2Petition Requirements Continuously monitor, determine... desulfurization, a corresponding empirical correlation or process simulation parametric method using appropriate...

  4. Path integral for equities: Dynamic correlation and empirical analysis

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Cao, Yang; Lau, Ada; Tang, Pan

    2012-02-01

    This paper develops a model to describe the unequal-time correlation between the rates of return of different stocks. A non-trivial fourth-order derivative Lagrangian is defined to provide an unequal-time propagator, which can be fitted to the market data. A calibration algorithm is designed to find the empirical parameters for this model, and different de-noising methods are used to capture the signals concealed in the rates of return. The detailed results of this Gaussian model show that different stocks can have strong correlations and that the empirical unequal-time correlator can be described by the model's propagator. This preliminary study provides a novel model for the correlator of different instruments at different times.

  5. Tracer kinetics of forearm endothelial function: comparison of an empirical method and a quantitative modeling technique.

    PubMed

    Zhao, Xueli; Arsenault, Andre; Lavoie, Kim L; Meloche, Bernard; Bacon, Simon L

    2007-01-01

    Forearm Endothelial Function (FEF) is a marker that has been shown to discriminate patients with cardiovascular disease (CVD). FEF has been assessed using several parameters: the Rate of Uptake Ratio (RUR), the Elbow-to-Wrist Uptake Ratio (EWUR) and the Elbow-to-Wrist Relative Uptake Ratio (EWRUR). However, the modeling of FEF requires more robust models. The present study was designed to compare an empirical method with quantitative modeling techniques to better estimate the physiological parameters and understand the complex dynamic processes. The fitted time activity curves of the forearms, estimating blood and muscle components, were assessed using both an empirical method and a two-compartment model. Although correlational analyses suggested a good correlation between the methods for RUR (r=.90) and EWUR (r=.79), but not EWRUR (r=.34), Bland-Altman plots found poor agreement between the methods for all three parameters. These results indicate that there is a large discrepancy between the empirical and computational methods for FEF. Further work is needed to establish the physiological and mathematical validity of the two modeling methods.

  6. Estimating and Identifying Unspecified Correlation Structure for Longitudinal Data

    PubMed Central

    Hu, Jianhua; Wang, Peng; Qu, Annie

    2014-01-01

    Identifying correlation structure is important for achieving estimation efficiency in analyzing longitudinal data, and is also crucial for drawing valid statistical inference for large clustered data sets. In this paper, we propose a nonparametric method to estimate the correlation structure, which is applicable to discrete longitudinal data. We utilize eigenvector-based basis matrices to approximate the inverse of the empirical correlation matrix and determine the number of basis matrices via model selection. A penalized objective function based on the difference between the empirical and model approximations of the correlation matrix is adopted to select an informative structure for the correlation matrix. The eigenvector representation of the correlation estimation is capable of reducing the risk of model misspecification, and also provides useful information on the specific within-cluster correlation pattern of the data. We show that the proposed method possesses the oracle property and selects the true correlation structure consistently. The proposed method is illustrated through simulations and two data examples on air pollution and sonar signal studies. PMID:26361433

  7. Random matrix theory analysis of cross-correlations in the US stock market: Evidence from Pearson’s correlation coefficient and detrended cross-correlation coefficient

    NASA Astrophysics Data System (ADS)

    Wang, Gang-Jin; Xie, Chi; Chen, Shou; Yang, Jiao-Jiao; Yang, Ming-Yan

    2013-09-01

    In this study, we first build two empirical cross-correlation matrices in the US stock market by two different methods, namely the Pearson correlation coefficient and the detrended cross-correlation coefficient (DCCA coefficient). Then, combining the two matrices with the method of random matrix theory (RMT), we investigate the statistical properties of cross-correlations in the US stock market. We choose the daily closing prices of 462 constituent stocks of the S&P 500 index as the research objects and select the sample data from January 3, 2005 to August 31, 2012. In the empirical analysis, we examine the statistical properties of cross-correlation coefficients, the distribution of eigenvalues, the distribution of eigenvector components, and the inverse participation ratio. From the two methods, we find some new results on the cross-correlations in the US stock market that differ from the conclusions reached by previous studies. The empirical cross-correlation matrices constructed by the DCCA coefficient show several interesting properties at different time scales, which are useful for risk management and optimal portfolio selection, especially for diversifying the asset portfolio. Finding the theoretical eigenvalue distribution of a completely random matrix R for the DCCA coefficient remains an interesting and meaningful open problem, because it does not obey the Marčenko-Pastur distribution.
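
    For the Pearson matrix, the random-matrix benchmark referred to here is the Marčenko-Pastur law; a minimal sketch of the comparison, with synthetic returns standing in for the 462 S&P 500 stocks (as the record notes, no analogous closed-form law is known for the DCCA-coefficient matrix):

```python
import numpy as np

# Eigenvalue spectrum of a Pearson correlation matrix versus the
# Marchenko-Pastur bounds for purely random returns.
# N stocks, T observations (assumed T > N, unit-variance returns).
N, T = 462, 1930
rng = np.random.default_rng(1)
returns = rng.normal(size=(T, N))          # replace with real log-returns

C = np.corrcoef(returns, rowvar=False)     # N x N Pearson correlation matrix
eigvals = np.linalg.eigvalsh(C)

Q = T / N
lam_min = (1 - np.sqrt(1 / Q)) ** 2        # Marchenko-Pastur lower edge
lam_max = (1 + np.sqrt(1 / Q)) ** 2        # Marchenko-Pastur upper edge

# Eigenvalues outside [lam_min, lam_max] carry genuine correlation structure
print(f"MP band: [{lam_min:.3f}, {lam_max:.3f}]")
print("deviating eigenvalues:",
      eigvals[(eigvals < lam_min) | (eigvals > lam_max)])
```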

  8. Empirical Bayes method for reducing false discovery rates of correlation matrices with block diagonal structure.

    PubMed

    Pacini, Clare; Ajioka, James W; Micklem, Gos

    2017-04-12

    Correlation matrices are important in inferring relationships and networks between regulatory or signalling elements in biological systems. With currently available technology, sample sizes for experiments are typically small, meaning that these correlations can be difficult to estimate. At a genome-wide scale, estimation of correlation matrices can also be computationally demanding. We develop an empirical Bayes approach to improve covariance estimates for gene expression, where we assume the covariance matrix takes a block diagonal form. Our method shows lower false discovery rates than existing methods on simulated data. Applied to a real data set from Bacillus subtilis, we demonstrate its ability to detect known regulatory units and interactions between them. We demonstrate that, compared to existing methods, our method is able to find significant covariances and also to control false discovery rates, even when the sample size is small (n=10). The method can be used to find potential regulatory networks, and it may also be used as a pre-processing step for methods that calculate, for example, partial correlations, so enabling the inference of the causal and hierarchical structure of the networks.

  9. The Philosophy, Theoretical Bases, and Implementation of the AHAAH Model for Evaluation of Hazard from Exposure to Intense Sounds

    DTIC Science & Technology

    2018-04-01

    empirical, external energy-damage correlation methods for evaluating hearing damage risk associated with impulsive noise exposure. AHAAH applies the...is validated against the measured results of human exposures to impulsive sounds, and unlike wholly empirical correlation approaches, AHAAH’s...a measured level (LAEQ8 of 85 dB). The approach in MIL-STD-1474E is very different. Previous standards tried to find a correlation between some

  10. Multiscale Characterization of PM2.5 in Southern Taiwan based on Noise-assisted Multivariate Empirical Mode Decomposition and Time-dependent Intrinsic Correlation

    NASA Astrophysics Data System (ADS)

    Hsiao, Y. R.; Tsai, C.

    2017-12-01

    As the WHO Air Quality Guideline indicates, ambient air pollution exposes world populations to the threat of fatal illnesses (e.g. heart disease, lung cancer, asthma), raising concerns about air pollution sources and related factors. This study presents a novel approach to investigating the multiscale variations of PM2.5 in southern Taiwan over the past decade, together with four influencing meteorological factors (temperature, relative humidity, precipitation and wind speed), based on the Noise-assisted Multivariate Empirical Mode Decomposition (NAMEMD) algorithm, Hilbert Spectral Analysis (HSA) and the Time-dependent Intrinsic Correlation (TDIC) method. The NAMEMD algorithm is a fully data-driven approach designed for nonlinear and nonstationary multivariate signals, and is performed to decompose multivariate signals into a collection of channels of Intrinsic Mode Functions (IMFs). The TDIC method is an EMD-based method using a set of sliding window sizes to quantify localized correlation coefficients for multiscale signals. With the alignment property and quasi-dyadic filter bank of the NAMEMD algorithm, one is able to produce the same number of IMFs for all variables and estimate the cross-correlation in a more accurate way. The performance of the spectral representation of the NAMEMD-HSA method is compared with Complementary Ensemble Empirical Mode Decomposition/Hilbert Spectral Analysis (CEEMD-HSA) and wavelet analysis. The nature of the NAMEMD-based TDIC analysis is then compared with CEEMD-based TDIC analysis and traditional correlation analysis.
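
    At its core, TDIC is a correlation computed in sliding windows on matched IMFs; a simplified sketch of that idea (the fixed window length is an assumption here; the published method adapts the window to the instantaneous period of each IMF):

```python
import numpy as np

def sliding_correlation(x, y, window):
    """Localized correlation over a sliding window -- the basic idea behind
    TDIC, applied to two already-decomposed IMFs of equal length."""
    half = window // 2
    out = np.full(len(x), np.nan)
    for i in range(half, len(x) - half):
        out[i] = np.corrcoef(x[i - half:i + half + 1],
                             y[i - half:i + half + 1])[0, 1]
    return out

# e.g. correlate a synthetic annual-scale PM2.5 IMF with a wind-speed IMF
t = np.arange(3650)
imf_pm25 = np.sin(2 * np.pi * t / 365) \
    + 0.3 * np.random.default_rng(2).normal(size=t.size)
imf_wind = -np.sin(2 * np.pi * t / 365) \
    + 0.3 * np.random.default_rng(3).normal(size=t.size)
print(np.nanmean(sliding_correlation(imf_pm25, imf_wind, window=365)))
```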

  11. Analysis of Vibration and Noise of Construction Machinery Based on Ensemble Empirical Mode Decomposition and Spectral Correlation Analysis Method

    NASA Astrophysics Data System (ADS)

    Chen, Yuebiao; Zhou, Yiqi; Yu, Gang; Lu, Dan

    In order to analyze the effect of engine vibration on cab noise of construction machinery in multiple frequency bands, a new method based on ensemble empirical mode decomposition (EEMD) and spectral correlation analysis is proposed. Firstly, the intrinsic mode functions (IMFs) of the vibration and noise signals were obtained by the EEMD method, and the IMFs occupying the same frequency bands were selected. Secondly, we calculated the spectral correlation coefficients between the selected IMFs to identify the main frequency bands in which engine vibration has a significant impact on cab noise. Thirdly, the dominant frequencies were picked out and analyzed by the spectral analysis method. The results show that the main frequency bands and dominant frequencies in which engine vibration has a serious impact on cab noise can be identified effectively by the proposed method, which provides effective guidance for noise reduction of construction machinery.

  12. An Empirical Bayes Approach to Spatial Analysis

    NASA Technical Reports Server (NTRS)

    Morris, C. N.; Kostal, H.

    1983-01-01

    Multi-channel LANDSAT data are collected in several passes over agricultural areas during the growing season. How empirical Bayes modeling can be used to develop crop identification and discrimination techniques that account for spatial correlation in such data is considered. The approach models the unobservable parameters and the data separately, hoping to take advantage of the fact that the bulk of spatial correlation lies in the parameter process. The problem is then framed in terms of estimating posterior probabilities of crop types for each spatial area. Some empirical Bayes spatial estimation methods are used to estimate the logits of these probabilities.

  13. Prediction of shear wave velocity using empirical correlations and artificial intelligence methods

    NASA Astrophysics Data System (ADS)

    Maleki, Shahoo; Moradzadeh, Ali; Riabi, Reza Ghavami; Gholami, Raoof; Sadeghzadeh, Farhad

    2014-06-01

    A good understanding of the mechanical properties of rock formations is essential during the development and production phases of a hydrocarbon reservoir. Conventionally, these properties are estimated from petrophysical logs, with compressional and shear sonic data being the main inputs to the correlations. In many cases, however, shear sonic data are not acquired during well logging, often for cost-saving purposes. In such cases, shear wave velocity is estimated using the empirical correlations or artificial intelligence methods proposed during the last few decades. In this paper, petrophysical logs from a well drilled in the southern part of Iran were used to estimate the shear wave velocity using empirical correlations as well as two robust artificial intelligence methods known as Support Vector Regression (SVR) and Back-Propagation Neural Network (BPNN). Although the results obtained by SVR seem to be reliable, the estimated values are not very precise, and considering the importance of shear sonic data as the input to different models, this study suggests acquiring shear sonic data during well logging. It is important to note that the benefits of having reliable shear sonic data for estimating the mechanical properties of rock formations will compensate for the possible additional cost of acquiring a shear log.
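
    As a rough illustration of the SVR route described here, a sketch with scikit-learn on synthetic logs (the feature set, hyperparameters, and data are placeholders, not the paper's configuration):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Hypothetical petrophysical features (e.g. compressional slowness, density,
# porosity, gamma ray) and a shear-velocity target in km/s.
rng = np.random.default_rng(4)
X = rng.normal(size=(500, 4))
vs = 2.0 + 1.5 * X[:, 0] - 0.4 * X[:, 1] + 0.1 * rng.normal(size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, vs, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(X_tr, y_tr)
print("R^2 on held-out samples:", r2_score(y_te, model.predict(X_te)))
```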

  14. Optimum wall impedance for spinning modes: A correlation with mode cut-off ratio

    NASA Technical Reports Server (NTRS)

    Rice, E. J.

    1978-01-01

    A correlating equation relating the optimum acoustic impedance for the wall lining of a circular duct to the acoustic mode cut-off ratio is presented. The optimum impedance was correlated with cut-off ratio because the cut-off ratio appears to be the fundamental parameter governing the propagation of sound in the duct. Modes with similar cut-off ratios respond in a similar way to the acoustic liner. The correlation is a semi-empirical expression developed from an empirical modification of an equation originally derived from sound propagation theory in a thin boundary layer. This correlating equation represents part of a simplified liner design method, based upon modal cut-off ratio, for multimodal noise propagation.

  15. Prediction of Very High Reynolds Number Compressible Skin Friction

    NASA Technical Reports Server (NTRS)

    Carlson, John R.

    1998-01-01

    Flat plate skin friction calculations over a range of Mach numbers from 0.4 to 3.5, at Reynolds numbers from 16 million to 492 million, using a Navier-Stokes method with advanced turbulence modeling are compared with incompressible skin friction coefficient correlations. The semi-empirical correlation theories of van Driest; Cope; Winkler and Cha; and Sommer and Short T' are used to transform the predicted skin friction coefficients of solutions using two algebraic Reynolds stress turbulence models in the Navier-Stokes method PAB3D. In general, the predicted skin friction coefficients scaled well with each reference temperature theory, though overall the theory by Sommer and Short appeared to best collapse the predicted coefficients. At the lower Reynolds numbers of 3 to 30 million, both the Girimaji and the Shih, Zhu and Lumley turbulence models predicted skin friction coefficients within 2% of the semi-empirical correlation values. At the higher Reynolds numbers of 100 to 500 million, the turbulence models by Shih, Zhu and Lumley and Girimaji predicted coefficients that were 6% less and 10% greater, respectively, than the semi-empirical coefficients.
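
    A minimal sketch of the reference temperature idea, using the commonly quoted Sommer and Short T' formula together with Sutherland's law and a simple power-law skin-friction correlation as assumptions (the report's own results come from a Navier-Stokes solver, not this shortcut):

```python
import numpy as np

def sommer_short_reference_temperature(T_e, M_e, T_w):
    """Sommer & Short T' reference temperature, commonly quoted form:
    T'/Te = 1 + 0.035*Me^2 + 0.45*(Tw/Te - 1)."""
    return T_e * (1.0 + 0.035 * M_e**2 + 0.45 * (T_w / T_e - 1.0))

def cf_incompressible(Re_x):
    """Simple flat-plate turbulent correlation (Schlichting power law);
    the report compares against several such correlations."""
    return 0.0592 * Re_x ** (-0.2)

def mu_sutherland(T):
    """Sutherland's law for air (assumed here for mu(T))."""
    return 1.458e-6 * T**1.5 / (T + 110.4)

T_e, M_e, T_w = 222.0, 2.0, 250.0   # illustrative edge conditions (K)
Re_x = 1.0e8
T_ref = sommer_short_reference_temperature(T_e, M_e, T_w)
# Ideal gas at constant pressure: rho ~ 1/T, so rescale Re and cf at T'.
Re_ref = Re_x * (T_e / T_ref) * (mu_sutherland(T_e) / mu_sutherland(T_ref))
cf_comp = cf_incompressible(Re_ref) * (T_e / T_ref)
print(f"T' = {T_ref:.1f} K, compressible cf ~ {cf_comp:.2e}")
```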

  16. A generalization of random matrix theory and its application to statistical physics.

    PubMed

    Wang, Duan; Zhang, Xin; Horvatic, Davor; Podobnik, Boris; Eugene Stanley, H

    2017-02-01

    To study the statistical structure of cross-correlations in empirical data, we generalize random matrix theory and propose a new method of cross-correlation analysis, known as autoregressive random matrix theory (ARRMT). ARRMT takes into account the influence of auto-correlations in the study of cross-correlations in multiple time series. We first analytically and numerically determine how auto-correlations affect the eigenvalue distribution of the correlation matrix. Then we introduce ARRMT with a detailed procedure for how to implement the method. Finally, we illustrate the method using two examples taken from inflation rates and from air pressure data for 95 US cities.

  17. Generalized Bootstrap Method for Assessment of Uncertainty in Semivariogram Inference

    USGS Publications Warehouse

    Olea, R.A.; Pardo-Iguzquiza, E.

    2011-01-01

    The semivariogram and its related function, the covariance, play a central role in classical geostatistics for modeling the average continuity of spatially correlated attributes. Whereas all methods are formulated in terms of the true semivariogram, in practice what can be used are estimated semivariograms and models based on samples. A generalized form of the bootstrap method to properly model spatially correlated data is used to advance knowledge about the reliability of empirical semivariograms and semivariogram models based on a single sample. Among several methods available to generate spatially correlated resamples, we selected a method based on the LU decomposition and used several examples to illustrate the approach. The first is a synthetic, isotropic, exhaustive sample following a normal distribution; the second is also synthetic but follows a non-Gaussian random field; and the third is an empirical sample of actual raingauge measurements. Results show wider confidence intervals than those found previously by others with inadequate application of the bootstrap. Also, even for the Gaussian example, distributions for estimated semivariogram values and model parameters are positively skewed. In this sense, bootstrap percentile confidence intervals, which are not centered around the empirical semivariogram and do not require distributional assumptions for their construction, provide an achieved coverage similar to the nominal coverage. The latter cannot be achieved by symmetrical confidence intervals based on the standard error, regardless of whether the standard error is estimated from a parametric equation or from the bootstrap. © 2010 International Association for Mathematical Geosciences.
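
    A minimal sketch of the LU (Cholesky) resampling step on which this generalized bootstrap rests, assuming an exponential covariance model and synthetic sampling locations:

```python
import numpy as np

def exponential_cov(coords, sill=1.0, corr_range=50.0):
    """Exponential covariance between all pairs of sampling locations
    (assumed model; in practice fitted to the empirical semivariogram)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return sill * np.exp(-3.0 * d / corr_range)

rng = np.random.default_rng(5)
coords = rng.uniform(0, 100, size=(60, 2))          # sampling locations
C = exponential_cov(coords)
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(C)))  # the "LU decomposition" step

# Each resample is a correlated Gaussian field read at the sampling locations;
# feed each row to the semivariogram estimator to build confidence intervals.
n_boot = 500
resamples = (L @ rng.normal(size=(len(C), n_boot))).T
print(resamples.shape)
```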

  18. An empirical description of the dispersion of 5th and 95th percentiles in worldwide anthropometric data applied to estimating accommodation with unknown correlation values.

    PubMed

    Albin, Thomas J; Vink, Peter

    2015-01-01

    Anthropometric data are assumed to have a Gaussian (Normal) distribution, but if they are non-Gaussian, accommodation estimates are affected. When data are limited, users may choose to combine anthropometric elements by Combining Percentiles (CP) (adding or subtracting), despite known adverse effects. This study examined whether global anthropometric data are Gaussian distributed. It compared the Median Correlation Method (MCM) of combining anthropometric elements with unknown correlations to CP, to determine if MCM provides better estimates of percentile values and accommodation. Percentile values of 604 male and female anthropometric measurements drawn from seven countries worldwide were expressed as standard scores. The standard scores were tested to determine if they were consistent with a Gaussian distribution, and empirical multipliers for determining percentile values were developed. In a test case, five anthropometric elements descriptive of seating were combined in addition and subtraction models. Percentile values were estimated for each model by CP, by MCM with Gaussian distributed data, or by MCM with empirically distributed data. The 5th and 95th percentile values of a dataset of global anthropometric data are shown to be asymmetrically distributed. MCM with empirical multipliers gave more accurate estimates of 5th and 95th percentile values. Anthropometric data are not Gaussian distributed, and the MCM method is more accurate than adding or subtracting percentiles.
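
    The underlying principle can be seen in a two-element Gaussian example: adding percentiles ignores correlation, whereas a correlation-aware combination uses the combined standard deviation. The means, SDs and r below are illustrative placeholders, not the paper's published multipliers:

```python
import numpy as np
from scipy import stats

# Illustrative means/SDs (mm) for two seated dimensions and a plausible
# inter-element correlation r (all values invented for the demo).
m1, s1 = 500.0, 30.0
m2, s2 = 250.0, 20.0
r = 0.5
z95 = stats.norm.ppf(0.95)

cp = (m1 + z95 * s1) + (m2 + z95 * s2)                  # combining percentiles
combined_sd = np.sqrt(s1**2 + s2**2 + 2 * r * s1 * s2)  # correlation-aware
combined = (m1 + m2) + z95 * combined_sd
print(f"CP estimate: {cp:.1f} mm, correlation-aware: {combined:.1f} mm")
# CP overestimates whenever r < 1, which is the known adverse effect.
```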

  19. Local normalization: Uncovering correlations in non-stationary financial time series

    NASA Astrophysics Data System (ADS)

    Schäfer, Rudi; Guhr, Thomas

    2010-09-01

    The measurement of correlations between financial time series is of vital importance for risk management. In this paper we address an estimation error that stems from the non-stationarity of the time series. We put forward a method to rid the time series of local trends and variable volatility, while preserving cross-correlations. We test this method in a Monte Carlo simulation, and apply it to empirical data for the S&P 500 stocks.
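
    A minimal sketch of local normalization as described, subtracting a running mean and dividing by a running standard deviation (the window length is an assumption, not the paper's calibrated choice):

```python
import numpy as np
import pandas as pd

def local_normalize(returns, window=13):
    """Remove local trends and variable volatility while preserving
    cross-correlations: subtract a local mean and divide by a local
    standard deviation computed in a centered moving window."""
    r = pd.Series(returns)
    local_mean = r.rolling(window, center=True).mean()
    local_std = r.rolling(window, center=True).std()
    return ((r - local_mean) / local_std).to_numpy()

rng = np.random.default_rng(6)
raw = np.cumsum(rng.normal(size=1000)) * 0.001 + rng.normal(size=1000)
print(np.nanstd(local_normalize(raw)))   # close to 1 by construction
```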

  20. Development of an Empirical Method for Predicting Jet Mixing Noise of Cold Flow Rectangular Jets

    NASA Technical Reports Server (NTRS)

    Russell, James W.

    1999-01-01

    This report presents an empirical method for predicting the jet mixing noise levels of cold flow rectangular jets. The report presents a detailed analysis of the methodology used in development of the prediction method. The empirical correlations used are based on narrow band acoustic data for cold flow rectangular model nozzle tests conducted in the NASA Langley Jet Noise Laboratory. There were 20 separate nozzle test operating conditions. For each operating condition 60 Hz bandwidth microphone measurements were made over a frequency range from 0 to 60,000 Hz. Measurements were performed at 16 polar directivity angles ranging from 45 degrees to 157.5 degrees. At each polar directivity angle, measurements were made at 9 azimuth directivity angles. The report shows the methods employed to remove screech tones and shock noise from the data in order to obtain the jet mixing noise component. The jet mixing noise was defined in terms of one third octave band spectral content, polar and azimuth directivity, and overall power level. Empirical correlations were performed over the range of test conditions to define each of these jet mixing noise parameters as a function of aspect ratio, jet velocity, and polar and azimuth directivity angles. The report presents the method for predicting the overall power level, the average polar directivity, the azimuth directivity and the location and shape of the spectra for jet mixing noise of cold flow rectangular jets.

  21. Sparsity guided empirical wavelet transform for fault diagnosis of rolling element bearings

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Zhao, Yang; Yi, Cai; Tsui, Kwok-Leung; Lin, Jianhui

    2018-02-01

    Rolling element bearings are widely used in various industrial machines, such as electric motors, generators, pumps, gearboxes, railway axles, turbines, and helicopter transmissions. Fault diagnosis of rolling element bearings is beneficial for preventing unexpected accidents and reducing economic losses. In past years, many bearing fault detection methods have been developed. Recently, a new adaptive signal processing method called the empirical wavelet transform has attracted much attention from researchers and engineers, and its applications to bearing fault diagnosis have been reported. The main problem with the empirical wavelet transform is that the Fourier segments it requires are strongly dependent on the local maxima of the amplitudes of the Fourier spectrum of a signal, which means that the Fourier segments are not always reliable and effective if the Fourier spectrum of the signal is complicated and overwhelmed by heavy noise and other strong vibration components. In this paper, a sparsity guided empirical wavelet transform is proposed to automatically establish the Fourier segments required in the empirical wavelet transform for fault diagnosis of rolling element bearings. Industrial bearing fault signals caused by single and multiple railway axle bearing defects are used to verify the effectiveness of the proposed sparsity guided empirical wavelet transform. Results show that the proposed method can automatically discover the required Fourier segments and reveal single and multiple railway axle bearing defects. Besides, comparisons with three popular signal processing methods, including ensemble empirical mode decomposition, the fast kurtogram and the fast spectral correlation, are conducted to highlight the superiority of the proposed method.

  22. Empirical source strength correlations for RANS-based acoustic analogy methods

    NASA Astrophysics Data System (ADS)

    Kube-McDowell, Matthew Tyndall

    JeNo is a jet noise prediction code based on an acoustic analogy method developed by Mani, Gliebe, Balsa, and Khavaran. Using the flow predictions from a standard Reynolds-averaged Navier-Stokes computational fluid dynamics solver, JeNo predicts the overall sound pressure level and angular spectra for high-speed hot jets over a range of observer angles, with a processing time suitable for rapid design purposes. JeNo models the noise from hot jets as a combination of two types of noise sources; quadrupole sources dependent on velocity fluctuations, which represent the major noise of turbulent mixing, and dipole sources dependent on enthalpy fluctuations, which represent the effects of thermal variation. These two sources are modeled by JeNo as propagating independently into the far-field, with no cross-correlation at the observer location. However, high-fidelity computational fluid dynamics solutions demonstrate that this assumption is false. In this thesis, the theory, assumptions, and limitations of the JeNo code are briefly discussed, and a modification to the acoustic analogy method is proposed in which the cross-correlation of the two primary noise sources is allowed to vary with the speed of the jet and the observer location. As a proof-of-concept implementation, an empirical correlation correction function is derived from comparisons between JeNo's noise predictions and a set of experimental measurements taken for the Air Force Aero-Propulsion Laboratory. The empirical correlation correction is then applied to JeNo's predictions of a separate data set of hot jets tested at NASA's Glenn Research Center. Metrics are derived to measure the qualitative and quantitative performance of JeNo's acoustic predictions, and the empirical correction is shown to provide a quantitative improvement in the noise prediction at low observer angles with no freestream flow, and a qualitative improvement in the presence of freestream flow. However, the results also demonstrate that there are underlying flaws in JeNo's ability to predict the behavior of a hot jet's acoustic signature at certain rear observer angles, and that this correlation correction is not able to correct these flaws.

  23. Kolmogorov-Smirnov test for spatially correlated data

    USGS Publications Warehouse

    Olea, R.A.; Pawlowsky-Glahn, V.

    2009-01-01

    The Kolmogorov-Smirnov test is a convenient method for investigating whether two underlying univariate probability distributions can be regarded as indistinguishable from each other or whether an underlying probability distribution differs from a hypothesized distribution. Application of the test requires that the sample be unbiased and the outcomes be independent and identically distributed, conditions that are violated to varying degrees by spatially continuous attributes, such as topographical elevation. A generalized form of the bootstrap method is used here for the purpose of modeling the distribution of the statistic D of the Kolmogorov-Smirnov test. The innovation is in the resampling, which in the traditional formulation of the bootstrap is done by drawing from the empirical sample with replacement, presuming independence. The generalization consists of preparing resamplings with the same spatial correlation as the empirical sample. This is accomplished by reading the values of unconditional stochastic realizations at the sampling locations, realizations that are generated by simulated annealing. The new approach was tested on two empirical samples taken from an exhaustive sample closely following a lognormal distribution. One sample was a regular, unbiased sample while the other was a clustered, preferential sample that had to be preprocessed. Our results show that the p-value for the spatially correlated case is always larger than the p-value of the statistic in the absence of spatial correlation, which is in agreement with the fact that the information content of an uncorrelated sample is larger than that of a spatially correlated sample of the same size. © Springer-Verlag 2008.

  24. Evaluation of Phytoavailability of Heavy Metals to Chinese Cabbage (Brassica chinensis L.) in Rural Soils

    PubMed Central

    Hseu, Zeng-Yei; Zehetner, Franz

    2014-01-01

    This study compared the extractability of Cd, Cu, Ni, Pb, and Zn by 8 extraction protocols for 22 representative rural soils in Taiwan and correlated the extractable amounts of the metals with their uptake by Chinese cabbage for developing an empirical model to predict metal phytoavailability based on soil properties. Chemical agents in these protocols included dilute acids, neutral salts, and chelating agents, in addition to water and the Rhizon soil solution sampler. The highest concentrations of extractable metals were observed in the HCl extraction and the lowest in the Rhizon sampling method. The linear correlation coefficients between extractable metals in soil pools and metals in shoots were higher than those in roots. Correlations between extractable metal concentrations and soil properties were variable; soil pH, clay content, total metal content, and extractable metal concentration were considered together to simulate their combined effects on crop uptake by an empirical model. This combination improved the correlations to different extents for different extraction methods, particularly for Pb, for which the extractable amounts with any extraction protocol did not correlate with crop uptake by simple correlation analysis. PMID:25295297

  25. Fluorescence background removal method for biological Raman spectroscopy based on empirical mode decomposition.

    PubMed

    Leon-Bejarano, Maritza; Dorantes-Mendez, Guadalupe; Ramirez-Elias, Miguel; Mendez, Martin O; Alba, Alfonso; Rodriguez-Leyva, Ildefonso; Jimenez, M

    2016-08-01

    Raman spectroscopy of biological tissue presents a fluorescence background, an undesirable effect that generates false Raman intensities. This paper proposes the application of the Empirical Mode Decomposition (EMD) method to baseline correction. EMD is a suitable approach since it is an adaptive signal processing method for nonlinear and non-stationary signal analysis that does not require parameter selection, unlike polynomial methods. EMD performance was assessed using synthetic Raman spectra with different signal-to-noise ratios (SNR). The correlation coefficient between the synthetic Raman spectra and those recovered after EMD denoising was higher than 0.92. Additionally, twenty Raman spectra from skin were used to evaluate EMD performance, and the results were compared with the Vancouver Raman algorithm (VRA); the comparison yielded a mean square error (MSE) of 0.001554. The high correlation coefficients on synthetic spectra and the low MSE in the comparison between EMD and VRA suggest that EMD could be an effective method to remove the fluorescence background in biological Raman spectra.
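
    A sketch of the baseline-removal idea using the PyEMD package (an assumed, commonly used API, not the authors' implementation; how many slow components to assign to the fluorescence baseline is a choice, not a published constant):

```python
import numpy as np
from PyEMD import EMD  # pip package "EMD-signal"

# Decompose the measured spectrum into IMFs and treat the slowest
# components (last IMFs + residue) as the fluorescence baseline.
wavenumber = np.linspace(400, 1800, 1400)
peaks = np.exp(-0.5 * ((wavenumber - 1004) / 6) ** 2)   # synthetic Raman band
baseline = 0.5 + 1e-3 * (wavenumber - 400)              # synthetic fluorescence
noisy = peaks + baseline \
    + 0.01 * np.random.default_rng(7).normal(size=wavenumber.size)

imfs = EMD().emd(noisy)        # rows: IMF_1 (fastest) ... residue (slowest)
n_baseline = 2                 # assumed: last two components model the baseline
corrected = imfs[:-n_baseline].sum(axis=0)
# High correlation with the true bands indicates successful baseline removal.
print(np.corrcoef(corrected, peaks)[0, 1])
```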

  26. Phase correlation of foreign exchange time series

    NASA Astrophysics Data System (ADS)

    Wu, Ming-Chya

    2007-03-01

    Correlation of foreign exchange rates in currency markets is investigated based on the empirical data of USD/DEM and USD/JPY exchange rates for the period from February 1, 1986 to December 31, 1996. The return of the exchange-rate time series is first decomposed into a number of intrinsic mode functions (IMFs) by the empirical mode decomposition method. The instantaneous phases of the resultant IMFs, calculated by the Hilbert transform, are then used to characterize the behaviors of pricing transmissions, and the correlation is probed by measuring the phase differences between two IMFs of the same order. From the distribution of phase differences, our results show explicitly that the correlations are stronger on the daily time scale than on longer time scales. The comparison of the periods 1986-1989 and 1990-1993 indicates that the two exchange rates were more correlated in the former period than in the latter. This result is consistent with the observations from the cross-correlation calculation.
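
    A minimal sketch of the phase-difference measure: extract the instantaneous phases of same-order IMFs with the Hilbert transform and examine their wrapped difference (the IMFs below are synthetic stand-ins):

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_phase(imf):
    """Instantaneous phase of one IMF via the Hilbert transform."""
    return np.unwrap(np.angle(hilbert(imf)))

# Phase difference between same-order IMFs of two exchange rates; a narrow
# distribution of the wrapped difference indicates strong coupling.
rng = np.random.default_rng(8)
t = np.arange(2048)
imf_dem = np.sin(2 * np.pi * t / 64 + 0.1 * rng.normal(size=t.size))
imf_jpy = np.sin(2 * np.pi * t / 64 + 0.4 + 0.1 * rng.normal(size=t.size))

dphi = np.angle(np.exp(1j * (instantaneous_phase(imf_dem)
                             - instantaneous_phase(imf_jpy))))
print(f"mean |phase difference|: {np.mean(np.abs(dphi)):.2f} rad")
```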

  27. Study of Gender Differences in Performance at the U.S. Naval Academy and U.S. Coast Guard Academy

    DTIC Science & Technology

    2005-06-01

    teacher preparation. By using both qualitative and quantitative methods for pre-service teachers, Kelly concludes that most teachers could not identify...Engineering MATH/SCIENCE Marine and Environmental Sciences Math and Computer Science Operations Research SOCIAL SCIENCE Government...Tabachnik and Findell, 2001). Correlational research is often a good precursor to answering other questions by empirical methods. Correlations measure the

  28. A comparison of high-frequency cross-correlation measures

    NASA Astrophysics Data System (ADS)

    Precup, Ovidiu V.; Iori, Giulia

    2004-12-01

    On a high-frequency scale, the time series are not homogeneous; therefore, standard correlation measures cannot be directly applied to the raw data. There are two ways to deal with this problem. The time series can be homogenised through an interpolation method (An Introduction to High-Frequency Finance, Academic Press, NY, 2001) (linear or previous tick) and the Pearson correlation statistic then computed. Alternatively, methods that can handle raw non-synchronous time series have recently been developed (Int. J. Theor. Appl. Finance 6(1) (2003) 87; J. Empirical Finance 4 (1997) 259). This paper compares two traditional methods that use interpolation with an alternative method applied directly to the actual time series.
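
    A sketch of the "previous tick" homogenization route with pandas, followed by the Pearson correlation of the homogenized returns (the tick data are synthetic placeholders):

```python
import numpy as np
import pandas as pd

def make_ticks(n, seed):
    """Synthetic irregularly spaced tick series over one trading day."""
    rng = np.random.default_rng(seed)
    t = pd.to_datetime("2004-01-02 09:00") + pd.to_timedelta(
        np.sort(rng.uniform(0, 6.5 * 3600, n)), unit="s")
    return pd.Series(100 + np.cumsum(rng.normal(scale=0.01, size=n)), index=t)

a, b = make_ticks(5000, 1), make_ticks(3000, 2)
grid = pd.date_range("2004-01-02 09:00", "2004-01-02 15:30", freq="1min")
# Previous-tick interpolation: carry the last observed price forward.
pa = a.reindex(a.index.union(grid)).ffill().reindex(grid)
pb = b.reindex(b.index.union(grid)).ffill().reindex(grid)
ra, rb = pa.pct_change().dropna(), pb.pct_change().dropna()
print("Pearson correlation of homogenized returns:", ra.corr(rb))
```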

  29. Joint multifractal analysis based on the partition function approach: analytical analysis, numerical simulation and empirical application

    NASA Astrophysics Data System (ADS)

    Xie, Wen-Jie; Jiang, Zhi-Qiang; Gu, Gao-Feng; Xiong, Xiong; Zhou, Wei-Xing

    2015-10-01

    Many complex systems generate multifractal time series which are long-range cross-correlated. Numerous methods have been proposed to characterize the multifractal nature of these long-range cross-correlations. However, several important issues about these methods are not well understood, and most methods consider only one moment order. We study the joint multifractal analysis based on the partition function with two moment orders, which was initially invented to investigate fluid fields, and derive analytically several important properties. We apply the method numerically to binomial measures with multifractal cross-correlations and bivariate fractional Brownian motions without multifractal cross-correlations. For binomial multifractal measures, the explicit expressions of the mass function, the singularity strength and the multifractal spectrum of the cross-correlations are derived, which agree excellently with the numerical results. We also apply the method to stock market indexes and unveil intriguing multifractality in the cross-correlations of index volatilities.

  30. The limitations of simple gene set enrichment analysis assuming gene independence.

    PubMed

    Tamayo, Pablo; Steinhardt, George; Liberzon, Arthur; Mesirov, Jill P

    2016-02-01

    Since its first publication in 2003, the Gene Set Enrichment Analysis method, based on the Kolmogorov-Smirnov statistic, has been heavily used, modified, and also questioned. Recently a simplified approach using a one-sample t-test score to assess enrichment and ignoring gene-gene correlations was proposed by Irizarry et al. (2009) as a serious contender. The argument criticizes Gene Set Enrichment Analysis's nonparametric nature and its use of an empirical null distribution as unnecessary and hard to compute. We refute these claims by careful consideration of the assumptions of the simplified method and its results, including a comparison with Gene Set Enrichment Analysis on a large benchmark set of 50 datasets. Our results provide strong empirical evidence that gene-gene correlations cannot be ignored, due to the significant variance inflation they produce in the enrichment scores, and should be taken into account when estimating gene set enrichment significance. In addition, we discuss the challenges that the complex correlation structure and multi-modality of gene sets pose more generally for gene set enrichment methods. © The Author(s) 2012.
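
    The variance inflation at issue has a simple closed form: for n equicorrelated scores with average pairwise correlation rho, Var(mean) = (1 + (n-1)*rho) * sigma^2 / n, i.e. (1 + (n-1)*rho) times the independence value. A short numerical check:

```python
import numpy as np

# Variance of a gene set's mean score under equicorrelation versus the
# value assumed by independence-based tests.
rng = np.random.default_rng(10)
n, rho, reps = 50, 0.2, 20000
cov = np.full((n, n), rho) + (1 - rho) * np.eye(n)   # unit-variance scores
scores = rng.multivariate_normal(np.zeros(n), cov, size=reps)
set_means = scores.mean(axis=1)

print("empirical Var(mean)  :", set_means.var())
print("independence value   :", 1 / n)
print("theoretical inflated :", (1 + (n - 1) * rho) / n)
```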

  31. Predicting the effects of magnesium oxide nanoparticles and temperature on the thermal conductivity of water using artificial neural network and experimental data

    NASA Astrophysics Data System (ADS)

    Afrand, Masoud; Hemmat Esfe, Mohammad; Abedini, Ehsan; Teimouri, Hamid

    2017-03-01

    The current paper first presents an empirical correlation, based on experimental results, for estimating the thermal conductivity enhancement of MgO-water nanofluid using a curve-fitting method. Then, artificial neural networks (ANNs) with various numbers of neurons were assessed, with temperature and MgO volume fraction as the input variables and thermal conductivity enhancement as the output variable, to select the most appropriate and optimized network. Results indicated that the network with 7 neurons had the minimum error. Eventually, the output of the artificial neural network was compared with the results of the proposed empirical correlation and those of the experiments. Comparisons revealed that ANN modeling was more accurate than the curve-fitting method in predicting the thermal conductivity enhancement of the nanofluid.
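
    A sketch of the described network setup, one hidden layer of 7 neurons mapping temperature and volume fraction to conductivity enhancement, using scikit-learn with synthetic training data standing in for the measurements:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic placeholder data: temperature (deg C), MgO volume fraction,
# and a smooth invented enhancement surface (not the paper's measurements).
rng = np.random.default_rng(11)
T = rng.uniform(20, 60, 200)
phi = rng.uniform(0.0, 0.03, 200)
enhancement = 1 + 8 * phi + 0.002 * T * (1 + 50 * phi)

X = np.column_stack([T, phi])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(7,), max_iter=5000,
                                   random_state=0))
model.fit(X, enhancement)
print(model.predict([[40.0, 0.02]]))   # query one (T, phi) point
```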

  32. Correlation of refrigerant mass flow rate through adiabatic capillary tubes using mixed refrigerant carbon dioxide and ethane for low temperature applications

    NASA Astrophysics Data System (ADS)

    Nasruddin, Syaka, Darwin R. B.; Alhamid, M. Idrus

    2012-06-01

    Various binary mixtures of carbon dioxide and hydrocarbons, especially propane or ethane, are presented in this paper as alternative natural refrigerants to chlorofluorocarbons (CFCs) and hydrofluorocarbons (HFCs). Their environmental performance is friendly, with an ozone depletion potential (ODP) of zero and a global warming potential (GWP) smaller than 20. Capillary tube performance for alternative HFC and HC refrigerants and for mixed refrigerants has been widely studied. However, studies that discuss the performance of the capillary tube with a mixture of natural refrigerants, in particular an azeotropic mixture of carbon dioxide and ethane, are still undeveloped. An empirical correlation method to determine the mass flow rate and pipe length has an important role in the design of capillary tubes for industrial refrigeration. Based on the variables that affect the rate of mass flow of refrigerant in the capillary tube, eight non-dimensional parameters were formulated with the Buckingham Pi theorem and developed into an empirical correlation equation. Furthermore, non-linear regression analysis was used to determine the coefficients and exponents of this empirical correlation, based on experimental verification against a results database.
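
    A sketch of the regression step: fitting a power-law correlation over dimensionless groups with scipy's curve_fit. Three placeholder groups stand in for the record's eight, and the data are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def correlation(PI, c0, a1, a2, a3):
    """Generic power-law form such capillary-tube correlations take:
    predicted mass-flow group = c0 * pi1^a1 * pi2^a2 * pi3^a3."""
    pi1, pi2, pi3 = PI
    return c0 * pi1**a1 * pi2**a2 * pi3**a3

# Synthetic dimensionless groups and a noisy "measured" mass-flow group.
rng = np.random.default_rng(12)
pi1, pi2, pi3 = rng.uniform(1, 10, (3, 200))
mass_flow_group = (0.7 * pi1**0.5 * pi2**-0.3 * pi3**0.1
                   * rng.lognormal(0, 0.02, 200))

popt, _ = curve_fit(correlation, (pi1, pi2, pi3), mass_flow_group,
                    p0=[1.0, 0.0, 0.0, 0.0])
print("fitted coefficient and exponents:", np.round(popt, 3))
```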

  33. Efficient parameter estimation in longitudinal data analysis using a hybrid GEE method.

    PubMed

    Leung, Denis H Y; Wang, You-Gan; Zhu, Min

    2009-07-01

    The method of generalized estimating equations (GEEs) provides consistent estimates of the regression parameters in a marginal regression model for longitudinal data, even when the working correlation model is misspecified (Liang and Zeger, 1986). However, the efficiency of a GEE estimate can be seriously affected by the choice of the working correlation model. This study addresses this problem by proposing a hybrid method that combines multiple GEEs based on different working correlation models, using the empirical likelihood method (Qin and Lawless, 1994). Analyses show that this hybrid method is more efficient than a GEE using a misspecified working correlation model. Furthermore, if one of the working correlation structures correctly models the within-subject correlations, then this hybrid method provides the most efficient parameter estimates. In simulations, the hybrid method's finite-sample performance is superior to a GEE under any of the commonly used working correlation models and is almost fully efficient in all scenarios studied. The hybrid method is illustrated using data from a longitudinal study of the respiratory infection rates in 275 Indonesian children.

  34. Artifact interactions retard technological improvement: An empirical study

    PubMed Central

    Magee, Christopher L.

    2017-01-01

    Empirical research has shown that performance improvement in many different technological domains occurs exponentially but with widely varying improvement rates. What causes some technologies to improve faster than others? Previous quantitative modeling research has identified artifact interactions, where a design change in one component influences others, as an important determinant of improvement rates. The models predict that the improvement rate for a domain is proportional to the inverse of the domain's interaction parameter. However, no empirical research has previously studied and tested the dependence of improvement rates on artifact interactions. A challenge to testing the dependence is that any method for measuring interactions has to be applicable to a wide variety of technologies. Here we propose a novel patent-based method that is both technology domain-agnostic and less costly than alternative methods. We use textual content from patent sets in 27 domains to find the influence of interactions on improvement rates. Qualitative analysis identified six specific keywords that signal artifact interactions. Patent sets from each domain were then examined to determine the total count of these 6 keywords in each domain, giving an estimate of artifact interactions in each domain. It is found that improvement rates are positively correlated with the inverse of the total count of keywords, with a Pearson correlation coefficient of +0.56 and a p-value of 0.002. The results agree with model predictions, and provide, for the first time, empirical evidence that artifact interactions have a retarding effect on improvement rates of technological domains. PMID:28777798

  35. Classical Item Analysis Using Latent Variable Modeling: A Note on a Direct Evaluation Procedure

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2011-01-01

    A directly applicable latent variable modeling procedure for classical item analysis is outlined. The method allows one to point and interval estimate item difficulty, item correlations, and item-total correlations for composites consisting of categorical items. The approach is readily employed in empirical research and as a by-product permits…

  36. Method for stationarity-segmentation of spike train data with application to the Pearson cross-correlation.

    PubMed

    Quiroga-Lombard, Claudio S; Hass, Joachim; Durstewitz, Daniel

    2013-07-01

    Correlations among neurons are supposed to play an important role in computation and information coding in the nervous system. Empirically, functional interactions between neurons are most commonly assessed by cross-correlation functions. Recent studies have suggested that pairwise correlations may indeed be sufficient to capture most of the information present in neural interactions. Many applications of correlation functions, however, implicitly tend to assume that the underlying processes are stationary. This assumption will usually fail for real neurons recorded in vivo, since their activity during behavioral tasks is heavily influenced by stimulus-, movement-, or cognition-related processes as well as by more general processes like slow oscillations or changes in state of alertness. To address the problem of nonstationarity, we introduce a method for assessing stationarity empirically and then "slicing" spike trains into stationary segments according to the statistical definition of weak-sense stationarity. We examine pairwise Pearson cross-correlations (PCCs) under both stationary and nonstationary conditions and identify another source of covariance that can be differentiated from the covariance of the spike times and emerges as a consequence of residual nonstationarities after the slicing process: the covariance of the firing rates defined on each segment. Based on this, a correction of the PCC is introduced that accounts for the effect of segmentation. We probe these methods both on simulated data sets and on in vivo recordings from the prefrontal cortex of behaving rats. Apart from removing nonstationarities, the present method may also be used for detecting significant events in spike trains.

  37. Principal Selection: A National Study of Selection Criteria and Procedures

    ERIC Educational Resources Information Center

    Palmer, Brandon

    2017-01-01

    Despite empirical evidence correlating the role of the principal with student achievement, researchers have seldom scrutinized principal selection methods over the past 60 years. This mixed methods study investigated the processes by which school principals are selected. A national sample of top-level school district administrators was used to…

  38. Interaction energies for the purine inhibitor roscovitine with cyclin-dependent kinase 2: correlated ab initio quantum-chemical, DFT and empirical calculations.

    PubMed

    Dobes, Petr; Otyepka, Michal; Strnad, Miroslav; Hobza, Pavel

    2006-05-24

    The interaction between roscovitine and cyclin-dependent kinase 2 (cdk2) was investigated by performing correlated ab initio quantum-chemical calculations. The whole protein was fragmented into smaller systems consisting of one or a few amino acids, and the interaction energies of these fragments with roscovitine were determined by using the MP2 method with the extended aug-cc-pVDZ basis set. For selected complexes, the complete basis set limit MP2 interaction energies, as well as the coupled-cluster corrections with inclusion of single, double and noniterative triple excitations [CCSD(T)], were also evaluated. The energies of interaction between roscovitine and small fragments and between roscovitine and substantial sections of the protein (722 atoms) were also computed by using a density-functional tight-binding method covering dispersion energy (DFTB-D) and the Cornell empirical potential. The total stabilisation energy originates predominantly from dispersion energy, and methods that do not account for the dispersion energy cannot, therefore, be recommended for the study of protein-inhibitor interactions. The Cornell empirical potential describes reasonably well the interaction between roscovitine and the protein; therefore, this method can be applied in future thermodynamic calculations. A limited number of amino acid residues contribute significantly to the binding of roscovitine to cdk2, whereas a rather large number of amino acids make a negligible contribution.

  39. New Approaches in Force-Limited Vibration Testing of Flight Hardware

    NASA Technical Reports Server (NTRS)

    Kolaini, Ali R.; Kern, Dennis L.

    2012-01-01

    To qualify flight hardware for random vibration environments, the following methods are used in the aerospace industry to limit the loads: (1) response limiting and notching, (2) the simple TDOF model, (3) semi-empirical force limits, (4) the apparent mass method, and (5) the impedance method. In all these methods, attempts are made to remove the conservatism due to the mismatch in impedances between the test and flight configurations of the hardware being qualified. The underlying assumption is that the hardware interfaces have correlated responses. A new method that takes into account uncorrelated hardware interface responses is described in this presentation.

  40. Electronic Structures of Anti-Ferromagnetic Tetraradicals: Ab Initio and Semi-Empirical Studies.

    PubMed

    Zhang, Dawei; Liu, Chungen

    2016-04-12

    The energy relationships and electronic structures of the lowest-lying spin states in several anti-ferromagnetic tetraradical model systems are studied with high-level ab initio and semi-empirical methods. The Full-CI method (FCI), the complete active space second-order perturbation theory (CASPT2), and the n-electron valence state perturbation theory (NEVPT2) are employed to obtain reference results. By comparing the energy relationships predicted from the Heisenberg and Hubbard models with ab initio benchmarks, the accuracy of the widely used Heisenberg model for anti-ferromagnetic spin-coupling in low-spin polyradicals is cautiously tested in this work. It is found that the strength of electron correlation (|U/t|) concerning anti-ferromagnetically coupled radical centers could range widely from strong to moderate correlation regimes and could become another degree of freedom besides the spin multiplicity. Accordingly, the Heisenberg-type model works well in the regime of strong correlation, which reproduces well the energy relationships along with the wave functions of all the spin states. In moderately spin-correlated tetraradicals, the results of the prototype Heisenberg model deviate severely from those of multi-reference electron correlation ab initio methods, while the extended Heisenberg model, containing four-body terms, can introduce reasonable corrections and maintains its accuracy in this condition. In the weak correlation regime, both the prototype Heisenberg model and its extended forms containing higher-order correction terms will encounter difficulties. Meanwhile, the Hubbard model shows balanced accuracy from strong to weak correlation cases and can reproduce qualitatively correct electronic structures, which makes it more suitable for the study of anti-ferromagnetic coupling in polyradical systems.
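
    For concreteness, a minimal sketch of the prototype Heisenberg model for a four-site (tetraradical) ring, H = J Σ S_i·S_j with J > 0 antiferromagnetic, diagonalized exactly; the coupling value and ring topology are illustrative choices, not the model systems studied in the record:

```python
import numpy as np

# Spin-1/2 operators
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

def site_op(op, site, n=4):
    """Embed a single-site operator into the n-site Hilbert space."""
    mats = [np.eye(2, dtype=complex)] * n
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

J, n = 1.0, 4   # illustrative antiferromagnetic coupling on a 4-site ring
H = sum(J * (site_op(s, i) @ site_op(s, (i + 1) % n))
        for i in range(n) for s in (sx, sy, sz))
energies = np.linalg.eigvalsh(H)
print(np.round(energies[:4], 4))   # singlet ground state below the triplet
```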

  41. Bearing Fault Detection Based on Empirical Wavelet Transform and Correlated Kurtosis by Acoustic Emission.

    PubMed

    Gao, Zheyu; Lin, Jing; Wang, Xiufeng; Xu, Xiaoqiang

    2017-05-24

    Rolling bearings are widely used in rotating equipment. Detection of bearing faults is of great importance to guarantee the safe operation of mechanical systems. Acoustic emission (AE), as one of the bearing monitoring technologies, is sensitive to weak signals and performs well in detecting incipient faults. Therefore, AE is widely used in monitoring the operating status of rolling bearings. This paper utilizes the Empirical Wavelet Transform (EWT) to decompose AE signals into mono-components adaptively, followed by calculation of the correlated kurtosis (CK) at certain time intervals of these components. By comparing these CK values, the resonant frequency of the rolling bearing can be determined. Then the fault characteristic frequencies are found by spectrum envelope analysis. Both a simulated signal and rolling bearing AE signals are used to verify the effectiveness of the proposed method. The results show that the new method performs well in identifying the bearing fault frequency under strong background noise.
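
    A minimal sketch of correlated kurtosis using the usual definition CK_M(T) = Σ_n (Π_{m=0}^{M} x_{n-mT})² / (Σ_n x_n²)^{M+1}, applied to a synthetic impact train (the data and the period are illustrative):

```python
import numpy as np

def correlated_kurtosis(x, T, M=1):
    """Correlated kurtosis of signal x at period T samples:
    CK_M(T) = sum_n (prod_{m=0..M} x[n - m*T])^2 / (sum_n x[n]^2)^(M+1).
    Large values indicate impulsiveness repeating with period T."""
    x = np.asarray(x, dtype=float)
    prod = x.copy()
    for m in range(1, M + 1):
        prod[m * T:] *= x[:-m * T]
        prod[:m * T] = 0.0
    return np.sum(prod ** 2) / np.sum(x ** 2) ** (M + 1)

# A fault train with period T=100 scores far higher CK than white noise.
rng = np.random.default_rng(13)
sig = rng.normal(scale=0.1, size=5000)
sig[::100] += 1.0                       # periodic impacts (bearing fault)
print(correlated_kurtosis(sig, T=100),
      correlated_kurtosis(rng.normal(size=5000), T=100))
```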

  42. The role of hip and chest radiographs in osteoporotic evaluation among south Indian women population: a comparative scenario with DXA.

    PubMed

    Kumar, D Ashok; Anburajan, M

    2014-05-01

    Osteoporosis is recognized as a worldwide skeletal disorder. In India, osteoporotic fractures among older and postmenopausal women are a common issue. Bone mineral density (BMD) measurements gauged by dual-energy X-ray absorptiometry (DXA) are used in the diagnosis of osteoporosis. The aims were: (1) to evaluate osteoporosis in south Indian women by a radiogrammetric method in a comparative perspective with DXA; and (2) to assess the capability of KJH; Anburajan's empirical formula in predicting total hip bone mineral density (T.BMD) against estimated Hologic T.BMD. In this cross-sectional design, 56 south Indian women were evaluated. These women were randomly selected from a health camp; patients with secondary bone diseases were excluded. The standard protocol was followed in acquiring BMD of the right proximal femur by DPX Prodigy (DXA scanner, GE-Lunar Corp., USA). The measured Lunar total hip BMD was converted into estimated Hologic total hip BMD. In addition, the studied population underwent chest and hip radiographic measurements. The combined cortical thickness of the clavicle was used in KJH; Anburajan's empirical formula to predict T.BMD, which was compared with estimated Hologic T.BMD by DXA. The correlation coefficients exhibited high significance. The combined cortical thickness of the clavicle and femur shaft of the total studied population was strongly correlated with DXA femur T.BMD measurements (r = 0.87, P < 0.01 and r = 0.45, P < 0.01), and it also correlated strongly in the low bone mass group (r = 0.87, P < 0.01 and r = 0.67, P < 0.01). KJH; Anburajan's empirical formula shows a significant correlation with estimated Hologic T.BMD (r = 0.88, P < 0.01) in the total studied population. The empirical formula was identified as the better tool for predicting osteoporosis in the total population and the old-aged population, with a sensitivity of 88.8 and 95.6%, specificity of 89.6 and 90.9%, positive predictive value of 88.8 and 95.6%, and negative predictive value of 89.6 and 90.9%, respectively. The results suggest that the combined cortical thickness of the clavicle and femur shaft obtained by the radiogrammetric method is significantly correlated with DXA. Moreover, KJH; Anburajan's empirical formula is a more useful index than other simple radiogrammetry measurements in the evaluation of osteoporosis from economical and widely available digital radiographs.

  4. Sensitivity analysis of a sound absorption model with correlated inputs

    NASA Astrophysics Data System (ADS)

    Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.

    2017-04-01

    Sound absorption in porous media is a complex phenomenon, usually addressed with homogenized models that depend on macroscopic parameters. Since these parameters emerge from the structure at the microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard (JCA) model is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To handle this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. The method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distributions of the macroscopic variables arising from the microstructure, as well as their correlation matrix, are studied. Finally, the test results show that correlation has a very important impact on the outcome of the sensitivity analysis; the effect of the correlation strength among input variables on the sensitivity analysis is also assessed.
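
    The building block behind "Iman's transform" is the rank-based correlation induction usually attributed to Iman and Conover: impose a target correlation structure on independently sampled marginals without distorting those marginals. The sketch below is our reading of that classical transform, not the FASTC code itself; the two JCA-style input marginals and the target correlation matrix are invented for illustration:

```python
import numpy as np
from scipy.stats import norm

def iman_conover(samples, target_corr, rng):
    """Reorder each column of `samples` so the rank correlation approximates
    `target_corr` while leaving every marginal distribution untouched."""
    n, k = samples.shape
    # van der Waerden scores, independently shuffled per column
    scores = norm.ppf(np.arange(1, n + 1) / (n + 1))
    M = np.column_stack([rng.permutation(scores) for _ in range(k)])
    E = np.corrcoef(M, rowvar=False)            # accidental correlation of the scores
    P = np.linalg.cholesky(target_corr)
    Q = np.linalg.cholesky(E)
    T = M @ np.linalg.inv(Q).T @ P.T            # scores now carry the target correlation
    out = np.empty_like(samples)
    for j in range(k):                          # match ranks column by column
        ranks = np.argsort(np.argsort(T[:, j]))
        out[:, j] = np.sort(samples[:, j])[ranks]
    return out

rng = np.random.default_rng(1)
# hypothetical JCA-style inputs: porosity (uniform) and tortuosity (lognormal)
X = np.column_stack([rng.uniform(0.8, 0.99, 1000), rng.lognormal(0.3, 0.2, 1000)])
C = np.array([[1.0, -0.6], [-0.6, 1.0]])        # assumed target correlation
Xc = iman_conover(X, C, rng)
print(np.corrcoef(Xc, rowvar=False).round(2))   # close to C, marginals unchanged
```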

  5. Hard-Rock Stability Analysis for Span Design in Entry-Type Excavations with Learning Classifiers

    PubMed Central

    García-Gonzalo, Esperanza; Fernández-Muñiz, Zulima; García Nieto, Paulino José; Bernardo Sánchez, Antonio; Menéndez Fernández, Marta

    2016-01-01

    The mining industry relies heavily on empirical analysis for design and prediction. An empirical design method, called the critical span graph, was developed specifically for rock stability analysis in entry-type excavations, based on an extensive case-history database of cut and fill mining in Canada. This empirical span design chart plots the critical span against rock mass rating for the observed case histories and has been accepted by many mining operations for the initial span design of cut and fill stopes. Different types of analysis have been used to classify the observed cases into stable, potentially unstable and unstable groups. The main purpose of this paper is to present a new method for defining rock stability areas of the critical span graph, which applies machine learning classifiers (support vector machine and extreme learning machine). The results show a reasonable correlation with previous guidelines. These machine learning methods are good tools for developing empirical methods, since they make no assumptions about the regression function. With this software, it is easy to add new field observations to a previous database, improving prediction output with the addition of data that consider the local conditions for each mine. PMID:28773653

  6. Hard-Rock Stability Analysis for Span Design in Entry-Type Excavations with Learning Classifiers.

    PubMed

    García-Gonzalo, Esperanza; Fernández-Muñiz, Zulima; García Nieto, Paulino José; Bernardo Sánchez, Antonio; Menéndez Fernández, Marta

    2016-06-29

    The mining industry relies heavily on empirical analysis for design and prediction. An empirical design method, called the critical span graph, was developed specifically for rock stability analysis in entry-type excavations, based on an extensive case-history database of cut and fill mining in Canada. This empirical span design chart plots the critical span against rock mass rating for the observed case histories and has been accepted by many mining operations for the initial span design of cut and fill stopes. Different types of analysis have been used to classify the observed cases into stable, potentially unstable and unstable groups. The main purpose of this paper is to present a new method for defining rock stability areas of the critical span graph, which applies machine learning classifiers (support vector machine and extreme learning machine). The results show a reasonable correlation with previous guidelines. These machine learning methods are good tools for developing empirical methods, since they make no assumptions about the regression function. With this software, it is easy to add new field observations to a previous database, improving prediction output with the addition of data that consider the local conditions for each mine.
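
    The classification task the two records above describe (learn the stable/unstable boundary in the span-versus-RMR plane) maps directly onto an off-the-shelf kernel classifier. The sketch below is a toy stand-in, not the authors' database: the case histories and the labeling rule are synthetic, and only the overall workflow (fit an SVM on (RMR, span) pairs, then query a candidate design point) reflects the papers:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# hypothetical case histories: rock mass rating (RMR) and excavated span (m);
# label 1 = stable, 0 = unstable (noisy toy rule standing in for field data)
rmr = rng.uniform(20, 90, 300)
span = rng.uniform(1, 30, 300)
stable = (span < 0.4 * rmr - 4 + rng.normal(0, 2, 300)).astype(int)

X = np.column_stack([rmr, span])
clf = SVC(kernel="rbf", C=10.0, gamma="scale", probability=True).fit(X, stable)

# query the learned stability region for a candidate design point, e.g. RMR 60, span 12 m
print(clf.predict([[60, 12]]), clf.predict_proba([[60, 12]]).round(2))
```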

  7. An applicable method for efficiency estimation of operating tray distillation columns and its comparison with the methods utilized in HYSYS and Aspen Plus

    NASA Astrophysics Data System (ADS)

    Sadeghifar, Hamidreza

    2015-10-01

    Developing general methods that rely on column data for the efficiency estimation of operating (existing) distillation columns has been overlooked in the literature. Most of the available methods are based on empirical mass transfer and hydraulic relations correlated to laboratory data, and may therefore not be sufficiently accurate when applied to industrial columns. In this paper, an applicable and accurate method was developed for the efficiency estimation of tray distillation columns. The method can calculate efficiency as well as mass and heat transfer coefficients without using any empirical mass transfer or hydraulic correlations and without the need to estimate operational or hydraulic parameters of the column. For example, the method does not need to estimate the tray interfacial area, which may be its most important advantage over all available methods. The method can be used for efficiency prediction for any tray in a distillation column. For the efficiency calculation, the method employs the column data and uses the true rates of mass and heat transfer occurring inside the operating column. It is emphasized that estimating the efficiency of an operating column must be distinguished from that of a column being designed.

  8. The conceptual and empirical relationship between gambling, investing, and speculation

    PubMed Central

    Arthur, Jennifer N.; Williams, Robert J.; Delfabbro, Paul H.

    2016-01-01

    Background and aims: To review the conceptual and empirical relationship between gambling, investing, and speculation. Methods: An analysis of the attributes differentiating these constructs, together with identification of all articles speaking to their empirical relationship. Results: Gambling differs from investment on many different attributes and should be seen as conceptually distinct. Speculation, on the other hand, is conceptually intermediate between gambling and investment, with a few of its attributes being investment-like, some being gambling-like, and several being neither clearly gambling- nor investment-like. Empirically, gamblers, investors, and speculators have similar cognitive, motivational, and personality attributes, with this relationship being particularly strong for gambling and speculation. Population levels of gambling activity also tend to be correlated with population levels of financial speculation. At an individual level, speculation has a particularly strong empirical relationship to gambling, as speculators appear to be heavily involved in traditional forms of gambling, and problematic speculation is strongly correlated with problematic gambling. Discussion and conclusions: Investment is distinct from gambling, but speculation and gambling have conceptual overlap and a strong empirical relationship. It is recommended that financial speculation be routinely included when assessing gambling involvement, and there needs to be greater recognition and study of financial speculation both as a contributor to problem gambling and as an additional form of behavioral addiction in its own right. PMID:27929350

  9. Simple, empirical approach to predict neutron capture cross sections from nuclear masses

    NASA Astrophysics Data System (ADS)

    Couture, A.; Casten, R. F.; Cakirli, R. B.

    2017-12-01

    Background: Neutron capture cross sections are essential to understanding the astrophysical s and r processes, the modeling of nuclear reactor design and performance, and a wide variety of nuclear forensics applications. Often, cross sections are needed for nuclei where experimental measurements are difficult. Enormous effort, over many decades, has gone into attempting to develop sophisticated statistical reaction models to predict these cross sections. Such work has met with some success but is often unable to reproduce measured cross sections to better than 40%, and has limited predictive power, with predictions from different models rapidly diverging by an order of magnitude a few nucleons from the last measurement. Purpose: To develop a new approach to predicting neutron capture cross sections over broad ranges of nuclei that accounts for their values where known and has reliable predictive power, with small uncertainties, for many nuclei where they are unknown. Methods: Experimental neutron capture cross sections were compared to empirical mass observables in regions of similar structure. Results: We present an extremely simple method, based solely on empirical mass observables, that correlates neutron capture cross sections in the critical energy range from a few keV to a couple hundred keV. We show that regional cross sections in medium and heavy mass nuclei are compactly correlated with the two-neutron separation energy. These correlations are easily amenable to predicting unknown cross sections, often converting the usual extrapolations into more reliable interpolations. The method almost always reproduces existing data to within 25%, and estimated uncertainties are below about 40% up to 10 nucleons beyond known data. Conclusions: Neutron capture cross sections display a surprisingly strong connection to the two-neutron separation energy, a nuclear structure property. The simple, empirical correlations uncovered provide model-independent predictions of neutron capture cross sections extending far from stability, including for nuclei of the highest sensitivity to r-process nucleosynthesis.

  10. Multifractal Cross Wavelet Analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Zhi-Qiang; Gao, Xing-Lu; Zhou, Wei-Xing; Stanley, H. Eugene

    Complex systems are composed of mutually interacting components and the output values of these components usually exhibit long-range cross-correlations. Using wavelet analysis, we propose a method of characterizing the joint multifractal nature of these long-range cross correlations, a method we call multifractal cross wavelet analysis (MFXWT). We assess the performance of the MFXWT method by performing extensive numerical experiments on the dual binomial measures with multifractal cross correlations and the bivariate fractional Brownian motions (bFBMs) with monofractal cross correlations. For binomial multifractal measures, we find the empirical joint multifractality of MFXWT to be in approximate agreement with the theoretical formula. For bFBMs, MFXWT may provide spurious multifractality because of the wide spanning range of the multifractal spectrum. We also apply the MFXWT method to stock market indices, and in pairs of index returns and volatilities we find an intriguing joint multifractal behavior. The tests on surrogate series also reveal that the cross correlation behavior, particularly the cross correlation with zero lag, is the main origin of cross multifractality.

  11. Jet Aeroacoustics: Noise Generation Mechanism and Prediction

    NASA Technical Reports Server (NTRS)

    Tam, Christopher

    1998-01-01

    This report covers the third year of the project's research effort. The work focused on the fine-scale mixing noise of both subsonic and supersonic jets and on the effects of nozzle geometry and tabs on subsonic jet noise. In publication 1, a new semi-empirical theory of jet mixing noise from fine-scale turbulence is developed. By analogy to gas kinetic theory, it is shown that the source of noise is related to the time fluctuations of the turbulence kinetic energy. Starting with the Reynolds-averaged Navier-Stokes equations, a formula for the radiated noise is derived. An empirical model of the space-time correlation function of the turbulence kinetic energy is adopted; the form of the model is in good agreement with the space-time two-point velocity correlation function measured by Davies and coworkers. The parameters of the correlation are related to the parameters of the k-epsilon turbulence model, so the theory is self-contained. Extensive comparisons between the noise spectra computed from the theory and experimental measurements have been carried out, covering jet Mach numbers from 0.3 to 2.0 and temperature ratios from 1.0 to 4.8. Excellent agreement is found in spectrum shape, noise intensity, and directivity. It is envisaged that the theory could supersede the semi-empirical and fully empirical jet noise prediction methods in current use.

  12. Probabilistic power flow using improved Monte Carlo simulation method with correlated wind sources

    NASA Astrophysics Data System (ADS)

    Bie, Pei; Zhang, Buhan; Li, Hang; Deng, Weisi; Wu, Jiasi

    2017-01-01

    Probabilistic Power Flow (PPF) is a very useful tool for power system steady-state analysis. However, correlation among different random power injections (such as wind power) makes PPF difficult to calculate. Monte Carlo simulation (MCS) and analytical methods are the two approaches commonly used to solve PPF. MCS has high accuracy but is very time consuming. Analytical methods such as the cumulants method (CM) have high computational efficiency, but calculating the cumulants is not convenient when the wind power output does not follow any typical distribution, especially when correlated wind sources are considered. In this paper, an Improved Monte Carlo Simulation method (IMCS) is proposed, in which a joint empirical distribution is applied to model the different wind power outputs. This method combines the advantages of both MCS and analytical methods: it not only has high computational efficiency but also provides solutions with sufficient accuracy, making it well suited for on-line analysis.
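
    The key trick of a joint empirical distribution is that sampling it reduces to resampling whole rows of the historical record, which preserves cross-source correlation without fitting any parametric law. The sketch below illustrates only that sampling step under invented data (three hypothetical wind farms with an assumed correlation structure); each sampled row would then feed one deterministic power-flow solve:

```python
import numpy as np

rng = np.random.default_rng(3)
# hypothetical historical record: hourly output of three correlated wind farms (MW)
base = rng.multivariate_normal([0.0, 0.0, 0.0],
                               [[1.0, 0.8, 0.6],
                                [0.8, 1.0, 0.7],
                                [0.6, 0.7, 1.0]], size=8760)
history = np.clip(50 + 20 * base, 0, 100)          # stands in for measured data

# sampling the joint empirical distribution = drawing whole rows with replacement
idx = rng.integers(0, history.shape[0], size=5000)
scenarios = history[idx]                           # 5000 correlated injection scenarios

print(np.corrcoef(history, rowvar=False).round(2))
print(np.corrcoef(scenarios, rowvar=False).round(2))  # cross-farm correlation retained
```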

  13. Multifractality, efficiency analysis of Chinese stock market and its cross-correlation with WTI crude oil price

    NASA Astrophysics Data System (ADS)

    Zhuang, Xiaoyang; Wei, Yu; Ma, Feng

    2015-07-01

    In this paper, the multifractality and efficiency degrees of ten important Chinese sectoral indices are evaluated using MF-DFA and generalized Hurst exponents. The study also scrutinizes the dynamics of the efficiency of the Chinese sectoral stock market using a rolling window approach. The overall empirical findings reveal that all the sectoral indices of the Chinese stock market exhibit different degrees of multifractality. The different efficiency measures agree that the 300 Materials index is the least efficient index, although they differ slightly on the most efficient one; the 300 Information Technology, 300 Telecommunication Services and 300 Health Care indices are comparatively efficient. We also investigate the cross-correlations between the ten sectoral indices and the WTI crude oil price based on Multifractal Detrended Cross-correlation Analysis. Finally, some relevant discussions and implications of the empirical results are presented.
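
    For readers unfamiliar with MF-DFA, the procedure is compact enough to sketch in full: build the profile, detrend it segment by segment with a local polynomial, form the q-th order fluctuation function, and read the generalized Hurst exponents h(q) off the log-log slopes. The code below is a minimal textbook implementation on placeholder white noise, not the paper's pipeline or data:

```python
import numpy as np

def mfdfa_hq(x, scales, qs, order=1):
    """Generalized Hurst exponents h(q) via MF-DFA with polynomial detrending."""
    y = np.cumsum(x - np.mean(x))                  # profile
    Fq = np.zeros((len(qs), len(scales)))
    for si, s in enumerate(scales):
        n_seg = len(y) // s
        segs = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        f2 = np.empty(n_seg)
        for v in range(n_seg):
            coef = np.polyfit(t, segs[v], order)   # local trend per segment
            f2[v] = np.mean((segs[v] - np.polyval(coef, t)) ** 2)
        for qi, q in enumerate(qs):
            if q == 0:                             # logarithmic average for q = 0
                Fq[qi, si] = np.exp(0.5 * np.mean(np.log(f2)))
            else:
                Fq[qi, si] = np.mean(f2 ** (q / 2)) ** (1 / q)
    # h(q) = slope of log F_q(s) against log s
    return [np.polyfit(np.log(scales), np.log(Fq[qi]), 1)[0] for qi in range(len(qs))]

rng = np.random.default_rng(4)
returns = rng.standard_normal(10_000)              # placeholder for index returns
scales = np.unique(np.logspace(1.2, 3, 12).astype(int))
qs = [-4, -2, 0, 2, 4]
print(dict(zip(qs, np.round(mfdfa_hq(returns, scales, qs), 3))))
# near-constant h(q) indicates monofractality; spread across q indicates multifractality
```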

  14. An asymptotic theory for cross-correlation between auto-correlated sequences and its application on neuroimaging data.

    PubMed

    Zhou, Yunyi; Tao, Chenyang; Lu, Wenlian; Feng, Jianfeng

    2018-04-20

    Functional connectivity is among the most important tools for studying the brain. The correlation coefficient between time series of different brain areas is the most popular measure of functional connectivity, but in practical use it assumes the data to be temporally independent, whereas brain time series can exhibit significant temporal auto-correlation. A widely applicable method is proposed for correcting for temporal auto-correlation. We considered two types of time series models: (1) the auto-regressive-moving-average model, and (2) a nonlinear dynamical system model with noisy fluctuations, and derived their respective asymptotic distributions of the correlation coefficient; these two types of models are the most commonly used in neuroscience studies. We show that the respective asymptotic distributions share a unified expression. We have verified the validity of our method and shown in numerical experiments that it has sufficient statistical power for detecting true correlations. Employing our method on a real dataset yields a more robust functional network and higher classification accuracy than conventional methods. Our method robustly controls the type I error while maintaining sufficient statistical power for detecting true correlations in numerical experiments where existing methods measuring association (linear and nonlinear) fail. In this work, we proposed a widely applicable approach for correcting the effect of temporal auto-correlation on functional connectivity. Empirical results favor the use of our method in functional network analysis. Copyright © 2018. Published by Elsevier B.V.
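
    The practical problem this record addresses is older than the paper: the variance of a sample correlation between two auto-correlated series is inflated relative to the i.i.d. case. The sketch below implements not the paper's asymptotic theory but the classical Bartlett-type effective-sample-size correction in the same spirit (cf. Pyper & Peterman, 1998), which is enough to see why naive tests over-reject on AR(1) data; all parameters are illustrative:

```python
import numpy as np
from scipy import stats

def acf(x, max_lag):
    """Sample autocorrelation at lags 1..max_lag."""
    x = x - x.mean()
    c0 = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:-k], x[k:]) / (len(x) * c0)
                     for k in range(1, max_lag + 1)])

def corrected_corr_test(x, y, max_lag=20):
    """Pearson r tested with an effective sample size that accounts for
    auto-correlation in both series (Bartlett-type variance inflation)."""
    n = len(x)
    r = np.corrcoef(x, y)[0, 1]
    n_eff = n / (1 + 2 * np.sum(acf(x, max_lag) * acf(y, max_lag)))
    n_eff = min(max(n_eff, 3.0), float(n))
    t = r * np.sqrt((n_eff - 2) / (1 - r**2))
    p = 2 * stats.t.sf(abs(t), df=n_eff - 2)
    return r, n_eff, p

rng = np.random.default_rng(5)
x = np.zeros(500); y = np.zeros(500)          # two independent AR(1) series
for i in range(1, 500):
    x[i] = 0.9 * x[i - 1] + rng.standard_normal()
    y[i] = 0.9 * y[i - 1] + rng.standard_normal()
print(corrected_corr_test(x, y))               # n_eff is far below 500
```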

  15. Novel Multidimensional Cross-Correlation Data Comparison Techniques for Spectroscopic Discernment in a Volumetrically Sensitive, Moderating Type Neutron Spectrometer

    NASA Astrophysics Data System (ADS)

    Hoshor, Cory; Young, Stephan; Rogers, Brent; Currie, James; Oakes, Thomas; Scott, Paul; Miller, William; Caruso, Anthony

    2014-03-01

    A novel application of the Pearson Cross-Correlation to neutron spectral discernment in a moderating type neutron spectrometer is introduced. This cross-correlation analysis will be applied to spectral response data collected through both MCNP simulation and empirical measurement by the volumetrically sensitive spectrometer for comparison in 1, 2, and 3 spatial dimensions. The spectroscopic analysis methods discussed will be demonstrated to discern various common spectral and monoenergetic neutron sources.
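
    At its core, the discernment step is template matching: score a measured multi-bin detector response against each library response with the Pearson coefficient and report the best match. The toy sketch below shows only that scoring step; the source names, Gaussian-shaped bin responses, and noise level are invented placeholders, not the spectrometer's actual response library:

```python
import numpy as np

rng = np.random.default_rng(6)
bins = np.arange(16)
# hypothetical library: counts across 16 moderator-depth bins per source
library = {
    "Cf-252":  np.exp(-((bins - 5) / 3.0) ** 2),
    "AmBe":    np.exp(-((bins - 8) / 4.0) ** 2),
    "2.5 MeV": np.exp(-((bins - 10) / 2.5) ** 2),
}
measured = library["AmBe"] + 0.05 * rng.standard_normal(bins.size)  # noisy measurement

# Pearson cross-correlation of the measurement against each template
scores = {name: np.corrcoef(measured, resp)[0, 1] for name, resp in library.items()}
print(max(scores, key=scores.get), {k: round(v, 3) for k, v in scores.items()})
```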

  16. Consequence Assessment Methods for Incidents Involving Releases From Liquefied Natural Gas Carriers

    DTIC Science & Technology

    2004-05-13

    …the downwind direction. The Thomas (1965) correlation is used to calculate flame length. Flame tilt is estimated using an empirical correlation from… From TNO (1997): Thomas (1963) correlation for flame length; for an experimental LNG pool fire of 16.8-m diameter, a mass burning flux of… m, flame length ranged from 50 to 78 m, and tilt angle from 27 to 35 degrees. From Rew (1996): work included a review of recent developments in…

  17. Long-range correlation and market segmentation in bond market

    NASA Astrophysics Data System (ADS)

    Wang, Zhongxing; Yan, Yan; Chen, Xiaosong

    2017-09-01

    This paper investigates long-range auto-correlations and cross-correlations in the bond market. Based on the Detrended Moving Average (DMA) method, the empirical results present clear evidence of long-range persistence at the one-year scale. The degree of long-range correlation across maturities shows an upward tendency, with a peak at short maturities. These findings confirm the expectations of the fractal market hypothesis (FMH). Furthermore, we have developed a complex-network-based method to study the long-range cross-correlation structure, and applying it to our data we find a clear pattern of market segmentation in the long run. We also examine the long-range correlations in the sub-periods 2007-2012 and 2011-2016. Our results show that long-range auto-correlations have been decreasing in recent years while long-range cross-correlations have been strengthening.
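
    The DMA estimator underlying this record is short enough to sketch: build the profile, subtract a centered moving average as the trend, and read the Hurst exponent off the scaling of the residual fluctuation. The code below is a generic DMA implementation on placeholder white noise, not the paper's bond data:

```python
import numpy as np

def dma_hurst(x, windows):
    """Hurst exponent via centered Detrended Moving Average (DMA)."""
    y = np.cumsum(x - np.mean(x))                        # profile
    F = []
    for n in windows:
        ma = np.convolve(y, np.ones(n) / n, mode="valid")  # moving-average trend
        half = (n - 1) // 2
        resid = y[half:half + ma.size] - ma              # align profile with the MA
        F.append(np.sqrt(np.mean(resid ** 2)))
    return np.polyfit(np.log(windows), np.log(F), 1)[0]  # slope = H

rng = np.random.default_rng(7)
x = rng.standard_normal(20_000)                          # uncorrelated benchmark series
windows = np.unique(np.logspace(1, 3, 10).astype(int))
print(round(dma_hurst(x, windows), 2))                   # ≈ 0.5 for white noise
```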

  18. Hybrid BEM/empirical approach for scattering of correlated sources in rocket noise prediction

    NASA Astrophysics Data System (ADS)

    Barbarino, Mattia; Adamo, Francesco P.; Bianco, Davide; Bartoccini, Daniele

    2017-09-01

    Empirical models such as the Eldred standard model are commonly used for rocket noise prediction. Such models directly define the sound pressure level through the quadratic pressure term of uncorrelated sources. In this paper, an improvement of the Eldred standard model is formulated. The new formulation contains an explicit expression for the acoustic pressure of each noise source, in terms of amplitude and phase, in order to investigate source correlation effects and to propagate them through a wave equation. In particular, the correlation effects between adjacent and non-adjacent sources have been modeled and analyzed. The noise prediction obtained with the revised Eldred-based model has then been used to formulate a hybrid empirical/BEM (Boundary Element Method) approach that allows an evaluation of scattering effects. In the framework of the European Space Agency funded programme VECEP (VEga Consolidation and Evolution Programme), these models have been applied to predict the aeroacoustic loads of the VEGA (Vettore Europeo di Generazione Avanzata - Advanced Generation European Carrier Rocket) launch vehicle at lift-off, and the results have been compared with experimental data.

  19. Evaluation of Advanced Stirling Convertor Net Heat Input Correlation Methods Using a Thermal Standard

    NASA Technical Reports Server (NTRS)

    Briggs, Maxwell H.; Schifer, Nicholas A.

    2012-01-01

    The U.S. Department of Energy (DOE) and Lockheed Martin Space Systems Company (LMSSC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. This generator would use two high-efficiency Advanced Stirling Convertors (ASCs), developed by Sunpower Inc. and NASA Glenn Research Center (GRC), which convert thermal energy from a radioisotope heat source into electricity. As part of ground testing of these ASCs, different operating conditions are used to simulate expected mission conditions. These conditions require achieving a particular operating frequency, hot-end and cold-end temperatures, and specified electrical power output for a given net heat input. In an effort to improve net heat input predictions, numerous tasks have been performed to obtain a more accurate value for the net heat input into the ASCs, including testing validation hardware, known as the Thermal Standard, that provides a direct comparison to the numerical and empirical models used to predict convertor net heat input. This hardware, which simulates the characteristics of an ASC-E2 convertor in both operating and non-operating modes, provided a basis for scrutinizing and improving the empirical correlations and numerical models of ASC-E2 net heat input. This paper describes the Thermal Standard testing and the conclusions of the validation effort applied to the empirical correlation methods used by the Radioisotope Power System (RPS) team at NASA Glenn.

  20. Changes in the Amplitude and Phase of the Annual Cycle: quantifying from surface wind series in China

    NASA Astrophysics Data System (ADS)

    Feng, Tao

    2013-04-01

    Climate change is reflected not only in changes in the annual means of climate variables but also in changes in their annual cycles (seasonality), especially in regions outside the tropics. Changes in the timing of seasons, especially the wind season, have gained much attention worldwide over the past decade or so. We introduce long-range correlated surrogate data into the Ensemble Empirical Mode Decomposition method; such surrogates represent the statistical characteristics of the data better than white noise. We name the new method Ensemble Empirical Mode Decomposition with Long-range Correlated noise (EEMD-LRC) and apply it to wind speed records from 600 stations to investigate the trend in the amplitude of the annual cycle of China's daily mean surface wind speed for the period 1971-2005. The amplitude of the seasonal variation decreased significantly over China during the past half century, which is well captured by the annual cycle component from EEMD-LRC. Furthermore, the phase change of the annual cycle leads to a marked shortening of the spring wind season, consistent with changes in the frequency of strong-wind days over northern China.

  1. A Compound Fault Diagnosis for Rolling Bearings Method Based on Blind Source Separation and Ensemble Empirical Mode Decomposition

    PubMed Central

    Wang, Huaqing; Li, Ruitong; Tang, Gang; Yuan, Hongfang; Zhao, Qingliang; Cao, Xi

    2014-01-01

    A compound fault signal usually contains multiple characteristic signals and strong confusion noise, which makes it difficult to separate weak fault signals using conventional approaches such as FFT-based envelope detection, wavelet transform, or empirical mode decomposition individually. In order to improve compound fault diagnosis of rolling bearings via signal separation, the present paper proposes a new method to identify compound faults from measured mixed signals, based on the ensemble empirical mode decomposition (EEMD) method and the independent component analysis (ICA) technique. With this approach, a vibration signal is first decomposed into intrinsic mode functions (IMFs) by the EEMD method to obtain multichannel signals. Then, according to a cross-correlation criterion, the corresponding IMFs are selected as the input matrix for ICA. Finally, the compound faults can be separated effectively by executing the ICA method, which makes the fault features easier to extract and more clearly identified. Experimental results validate the effectiveness of the proposed method in compound fault separation, which works not only for the outer race defect but also for the roller defect and the unbalance fault of the experimental system. PMID:25289644
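
    The EEMD-then-ICA chain translates almost line for line into common open-source tooling. The sketch below assumes the third-party PyEMD package (distributed as "EMD-signal"; its API may differ by version) together with scikit-learn's FastICA; the synthetic compound-fault signal and the 0.1 correlation threshold are invented for illustration, not taken from the paper:

```python
import numpy as np
from sklearn.decomposition import FastICA
# PyEMD is the "EMD-signal" package (pip install EMD-signal); names may vary by version
from PyEMD import EEMD

fs = 4096
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(8)
# toy compound fault: two impulse-modulated carriers plus noise, standing in for vibration data
s1 = np.sin(2 * np.pi * 1500 * t) * (np.sin(2 * np.pi * 105 * t) > 0.99)
s2 = np.sin(2 * np.pi * 700 * t) * (np.sin(2 * np.pi * 67 * t) > 0.99)
x = s1 + s2 + 0.3 * rng.standard_normal(t.size)

imfs = EEMD(trials=50).eemd(x, t)       # one sensor -> multichannel IMF matrix

# cross-correlation criterion: keep IMFs sufficiently correlated with the raw signal
cc = np.array([abs(np.corrcoef(x, imf)[0, 1]) for imf in imfs])
selected = imfs[cc > 0.1]               # assumes at least two IMFs survive

# ICA separates the mixed fault sources; each column of S is a candidate single fault
S = FastICA(n_components=2, random_state=0).fit_transform(selected.T)
print(imfs.shape, selected.shape, S.shape)
```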

  2. Ab Initio and Improved Empirical Potentials for the Calculation of the Anharmonic Vibrational States and Intramolecular Mode Coupling of N-Methylacetamide

    NASA Technical Reports Server (NTRS)

    Gregurick, Susan K.; Chaban, Galina M.; Gerber, R. Benny; Kwak, Dochou (Technical Monitor)

    2001-01-01

    The second-order Moller-Plesset ab initio electronic structure method is used to compute points for the anharmonic mode-coupled potential energy surface of N-methylacetamide (NMA) in the trans-ct configuration, including all degrees of freedom. The vibrational states and the spectroscopy are directly computed from this potential surface using the Correlation Corrected Vibrational Self-Consistent Field (CC-VSCF) method. The results are compared with CC-VSCF calculations using both the standard and improved empirical Amber-like force fields and available low-temperature experimental matrix data. Analysis of our calculated spectroscopic results shows that: (1) the excellent agreement between the ab initio CC-VSCF calculated frequencies and the experimental data suggests that the computed anharmonic potentials for N-methylacetamide are of very high quality; (2) for most transitions, the vibrational frequencies obtained from the ab initio CC-VSCF method are superior to those obtained using the empirical CC-VSCF methods, when compared with experimental data, although the improved empirical force field yields better agreement with the experimental frequencies than a standard AMBER-type force field; (3) the empirical force field in particular overestimates anharmonic couplings for the amide-2 mode, the methyl asymmetric bending modes, the out-of-plane methyl bending modes, and the methyl distortions; (4) disagreement between the ab initio and empirical anharmonic couplings is greater than the disagreement between the frequencies, and thus the anharmonic part of the empirical potential seems to be less accurate than the harmonic contribution; and (5) both the empirical and ab initio CC-VSCF calculations predict a negligible anharmonic coupling between the amide-1 and other internal modes, implying that the intramolecular energy flow between the amide-1 and the other internal modes may be smaller than anticipated. These results may have important implications for the anharmonic force fields of peptides, for which N-methylacetamide is a model.

  3. Improved RMR Rock Mass Classification Using Artificial Intelligence Algorithms

    NASA Astrophysics Data System (ADS)

    Gholami, Raoof; Rasouli, Vamegh; Alimoradi, Andisheh

    2013-09-01

    Rock mass classification systems such as rock mass rating (RMR) are very reliable means to provide information about the quality of rocks surrounding a structure as well as to propose suitable support systems for unstable regions. Many correlations have been proposed to relate measured quantities such as wave velocity to rock mass classification systems to limit the associated time and cost of conducting the sampling and mechanical tests conventionally used to calculate RMR values. However, these empirical correlations have been found to be unreliable, as they usually overestimate or underestimate the RMR value. The aim of this paper is to compare the results of RMR classification obtained from the use of empirical correlations versus machine-learning methodologies based on artificial intelligence algorithms. The proposed methods were verified based on two case studies located in northern Iran. Relevance vector regression (RVR) and support vector regression (SVR), as two robust machine-learning methodologies, were used to predict the RMR for tunnel host rocks. RMR values already obtained by sampling and site investigation at one tunnel were taken into account as the output of the artificial networks during training and testing phases. The results reveal that use of empirical correlations overestimates the predicted RMR values. RVR and SVR, however, showed more reliable results, and are therefore suggested for use in RMR classification for design purposes of rock structures.
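
    The regression task in this record (predict RMR from cheaply measured quantities) is a standard kernel-regression setup. The sketch below is a generic SVR pipeline under invented assumptions: the predictors (P-wave velocity and joint spacing), the toy ground-truth relation, and all hyperparameters are placeholders rather than values from the two Iranian case studies:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(9)
# hypothetical site data: P-wave velocity (km/s) and joint spacing (m) vs. measured RMR
vp = rng.uniform(2.0, 6.0, 150)
spacing = rng.uniform(0.05, 2.0, 150)
rmr = 10 * vp + 15 * np.log1p(spacing) + rng.normal(0, 3, 150)  # toy ground truth

X = np.column_stack([vp, spacing])
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100, epsilon=1.0))
model.fit(X[:120], rmr[:120])                  # train on 120 mapped tunnel sections

pred = model.predict(X[120:])                  # predict RMR for unmapped sections
print("test RMSE:", round(float(np.sqrt(np.mean((pred - rmr[120:]) ** 2))), 2))
```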

  4. Empirical analysis of online human dynamics

    NASA Astrophysics Data System (ADS)

    Zhao, Zhi-Dan; Zhou, Tao

    2012-06-01

    Patterns of human activity have attracted increasing academic interest, since a quantitative understanding of human behavior helps to uncover the origins of many socioeconomic phenomena. This paper focuses on the behavior of Internet users. Six large-scale systems are studied in our experiments, including movie watching on Netflix and MovieLens, transactions on eBay, bookmark collecting on Delicious, and posting on FriendFeed and Twitter. Empirical analysis reveals some common statistical features of online human behavior: (1) the total number of a user's actions, the user's activity, and the interevent time all follow heavy-tailed distributions; (2) there is a strong positive correlation between a user's activity and the total number of the user's actions, and a significant negative correlation between the user's activity and the width of the interevent time distribution. We further study the rescaling method and show that it can, to some extent, eliminate differences in statistics among users caused by their different activity levels, although its effectiveness depends on the data set.

  5. Turbulent Statistics From Time-Resolved PIV Measurements of a Jet Using Empirical Mode Decomposition

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.

    2013-01-01

    Empirical mode decomposition is an adaptive signal processing method that when applied to a broadband signal, such as that generated by turbulence, acts as a set of band-pass filters. This process was applied to data from time-resolved, particle image velocimetry measurements of subsonic jets prior to computing the second-order, two-point, space-time correlations from which turbulent phase velocities and length and time scales could be determined. The application of this method to large sets of simultaneous time histories is new. In this initial study, the results are relevant to acoustic analogy source models for jet noise prediction. The high frequency portion of the results could provide the turbulent values for subgrid scale models for noise that is missed in large-eddy simulations. The results are also used to infer that the cross-correlations between different components of the decomposed signals at two points in space, neglected in this initial study, are important.

  6. Turbulent Statistics from Time-Resolved PIV Measurements of a Jet Using Empirical Mode Decomposition

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.

    2012-01-01

    Empirical mode decomposition is an adaptive signal processing method that when applied to a broadband signal, such as that generated by turbulence, acts as a set of band-pass filters. This process was applied to data from time-resolved, particle image velocimetry measurements of subsonic jets prior to computing the second-order, two-point, space-time correlations from which turbulent phase velocities and length and time scales could be determined. The application of this method to large sets of simultaneous time histories is new. In this initial study, the results are relevant to acoustic analogy source models for jet noise prediction. The high frequency portion of the results could provide the turbulent values for subgrid scale models for noise that is missed in large-eddy simulations. The results are also used to infer that the cross-correlations between different components of the decomposed signals at two points in space, neglected in this initial study, are important.

  7. Efficient association study design via power-optimized tag SNP selection

    PubMed Central

    HAN, BUHM; KANG, HYUN MIN; SEO, MYEONG SEONG; ZAITLEN, NOAH; ESKIN, ELEAZAR

    2008-01-01

    Discovering statistical correlation between causal genetic variation and clinical traits through association studies is an important method for identifying the genetic basis of human diseases. Since fully resequencing a cohort is prohibitively costly, genetic association studies take advantage of local correlation structure (or linkage disequilibrium) between single nucleotide polymorphisms (SNPs) by selecting a subset of SNPs to be genotyped (tag SNPs). While many current association studies are performed using commercially available high-throughput genotyping products that define a set of tag SNPs, choosing tag SNPs remains an important problem for both custom follow-up studies as well as designing the high-throughput genotyping products themselves. The most widely used tag SNP selection method optimizes over the correlation between SNPs (r2). However, tag SNPs chosen based on an r2 criterion do not necessarily maximize the statistical power of an association study. We propose a study design framework that chooses SNPs to maximize power and efficiently measures the power through empirical simulation. Empirical results based on the HapMap data show that our method gains considerable power over a widely used r2-based method, or equivalently reduces the number of tag SNPs required to attain the desired power of a study. Our power-optimized 100k whole genome tag set provides equivalent power to the Affymetrix 500k chip for the CEU population. For the design of custom follow-up studies, our method provides up to twice the power increase using the same number of tag SNPs as r2-based methods. Our method is publicly available via web server at http://design.cs.ucla.edu. PMID:18702637

  8. Using brain stimulation to disentangle neural correlates of conscious vision

    PubMed Central

    de Graaf, Tom A.; Sack, Alexander T.

    2014-01-01

    Research into the neural correlates of consciousness (NCCs) has blossomed, due to the advent of new and increasingly sophisticated brain research tools. Neuroimaging has uncovered a variety of brain processes that relate to conscious perception, obtained in a range of experimental paradigms. But methods such as functional magnetic resonance imaging or electroencephalography do not always afford inference on the functional role these brain processes play in conscious vision. Such empirical NCCs could reflect neural prerequisites, neural consequences, or neural substrates of a conscious experience. Here, we take a closer look at the use of non-invasive brain stimulation (NIBS) techniques in this context. We discuss and review how NIBS methodology can enlighten our understanding of brain mechanisms underlying conscious vision by disentangling the empirical NCCs. PMID:25295015

  9. EEG artifacts reduction by multivariate empirical mode decomposition and multiscale entropy for monitoring depth of anaesthesia during surgery.

    PubMed

    Liu, Quan; Chen, Yi-Feng; Fan, Shou-Zen; Abbod, Maysam F; Shieh, Jiann-Shing

    2017-08-01

    Electroencephalography (EEG) is widely used to measure the depth of anaesthesia (DOA) during surgery. However, EEG signals are usually contaminated by artifacts, which affect the accuracy of the measured DOA. In this study, an effective and useful filtering algorithm based on multivariate empirical mode decomposition and multiscale entropy (MSE) is proposed for measuring DOA. The mean entropy of the MSE is used as an index to find artifact-free intrinsic mode functions. The effect of different levels of artifacts on the performance of the proposed filtering is analysed using simulated data. Furthermore, EEG signals collected from 21 patients are analysed using sample entropy to calculate the complexity for monitoring DOA. The correlation coefficients between entropy and the bispectral index (BIS) are 0.14 ± 0.30 before filtering and 0.63 ± 0.09 after filtering. An artificial neural network (ANN) model is used for range mapping in order to correlate the measurements with BIS, and yields a strong correlation coefficient (0.75 ± 0.08). These results verify that entropy values and BIS are strongly correlated for the purpose of DOA monitoring, that the proposed filtering method can effectively remove artifacts from EEG signals, and that it performs better than the commonly used wavelet denoising method. This study provides a fully adaptive and automated filter for EEG, allowing DOA to be measured more accurately and thus reducing risks related to the maintenance of anaesthetic agents.

  10. Empirical Prediction of Aircraft Landing Gear Noise

    NASA Technical Reports Server (NTRS)

    Golub, Robert A. (Technical Monitor); Guo, Yue-Ping

    2005-01-01

    This report documents a semi-empirical/semi-analytical method for landing gear noise prediction. The method is based on scaling laws of the theory of aerodynamic noise generation and correlation of these scaling laws with current available test data. The former gives the method a sound theoretical foundation and the latter quantitatively determines the relations between the parameters of the landing gear assembly and the far field noise, enabling practical predictions of aircraft landing gear noise, both for parametric trends and for absolute noise levels. The prediction model is validated by wind tunnel test data for an isolated Boeing 737 landing gear and by flight data for the Boeing 777 airplane. In both cases, the predictions agree well with data, both in parametric trends and in absolute noise levels.

  11. Atlas of susceptibility to pollution in marinas. Application to the Spanish coast.

    PubMed

    Gómez, Aina G; Ondiviela, Bárbara; Fernández, María; Juanes, José A

    2017-01-15

    An atlas of susceptibility to pollution for 320 Spanish marinas is provided. Susceptibility is assessed through a simple, fast and low-cost empirical method that estimates the flushing capacity of marinas. The Complexity Tidal Range Index (CTRI) was selected among eleven empirical methods by means of statistical analyses because it contributes to explaining the system's variance, is highly correlated with numerical model results, and is sensitive to the marinas' location and typology. The process of implementation along the Spanish coast confirmed its usefulness, versatility and adaptability as a tool for the environmental management of marinas worldwide. The atlas of susceptibility, assessed through CTRI values, is an appropriate instrument for prioritizing environmental and planning strategies at a regional scale. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Development of advanced acreage estimation methods

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr. (Principal Investigator)

    1980-01-01

    The use of the AMOEBA clustering/classification algorithm was investigated as a basis for both a color display generation technique and a maximum likelihood proportion estimation procedure. An approach to analyzing large data reduction systems was formulated, and an exploratory empirical study of spatial correlation in LANDSAT data was also carried out. Topics addressed include: (1) development of multi-image color displays; (2) spectral-spatial classification algorithm development; (3) spatial correlation studies; and (4) evaluation of data systems.

  13. Interrogative suggestibility: its relationship with assertiveness, social-evaluative anxiety, state anxiety and method of coping.

    PubMed

    Gudjonsson, G H

    1988-05-01

    This paper attempts to investigate empirically in 30 subjects some of the theoretical components related to individual differences that are thought by Gudjonsson & Clark (1986) to mediate interrogative suggestibility as measured by the Gudjonsson Suggestibility Scale (GSS; Gudjonsson, 1984a). The variables studied were: assertiveness, social-evaluative anxiety, state anxiety and the coping methods subjects are able to generate and implement during interrogation. Low assertiveness and high evaluative anxiety were found to correlate moderately with suggestibility, but no significant correlations emerged for 'social avoidance and distress'. State anxiety correlated significantly with suggestibility, particularly after negative feedback had been administered. Coping methods (active-cognitive/behavioural vs. avoidance) significantly predicted suggestibility scores. The findings give strong support to the theoretical model of Gudjonsson & Clark.

  14. Combined magnetic and gravity analysis

    NASA Technical Reports Server (NTRS)

    Hinze, W. J.; Braile, L. W.; Chandler, V. W.; Mazella, F. E.

    1975-01-01

    Efforts are made to identify methods of decreasing magnetic interpretation ambiguity through combined gravity and magnetic analysis, to evaluate these techniques in a preliminary manner, to consider the geologic and geophysical implications of correlation, and to recommend a course of action for evaluating methods of correlating gravity and magnetic anomalies. The major thrust of the study was a search and review of the literature: the literature of geophysics, geology, geography, and statistics was searched for articles dealing with spatial correlation of independent variables, and an annotated bibliography referencing the germane articles and books is presented. The methods of combined gravity and magnetic analysis are identified and reviewed, and a more comprehensive evaluation of two types of techniques is presented: internal correspondence of anomaly amplitudes is examined, and a combined analysis is carried out using Poisson's theorem. The geologic and geophysical implications of gravity and magnetic correlation, based on both theoretical and empirical relationships, are discussed.

  15. Design of exchange-correlation functionals through the correlation factor approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavlíková Přecechtělová, Jana; Bahmann, Hilke

    The correlation factor model is developed in which the spherically averaged exchange-correlation hole of Kohn-Sham theory is factorized into an exchange hole model and a correlation factor. The exchange hole model reproduces the exact exchange energy per particle. The correlation factor is constructed in such a manner that the exchange-correlation energy correctly reduces to exact exchange in the high density and rapidly varying limits. Four different correlation factor models are presented which satisfy varying sets of physical constraints. Three models are free from empirical adjustments to experimental data, while one correlation factor model draws on one empirical parameter. The correlation factor models are derived in detail and the resulting exchange-correlation holes are analyzed. Furthermore, the exchange-correlation energies obtained from the correlation factor models are employed to calculate total energies, atomization energies, and barrier heights. It is shown that accurate, non-empirical functionals can be constructed building on exact exchange. Avenues for further improvements are outlined as well.

  16. Re-evaluating the link between brain size and behavioural ecology in primates.

    PubMed

    Powell, Lauren E; Isler, Karin; Barton, Robert A

    2017-10-25

    Comparative studies have identified a wide range of behavioural and ecological correlates of relative brain size, with results differing between taxonomic groups, and even within them. In primates for example, recent studies contradict one another over whether social or ecological factors are critical. A basic assumption of such studies is that with sufficiently large samples and appropriate analysis, robust correlations indicative of selection pressures on cognition will emerge. We carried out a comprehensive re-examination of correlates of primate brain size using two large comparative datasets and phylogenetic comparative methods. We found evidence in both datasets for associations between brain size and ecological variables (home range size, diet and activity period), but little evidence for an effect of social group size, a correlation which has previously formed the empirical basis of the Social Brain Hypothesis. However, reflecting divergent results in the literature, our results exhibited instability across datasets, even when they were matched for species composition and predictor variables. We identify several potential empirical and theoretical difficulties underlying this instability and suggest that these issues raise doubts about inferring cognitive selection pressures from behavioural correlates of brain size. © 2017 The Author(s).

  17. Evaluation of semi-empirical analyses for tank car puncture velocity, part II : correlations with engineering analyses

    DOT National Transportation Integrated Search

    2001-11-01

    This report is the second in a series focusing on methods to determine the puncture velocity of railroad tank car shells. In this context, puncture velocity refers to the impact velocity at which a coupler will completely pierce the shell and punctur...

  18. Evaluation of semi-empirical analyses for railroad tank car puncture velocity, part 2 : correlations with engineering analysis

    DOT National Transportation Integrated Search

    2001-11-01

    This report is the second in a series focusing on methods to determine the puncture velocity of railroad tank car shells. In this : context, puncture velocity refers to the impact velocity at which a coupler will completely pierce the shell and punct...

  19. Fine structure of spectral properties for random correlation matrices: An application to financial markets

    NASA Astrophysics Data System (ADS)

    Livan, Giacomo; Alfarano, Simone; Scalas, Enrico

    2011-07-01

    We study some properties of eigenvalue spectra of financial correlation matrices. In particular, we investigate the nature of the large eigenvalue bulks which are observed empirically, and which have often been regarded as a consequence of the supposedly large amount of noise contained in financial data. We challenge this common knowledge by acting on the empirical correlation matrices of two data sets with a filtering procedure which highlights some of the cluster structure they contain, and we analyze the consequences of such filtering on eigenvalue spectra. We show that empirically observed eigenvalue bulks emerge as superpositions of smaller structures, which in turn emerge as a consequence of cross correlations between stocks. We interpret and corroborate these findings in terms of factor models, and we compare empirical spectra to those predicted by random matrix theory for such models.
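
    A quick way to see what this record means by "noise" in correlation spectra is to compare empirical eigenvalues with the Marchenko-Pastur band predicted by random matrix theory for a pure-noise correlation matrix. The sketch below does exactly that on synthetic i.i.d. returns (the stock count, sample length, and data are all placeholders); with real market data, eigenvalues above the upper bound signal genuine cross-correlation structure:

```python
import numpy as np

rng = np.random.default_rng(10)
N, T = 100, 500                        # stocks, observations (hypothetical)
returns = rng.standard_normal((T, N))  # pure-noise benchmark returns
C = np.corrcoef(returns, rowvar=False)
eig = np.linalg.eigvalsh(C)

# Marchenko-Pastur bounds for a random correlation matrix with Q = T/N
Q = T / N
lam_minus = (1 - np.sqrt(1 / Q)) ** 2
lam_plus = (1 + np.sqrt(1 / Q)) ** 2
outside = int(np.sum((eig < lam_minus) | (eig > lam_plus)))
print(f"MP band [{lam_minus:.2f}, {lam_plus:.2f}], eigenvalues outside: {outside}")
```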

  20. GPR random noise reduction using BPD and EMD

    NASA Astrophysics Data System (ADS)

    Ostoori, Roya; Goudarzi, Alireza; Oskooi, Behrooz

    2018-04-01

    Ground-penetrating radar (GPR) is a high-frequency technology for accurately exploring near-surface objects and structures; the high-frequency antenna of the GPR system makes it a high-resolution method compared with other geophysical methods. The frequency range of recorded GPR data is so wide that recording random noise during acquisition is inevitable. This kind of noise comes from unknown sources, and its correlation with adjacent traces is nearly zero; this characteristic, together with the high resolution of the GPR system, makes denoising very important for interpretable results. The main objective of this paper is to reduce GPR random noise using basis pursuit denoising combined with empirical mode decomposition. Our results show that empirical mode decomposition in combination with basis pursuit denoising (BPD) provides satisfactory outputs, thanks to the sifting process, compared with a time-domain implementation of the BPD method, on both synthetic and real examples. Because of its high computational cost, the BPD-empirical mode decomposition technique should only be used for heavily noisy signals.

  1. Correlation matrix renormalization theory for correlated-electron materials with application to the crystalline phases of atomic hydrogen

    DOE PAGES

    Zhao, Xin; Liu, Jun; Yao, Yong-Xin; ...

    2018-01-23

    Developing accurate and computationally efficient methods to calculate the electronic structure and total energy of correlated-electron materials has been a very challenging task in condensed matter physics and materials science. Recently, we have developed a correlation matrix renormalization (CMR) method which does not assume any empirical Coulomb interaction U parameters and does not have double counting problems in the ground-state total energy calculation. The CMR method has been demonstrated to be accurate in describing both the bonding and bond breaking behaviors of molecules. In this study, we extend the CMR method to the treatment of electron correlations in periodic solid systems. By using a linear hydrogen chain as a benchmark system, we show that the results from the CMR method compare very well with those obtained recently by accurate quantum Monte Carlo (QMC) calculations. We also study the equation of states of three-dimensional crystalline phases of atomic hydrogen. We show that the results from the CMR method agree much better with the available QMC data in comparison with those from density functional theory and Hartree-Fock calculations.

  2. Correlation matrix renormalization theory for correlated-electron materials with application to the crystalline phases of atomic hydrogen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Xin; Liu, Jun; Yao, Yong-Xin

    Developing accurate and computationally efficient methods to calculate the electronic structure and total energy of correlated-electron materials has been a very challenging task in condensed matter physics and materials science. Recently, we have developed a correlation matrix renormalization (CMR) method which does not assume any empirical Coulomb interaction U parameters and does not have double counting problems in the ground-state total energy calculation. The CMR method has been demonstrated to be accurate in describing both the bonding and bond breaking behaviors of molecules. In this study, we extend the CMR method to the treatment of electron correlations in periodic solid systems. By using a linear hydrogen chain as a benchmark system, we show that the results from the CMR method compare very well with those obtained recently by accurate quantum Monte Carlo (QMC) calculations. We also study the equation of states of three-dimensional crystalline phases of atomic hydrogen. We show that the results from the CMR method agree much better with the available QMC data in comparison with those from density functional theory and Hartree-Fock calculations.

  3. An empirical comparative study on biological age estimation algorithms with an application of Work Ability Index (WAI).

    PubMed

    Cho, Il Haeng; Park, Kyung S; Lim, Chang Joo

    2010-02-01

    In this study, we describe the characteristics of five different biological age (BA) estimation algorithms: (i) multiple linear regression, (ii) principal component analysis, and the somewhat unique methods developed by (iii) Hochschild, (iv) Klemera and Doubal, and (v) a variant of Klemera and Doubal's method. The objective of this study is to find the most appropriate method of BA estimation by examining the association between the Work Ability Index (WAI) and the differences of each algorithm's estimates from chronological age (CA). The WAI was found to be a measure that reflects an individual's current health status rather than deterioration tied strictly to chronological age. Experiments were conducted on 200 Korean male participants using a BA estimation system designed to be non-invasive, simple to operate, and based on human function. Using the empirical data, BA estimation as well as various analyses, including correlation analysis and discriminant function analysis, were performed. As a result, the empirical data confirmed that Klemera and Doubal's method, with uncorrelated variables from principal component analysis, produces relatively reliable and acceptable BA estimates. Copyright © 2009 Elsevier Ireland Ltd. All rights reserved.

  4. Energy performance assessment with empirical methods: application of energy signature

    NASA Astrophysics Data System (ADS)

    Belussi, L.; Danza, L.; Meroni, I.; Salamone, F.

    2015-03-01

    Energy efficiency and the reduction of building energy consumption are deeply felt issues at both the Italian and international levels, and the recent regulatory framework sets stringent limits on the energy performance of buildings. Alongside the adoption of these principles, several methods have been developed to address building energy consumption, among which the simplified energy audit is intended to identify anomalies in the building system, provide helpful guidance for energy refurbishment, and raise end users' awareness. The Energy Signature is an operational tool within these methodologies: an evaluation method in which energy consumption is correlated with climatic variables, representing the actual energy behaviour of the building. Beyond that purpose, the Energy Signature can also be used as an empirical tool to determine the real performance of the technical elements; this latter aspect is illustrated in this article.
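
    In its simplest form, an energy signature is a regression of consumption on a climatic driver such as outdoor temperature or degree days. The sketch below fits that one-line model to invented daily monitoring data and shows how the residuals double as a simple anomaly detector; the temperatures, consumption values, and flagging threshold are all illustrative:

```python
import numpy as np

# hypothetical monitoring data: daily mean outdoor temperature vs. daily heating energy
temp = np.array([-2.0, 0.0, 3.0, 5.0, 8.0, 11.0, 14.0, 16.0, 18.0, 21.0])   # deg C
energy = np.array([95.0, 88.0, 75.0, 68.0, 55.0, 43.0, 30.0, 22.0, 18.0, 16.0])  # kWh/day

# energy signature: linear regression of consumption on outdoor temperature
slope, intercept = np.polyfit(temp, energy, 1)
print(f"signature: E = {intercept:.1f} {slope:+.1f} * T  (kWh/day)")

# the fitted line doubles as a fault detector: days far above it suggest anomalies
resid = energy - (intercept + slope * temp)
print("flagged days:", np.nonzero(np.abs(resid) > 2 * resid.std())[0])
```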

  5. Application of empirical Bayes methods to predict the rate of decline in ERG at the individual level among patients with retinitis pigmentosa.

    PubMed

    Qiu, Weiliang; Sandberg, Michael A; Rosner, Bernard

    2018-05-31

    Retinitis pigmentosa is one of the most common forms of inherited retinal degeneration. The electroretinogram (ERG) can be used to determine the severity of retinitis pigmentosa: the lower the ERG amplitude, the more severe the disease. In practice, for career, lifestyle, and treatment counseling, it is of interest to predict the ERG amplitude of a patient at a future time. One approach is prediction based on the average rate of decline across patients; however, there is considerable variation both in initial amplitude and in rate of decline. In this article, we propose an empirical Bayes (EB) approach that incorporates the variation in initial amplitude and rate of decline when predicting ERG amplitude at the individual level. We applied the EB method to a collection of ERGs from 898 patients with 3 or more visits over 5 or more years of follow-up, tested in the Berman-Gund Laboratory, and observed that the predicted values at the last (kth) visit obtained with the proposed method from data for the first k-1 visits are highly correlated with the observed values at the kth visit (Spearman correlation = 0.93), and have a higher correlation with the observed values than predictions based on either the population-average decline rate or the individual decline rate. The mean square errors of the predictions obtained by the EB method are also smaller than those of the other methods. Copyright © 2018 John Wiley & Sons, Ltd.
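
    The intuition behind such an EB predictor is standard random-effects shrinkage: each patient's noisy decline rate is pulled toward the group mean, with noisier estimates pulled harder. The sketch below shows that generic recipe with a method-of-moments variance estimate; the slopes and standard errors are made-up toy numbers, not the paper's fitted model:

```python
import numpy as np

def eb_shrink_slopes(b, se):
    """Empirical Bayes shrinkage of per-patient decline rates toward the group mean.
    b: OLS slope per patient; se: its standard error (equal-length arrays)."""
    b, se = np.asarray(b, float), np.asarray(se, float)
    b_bar = np.average(b, weights=1 / se**2)                 # precision-weighted mean
    tau2 = max(np.var(b, ddof=1) - np.mean(se**2), 0.0)      # between-patient variance
    w = tau2 / (tau2 + se**2)                                # reliability weight
    return w * b + (1 - w) * b_bar                           # noisy slopes shrink more

# toy example: annual log-ERG decline rates estimated from short follow-up series
b = np.array([-0.12, -0.02, -0.25, -0.08, -0.15])
se = np.array([0.03, 0.10, 0.12, 0.02, 0.05])
print(eb_shrink_slopes(b, se).round(3))
# a future amplitude is then projected as: last amplitude + shrunk_slope * years elapsed
```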

  6. AN EMPIRICAL INVESTIGATION OF THE EFFECTS OF NONNORMALITY UPON THE SAMPLING DISTRIBUTION OF THE PRODUCT MOMENT CORRELATION COEFFICIENT.

    ERIC Educational Resources Information Center

    HJELM, HOWARD; NORRIS, RAYMOND C.

    THE STUDY EMPIRICALLY DETERMINED THE EFFECTS OF NONNORMALITY UPON SOME SAMPLING DISTRIBUTIONS OF THE PRODUCT MOMENT CORRELATION COEFFICIENT (PMCC). SAMPLING DISTRIBUTIONS OF THE PMCC WERE OBTAINED BY DRAWING NUMEROUS SAMPLES FROM CONTROL AND EXPERIMENTAL POPULATIONS HAVING VARIOUS DEGREES OF NONNORMALITY AND BY CALCULATING CORRELATION COEFFICIENTS…

  7. A Cutting Pattern Recognition Method for Shearers Based on Improved Ensemble Empirical Mode Decomposition and a Probabilistic Neural Network

    PubMed Central

    Xu, Jing; Wang, Zhongbin; Tan, Chao; Si, Lei; Liu, Xinhua

    2015-01-01

    In order to guarantee the stable operation of shearers and promote the construction of automatic coal mining working faces, an online cutting pattern recognition method with high accuracy and speed, based on Improved Ensemble Empirical Mode Decomposition (IEEMD) and a Probabilistic Neural Network (PNN), is proposed. An industrial microphone is installed on the shearer and the cutting sound is collected as the recognition criterion, overcoming the large size, contact measurement and low identification rate of traditional detectors. To avoid end-point effects and remove undesirable intrinsic mode function (IMF) components from the initial signal, IEEMD is applied to the sound. End-point continuation based on stored historical data is performed first to overcome the end-point effect. Next, the average correlation coefficient, calculated from the correlation of the first IMF with the others, is introduced to select the essential IMFs. Then the energy and standard deviation of the remaining IMFs are extracted as features and a PNN is applied to classify the cutting patterns. Finally, a simulation example, with an accuracy of 92.67%, and an industrial application demonstrate the efficiency and correctness of the proposed method. PMID:26528985
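
    A rough sketch of the decomposition and feature-extraction steps, assuming the PyEMD ("EMD-signal") package for EEMD. The correlation rule below is a simplified stand-in for the paper's average-correlation criterion, and the PNN classification step is omitted.

    ```python
    import numpy as np
    from PyEMD import EEMD  # assumed available: pip install EMD-signal

    def cutting_sound_features(signal, corr_floor=0.1):
        """Decompose a cutting-sound frame with EEMD, keep IMFs whose
        correlation with the raw signal exceeds a floor, and return the
        energy and standard deviation of each retained IMF as features
        for a downstream classifier such as a PNN."""
        sig = np.asarray(signal, dtype=float)
        imfs = EEMD().eemd(sig)
        feats = []
        for imf in imfs:
            if abs(np.corrcoef(imf, sig)[0, 1]) > corr_floor:
                feats += [np.sum(imf**2), np.std(imf)]  # energy, std
        return np.asarray(feats)
    ```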

  8. Improved estimation of subject-level functional connectivity using full and partial correlation with empirical Bayes shrinkage.

    PubMed

    Mejia, Amanda F; Nebel, Mary Beth; Barber, Anita D; Choe, Ann S; Pekar, James J; Caffo, Brian S; Lindquist, Martin A

    2018-05-15

    Reliability of subject-level resting-state functional connectivity (FC) is determined in part by the statistical techniques employed in its estimation. Methods that pool information across subjects to inform estimation of subject-level effects (e.g., Bayesian approaches) have been shown to enhance reliability of subject-level FC. However, fully Bayesian approaches are computationally demanding, while empirical Bayesian approaches typically rely on using repeated measures to estimate the variance components in the model. Here, we avoid the need for repeated measures by proposing a novel measurement error model for FC describing the different sources of variance and error, which we use to perform empirical Bayes shrinkage of subject-level FC towards the group average. In addition, since the traditional intra-class correlation coefficient (ICC) is inappropriate for biased estimates, we propose a new reliability measure denoted the mean squared error intra-class correlation coefficient (ICC_MSE) to properly assess the reliability of the resulting (biased) estimates. We apply the proposed techniques to test-retest resting-state fMRI data on 461 subjects from the Human Connectome Project to estimate connectivity between 100 regions identified through independent components analysis (ICA). We consider both correlation and partial correlation as the measure of FC and assess the benefit of shrinkage for each measure, as well as the effects of scan duration. We find that shrinkage estimates of subject-level FC exhibit substantially greater reliability than traditional estimates across various scan durations, even for the most reliable connections and regardless of connectivity measure. Additionally, we find partial correlation reliability to be highly sensitive to the choice of penalty term, and to be generally worse than that of full correlations except for certain connections and a narrow range of penalty values. This suggests that the penalty needs to be chosen carefully when using partial correlations. Copyright © 2018. Published by Elsevier Inc.

  9. Analytical Fuselage and Wing Weight Estimation of Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Chambers, Mark C.; Ardema, Mark D.; Patron, Anthony P.; Hahn, Andrew S.; Miura, Hirokazu; Moore, Mark D.

    1996-01-01

    A method of estimating the load-bearing fuselage weight and wing weight of transport aircraft based on fundamental structural principles has been developed. This method of weight estimation represents a compromise between the rapid assessment of component weight using empirical methods based on actual weights of existing aircraft, and detailed, but time-consuming, analysis using the finite element method. The method was applied to eight existing subsonic transports for validation and correlation. The resulting computer program, PDCYL, has been integrated into the weights-calculating module of the AirCraft SYNThesis (ACSYNT) computer program. ACSYNT has traditionally used only empirical weight estimation methods; PDCYL adds to ACSYNT a rapid, accurate means of assessing the fuselage and wing weights of unconventional aircraft. PDCYL also allows flexibility in the choice of structural concept, as well as a direct means of determining the impact of advanced materials on structural weight. Using statistical analysis techniques, relations between the load-bearing fuselage and wing weights calculated by PDCYL and corresponding actual weights were determined.

  10. Analysis of network clustering behavior of the Chinese stock market

    NASA Astrophysics Data System (ADS)

    Chen, Huan; Mai, Yong; Li, Sai-Ping

    2014-11-01

    Random Matrix Theory (RMT) and the decomposition of the correlation matrix are employed to analyze the spatial structure of stock interactions and collective behavior in the Shanghai and Shenzhen stock markets in China. The results show that there exist prominent sector structures, with subsectors including the Real Estate (RE), Commercial Banks (CB), Pharmaceuticals (PH), Distillers & Vintners (DV) and Steel (ST) industries. Furthermore, the RE and CB subsectors are mostly anti-correlated. We further study the temporal behavior of the dataset and find that while the sector structures are relatively stable from 2007 through 2013, the correlation between the real estate and commercial bank stocks shows large variations. By employing the ensemble empirical mode decomposition (EEMD) method, we show that this anti-correlation behavior is closely related to the monetary and austerity policies of the Chinese government during the period of study.
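
    The RMT step rests on comparing the spectrum of the empirical correlation matrix with the Marchenko-Pastur prediction for uncorrelated data; eigenpairs above the upper edge carry the market and sector structure. A minimal sketch, not the authors' full decomposition:

    ```python
    import numpy as np

    def rmt_informative_modes(returns):
        """Return the eigenvalues/eigenvectors of the equal-time
        correlation matrix lying above the Marchenko-Pastur upper
        edge lambda_max = (1 + sqrt(N/T))**2.

        returns : (T, N) array of stock returns
        """
        T, N = returns.shape
        C = np.corrcoef(returns, rowvar=False)
        evals, evecs = np.linalg.eigh(C)
        lam_max = (1 + np.sqrt(N / T))**2
        keep = evals > lam_max
        return evals[keep], evecs[:, keep]
    ```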

  11. The scale-dependent market trend: Empirical evidences using the lagged DFA method

    NASA Astrophysics Data System (ADS)

    Li, Daye; Kou, Zhun; Sun, Qiankun

    2015-09-01

    In this paper we conduct an empirical study and test the efficiency of 44 important market indexes across multiple time scales. A modified method based on lagged detrended fluctuation analysis is utilized to maximize the information about long-term correlations contained in the non-zero lags while keeping the margin of error small when measuring the local Hurst exponent. Our empirical results illustrate a common pattern in the majority of the measured market indexes: they tend to be persistent (local Hurst exponent > 0.5) at small time scales, whereas they display significant anti-persistent characteristics at large time scales. Moreover, not only the stock markets but also the foreign exchange markets share this pattern. Considering that the exchange markets are only weakly synchronized with economic cycles, it can be concluded that economic cycles can cause anti-persistence at large time scales but that other factors are also at work. The empirical results support the view that financial markets are multi-fractal, and they indicate that deviations from efficiency and the type of model needed to describe the trend of market prices depend on the forecasting horizon.
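
    For orientation, a plain (unlagged) DFA estimate of the Hurst exponent can be sketched as follows; the paper's lagged modification is not reproduced, and the scale list is illustrative.

    ```python
    import numpy as np

    def dfa_hurst(x, scales=(8, 16, 32, 64, 128)):
        """Detrended fluctuation analysis: integrate the series,
        linearly detrend it in windows of size s, and fit the scaling
        F(s) ~ s**H on a log-log plot."""
        y = np.cumsum(np.asarray(x, float) - np.mean(x))  # profile
        F = []
        for s in scales:
            n = len(y) // s
            segs = y[:n * s].reshape(n, s)
            t = np.arange(s)
            res = [seg - np.polyval(np.polyfit(t, seg, 1), t)
                   for seg in segs]
            F.append(np.sqrt(np.mean(np.concatenate(res)**2)))
        H = np.polyfit(np.log(scales), np.log(F), 1)[0]
        return H  # H > 0.5: persistent; H < 0.5: anti-persistent
    ```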

  12. Noise sensitivity of portfolio selection in constant conditional correlation GARCH models

    NASA Astrophysics Data System (ADS)

    Varga-Haszonits, I.; Kondor, I.

    2007-11-01

    This paper investigates the efficiency of minimum variance portfolio optimization for stock price movements following the Constant Conditional Correlation GARCH process proposed by Bollerslev. Simulations show that the quality of portfolio selection can be improved substantially by computing optimal portfolio weights from conditional covariances instead of unconditional ones. Measurement noise can be further reduced by applying a filtering method to the conditional correlation matrix (such as Random Matrix Theory based filtering). As empirical support for the simulation results, the analysis is also carried out on a time series of S&P 500 stock prices.
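
    The optimization itself is standard once the conditional covariance is available: under CCC-GARCH, Sigma_t = D_t R D_t, where D_t holds the conditional volatilities. A minimal sketch of the resulting minimum variance weights (GARCH volatility filtering and any RMT cleaning of R are assumed to happen elsewhere):

    ```python
    import numpy as np

    def min_variance_weights(vols_t, R):
        """Minimum variance weights from a CCC-GARCH covariance
        Sigma_t = D_t R D_t, with D_t = diag(conditional vols) and
        R the constant conditional correlation matrix."""
        D = np.diag(vols_t)
        sigma = D @ R @ D
        ones = np.ones(len(vols_t))
        w = np.linalg.solve(sigma, ones)
        return w / w.sum()  # fully invested, weights sum to one
    ```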

  13. Developing Reading Identities: Understanding Issues of Motivation within the Reading Workshop

    ERIC Educational Resources Information Center

    Miller, Leigh Ann

    2013-01-01

    Empirical evidence suggests a correlation between motivation and reading achievement as well as a decline in motivation as students progress through the grades. In order to address this issue, it is necessary to determine the instructional methods that promote motivation and identity development in reading. This study examines the motivation and…

  14. Do We Really Know What Makes Educational Software Effective? A Call for Empirical Research on Effectiveness.

    ERIC Educational Resources Information Center

    Jolicoeur, Karen; Berger, Dale E.

    1986-01-01

    Examination of methods used by two software review services in evaluating microcomputer courseware--EPIE (Educational Products Information Exchange) and MicroSIFT (Microcomputer Software and Information for Teachers)--found low correlations between their recommendations for 82 programs. This lack of agreement casts doubts on the usefulness of…

  15. Optical remote sensing and correlation of office equipment functional state and stress levels via power quality disturbances inefficiencies

    NASA Astrophysics Data System (ADS)

    Sternberg, Oren; Bednarski, Valerie R.; Perez, Israel; Wheeland, Sara; Rockway, John D.

    2016-09-01

    Non-invasive optical techniques pertaining to the remote sensing of power quality disturbances (PQD) are part of an emerging technology field typically dominated by radio frequency (RF) and invasive techniques. Algorithms and methods to analyze and address PQD, such as probabilistic neural networks and fully informed particle swarms, have been explored in industry and academia. Such methods are tuned to work with RF equipment and electronics in existing power grids. As both commercial and defense assets are heavily power-dependent, understanding electrical transients and failure events using non-invasive detection techniques is crucial. In this paper we correlate power quality empirical models with the observed optical response. We also empirically demonstrate a first-order approach to mapping household, office and commercial equipment PQD to user functions and stress levels. We employ a physics-based image and signal processing approach, demonstrating measured non-invasive (remote sensing) techniques to detect and map the base frequency associated with the power source to the various PQD on a calibrated source.

  16. Improving Global Models of Remotely Sensed Ocean Chlorophyll Content Using Partial Least Squares and Geographically Weighted Regression

    NASA Astrophysics Data System (ADS)

    Gholizadeh, H.; Robeson, S. M.

    2015-12-01

    Empirical models have been widely used to estimate global chlorophyll content from remotely sensed data. Here, we focus on the standard NASA empirical models that use blue-green band ratios. These band-ratio ocean color (OC) algorithms are fourth-order polynomials whose parameters (i.e., coefficients) are estimated from the NASA bio-Optical Marine Algorithm Data set (NOMAD). Most of the points in this data set have been sampled from tropical and temperate regions, yet polynomial coefficients obtained from it are used to estimate chlorophyll content in all ocean regions, despite differing properties such as sea-surface temperature, salinity, and downwelling/upwelling patterns. Further, the polynomial terms in these models are highly correlated. In sum, the limitations of these empirical models are as follows: 1) the independent variables within the empirical models, in their current form, are correlated (multicollinear), and 2) current algorithms are global, resting on a spatial stationarity assumption, and hence independent of location. The multicollinearity problem is resolved by using partial least squares (PLS). PLS, which transforms the data into a set of independent components, can be considered a combination of principal component regression (PCR) and multiple regression. Geographically weighted regression (GWR) is also used to investigate the validity of the spatial stationarity assumption. GWR solves a regression model at each sample point using the observations within its neighbourhood. PLS results show that the empirical method underestimates chlorophyll content in high latitudes, including the Southern Ocean region, when compared to PLS (see Figure 1). Cluster analysis of GWR coefficients also shows that the spatial stationarity assumption in the empirical models is unlikely to be valid.
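
    For concreteness, the band-ratio algorithms under discussion have the following form; the five polynomial coefficients must be supplied by the user (the operational values are fitted to NOMAD and are not reproduced here).

    ```python
    import numpy as np

    def band_ratio_chlorophyll(Rrs_blue, Rrs_green, coeffs):
        """OCx-style fourth-order band-ratio algorithm:
        log10(chl) = a0 + a1*R + a2*R**2 + a3*R**3 + a4*R**4,
        with R = log10(max(blue Rrs) / green Rrs).

        Rrs_blue : blue-band remote-sensing reflectances, stacked
                   along axis 0 (one row per blue band)
        coeffs   : (a0, ..., a4), user-supplied
        """
        R = np.log10(np.max(Rrs_blue, axis=0) / Rrs_green)
        a0, a1, a2, a3, a4 = coeffs
        return 10.0 ** (a0 + a1*R + a2*R**2 + a3*R**3 + a4*R**4)
    ```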

  17. Development of Advanced Methods of Structural and Trajectory Analysis for Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Ardema, Mark D.

    1996-01-01

    In this report the author describes: (1) development of advanced methods of structural weight estimation, and (2) development of advanced methods of flight path optimization. A method of estimating the load-bearing fuselage weight and wing weight of transport aircraft based on fundamental structural principles has been developed. This method of weight estimation represents a compromise between the rapid assessment of component weight using empirical methods based on actual weights of existing aircraft and detailed, but time-consuming, analysis using the finite element method. The method was applied to eight existing subsonic transports for validation and correlation. The resulting computer program, PDCYL, has been integrated into the weights-calculating module of the AirCraft SYNThesis (ACSYNT) computer program. ACSYNT has traditionally used only empirical weight estimation methods; PDCYL adds to ACSYNT a rapid, accurate means of assessing the fuselage and wing weights of unconventional aircraft. PDCYL also allows flexibility in the choice of structural concept, as well as a direct means of determining the impact of advanced materials on structural weight.

  18. A New Strategy for ECG Baseline Wander Elimination Using Empirical Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Shahbakhti, Mohammad; Bagheri, Hamed; Shekarchi, Babak; Mohammadi, Somayeh; Naji, Mohsen

    2016-06-01

    Electrocardiogram (ECG) signals can be affected by various artifacts and noises with biological and external sources. Baseline wander (BW) is a low-frequency artifact that may be caused by breathing, body movements and loose sensor contact. In this paper, a novel method based on empirical mode decomposition (EMD) for the removal of baseline noise from ECG is presented. Compared to other EMD-based methods, the novelty of this research is to find the optimal number of decomposition levels for ECG BW de-noising using the mean power frequency (MPF), while also reducing processing time. To evaluate the performance of the proposed method, a fifth-order Butterworth high-pass filter (BHPF) with a cut-off frequency of 0.5 Hz and a wavelet approach are applied for comparison. Three performance indices, signal-to-noise ratio (SNR), mean square error (MSE) and correlation coefficient (CC), computed between pure and filtered signals, are used to qualify the presented techniques. The results suggest that the EMD-based method outperforms the other filtering methods.
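
    A simplified sketch of the EMD baseline-removal idea, again assuming the PyEMD package: baseline wander lives in the slowest components of the decomposition, and the paper's contribution is choosing how many of them to drop via the mean power frequency. Here that count is fixed by hand.

    ```python
    import numpy as np
    from PyEMD import EMD  # assumed available: pip install EMD-signal

    def remove_baseline_wander(ecg, n_slow=2):
        """Subtract an EMD-based baseline estimate: the sum of the
        n_slow slowest components (the last rows of the decomposition,
        which hold the low-frequency trend and residue)."""
        sig = np.asarray(ecg, dtype=float)
        imfs = EMD().emd(sig)
        baseline = imfs[-n_slow:].sum(axis=0)
        return sig - baseline, baseline
    ```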

  19. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    PubMed Central

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent of sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent of sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology. PMID:25192357

  20. Optimal thresholds for the estimation of area rain-rate moments by the threshold method

    NASA Technical Reports Server (NTRS)

    Short, David A.; Shimizu, Kunio; Kedem, Benjamin

    1993-01-01

    Optimization of the threshold method, achieved by determination of the threshold that maximizes the correlation between an area-average rain-rate moment and the area coverage of rain rates exceeding the threshold, is demonstrated empirically and theoretically. Empirical results for a sequence of GATE radar snapshots show optimal thresholds of 5 and 27 mm/h for the first and second moments, respectively. Theoretical optimization of the threshold method by the maximum-likelihood approach of Kedem and Pavlopoulos (1991) predicts optimal thresholds near 5 and 26 mm/h for lognormally distributed rain rates with GATE-like parameters. The agreement between theory and observations suggests that the optimal threshold can be understood as arising due to sampling variations, from snapshot to snapshot, of a parent rain-rate distribution. Optimal thresholds for gamma and inverse Gaussian distributions are also derived and compared.
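
    The empirical side of this optimization is easy to express directly: for each candidate threshold, correlate the fractional coverage above the threshold with the area-average moment across snapshots, and keep the argmax. A minimal sketch with illustrative argument names:

    ```python
    import numpy as np

    def optimal_threshold(snapshots, candidates, moment=1):
        """Threshold maximizing the correlation, across snapshots,
        between exceedance coverage and the area-average rain-rate
        moment.

        snapshots  : (n_maps, n_pixels) array of rain rates (mm/h)
        candidates : thresholds to try (mm/h); moment: 1 or 2
        """
        m = np.mean(snapshots**moment, axis=1)  # one value per map
        def corr(tau):
            cover = np.mean(snapshots > tau, axis=1)
            return np.corrcoef(cover, m)[0, 1]
        return max(candidates, key=corr)
    ```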

  1. Asymmetric MF-DCCA method based on risk conduction and its application in the Chinese and foreign stock markets

    NASA Astrophysics Data System (ADS)

    Cao, Guangxi; Han, Yan; Li, Qingchen; Xu, Wei

    2017-02-01

    The acceleration of economic globalization has gradually revealed linkages among the stock markets of various countries, producing a risk conduction effect. An asymmetric MF-DCCA method based on the direction of risk conduction (DMF-ADCCA) is constructed from the traditional MF-DCCA. To make the empirical results more objective and robust, this study selects stock index data from China, the US, Germany, India, and Brazil from January 2011 to September 2014, using the asymmetric MF-DCCA method based on different risk conduction effects and nonlinear Granger causality tests to study the asymmetric cross-correlation between domestic and foreign stock markets. Empirical results indicate the existence of a bidirectional conduction effect between domestic and foreign stock markets, with a greater degree of influence from foreign markets on the domestic market than from the domestic market on foreign ones.

  2. Roles of Engineering Correlations in Hypersonic Entry Boundary Layer Transition Prediction

    NASA Technical Reports Server (NTRS)

    Campbell, Charles H.; King, Rudolph A.; Kergerise, Michael A.; Berry, Scott A.; Horvath, Thomas J.

    2010-01-01

    Efforts to design and operate hypersonic entry vehicles are constrained by many considerations that involve all aspects of an entry vehicle system. One of the more significant physical phenomena affecting entry trajectory and thermal protection system design is the occurrence of boundary layer transition from a laminar to a turbulent state. During the Space Shuttle Return To Flight activity following the loss of Columbia and her crew of seven, NASA's entry aerothermodynamics community implemented an engineering-correlation-based framework for the prediction of boundary layer transition on the Orbiter. The methodology for this implementation relies upon the framework of correlation techniques that have been in use for several decades. What makes the Orbiter boundary layer transition correlation implementation unique is that a statistically significant data set was acquired in multiple ground test facilities, flight data exist to assist in establishing a better correlation, and the framework was founded upon state-of-the-art chemical nonequilibrium Navier-Stokes flow field simulations. The basic tenets that guided the formulation and implementation of the Orbiter Return To Flight boundary layer transition prediction capability will be reviewed as a recommended format for future empirical correlation efforts. The validity of this approach has since been demonstrated by very favorable comparison with recent entry flight testing performed with the Orbiter Discovery, which will be graphically summarized. These flight data can provide a means to validate discrete protuberance engineering correlation approaches, as well as high fidelity prediction methods, to higher confidence. The results of these Orbiter engineering and flight test activities only serve to reinforce the essential role that engineering correlations currently exercise in the design and operation of entry vehicles. The framework of information related to the Orbiter empirical boundary layer transition prediction capability will be utilized to establish a fresh perspective on this role, to illustrate how quantitative statistical evaluations of empirical correlations can and should be used to assess accuracy, and to discuss what the authors perceive as a recent heightened interest in the application of high fidelity numerical modeling of boundary layer transition. Concrete results will also be developed related to empirical boundary layer transition onset correlations. This will include assessment of the discrete protuberance boundary layer transition onset data assembled for the Orbiter configuration during post-Columbia Return To Flight. Assessment of these data will conclude that correlations based on the momentum thickness Reynolds number have superior coefficients and uncertainty in comparison to those based on roughness height Reynolds numbers, also known as Re_k or Re_kk. In addition, linear regression results from roughness height Reynolds number based correlations will be evaluated, leading to a hypothesis that non-continuum effects play a role in the processes associated with incipient boundary layer transition on discrete protuberances.

  3. Compare diagnostic tests using transformation-invariant smoothed ROC curves

    PubMed Central

    Tang, Liansheng; Du, Pang; Wu, Chengqing

    2012-01-01

    The receiver operating characteristic (ROC) curve, which plots true positive rates against false positive rates as the threshold varies, is an important tool for evaluating biomarkers in diagnostic medicine studies. By definition, an ROC curve is monotone increasing from 0 to 1 and is invariant to any monotone transformation of test results. It is also typically smooth to some degree when test results from the diseased and non-diseased subjects follow continuous distributions. Most existing ROC curve estimation methods do not guarantee all of these properties. One of the exceptions is Du and Tang (2009), which applies a monotone spline regression procedure to empirical ROC estimates. However, their method does not consider the inherent correlations between empirical ROC estimates, which makes the derivation of the asymptotic properties very difficult. In this paper we propose a penalized weighted least squares estimation method, which incorporates the covariance between empirical ROC estimates as a weight matrix. The resulting estimator satisfies all the aforementioned properties, and we show that it is also consistent. A resampling approach is then used to extend our method to comparisons of two or more diagnostic tests. Our simulations show a significantly improved performance over the existing method, especially for steep ROC curves. We then apply the proposed method to a cancer diagnostic study that compares several newly developed diagnostic biomarkers to a traditional one. PMID:22639484

  4. Measurement and correlation of the solubility of gossypol acetic acid and gossypol acetic acid of optical activity in different solvents

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Tang, H.; Liu, X. Y.; Zhai, X.; Yao, X. C.

    2018-01-01

    The equilibrium method was used to measure the solubility of gossypol acetic acid and gossypol acetic acid of optical activity in isopropyl alcohol, ethanol, acetic acid and ethyl acetate at temperatures from 288.15 to 315.15 K. The empirical equation and the Apelblat equation model were adopted to correlate the experimental data. For gossypol acetic acid, the root-mean-square deviations (RMSD) were observed in the range of 0.023-4.979 and 0.0112-0.614 for the empirical equation and the Apelblat equation, respectively. For gossypol acetic acid of optical activity, the RMSD were observed in the range of 0.021-2.211 and 0.021-2.243 for the empirical equation and the Apelblat equation, respectively. The maximum relative average deviation was 7.5%. Both equations offered an accurate mathematical expression of the experimental results, and the calculated solubility agreed well with the experimental solubility in most of the solvents. This study provides valuable data not only for optimizing the purification of gossypol acetic acid of optical activity in industry but also for further theoretical studies.
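
    The (modified) Apelblat correlation referred to above is ln x = A + B/T + C ln T. A minimal fitting sketch with SciPy; the solubility numbers are placeholders, not the measured gossypol data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def apelblat(T, A, B, C):
        """Modified Apelblat equation, returning ln x for T in kelvin."""
        return A + B / T + C * np.log(T)

    # Placeholder data: temperatures (K) and mole-fraction solubilities.
    T = np.array([288.15, 293.15, 298.15, 303.15, 308.15, 313.15])
    x = np.array([0.0021, 0.0028, 0.0037, 0.0049, 0.0063, 0.0081])
    params, _ = curve_fit(apelblat, T, np.log(x))
    rmsd = np.sqrt(np.mean((np.log(x) - apelblat(T, *params))**2))
    ```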

  5. What can we learn about dispersion from the conformer surface of n-pentane?

    PubMed

    Martin, Jan M L

    2013-04-11

    In earlier work [Gruzman, D.; Karton, A.; Martin, J. M. L. J. Phys. Chem. A 2009, 113, 11974], we showed that conformer energies in alkanes (and other systems) are highly dispersion-driven and that uncorrected DFT functionals fail badly at reproducing them, while simple empirical dispersion corrections tend to overcorrect. To gain greater insight into the nature of the phenomenon, we have mapped the torsional surface of n-pentane to 10-degree resolution at the CCSD(T)-F12 level near the basis set limit. The data obtained have been decomposed by order of perturbation theory, excitation level, and same-spin vs opposite-spin character. A large number of approximate electronic structure methods have been considered, as well as several empirical dispersion corrections. Our chief conclusions are as follows: (a) the effect of dispersion is dominated by same-spin correlation (or triplet-pair correlation, from a different perspective); (b) singlet-pair correlation is important for the surface, but qualitatively very dissimilar to the dispersion component; (c) single and double excitations beyond third order are essentially unimportant for this surface; (d) connected triple excitations do play a role but are statistically very similar to the MP2 singlet-pair correlation; (e) the form of the damping function is crucial for good performance of empirical dispersion corrections; (f) at least in the lower-energy regions, SCS-MP2 and especially MP2.5 perform very well; (g) novel spin-component scaled double hybrid functionals such as DSD-PBEP86-D2 acquit themselves very well for this problem.

  6. A Bayes linear Bayes method for estimation of correlated event rates.

    PubMed

    Quigley, John; Wilson, Kevin J; Walls, Lesley; Bedford, Tim

    2013-12-01

    Typically, full Bayesian estimation of correlated event rates can be computationally challenging since estimators are intractable. When estimation of event rates represents one activity within a larger modeling process, there is an incentive to develop more efficient inference than provided by a full Bayesian model. We develop a new subjective inference method for correlated event rates based on a Bayes linear Bayes model under the assumption that events are generated from a homogeneous Poisson process. To reduce the elicitation burden we introduce homogenization factors to the model and, as an alternative to a subjective prior, develop an empirical method using the method of moments. Inference under the new method is compared against estimates obtained under a full Bayesian model, which takes a multivariate gamma prior, where the predictive and posterior distributions are derived in terms of well-known functions. The mathematical properties of both models are presented. A simulation study shows that the Bayes linear Bayes inference method and the full Bayesian model provide equally reliable estimates. An illustrative example, motivated by a problem of estimating correlated event rates across different users in a simple supply chain, shows how ignoring the correlation leads to biased estimation of event rates. © 2013 Society for Risk Analysis.
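
    The method-of-moments device mentioned as the empirical alternative to a subjective prior can be sketched for the basic gamma-Poisson case (the Bayes linear Bayes machinery and homogenization factors are not reproduced):

    ```python
    import numpy as np

    def gamma_poisson_eb(counts, exposures):
        """Empirical Bayes for Poisson rates with a gamma(alpha, beta)
        prior fitted by the method of moments: match the mean of the
        raw rates x_i/t_i and their variance in excess of Poisson
        noise, then return posterior mean rates (x+alpha)/(t+beta)."""
        x = np.asarray(counts, float)
        t = np.asarray(exposures, float)
        r = x / t
        m = r.mean()
        v = r.var(ddof=1) - np.mean(m / t)  # excess (between-unit) variance
        v = max(v, 1e-12)                   # guard against a negative moment
        beta = m / v
        alpha = m * beta
        return (x + alpha) / (t + beta)
    ```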

  7. Socio-demographic and academic correlates of clinical reasoning in a dental school in South Africa.

    PubMed

    Postma, T C; White, J G

    2017-02-01

    There are no empirical studies that describe factors that may influence the development of integrated clinical reasoning skills in dental education. Hence, this study examines the association between outcomes of clinical reasoning and differences in instructional design and student factors. Progress test scores, including diagnostic and treatment planning scores, of fourth and fifth year dental students (2009-2011) at the University of Pretoria, South Africa served as the outcome measures in stepwise linear regression analyses. These scores were correlated with the instructional design (lecture-based teaching and learning (LBTL = 0) or case-based teaching and learning (CBTL = 1)), students' grades in Oral Biology, indicators of socio-economic status (SES) and gender. CBTL showed an independent association with progress test scores. Oral Biology scores correlated with diagnostic component scores. Diagnostic component scores correlated with treatment planning scores in the fourth year of study but not in the fifth year of study. SES correlated with progress test scores in year five only, while gender showed no correlation. The empirical evidence gathered in this study provides support for scaffolded inductive teaching and learning methods to develop clinical reasoning skills. Knowledge in Oral Biology and reading skills may be important attributes to develop to ensure that students are able to reason accurately in a clinical setting. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  8. Prediction of Environmental Impact of High-Energy Materials with Atomistic Computer Simulations

    DTIC Science & Technology

    2010-11-01

    …from a training set of compounds. Other methods include Quantitative Structure-Activity Relationship (QSAR) and Quantitative Structure-Property Relationship (QSPR) … the development of QSPR/QSAR models, in contrast to boiling points and critical parameters derived from empirical correlations, to improve …

  9. Meeting Contemporary Statistical Needs of Instructional Communication Research: Modeling Teaching and Learning as a Conditional Process. Forum: The Future of Instructional Communication

    ERIC Educational Resources Information Center

    Goodboy, Alan K.

    2017-01-01

    For decades, instructional communication scholars have relied predominantly on cross-sectional survey methods to generate empirical associations between effective teaching and student learning. These studies typically correlate students' perceptions of their instructor's teaching behaviors with subjective self-report assessments of their own…

  10. Empirically Derived Subtypes of Lifetime Anxiety Disorders: Developmental and Clinical Correlates in U.S. Adolescents

    ERIC Educational Resources Information Center

    Burstein, Marcy; Georgiades, Katholiki; Lamers, Femke; Swanson, Sonja A.; Cui, Lihong; He, Jian-Ping; Avenevoli, Shelli; Merikangas, Kathleen R.

    2012-01-01

    Objective: The current study examined the sex- and age-specific structure and comorbidity of lifetime anxiety disorders among U.S. adolescents. Method: The sample consisted of 2,539 adolescents (1,505 females and 1,034 males) from the National Comorbidity Survey-Adolescent Supplement who met criteria for "Diagnostic and Statistical Manual of…

  11. Are Ataques de Nervios in Puerto Rican Children Associated with Psychiatric Disorder?

    ERIC Educational Resources Information Center

    Guarnaccia, Peter J.; Martinez, Igda; Ramirez, Rafael; Canino, Glorisa

    2005-01-01

    Objective: To provide the first empirical analysis of a cultural syndrome in children by examining the prevalence and psychiatric correlates of ataques de nervios in an epidemiological study of the mental health of children in Puerto Rico. Method: Probability samples of caretakers of children 4-17 years old in the community (N = 1,892; response…

  12. Piaget's epistemic subject and science education: Epistemological vs. psychological issues

    NASA Astrophysics Data System (ADS)

    Kitchener, Richard F.

    1993-06-01

    Many individuals claim that Piaget's theory of cognitive development is empirically false or substantially disconfirmed by empirical research. Although there is substance to such a claim, any such conclusion must address three increasingly problematic issues about the possibility of providing an empirical test of Piaget's genetic epistemology: (1) the empirical underdetermination of theory by empirical evidence, (2) the empirical difficulty of testing competence-type explanations, and (3) the difficulty of empirically testing epistemic norms. This is especially true of a central epistemic construct in Piaget's theory — the epistemic subject. To illustrate how similar problems of empirical testability arise in the physical sciences, I briefly examine the case of Galileo and the correlative difficulty of empirically testing Galileo's laws. I then point out some important epistemological similarities between Galileo and Piaget together with correlative changes needed in science studies methodology. I conclude that many psychologists and science educators have failed to appreciate the difficulty of falsifying Piaget's theory because they have tacitly adopted a philosophy of science at odds with the paradigm-case of Galileo.

  13. Hypothesis testing for differentially correlated features.

    PubMed

    Sheng, Elisa; Witten, Daniela; Zhou, Xiao-Hua

    2016-10-01

    In a multivariate setting, we consider the task of identifying features whose correlations with the other features differ across conditions. Such correlation shifts may occur independently of mean shifts, or differences in the means of the individual features across conditions. Previous approaches for detecting correlation shifts consider features simultaneously, by computing a correlation-based test statistic for each feature. However, since correlations involve two features, such approaches do not lend themselves to identifying which feature is the culprit. In this article, we instead consider a serial testing approach, by comparing columns of the sample correlation matrix across two conditions, and removing one feature at a time. Our method provides a novel perspective and favorable empirical results compared with competing approaches. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  14. A comparison of bivariate, multivariate random-effects, and Poisson correlated gamma-frailty models to meta-analyze individual patient data of ordinal scale diagnostic tests.

    PubMed

    Simoneau, Gabrielle; Levis, Brooke; Cuijpers, Pim; Ioannidis, John P A; Patten, Scott B; Shrier, Ian; Bombardier, Charles H; de Lima Osório, Flavia; Fann, Jesse R; Gjerdingen, Dwenda; Lamers, Femke; Lotrakul, Manote; Löwe, Bernd; Shaaban, Juwita; Stafford, Lesley; van Weert, Henk C P M; Whooley, Mary A; Wittkampf, Karin A; Yeung, Albert S; Thombs, Brett D; Benedetti, Andrea

    2017-11-01

    Individual patient data (IPD) meta-analyses are increasingly common in the literature. In the context of estimating the diagnostic accuracy of ordinal or semi-continuous scale tests, sensitivity and specificity are often reported for a given threshold or a small set of thresholds, and a meta-analysis is conducted via a bivariate approach to account for their correlation. When IPD are available, sensitivity and specificity can be pooled for every possible threshold. Our objective was to compare the bivariate approach, which can be applied separately at every threshold, to two multivariate methods: the ordinal multivariate random-effects model and the Poisson correlated gamma-frailty model. Our comparison was empirical, using IPD from 13 studies that evaluated the diagnostic accuracy of the 9-item Patient Health Questionnaire depression screening tool, and included simulations. The empirical comparison showed that the implementation of the two multivariate methods is more laborious in terms of computational time and sensitivity to user-supplied values compared to the bivariate approach. Simulations showed that ignoring the within-study correlation of sensitivity and specificity across thresholds did not worsen inferences with the bivariate approach compared to the Poisson model. The ordinal approach was not suitable for simulations because the model was highly sensitive to user-supplied starting values. We tentatively recommend the bivariate approach rather than more complex multivariate methods for IPD diagnostic accuracy meta-analyses of ordinal scale tests, although the limited type of diagnostic data considered in the simulation study restricts the generalization of our findings. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Phi-s correlation and dynamic time warping - Two methods for tracking ice floes in SAR images

    NASA Technical Reports Server (NTRS)

    Mcconnell, Ross; Kober, Wolfgang; Kwok, Ronald; Curlander, John C.; Pang, Shirley S.

    1991-01-01

    The authors present two algorithms for performing shape matching on ice floe boundaries in SAR (synthetic aperture radar) images. These algorithms quickly produce a set of ice motion and rotation vectors that can be used to guide a pixel value correlator. The algorithms match a shape descriptor known as the Phi-s curve. The first algorithm uses normalized correlation to match the Phi-s curves, while the second uses dynamic programming to compute an elastic match that better accommodates ice floe deformation. Some empirical data on the performance of the algorithms on Seasat SAR images are presented.

  16. Prediction of unsteady separated flows on oscillating airfoils

    NASA Technical Reports Server (NTRS)

    Mccroskey, W. J.

    1978-01-01

    Techniques for calculating high Reynolds number flow around an airfoil undergoing dynamic stall are reviewed. Emphasis is placed on predicting the values of lift, drag, and pitching moments. Methods discussed include: the discrete potential vortex method; thin boundary layer method; strong interaction between inviscid and viscous flows; and solutions to the Navier-Stokes equations. Empirical methods for estimating unsteady airloads on oscillating airfoils are also described. These methods correlate force and moment data from wind tunnel tests to indicate the effects of various parameters, such as airfoil shape, Mach number, amplitude and frequency of sinusoidal oscillations, mean angle, and type of motion.

  17. Parametrization of semiempirical models against ab initio crystal data: evaluation of lattice energies of nitrate salts.

    PubMed

    Beaucamp, Sylvain; Mathieu, Didier; Agafonov, Viatcheslav

    2005-09-01

    A method to estimate the lattice energies E(latt) of nitrate salts is put forward. First, E(latt) is approximated by its electrostatic component E(elec). Then, E(elec) is correlated with Mulliken atomic charges calculated on the species that make up the crystal, using a simple equation involving two empirical parameters. The latter are fitted against point charge estimates of E(elec) computed on available X-ray structures of nitrate crystals. The correlation thus obtained yields lattice energies within 0.5 kJ/g of point charge values. A further assessment of the method against experimental data suggests that the main source of error arises from the point charge approximation.

  18. A correlation method to predict the surface pressure distribution of an infinite plate or a body of revolution from which a jet is issuing

    NASA Technical Reports Server (NTRS)

    Perkins, S. C., Jr.; Mendenhall, M. R.

    1980-01-01

    A correlation method to predict pressures induced on an infinite plate by a jet exhausting normal to the plate into a subsonic free stream was extended to jets exhausting at angles to the plate and to jets exhausting normal to the surface of a body of revolution. The complete method consisted of an analytical method which models the blockage and entrainment properties of the jet and an empirical correlation which accounts for viscous effects. For the flat plate case, the method was applicable to jet velocity ratios up to ten, jet inclination angles up to 45 deg from the normal, and radial distances up to five diameters from the jet. For the body of revolution case, the method was applicable to a body at zero degrees angle of attack, jet velocity ratios of 1.96 and 3.43, circumferential angles around the body up to 25 deg from the jet, axial distances up to seven diameters from the jet, and jet-to-body diameter ratios less than 0.1.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oberkampf, William Louis; Tucker, W. Troy; Zhang, Jianzhong

    This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the inter-variable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.

  20. Correlations of multiscale entropy in the FX market

    NASA Astrophysics Data System (ADS)

    Stosic, Darko; Stosic, Dusan; Ludermir, Teresa; Stosic, Tatijana

    2016-09-01

    The regularity of price fluctuations in exchange rates plays a crucial role in FX market dynamics. Distinct variations in regularity arise from economic, social and political events, such as interday trading and financial crises. This paper applies a multiscale time-dependent entropy method to thirty-three exchange rates to analyze price fluctuations in the FX market. Correlation matrices of entropy values, termed entropic correlations, are in turn used to describe the global behavior of the market. Empirical results suggest a weakly correlated market with pronounced collective behavior at bi-weekly trends. Correlations arise from cycles of low and high regularity in long-term trends. Eigenvalues of the correlation matrix also indicate a dominant European market, followed by shifting American, Asian, African, and Pacific influences. As a result, we find that entropy is a powerful tool for extracting important information from the FX market.

  1. An evaluation of dynamic mutuality measurements and methods in cyclic time series

    NASA Astrophysics Data System (ADS)

    Xia, Xiaohua; Huang, Guitian; Duan, Na

    2010-12-01

    Several measurements and techniques have been developed in econometrics to detect the dynamic mutuality and synchronicity of time series. This study compares the performance of five methods, i.e., linear regression, dynamic correlation, Markov switching models, the concordance index and recurrence quantification analysis, through numerical simulations. We evaluate the ability of these methods to capture structural change and cyclicity in time series; the findings offer guidance to both academic and empirical researchers. Illustrative examples are also provided to demonstrate the subtle differences between these techniques.

  2. Uncertainty quantification of reaction mechanisms accounting for correlations introduced by rate rules and fitted Arrhenius parameters

    DOE PAGES

    Prager, Jens; Najm, Habib N.; Sargsyan, Khachik; ...

    2013-02-23

    We study correlations among uncertain Arrhenius rate parameters in a chemical model for hydrocarbon fuel-air combustion. We consider correlations induced by the use of rate rules for modeling reaction rate constants, as well as those resulting from fitting rate expressions to empirical measurements, arriving at a joint probability density for all Arrhenius parameters. We focus on homogeneous ignition in a fuel-air mixture at constant pressure. We also outline a general methodology for this analysis using polynomial chaos and Bayesian inference methods. Finally, we examine the uncertainties in both the Arrhenius parameters and in predicted ignition time, outlining the role of correlations, and considering both accuracy and computational efficiency.
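
    A minimal illustration of why the correlations matter: sampling the Arrhenius parameters jointly rather than independently changes the spread of k(T). The sketch assumes a joint normal over (ln A, b, Ea); the paper's polynomial chaos and Bayesian inference machinery is not reproduced.

    ```python
    import numpy as np

    R_GAS = 8.314  # J/(mol K)

    def sample_rate_constants(T, mean, cov, n=10_000, seed=None):
        """Monte Carlo propagation of correlated Arrhenius parameters
        theta = (ln A, b, Ea) ~ N(mean, cov) through the modified
        Arrhenius form k(T) = A * T**b * exp(-Ea / (R*T)). The
        off-diagonal entries of cov carry the fitting- and
        rate-rule-induced correlations."""
        rng = np.random.default_rng(seed)
        lnA, b, Ea = rng.multivariate_normal(mean, cov, size=n).T
        return np.exp(lnA + b * np.log(T) - Ea / (R_GAS * T))
    ```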

  3. Nonlinear Analysis on Cross-Correlation of Financial Time Series by Continuum Percolation System

    NASA Astrophysics Data System (ADS)

    Niu, Hongli; Wang, Jun

    We establish a financial price process based on a continuum percolation system, in which we attribute price fluctuations to investors' attitudes towards the financial market and treat the clusters of the continuum percolation as investors sharing the same investment opinion. We investigate the cross-correlations of two return time series and analyze the multifractal behaviors in this relationship. Further, we study the corresponding behaviors of the real stock indexes SSE and HSI, as well as the liquid stock pair SPD and PAB, for comparison. To quantify the multifractality of the cross-correlation relationship, we employ the multifractal detrended cross-correlation analysis (MF-DCCA) method in an empirical study of the simulated data and the real market data.

  4. Is Pedagogical Content Knowledge (PCK) Necessary for Reformed Science Teaching?: Evidence from an Empirical Study

    ERIC Educational Resources Information Center

    Park, Soonhye; Jang, Jeong-Yoon; Chen, Ying-Chih; Jung, Jinhong

    2011-01-01

    This study tested a hypothesis that focused on whether or not teachers' pedagogical content knowledge (PCK) is a necessary body of knowledge for reformed science teaching. This study utilized a quantitative research method to investigate the correlation between a teacher's PCK level as measured by the PCK rubric (Park et al. 2008) and the degree…

  5. Hypothesis tests for the detection of constant speed radiation moving sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumazert, Jonathan; Coulon, Romain; Kondrasovs, Vladimir

    2015-07-01

    Radiation Portal Monitors are deployed in linear networks to detect radiological material in motion. As a complement to single-channel and multichannel detection algorithms, which are inefficient under too low signal-to-noise ratios, temporal correlation algorithms have been introduced. Hypothesis test methods based on empirically estimated means and variances of the signals delivered by the different channels have shown significant gains in the tradeoff between detection sensitivity and false alarm probability. This paper discloses the concept of a new hypothesis test for temporal correlation detection methods, taking advantage of the Poisson nature of the registered counting signals, and establishes a benchmark between this test and its empirical counterpart. The simulation study validates that, in the four relevant configurations of a pedestrian source carrier under respectively high and low count rate radioactive background, and a vehicle source carrier under the same respectively high and low count rate radioactive background, the newly introduced hypothesis test ensures a significantly improved compromise between sensitivity and false alarm, while guaranteeing the stability of its optimization parameter for signal-to-noise ratio variations between 2 and 0.8. (authors)

  6. Gravity Tides Extracted from Relative Gravimeter Data by Combining Empirical Mode Decomposition and Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Yu, Hongjuan; Guo, Jinyun; Kong, Qiaoli; Chen, Xiaodong

    2018-04-01

    The static observation data from a relative gravimeter contain noise as well as signals such as gravity tides. This paper focuses on the extraction of gravity tides from static relative gravimeter data, applying for the first time the combined method of empirical mode decomposition (EMD) and independent component analysis (ICA), called the EMD-ICA method. The experimental results from CG-5 gravimeter (SCINTREX Limited, Ontario, Canada) data show that the gravity tide time series derived by EMD-ICA are consistent with the theoretical reference (Longman formula), with an RMS difference of only 4.4 μGal. The gravity tide time series derived by EMD-ICA are strongly correlated with the theoretical time series, with a correlation coefficient greater than 0.997. The accuracy of the gravity tides estimated by EMD-ICA is comparable to the theoretical model and slightly higher than that of independent component analysis (ICA) alone. EMD-ICA overcomes the limitation of ICA of having to process multiple observations, and slightly improves the extraction accuracy and reliability of gravity tides from relative gravimeter data compared to ICA.

  7. Unidimensional factor models imply weaker partial correlations than zero-order correlations.

    PubMed

    van Bork, Riet; Grasman, Raoul P P P; Waldorp, Lourens J

    2018-06-01

    In this paper we present a new implication of the unidimensional factor model. We prove that the partial correlation between two observed variables that load on one factor, given any subset of other observed variables that load on this factor, lies between zero and the zero-order correlation between these two observed variables. We implement this result in an empirical bootstrap test that rejects the unidimensional factor model when partial correlations are identified that are either stronger than the zero-order correlation or have a different sign than the zero-order correlation. We demonstrate the use of the test in an empirical example with data consisting of fourteen items that measure extraversion.
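
    The implication is easy to check by simulation from a one-factor model; a minimal sketch (sample size, loadings and noise scale are arbitrary):

    ```python
    import numpy as np

    def partial_corr(data, i, j, given):
        """Partial correlation of columns i and j given other columns,
        via residuals of least-squares regressions."""
        Z = np.column_stack([np.ones(len(data))] +
                            [data[:, g] for g in given])
        ri = data[:, i] - Z @ np.linalg.lstsq(Z, data[:, i], rcond=None)[0]
        rj = data[:, j] - Z @ np.linalg.lstsq(Z, data[:, j], rcond=None)[0]
        return np.corrcoef(ri, rj)[0, 1]

    # One-factor model: x_k = loading_k * f + noise.
    rng = np.random.default_rng(0)
    f = rng.normal(size=50_000)
    load = np.array([0.8, 0.7, 0.6, 0.5])
    X = f[:, None] * load + 0.6 * rng.normal(size=(50_000, 4))
    r01 = np.corrcoef(X[:, 0], X[:, 1])[0, 1]   # zero-order correlation
    pr01 = partial_corr(X, 0, 1, given=[2, 3])  # 0 < pr01 < r01 (approx.)
    ```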

  8. Long memory of abnormal investor attention and the cross-correlations between abnormal investor attention and trading volume, volatility respectively

    NASA Astrophysics Data System (ADS)

    Fan, Xiaoqian; Yuan, Ying; Zhuang, Xintian; Jin, Xiu

    2017-03-01

    Taking the Baidu Index as a proxy for abnormal investor attention (AIA), the long memory property of the AIA of Shanghai Stock Exchange (SSE) 50 Index component stocks was empirically investigated using the detrended fluctuation analysis (DFA) method. The results show that abnormal investor attention is power-law correlated, with Hurst exponents between 0.64 and 0.98. Furthermore, the cross-correlations between abnormal investor attention and trading volume and volatility, respectively, are studied using detrended cross-correlation analysis (DCCA) and the DCCA cross-correlation coefficient (ρDCCA). The results suggest that there are positive correlations between AIA and trading volume and volatility, respectively, and that the correlations for trading volume are in general higher than those for volatility. By carrying out rescaled range (R/S) analysis and rolling-window analysis, we find that the above results are robust and significant.
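
    A minimal sketch of the DCCA cross-correlation coefficient at a single window size, assuming non-overlapping windows and linear detrending:

    ```python
    import numpy as np

    def rho_dcca(x, y, s):
        """DCCA coefficient rho = F2_xy / sqrt(F2_xx * F2_yy), computed
        from linearly detrended integrated profiles in windows of
        length s."""
        X = np.cumsum(np.asarray(x, float) - np.mean(x))
        Y = np.cumsum(np.asarray(y, float) - np.mean(y))
        n = len(X) // s
        t = np.arange(s)
        f_xy = f_xx = f_yy = 0.0
        for k in range(n):
            xs, ys = X[k*s:(k+1)*s], Y[k*s:(k+1)*s]
            rx = xs - np.polyval(np.polyfit(t, xs, 1), t)
            ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
            f_xy += np.mean(rx * ry)
            f_xx += np.mean(rx * rx)
            f_yy += np.mean(ry * ry)
        return f_xy / np.sqrt(f_xx * f_yy)
    ```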

  9. Psychophysical Reverse Correlation with Multiple Response Alternatives

    PubMed Central

    Dai, Huanping; Micheyl, Christophe

    2011-01-01

    Psychophysical reverse-correlation methods such as the “classification image” technique provide a unique tool to uncover the internal representations and decision strategies of individual participants in perceptual tasks. Over the last thirty years, these techniques have gained increasing popularity among both visual and auditory psychophysicists. However, thus far, principled applications of the psychophysical reverse-correlation approach have been almost exclusively limited to two-alternative decision (detection or discrimination) tasks. Whether and how reverse-correlation methods can be applied to uncover perceptual templates and decision strategies in situations involving more than just two response alternatives remains largely unclear. Here, the authors consider the problem of estimating perceptual templates and decision strategies in stimulus identification tasks with multiple response alternatives. They describe a modified correlational approach, which can be used to solve this problem. The approach is evaluated under a variety of simulated conditions, including different ratios of internal-to-external noise, different degrees of correlations between the sensory observations, and various statistical distributions of stimulus perturbations. The results indicate that the proposed approach is reasonably robust, suggesting that it could be used in future empirical studies. PMID:20695712

  10. Empirical Mining of Large Data Sets Already Helps to Solve Practical Ecological Problems; A Panoply of Working Examples (Invited)

    NASA Astrophysics Data System (ADS)

    Hargrove, W. W.; Hoffman, F. M.; Kumar, J.; Spruce, J.; Norman, S. P.

    2013-12-01

    Here we present diverse examples where empirical mining and statistical analysis of large data sets have already been shown to be useful for a wide variety of practical decision-making problems within the realm of large-scale ecology. Because a full understanding and appreciation of particular ecological phenomena are possible only after hypothesis-directed research regarding the existence and nature of that process, some ecologists may feel that purely empirical data harvesting may represent a less-than-satisfactory approach. Restricting ourselves exclusively to process-driven approaches, however, may actually slow progress, particularly for more complex or subtle ecological processes. We may not be able to afford the delays caused by such directed approaches. Rather than attempting to formulate and ask every relevant question correctly, empirical methods allow trends, relationships and associations to emerge freely from the data themselves, unencumbered by a priori theories, ideas and prejudices that have been imposed upon them. Although they cannot directly demonstrate causality, empirical methods can be extremely efficient at uncovering strong correlations with intermediate "linking" variables. In practice, these correlative structures and linking variables, once identified, may provide sufficient predictive power to be useful themselves. Such correlation "shadows" of causation can be harnessed by, e.g., Bayesian Belief Nets, which bias ecological management decisions, made with incomplete information, toward favorable outcomes. Empirical data-harvesting also generates a myriad of testable hypotheses regarding processes, some of which may even be correct. Quantitative statistical regionalizations based on quantitative multivariate similarity have lent insights into carbon eddy-flux direction and magnitude, wildfire biophysical conditions, phenological ecoregions useful for vegetation type mapping and monitoring, forest disease risk maps (e.g., sudden oak death), global aquatic ecoregion risk maps for aquatic invasives, and forest vertical structure ecoregions (e.g., using extensive LiDAR data sets). Multivariate Spatio-Temporal Clustering, which quantitatively places alternative future conditions on a common footing with present conditions, allows prediction of present and future shifts in tree species ranges, given alternative climatic change forecasts. ForWarn, a forest disturbance detection and monitoring system mining 12 years of national 8-day MODIS phenology data, has been operating since 2010, producing national maps every 8 days showing many kinds of potential forest disturbances. Forest resource managers can view disturbance maps via a web-based viewer, and alerts are issued when particular forest disturbances are seen. Regression-based decadal trend analysis showing long-term forest thrive and decline areas, and individual-based, brute-force supercomputing to map potential movement corridors and migration routes across landscapes will also be discussed. As significant ecological changes occur with increasing rapidity, such empirical data-mining approaches may be the most efficient means to help land managers find the best, most-actionable policies and decision strategies.

  11. Mental workload prediction based on attentional resource allocation and information processing.

    PubMed

    Xiao, Xu; Wanyan, Xiaoru; Zhuang, Damin

    2015-01-01

    Mental workload is an important component in complex human-machine systems. The limited applicability of empirical workload measures produces the need for workload modeling and prediction methods. In the present study, a mental workload prediction model is built on the basis of attentional resource allocation and information processing to ensure pilots' accuracy and speed in understanding large amounts of flight information on the cockpit display interface. Validation with an empirical study of an abnormal attitude recovery task showed that this model's prediction of mental workload highly correlated with experimental results. This mental workload prediction model provides a new tool for optimizing human factors interface design and reducing human errors.

  12. Semi-empirical quantum evaluation of peptide - MHC class II binding

    NASA Astrophysics Data System (ADS)

    González, Ronald; Suárez, Carlos F.; Bohórquez, Hugo J.; Patarroyo, Manuel A.; Patarroyo, Manuel E.

    2017-01-01

    Peptide presentation by the major histocompatibility complex (MHC) is a key process for triggering a specific immune response. Studying peptide-MHC (pMHC) binding from a structure-based approach has potential for reducing the costs of investigation into vaccine development. This study involved using two semi-empirical quantum chemistry methods (PM7 and FMO-DFTB) for computing the binding energies of peptides bound to HLA-DR1 and HLA-DR2. We found that key stabilising water molecules involved in the peptide binding mechanism were required for finding high correlation with IC50 experimental values. Our proposal is computationally non-intensive, and is a reliable alternative for studying pMHC binding interactions.

  13. Noise reduction in Lidar signal using correlation-based EMD combined with soft thresholding and roughness penalty

    NASA Astrophysics Data System (ADS)

    Chang, Jianhua; Zhu, Lingyan; Li, Hongxu; Xu, Fan; Liu, Binggang; Yang, Zhenbo

    2018-01-01

    Empirical mode decomposition (EMD) is widely used to analyze non-linear and non-stationary signals for noise reduction. In this study, a novel EMD-based denoising method, referred to as EMD with soft thresholding and roughness penalty (EMD-STRP), is proposed for Lidar signal denoising. With the proposed method, the relevant and irrelevant intrinsic mode functions are first distinguished via a correlation coefficient. Then, the soft thresholding technique is applied to the irrelevant modes, and the roughness penalty technique is applied to the relevant modes to extract as much information as possible. The effectiveness of the proposed method was evaluated using three typical signals contaminated by white Gaussian noise. The denoising performance was then compared to the denoising capabilities of other techniques, such as correlation-based EMD partial reconstruction, correlation-based EMD hard thresholding, and the wavelet transform. The use of EMD-STRP on the measured Lidar signal resulted in the noise being efficiently suppressed, with an improved signal-to-noise ratio of 22.25 dB and an extended detection range of 11 km.
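
    A rough sketch of the mode-sorting stage described above, assuming the PyEMD package (`pip install EMD-signal`); the correlation threshold and smoothing level are ad hoc choices, and a smoothing spline stands in for the paper's roughness-penalty regularization.

```python
import numpy as np
from PyEMD import EMD
from scipy.interpolate import UnivariateSpline

def emd_denoise(t, x, corr_thresh=0.2):
    imfs = EMD().emd(x, t)
    out = np.zeros_like(x)
    for imf in imfs:
        r = abs(np.corrcoef(imf, x)[0, 1])
        if r < corr_thresh:
            # irrelevant mode: universal soft threshold
            sigma = np.median(np.abs(imf)) / 0.6745
            thr = sigma * np.sqrt(2 * np.log(len(x)))
            out += np.sign(imf) * np.maximum(np.abs(imf) - thr, 0.0)
        else:
            # relevant mode: smoothing spline as a roughness-penalty stand-in
            out += UnivariateSpline(t, imf, s=0.1 * len(t))(t)
    return out

t = np.linspace(0, 1, 2000)
clean = np.sin(2 * np.pi * 5 * t) * np.exp(-2 * t)
noisy = clean + 0.3 * np.random.default_rng(2).normal(size=t.size)
den = emd_denoise(t, noisy)
snr = 10 * np.log10(np.sum(clean**2) / np.sum((den - clean)**2))
print(f"output SNR ~ {snr:.1f} dB")
```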

  14. Silver diagnosis in neuropathology: principles, practice and revised interpretation

    PubMed Central

    2007-01-01

    Silver-staining methods are helpful for histological identification of pathological deposits. In spite of some ambiguities regarding their mechanism and interpretation, they are widely used for histopathological diagnosis. In this review, four major silver-staining methods, modified Bielschowsky, Bodian, Gallyas (GAL) and Campbell–Switzer (CS) methods, are outlined with respect to their principles, basic protocols and interpretations, thereby providing neuropathologists, technicians and neuroscientists with a common basis for comparing findings and identifying the issues that still need to be clarified. Some consider “argyrophilia” to be a homogeneous phenomenon irrespective of the lesion and the method. Thus, they seek to explain the differences among the methods by pointing to their different sensitivities in detecting lesions (quantitative difference). Comparative studies, however, have demonstrated that argyrophilia is heterogeneous and dependent not only on the method but also on the lesion (qualitative difference). Each staining method has its own lesion-dependent specificity and, within this specificity, its own sensitivity. This “method- and lesion-dependent” nature of argyrophilia enables operational sorting of disease-specific lesions based on their silver-staining profiles, which may potentially represent some disease-specific aspects. Furthermore, comparisons between immunohistochemical and biochemical data have revealed an empirical correlation between GAL+/CS-deposits and 4-repeat (4R) tau (corticobasal degeneration, progressive supranuclear palsy and argyrophilic grains) and its complementary reversal between GAL-/CS+deposits and 3-repeat (3R) tau (Pick bodies). Deposits containing both 3R and 4R tau (neurofibrillary tangles of Alzheimer type) are GAL+/CS+. Although no molecular explanations, other than these empiric correlations, are currently available, these distinctive features, especially when combined with immunohistochemistry, are useful because silver-staining methods and immunoreactions are complementary to each other. PMID:17401570

  15. Experimental study and empirical prediction of fuel flow parameters under air evolution conditions

    NASA Astrophysics Data System (ADS)

    Kitanina, E. E.; Kitanin, E. L.; Bondarenko, D. A.; Kravtsov, P. A.; Peganova, M. M.; Stepanov, S. G.; Zherebzov, V. L.

    2017-11-01

    Air evolution in kerosene under gravity flow with various hydraulic resistances in the pipeline was studied experimentally. The study was conducted at pressures ranging from 0.2 to 1.0 bar and temperatures varying between -20°C and +20°C. Through these experiments, the oversaturation limit beyond which dissolved air starts evolving intensively from the fuel was established, and correlations for the calculation of pressure losses and air evolution at local loss elements were obtained. A method of calculating two-phase flow behaviour in a tilted pipeline segment with very low mass flow quality and fairly high volume flow quality was developed. The complete set of empirical correlations obtained from the experimental analysis was implemented in an engineering code. The software simulation results were repeatedly verified against our experimental findings and Airbus test data, showing that the two-phase flow simulation agrees quite well with the experimental results obtained in complex branched pipelines.

  16. Spatial correlation of auroral zone geomagnetic variations

    NASA Astrophysics Data System (ADS)

    Jackel, B. J.; Davalos, A.

    2016-12-01

    Magnetic field perturbations in the auroral zone are produced by a combination of distant ionospheric and local ground induced currents. Spatial and temporal structure of these currents is scientifically interesting and can also have a significant influence on critical infrastructure. Ground-based magnetometer networks are an essential tool for studying these phenomena, with the existing complement of instruments in Canada providing extended local time coverage. In this study we examine the spatial correlation between magnetic field observations over a range of scale lengths. Principal component and canonical correlation analysis are used to quantify relationships between multiple sites. Results could be used to optimize network configurations, validate computational models, and improve methods for empirical interpolation.
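
    A minimal illustration of the canonical correlation step, assuming synthetic three-component records from two hypothetical stations driven by a shared ionospheric source; scikit-learn's CCA performs the decomposition.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Hypothetical 3-component (X, Y, Z) perturbations at two stations
rng = np.random.default_rng(3)
n = 5000
common = rng.normal(size=(n, 2))                      # shared driver
A = common @ rng.normal(size=(2, 3)) + 0.5 * rng.normal(size=(n, 3))
B = common @ rng.normal(size=(2, 3)) + 0.5 * rng.normal(size=(n, 3))

cca = CCA(n_components=2).fit(A, B)
U, V = cca.transform(A, B)
# Canonical correlations quantify how much structure the stations share
print([np.corrcoef(U[:, k], V[:, k])[0, 1] for k in range(2)])
```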

  17. Cultural Validity of the Minnesota Multiphasic Personality Inventory-2 Empirical Correlates: Is This the Best We Can Do?

    ERIC Educational Resources Information Center

    Hill, Jill S.; Robbins, Rockey R.; Pace, Terry M.

    2012-01-01

    This article critically reviews empirical correlates of the Minnesota Multiphasic Personality Inventory-2 (MMPI-2; Butcher, Dahlstrom, Graham, Tellegen, & Kaemmer, 1989), based on several validation studies conducted with different racial, ethnic, and cultural groups. A major critique of the reviewed MMPI-2 studies was focused on the use of…

  18. Survey and analysis of research on supersonic drag-due-to-lift minimization with recommendations for wing design

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Mann, Michael J.

    1992-01-01

    A survey of research on drag-due-to-lift minimization at supersonic speeds, including a study of the effectiveness of current design and analysis methods, was conducted. The results show that a linearized theory analysis with estimated attainable thrust and vortex force effects can predict with reasonable accuracy the lifting efficiency of flat wings. Significantly better wing performance can be achieved through the use of twist and camber. Although linearized theory methods tend to overestimate the amount of twist and camber required for a given application and provide an overly optimistic performance prediction, these deficiencies can be overcome by implementation of recently developed empirical corrections. Numerous examples of the correlation of experiment and theory are presented to demonstrate the applicability and limitations of linearized theory methods with and without empirical corrections. The use of an Euler code for the estimation of aerodynamic characteristics of a twisted and cambered wing and its application to design by iteration are discussed.

  19. Aeroacoustic Prediction Codes

    NASA Technical Reports Server (NTRS)

    Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)

    2000-01-01

    This report describes work performed on Contract NAS3-27720 AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise, and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semi-empirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interacting with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources, including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and a computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor-noise correlation model was developed from engine acoustic test results. This work provided several insights on potential approaches to reducing aircraft engine noise. Code development is described in this report, and those insights are discussed.

  20. Viability of using seismic data to predict hydrogeological parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mela, K.

    1997-10-01

    The design of modern contaminant mitigation and fluid extraction projects makes use of solutions from stochastic hydrogeologic models. These models rely heavily on hydraulic conductivity and the correlation length of hydraulic conductivity. Reliable values of these parameters must be acquired to successfully predict the flow of fluids through the aquifer of interest. An inexpensive method of acquiring these parameters by seismic reflection surveying would be beneficial. Relationships between seismic velocity and porosity, together with empirical observations relating porosity to permeability, may lead to a method of extracting the correlation length of hydraulic conductivity from shallow high-resolution seismic data, making the use of inexpensive high-density data sets commonplace for these studies.

  1. Evaluation of empirical rule of linearly correlated peptide selection (ERLPS) for proteotypic peptide-based quantitative proteomics.

    PubMed

    Liu, Kehui; Zhang, Jiyang; Fu, Bin; Xie, Hongwei; Wang, Yingchun; Qian, Xiaohong

    2014-07-01

    Precise protein quantification is essential in comparative proteomics. Quantification bias is currently inevitable in proteotypic peptide-based quantitative proteomics strategies because peptides differ in their measurability. To improve quantification accuracy, we proposed an "empirical rule for linearly correlated peptide selection (ERLPS)" in quantitative proteomics in our previous work. However, a systematic evaluation of the general application of ERLPS in quantitative proteomics under diverse experimental conditions remained to be conducted. In this study, the practical workflow of ERLPS was explicitly illustrated; different experimental variables, such as MS systems, sample complexities, sample preparations, elution gradients, matrix effects, loading amounts, and other factors were comprehensively investigated to evaluate the applicability, reproducibility, and transferability of ERLPS. The results demonstrated that ERLPS was highly reproducible and transferable within appropriate loading amounts, and that linearly correlated response peptides should be selected for each specific experiment. ERLPS was applied to proteome samples from yeast, mouse, and human, and to quantitative methods from label-free to 18O/16O-labeled and SILAC analysis, and enabled accurate measurements for all proteotypic peptide-based quantitative proteomics over a large dynamic range. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Volatility-constrained multifractal detrended cross-correlation analysis: Cross-correlation among Mainland China, US, and Hong Kong stock markets

    NASA Astrophysics Data System (ADS)

    Cao, Guangxi; Zhang, Minjia; Li, Qingchen

    2017-04-01

    This study focuses on multifractal detrended cross-correlation analysis of the different volatility intervals of Mainland China, US, and Hong Kong stock markets. A volatility-constrained multifractal detrended cross-correlation analysis (VC-MF-DCCA) method is proposed to study the volatility conductivity of Mainland China, US, and Hong Kong stock markets. Empirical results indicate that fluctuation may be related to important activities in real markets. The Hang Seng Index (HSI) stock market is more influential than the Shanghai Composite Index (SCI) stock market. Furthermore, the SCI stock market is more influential than the Dow Jones Industrial Average stock market. The conductivity between the HSI and SCI stock markets is the strongest. HSI was the most influential market in the large fluctuation interval of 1991 to 2014. The autoregressive fractionally integrated moving average method is used to verify the validity of VC-MF-DCCA. Results show that VC-MF-DCCA is effective.
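
    For readers unfamiliar with the detrended cross-correlation machinery underlying VC-MF-DCCA, the sketch below computes the basic (non-multifractal, unconstrained) DCCA cross-correlation coefficient; the volatility-interval conditioning that defines the paper's method is omitted, and the data are synthetic.

```python
import numpy as np

def dcca_coefficient(x, y, s):
    """DCCA coefficient rho(s) = F2_xy / (F_x * F_y) at box size s."""
    X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())
    n_seg = len(X) // s
    f_xy = f_xx = f_yy = 0.0
    t = np.arange(s)
    for i in range(n_seg):
        xs, ys = X[i*s:(i+1)*s], Y[i*s:(i+1)*s]
        rx = xs - np.polyval(np.polyfit(t, xs, 1), t)  # local detrending
        ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
        f_xy += np.mean(rx * ry)
        f_xx += np.mean(rx**2)
        f_yy += np.mean(ry**2)
    return f_xy / np.sqrt(f_xx * f_yy)

rng = np.random.default_rng(4)
z = rng.normal(size=4096)              # shared innovation
x = z + 0.8 * rng.normal(size=4096)    # two correlated "return" series
y = z + 0.8 * rng.normal(size=4096)
print([round(dcca_coefficient(x, y, s), 3) for s in (16, 64, 256)])
```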

  3. Theoretical and conformational studies of a series of cannabinoids

    NASA Astrophysics Data System (ADS)

    Da Silva, Albérico B. F.; Trsic, Milan

    1995-11-01

    The MNDO semi-empirical method is applied to the study of a series of cannabinoids with the aim of providing an improved understanding of the structure-activity relationship (SAR). The conformation of some groups that seem important in the biological activity (psychoactivity) of these compounds is characterized. Some electronic properties, such as atomic net charges and HOMO and LUMO energies, are correlated with the psychoactive effect.

  4. Winter Precipitation Forecast in the European and Mediterranean Regions Using Cluster Analysis

    NASA Astrophysics Data System (ADS)

    Totz, Sonja; Tziperman, Eli; Coumou, Dim; Pfeiffer, Karl; Cohen, Judah

    2017-12-01

    The European climate is changing under global warming, and especially the Mediterranean region has been identified as a hot spot for climate change with climate models projecting a reduction in winter rainfall and a very pronounced increase in summertime heat waves. These trends are already detectable over the historic period. Hence, it is beneficial to forecast seasonal droughts well in advance so that water managers and stakeholders can prepare to mitigate deleterious impacts. We developed a new cluster-based empirical forecast method to predict precipitation anomalies in winter. This algorithm considers not only the strength but also the pattern of the precursors. We compare our algorithm with dynamic forecast models and a canonical correlation analysis-based prediction method demonstrating that our prediction method performs better in terms of time and pattern correlation in the Mediterranean and European regions.

  5. Metaheuristic optimization approaches to predict shear-wave velocity from conventional well logs in sandstone and carbonate case studies

    NASA Astrophysics Data System (ADS)

    Emami Niri, Mohammad; Amiri Kolajoobi, Rasool; Khodaiy Arbat, Mohammad; Shahbazi Raz, Mahdi

    2018-06-01

    Seismic wave velocities, along with petrophysical data, provide valuable information during the exploration and development stages of oil and gas fields. The compressional-wave velocity (VP) is acquired using conventional acoustic logging tools in many drilled wells, but the shear-wave velocity (VS) is recorded using advanced logging tools only in a limited number of wells, mainly because of the high operational costs. In addition, laboratory measurements of seismic velocities on core samples are expensive and time consuming, so alternative methods are often used to estimate VS. Several empirical correlations that predict VS from well logging measurements and petrophysical data such as VP, porosity and density have been proposed, but these empirical relations can only be used in limited cases. Intelligent systems and optimization algorithms offer inexpensive, fast and efficient approaches for predicting VS. In this study, in addition to the widely used Greenberg–Castagna empirical method, we implement three relatively recently developed metaheuristic algorithms to construct linear and nonlinear models for predicting VS: the teaching–learning based optimization, imperialist competitive and artificial bee colony algorithms. We demonstrate the applicability and performance of these algorithms for predicting VS from conventional well logs in two field data examples, a sandstone formation from an offshore oil field and a carbonate formation from an onshore oil field. We compared the VS estimated by each of the employed metaheuristic approaches with the observed VS and with values predicted by the Greenberg–Castagna relations. The results indicate that, for both the sandstone and carbonate case studies, all three implemented metaheuristic algorithms are more efficient and reliable than the empirical correlation for predicting VS. The results also demonstrate that, in both case studies, the artificial bee colony algorithm performs slightly better in VS prediction than the two other employed approaches.
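
    As a baseline for comparison with such metaheuristics, a simple empirical VS correlation can be fit by least squares. The sketch below does this on synthetic logs; the coefficients used to generate the data are mudrock-line-like placeholders, not the Greenberg–Castagna values, and the log names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Hypothetical log data: predict Vs from Vp, porosity and density
rng = np.random.default_rng(5)
n = 400
vp = rng.uniform(2.5, 5.5, n)                    # km/s
phi = rng.uniform(0.05, 0.3, n)                  # porosity (fraction)
rho = rng.uniform(2.2, 2.7, n)                   # g/cc
vs = (0.86 * vp - 1.17                           # placeholder "truth"
      - 1.5 * phi + 0.2 * (rho - 2.45)
      + rng.normal(0, 0.08, n))

X = np.column_stack([vp, phi, rho])
model = LinearRegression().fit(X[:300], vs[:300])   # "cored" wells
print(r2_score(vs[300:], model.predict(X[300:])))   # held-out wells
```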

  6. An Efficient Local Correlation Matrix Decomposition Approach for the Localization Implementation of Ensemble-Based Assimilation Methods

    NASA Astrophysics Data System (ADS)

    Zhang, Hongqin; Tian, Xiangjun

    2018-04-01

    Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational costs of the direct decomposition of the local correlation matrix C are still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach is intended to avoid direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decompositions at low resolution. This procedure is followed by a 1-D spline interpolation process to transform the above decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved by computing the Kronecker product of the interpolated 1-D decompositions. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach and its efficient localization implementation of the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
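
    The computational payoff of the alternating-directions idea is that a separable 3-D correlation matrix never has to be decomposed directly. A minimal sketch, assuming Gaussian 1-D correlations and omitting the paper's low-resolution EOF plus spline-interpolation step; all sizes and length scales are illustrative.

```python
import numpy as np

def corr_1d(n, L):
    """1-D Gaussian correlation matrix with length scale L (grid units)."""
    d = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    return np.exp(-0.5 * (d / L) ** 2)

# If C ~ Cx (x) Cy (x) Cz, a square root of C is the Kronecker product of
# the 1-D square roots, so only small matrices are ever decomposed.
nx, ny, nz = 20, 20, 10
parts = []
for n, L in ((nx, 4.0), (ny, 4.0), (nz, 2.0)):
    w, v = np.linalg.eigh(corr_1d(n, L))
    # keep the leading modes carrying 99% of the variance
    k = np.searchsorted(np.cumsum(w[::-1]) / w.sum(), 0.99) + 1
    parts.append(v[:, -k:] * np.sqrt(w[-k:]))   # low-rank square root

sqrtC = parts[0]
for p in parts[1:]:
    sqrtC = np.kron(sqrtC, p)
print(sqrtC.shape)   # (nx*ny*nz, kx*ky*kz): compact localization operator
```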

  7. Measuring farm sustainability using data envelope analysis with principal components: the case of Wisconsin cranberry.

    PubMed

    Dong, Fengxia; Mitchell, Paul D; Colquhoun, Jed

    2015-01-01

    Measuring farm sustainability performance is a crucial component for improving agricultural sustainability. While extensive assessments and indicators exist that reflect the different facets of agricultural sustainability, because of the relatively large number of measures and interactions among them, a composite indicator that integrates and aggregates over all variables is particularly useful. This paper describes and empirically evaluates a method for constructing a composite sustainability indicator that individually scores and ranks farm sustainability performance. The method first uses non-negative polychoric principal component analysis to reduce the number of variables, to remove correlation among variables and to transform categorical variables to continuous variables. Next the method applies common-weight data envelope analysis to these principal components to individually score each farm. The method solves weights endogenously and allows identifying important practices in sustainability evaluation. An empirical application to Wisconsin cranberry farms finds heterogeneity in sustainability practice adoption, implying that some farms could adopt relevant practices to improve the overall sustainability performance of the industry. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Value-at-Risk forecasts by a spatiotemporal model in Chinese stock market

    NASA Astrophysics Data System (ADS)

    Gong, Pu; Weng, Yingliang

    2016-01-01

    This paper generalizes a recently proposed spatial autoregressive model and introduces a spatiotemporal model for forecasting stock returns. We support the view that stock returns are affected not only by the absolute values of factors such as firm size, book-to-market ratio and momentum but also by the relative values of factors like trading volume ranking and market capitalization ranking in each period. This article studies a new method for constructing stocks' reference groups; the method is called the quartile method. Applying the method empirically to the Shanghai Stock Exchange 50 Index, we compare the daily volatility forecasting performance and the out-of-sample forecasting performance of Value-at-Risk (VaR) estimated by different models. The empirical results show that the spatiotemporal model performs surprisingly well in terms of capturing spatial dependences among individual stocks, and it produces more accurate VaR forecasts than the other three models introduced in the previous literature. Moreover, the findings indicate that both allowing for serial correlation in the disturbances and using time-varying spatial weight matrices can greatly improve the predictive accuracy of a spatial autoregressive model.

  9. Systematic Interpolation Method Predicts Antibody Monomer-Dimer Separation by Gradient Elution Chromatography at High Protein Loads.

    PubMed

    Creasy, Arch; Reck, Jason; Pabst, Timothy; Hunter, Alan; Barker, Gregory; Carta, Giorgio

    2018-05-29

    A previously developed empirical interpolation (EI) method is extended to predict highly overloaded multicomponent elution behavior on a cation exchange (CEX) column based on batch isotherm data. Instead of a fully mechanistic model, the EI method employs an empirically modified multicomponent Langmuir equation to correlate two-component adsorption isotherm data at different salt concentrations. Piecewise cubic interpolating polynomials are then used to predict competitive binding at intermediate salt concentrations. The approach is tested for the separation of monoclonal antibody monomer and dimer mixtures by gradient elution on the cation exchange resin Nuvia HR-S. Adsorption isotherms are obtained over a range of salt concentrations with varying monomer and dimer concentrations. Coupled with a lumped kinetic model, the interpolated isotherms predict the column behavior for highly overloaded conditions. Predictions based on the EI method showed good agreement with experimental elution curves for protein loads up to 40 mg/mL column or about 50% of the column binding capacity. The approach can be extended to other chromatographic modalities and to more than two components. This article is protected by copyright. All rights reserved.
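
    A sketch of the interpolation idea for the single-component case, with hypothetical Langmuir parameters measured at a few salt levels; the paper's empirically modified multicomponent Langmuir form and lumped kinetic column model are not reproduced here.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Hypothetical Langmuir parameters (qmax, Kd) fitted at measured salt
# concentrations; PCHIP gives monotone, non-overshooting estimates at
# intermediate salt levels, in the spirit of the EI method.
salt = np.array([20.0, 50.0, 100.0, 150.0, 250.0])   # mM
qmax = np.array([95.0, 80.0, 55.0, 30.0, 8.0])       # mg/mL resin
kd = np.array([0.05, 0.12, 0.45, 1.8, 12.0])         # mg/mL

qmax_of_s = PchipInterpolator(salt, qmax)
kd_of_s = PchipInterpolator(salt, kd)

def langmuir_q(c, s):
    """Single-component Langmuir uptake at liquid conc. c and salt s."""
    return qmax_of_s(s) * c / (kd_of_s(s) + c)

print(float(langmuir_q(1.0, 75.0)))  # uptake at an unmeasured salt level
```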

  10. The study and development of the empirical correlations equation of natural convection heat transfer on vertical rectangular sub-channels

    NASA Astrophysics Data System (ADS)

    Kamajaya, Ketut; Umar, Efrizon; Sudjatmi, K. S.

    2012-06-01

    This study focused on natural convection heat transfer in a vertical rectangular sub-channel with water as the coolant fluid. For this study, heater pipes instrumented with thermocouples were fabricated; each heater is equipped with five thermocouples along the heated pipe. The diameter of each heater is 2.54 cm and its length is 45 cm. The distance between the central heater and the pitch is 29.5 cm. The test equipment comprises a primary cooling system, a secondary cooling system and a heat exchanger. The purpose of this study is to obtain new empirical correlation equations for the vertical rectangular sub-channel, particularly for natural convection heat transfer within a bundle of vertical cylinders in a rectangular-arrangement sub-channel. The empirical correlation equations can support the thermo-hydraulic analysis of research nuclear reactors that utilize cylindrical fuel rods, and can also be used in designing baffle-free vertical shell-and-tube heat exchangers. The resulting empirical correlation for natural convection heat transfer in the rectangular arrangement is Nu = 6.3357 (Ra·Dh/x)^0.0740.
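
    The reported correlation is straightforward to evaluate; the property values and characteristic lengths below are illustrative placeholders (including the assumption that Dh is the length scale for converting Nu to a heat transfer coefficient), not values taken from the paper.

```python
# Evaluate Nu = 6.3357 (Ra*Dh/x)^0.0740 and convert to a heat transfer
# coefficient h = Nu*k/Dh (Dh assumed as the characteristic length).
def nusselt(ra, dh, x):
    return 6.3357 * (ra * dh / x) ** 0.0740

k_water = 0.62    # W/(m K), thermal conductivity of water, illustrative
dh = 0.03         # m, hydraulic diameter, illustrative
x = 0.45          # m, heated length (matches the 45 cm heater)
ra = 1.0e9        # Rayleigh number, illustrative

nu = nusselt(ra, dh, x)
h = nu * k_water / dh
print(f"Nu = {nu:.1f}, h = {h:.0f} W/(m2 K)")
```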

  11. [Determination of ventricular volumes by a non-geometric method using gamma-cineangiography].

    PubMed

    Faivre, R; Cardot, J C; Baud, M; Verdenet, J; Berthout, P; Bidet, A C; Bassand, J P; Maurat, J P

    1985-08-01

    The authors suggest a new way of determining ventricular volume by a non-geometric method using gamma-cineangiography. The results obtained by this method were compared with those obtained by a geometric method and by contrast ventriculography in 94 patients. The new non-geometric method assumes that the radioactive tracer is evenly distributed in the cardiovascular system so that blood radioactivity levels can be measured. The ventricular volume is then equal to the ratio of radioactivity in the LV zone to that of 1 ml of blood. Comparison of the radionuclide and angiographic data in the first 60 patients showed systematically underestimated values, despite a satisfactory statistical correlation (r = 0.87, y = 0.30X + 6.3). This underestimation is due to the phenomenon of attenuation related to the depth of the heart in the thoracic cage and to autoabsorption at source, the degree of which depends on the ventricular volume. An empirical method of calculation allows correction for these factors by taking into account absorption in the tissues, by relating to body surface area, and autoabsorption at source, by correcting for the surface of the isotopic ventricular projection expressed in pixels. Using the data of this empirical method, the correction formula for radionuclide ventricular volume is obtained by multiple linear regression: corrected radionuclide volume = K × measured radionuclide volume (Formula: see text). This formula was applied in the following 34 patients. The correlation between the corrected radionuclide volumes and the angiographic volumes was improved relative to the uncorrected ones (r = 0.94 vs r = 0.65) and the values were more accurate (y = 0.96X + 1.5 vs y = 0.18X + 26). (ABSTRACT TRUNCATED AT 250 WORDS)

  12. Empirical mode decomposition and long-range correlation analysis of sunspot time series

    NASA Astrophysics Data System (ADS)

    Zhou, Yu; Leung, Yee

    2010-12-01

    Sunspots, which are the best known and most variable features of the solar surface, affect our planet in many ways. The number of sunspots during a period of time is highly variable and arouses strong research interest. When multifractal detrended fluctuation analysis (MF-DFA) is employed to study the fractal properties and long-range correlation of the sunspot series, some spurious crossover points might appear because of the periodic and quasi-periodic trends in the series. However many cycles of solar activities can be reflected by the sunspot time series. The 11-year cycle is perhaps the most famous cycle of the sunspot activity. These cycles pose problems for the investigation of the scaling behavior of sunspot time series. Using different methods to handle the 11-year cycle generally creates totally different results. Using MF-DFA, Movahed and co-workers employed Fourier truncation to deal with the 11-year cycle and found that the series is long-range anti-correlated with a Hurst exponent, H, of about 0.12. However, Hu and co-workers proposed an adaptive detrending method for the MF-DFA and discovered long-range correlation characterized by H≈0.74. In an attempt to get to the bottom of the problem in the present paper, empirical mode decomposition (EMD), a data-driven adaptive method, is applied to first extract the components with different dominant frequencies. MF-DFA is then employed to study the long-range correlation of the sunspot time series under the influence of these components. On removing the effects of these periods, the natural long-range correlation of the sunspot time series can be revealed. With the removal of the 11-year cycle, a crossover point located at around 60 months is discovered to be a reasonable point separating two different time scale ranges, H≈0.72 and H≈1.49. And on removing all cycles longer than 11 years, we have H≈0.69 and H≈0.28. The three cycle-removing methods—Fourier truncation, adaptive detrending and the proposed EMD-based method—are further compared, and possible reasons for the different results are given. Two numerical experiments are designed for quantitatively evaluating the performances of these three methods in removing periodic trends with inexact/exact cycles and in detecting the possible crossover points.
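
    A compact order-1 DFA estimator of the Hurst exponent, of the kind applied to the cycle-removed series above; the scales and the white-noise test signal are arbitrary choices for illustration.

```python
import numpy as np

def dfa_hurst(x, scales):
    """Order-1 detrended fluctuation analysis; the slope of
    log F(s) vs log s estimates the Hurst exponent H."""
    y = np.cumsum(x - np.mean(x))
    F = []
    for s in scales:
        n_seg = len(y) // s
        t = np.arange(s)
        res = 0.0
        for i in range(n_seg):
            seg = y[i*s:(i+1)*s]
            res += np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t))**2)
        F.append(np.sqrt(res / n_seg))
    H, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return H

rng = np.random.default_rng(6)
white = rng.normal(size=8192)
print(dfa_hurst(white, [16, 32, 64, 128, 256]))  # ~0.5 for white noise
```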

  13. Statistical analysis of geomagnetic field intensity differences between ASM and VFM instruments onboard Swarm constellation

    NASA Astrophysics Data System (ADS)

    De Michelis, Paola; Tozzi, Roberta; Consolini, Giuseppe

    2017-02-01

    From the very first measurements made by the magnetometers onboard the Swarm satellites, launched by the European Space Agency (ESA) in late 2013, a discrepancy emerged between scalar and vector measurements. An accurate analysis of this phenomenon led to an empirical model of the disturbance, highly correlated with the Sun incidence angle, and to a corresponding correction of the vector data. The empirical model adopted by ESA results in a significant decrease in the amplitude of the disturbance affecting VFM measurements, greatly improving the quality of the vector magnetic data. This study focuses on characterizing the difference between the magnetic field intensity measured by the absolute scalar magnetometer (ASM) and that reconstructed using the vector field magnetometer (VFM) installed on the Swarm constellation. Applying the empirical mode decomposition method, we find the intrinsic mode functions (IMFs) associated with ASM-VFM total intensity differences obtained with data both uncorrected and corrected for the disturbance correlated with the Sun incidence angle. Surprisingly, no differences are found in the nature of the IMFs embedded in the analyzed signals: the IMFs are characterized by the same dominant periodicities before and after correction. The effect of the correction manifests itself as a decrease in the energy associated with some IMFs contributing to the corrected data. Some IMFs identified by analyzing the ASM-VFM intensity discrepancy are characterized by the same dominant periodicities as those obtained by analyzing the temperature fluctuations of the VFM electronic unit. Thus, the disturbance correlated with the Sun incidence angle could still be present in the corrected magnetic data. Furthermore, the ASM-VFM total intensity difference and the VFM electronic unit temperature display maximal shared information at a time delay that depends on local time. Taken together, these findings may help to relate the features of the observed ASM-VFM total intensity difference to the physical characteristics of the real disturbance, thus contributing to improving the empirical model proposed for the correction of the data.

  14. Empirical correlations for axial dispersion coefficient and Peclet number in fixed-bed columns.

    PubMed

    Rastegar, Seyed Omid; Gu, Tingyue

    2017-03-24

    In this work, a new correlation for the axial dispersion coefficient was obtained using experimental data from the literature for axial dispersion in fixed-bed columns packed with particles. The Chung and Wen correlation and the De Ligny correlation are two popular empirical correlations; however, the former lacks the molecular diffusion term and the latter does not consider bed voidage. The new axial dispersion coefficient correlation in this work was based on additional experimental data from the literature and considers both molecular diffusion and bed voidage, making it more comprehensive and accurate. The Peclet number correlation derived from the new axial dispersion coefficient correlation leads, on average, to 12% lower Peclet number values than the Chung and Wen correlation, and in many cases to values much smaller than those from the De Ligny correlation. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Support vector regression to predict porosity and permeability: Effect of sample size

    NASA Astrophysics Data System (ADS)

    Al-Anazi, A. F.; Gates, I. D.

    2012-02-01

    Porosity and permeability are key petrophysical parameters obtained from laboratory core analysis. Cores, obtained from drilled wells, are often few in number for most oil and gas fields. Porosity and permeability correlations based on conventional techniques such as linear regression or neural networks trained with core and geophysical logs generalize poorly to wells with only geophysical logs. The generalization problem of correlation models often becomes pronounced when the training sample size is small. This is attributed to the underlying assumption that conventional techniques employing the empirical risk minimization (ERM) inductive principle converge asymptotically to the true risk values as the number of samples increases. In small-sample estimation problems, the available training samples must span the complexity of the parameter space so that the model is able both to match the available training samples reasonably well and to generalize to new data. This is achieved under the structural risk minimization (SRM) inductive principle by matching the capacity of the model to the available training data. One method that uses SRM is the support vector regression (SVR) network. In this research, the capability of SVR to predict porosity and permeability in a heterogeneous sandstone reservoir under the effect of small sample size is evaluated. In particular, the impact of Vapnik's ε-insensitive loss function and the least-modulus loss function on generalization performance was empirically investigated. The results are compared to the multilayer perceptron (MLP) neural network, a widely used regression method that operates under the ERM principle. The mean square error and correlation coefficients were used to measure the quality of predictions. The results demonstrate that SVR yields consistently better predictions of porosity and permeability with small sample sizes than the MLP method. The performance of SVR also depends on both the kernel function type and the loss function used.
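
    A small-sample comparison in the spirit of the study can be set up with scikit-learn, whose SVR exposes Vapnik's ε-insensitive loss directly; the synthetic logs, coefficients, and hyperparameters below are illustrative only.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

# Hypothetical small training set of well logs -> core porosity
rng = np.random.default_rng(7)
n_train, n_test = 30, 200                  # small-sample regime
X = rng.normal(size=(n_train + n_test, 4)) # e.g. GR, RHOB, NPHI, DT (scaled)
y = (0.2 - 0.05 * X[:, 1] + 0.04 * X[:, 2] + 0.01 * X[:, 3]
     + rng.normal(0, 0.01, n_train + n_test))

svr = make_pipeline(StandardScaler(),
                    SVR(kernel="rbf", epsilon=0.005, C=10.0))
mlp = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                                 random_state=0))
for name, m in (("SVR", svr), ("MLP", mlp)):
    m.fit(X[:n_train], y[:n_train])
    print(name, round(r2_score(y[n_train:], m.predict(X[n_train:])), 3))
```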

  16. A robust hypothesis test for the sensitive detection of constant speed radiation moving sources

    NASA Astrophysics Data System (ADS)

    Dumazert, Jonathan; Coulon, Romain; Kondrasovs, Vladimir; Boudergui, Karim; Moline, Yoann; Sannié, Guillaume; Gameiro, Jordan; Normand, Stéphane; Méchin, Laurence

    2015-09-01

    Radiation Portal Monitors are deployed in linear networks to detect radiological material in motion. As a complement to single-channel and multichannel detection algorithms, which are inefficient under too low signal-to-noise ratios, temporal correlation algorithms have been introduced. Hypothesis test methods based on empirically estimated means and variances of the signals delivered by the different channels have shown significant gains in the tradeoff between detection sensitivity and false alarm probability. This paper discloses the concept of a new hypothesis test for temporal correlation detection methods, taking advantage of the Poisson nature of the registered counting signals, and establishes a benchmark between this test and its empirical counterpart. The simulation study validates that, in the four relevant configurations of a pedestrian source carrier under high and low count rate radioactive backgrounds, and a vehicle source carrier under the same high and low count rate backgrounds, the newly introduced hypothesis test ensures a significantly improved compromise between sensitivity and false alarm. It also guarantees that the optimal coverage factor for this compromise remains stable for signal-to-noise ratio variations between 2 and 0.8, allowing the final user to parametrize the test with only prior knowledge of the background amplitude.
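
    The following sketch is not the authors' statistic, only an illustration of the core idea: testing summed, time-aligned counts against an exact Poisson quantile rather than an empirically estimated Gaussian mean and variance. All rates and sizes are hypothetical.

```python
import numpy as np
from scipy.stats import poisson

def poisson_alarm(counts, bkg_rate, alpha=1e-3):
    """Alarm if the summed time-aligned counts exceed the (1 - alpha)
    quantile of the Poisson background model (no empirical variance
    estimate needed)."""
    total = np.sum(counts)
    threshold = poisson.ppf(1.0 - alpha, bkg_rate * counts.size)
    return total > threshold, total, threshold

rng = np.random.default_rng(8)
bkg = 40.0                                       # expected counts/channel
aligned = rng.poisson(bkg, size=8) + rng.poisson(6, size=8)  # weak source
print(poisson_alarm(aligned, bkg))
```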

  17. Updates on Force Limiting Improvements

    NASA Technical Reports Server (NTRS)

    Kolaini, Ali R.; Scharton, Terry

    2013-01-01

    The following conventional force limiting methods, currently practiced in deriving force limiting specifications, assume one-dimensional translational source and load apparent masses: simple TDOF model; semi-empirical force limits; apparent mass; impedance method. Uncorrelated motion of the mounting points of components mounted on panels, and correlated but out-of-phase motion of the support structures, are important and should be considered in deriving force limiting specifications. In this presentation, "rock-n-roll" motions of components supported by panels, which lead to more realistic force limiting specifications, are discussed.

  18. How Do Different Ways of Measuring Individual Differences in Zero-Acquaintance Personality Judgment Accuracy Correlate With Each Other?

    PubMed

    Hall, Judith A; Back, Mitja D; Nestler, Steffen; Frauendorfer, Denise; Schmid Mast, Marianne; Ruben, Mollie A

    2018-04-01

    This research compares two different approaches that are commonly used to measure accuracy of personality judgment: the trait accuracy approach wherein participants discriminate among targets on a given trait, thus making intertarget comparisons, and the profile accuracy approach wherein participants discriminate between traits for a given target, thus making intratarget comparisons. We examined correlations between these methods as well as correlations among accuracies for judging specific traits. The present article documents relations among these approaches based on meta-analysis of five studies of zero-acquaintance impressions of the Big Five traits. Trait accuracies correlated only weakly with overall and normative profile accuracy. Substantial convergence between the trait and profile accuracy methods was only found when an aggregate of all five trait accuracies was correlated with distinctive profile accuracy. Importantly, however, correlations between the trait and profile accuracy approaches were reduced to negligibility when statistical overlap was corrected by removing the respective trait from the profile correlations. Moreover, correlations of the separate trait accuracies with each other were very weak. Different ways of measuring individual differences in personality judgment accuracy are not conceptually and empirically the same, but rather represent distinct abilities that rely on different judgment processes. © 2017 Wiley Periodicals, Inc.

  19. Weighted analysis of paired microarray experiments.

    PubMed

    Kristiansson, Erik; Sjögren, Anders; Rudemo, Mats; Nerman, Olle

    2005-01-01

    In microarray experiments quality often varies, for example between samples and between arrays. The need for quality control is therefore strong. A statistical model and a corresponding analysis method are suggested for experiments with pairing, including designs with individuals observed before and after treatment and many experiments with two-colour spotted arrays. The model is of mixed type with some parameters estimated by an empirical Bayes method. Differences in quality are modelled by individual variances and correlations between repetitions. The method is applied to three real and several simulated datasets. Two of the real datasets are of Affymetrix type with patients profiled before and after treatment, and the third dataset is of two-colour spotted cDNA type. In all cases, the patients or arrays had different estimated variances, leading to distinctly unequal weights in the analysis. We also suggest plots which illustrate the variances and correlations that affect the weights computed by our analysis method. For simulated data the improvement relative to previously published methods without weighting is shown to be substantial.

  20. Gaussian Elimination-Based Novel Canonical Correlation Analysis Method for EEG Motion Artifact Removal.

    PubMed

    Roy, Vandana; Shukla, Shailja; Shukla, Piyush Kumar; Rawat, Paresh

    2017-01-01

    The motion generated while capturing the electroencephalography (EEG) signal leads to artifacts, which may reduce the quality of the obtained information. Existing artifact removal methods use canonical correlation analysis (CCA) along with ensemble empirical mode decomposition (EEMD) and the wavelet transform (WT). A new approach is proposed to further improve the filtering performance and reduce the filter computation time in highly noisy environments. This new CCA approach is based on the Gaussian elimination method, which is used for calculating the correlation coefficients via the backslash operation, and is designed for EEG motion artifact removal. Gaussian elimination is used to solve the linear equations for the eigenvalues, which reduces the computational cost of the CCA method. The proposed method is tested against currently available artifact removal techniques using EEMD-CCA and the wavelet transform, on both synthetic and real EEG signal data. The proposed artifact removal technique is evaluated using efficiency metrics such as del signal-to-noise ratio (DSNR), lambda (λ), root mean square error (RMSE), elapsed time, and ROC parameters. The results indicate the suitability of the proposed algorithm for use as a supplement to algorithms currently in use.
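
    A sketch of delay-based CCA artifact removal in which an elimination-based solve (the "backslash" idea) replaces explicit matrix inversion; the component count, sizes, and synthetic data are illustrative, and this is not the authors' exact algorithm.

```python
import numpy as np

def cca_artifact_removal(eeg, n_remove=2):
    """BSS-CCA: canonical correlation between the signal and its one-sample
    delay; components with the lowest autocorrelation (least temporally
    structured, typically motion artifact) are zeroed before
    reconstruction. np.linalg.solve does the inverse-free solves."""
    X, Y = eeg[:, 1:], eeg[:, :-1]
    X = X - X.mean(1, keepdims=True)
    Y = Y - Y.mean(1, keepdims=True)
    Cxx, Cyy, Cxy = X @ X.T, Y @ Y.T, X @ Y.T
    # M = Cxx \ Cxy (Cyy \ Cyx): solved by elimination, no explicit inverse
    M = np.linalg.solve(Cxx, Cxy @ np.linalg.solve(Cyy, Cxy.T))
    rho2, W = np.linalg.eig(M)
    order = np.argsort(rho2.real)        # ascending autocorrelation
    W = W.real
    S = W.T @ X                          # CCA source estimates
    S[order[:n_remove]] = 0.0            # drop artifact components
    return np.linalg.solve(W.T, S)       # back-project to channels

rng = np.random.default_rng(9)
t = np.arange(2000) / 250.0
brain = np.sin(2 * np.pi * 10 * t) * rng.normal(1, 0.1, (8, 1))
artifact = rng.normal(size=(8, 2000))    # white, low autocorrelation
clean = cca_artifact_removal(brain + artifact)
print(clean.shape)
```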

  1. Qualitative Investigation of the Earthquake Precursors Prior to the March 14, 2012 Earthquake in Japan

    NASA Astrophysics Data System (ADS)

    Raghuwanshi, Shailesh Kumar; Gwal, Ashok Kumar

    In this study we have used the Empirical Mode Decomposition (EMD) method in conjunction with cross-correlation analysis to analyze the ionospheric foF2 parameter before the Japan earthquake of magnitude M = 6.9. The data are collected from the Kokubunji (35.70N, 139.50E) and Yamakawa (31.20N, 130.60E) ionospheric stations. The EMD method was used to remove the geophysical noise from the foF2 data before calculating the correlation coefficient between the two stations. It was found that the ionospheric foF2 parameter shows anomalous changes a few days before the earthquake. The results are in agreement with theoretical models evidencing ionospheric modification prior to the Japan earthquake in a certain area around the epicenter.

  2. Thermal Conductivity of Metallic Uranium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hin, Celine

    This project has developed modeling and simulation approaches to predict the thermal conductivity of metallic fuels and their alloys. We focus on two methods. The first, developed by the team at the University of Wisconsin Madison, is a practical and general modeling approach for the thermal conductivity of metals and metal alloys that integrates ab-initio and semi-empirical physics-based models to maximize the strengths of both techniques. The second, developed by the team at Virginia Tech, determines the thermal conductivity using only ab-initio methods, without any fitting parameters. The two methods were complementary. The models incorporated both phonon and electron contributions, and good agreement with experimental data over a wide temperature range was found. The models also provided insight into the different physical factors that govern the thermal conductivity at different temperatures. The models were general enough to incorporate more complex effects such as additional alloying species, defects, transmutation products and noble gas bubbles, in order to predict the behavior of complex metallic alloys like U-alloy fuel systems under burnup. Thermal conductivity is an important thermophysical property affecting the performance and efficiency of metallic fuels [1]. Some experimental measurements of thermal conductivity, and its correlation with composition and temperature from empirical fitting, are available for U, Zr and their alloys with Pu and other minor actinides. However, as reviewed by Kim, Cho and Sohn [2], owing to the difficulty of experiments on actinide materials, the thermal conductivities of metallic fuels have been measured only at limited alloy compositions and temperatures, with some reported values even being negative and unphysical. Furthermore, the correlations developed so far are empirical in nature and may not be accurate when used for prediction at conditions far from those used in the original fitting. Moreover, as fuels burn up in the reactor and fission products build up, thermal conductivity also changes significantly [3]. Unfortunately, fundamental understanding of the effect of fission products is also currently lacking. In this project, we probe the thermal conductivity of metallic fuels with ab-initio calculations, a theoretical tool with the potential to yield better accuracy and predictive power than empirical fitting. This work both complements experimental data, by determining thermal conductivity over wider composition and temperature ranges than are available experimentally, and develops mechanistic understanding to guide better design of metallic fuels in the future. So far, we have focused on the α-U perfect crystal, the ground-state phase of U metal, using the two complementary methods described above, which have proven very helpful for understanding the physics behind the thermal conductivity of metallic uranium and other materials with similar characteristics.
In Section I, the combined model developed at UWM is explained. In Section II, the ab-initio method developed at VT is described along with the uranium pseudo-potential and its validation. Section III is devoted to the work done by Jianguo Yu at INL. Finally, we present the performance of the project in terms of milestones, publications, and presentations.

  3. Is the Critical Shields Stress for Incipient Sediment Motion Dependent on Bed Slope in Natural Channels? No.

    NASA Astrophysics Data System (ADS)

    Phillips, C. B.; Jerolmack, D. J.

    2017-12-01

    Understanding when coarse sediment begins to move in a river is essential for linking rivers to the evolution of mountainous landscapes. Unfortunately, the threshold of surface particle motion is notoriously difficult to measure in the field. However, recent studies have shown that the threshold of surface motion is empirically correlated with channel slope, a property that is easy to measure and readily available from the literature. These studies thoroughly examined the mechanistic underpinnings behind the observed correlation and produced suitably complex models. Those models are difficult to implement for natural rivers using widely available data, and thus others have treated the empirical regression between slope and the threshold of motion as a predictive model. We note that none of the authors of the original studies exploring this correlation suggested their empirical regressions be used in a predictive fashion; nevertheless, these regressions between slope and the threshold of motion have found their way into numerous recent studies, engendering potentially spurious conclusions. We demonstrate that there are two significant problems with using these empirical equations for prediction: (1) the empirical regressions are based on a limited sampling of the phase space of bed-load rivers, and (2) the empirical measurements of bankfull and critical shear stresses are paired. The upshot is that the empirical relations' predictive capacity is limited to field sites drawn from the same region of the bed-load river phase space, and that the paired nature of the data introduces a spurious correlation when considering the ratio of bankfull to critical shear stress. Using a large compilation of bed-load river hydraulic geometry data, we demonstrate that the variation within independently measured values of the threshold of motion changes systematically with bankfull Shields stress, not channel slope. Additionally, using several recent datasets, we highlight the potential pitfalls that can be encountered when using simplistic empirical regressions to predict the threshold of motion, showing that while these concerns could be construed as subtle, the resulting implications can be substantial.

  4. A mini-review on econophysics: Comparative study of Chinese and western financial markets

    NASA Astrophysics Data System (ADS)

    Zheng, Bo; Jiang, Xiong-Fei; Ni, Peng-Yun

    2014-07-01

    We present a review of our recent research in econophysics, and focus on the comparative study of Chinese and western financial markets. By virtue of concepts and methods in statistical physics, we investigate the time correlations and spatial structure of financial markets based on empirical high-frequency data. We discover that the Chinese stock market shares common basic properties with the western stock markets, such as the fat-tail probability distribution of price returns, the long-range auto-correlation of volatilities, and the persistence probability of volatilities, while it exhibits very different higher-order time correlations of price returns and volatilities, spatial correlations of individual stock prices, and large-fluctuation dynamic behaviors. Furthermore, multi-agent-based models are developed to simulate the microscopic interaction and dynamic evolution of the stock markets.

  5. Variable screening via quantile partial correlation

    PubMed Central

    Ma, Shujie; Tsai, Chih-Ling

    2016-01-01

    In quantile linear regression with ultra-high dimensional data, we propose an algorithm for screening all candidate variables and subsequently selecting relevant predictors. Specifically, we first employ quantile partial correlation for screening, and then we apply the extended Bayesian information criterion (EBIC) for best subset selection. Our proposed method can successfully select predictors when the variables are highly correlated, and it can also identify variables that make a contribution to the conditional quantiles but are marginally uncorrelated or weakly correlated with the response. Theoretical results show that the proposed algorithm can yield the sure screening set. By controlling the false selection rate, model selection consistency can be achieved theoretically. In practice, we proposed using EBIC for best subset selection so that the resulting model is screening consistent. Simulation studies demonstrate that the proposed algorithm performs well, and an empirical example is presented. PMID:28943683
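
    A rough sketch of a sample quantile partial correlation, assuming statsmodels' QuantReg; this follows one common estimator (the check-function gradient of y|Z correlated with the least-squares residual of x|Z) and may differ in detail from the authors' definition.

```python
import numpy as np
import statsmodels.api as sm

def quantile_partial_corr(y, xj, Z, tau=0.5):
    """Quantile partial correlation of (y, xj) given confounders Z."""
    Zc = sm.add_constant(Z)
    beta = sm.QuantReg(y, Zc).fit(q=tau).params
    psi = tau - (y - Zc @ beta < 0)      # check-function gradient of y|Z
    e_x = xj - Zc @ np.linalg.lstsq(Zc, xj, rcond=None)[0]  # resid of x|Z
    return np.corrcoef(psi, e_x)[0, 1]

rng = np.random.default_rng(10)
n = 2000
Z = rng.normal(size=(n, 2))
x = Z[:, 0] + rng.normal(size=n)                 # correlated confounder
y = 0.5 * x + Z @ [1.0, -1.0] + rng.standard_t(3, n)  # heavy-tailed noise
print(quantile_partial_corr(y, x, Z, tau=0.5))
```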

  6. A literature review of empirical research on learning analytics in medical education

    PubMed Central

    Saqr, Mohammed

    2018-01-01

    The number of learning analytics publications in the field of medical education is still markedly low, despite recognition of the value of the discipline in the medical education literature and exponential growth of publications in other fields. This necessitates raising awareness of the research methods and potential benefits of learning analytics (LA). The aim of this paper was to offer a methodological systematic review of empirical LA research in the field of medical education and a general overview of the common methods used in the field in general. The search was done in the Medline database using the term “LA.” Inclusion criteria included empirical original research articles investigating LA using qualitative, quantitative, or mixed methodologies. Articles were also required to be written in English, published in a scholarly peer-reviewed journal, and have a dedicated section for methods and results. The Medline search resulted in only six articles fulfilling the inclusion criteria for this review. Most of the studies collected data about learners from learning management systems or online learning resources. Analysis used mostly quantitative methods, including descriptive statistics, correlation tests, and regression models in two studies. Patterns of online behavior, usage of digital resources, and prediction of achievement were the outcomes most studies investigated. Research about LA in the field of medical education is still in its infancy, with more questions than answers. The early studies are encouraging, showing that patterns of online learning can be readily revealed and students’ performance predicted. PMID:29599699

  7. A literature review of empirical research on learning analytics in medical education.

    PubMed

    Saqr, Mohammed

    2018-01-01

    The number of learning analytics publications in the field of medical education is still markedly low, despite recognition of the value of the discipline in the medical education literature and the exponential growth of publications in other fields. This necessitates raising awareness of the research methods and potential benefits of learning analytics (LA). The aim of this paper was to offer a methodological systematic review of empirical LA research in the field of medical education and an overview of the methods commonly used in the field. The search was conducted in the Medline database using the term "LA." Inclusion criteria comprised empirical original research articles investigating LA using qualitative, quantitative, or mixed methodologies. Articles were also required to be written in English, published in a scholarly peer-reviewed journal, and to have a dedicated section for methods and results. The Medline search resulted in only six articles fulfilling the inclusion criteria for this review. Most of the studies collected data about learners from learning management systems or online learning resources. Analysis relied mostly on quantitative methods, including descriptive statistics, correlation tests, and, in two studies, regression models. Patterns of online behavior, usage of digital resources, and prediction of achievement were the outcomes most studies investigated. Research about LA in the field of medical education is still in its infancy, with more questions than answers. The early studies are encouraging, showing that patterns of online learning can be readily revealed and that students' performance can be predicted.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Wei; Lei, Wei-Hua; Wang, Ding-Xiong, E-mail: leiwh@hust.edu.cn

    Recently, two empirical correlations involving the minimum variability timescale (MTS) of the light curves have been discovered in gamma-ray bursts (GRBs). One is the anti-correlation between the MTS and the Lorentz factor Γ, and the other is the anti-correlation between the MTS and the gamma-ray luminosity L_γ. Both correlations might be used to explore the activity of the central engine of GRBs. In this paper, we try to understand these empirical correlations by combining two popular black hole central engine models, namely the Blandford-Znajek (BZ) mechanism and the neutrino-dominated accretion flow (NDAF). By taking the MTS as the timescale of the viscous instability of the NDAF, we find that these correlations favor the scenario in which the jet is driven by the BZ mechanism.

  9. Imaging the Material Properties of Bone Specimens using Reflection-Based Infrared Microspectroscopy

    PubMed Central

    Acerbo, Alvin S.; Carr, G. Lawrence; Judex, Stefan; Miller, Lisa M.

    2012-01-01

    Fourier Transform InfraRed Microspectroscopy (FTIRM) is a widely used method for mapping the material properties of bone and other mineralized tissues, including mineralization, crystallinity, carbonate substitution, and collagen cross-linking. This technique is traditionally performed in a transmission-based geometry, which requires the preparation of plastic-embedded thin sections, limiting its functionality. Here, we theoretically and empirically demonstrate the development of reflection-based FTIRM as an alternative to the widely adopted transmission-based FTIRM, which reduces specimen preparation time and broadens the range of specimens that can be imaged. In this study, mature mouse femurs were plastic-embedded and longitudinal sections were cut at a thickness of 4 μm for transmission-based FTIRM measurements. The remaining bone blocks were polished for specular reflectance-based FTIRM measurements on regions immediately adjacent to the transmission sections. Kramers-Kronig analysis of the reflectance data yielded the dielectric response from which the absorption coefficients were directly determined. The reflectance-derived absorbance was validated empirically using the transmission spectra from the thin sections. The spectral assignments for mineralization, carbonate substitution, and collagen cross-linking were indistinguishable in transmission and reflection geometries, while the stoichiometric/non-stoichiometric apatite crystallinity parameter shifted from 1032/1021 cm−1 in transmission-based to 1035/1025 cm−1 in reflection-based data. This theoretical demonstration and empirical validation of reflection-based FTIRM eliminates the need for thin sections of bone and more readily facilitates direct correlations with other methods such as nanoindentation and quantitative backscatter electron imaging (qBSE) from the same specimen. It provides a unique framework for correlating bone’s material and mechanical properties. PMID:22455306

  10. Pi2 detection using Empirical Mode Decomposition (EMD)

    NASA Astrophysics Data System (ADS)

    Mieth, Johannes Z. D.; Frühauff, Dennis; Glassmeier, Karl-Heinz

    2017-04-01

    Empirical Mode Decomposition has been used as an alternative to wavelet transformation for identifying onset times of Pi2 pulsations in data sets of the Scandinavian Magnetometer Array (SMA). Pi2 pulsations are magnetohydrodynamic waves occurring during magnetospheric substorms. Pi2 are almost always observed at substorm onset at mid to low latitudes on Earth's nightside. They are fed by magnetic energy release caused by dipolarization processes, and their periods lie between 40 and 150 seconds. Usually, Pi2 are detected using wavelet transformation. Here, Empirical Mode Decomposition (EMD) is presented as an alternative to the traditional procedure. EMD is a relatively young signal decomposition method designed for nonlinear and non-stationary time series. It provides an adaptive, data-driven, and complete decomposition of time series into slow and fast oscillations. An optimized version using Monte-Carlo-type noise assistance is used here. By displaying the results in a time-frequency space, a characteristic frequency modulation is observed that can be correlated with the onset of Pi2 pulsations. A basic algorithm to find the onset is presented, and the results are compared to classical wavelet-based analysis. The use of different SMA stations furthermore allows a spatial analysis of Pi2 onset times. EMD mostly finds application in the fields of engineering and medicine; this work demonstrates the applicability of the method to geomagnetic time series.

  11. Statistical framework and noise sensitivity of the amplitude radial correlation contrast method.

    PubMed

    Kipervaser, Zeev Gideon; Pelled, Galit; Goelman, Gadi

    2007-09-01

    A statistical framework for the amplitude radial correlation contrast (RCC) method, which integrates a conventional pixel threshold approach with cluster-size statistics, is presented. The RCC method uses functional MRI (fMRI) data to group neighboring voxels in terms of their degree of temporal cross correlation and compares coherences in different brain states (e.g., stimulation OFF vs. ON). By defining the RCC correlation map as the difference between two RCC images, the map distribution of two OFF states is shown to be normal, enabling the definition of the pixel cutoff. The empirical cluster-size null distribution obtained after the application of the pixel cutoff is used to define a cluster-size cutoff that allows 5% false positives. Assuming that the fMRI signal equals the task-induced response plus noise, an analytical expression of amplitude-RCC dependency on noise is obtained and used to define the pixel threshold. In vivo and ex vivo data obtained during rat forepaw electric stimulation are used to fine-tune this threshold. Calculating the spatial coherences within in vivo and ex vivo images shows enhanced coherence in the in vivo data, but no dependency on the anesthesia method, magnetic field strength, or depth of anesthesia, strengthening the generality of the proposed cutoffs. Copyright (c) 2007 Wiley-Liss, Inc.
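
    The cluster-size cutoff logic can be illustrated with a short sketch: an empirical null distribution of cluster sizes is built from OFF-vs-OFF style difference maps, and its 95th percentile serves as the cutoff that allows roughly 5% false positives. The map generation, pixel threshold, and grid size below are placeholder assumptions, not the paper's fMRI pipeline.

    ```python
    # Illustrative cluster-size cutoff from an empirical null distribution;
    # the random maps stand in for RCC difference maps.
    import numpy as np
    from scipy import ndimage

    def cluster_sizes(mask):
        """Sizes of connected pixel clusters in a boolean map."""
        labels, n = ndimage.label(mask)
        return np.array([(labels == k).sum() for k in range(1, n + 1)])

    rng = np.random.default_rng(1)
    pixel_cut = 2.0                                # z-like pixel threshold (assumed)

    # Empirical null: cluster sizes from many OFF-vs-OFF style difference maps.
    null_sizes = []
    for _ in range(200):
        null_map = rng.normal(size=(64, 64))       # stands in for an OFF-OFF map
        null_sizes.extend(cluster_sizes(null_map > pixel_cut))
    size_cut = np.quantile(null_sizes, 0.95)       # allows ~5% false positives

    obs_map = rng.normal(size=(64, 64)) + 0.5      # stands in for an ON-OFF map
    sizes = cluster_sizes(obs_map > pixel_cut)
    print(f"cluster-size cutoff = {size_cut:.0f}, "
          f"significant clusters = {(sizes > size_cut).sum()}")
    ```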

  12. Dependence in probabilistic modeling Dempster-Shafer theory and probability bounds analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferson, Scott; Nelsen, Roger B.; Hajagos, Janos

    2015-05-01

    This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the intervariable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.
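
    As an example of the first technique reviewed, simulating correlated variates for a given correlation measure and dependence model, the sketch below draws pairs with a target Spearman correlation through a Gaussian copula and maps them onto arbitrary marginals. The marginals and the target correlation are illustrative assumptions.

    ```python
    # Sketch: correlated variates with a target Spearman rho via a Gaussian
    # copula; the lognormal and gamma marginals are assumed for illustration.
    import numpy as np
    from scipy import stats

    def gaussian_copula_sample(n, rho_s, rng):
        """Draw pairs with Spearman correlation ~rho_s and arbitrary marginals."""
        # Convert target Spearman rho to the Pearson rho of the latent normals.
        rho = 2 * np.sin(np.pi * rho_s / 6)
        cov = [[1.0, rho], [rho, 1.0]]
        z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        u = stats.norm.cdf(z)                     # uniform scores (the copula)
        x = stats.lognorm(s=0.8).ppf(u[:, 0])     # first marginal (assumed)
        y = stats.gamma(a=2.0).ppf(u[:, 1])       # second marginal (assumed)
        return x, y

    rng = np.random.default_rng(2)
    x, y = gaussian_copula_sample(10_000, rho_s=0.7, rng=rng)
    print(stats.spearmanr(x, y)[0])               # close to 0.7
    ```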

  13. Characterization and thermogravimetric analysis of lanthanide hexafluoroacetylacetone chelates

    DOE PAGES

    Shahbazi, Shayan; Stratz, S. Adam; Auxier, John D.; ...

    2016-08-30

    This work reports the thermodynamic characterization of organometallic species as vehicles for the rapid separation of volatile nuclear fission products via gas chromatography, exploiting differences in adsorption enthalpy. Because adsorption and sublimation thermodynamics are linearly correlated, there is considerable motivation to determine sublimation enthalpies. Isothermal thermogravimetric analysis, TGA-MS, and melting-point analysis are employed on thirteen lanthanide 1,1,1,5,5,5-hexafluoroacetylacetone complexes to determine sublimation enthalpies. An empirical correlation is used to estimate adsorption enthalpies of the lanthanide complexes on a quartz column from the sublimation data. Additionally, four chelates are characterized by SC-XRD, elemental analysis, FTIR, and NMR.
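
    The sublimation-enthalpy extraction can be illustrated generically: in an isothermal TGA (Langmuir-type) treatment, the logarithm of the mass-loss rate is approximately linear in 1/T, with slope -ΔH_sub/R. The temperatures and rates below are invented illustration data, not values from the paper, and the fit omits the small sqrt(T) correction of the full Langmuir equation.

    ```python
    # Generic sketch of a Clausius-Clapeyron-type fit to isothermal TGA
    # mass-loss rates; all numbers are made-up illustration data.
    import numpy as np

    R = 8.314  # J mol^-1 K^-1
    T = np.array([420.0, 430.0, 440.0, 450.0, 460.0])          # K (assumed)
    rate = np.array([1.1e-6, 2.0e-6, 3.6e-6, 6.2e-6, 1.0e-5])  # kg m^-2 s^-1 (assumed)

    # ln(rate) vs 1/T is approximately linear with slope -dH_sub/R.
    slope, intercept = np.polyfit(1.0 / T, np.log(rate), 1)
    dH_sub = -slope * R / 1000.0  # kJ/mol
    print(f"estimated sublimation enthalpy: {dH_sub:.1f} kJ/mol")
    ```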

  14. Surveying for "artifacts": the susceptibility of the OCB-performance evaluation relationship to common rater, item, and measurement context effects.

    PubMed

    Podsakoff, Nathan P; Whiting, Steven W; Welsh, David T; Mai, Ke Michael

    2013-09-01

    Despite the increased attention paid to biases attributable to common method variance (CMV) over the past 50 years, researchers have only recently begun to systematically examine the effect of specific sources of CMV in previously published empirical studies. Our study contributes to this research by examining the extent to which common rater, item, and measurement context characteristics bias the relationships between organizational citizenship behaviors and performance evaluations using a mixed-effects analytic technique. Results from 173 correlations reported in 81 empirical studies (N = 31,146) indicate that even after controlling for study-level factors, common rater and anchor point number similarity substantially biased the focal correlations. Indeed, these sources of CMV (a) led to estimates that were between 60% and 96% larger when comparing measures obtained from a common rater, versus different raters; (b) led to 39% larger estimates when a common source rated the scales using the same number, versus a different number, of anchor points; and (c) when taken together with other study-level predictors, accounted for over half of the between-study variance in the focal correlations. We discuss the implications for researchers and practitioners and provide recommendations for future research. PsycINFO Database Record (c) 2013 APA, all rights reserved

  15. A computationally efficient modelling of laminar separation bubbles

    NASA Technical Reports Server (NTRS)

    Dini, Paolo; Maughmer, Mark D.

    1990-01-01

    In predicting the aerodynamic characteristics of airfoils operating at low Reynolds numbers, it is often important to account for the effects of laminar (transitional) separation bubbles. Previous approaches to modelling this viscous phenomenon range from fast but sometimes unreliable empirical correlations for the length of the bubble and the associated increase in momentum thickness, to more accurate but significantly slower displacement-thickness iteration methods employing inverse boundary-layer formulations in the separated regions. Since the penalty in computational time associated with the more general methods is unacceptable for airfoil design applications, use of an accurate yet computationally efficient model is highly desirable. To this end, a semi-empirical bubble model was developed and incorporated into the Eppler and Somers airfoil design and analysis program. Generality and efficiency were achieved by successfully approximating the local viscous/inviscid interaction, the transition location, and the turbulent reattachment process within the framework of an integral boundary-layer method. Comparisons of the predicted aerodynamic characteristics with experimental measurements for several airfoils show excellent and consistent agreement for Reynolds numbers from 2,000,000 down to 100,000.

  16. Analyzing the Cross-Correlation Between Onshore and Offshore RMB Exchange Rates Based on Multifractal Detrended Cross-Correlation Analysis (MF-DCCA)

    NASA Astrophysics Data System (ADS)

    Xie, Chi; Zhou, Yingying; Wang, Gangjin; Yan, Xinguo

    We use the multifractal detrended cross-correlation analysis (MF-DCCA) method to explore the multifractal behavior of the cross-correlation between the exchange rates of onshore RMB (CNY) and offshore RMB (CNH) against the US dollar (USD). The empirical data are daily prices of CNY/USD and CNH/USD from May 1, 2012 to February 29, 2016. The results demonstrate that: (i) the cross-correlation between CNY/USD and CNH/USD is persistent, and its fluctuation is smaller when the order of the fluctuation function is negative than when the order is positive; (ii) the multifractal behavior of the cross-correlation between CNY/USD and CNH/USD is significant during the sample period; (iii) the dynamic Hurst exponents obtained by rolling-window analysis show that the cross-correlation is stable when the global economic situation is good and volatile when it is bad; and (iv) the non-normal distribution of the original data has a greater effect on the multifractality of the cross-correlation between CNY/USD and CNH/USD than the temporal correlation.
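
    For readers unfamiliar with MF-DCCA, the compact sketch below computes the detrended covariance in windows of scale s, forms the q-th order fluctuation function Fq(s), and estimates the generalized Hurst exponent h(q) from the log-log slope. Window sizes and the detrending order are illustrative choices.

    ```python
    # Compact MF-DCCA sketch; scales, q values, and test data are illustrative.
    import numpy as np

    def mf_dcca(x, y, scales, q_list, order=1):
        """Return h(q) estimated from the log-log slope of Fq(s)."""
        X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())
        hq = []
        for q in q_list:
            log_F, log_s = [], []
            for s in scales:
                f2 = []
                for v in range(len(X) // s):
                    seg = slice(v * s, (v + 1) * s)
                    t = np.arange(s)
                    px = np.polyval(np.polyfit(t, X[seg], order), t)
                    py = np.polyval(np.polyfit(t, Y[seg], order), t)
                    f2.append(np.mean((X[seg] - px) * (Y[seg] - py)))
                f2 = np.abs(f2)
                if q == 0:
                    Fq = np.exp(0.5 * np.mean(np.log(f2)))
                else:
                    Fq = np.mean(f2 ** (q / 2.0)) ** (1.0 / q)
                log_F.append(np.log(Fq)); log_s.append(np.log(s))
            hq.append(np.polyfit(log_s, log_F, 1)[0])
        return np.array(hq)

    rng = np.random.default_rng(3)
    x = rng.normal(size=4096)
    y = 0.6 * x + 0.8 * rng.normal(size=4096)
    print(mf_dcca(x, y, scales=[16, 32, 64, 128, 256], q_list=[-2, 0, 2]))
    # values near 0.5 for white-noise-like input
    ```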

  17. DMirNet: Inferring direct microRNA-mRNA association networks.

    PubMed

    Lee, Minsu; Lee, HyungJune

    2016-12-05

    MicroRNAs (miRNAs) play important regulatory roles in a wide range of biological processes by inducing target mRNA degradation or translational repression. Based on the correlation between the expression profiles of a miRNA and its target mRNA, various computational methods have previously been proposed to identify miRNA-mRNA association networks by incorporating matched miRNA and mRNA expression profiles. However, three major issues remain to be resolved in conventional computational approaches for inferring miRNA-mRNA association networks from expression profiles. 1) Correlations inferred from observed expression profiles using conventional correlation-based methods include numerous erroneous links or over-estimated edge weights, due to transitive information flow among direct associations. 2) Due to the high-dimension-low-sample-size nature of microarray datasets, it is difficult to obtain an accurate and reliable estimate of the empirical correlations between all pairs of expression profiles. 3) Because previously proposed computational methods usually suffer from varying performance across different datasets, a more reliable model that guarantees optimal or near-optimal performance across datasets is highly needed. In this paper, we present DMirNet, a new framework for identifying direct miRNA-mRNA association networks. To tackle the aforementioned issues, DMirNet incorporates 1) three direct correlation estimation methods (namely Corpcor, SPACE, and network deconvolution) to infer direct miRNA-mRNA association networks, 2) bootstrapping to make full use of the limited training expression profiles, and 3) rank-based ensemble aggregation to build a reliable and robust model across different datasets. Our empirical experiments on three datasets demonstrate the combinatorial effects of the necessary components in DMirNet. Additional performance comparison experiments show that DMirNet outperforms the state-of-the-art ensemble-based model [1], which had shown the best performance across the same three datasets, by a factor of up to 1.29. Further, we identify 43 putative novel multi-cancer-related miRNA-mRNA association relationships from the inferred top 1000 direct miRNA-mRNA association network. We believe that DMirNet is a promising method for identifying novel direct miRNA-mRNA relations and elucidating direct miRNA-mRNA association networks. Since DMirNet infers direct relationships from the observed data, it can contribute to reconstructing various direct regulatory pathways, including, but not limited to, direct miRNA-mRNA association networks.
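
    Of the three direct correlation estimation methods named above, network deconvolution is the easiest to sketch: it removes transitive information flow from a symmetric association matrix by shrinking its eigenvalues. The toy matrix below is hypothetical, and practical implementations also rescale the input so the shrunk eigenvalues stay well behaved.

    ```python
    # Sketch of network deconvolution on a symmetric association matrix.
    import numpy as np

    def network_deconvolution(S):
        """Direct-association matrix S_dir satisfying S = S_dir (I - S_dir)^-1."""
        lam, U = np.linalg.eigh((S + S.T) / 2.0)   # symmetrize, then decompose
        lam_dir = lam / (1.0 + lam)                # shrink each eigenvalue
        return U @ np.diag(lam_dir) @ U.T

    # Toy example: a chain A-B-C induces an indirect A-C association.
    S = np.array([[0.0, 0.5, 0.3],
                  [0.5, 0.0, 0.5],
                  [0.3, 0.5, 0.0]])
    print(np.round(network_deconvolution(S), 2))   # indirect contributions reduced
    ```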

  18. Research on the factors of return on equity: empirical analysis in Chinese port industries from 2000-2008

    NASA Astrophysics Data System (ADS)

    Li, Wei

    2012-01-01

    Port industries are basic industries in the national economy and have become among the most modernized sectors in every country. The development of the port industry is advantageous not only for promoting the optimal arrangement of social resources, but also for promoting the growth of foreign trade volume by enhancing transportation functions. Return on equity (ROE) is a direct indicator related to the maximization of a company's wealth, and it makes up for the shortcomings of earnings per share (EPS). The aim of this paper is to establish the correlation between ROE and other financial indicators by choosing listed port companies as the research objects and using the data of these companies from 2000 to 2008 as the empirical sample, with statistical analysis of charted figures and coefficients. The analysis method used in the paper combines trend analysis, comparative analysis, and factor-ratio analysis. This paper analyzes and compares all these factors and draws the following conclusions. Firstly, ROE has a positive correlation with total assets turnover, main profit margin, and fixed asset ratio, and a negative correlation with the assets-liabilities ratio, total assets growth rate, and DOL. Secondly, main profit margin has the greatest positive effect on ROE among all these factors; the second greatest factor is total assets turnover, which shows that operational capacity is an important indicator after profitability. Thirdly, the assets-liabilities ratio has the greatest negative effect on ROE among all these factors.

  19. Research on the factors of return on equity: empirical analysis in Chinese port industries from 2000-2008

    NASA Astrophysics Data System (ADS)

    Li, Wei

    2011-12-01

    Port industries are basic industries in the national economy and have become among the most modernized sectors in every country. The development of the port industry is advantageous not only for promoting the optimal arrangement of social resources, but also for promoting the growth of foreign trade volume by enhancing transportation functions. Return on equity (ROE) is a direct indicator related to the maximization of a company's wealth, and it makes up for the shortcomings of earnings per share (EPS). The aim of this paper is to establish the correlation between ROE and other financial indicators by choosing listed port companies as the research objects and using the data of these companies from 2000 to 2008 as the empirical sample, with statistical analysis of charted figures and coefficients. The analysis method used in the paper combines trend analysis, comparative analysis, and factor-ratio analysis. This paper analyzes and compares all these factors and draws the following conclusions. Firstly, ROE has a positive correlation with total assets turnover, main profit margin, and fixed asset ratio, and a negative correlation with the assets-liabilities ratio, total assets growth rate, and DOL. Secondly, main profit margin has the greatest positive effect on ROE among all these factors; the second greatest factor is total assets turnover, which shows that operational capacity is an important indicator after profitability. Thirdly, the assets-liabilities ratio has the greatest negative effect on ROE among all these factors.

  20. Identification of AR(I)MA processes for modelling temporal correlations of GPS observations

    NASA Astrophysics Data System (ADS)

    Luo, X.; Mayer, M.; Heck, B.

    2009-04-01

    In many geodetic applications, observations of the Global Positioning System (GPS) are routinely processed by means of the least-squares method. However, this algorithm delivers reliable estimates of unknown parameters and realistic accuracy measures only if both the functional and stochastic models are appropriately defined within GPS data processing. One deficiency of the stochastic model used in many GPS software products is the neglect of temporal correlations of GPS observations. In practice, knowledge of the temporal stochastic behaviour of GPS observations can be improved by analysing time series of residuals resulting from the least-squares evaluation. This paper presents an approach based on the theory of autoregressive (integrated) moving average (AR(I)MA) processes to model temporal correlations of GPS observations using time series of observation residuals. A practicable integration of AR(I)MA models in GPS data processing first requires the determination of the order parameters of AR(I)MA processes. In the case of GPS, the identification of AR(I)MA processes can be affected by various factors impacting GPS positioning results, e.g., baseline length, multipath effects, observation weighting, or weather variations. The influences of these factors on AR(I)MA identification are empirically analysed based on a large set of representative residual time series resulting from differential GPS post-processing of 1-Hz observation data collected within the permanent SAPOS® (Satellite Positioning Service of the German State Survey) network. Both short and long time series are modelled by means of AR(I)MA processes. The final order parameters are determined based on the whole residual database; the corresponding empirical distribution functions illustrate that multipath and weather variations seem to affect the identification of AR(I)MA processes much more significantly than baseline length and observation weighting. Additionally, the results of modelling temporal correlations using high-order AR(I)MA processes are compared with those obtained using first-order autoregressive (AR(1)) processes and empirically estimated autocorrelation functions.
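
    A minimal sketch of the identification step, an information-criterion scan over candidate orders with statsmodels, is shown below on a simulated AR(1)-like residual series; the simulated series stands in for the SAPOS residuals, which are not reproduced here.

    ```python
    # ARMA order identification by AIC scan on a simulated residual series.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(4)
    # Simulated AR(1)-like residuals (rho = 0.7), a stand-in for GPS residuals.
    e = rng.normal(size=2000)
    r = np.empty_like(e)
    r[0] = e[0]
    for t in range(1, len(e)):
        r[t] = 0.7 * r[t - 1] + e[t]

    best = min(
        ((p, q, ARIMA(r, order=(p, 0, q)).fit().aic)
         for p in range(3) for q in range(3)),
        key=lambda t: t[2],
    )
    print(f"selected ARMA order: p={best[0]}, q={best[1]} (AIC={best[2]:.1f})")
    ```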

  1. Why Psychology Cannot be an Empirical Science.

    PubMed

    Smedslund, Jan

    2016-06-01

    The current empirical paradigm for psychological research is criticized because it ignores the irreversibility of psychological processes, the infinite number of influential factors, the pseudo-empirical nature of many hypotheses, and the methodological implications of social interactivity. An additional point is that the differences and correlations usually found are much too small to be useful in psychological practice and in daily life. Together, these criticisms imply that an objective, accumulative, empirical and theoretical science of psychology is an impossible project.

  2. The Impact of Variability of Selected Geological and Mining Parameters on the Value and Risks of Projects in the Hard Coal Mining Industry

    NASA Astrophysics Data System (ADS)

    Kopacz, Michał

    2017-09-01

    The paper attempts to assess the impact of the variability of selected geological (deposit) parameters on the value and risks of projects in the hard coal mining industry. The study was based on simulated discounted cash flow analysis, and the results were verified for three existing bituminous coal seams. The Monte Carlo simulation was based on the nonparametric bootstrap method, while correlations between individual deposit parameters were replicated with the use of an empirical copula. The calculations take into account the uncertainty in the parameters of the empirical distributions of the deposit variables. The Net Present Value (NPV) and the Internal Rate of Return (IRR) were selected as the main measures of value and risk, respectively. The impact of the volatility and correlation of deposit parameters was analyzed in two respects, by identifying the overall effect of the correlated variability of the parameters and the individual impact of the correlation on the NPV and IRR. For this purpose, a differential approach was used, which allows the possible errors in the calculation of these measures to be quantified numerically. Based on the study it can be concluded that the mean value of the overall effect of the variability does not exceed 11.8% of NPV and 2.4 percentage points of IRR. Neglecting the correlations results in overestimating the NPV by up to 4.4% and the IRR by up to 0.4 percentage points. It should be noted, however, that the differences in NPV and IRR values can vary significantly, and their interpretation depends on the likelihood of realization. Generalizing the obtained results based on the average values, the maximum value of the risk premium under the given calculation conditions of the "X" deposit, and for correspondingly large datasets (greater than 2500), should not be higher than 2.4 percentage points. The impact of the analyzed geological parameters on the NPV and IRR depends primarily on their co-existence, which can be measured by the strength of correlation. In the analyzed case, the correlations limit the range of variation of the geological parameters and the economic results (the empirical copula reduces the NPV and IRR in the probabilistic approach). However, this is due to the adjustment of the calculation to conditions similar to those prevailing in the deposit.
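
    The role of dependence in such simulations can be illustrated with a small sketch: resampling whole observation rows preserves the correlation between deposit parameters, while resampling each parameter separately destroys it, shifting the simulated NPV. The deposit parameters, cash-flow model, and discount rate below are all made-up assumptions, not the "X" deposit data.

    ```python
    # Bootstrap NPV with dependence preserved vs. dependence broken; all
    # parameters and the cash-flow model are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(5)
    n = 2500
    # Correlated "deposit" parameters: seam thickness and calorific value (assumed).
    thickness = rng.lognormal(0.5, 0.2, n)
    calorific = 20 + 4 * (thickness - thickness.mean()) + rng.normal(0, 1, n)

    def npv(thk, cal, rate=0.08, years=10):
        cash = 5.0 * thk * cal - 150.0            # toy annual cash-flow model
        t = np.arange(1, years + 1)
        return np.sum(cash.mean() * (1 + rate) ** -t)

    boot_joint, boot_indep = [], []
    for _ in range(1000):
        i = rng.integers(0, n, n)                 # resample rows: keeps correlation
        boot_joint.append(npv(thickness[i], calorific[i]))
        j = rng.integers(0, n, n)                 # resample columns independently
        boot_indep.append(npv(thickness[i], calorific[j]))

    print(f"mean NPV, dependence kept:   {np.mean(boot_joint):8.1f}")
    print(f"mean NPV, dependence broken: {np.mean(boot_indep):8.1f}")
    ```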

  3. Multiple data sources improve DNA-based mark-recapture population estimates of grizzly bears.

    PubMed

    Boulanger, John; Kendall, Katherine C; Stetz, Jeffrey B; Roon, David A; Waits, Lisette P; Paetkau, David

    2008-04-01

    A fundamental challenge to estimating population size with mark-recapture methods is heterogeneous capture probabilities and subsequent bias of population estimates. Confronting this problem usually requires substantial sampling effort that can be difficult to achieve for some species, such as carnivores. We developed a methodology that uses two data sources to deal with heterogeneity and applied this to DNA mark-recapture data from grizzly bears (Ursus arctos). We improved population estimates by incorporating additional DNA "captures" of grizzly bears obtained by collecting hair from unbaited bear rub trees concurrently with baited, grid-based, hair snag sampling. We consider a Lincoln-Petersen estimator with hair snag captures as the initial session and rub tree captures as the recapture session and develop an estimator in program MARK that treats hair snag and rub tree samples as successive sessions. Using empirical data from a large-scale project in the greater Glacier National Park, Montana, USA, area and simulation modeling we evaluate these methods and compare the results to hair-snag-only estimates. Empirical results indicate that, compared with hair-snag-only data, the joint hair-snag-rub-tree methods produce similar but more precise estimates if capture and recapture rates are reasonably high for both methods. Simulation results suggest that estimators are potentially affected by correlation of capture probabilities between sample types in the presence of heterogeneity. Overall, closed population Huggins-Pledger estimators showed the highest precision and were most robust to sparse data, heterogeneity, and capture probability correlation among sampling types. Results also indicate that these estimators can be used when a segment of the population has zero capture probability for one of the methods. We propose that this general methodology may be useful for other species in which mark-recapture data are available from multiple sources.
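
    In its simplest two-source form, the estimator described above reduces to a Lincoln-Petersen calculation; the sketch below uses Chapman's small-sample correction with invented capture counts, not the Glacier National Park data.

    ```python
    # Lincoln-Petersen population estimate with Chapman's correction.
    def chapman_estimate(n1, n2, m):
        """n1: hair-snag captures, n2: rub-tree captures, m: bears seen by both."""
        return (n1 + 1) * (n2 + 1) / (m + 1) - 1

    n1, n2, m = 120, 85, 30            # illustrative counts, not study data
    print(f"estimated population size: {chapman_estimate(n1, n2, m):.0f}")
    # (121 * 86) / 31 - 1, roughly 335 bears
    ```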

  4. An empirical investigation on different methods of economic growth rate forecast and its behavior from fifteen countries across five continents

    NASA Astrophysics Data System (ADS)

    Yin, Yip Chee; Hock-Eam, Lim

    2012-09-01

    Our empirical results show that GDP growth rate can be predicted more accurately in continents with fewer large economies than in smaller economies like Malaysia. This difficulty is very likely positively correlated with subsidy or social security policies. The stage of economic development and the level of competitiveness also appear to have interactive effects on forecast stability. These results are generally independent of the forecasting procedure. For countries with high stability in their economic growth, forecasting by model selection is better than model averaging. Overall, forecast weight averaging (FWA) is a better forecasting procedure in most countries. FWA also outperforms simple model averaging (SMA) and has the same forecasting ability as Bayesian model averaging (BMA) in almost all countries.

  5. Power-Laws and Scaling in Finance: Empirical Evidence and Simple Models

    NASA Astrophysics Data System (ADS)

    Bouchaud, Jean-Philippe

    We discuss several models that may explain the origin of power-law distributions and power-law correlations in financial time series. From an empirical point of view, the exponents describing the tails of the price-increment distribution and the decay of the volatility correlations are rather robust and suggest universality. However, many of the models that appear naturally (for example, to account for the distribution of wealth) contain some multiplicative noise, which generically leads to non-universal exponents. Recent progress in the empirical study of volatility suggests that volatility results from some sort of multiplicative cascade. A convincing 'microscopic' (i.e., trader-based) model that explains this observation is, however, not yet available. We discuss a rather generic mechanism for long-ranged volatility correlations based on the idea that agents constantly switch between active and inactive strategies depending on their relative performance.

  6. A Time-dependent Heliospheric Model Driven by Empirical Boundary Conditions

    NASA Astrophysics Data System (ADS)

    Kim, T. K.; Arge, C. N.; Pogorelov, N. V.

    2017-12-01

    Consisting of charged particles originating from the Sun, the solar wind carries the Sun's energy and magnetic field outward through interplanetary space. The solar wind is the predominant source of space weather events, and modeling the solar wind propagation to Earth is a critical component of space weather research. Solar wind models are typically separated into coronal and heliospheric parts to account for the different physical processes and scales characterizing each region. Coronal models are often coupled with heliospheric models to propagate the solar wind out to Earth's orbit and beyond. The Wang-Sheeley-Arge (WSA) model is a semi-empirical coronal model consisting of a potential field source surface model and a current sheet model that takes synoptic magnetograms as input to estimate the magnetic field and solar wind speed at any distance above the coronal region. The current version of the WSA model takes the Air Force Data Assimilative Photospheric Flux Transport (ADAPT) model as input to provide improved time-varying solutions for the ambient solar wind structure. When heliospheric MHD models are coupled with the WSA model, density and temperature at the inner boundary are treated as free parameters that are tuned to optimal values. For example, the WSA-ENLIL model prescribes density and temperature assuming momentum flux and thermal pressure balance across the inner boundary of the ENLIL heliospheric MHD model. We consider an alternative approach of prescribing density and temperature using empirical correlations derived from Ulysses and OMNI data. We use our own modeling software (Multi-scale Fluid-kinetic Simulation Suite) to drive a heliospheric MHD model with ADAPT-WSA input. The modeling results using the two different approaches of density and temperature prescription suggest that the use of empirical correlations may be a more straightforward, consistent method.

  7. Dependency structure and scaling properties of financial time series are related

    PubMed Central

    Morales, Raffaello; Di Matteo, T.; Aste, Tomaso

    2014-01-01

    We report evidence of a deep interplay between the hierarchical properties of cross-correlations and the multifractality of New York Stock Exchange daily stock returns. The degree of multifractality displayed by different stocks is found to be positively correlated to their depth in the hierarchy of cross-correlations. We propose a dynamical model that reproduces this observation along with an array of other empirical properties. The structure of this model is such that the hierarchical structure of heterogeneous risks plays a crucial role in the time evolution of the correlation matrix, providing an interpretation of the mechanism behind the interplay between cross-correlation and multifractality in financial markets, where the degree of multifractality of stocks is associated with their hierarchical positioning in the cross-correlation structure. The empirical observations reported in this paper present a new perspective towards the merging of univariate multi-scaling and multivariate cross-correlation properties of financial time series. PMID:24699417

  8. Dependency structure and scaling properties of financial time series are related

    NASA Astrophysics Data System (ADS)

    Morales, Raffaello; Di Matteo, T.; Aste, Tomaso

    2014-04-01

    We report evidence of a deep interplay between the hierarchical properties of cross-correlations and the multifractality of New York Stock Exchange daily stock returns. The degree of multifractality displayed by different stocks is found to be positively correlated to their depth in the hierarchy of cross-correlations. We propose a dynamical model that reproduces this observation along with an array of other empirical properties. The structure of this model is such that the hierarchical structure of heterogeneous risks plays a crucial role in the time evolution of the correlation matrix, providing an interpretation of the mechanism behind the interplay between cross-correlation and multifractality in financial markets, where the degree of multifractality of stocks is associated with their hierarchical positioning in the cross-correlation structure. The empirical observations reported in this paper present a new perspective towards the merging of univariate multi-scaling and multivariate cross-correlation properties of financial time series.

  9. A structural-phenomenological typology of mind-matter correlations.

    PubMed

    Atmanspacher, Harald; Fach, Wolfgang

    2013-04-01

    We present a typology of mind-matter correlations embedded in a dual-aspect monist framework as proposed by Pauli and Jung. They conjectured a picture in which the mental and the material arise as two complementary aspects of one underlying psychophysically neutral reality to which they cannot be reduced and to which direct empirical access is impossible. This picture suggests structural, persistent, reproducible mind-matter correlations by splitting the underlying reality into aspects. In addition, it suggests induced, occasional, evasive mind-matter correlations above and below, respectively, those stable baseline correlations. Two significant roles for the concept of meaning in this framework are elucidated. Finally, it is shown that the obtained typology is in perfect agreement with an empirically based classification of the phenomenology of mind-matter correlations as observed in exceptional human experiences. © 2013, The Society of Analytical Psychology.

  10. Lithology-derived structure classification from the joint interpretation of magnetotelluric and seismic models

    USGS Publications Warehouse

    Bedrosian, P.A.; Maercklin, N.; Weckmann, U.; Bartov, Y.; Ryberg, T.; Ritter, O.

    2007-01-01

    Magnetotelluric and seismic methods provide complementary information about the resistivity and velocity structure of the subsurface on similar scales and resolutions. No global relation, however, exists between these parameters, and correlations are often valid for only a limited target area. Independently derived inverse models from these methods can be combined using a classification approach to map geologic structure. The method employed is based solely on the statistical correlation of physical properties in a joint parameter space and is independent of theoretical or empirical relations linking electrical and seismic parameters. Regions of high correlation (classes) between resistivity and velocity can in turn be mapped back and re-examined in depth section. The spatial distribution of these classes, and the boundaries between them, provide structural information not evident in the individual models. This method is applied to a 10 km long profile crossing the Dead Sea Transform in Jordan. Several prominent classes are identified with specific lithologies in accordance with local geology. An abrupt change in lithology across the fault, together with vertical uplift of the basement suggest the fault is sub-vertical within the upper crust. © 2007 The Authors. Journal compilation © 2007 RAS.

  11. A systematic review of the association between family meals and adolescent risk outcomes.

    PubMed

    Goldfarb, Samantha S; Tarver, Will L; Locher, Julie L; Preskitt, Julie; Sen, Bisakha

    2015-10-01

    To conduct a systematic review of the literature examining the relationship between family meals and adolescent health risk outcomes. We performed a systematic search of original empirical studies published between January 1990 and September 2013. Based on data from selected studies, we conducted logistic regression models to examine the correlates of reporting a protective association between frequent family meals and adolescent outcomes. Of the 254 analyses from 26 selected studies, most reported a significant association between family meals and the adolescent risk outcome-of-interest. However, model analyses which controlled for family connectedness variables, or used advanced empirical methods to account for family-level confounders, were less likely than unadjusted models to report significant relationships. The type of analysis conducted was significantly associated with the likelihood of finding a protective relationship between family meals and the adolescent outcome-of-interest, yet very few studies are using such methods in the literature. Copyright © 2015 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  12. Extreme values in the Chinese and American stock markets based on detrended fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Cao, Guangxi; Zhang, Minjia

    2015-10-01

    This paper presents a comparative analysis of extreme values in the Chinese and American stock markets based on the detrended fluctuation analysis (DFA) algorithm, using daily data of the Shanghai Composite Index and the Dow Jones Industrial Average. The empirical results indicate that the multifractal detrended fluctuation analysis (MF-DFA) method is more objective than the traditional percentile method. The range of extreme values of the Dow Jones Industrial Average is smaller than that of the Shanghai Composite Index, and the extreme values of the Dow Jones Industrial Average show stronger temporal clustering. The extreme values of both the Chinese and American stock markets are concentrated in 2008, consistent with the financial crisis of that year. Moreover, we investigate whether extreme events affect the cross-correlation between the Chinese and American stock markets using the multifractal detrended cross-correlation analysis algorithm. The results show that extreme events do not affect the cross-correlation between the Chinese and American stock markets.

  13. Dynamic characterization of a damaged beam using empirical mode decomposition and Hilbert spectrum method

    NASA Astrophysics Data System (ADS)

    Chang, Chih-Chen; Poon, Chun-Wing

    2004-07-01

    Recently, the empirical mode decomposition (EMD) in combination with the Hilbert spectrum method has been proposed to identify the dynamic characteristics of linear structures. In this study, the EMD and Hilbert spectrum method is used to analyze the dynamic characteristics of a damaged reinforced concrete (RC) beam in the laboratory. The RC beam is 4 m long with a cross section of 200 mm × 250 mm. The beam is sequentially subjected to a concentrated load of different magnitudes at the mid-span to produce different degrees of damage. An impact load is applied around the mid-span to excite the beam. Responses of the beam are recorded by four accelerometers. Results indicate that the EMD and Hilbert spectrum method can reveal the variation of the dynamic characteristics in the time domain. These results are also compared with those obtained using Fourier analysis. In general, the two sets of results correlate quite well in terms of mode counts and frequency values. Some differences, however, can be seen in the damping values, which can perhaps be attributed to the linear assumption of the Fourier transform.
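
    The Hilbert-spectrum step can be sketched briefly: the analytic signal of an intrinsic mode yields an instantaneous phase, whose derivative is the instantaneous frequency that reveals time variation of the dynamic characteristics. The decaying 12 Hz mode below is simulated, not the RC-beam data.

    ```python
    # Instantaneous frequency of one intrinsic mode via the analytic signal.
    import numpy as np
    from scipy.signal import hilbert

    fs = 1000.0                                    # sampling rate, Hz (assumed)
    t = np.arange(0, 2.0, 1.0 / fs)
    imf = np.exp(-1.5 * t) * np.sin(2 * np.pi * 12.0 * t)  # 12 Hz decaying mode

    analytic = hilbert(imf)
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) / (2 * np.pi) * fs  # Hz

    print(f"median instantaneous frequency: {np.median(inst_freq):.1f} Hz")  # ~12
    ```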

  14. Multiscale multifractal DCCA and complexity behaviors of return intervals for Potts price model

    NASA Astrophysics Data System (ADS)

    Wang, Jie; Wang, Jun; Stanley, H. Eugene

    2018-02-01

    To investigate the characteristics of extreme events in financial markets and the return intervals among these events, we use a Potts dynamic system to construct a random financial time series model of the attitudes of market traders. We use multiscale multifractal detrended cross-correlation analysis (MM-DCCA) and Lempel-Ziv complexity (LZC) to perform a numerical study of the return intervals for two important Chinese stock market indices and for the proposed model. The new MM-DCCA method is based on the Hurst surface and provides more interpretable cross-correlations of the dynamic mechanism between different return-interval series. We scale the LZC method with different exponents to illustrate the complexity of return intervals at different scales. Empirical studies indicate that the return intervals from the proposed Potts system and from the real stock market indices share similar statistical properties.

  15. An empirical comparison of methods for analyzing correlated data from a discrete choice survey to elicit patient preference for colorectal cancer screening

    PubMed Central

    2012-01-01

    Background A discrete choice experiment (DCE) is a preference survey in which participants perform several choice tasks, each asking them to choose among product portfolios that differ in key product characteristics. Analysis of DCE data must account for within-participant correlation because choices from the same participant are likely to be similar. In this study, we empirically compared some commonly used statistical methods for analyzing DCE data while accounting for within-participant correlation, based on a survey of patient preference for colorectal cancer (CRC) screening tests conducted in Hamilton, Ontario, Canada in 2002. Methods A two-stage DCE design was used to investigate the impact of six attributes on participants' preferences for CRC screening tests and their willingness to undertake the test. We compared six models for clustered binary outcomes (logistic and probit regressions using cluster-robust standard errors (SE), random-effects, and generalized estimating equation approaches) and three models for clustered nominal outcomes (multinomial logistic and probit regressions with cluster-robust SE and a random-effects multinomial logistic model). We also fitted a bivariate probit model with cluster-robust SE, treating the choices from the two stages as two correlated binary outcomes. The rank of relative importance between attributes and the estimates of the β coefficients within attributes were used to assess model robustness. Results In total, 468 participants, each completing 10 choices, were analyzed. Similar results were reported for the rank of relative importance and the β coefficients across models for the stage-one data evaluating participants' preferences for the test. The six attributes, ranked from high to low, were: cost, specificity, process, sensitivity, preparation, and pain. However, the results differed across models for the stage-two data evaluating participants' willingness to undertake the tests. Little within-patient correlation (ICC ≈ 0) was found in the stage-one data, but substantial within-patient correlation existed (ICC = 0.659) in the stage-two data. Conclusions When a small clustering effect is present in DCE data, results remain robust across statistical models. However, results vary when a larger clustering effect is present. It is therefore important to assess the robustness of the estimates via sensitivity analysis using different models for analyzing clustered data from DCE studies. PMID:22348526
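
    One of the compared model families, a GEE logistic regression with an exchangeable working correlation for choices clustered within participants, can be sketched with statsmodels as below. The data frame, attribute names, and effect sizes are illustrative assumptions, not the survey data.

    ```python
    # GEE logistic regression for clustered binary choices (simulated data).
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(6)
    n_id, n_task = 100, 10
    df = pd.DataFrame({
        "pid":  np.repeat(np.arange(n_id), n_task),   # participant id (cluster)
        "cost": rng.normal(size=n_id * n_task),       # illustrative attributes
        "pain": rng.normal(size=n_id * n_task),
    })
    u = np.repeat(rng.normal(0, 0.8, n_id), n_task)   # participant-level effect
    p = 1 / (1 + np.exp(-(-0.5 * df["cost"] - 0.3 * df["pain"] + u)))
    df["choice"] = rng.binomial(1, p.to_numpy())

    model = smf.gee("choice ~ cost + pain", groups="pid", data=df,
                    family=sm.families.Binomial(),
                    cov_struct=sm.cov_struct.Exchangeable())
    print(model.fit().summary().tables[1])
    ```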

  16. Model construction of nursing service satisfaction in hospitalized tumor patients.

    PubMed

    Chen, Yongyi; Liu, Jingshi; Xiao, Shuiyuan; Liu, Xiangyu; Tang, Xinhui; Zhou, Yujuan

    2014-01-01

    This study aims to construct a satisfaction model for nursing service in hospitalized tumor patients. Using questionnaires, data about hospitalized tumor patients' expectations, quality perception, and satisfaction with hospital nursing service were obtained. A satisfaction model of nursing service in hospitalized tumor patients was established through an empirical study using the structural equation method. The model was suitable for a specialized tumor hospital, with good reliability and validity. Patient satisfaction was significantly affected by quality perception and patient expectation. Patient satisfaction and patient loyalty were also affected by disease pressure. Hospital brand was positively correlated with patient satisfaction and patient loyalty, and negatively correlated with patient complaints. Patient satisfaction was positively correlated with patient loyalty, patient complaints, and quality perception, and negatively correlated with disease pressure and patient expectation. The satisfaction model of nursing service in hospitalized tumor patients fits well. With this model, the quality of hospital nursing care may be improved.

  17. Model construction of nursing service satisfaction in hospitalized tumor patients

    PubMed Central

    Chen, Yongyi; Liu, Jingshi; Xiao, Shuiyuan; Liu, Xiangyu; Tang, Xinhui; Zhou, Yujuan

    2014-01-01

    This study aims to construct a satisfaction model for nursing service in hospitalized tumor patients. Using questionnaires, data about hospitalized tumor patients’ expectations, quality perception, and satisfaction with hospital nursing service were obtained. A satisfaction model of nursing service in hospitalized tumor patients was established through an empirical study using the structural equation method. The model was suitable for a specialized tumor hospital, with good reliability and validity. Patient satisfaction was significantly affected by quality perception and patient expectation. Patient satisfaction and patient loyalty were also affected by disease pressure. Hospital brand was positively correlated with patient satisfaction and patient loyalty, and negatively correlated with patient complaints. Patient satisfaction was positively correlated with patient loyalty, patient complaints, and quality perception, and negatively correlated with disease pressure and patient expectation. The satisfaction model of nursing service in hospitalized tumor patients fits well. With this model, the quality of hospital nursing care may be improved. PMID:25419410

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellen M. Rabenberg; Brian J. Jaques; Bulent H. Sencer

    The mechanical properties of AISI 304 stainless steel irradiated for over a decade in the Experimental Breeder Reactor (EBR-II) were measured using miniature mechanical testing methods. The shear punch method was used to evaluate the shear strengths of the neutron-irradiated steel, and a correlation factor was empirically determined to predict its tensile strength. The strength of the stainless steel decreased slightly with increasing irradiation temperature and increased significantly with increasing dose until it saturated above approximately 5 dpa. Ferromagnetic measurements were used to observe and deduce the effects of the stress-induced austenite-to-martensite transformation resulting from shear punch testing.

  19. Predicting Reduction Rates of Energetic Nitroaromatic Compounds Using Calculated One-Electron Reduction Potentials

    DOE PAGES

    Salter-Blanc, Alexandra; Bylaska, Eric J.; Johnston, Hayley; ...

    2015-02-11

    The evaluation of new energetic nitroaromatic compounds (NACs) for use in green munitions formulations requires models that can predict their environmental fate. The susceptibility of energetic NACs to nitro reduction might be predicted from correlations between rate constants (k) for this reaction and one-electron reduction potentials (E1_NAC/0.059 V), but the mechanistic implications of such correlations are inconsistent with evidence from other methods. To address this inconsistency, we have reevaluated existing kinetic data using a (non-linear) free-energy relationship (FER) based on the Marcus theory of outer-sphere electron transfer. For most reductants, the results are inconsistent with rate limitation by an initial, outer-sphere electron transfer, suggesting that the strong correlation between k and E1_NAC is justified only as an empirical model. This empirical correlation was used to calibrate a new quantitative structure-activity relationship (QSAR) using previously reported values of k for non-energetic NAC reduction by Fe(II) porphyrin and newly reported values of E1_NAC determined using density functional theory at the B3LYP/6-311++G(2d,2p) level with the COSMO solvation model. The QSAR was then validated for energetic NACs using newly measured kinetic data for 2,4,6-trinitrotoluene (TNT), 2,4-dinitrotoluene (2,4-DNT), and 2,4-dinitroanisole (DNAN). The data show close agreement with the QSAR, supporting its applicability to energetic NACs.

  20. Statistical microeconomics and commodity prices: theory and empirical results.

    PubMed

    Baaquie, Belal E

    2016-01-13

    A review is made of the statistical generalization of microeconomics by Baaquie (Baaquie 2013 Phys. A 392, 4400-4416. (doi:10.1016/j.physa.2013.05.008)), where the market price of every traded commodity, at each instant of time, is considered to be an independent random variable. The dynamics of commodity market prices is given by the unequal time correlation function and is modelled by the Feynman path integral based on an action functional. The correlation functions of the model are defined using the path integral. The existence of the action functional for commodity prices that was postulated in Baaquie (2013) has been empirically ascertained in Baaquie et al. (2015 Phys. A 428, 19-37. (doi:10.1016/j.physa.2015.02.030)). The model's action functionals for different commodities have been empirically determined and calibrated using the unequal time correlation functions of the market commodity prices via a perturbation expansion (Baaquie et al. 2015). Nine commodities drawn from the energy, metal and grain sectors are empirically studied, and their auto-correlation for up to 300 days is described by the model to an accuracy of R² > 0.90, using only six parameters. © 2015 The Author(s).

  1. Artifact removal from EEG data with empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Grubov, Vadim V.; Runnova, Anastasiya E.; Efremova, Tatyana Yu.; Hramov, Alexander E.

    2017-03-01

    In this paper we propose a novel method for dealing with physiological artifacts caused by intensive activity of facial and neck muscles and other movements in experimental human EEG recordings. The method is based on the analysis of EEG signals with empirical mode decomposition (the Hilbert-Huang transform). We introduce the mathematical algorithm of the method with the following steps: empirical mode decomposition of the EEG signal, identification of the empirical modes containing artifacts, removal of those modes, and reconstruction of the initial EEG signal. We test the method by filtering experimental human EEG signals contaminated with movement artifacts and show its high efficiency.
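
    The four steps listed above can be sketched compactly. The example assumes the PyEMD package for the decomposition (an assumption; any EMD implementation works), and the rule for flagging artifact modes, an ad hoc variance threshold, is a placeholder for the paper's criterion.

    ```python
    # EMD-based artifact removal sketch on a simulated EEG-like signal.
    import numpy as np
    from PyEMD import EMD  # assumes the PyEMD (EMD-signal) package

    rng = np.random.default_rng(7)
    fs = 250.0
    t = np.arange(0, 8.0, 1.0 / fs)
    eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)  # alpha + noise
    eeg[500:700] += 4.0 * np.sin(2 * np.pi * 2 * t[500:700])          # movement burst

    imfs = EMD().emd(eeg)                        # step 1: decompose into IMFs

    # Step 2: flag artifact-dominated IMFs (placeholder rule: unusually
    # high variance relative to the median IMF variance).
    v = imfs.var(axis=1)
    artifact = [i for i in range(len(imfs)) if v[i] > 2.0 * np.median(v)]

    # Steps 3-4: drop flagged modes and reconstruct the cleaned signal.
    clean = imfs[[i for i in range(len(imfs)) if i not in artifact]].sum(axis=0)
    print(f"flagged IMFs: {artifact}, cleaned-signal variance: {clean.var():.2f}")
    ```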

  2. Multi-Fault Diagnosis of Rolling Bearings via Adaptive Projection Intrinsically Transformed Multivariate Empirical Mode Decomposition and High Order Singular Value Decomposition

    PubMed Central

    Lv, Yong; Song, Gangbing

    2018-01-01

    Rolling bearings are important components in rotary machinery systems. In the field of multi-fault diagnosis of rolling bearings, the vibration signal collected from a single channel tends to miss some fault characteristic information. Using multiple sensors to collect signals at different locations on the machine to obtain a multivariate signal can remedy this problem. However, the adverse effect of a power imbalance between the various channels is inevitable and unfavorable for multivariate signal processing. As a useful multivariate signal processing method, adaptive-projection intrinsically transformed multivariate empirical mode decomposition (APIT-MEMD) exhibits better performance than MEMD by adopting an adaptive projection strategy to alleviate power imbalances. The filter bank properties of APIT-MEMD are also exploited to obtain more accurate and stable intrinsic mode functions (IMFs) and to ease mode-mixing problems in multi-fault frequency extraction. By aligning the IMF sets into a third-order tensor, high order singular value decomposition (HOSVD) can be employed to estimate the number of faults. Fault correlation factor (FCF) analysis is used to conduct correlation analysis and determine the effective IMFs, from which the characteristic frequencies of multiple faults can be extracted. Numerical simulations and an application to a multi-fault scenario demonstrate that the proposed method is promising for multi-fault diagnosis of multivariate rolling bearing signals. PMID:29659510

  3. Multi-Fault Diagnosis of Rolling Bearings via Adaptive Projection Intrinsically Transformed Multivariate Empirical Mode Decomposition and High Order Singular Value Decomposition.

    PubMed

    Yuan, Rui; Lv, Yong; Song, Gangbing

    2018-04-16

    Rolling bearings are important components in rotary machinery systems. In the field of multi-fault diagnosis of rolling bearings, the vibration signal collected from a single channel tends to miss some fault characteristic information. Using multiple sensors to collect signals at different locations on the machine to obtain a multivariate signal can remedy this problem. However, the adverse effect of a power imbalance between the various channels is inevitable and unfavorable for multivariate signal processing. As a useful multivariate signal processing method, adaptive-projection intrinsically transformed multivariate empirical mode decomposition (APIT-MEMD) exhibits better performance than MEMD by adopting an adaptive projection strategy to alleviate power imbalances. The filter bank properties of APIT-MEMD are also exploited to obtain more accurate and stable intrinsic mode functions (IMFs) and to ease mode-mixing problems in multi-fault frequency extraction. By aligning the IMF sets into a third-order tensor, high order singular value decomposition (HOSVD) can be employed to estimate the number of faults. Fault correlation factor (FCF) analysis is used to conduct correlation analysis and determine the effective IMFs, from which the characteristic frequencies of multiple faults can be extracted. Numerical simulations and an application to a multi-fault scenario demonstrate that the proposed method is promising for multi-fault diagnosis of multivariate rolling bearing signals.
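
    The HOSVD step can be sketched on its own: each mode-n unfolding of the third-order IMF tensor is decomposed by an ordinary SVD, and the resulting mode singular values can be inspected to estimate the number of fault components. The tensor layout and dimensions below are illustrative assumptions.

    ```python
    # HOSVD of a third-order tensor via mode unfoldings; dimensions are assumed.
    import numpy as np

    def unfold(T, mode):
        """Mode-n unfolding of a tensor: mode axis first, rest flattened."""
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def hosvd(T):
        """Return factor matrices and mode singular values of a tensor."""
        factors, svals = [], []
        for mode in range(T.ndim):
            U, s, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
            factors.append(U); svals.append(s)
        return factors, svals

    rng = np.random.default_rng(8)
    # IMF tensor: channels x IMFs x time samples (assumed layout).
    T = rng.normal(size=(4, 6, 1024))
    _, svals = hosvd(T)
    print([np.round(s[:3], 1) for s in svals])  # leading mode singular values
    ```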

  4. An optimized time varying filtering based empirical mode decomposition method with grey wolf optimizer for machinery fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Liu, Zhiwen; Miao, Qiang; Wang, Lei

    2018-03-01

    A time-varying filtering based empirical mode decomposition (TVF-EMD) method was proposed recently to solve the mode mixing problem of the EMD method. Compared with classical EMD, TVF-EMD was shown to improve the frequency separation performance and to be robust to noise interference. However, the decomposition parameters (i.e., bandwidth threshold and B-spline order) significantly affect the decomposition results of this method. In the original TVF-EMD method, the parameter values are assigned in advance, which makes it difficult to achieve satisfactory analysis results. To solve this problem, this paper develops an optimized TVF-EMD method based on the grey wolf optimizer (GWO) algorithm for fault diagnosis of rotating machinery. Firstly, a measurement index termed the weighted kurtosis index is constructed from the kurtosis index and the correlation coefficient. Subsequently, the optimal TVF-EMD parameters that match the input signal can be obtained by the GWO algorithm using the maximum weighted kurtosis index as the objective function. Finally, fault features can be extracted by analyzing the sensitive intrinsic mode function (IMF) with the maximum weighted kurtosis index. Simulations and comparisons highlight the performance of the TVF-EMD method for signal decomposition, and meanwhile verify that the bandwidth threshold and B-spline order are critical to the decomposition results. Two case studies on rotating machinery fault diagnosis demonstrate the effectiveness and advantages of the proposed method.
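
    A minimal sketch of the objective used to tune TVF-EMD, combining an IMF's kurtosis with its correlation to the raw signal; the product weighting is an assumption, since the paper's exact formula is not reproduced here:

        import numpy as np
        from scipy.stats import kurtosis

        def weighted_kurtosis_index(imf, raw_signal):
            k = kurtosis(imf, fisher=False)               # kurtosis of the candidate IMF
            r = abs(np.corrcoef(imf, raw_signal)[0, 1])   # correlation with the raw signal
            return k * r                                  # assumed product weighting

        # GWO (or any global optimizer) would search over (bandwidth threshold,
        # B-spline order) to maximize this index over the resulting IMFs.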

  5. More efficient parameter estimates for factor analysis of ordinal variables by ridge generalized least squares.

    PubMed

    Yuan, Ke-Hai; Jiang, Ge; Cheng, Ying

    2017-11-01

    Data in psychology are often collected using Likert-type scales, and it has been shown that factor analysis of Likert-type data is better performed on the polychoric correlation matrix than on the product-moment covariance matrix, especially when the distributions of the observed variables are skewed. In theory, factor analysis of the polychoric correlation matrix is best conducted using generalized least squares with an asymptotically correct weight matrix (AGLS). However, simulation studies showed that both least squares (LS) and diagonally weighted least squares (DWLS) perform better than AGLS, and thus LS or DWLS is routinely used in practice. In either LS or DWLS, the associations among the polychoric correlation coefficients are completely ignored. To mend such a gap between statistical theory and empirical work, this paper proposes new methods, called ridge GLS, for factor analysis of ordinal data. Monte Carlo results show that, for a wide range of sample sizes, ridge GLS methods yield uniformly more accurate parameter estimates than existing methods (LS, DWLS, AGLS). A real-data example indicates that estimates by ridge GLS are 9-20% more efficient than those by existing methods. Rescaled and adjusted test statistics as well as sandwich-type standard errors following the ridge GLS methods also perform reasonably well. © 2017 The British Psychological Society.

  6. DGCA: A comprehensive R package for Differential Gene Correlation Analysis.

    PubMed

    McKenzie, Andrew T; Katsyv, Igor; Song, Won-Min; Wang, Minghui; Zhang, Bin

    2016-11-15

    Dissecting the regulatory relationships between genes is a critical step towards building accurate predictive models of biological systems. A powerful approach towards this end is to systematically study the differences in correlation between gene pairs in more than one distinct condition. In this study we develop an R package, DGCA (for Differential Gene Correlation Analysis), which offers a suite of tools for computing and analyzing differential correlations between gene pairs across multiple conditions. To minimize parametric assumptions, DGCA computes empirical p-values via permutation testing. To understand differential correlations at a systems level, DGCA performs higher-order analyses such as measuring the average difference in correlation and multiscale clustering analysis of differential correlation networks. Through a simulation study, we show that the straightforward z-score based method that DGCA employs significantly outperforms the existing alternative methods for calculating differential correlation. Application of DGCA to the TCGA RNA-seq data in breast cancer not only identifies key changes in the regulatory relationships between TP53 and PTEN and their target genes in the presence of inactivating mutations, but also reveals an immune-related differential correlation module that is specific to triple negative breast cancer (TNBC). DGCA is an R package for systematically assessing the difference in gene-gene regulatory relationships under different conditions. This user-friendly, effective, and comprehensive software tool will greatly facilitate the application of differential correlation analysis in many biological studies and thus will help identification of novel signaling pathways, biomarkers, and targets in complex biological systems and diseases.
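
    A minimal sketch of a z-score differential-correlation test with an empirical permutation p-value, in the spirit of the approach described above (the exact DGCA implementation may differ):

        import numpy as np

        def diff_corr_z(x1, y1, x2, y2):
            """z statistic for the difference of Fisher-transformed correlations."""
            r1 = np.corrcoef(x1, y1)[0, 1]
            r2 = np.corrcoef(x2, y2)[0, 1]
            se = np.sqrt(1.0 / (len(x1) - 3) + 1.0 / (len(x2) - 3))
            return (np.arctanh(r1) - np.arctanh(r2)) / se

        def permutation_pvalue(x, y, is_cond1, n_perm=1000, seed=0):
            """Empirical p-value: shuffle condition labels, recompute the statistic."""
            rng = np.random.default_rng(seed)
            obs = diff_corr_z(x[is_cond1], y[is_cond1], x[~is_cond1], y[~is_cond1])
            null = np.empty(n_perm)
            for i in range(n_perm):
                p = rng.permutation(is_cond1)             # permuted boolean labels
                null[i] = diff_corr_z(x[p], y[p], x[~p], y[~p])
            return np.mean(np.abs(null) >= np.abs(obs))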

  7. Benchmarking DFT and semi-empirical methods for a reliable and cost-efficient computational screening of benzofulvene derivatives as donor materials for small-molecule organic solar cells.

    PubMed

    Tortorella, Sara; Talamo, Maurizio Mastropasqua; Cardone, Antonio; Pastore, Mariachiara; De Angelis, Filippo

    2016-02-24

    A systematic computational investigation on the optical properties of a group of novel benzofulvene derivatives (Martinelli 2014 Org. Lett. 16 3424-7), proposed as possible donor materials in small molecule organic photovoltaic (smOPV) devices, is presented. A benchmark evaluation against experimental results on the accuracy of different exchange and correlation functionals and semi-empirical methods in predicting both reliable ground state equilibrium geometries and electronic absorption spectra is carried out. The benchmark of the geometry optimization level indicated that the best agreement with x-ray data is achieved by using the B3LYP functional. Concerning the optical gap prediction, we found that, among the employed functionals, MPW1K provides the most accurate excitation energies over the entire set of benzofulvenes. Similarly reliable results were also obtained for range-separated hybrid functionals (CAM-B3LYP and wB97XD) and for global hybrid methods incorporating a large amount of non-local exchange (M06-2X and M06-HF). Density functional theory (DFT) hybrids with a moderate (about 20-30%) extent of Hartree-Fock exchange (HFexc) (PBE0, B3LYP and M06) were also found to deliver HOMO-LUMO energy gaps which compare well with the experimental absorption maxima, thus representing a valuable alternative for a prompt and predictive estimation of the optical gap. The possibility of using completely semi-empirical approaches (AM1/ZINDO) is also discussed.

  8. Empirical testing of two models for staging antidepressant treatment resistance.

    PubMed

    Petersen, Timothy; Papakostas, George I; Posternak, Michael A; Kant, Alexis; Guyker, Wendy M; Iosifescu, Dan V; Yeung, Albert S; Nierenberg, Andrew A; Fava, Maurizio

    2005-08-01

    An increasing amount of attention has been paid to treatment-resistant depression. Although it is quite common to observe nonremission to not just one but several consecutive antidepressant treatments during a major depressive episode, the relationship between the likelihood of achieving remission and one's degree of resistance is not clearly known at this time. This study was undertaken to empirically test 2 recent models for staging treatment resistance. Psychiatrists from 2 academic sites reviewed charts of patients on their caseloads. The Clinical Global Impressions-Severity (CGI-S) and Clinical Global Impressions-Improvement (CGI-I) scales were used to measure severity of depression and response to treatment, and 2 treatment-resistance staging scores were computed for each patient using the Massachusetts General Hospital staging method (MGH-S) and the Thase and Rush staging method (TR-S). Of the 115 patient records reviewed, 58 (49.6%) patients remitted at some point during treatment. There was a significant positive correlation between the 2 staging scores, and logistic regression results indicated that greater MGH-S scores, but not TR-S scores, predicted nonremission. This study suggests that the hierarchical manner in which the field has typically gauged levels of treatment resistance may not be strongly supported by empirical evidence. It also suggests that the MGH staging model may offer some advantages over the staging method by Thase and Rush, as it generates a continuous score that considers both the number of trials and the intensity/optimization of each trial.

  9. Empirical study of recent Chinese stock market

    NASA Astrophysics Data System (ADS)

    Jiang, J.; Li, W.; Cai, X.; Wang, Qiuping A.

    2009-05-01

    We investigate the statistical properties of empirical data taken from the Chinese stock market during the period from January 2006 to July 2007. Using the methods of detrended fluctuation analysis (DFA) and correlation coefficients, we find evidence of strong correlations among different stock types, the stock index, stock volume turnover, A share (B share) seat number, and GDP per capita. In addition, we study the behavior of "volatility", here defined as the difference between the new account numbers for two consecutive days. It is shown that the empirical power-law of the number of aftershock events exceeding a selected threshold is analogous to the Omori law originally observed in geophysics. Furthermore, we find that the cumulative distributions of stock return, trade volume and trade number are all exponential-like, which does not belong to the universality class of such distributions found by Xavier Gabaix et al. [Xavier Gabaix, Parameswaran Gopikrishnan, Vasiliki Plerou, H. Eugene Stanley, Nature, 423 (2003)] for major western markets. From this comparison we conclude that, regardless of whether a stock market is developed or emerging, the "cubic law of returns" is valid only for long-term absolute returns, while short-term returns are distributed exponential-like. Specifically, the distributions of both trade volume and trade number display distinct decaying behaviors in two separate regimes. Lastly, the scaling relation between the dispersion and the mean monthly trade value is analyzed for each administrative area in China.
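
    For reference, a minimal sketch of the DFA procedure used above: integrate the series, detrend it in fixed-size windows, and read the scaling exponent from the log-log slope of the fluctuation function (window sizes are illustrative):

        import numpy as np

        def dfa_exponent(x, scales=(4, 8, 16, 32, 64, 128)):
            y = np.cumsum(x - np.mean(x))                 # profile (integrated series)
            flucts = []
            for n in scales:
                f2 = []
                t = np.arange(n)
                for w in range(len(y) // n):
                    seg = y[w * n:(w + 1) * n]
                    trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrend
                    f2.append(np.mean((seg - trend) ** 2))
                flucts.append(np.sqrt(np.mean(f2)))
            slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
            return slope          # ~0.5 for uncorrelated noise, >0.5 for persistence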

  10. Tensile and shear loading of four fcc high-entropy alloys: A first-principles study

    NASA Astrophysics Data System (ADS)

    Li, Xiaoqing; Schönecker, Stephan; Li, Wei; Varga, Lajos K.; Irving, Douglas L.; Vitos, Levente

    2018-03-01

    Ab initio density-functional calculations are used to investigate the response of four face-centered-cubic (fcc) high-entropy alloys (HEAs) to tensile and shear loading. The ideal tensile and shear strengths (ITS and ISS) of the HEAs are studied by employing first-principles alloy theory formulated within the exact muffin-tin orbital method in combination with the coherent-potential approximation. We benchmark the computational accuracy against literature data by studying the ITS under uniaxial [110] tensile loading and the ISS for the [112̄](111) shear deformation of pure fcc Ni and Al. For the HEAs, we uncover the alloying effect on the ITS and ISS. Under shear loading, relaxation reduces the ISS by ~50% for all considered HEAs. We demonstrate that the dimensionless tensile and shear strengths are significantly overestimated by two widely used empirical models in comparison with our ab initio calculations. In addition, our predicted relationship between the dimensionless shear strength and the shear instability is in line with the modified Frenkel model. Using the computed ISS, we derive the half-width of the dislocation core for the present HEAs. Employing the ratio of ITS to ISS, we discuss the intrinsic ductility of the HEAs and compare it with a common empirical criterion. We observe a strong linear correlation between the shear instability and the ratio of ITS to ISS, whereas only a weak positive correlation is found in the case of the empirical criterion.

  11. The Problem of Empirical Redundancy of Constructs in Organizational Research: An Empirical Investigation

    ERIC Educational Resources Information Center

    Le, Huy; Schmidt, Frank L.; Harter, James K.; Lauver, Kristy J.

    2010-01-01

    Construct empirical redundancy may be a major problem in organizational research today. In this paper, we explain and empirically illustrate a method for investigating this potential problem. We applied the method to examine the empirical redundancy of job satisfaction (JS) and organizational commitment (OC), two well-established organizational…

  12. Effect of two sweating simulation methods on clothing evaporative resistance in a so-called isothermal condition.

    PubMed

    Lu, Yehu; Wang, Faming; Peng, Hui

    2016-07-01

    The effect of sweating simulation methods on clothing evaporative resistance was investigated in a so-called isothermal condition (T_manikin = T_a = T_r). Two sweating simulation methods, namely the pre-wetted fabric "skin" (PW) and the water-supplied sweating (WS), were applied to determine clothing evaporative resistance on a "Newton" thermal manikin. Results indicated that the clothing evaporative resistance determined by the WS method was significantly lower than that measured by the PW method. In addition, the evaporative resistances measured by the two methods were correlated and exhibited a linear relationship. Validation experiments demonstrated that the empirical regression equation showed highly acceptable estimations. The study contributes to improving the accuracy of measurements of clothing evaporative resistance by means of a sweating manikin.

  13. A new multivariate empirical mode decomposition method for improving the performance of SSVEP-based brain-computer interface

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Feng; Atal, Kiran; Xie, Sheng-Quan; Liu, Quan

    2017-08-01

    Objective. Accurate and efficient detection of steady-state visual evoked potentials (SSVEP) in the electroencephalogram (EEG) is essential for the related brain-computer interface (BCI) applications. Approach. Although canonical correlation analysis (CCA) has been applied extensively and successfully to SSVEP recognition, the spontaneous EEG activities and artifacts that often occur during data recording can deteriorate the recognition performance. Therefore, it is meaningful to extract a few frequency sub-bands of interest to avoid or reduce the influence of unrelated brain activity and artifacts. This paper presents an improved method to detect the frequency component associated with SSVEP using multivariate empirical mode decomposition (MEMD) and CCA (MEMD-CCA). EEG signals from nine healthy volunteers were recorded to evaluate the performance of the proposed method for SSVEP recognition. Main results. We compared our method with CCA and the temporally local multivariate synchronization index (TMSI). The results suggest that MEMD-CCA achieved significantly higher accuracy than standard CCA and TMSI. It gave improvements of 1.34%, 3.11%, 3.33%, 10.45%, 15.78%, 18.45%, 15.00% and 14.22% on average over CCA at time windows from 0.5 s to 5 s, and of 0.55%, 1.56%, 7.78%, 14.67%, 13.67%, 7.33% and 7.78% over TMSI from 0.75 s to 5 s. The method outperformed filter-based decomposition (FB), empirical mode decomposition (EMD) and wavelet decomposition (WT) based CCA for SSVEP recognition. Significance. The results demonstrate the ability of our proposed MEMD-CCA to improve the performance of SSVEP-based BCI.
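
    A minimal sketch of the CCA stage shared by standard CCA and MEMD-CCA SSVEP detection, correlating multichannel EEG (after any sub-band selection) with sine/cosine references at a candidate stimulus frequency; the harmonic count is illustrative:

        import numpy as np
        from sklearn.cross_decomposition import CCA

        def ssvep_score(eeg, freq, fs, n_harmonics=2):
            """Max canonical correlation of EEG (samples x channels) with references."""
            t = np.arange(eeg.shape[0]) / fs
            refs = np.column_stack([f(2 * np.pi * h * freq * t)
                                    for h in range(1, n_harmonics + 1)
                                    for f in (np.sin, np.cos)])
            cca = CCA(n_components=1).fit(eeg, refs)
            u, v = cca.transform(eeg, refs)
            return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

        # Detected frequency = argmax of ssvep_score over the candidate frequencies.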

  14. Estimating trends in the global mean temperature record

    NASA Astrophysics Data System (ADS)

    Poppick, Andrew; Moyer, Elisabeth J.; Stein, Michael L.

    2017-06-01

    Given uncertainties in physical theory and numerical climate simulations, the historical temperature record is often used as a source of empirical information about climate change. Many historical trend analyses appear to de-emphasize physical and statistical assumptions: examples include regression models that treat time rather than radiative forcing as the relevant covariate, and time series methods that account for internal variability in nonparametric rather than parametric ways. However, given a limited data record and the presence of internal variability, estimating radiatively forced temperature trends in the historical record necessarily requires some assumptions. Ostensibly empirical methods can also involve an inherent conflict in assumptions: they require data records that are short enough for naive trend models to be applicable, but long enough for long-timescale internal variability to be accounted for. In the context of global mean temperatures, empirical methods that appear to de-emphasize assumptions can therefore produce misleading inferences, because the trend over the twentieth century is complex and the scale of temporal correlation is long relative to the length of the data record. We illustrate here how a simple but physically motivated trend model can provide better-fitting and more broadly applicable trend estimates and can allow for a wider array of questions to be addressed. In particular, the model allows one to distinguish, within a single statistical framework, between uncertainties in the shorter-term vs. longer-term response to radiative forcing, with implications not only on historical trends but also on uncertainties in future projections. We also investigate the consequence on inferred uncertainties of the choice of a statistical description of internal variability. While nonparametric methods may seem to avoid making explicit assumptions, we demonstrate how even misspecified parametric statistical methods, if attuned to the important characteristics of internal variability, can result in more accurate uncertainty statements about trends.
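
    To make the contrast concrete, here is a minimal sketch of a parametric trend fit of the kind argued for above: regress temperature on radiative forcing (rather than time) while modeling internal variability as an AR process. The AR order and the covariate choice are illustrative assumptions, not the authors' exact model:

        import numpy as np
        import statsmodels.api as sm

        def forced_trend_fit(temperature, forcing, ar_order=2):
            X = sm.add_constant(forcing)                   # forcing as the covariate
            model = sm.GLSAR(temperature, X, rho=ar_order) # AR(ar_order) errors
            return model.iterative_fit(maxiter=10)         # alternate OLS and rho updates

        # res = forced_trend_fit(T_obs, F_hist); res.params gives the forced response
        # per unit forcing, with uncertainty reflecting the fitted serial correlation.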

  15. Application of a net-based baseline correction scheme to strong-motion records of the 2011 Mw 9.0 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Tu, Rui; Wang, Rongjiang; Zhang, Yong; Walter, Thomas R.

    2014-06-01

    The description of static displacements associated with earthquakes is traditionally achieved using GPS, EDM or InSAR data. In addition, displacement histories can be derived from strong-motion records, complementing geodetic networks with high-sampling-rate measurements and allowing a better physical understanding of earthquake processes. Strong-motion records require a correction procedure appropriate for baseline shifts that may be caused by rotational motion, tilting and other instrumental effects. Common methods use an empirical bilinear correction on the velocity seismograms integrated from the strong-motion records. In this study, we overcome the weaknesses of an empirically based bilinear baseline correction scheme by using a net-based criterion to select the timing parameters. This idea is based on the physical principle that low-frequency seismic waveforms at neighbouring stations are coherent if the interstation distance is much smaller than the distance to the seismic source. For a dense strong-motion network, it is therefore plausible to select the timing parameters so that the correlation coefficient between the velocity seismograms of two neighbouring stations is maximized after the baseline correction. We applied this new concept to the KiK-Net and K-Net strong-motion data available for the 2011 Mw 9.0 Tohoku earthquake. We compared the derived coseismic static displacement with high-quality GPS data, and with the results obtained using empirical methods. The results show that the proposed net-based approach is feasible and more robust than the individual empirical approaches. Outliers caused by unknown problems in the measurement system can be easily detected and quantified.

  16. The Multidimensional Efficiency of Pension System: Definition and Measurement in Cross-Country Studies.

    PubMed

    Chybalski, Filip

    The existing literature on pension system efficiency usually addresses the choice between different theoretical models, or concerns one or a few empirical pension systems. In this paper a quite different approach to the measurement of pension system efficiency is proposed. It is dedicated mainly to cross-country studies of empirical pension systems; however, it may also be employed in the analysis of a given pension system on the basis of time series. I identify four dimensions of pension system efficiency, referring to: GDP distribution, adequacy of pensions, influence on the labour market, and administrative costs. Consequently, I propose four sets of static and one set of dynamic efficiency indicators. In the empirical part of the paper, I use Spearman's rank correlation coefficient and cluster analysis to verify the proposed method on statistical data covering 28 European countries in the years 2007-2011. I show that the method works and enables comparisons as well as clustering of the analyzed pension systems. The study also delivers some interesting empirical findings. The main goal of pension systems appears to be poverty alleviation, since the efficiency of ensuring protection against poverty, as well as the efficiency of reducing poverty, is very resistant to the efficiency of GDP distribution. The opposite holds for the efficiency of consumption smoothing: this is generally sensitive to the efficiency of GDP distribution, and its dynamics are sensitive to the dynamics of GDP-distribution efficiency. The results indicate the Norwegian and Icelandic pension systems to be the most efficient in the analyzed group.

  17. Uncertainties in scaling factors for ab initio vibrational zero-point energies

    NASA Astrophysics Data System (ADS)

    Irikura, Karl K.; Johnson, Russell D.; Kacker, Raghu N.; Kessel, Rüdiger

    2009-03-01

    Vibrational zero-point energies (ZPEs) determined from ab initio calculations are often scaled by empirical factors. An empirical scaling factor partially compensates for the effects arising from vibrational anharmonicity and incomplete treatment of electron correlation. These effects are not random but are systematic. We report scaling factors for 32 combinations of theory and basis set, intended for predicting ZPEs from computed harmonic frequencies. An empirical scaling factor carries uncertainty. We quantify and report, for the first time, the uncertainties associated with scaling factors for ZPE. The uncertainties are larger than generally acknowledged; the scaling factors have only two significant digits. For example, the scaling factor for B3LYP/6-31G(d) is 0.9757±0.0224 (standard uncertainty). The uncertainties in the scaling factors lead to corresponding uncertainties in predicted ZPEs. The proposed method for quantifying the uncertainties associated with scaling factors is based upon the Guide to the Expression of Uncertainty in Measurement, published by the International Organization for Standardization. We also present a new reference set of 60 diatomic and 15 polyatomic "experimental" ZPEs that includes estimated uncertainties.

  18. An efficient reliable method to estimate the vaporization enthalpy of pure substances according to the normal boiling temperature and critical properties

    PubMed Central

    Mehmandoust, Babak; Sanjari, Ehsan; Vatani, Mostafa

    2013-01-01

    The heat of vaporization of a pure substance at its normal boiling temperature is a very important property in many chemical processes. In this work, a new empirical method was developed to predict the vaporization enthalpy of pure substances. This equation is a function of normal boiling temperature, critical temperature, and critical pressure. The presented model is simple to use and provides an improvement over the existing equations for 452 pure substances over a wide boiling range. The results showed that the proposed correlation is more accurate than the literature methods for pure substances over a wide boiling range (20.3-722 K). PMID:25685493

  19. An efficient reliable method to estimate the vaporization enthalpy of pure substances according to the normal boiling temperature and critical properties.

    PubMed

    Mehmandoust, Babak; Sanjari, Ehsan; Vatani, Mostafa

    2014-03-01

    The heat of vaporization of a pure substance at its normal boiling temperature is a very important property in many chemical processes. In this work, a new empirical method was developed to predict the vaporization enthalpy of pure substances. This equation is a function of normal boiling temperature, critical temperature, and critical pressure. The presented model is simple to use and provides an improvement over the existing equations for 452 pure substances over a wide boiling range. The results showed that the proposed correlation is more accurate than the literature methods for pure substances over a wide boiling range (20.3-722 K).

  20. Importance of small-degree nodes in assortative networks with degree-weight correlations

    NASA Astrophysics Data System (ADS)

    Ma, Sijuan; Feng, Ling; Monterola, Christopher Pineda; Lai, Choy Heng

    2017-10-01

    It has been known that assortative network structure plays an important role in spreading dynamics on unweighted networks. Yet its influence on weighted networks is not clear, in particular when weight is strongly correlated with the degrees of the nodes, as we empirically observed in Twitter. Here we use the self-consistent probability method and a revised nonperturbative heterogeneous mean-field theory method to investigate this influence on both susceptible-infective-recovered (SIR) and susceptible-infective-susceptible (SIS) spreading dynamics. Both our simulation and theoretical results show that while the critical threshold is not significantly influenced by the assortativity, the prevalence in the supercritical regime shows a crossover under different degree-weight correlations. In particular, unlike the case of random mixing networks, in assortative networks the negative degree-weight correlation leads to higher prevalence beyond the critical transmissivity than the positive correlation does. In addition, the previously observed inhibition effect of assortative structure on spreading velocity is not apparent in negatively degree-weight correlated networks, while it is enhanced in positively correlated ones. Detailed investigation into the degree distribution of the infected nodes reveals that small-degree nodes play essential roles in the supercritical phase of both SIR and SIS spreading. Our results have direct implications for understanding viral information spreading over online social networks and epidemic spreading over contact networks.

  1. Empirical correlations of the performance of vapor-anode PX-series AMTEC cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, L.; Merrill, J.M.; Mayberry, C.

    Power systems based on AMTEC technology will be used for future NASA missions, including a Pluto-Express (PX) or Europa mission planned for approximately year 2004. AMTEC technology may also be used as an alternative to photovoltaic-based power systems for future Air Force missions. An extensive development program of Alkali-Metal Thermal-to-Electric Conversion (AMTEC) technology has been underway at the Vehicle Technologies Branch of the Air Force Research Laboratory (AFRL) in Albuquerque, New Mexico since 1992. Under this program, numerical modeling and experimental investigations of the performance of the various multi-BASE tube, vapor-anode AMTEC cells have been and are being performed. Vacuum testing of AMTEC cells at AFRL determines the effects of changing the hot and cold end temperatures, T_hot and T_cold, and the applied external load, R_ext, on the cell electric power output, current-voltage characteristics, and conversion efficiency. Test results have traditionally been used to provide feedback to cell designers, and to validate numerical models. The current work utilizes the test data to develop empirical correlations for cell output performance under various working conditions. Because the empirical correlations are developed directly from the experimental data, uncertainties arising from material properties that must be used in numerical modeling can be avoided. Empirical correlations of recent vapor-anode PX-series AMTEC cells have been developed. Based on AMTEC theory and the experimental data, the cell output power (as well as voltage and current) was correlated as a function of three parameters (T_hot, T_cold, and R_ext) for a given cell. Correlations were developed for different cells (PX-3C, PX-3A, PX-G3, and PX-5A), and were in good agreement with experimental data for these cells. Use of these correlations can greatly reduce the testing required to determine the electrical performance of a given type of AMTEC cell over a wide range of operating conditions.

  2. Correlated Noise: How it Breaks NMF, and What to Do About It.

    PubMed

    Plis, Sergey M; Potluru, Vamsi K; Lane, Terran; Calhoun, Vince D

    2011-01-12

    Non-negative matrix factorization (NMF) is a problem of decomposing multivariate data into a set of features and their corresponding activations. When applied to experimental data, NMF has to cope with noise, which is often highly correlated. We show that correlated noise can break the Donoho and Stodden separability conditions of a dataset and a regular NMF algorithm will fail to decompose it, even when given freedom to be able to represent the noise as a separate feature. To cope with this issue, we present an algorithm for NMF with a generalized least squares objective function (glsNMF) and derive multiplicative updates for the method together with proving their convergence. The new algorithm successfully recovers the true representation from the noisy data. Robust performance can make glsNMF a valuable tool for analyzing empirical data.

  3. Correlated Noise: How it Breaks NMF, and What to Do About It

    PubMed Central

    Plis, Sergey M.; Potluru, Vamsi K.; Lane, Terran; Calhoun, Vince D.

    2010-01-01

    Non-negative matrix factorization (NMF) is a problem of decomposing multivariate data into a set of features and their corresponding activations. When applied to experimental data, NMF has to cope with noise, which is often highly correlated. We show that correlated noise can break the Donoho and Stodden separability conditions of a dataset and a regular NMF algorithm will fail to decompose it, even when given freedom to be able to represent the noise as a separate feature. To cope with this issue, we present an algorithm for NMF with a generalized least squares objective function (glsNMF) and derive multiplicative updates for the method together with proving their convergence. The new algorithm successfully recovers the true representation from the noisy data. Robust performance can make glsNMF a valuable tool for analyzing empirical data. PMID:23750288

  4. Spatio-temporal Reconstruction of Neural Sources Using Indirect Dominant Mode Rejection.

    PubMed

    Jafadideh, Alireza Talesh; Asl, Babak Mohammadzadeh

    2018-04-27

    Adaptive minimum variance based beamformers (MVB) have been successfully applied to magnetoencephalogram (MEG) and electroencephalogram (EEG) data to localize brain activities. However, the performance of these beamformers degrades in situations where correlated or interference sources exist. To overcome this problem, we propose the application of the indirect dominant mode rejection (iDMR) beamformer to brain source localization. By modifying the measurement covariance matrix, this method makes MVB applicable to source localization in the presence of correlated and interference sources. Numerical results on both EEG and MEG data demonstrate that the presented approach accurately reconstructs the time courses of active sources and localizes those sources with high spatial resolution. In addition, results on real AEF data show the good performance of iDMR in empirical situations. Hence, iDMR can be reliably used for brain source localization, especially when there are correlated and interference sources.

  5. Aggregation of carbon dioxide sequestration storage assessment units

    USGS Publications Warehouse

    Blondes, Madalyn S.; Schuenemeyer, John H.; Olea, Ricardo A.; Drew, Lawrence J.

    2013-01-01

    The U.S. Geological Survey is currently conducting a national assessment of carbon dioxide (CO2) storage resources, mandated by the Energy Independence and Security Act of 2007. Pre-emission capture and storage of CO2 in subsurface saline formations is one potential method to reduce greenhouse gas emissions and the negative impact of global climate change. Like many large-scale resource assessments, the area under investigation is split into smaller, more manageable storage assessment units (SAUs), which must be aggregated with correctly propagated uncertainty to the basin, regional, and national scales. The aggregation methodology requires two types of data: marginal probability distributions of storage resource for each SAU, and a correlation matrix obtained by expert elicitation describing interdependencies between pairs of SAUs. Dependencies arise because geologic analogs, assessment methods, and assessors often overlap. The correlation matrix is used to induce rank correlation, using a Cholesky decomposition, among the empirical marginal distributions representing individually assessed SAUs. This manuscript presents a probabilistic aggregation method tailored to the correlations and dependencies inherent to a CO2 storage assessment. Aggregation results must be presented at the basin, regional, and national scales. A single stage approach, in which one large correlation matrix is defined and subsets are used for different scales, is compared to a multiple stage approach, in which new correlation matrices are created to aggregate intermediate results. Although the single-stage approach requires determination of significantly more correlation coefficients, it captures geologic dependencies among similar units in different basins and it is less sensitive to fluctuations in low correlation coefficients than the multiple stage approach. Thus, subsets of one single-stage correlation matrix are used to aggregate to basin, regional, and national scales.
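
    A minimal sketch of inducing a target rank correlation across independently sampled marginal distributions via a Cholesky factor (an Iman-Conover-style reordering), consistent with the aggregation step described above; the correlation matrix in the usage note is illustrative:

        import numpy as np

        def induce_rank_correlation(samples, target_corr, seed=0):
            """samples: (n_draws, n_SAUs) independent draws from the marginals."""
            n, k = samples.shape
            rng = np.random.default_rng(seed)
            L = np.linalg.cholesky(target_corr)
            scores = rng.standard_normal((n, k)) @ L.T    # correlated normal scores
            out = np.empty_like(samples)
            for j in range(k):
                ranks = np.argsort(np.argsort(scores[:, j]))  # rank of each score
                out[:, j] = np.sort(samples[:, j])[ranks]     # marginal reordered to match
            return out

        # corr = np.array([[1.0, 0.6], [0.6, 1.0]])
        # aggregated = induce_rank_correlation(draws, corr).sum(axis=1)
        # (sum the correlated draws per realization, then summarize the distribution)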

  6. Local Linear Regression for Data with AR Errors.

    PubMed

    Li, Runze; Li, Yan

    2009-07-01

    In many statistical applications, data are collected over time, and they are likely correlated. In this paper, we investigate how to incorporate this correlation information into local linear regression. Under the assumption that the error process is an autoregressive process, a new estimation procedure is proposed for nonparametric regression by using the local linear regression method and profile least squares techniques. We further propose the SCAD-penalized profile least squares method to determine the order of the autoregressive process. Extensive Monte Carlo simulation studies are conducted to examine the finite sample performance of the proposed procedure, and to compare it with the existing one. In our empirical studies, the newly proposed procedures dramatically improve the accuracy of naive local linear regression with a working-independence error structure. We illustrate the proposed methodology by an analysis of a real data set.

  7. Absolute Measurement of the Refractive Index of Water by a Mode-Locked Laser at 518 nm.

    PubMed

    Meng, Zhaopeng; Zhai, Xiaoyu; Wei, Jianguo; Wang, Zhiyang; Wu, Hanzhong

    2018-04-09

    In this paper, we demonstrate a method using a frequency comb, which can precisely measure the refractive index of water. We have developed a simple system, in which a Michelson interferometer is placed into a quartz-glass container with a low expansion coefficient, and for which compensation of the thermal expansion of the water container is not required. By scanning a mirror on a moving stage, a pair of cross-correlation patterns can be generated. We can obtain the length information via these cross-correlation patterns, with or without water in the container. The refractive index of water can be measured from the resulting lengths. Long-term experimental results show that our method can measure the refractive index of water with a high degree of accuracy: a measurement uncertainty at the 10⁻⁵ level has been achieved, compared with the values calculated by the empirical formula.

  8. Detection of Upscale-Crop and Partial Manipulation in Surveillance Video Based on Sensor Pattern Noise

    PubMed Central

    Hyun, Dai-Kyung; Ryu, Seung-Jin; Lee, Hae-Yeoun; Lee, Heung-Kyu

    2013-01-01

    In many court cases, surveillance videos are used as significant court evidence. As these surveillance videos can easily be forged, it may cause serious social issues, such as convicting an innocent person. Nevertheless, there is little research being done on forgery of surveillance videos. This paper proposes a forensic technique to detect forgeries of surveillance video based on sensor pattern noise (SPN). We exploit the scaling invariance of the minimum average correlation energy Mellin radial harmonic (MACE-MRH) correlation filter to reliably unveil traces of upscaling in videos. By excluding the high-frequency components of the investigated video and adaptively choosing the size of the local search window, the proposed method effectively localizes partially manipulated regions. Empirical evidence from a large database of test videos, including RGB (Red, Green, Blue)/infrared video, dynamic-/static-scene video and compressed video, indicates the superior performance of the proposed method. PMID:24051524

  9. Absolute Measurement of the Refractive Index of Water by a Mode-Locked Laser at 518 nm

    PubMed Central

    Meng, Zhaopeng; Zhai, Xiaoyu; Wei, Jianguo; Wang, Zhiyang; Wu, Hanzhong

    2018-01-01

    In this paper, we demonstrate a method using a frequency comb, which can precisely measure the refractive index of water. We have developed a simple system, in which a Michelson interferometer is placed into a quartz-glass container with a low expansion coefficient, and for which compensation of the thermal expansion of the water container is not required. By scanning a mirror on a moving stage, a pair of cross-correlation patterns can be generated. We can obtain the length information via these cross-correlation patterns, with or without water in the container. The refractive index of water can be measured from the resulting lengths. Long-term experimental results show that our method can measure the refractive index of water with a high degree of accuracy: a measurement uncertainty at the 10⁻⁵ level has been achieved, compared with the values calculated by the empirical formula. PMID:29642518

  10. Prediction of the production of nitrogen oxide (NOx) in turbojet engines

    NASA Astrophysics Data System (ADS)

    Tsague, Louis; Tsogo, Joseph; Tatietse, Thomas Tamo

    Gaseous nitrogen oxides (NO + NO2 = NOx) are known atmospheric trace constituents. These gases remain a big concern despite advances in low-NOx emission technology, because they play a critical role in regulating the oxidizing capacity of the atmosphere according to Crutzen [1995. My life with O3, NOx and other YZOxs; Nobel Lecture; Chemistry 1995; pp. 195; December 8, 1995]. Aircraft emissions of nitrogen oxides (NOx) are regulated by the International Civil Aviation Organization (ICAO). Predicting NOx emissions in turbojet engines by combining combustion operational data produced results showing correlation between the analytical and empirical values: there is close similarity between the calculated emission index and experimental data. When evaluated against the 2124 experimental data points from 11 gas turbine engines, the correlation shows improved accuracy over a previous semi-empirical correlation approach proposed by Pearce et al. [1993. The prediction of thermal NOx in gas turbine exhausts. Eleventh International Symposium on Air Breathing Engines, Tokyo, 1993, pp. 6-9]. The new method we propose predicts the production of NOx with far greater accuracy than previous methods. Since a turbojet engine works in an atmosphere where temperature, pressure and humidity change frequently, a correction factor is developed from standard atmospheric laws and correlations taken from the scientific literature [Swartwelder, M., 2000. Aerospace engineering 410 Term Project performance analysis, November 17, 2000, pp. 2-5; Reed, J.A. Java Gas Turbine Simulator Documentation. pp. 4-5]. The new correction factor is validated with experimental observations from 19 turbojet engines cruising at altitudes of 9 and 13 km given in the ICAO repertory [Middleton, D., 1992. Appendix K (FAA/SETA). Section 1: Boeing Method Two Indices, 1992, pp. 2-3]. This correction factor will enable the prediction of NOx emissions of turbojet engines at cruising speeds. The ICAO database [Goehlich, R.A., 2000. Investigation into the applicability of pollutant emission models for computer aided preliminary aircraft design, Book number 175654, 4.2.2000, pp. 57-79] can now be completed for whole-mission NOx emissions using the approach we propose.

  11. Adaptive Correlation Space Adjusted Open-Loop Tracking Approach for Vehicle Positioning with Global Navigation Satellite System in Urban Areas

    PubMed Central

    Ruan, Hang; Li, Jian; Zhang, Lei; Long, Teng

    2015-01-01

    For vehicle positioning with the Global Navigation Satellite System (GNSS) in urban areas, open-loop tracking shows better performance because of its high sensitivity and superior robustness against multipath. However, no previous study has focused on the effects of the code search grid size on the code phase measurement accuracy of open-loop tracking. Traditional open-loop tracking methods are performed by batch correlators with a fixed correlation space. The code search grid size, which is the correlation space, is a constant empirical value, and the code phase measurement accuracy is largely degraded by an improper grid size, especially when the signal carrier-to-noise density ratio (C/N0) varies. In this study, the Adaptive Correlation Space Adjusted Open-Loop Tracking Approach (ACSA-OLTA) is proposed to improve the pseudorange accuracy, which depends on the code phase measurement. In ACSA-OLTA, the correlation space is adjusted according to the signal C/N0. The novel Equivalent Weighted Pseudo Range Error (EWPRE) is introduced to obtain the optimal code search grid sizes for different C/N0. The code phase measurement errors of different measurement calculation methods are analyzed for the first time. The measurement calculation strategy of ACSA-OLTA is derived from this analysis to further improve accuracy while reducing correlator consumption. Performance simulations and real tests confirm that the pseudorange and positioning accuracy of ACSA-OLTA are better than those of traditional open-loop tracking methods in typical urban scenarios. PMID:26343683

  12. Neural activity in relation to empirically derived personality syndromes in depression using a psychodynamic fMRI paradigm

    PubMed Central

    Taubner, Svenja; Wiswede, Daniel; Kessler, Henrik

    2013-01-01

    Objective: The heterogeneity between patients with depression cannot be captured adequately with existing descriptive systems of diagnosis and neurobiological models of depression. Furthermore, considering the highly individual nature of depression, the application of general stimuli in past research efforts may not capture the essence of the disorder. This study aims to identify subtypes of depression by using empirically derived personality syndromes, and to explore neural correlates of the derived personality syndromes. Materials and Methods: In the present exploratory study, an individually tailored and psychodynamically based functional magnetic resonance imaging paradigm using dysfunctional relationship patterns was presented to 20 chronically depressed patients. Results from the Shedler–Westen Assessment Procedure (SWAP-200) were analyzed by Q-factor analysis to identify clinically relevant subgroups of depression and related brain activation. Results: The principle component analysis of SWAP-200 items from all 20 patients lead to a two-factor solution: “Depressive Personality” and “Emotional-Hostile-Externalizing Personality.” Both factors were used in a whole-brain correlational analysis but only the second factor yielded significant positive correlations in four regions: a large cluster in the right orbitofrontal cortex (OFC), the left ventral striatum, a small cluster in the left temporal pole, and another small cluster in the right middle frontal gyrus. Discussion: The degree to which patients with depression score high on the factor “Emotional-Hostile-Externalizing Personality” correlated with relatively higher activity in three key areas involved in emotion processing, evaluation of reward/punishment, negative cognitions, depressive pathology, and social knowledge (OFC, ventral striatum, temporal pole). Results may contribute to an alternative description of neural correlates of depression showing differential brain activation dependent on the extent of specific personality syndromes in depression. PMID:24363644

  13. A quantitative estimate of schema abnormality in socially anxious and non-anxious individuals.

    PubMed

    Wenzel, Amy; Brendle, Jennifer R; Kerr, Patrick L; Purath, Donna; Ferraro, F Richard

    2007-01-01

    Although cognitive theories of anxiety suggest that anxious individuals are characterized by abnormal threat-relevant schemas, few empirical studies have estimated the nature of these cognitive structures using quantitative methods that lend themselves to inferential statistical analysis. In the present study, socially anxious (n = 55) and non-anxious (n = 62) participants completed 3 Q-Sort tasks to assess their knowledge of events that commonly occur in social or evaluative scenarios. Participants either sorted events according to how commonly they personally believe the events occur (i.e. "self" condition), or to how commonly they estimate that most people believe they occur (i.e. "other" condition). Participants' individual Q-Sorts were correlated with mean sorts obtained from a normative sample to obtain an estimate of schema abnormality, with lower correlations representing greater levels of abnormality. Relative to non-anxious participants, socially anxious participants' sorts were less strongly associated with sorts of the normative sample, particularly in the "self" condition, although secondary analyses suggest that some significant results might be explained, in part, by depression and experience with the scenarios. These results provide empirical support for the theoretical notion that threat-relevant self-schemas of anxious individuals are characterized by some degree of abnormality.

  14. Analytical determination of propeller performance degradation due to ice accretion

    NASA Technical Reports Server (NTRS)

    Miller, T. L.

    1986-01-01

    A computer code has been developed which is capable of computing propeller performance for clean, glaze, or rime iced propeller configurations, thereby providing a mechanism for determining the degree of performance degradation which results from a given icing encounter. The inviscid, incompressible flow field at each specified propeller radial location is first computed using the Theodorsen transformation method of conformal mapping. A droplet trajectory computation then calculates droplet impingement points and airfoil collection efficiency for each radial location, at which point several user-selectable empirical correlations are available for determining the aerodynamic penalties which arise due to the ice accretion. Propeller performance is finally computed using strip analysis for either the clean or iced propeller. In the iced mode, the differential thrust and torque coefficient equations are modified by the drag and lift coefficient increments due to ice to obtain the appropriate iced values. Comparison with available experimental propeller icing data shows good agreement in several cases. The code's capability to properly predict iced thrust coefficient, power coefficient, and propeller efficiency is shown to be dependent on the choice of empirical correlation employed as well as proper specification of radial icing extent.

  15. Agent-Based Model with Asymmetric Trading and Herding for Complex Financial Systems

    PubMed Central

    Chen, Jun-Jie; Zheng, Bo; Tan, Lei

    2013-01-01

    Background For complex financial systems, the negative and positive return-volatility correlations, i.e., the so-called leverage and anti-leverage effects, are particularly important for the understanding of the price dynamics. However, the microscopic origination of the leverage and anti-leverage effects is still not understood, and how to produce these effects in agent-based modeling remains open. On the other hand, in constructing microscopic models, it is a promising conception to determine model parameters from empirical data rather than from statistical fitting of the results. Methods To study the microscopic origination of the return-volatility correlation in financial systems, we take into account the individual and collective behaviors of investors in real markets, and construct an agent-based model. The agents are linked with each other and trade in groups, and particularly, two novel microscopic mechanisms, i.e., investors’ asymmetric trading and herding in bull and bear markets, are introduced. Further, we propose effective methods to determine the key parameters in our model from historical market data. Results With the model parameters determined for six representative stock-market indices in the world, respectively, we obtain the corresponding leverage or anti-leverage effect from the simulation, and the effect is in agreement with the empirical one on amplitude and duration. At the same time, our model produces other features of the real markets, such as the fat-tail distribution of returns and the long-term correlation of volatilities. Conclusions We reveal that for the leverage and anti-leverage effects, both the investors’ asymmetric trading and herding are essential generation mechanisms. Among the six markets, however, the investors’ trading is approximately symmetric for the five markets which exhibit the leverage effect, thus contributing very little. These two microscopic mechanisms and the methods for the determination of the key parameters can be applied to other complex systems with similar asymmetries. PMID:24278146
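
    A minimal sketch of the return-volatility correlation function commonly used to quantify the leverage and anti-leverage effects discussed above (the paper's exact estimator may differ):

        import numpy as np

        def leverage_correlation(returns, max_lag=20):
            """L[k] = <r(t) |r(t+k+1)|^2> / <r^2>^2 for lags 1..max_lag."""
            r = np.asarray(returns) - np.mean(returns)
            norm = np.mean(r ** 2) ** 2
            return np.array([
                np.mean(r[:-lag] * np.abs(r[lag:]) ** 2) / norm
                for lag in range(1, max_lag + 1)
            ])

        # Negative values at small lags signal the leverage effect (past losses
        # raise future volatility); positive values signal the anti-leverage effect.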

  16. Statistical analysis on multifractal detrended cross-correlation coefficient for return interval by oriented percolation

    NASA Astrophysics Data System (ADS)

    Deng, Wei; Wang, Jun

    2015-06-01

    We investigate and quantify the multifractal detrended cross-correlation of return interval series for Chinese stock markets and for a proposed price model established by oriented percolation. The return interval describes the waiting time between two successive price volatilities above some threshold; the present work is an attempt to quantify the level of multifractal detrended cross-correlation of these return intervals. Further, the concept of the MF-DCCA coefficient of return intervals is introduced, and the corresponding empirical research is performed. The empirical results show that the return intervals of the SSE and SZSE are weakly positively multifractal power-law cross-correlated and exhibit characteristic fluctuation patterns of the MF-DCCA coefficients. Similar behavior of the return intervals is also demonstrated for the price model.
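
    For reference, a minimal sketch of the (monofractal) detrended cross-correlation coefficient at a single scale, the quantity that the MF-DCCA coefficient generalizes; the multifractal version additionally varies a moment order q:

        import numpy as np

        def _detrended_segments(x, n):
            """Linearly detrended windows of the integrated profile of x."""
            y = np.cumsum(x - np.mean(x))
            t = np.arange(n)
            segs = []
            for w in range(len(y) // n):
                seg = y[w * n:(w + 1) * n]
                segs.append(seg - np.polyval(np.polyfit(t, seg, 1), t))
            return np.array(segs)

        def rho_dcca(x1, x2, n):
            d1, d2 = _detrended_segments(x1, n), _detrended_segments(x2, n)
            f_cross = np.mean(d1 * d2)                    # detrended covariance
            return f_cross / np.sqrt(np.mean(d1 ** 2) * np.mean(d2 ** 2))

        # rho = rho_dcca(series_a, series_b, n=32)  # one coefficient per scale n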

  17. Cross-correlations between the US monetary policy, US dollar index and crude oil market

    NASA Astrophysics Data System (ADS)

    Sun, Xinxin; Lu, Xinsheng; Yue, Gongzheng; Li, Jianfeng

    2017-02-01

    This paper investigates the cross-correlations between the US monetary policy, US dollar index and WTI crude oil market, using a dataset covering a period from February 4, 1994 to February 29, 2016. Our study contributes to the literature by examining the effect of the US monetary policy on US dollar index and WTI crude oil through the MF-DCCA approach. The empirical results show that the cross-correlations between the three sets of time series exhibit strong multifractal features with the strength of multifractality increasing over the sample period. Employing a rolling window analysis, our empirical results show that the US monetary policy operations have clear influences on the cross-correlated behavior of the three time series covered by this study.

  18. Mental Task Classification Scheme Utilizing Correlation Coefficient Extracted from Interchannel Intrinsic Mode Function.

    PubMed

    Rahman, Md Mostafizur; Fattah, Shaikh Anowarul

    2017-01-01

    In view of the recent increase in brain-computer interface (BCI) based applications, the importance of efficient classification of various mental tasks has increased considerably. In order to obtain effective classification, an efficient feature extraction scheme is necessary, for which, in the proposed method, the interchannel relationship among electroencephalogram (EEG) data is utilized. It is expected that the correlation obtained from different combinations of channels will differ for different mental tasks, which can be exploited to extract distinctive features. The empirical mode decomposition (EMD) technique is employed on a test EEG signal obtained from a channel, which provides a number of intrinsic mode functions (IMFs), and the correlation coefficient is extracted from interchannel IMF data. Simultaneously, different statistical features are also obtained from each IMF. Finally, the feature matrix is formed utilizing interchannel correlation features and intrachannel statistical features of the selected IMFs of the EEG signal. Different kernels of the support vector machine (SVM) classifier are used to carry out the classification task. An EEG dataset containing ten different combinations of five different mental tasks is utilized to demonstrate the classification performance, and a very high level of accuracy is achieved by the proposed scheme compared to existing methods.

  19. Defects diagnosis in laser brazing using near-infrared signals based on empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Cheng, Liyong; Mi, Gaoyang; Li, Shuo; Wang, Chunming; Hu, Xiyuan

    2018-03-01

    Real-time monitoring of laser welding plays a very important role in modern automated production, and online defect diagnosis is necessary. In this study, the status of laser brazing was monitored in real time using an infrared photoelectric sensor. Four kinds of braze seams (healthy weld, unfilled weld, hole weld and rough surface weld), along with the corresponding near-infrared signals, were obtained. Empirical mode decomposition (EMD) was then applied to analyze the near-infrared signals. The results showed that the EMD method performed well in eliminating noise from the near-infrared signals. The correlation coefficient was then used to select the intrinsic mode function (IMF) components most sensitive to the weld defects, and a more accurate signal was reconstructed from the selected IMF components. Simultaneously, the spectrum of the selected IMF components was computed using the fast Fourier transform, and the frequency characteristics were clearly revealed. The frequency energy of different frequency bands was computed to diagnose the defects, and there was a significant difference among the four types of weld defects. This approach has proved to be an effective and efficient method for monitoring laser brazing defects.

  20. On the analysis of very small samples of Gaussian repeated measurements: an alternative approach.

    PubMed

    Westgate, Philip M; Burchett, Woodrow W

    2017-03-15

    The analysis of very small samples of Gaussian repeated measurements can be challenging. First, due to a very small number of independent subjects contributing outcomes over time, statistical power can be quite small. Second, nuisance covariance parameters must be appropriately accounted for in the analysis in order to maintain the nominal test size. However, available statistical strategies that ensure valid statistical inference may lack power, whereas more powerful methods may have the potential for inflated test sizes. Therefore, we explore an alternative approach to the analysis of very small samples of Gaussian repeated measurements, with the goal of maintaining valid inference while also improving statistical power relative to other valid methods. This approach uses generalized estimating equations with a bias-corrected empirical covariance matrix that accounts for all small-sample aspects of nuisance correlation parameter estimation in order to maintain valid inference. Furthermore, the approach utilizes correlation selection strategies with the goal of choosing the working structure that will result in the greatest power. In our study, we show that when accurate modeling of the nuisance correlation structure impacts the efficiency of regression parameter estimation, this method can improve power relative to existing methods that yield valid inference. Copyright © 2017 John Wiley & Sons, Ltd.
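
    As a point of reference, statsmodels exposes a bias-reduced (Mancl-DeRouen) empirical covariance for GEE, which is in the spirit of the correction discussed above though not necessarily the paper's exact estimator. A sketch on synthetic long-format data:

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      # Hypothetical long-format data: one row per (subject, time) pair
      rng = np.random.default_rng(0)
      df = pd.DataFrame({
          "subject": np.repeat(np.arange(8), 4),   # very small sample
          "time":    np.tile(np.arange(4), 8),
          "y":       rng.normal(size=32),
      })
      df["group"] = df["subject"] % 2              # between-subject covariate

      model = sm.GEE.from_formula(
          "y ~ group + time", groups="subject", data=df,
          cov_struct=sm.cov_struct.Exchangeable(),
          family=sm.families.Gaussian())
      # 'bias_reduced' requests the small-sample corrected empirical
      # covariance instead of the usual sandwich estimator
      res = model.fit(cov_type="bias_reduced")
      print(res.summary())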

  1. Movement patterns of Tenebrio beetles demonstrate empirically that correlated-random-walks have similitude with a Lévy walk.

    PubMed

    Reynolds, Andy M; Leprêtre, Lisa; Bohan, David A

    2013-11-07

    Correlated random walks are the dominant conceptual framework for modelling and interpreting organism movement patterns. Recent years have witnessed a stream of high profile publications reporting that many organisms perform Lévy walks; movement patterns that seemingly stand apart from the correlated random walk paradigm because they are discrete and scale-free rather than continuous and scale-finite. Our new study of the movement patterns of Tenebrio molitor beetles in unchanging, featureless arenas provides the first empirical support for a remarkable and deep theoretical synthesis that unites correlated random walks and Lévy walks. It demonstrates that the two models are complementary rather than competing descriptions of movement pattern data and shows that correlated random walks are a part of the Lévy walk family. It follows from this that vast numbers of Lévy walkers could be hiding in plain sight.

  2. Asymmetric multiscale detrended fluctuation analysis of California electricity spot price

    NASA Astrophysics Data System (ADS)

    Fan, Qingju

    2016-01-01

    In this paper, we develop a new method called asymmetric multiscale detrended fluctuation analysis, an extension of asymmetric detrended fluctuation analysis (A-DFA) that can assess the asymmetric correlation properties of series over a variable scale range. We investigate the asymmetric correlations in the California 1999-2000 power market after filtering out periodic trends by empirical mode decomposition (EMD). Our findings show the coexistence of symmetric and asymmetric correlations in the 1999 price series and strong asymmetric correlations in 2000. Moreover, we detect subtle correlation properties of the upward and downward price series for most larger scale intervals in 2000. Meanwhile, the fluctuations of Δα(s) (asymmetry) and |Δα(s)| (absolute asymmetry) are more significant in 2000 than in 1999 for larger scale intervals, and they have similar characteristics for smaller scale intervals. We conclude that the strong asymmetry and the different correlation properties of the upward and downward price series for larger scale intervals in 2000 have important implications for the collapse of the California power market, and our findings shed new light on the underlying mechanisms of power prices.
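
    A minimal sketch of the asymmetric DFA idea, assuming the trend direction of each window is judged by the sign of a linear fit to the raw series there; the scale handling and other details are simplified relative to the paper's method.

      import numpy as np

      def asymmetric_dfa(x, scales, order=1):
          """Sketch of A-DFA: separate fluctuation functions for windows
          whose raw-series trend is upward vs. downward. Assumes both
          trend directions occur at every scale."""
          y = np.cumsum(x - np.mean(x))                  # profile
          n = len(x)
          F_up, F_down = [], []
          for s in scales:
              f_up, f_down = [], []
              for v in range(n // s):
                  seg = slice(v * s, (v + 1) * s)
                  t = np.arange(s)
                  slope = np.polyfit(t, x[seg], 1)[0]    # raw-series trend
                  res = y[seg] - np.polyval(np.polyfit(t, y[seg], order), t)
                  (f_up if slope >= 0 else f_down).append(np.mean(res ** 2))
              F_up.append(np.sqrt(np.mean(f_up)))
              F_down.append(np.sqrt(np.mean(f_down)))
          # The difference of the two scaling exponents quantifies asymmetry
          a_up = np.polyfit(np.log(scales), np.log(F_up), 1)[0]
          a_down = np.polyfit(np.log(scales), np.log(F_down), 1)[0]
          return a_up, a_down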

  3. Evaluating the utility of two gestural discomfort evaluation methods

    PubMed Central

    Son, Minseok; Jung, Jaemoon; Park, Woojin

    2017-01-01

    Evaluating physical discomfort of designed gestures is important for creating safe and usable gesture-based interaction systems; yet, gestural discomfort evaluation has not been extensively studied in HCI, and few evaluation methods seem currently available whose utility has been experimentally confirmed. To address this, this study empirically demonstrated the utility of the subjective rating method after a small number of gesture repetitions (a maximum of four repetitions) in evaluating designed gestures in terms of physical discomfort resulting from prolonged, repetitive gesture use. The subjective rating method has been widely used in previous gesture studies but without empirical evidence on its utility. This study also proposed a gesture discomfort evaluation method based on an existing ergonomics posture evaluation tool (Rapid Upper Limb Assessment) and demonstrated its utility in evaluating designed gestures in terms of physical discomfort resulting from prolonged, repetitive gesture use. Rapid Upper Limb Assessment is an ergonomics postural analysis tool that quantifies the work-related musculoskeletal disorders risks for manual tasks, and has been hypothesized to be capable of correctly determining discomfort resulting from prolonged, repetitive gesture use. The two methods were evaluated through comparisons against a baseline method involving discomfort rating after actual prolonged, repetitive gesture use. Correlation analyses indicated that both methods were in good agreement with the baseline. The methods proposed in this study seem useful for predicting discomfort resulting from prolonged, repetitive gesture use, and are expected to help interaction designers create safe and usable gesture-based interaction systems. PMID:28423016

  4. Empirical correlates for the Minnesota Multiphasic Personality Inventory-2-Restructured Form in a German inpatient sample.

    PubMed

    Moultrie, Josefine K; Engel, Rolf R

    2017-10-01

    We identified empirical correlates for the 42 substantive scales of the German language version of the Minnesota Multiphasic Personality Inventory (MMPI)-2-Restructured Form (MMPI-2-RF): Higher Order, Restructured Clinical, Specific Problem, Interest, and revised Personality Psychopathology Five scales. We collected external validity data by means of a 177-item chart review form in a sample of 488 psychiatric inpatients of a German university hospital. We structured our findings along the interpretational guidelines for the MMPI-2-RF and compared them with the validity data published in the tables of the MMPI-2-RF Technical Manual. Our results show significant correlations between MMPI-2-RF scales and conceptually relevant criteria. Most of the results were in line with U.S. validation studies. Some of the differences could be attributed to sample compositions. For most of the scales, construct validity coefficients were acceptable. Taken together, this study adds to the growing body of research on empirical correlates of the MMPI-2-RF scales in a new sample. The study suggests that the interpretations given in the MMPI-2-RF manual may be generalizable to the German language MMPI-2-RF. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  5. Helicopter rotor and engine sizing for preliminary performance estimation

    NASA Technical Reports Server (NTRS)

    Talbot, P. D.; Bowles, J. V.; Lee, H. C.

    1986-01-01

    Methods are presented for estimating some of the more fundamental design variables of single-rotor helicopters (tip speed, blade area, disk loading, and installed power) based on design requirements (speed, weight, fuselage drag, and design hover ceiling). The well-known constraints of advancing-blade compressibility and retreating-blade stall are incorporated into the estimation process, based on an empirical interpretation of rotor performance data from large-scale wind-tunnel tests. Engine performance data are presented and correlated with a simple model usable for preliminary design. When approximate results are required quickly, these methods may be more convenient to use and provide more insight than large digital computer programs.

  6. Investigation of the turbulent wind field below 500 feet altitude at the Eastern Test Range, Florida

    NASA Technical Reports Server (NTRS)

    Blackadar, A. K.; Panofsky, H. A.; Fiedler, F.

    1974-01-01

    A detailed analysis of wind profiles and turbulence at the 150 m Cape Kennedy Meteorological Tower is presented. Various methods are explored for the estimation of wind profiles, wind variances, high-frequency spectra, and coherences between various levels, given roughness length and either low-level wind and temperature data, or geostrophic wind and insolation. The relationship between planetary Richardson number, insolation, and geostrophic wind is explored empirically. Techniques were devised which resulted in surface stresses reasonably well correlated with the surface stresses obtained from low-level data. Finally, practical methods are suggested for the estimation of wind profiles and wind statistics.

  7. Filtration of human EEG recordings from physiological artifacts with empirical mode method

    NASA Astrophysics Data System (ADS)

    Grubov, Vadim V.; Runnova, Anastasiya E.; Khramova, Marina V.

    2017-03-01

    In this paper we propose a new method for dealing with noise and physiological artifacts in experimental human EEG recordings. The method is based on analysis of EEG signals with empirical mode decomposition (the Hilbert-Huang transform). We treat noise and physiological artifacts in the EEG as specific oscillatory patterns that cause problems during EEG analysis and that can be detected with additional signals recorded simultaneously with the EEG (ECG, EMG, EOG, etc.). The algorithm of the method has the following steps: empirical mode decomposition of the EEG signal, identification of the empirical modes containing artifacts, removal of those modes, and reconstruction of the initial EEG signal. We test the method by filtering eye-movement artifacts from experimental human EEG signals and show its high efficiency.
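
    A sketch of the algorithm's steps, assuming PyEMD and assuming that artifact-bearing IMFs are identified by their correlation with a simultaneously recorded reference channel (e.g., EOG); the threshold value is illustrative.

      import numpy as np
      from PyEMD import EMD

      def remove_artifact_modes(eeg, artifact_ref, corr_thresh=0.4):
          """Decompose the EEG channel, drop IMFs that correlate strongly
          with a simultaneously recorded reference (e.g., EOG), and
          reconstruct. The threshold is an assumed selection criterion."""
          imfs = EMD()(eeg)
          clean = [imf for imf in imfs
                   if abs(np.corrcoef(imf, artifact_ref)[0, 1]) < corr_thresh]
          return np.sum(clean, axis=0)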

  8. Classical density functional theory and the phase-field crystal method using a rational function to describe the two-body direct correlation function.

    PubMed

    Pisutha-Arnond, N; Chan, V W L; Iyer, M; Gavini, V; Thornton, K

    2013-01-01

    We introduce a new approach to represent a two-body direct correlation function (DCF) in order to alleviate the computational demand of classical density functional theory (CDFT) and enhance the predictive capability of the phase-field crystal (PFC) method. The approach utilizes a rational function fit (RFF) to approximate the two-body DCF in Fourier space. We use the RFF to show that short-wavelength contributions of the two-body DCF play an important role in determining the thermodynamic properties of materials. We further show that using the RFF to empirically parametrize the two-body DCF allows us to obtain the thermodynamic properties of solids and liquids that agree with the results of CDFT simulations with the full two-body DCF without incurring significant computational costs. In addition, the RFF can also be used to improve the representation of the two-body DCF in the PFC method. Last, the RFF allows for a real-space reformulation of the CDFT and PFC method, which enables descriptions of nonperiodic systems and the use of nonuniform and adaptive grids.
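
    The abstract does not give the functional form of the RFF, so the ratio-of-quadratics in k² below is purely an assumed example of fitting a rational function to tabulated Fourier-space DCF data with SciPy.

      import numpy as np
      from scipy.optimize import curve_fit

      def rational_dcf(k, a0, a1, a2, b1, b2):
          """Assumed rational form: a ratio of quadratics in k**2."""
          k2 = k ** 2
          return (a0 + a1 * k2 + a2 * k2 ** 2) / (1.0 + b1 * k2 + b2 * k2 ** 2)

      # Hypothetical usage on tabulated Fourier-space DCF data:
      # params, _ = curve_fit(rational_dcf, k_data, c2_data, p0=np.ones(5))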

  9. Econophysics — complex correlations and trend switchings in financial time series

    NASA Astrophysics Data System (ADS)

    Preis, T.

    2011-03-01

    This article focuses on the analysis of financial time series and their correlations. A method is used for quantifying pattern based correlations of a time series. With this methodology, evidence is found that typical behavioral patterns of financial market participants manifest over short time scales, i.e., that reactions to given price patterns are not entirely random, but that similar price patterns also cause similar reactions. Based on the investigation of the complex correlations in financial time series, the question arises, which properties change when switching from a positive trend to a negative trend. An empirical quantification by rescaling provides the result that new price extrema coincide with a significant increase in transaction volume and a significant decrease in the length of corresponding time intervals between transactions. These findings are independent of the time scale over 9 orders of magnitude, and they exhibit characteristics which one can also find in other complex systems in nature (and in physical systems in particular). These properties are independent of the markets analyzed. Trends that exist only for a few seconds show the same characteristics as trends on time scales of several months. Thus, it is possible to study financial bubbles and their collapses in more detail, because trend switching processes occur with higher frequency on small time scales. In addition, a Monte Carlo based simulation of financial markets is analyzed and extended in order to reproduce empirical features and to gain insight into their causes. These causes include both financial market microstructure and the risk aversion of market participants.

  10. The Index cohesive effect on stock market correlations

    NASA Astrophysics Data System (ADS)

    Shapira, Y.; Kenett, D. Y.; Ben-Jacob, E.

    2009-12-01

    We present empirical examination and reassessment of the functional role of the market Index, using datasets of stock returns for eight years, by analyzing and comparing the results for two very different markets: 1) the New York Stock Exchange (NYSE), representing a large, mature market, and 2) the Tel Aviv Stock Exchange (TASE), representing a small, young market. Our method includes special collective (holographic) analysis of stock-Index correlations, of nested stock correlations (including the Index as an additional ghost stock) and of bare stock correlations (after subtraction of the Index return from the stock returns). Our findings verify and strongly substantiate the assumed functional role of the index in the financial system as a cohesive force between stocks, i.e., the correlations between stocks are largely due to the strong correlation between each stock and the Index (the adhesive effect), rather than inter-stock dependencies. The Index adhesive and cohesive effects on the market correlations in the two markets are presented and compared in a reduced 3-D principal component space of the correlation matrices (holographic presentation). The results provide new insights into the interplay between an index and its constituent stocks in TASE-like versus NYSE-like markets.
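
    A small sketch of the nested and bare correlation matrices as described above: the index is appended as a ghost stock, or its return is subtracted from each stock's return before correlating. Array shapes and function names are assumptions.

      import numpy as np

      def nested_correlations(returns, index_returns):
          """Correlations with the index included as a 'ghost' stock.
          returns: (n_stocks, n_days); index_returns: (n_days,)."""
          return np.corrcoef(np.vstack([returns, index_returns]))

      def bare_correlations(returns, index_returns):
          """'Bare' correlations after subtracting the index return from
          each stock's return series, as described above."""
          return np.corrcoef(returns - index_returns)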

  11. A comparison of four streamflow record extension techniques

    USGS Publications Warehouse

    Hirsch, Robert M.

    1982-01-01

    One approach to developing time series of streamflow, which may be used for simulation and optimization studies of water resources development activities, is to extend an existing gage record in time by exploiting the interstation correlation between the station of interest and some nearby (long-term) base station. Four methods of extension are described, and their properties are explored. The methods are regression (REG), regression plus noise (RPN), and two new methods, maintenance of variance extension types 1 and 2 (MOVE.1, MOVE.2). MOVE.1 is equivalent to a method which is widely used in psychology, biometrics, and geomorphology and which has been called by various names, e.g., ‘line of organic correlation,’ ‘reduced major axis,’ ‘unique solution,’ and ‘equivalence line.’ The methods are examined for bias and standard error of estimate of moments and order statistics, and an empirical examination is made of the preservation of historic low-flow characteristics using 50-year-long monthly records from seven streams. The REG and RPN methods are shown to have serious deficiencies as record extension techniques. MOVE.2 is shown to be marginally better than MOVE.1, according to the various comparisons of bias and accuracy.
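
    MOVE.1 (the line of organic correlation) has a simple closed form; a minimal sketch, with hypothetical argument names:

      import numpy as np

      def move1(x_long, x_short, y_short):
          """MOVE.1 / line of organic correlation: extend the short record
          y using the long base record x. The slope sign(r) * s_y / s_x
          preserves the variance of y, unlike ordinary regression."""
          mx, my = np.mean(x_short), np.mean(y_short)
          sx, sy = np.std(x_short, ddof=1), np.std(y_short, ddof=1)
          r = np.corrcoef(x_short, y_short)[0, 1]
          return my + np.sign(r) * (sy / sx) * (np.asarray(x_long) - mx)

    Preserving the variance of the extended record is exactly why MOVE-type methods outperform REG and RPN for reproducing low-flow order statistics.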

  12. A Comparison of Four Streamflow Record Extension Techniques

    NASA Astrophysics Data System (ADS)

    Hirsch, Robert M.

    1982-08-01

    One approach to developing time series of streamflow, which may be used for simulation and optimization studies of water resources development activities, is to extend an existing gage record in time by exploiting the interstation correlation between the station of interest and some nearby (long-term) base station. Four methods of extension are described, and their properties are explored. The methods are regression (REG), regression plus noise (RPN), and two new methods, maintenance of variance extension types 1 and 2 (MOVE.1, MOVE.2). MOVE.1 is equivalent to a method which is widely used in psychology, biometrics, and geomorphology and which has been called by various names, e.g., `line of organic correlation,' `reduced major axis,' `unique solution,' and `equivalence line.' The methods are examined for bias and standard error of estimate of moments and order statistics, and an empirical examination is made of the preservation of historic low-flow characteristics using 50-year-long monthly records from seven streams. The REG and RPN methods are shown to have serious deficiencies as record extension techniques. MOVE.2 is shown to be marginally better than MOVE.1, according to the various comparisons of bias and accuracy.

  13. Use of Empirical Estimates of Shrinkage in Multiple Regression: A Caution.

    ERIC Educational Resources Information Center

    Kromrey, Jeffrey D.; Hines, Constance V.

    1995-01-01

    The accuracy of four empirical techniques to estimate shrinkage in multiple regression was studied through Monte Carlo simulation. None of the techniques provided unbiased estimates of the population squared multiple correlation coefficient, but the normalized jackknife and bootstrap techniques demonstrated marginally acceptable performance with…

  14. Correlation of Apollo oxygen tank thermodynamic performance predictions

    NASA Technical Reports Server (NTRS)

    Patterson, H. W.

    1971-01-01

    Parameters necessary to analyze the stratified performance of the Apollo oxygen tanks include g levels, tank elasticity, flow rates, and pressurized volumes. Methods are described for estimating g levels and flow rates from flight plans prior to flight, and from guidance and system data for use in postflight analysis. Equilibrium thermodynamic equations are developed for the effects of tank elasticity and pressurized volumes on the tank pressure response, and their relative magnitudes are discussed. Correlations of tank pressures and heater temperatures from flight data with the results of a stratification model are shown. Heater temperatures were also estimated with an empirical heat transfer correlation; agreement with flight data improved when fluid properties were averaged rather than evaluated at the mean film temperature.

  15. Accuracy of p53 Codon 72 Polymorphism Status Determined by Multiple Laboratory Methods: A Latent Class Model Analysis

    PubMed Central

    Walter, Stephen D.; Riddell, Corinne A.; Rabachini, Tatiana; Villa, Luisa L.; Franco, Eduardo L.

    2013-01-01

    Introduction: Studies on the association of a polymorphism in codon 72 of the p53 tumour suppressor gene (rs1042522) with cervical neoplasia have inconsistent results. While several methods for genotyping p53 exist, they vary in accuracy and are often discrepant. Methods: We used latent class models (LCM) to examine the accuracy of six methods for p53 determination, all conducted by the same laboratory. We also examined the association of p53 with cytological cervical abnormalities, recognising potential test inaccuracy. Results: Pairwise disagreement between laboratory methods occurred approximately 10% of the time. Given the estimated true p53 status of each woman, we found that each laboratory method is most likely to classify a woman to her correct status. Arg/Arg women had the highest risk of squamous intraepithelial lesions (SIL). Test accuracy was independent of cytology. There was no strong evidence for correlations of test errors. Discussion: Empirical analyses ignore possible laboratory errors, and so are inherently biased, but test accuracy estimated by the LCM approach is unbiased when model assumptions are met. LCM analysis avoids ambiguities arising from empirical test discrepancies, obviating the need to regard any of the methods as a “gold” standard measurement. The methods we presented here to analyse the p53 data can be applied in many other situations where multiple tests exist, but where none of them is a gold standard. PMID:23441193

  16. A Sector Capacity Assessment Method Based on Airspace Utilization Efficiency

    NASA Astrophysics Data System (ADS)

    Zhang, Jianping; Zhang, Ping; Li, Zhen; Zou, Xiang

    2018-02-01

    Sector capacity is one of the core factors affecting the safety and efficiency of the air traffic system. Most previous sector capacity assessment methods considered only the air traffic controller's (ATCO's) workload; such methods are limited because they address safety alone, and they are also not accurate enough. In this paper, we employ the integrated quantitative index system proposed in one of our previous papers. We use principal component analysis (PCA) to find the principal indicators and thereby calculate the airspace utilization efficiency. In addition, we use a series of fitting functions to test and define the correlation between the density of air traffic flow and the airspace utilization efficiency. The sector capacity is then taken as the density of air traffic flow corresponding to the maximum airspace utilization efficiency. We also use the same series of fitting functions to test the correlation between the density of air traffic flow and the ATCOs' workload. We examine our method on a large amount of empirical operating data from the Chengdu Control Center and obtain a reliable sector capacity value. Experimental results also show the superiority of our method over those that consider only the ATCO's workload, in terms of a better correlation between the airspace utilization efficiency and the density of air traffic flow.
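
    A toy sketch of this pipeline under strong assumptions: a one-component PCA stands in for the indicator aggregation, and a concave quadratic stands in for the paper's unspecified "series of fitting functions".

      import numpy as np
      from sklearn.decomposition import PCA

      def sector_capacity(indicators, density):
          """Toy version: one PCA component as the utilization-efficiency
          score, a quadratic fit of efficiency against traffic density,
          and the capacity taken at the efficiency maximum."""
          eff = PCA(n_components=1).fit_transform(indicators).ravel()
          c = np.polyfit(density, eff, 2)        # assumes c[0] < 0 (concave)
          return -c[1] / (2.0 * c[0])            # argmax of the quadratic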

  17. Improved Design of Tunnel Supports : Volume 5 : Empirical Methods in Rock Tunneling -- Review and Recommendations

    DOT National Transportation Integrated Search

    1980-06-01

    Volume 5 evaluates empirical methods in tunneling. Empirical methods that avoid the use of an explicit model by relating ground conditions to observed prototype behavior have played a major role in tunnel design. The main objective of this volume is ...

  18. The Research on the Factors of Purchase Intention for Fresh Agricultural Products in an E-Commerce Environment

    NASA Astrophysics Data System (ADS)

    Han, Dan; Mu, Jing

    2017-12-01

    Based on the characteristics of e-commerce of fresh agricultural products in China, and using the correlation analysis method, a relational model between product knowledge, perceived benefit, perceived risk, and purchase intention is constructed. A logistic model is used to carry out the empirical analysis, and the influence factors and mechanism of online purchase intention are explored. The results show that consumers' product knowledge, perceived benefit, and perceived risk all affect their purchase intention. Consumers' product knowledge has a positive effect on perceived benefit, and perceived benefit has a positive effect on purchase intention. Consumers' product knowledge has a negative effect on perceived risk, perceived benefit has a negative effect on perceived risk, and perceived risk has a negative effect on purchase intention. Through the empirical analysis, some feasible suggestions for the government and e-commerce enterprises are provided.

  19. Dealing with noise and physiological artifacts in human EEG recordings: empirical mode methods

    NASA Astrophysics Data System (ADS)

    Runnova, Anastasiya E.; Grubov, Vadim V.; Khramova, Marina V.; Hramov, Alexander E.

    2017-04-01

    In this paper we propose a new method for removing noise and physiological artifacts from human EEG recordings based on empirical mode decomposition (the Hilbert-Huang transform). As physiological artifacts we consider specific oscillatory patterns that cause problems during EEG analysis and can be detected with additional signals recorded simultaneously with the EEG (ECG, EMG, EOG, etc.). The algorithm of the proposed method comprises empirical mode decomposition of the EEG signal, identification of the empirical modes containing artifacts, removal of those modes, and reconstruction of the initial EEG signal. We show the efficiency of the method by filtering eye-movement artifacts from a human EEG signal.

  20. Dynamic correlations at different time-scales with empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Nava, Noemi; Di Matteo, T.; Aste, Tomaso

    2018-07-01

    We introduce a simple approach which combines Empirical Mode Decomposition (EMD) and Pearson's cross-correlations over rolling windows to quantify dynamic dependency at different time scales. The EMD is a tool to separate time series into implicit components which oscillate at different time-scales. We apply this decomposition to intraday time series of the following three financial indices: the S&P 500 (USA), the IPC (Mexico) and the VIX (volatility index USA), obtaining time-varying multidimensional cross-correlations at different time-scales. The correlations computed over a rolling window are compared across the three indices, across the components at different time-scales and across different time lags. We uncover a rich heterogeneity of interactions, which depends on the time-scale and has important lead-lag relations that could have practical use for portfolio management, risk estimation and investment decisions.
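
    A minimal sketch of this combination, assuming PyEMD and pandas; the IMF index and the window length (390 one-minute bars, roughly one US trading day) are illustrative choices.

      import numpy as np
      import pandas as pd
      from PyEMD import EMD

      def rolling_imf_correlation(a, b, imf_index=2, window=390):
          """Correlate the imf_index-th EMD components of two series over
          a rolling window, giving a time-varying, scale-specific
          dependency measure."""
          ia = EMD()(np.asarray(a, dtype=float))[imf_index]
          ib = EMD()(np.asarray(b, dtype=float))[imf_index]
          return pd.Series(ia).rolling(window).corr(pd.Series(ib))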

  1. Price-volume multifractal analysis and its application in Chinese stock markets

    NASA Astrophysics Data System (ADS)

    Yuan, Ying; Zhuang, Xin-tian; Liu, Zhi-ying

    2012-06-01

    Empirical research on Chinese stock markets is conducted using statistical tools. First, the multifractality of the stock price return series r_i (r_i = ln(P_{t+1}) - ln(P_t)) and the trading volume variation series v_i (v_i = ln(V_{t+1}) - ln(V_t)) is confirmed using multifractal detrended fluctuation analysis. Furthermore, a multifractal detrended cross-correlation analysis between stock price return and trading volume variation in Chinese stock markets is also conducted, showing that the cross relationship between them is also multifractal. Second, the cross-correlation between stock price P_i and trading volume V_i is studied empirically using the cross-correlation function and detrended cross-correlation analysis. Both the Shanghai and Shenzhen stock markets show pronounced long-range cross-correlations between stock price and trading volume. Third, a composite index R based on price and trading volume is introduced. Compared with the stock price return series r_i and the trading volume variation series v_i, the R variation series not only retains the characteristics of the original series but also captures the relative correlation between stock price and trading volume. Finally, we analyze the multifractal characteristics of the R variation series before and after three financial events in China (namely, Price Limits, the Reform of Non-tradable Shares, and the financial crisis of 2008) over the whole sample period to study changes in stock market fluctuation and financial risk. The empirical results verify the validity of R.

  2. Comment: Spurious Correlation and Other Observations on Experimental Design for Engineering Dimensional Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piepel, Gregory F.

    2013-08-01

    This article discusses the paper "Experimental Design for Engineering Dimensional Analysis" by Albrecht et al. (2013, Technometrics). That paper provides an overview of engineering dimensional analysis (DA) for use in developing DA models and proposes methods for generating model-robust experimental designs to support fitting DA models. The specific approach is to develop a design that maximizes the efficiency of a specified empirical model (EM) in the original independent variables, subject to a minimum efficiency for a DA model expressed in terms of dimensionless groups (DGs). This discussion article raises several issues and makes recommendations regarding the proposed approach. The concept of spurious correlation is also raised and discussed: spurious correlation results from the response DG being calculated using several independent variables that are also used to calculate predictor DGs in the DA model.

  3. A Canonical Ensemble Correlation Prediction Model for Seasonal Precipitation Anomaly

    NASA Technical Reports Server (NTRS)

    Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Guilong

    2001-01-01

    This report describes an optimal ensemble forecasting model for seasonal precipitation and its error estimation. Each individual forecast is based on the canonical correlation analysis (CCA) in the spectral spaces whose bases are empirical orthogonal functions (EOF). The optimal weights in the ensemble forecasting crucially depend on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is made also using the spectral method. The error is decomposed onto EOFs of the predictand and decreases linearly according to the correlation between the predictor and predictand. This new CCA model includes the following features: (1) the use of area-factor, (2) the estimation of prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to the seasonal forecasting of the United States precipitation field. The predictor is the sea surface temperature.

  4. Evaluating the impact of sea surface temperature (SST) on spatial distribution of chlorophyll-a concentration in the East China Sea

    NASA Astrophysics Data System (ADS)

    Ji, Chenxu; Zhang, Yuanzhi; Cheng, Qiuming; Tsou, JinYeu; Jiang, Tingchen; Liang, X. San

    2018-06-01

    In this study, we analyze spatial and temporal sea surface temperature (SST) and chlorophyll-a (Chl-a) concentration in the East China Sea (ECS) during the period 2003-2016. Level 3 (4 km) monthly SST and Chl-a data from the Moderate Resolution Imaging Spectroradiometer (MODIS-Aqua) were reconstructed using the data interpolating empirical orthogonal function (DINEOF) method and used to evaluate the relationship between the two variables. The approaches employed included correlation analysis and regression analysis. Our results show that strong oceanic SSTs affect Chl-a concentration, with particularly high correlation seen in the coastal areas of Jiangsu and Zhejiang provinces. The mean temperature of the highly correlated region was 18.67 °C. This finding suggests that SST has an important impact on the spatial distribution of Chl-a concentration in the ECS.

  5. CORRELATION PURSUIT: FORWARD STEPWISE VARIABLE SELECTION FOR INDEX MODELS

    PubMed Central

    Zhong, Wenxuan; Zhang, Tingting; Zhu, Yu; Liu, Jun S.

    2012-01-01

    In this article, a stepwise procedure, correlation pursuit (COP), is developed for variable selection under the sufficient dimension reduction framework, in which the response variable Y is influenced by the predictors X1, X2, …, Xp through an unknown function of a few linear combinations of them. Unlike linear stepwise regression, COP does not impose a special form of relationship (such as linear) between the response variable and the predictor variables. The COP procedure selects variables that attain the maximum correlation between the transformed response and the linear combination of the variables. Various asymptotic properties of the COP procedure are established, and in particular, its variable selection performance under diverging number of predictors and sample size has been investigated. The excellent empirical performance of the COP procedure in comparison with existing methods is demonstrated by both extensive simulation studies and a real example in functional genomics. PMID:23243388
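
    A toy, purely linear analogue of the forward stepwise idea (the actual COP works with a transformed response under sufficient dimension reduction, which is not reproduced here; the function name is illustrative):

      import numpy as np

      def correlation_pursuit_toy(X, y, k):
          """At each step, add the predictor that maximizes the correlation
          of y with the fitted linear combination of selected predictors."""
          selected, remaining = [], list(range(X.shape[1]))
          for _ in range(k):
              best, best_r = None, -np.inf
              for j in remaining:
                  cols = selected + [j]
                  beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
                  r = np.corrcoef(X[:, cols] @ beta, y)[0, 1]
                  if r > best_r:
                      best, best_r = j, r
              selected.append(best)
              remaining.remove(best)
          return selected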

  6. Inference With Difference-in-Differences With a Small Number of Groups: A Review, Simulation Study, and Empirical Application Using SHARE Data.

    PubMed

    Rokicki, Slawa; Cohen, Jessica; Fink, Günther; Salomon, Joshua A; Landrum, Mary Beth

    2018-01-01

    Difference-in-differences (DID) estimation has become increasingly popular as an approach to evaluate the effect of a group-level policy on individual-level outcomes. Several statistical methodologies have been proposed to correct for the within-group correlation of model errors resulting from the clustering of data. Little is known about how well these corrections perform with the often small number of groups observed in health research using longitudinal data. First, we review the most commonly used modeling solutions in DID estimation for panel data, including generalized estimating equations (GEE), permutation tests, clustered standard errors (CSE), wild cluster bootstrapping, and aggregation. Second, we compare the empirical coverage rates and power of these methods using a Monte Carlo simulation study in scenarios in which we vary the degree of error correlation, the group size balance, and the proportion of treated groups. Third, we provide an empirical example using the Survey of Health, Ageing, and Retirement in Europe. When the number of groups is small, CSE are systematically biased downwards in scenarios when data are unbalanced or when there is a low proportion of treated groups. This can result in over-rejection of the null even when data are composed of up to 50 groups. Aggregation, permutation tests, bias-adjusted GEE, and wild cluster bootstrap produce coverage rates close to the nominal rate for almost all scenarios, though GEE may suffer from low power. In DID estimation with a small number of groups, analysis using aggregation, permutation tests, wild cluster bootstrap, or bias-adjusted GEE is recommended.
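
    Of the methods compared above, aggregation is the simplest to sketch: collapse individual outcomes to group-by-period means, then estimate the two-way DID by OLS on the aggregated data, sidestepping within-group error correlation. Column names here are hypothetical.

      import statsmodels.formula.api as smf

      def did_aggregated(df):
          """Aggregation approach to DID. Expects (hypothetical) columns:
          y (outcome), group (cluster id), treated (0/1), post (0/1)."""
          g = df.groupby(["group", "treated", "post"],
                         as_index=False)["y"].mean()
          # Standard two-way DID on the aggregated means
          return smf.ols("y ~ treated * post", data=g).fit()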

  7. Feynman perturbation expansion for the price of coupon bond options and swaptions in quantum finance. II. Empirical

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Liang, Cui

    2007-01-01

    The quantum finance pricing formulas for coupon bond options and swaptions derived by Baaquie [Phys. Rev. E 75, 016703 (2006)] are reviewed. We empirically study the swaption market and propose an efficient computational procedure for analyzing the data. Empirical results of the swaption price, volatility, and swaption correlation are compared with the predictions of quantum finance. The quantum finance model generates the market swaption price to over 90% accuracy.

  8. Feynman perturbation expansion for the price of coupon bond options and swaptions in quantum finance. II. Empirical.

    PubMed

    Baaquie, Belal E; Liang, Cui

    2007-01-01

    The quantum finance pricing formulas for coupon bond options and swaptions derived by Baaquie [Phys. Rev. E 75, 016703 (2006)] are reviewed. We empirically study the swaption market and propose an efficient computational procedure for analyzing the data. Empirical results of the swaption price, volatility, and swaption correlation are compared with the predictions of quantum finance. The quantum finance model generates the market swaption price to over 90% accuracy.

  9. A Physically Motivated and Empirically Calibrated Method to Measure the Effective Temperature, Metallicity, and Ti Abundance of M Dwarfs

    NASA Astrophysics Data System (ADS)

    Veyette, Mark J.; Muirhead, Philip S.; Mann, Andrew W.; Brewer, John M.; Allard, France; Homeier, Derek

    2017-12-01

    The ability to perform detailed chemical analysis of Sun-like F-, G-, and K-type stars is a powerful tool with many applications, including studying the chemical evolution of the Galaxy and constraining planet formation theories. Unfortunately, complications in modeling cooler stellar atmospheres hinders similar analyses of M dwarf stars. Empirically calibrated methods to measure M dwarf metallicity from moderate-resolution spectra are currently limited to measuring overall metallicity and rely on astrophysical abundance correlations in stellar populations. We present a new, empirical calibration of synthetic M dwarf spectra that can be used to infer effective temperature, Fe abundance, and Ti abundance. We obtained high-resolution (R ˜ 25,000), Y-band (˜1 μm) spectra of 29 M dwarfs with NIRSPEC on Keck II. Using the PHOENIX stellar atmosphere modeling code (version 15.5), we generated a grid of synthetic spectra covering a range of temperatures, metallicities, and alpha-enhancements. From our observed and synthetic spectra, we measured the equivalent widths of multiple Fe I and Ti I lines and a temperature-sensitive index based on the FeH band head. We used abundances measured from widely separated solar-type companions to empirically calibrate transformations to the observed indices and equivalent widths that force agreement with the models. Our calibration achieves precisions in T eff, [Fe/H], and [Ti/Fe] of 60 K, 0.1 dex, and 0.05 dex, respectively, and is calibrated for 3200 K < T eff < 4100 K, -0.7 < [Fe/H] < +0.3, and -0.05 < [Ti/Fe] < +0.3. This work is a step toward detailed chemical analysis of M dwarfs at a precision similar to what has been achieved for FGK stars.
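
    The equivalent-width measurements referred to above reduce to a simple integral, EW = ∫(1 − F/F_cont) dλ over the line window; a generic sketch, where the window bounds and the continuum estimate are assumptions:

      import numpy as np

      def equivalent_width(wavelength, flux, continuum, lo, hi):
          """Equivalent width of a spectral line: integrate the fractional
          depth (1 - F / F_cont) across the window [lo, hi]."""
          m = (wavelength >= lo) & (wavelength <= hi)
          return np.trapz(1.0 - flux[m] / continuum[m], wavelength[m])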

  10. A discrete structure of the brain waves.

    NASA Astrophysics Data System (ADS)

    Dabaghian, Yuri; Perotti, Luca; oscillons in biological rhythms Collaboration; physics of biological rhythms Team

    A physiological interpretation of the biological rhythms, e.g., of the local field potentials (LFP) depends on the mathematical approaches used for the analysis. Most existing mathematical methods are based on decomposing the signal into a set of ``primitives,'' e.g., sinusoidal harmonics, and correlating them with different cognitive and behavioral phenomena. A common feature of all these methods is that the decomposition semantics is presumed from the onset, and the goal of the subsequent analysis reduces merely to identifying the combination that best reproduces the original signal. We propose a fundamentally new method in which the decomposition components are discovered empirically, and demonstrate that it is more flexible and more sensitive to the signal's structure than the standard Fourier method. Applying this method to the rodent LFP signals reveals a fundamentally new structure of these ``brain waves.'' In particular, our results suggest that the LFP oscillations consist of a superposition of a small, discrete set of frequency modulated oscillatory processes, which we call ``oscillons''. Since these structures are discovered empirically, we hypothesize that they may capture the signal's actual physical structure, i.e., the pattern of synchronous activity in neuronal ensembles. Proving this hypothesis will help to advance our principal understanding of the neuronal synchronization mechanisms and reveal new structure within the LFPs and other biological oscillations. NSF 1422438 Grant, Houston Bioinformatics Endowment Fund.

  11. Enhancement of lung sounds based on empirical mode decomposition and Fourier transform algorithm.

    PubMed

    Mondal, Ashok; Banerjee, Poulami; Somkuwar, Ajay

    2017-02-01

    Heart sound (HS) signals always interfere during the recording of lung sound (LS) signals. This obscures the features of LS signals and creates confusion about pathological states, if any, of the lungs. In this work, a new method is proposed for reducing heart sound interference, based on the empirical mode decomposition (EMD) technique and a prediction algorithm. In this approach, the mixed signal is first split into several components in terms of intrinsic mode functions (IMFs). Thereafter, HS-included segments are localized and removed from them. The missing values of the gap thus produced are predicted by a new fast Fourier transform (FFT) based prediction algorithm, and the time-domain LS signal is reconstructed by taking an inverse FFT of the estimated missing values. Experiments were conducted on simulated and recorded HS-corrupted LS signals at three different flow rates and various SNR levels. The performance of the proposed method was evaluated by qualitative and quantitative analysis of the results, and it proved superior to the baseline method in both respects across the different SNR levels. Our method gives a cross correlation index (CCI) of 0.9488, a signal to deviation ratio (SDR) of 9.8262, and a normalized maximum amplitude error (NMAE) of 26.94 at 0 dB SNR. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  12. How Volatilities Nonlocal in Time Affect the Price Dynamics in Complex Financial Systems

    PubMed Central

    Tan, Lei; Zheng, Bo; Chen, Jun-Jie; Jiang, Xiong-Fei

    2015-01-01

    The dominating mechanism of price dynamics in financial systems is of great interest to scientists. The problem of whether and how volatilities affect price movements draws much attention. Although many efforts have been made, it remains challenging. Physicists usually apply the concepts and methods of statistical physics, such as temporal correlation functions, to study financial dynamics. However, the usual volatility-return correlation function, which is local in time, typically fluctuates around zero. Here we construct dynamic observables nonlocal in time to explore the volatility-return correlation, based on the empirical data of hundreds of individual stocks and 25 stock market indices in different countries. Strikingly, the correlation is discovered to be non-zero, with an amplitude of a few percent and a duration of over two weeks. This result provides compelling evidence that past volatilities nonlocal in time affect future returns. Further, we introduce an agent-based model with a novel mechanism, that is, the asymmetric trading preference in volatile and stable markets, to understand the microscopic origin of the volatility-return correlation nonlocal in time. PMID:25723154
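
    A deliberately simplified observable in the same spirit, correlating the mean past volatility over a window with the subsequent return; the paper's construction is more elaborate, so this only illustrates what "nonlocal in time" means.

      import numpy as np

      def nonlocal_corr(returns, window=10):
          """Correlation between the mean absolute return (volatility proxy)
          over the preceding `window` steps and the next one-step return."""
          vol = np.abs(returns)
          past = np.array([vol[i - window:i].mean()
                           for i in range(window, len(returns))])
          return np.corrcoef(past, returns[window:])[0, 1]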

  13. Inferring functional connectivity in MRI using Bayesian network structure learning with a modified PC algorithm

    PubMed Central

    Iyer, Swathi; Shafran, Izhak; Grayson, David; Gates, Kathleen; Nigg, Joel; Fair, Damien

    2013-01-01

    Resting state functional connectivity MRI (rs-fcMRI) is a popular technique used to gauge the functional relatedness between regions in the brain for typical and special populations. Most of the work to date determines this relationship by using Pearson's correlation on BOLD fMRI timeseries. However, it has been recognized that there are at least two key limitations to this method. First, it is not possible to resolve direct and indirect connections/influences. Second, the direction of information flow between the regions cannot be differentiated. In the current paper, we follow up on recent work by Smith et al. (2011) and apply a Bayesian approach called the PC algorithm to both simulated data and empirical data to determine whether these two factors can be discerned with group average, as opposed to single subject, functional connectivity data. When applied to simulated individual subjects, the algorithm performs well in determining indirect and direct connections but fails in determining directionality. However, when applied at the group level, the PC algorithm gives strong results for both indirect and direct connections and the direction of information flow. Applying the algorithm to empirical data, using a diffusion-weighted imaging (DWI) structural connectivity matrix as the baseline, the PC algorithm outperformed the direct correlations. We conclude that, under certain conditions, the PC algorithm leads to an improved estimate of brain network structure compared to the traditional connectivity analysis based on correlations. PMID:23501054

  14. How rational should bioethics be? The value of empirical approaches.

    PubMed

    Alvarez, A A

    2001-10-01

    Rational justification of claims with empirical content calls for empirical and not only normative philosophical investigation. Empirical approaches to bioethics are epistemically valuable, i.e., such methods may be necessary in providing and verifying basic knowledge about cultural values and norms. Our assumptions in moral reasoning can be verified or corrected using these methods. Moral arguments can be initiated or adjudicated by data drawn from empirical investigation. One may argue that individualistic informed consent, for example, is not compatible with the Asian communitarian orientation. But this normative claim uses an empirical assumption that may be contrary to the fact that some Asians do value and argue for informed consent. Is it necessary and factual to neatly characterize some cultures as individualistic and some as communitarian? Empirical investigation can provide a reasonable way to inform such generalizations. In a multi-cultural context, such as in the Philippines, there is a need to investigate the nature of the local ethos before making any appeal to authenticity. Otherwise we may succumb to the same ethical imperialism we are trying hard to resist. Normative claims that involve empirical premises cannot be reasonably verified or evaluated without utilizing empirical methods along with philosophical reflection. The integration of empirical methods into the standard normative approach to moral reasoning should be reasonably guided by the epistemic demands of claims arising from cross-cultural discourse in bioethics.

  15. Sea level side loads in high-area-ratio rocket engines

    NASA Technical Reports Server (NTRS)

    Nave, L. H.; Coffey, G. A.

    1973-01-01

    An empirical separation and side load model to obtain applied aerodynamic loads has been developed based on data obtained from full-scale J-2S (265K-pound-thrust engine with an area ratio of 40:1) engine and model testing. Experimental data include visual observations of the separation patterns that show the dynamic nature of the separation phenomenon. Comparisons between measured and applied side loads are made. Correlations relating the separation location to the applied side loads and the methods used to determine the separation location are given.

  16. Viscosity studies of water based magnetite nanofluids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anu, K.; Hemalatha, J.

    2016-05-23

    Magnetite nanofluids of various concentrations have been synthesized through the co-precipitation method. The structural and topographical studies made with the X-Ray Diffractometer and Atomic Force Microscope are presented in this paper. Density and viscosity studies for the ferrofluids of various concentrations have been made at room temperature. The experimental viscosities are compared with theoretical values obtained from the Einstein, Batchelor, and Wang models. An attempt to modify the Rosensweig model is made, and the modified Rosensweig equation is reported. In addition, a new empirical correlation is proposed for predicting the viscosity of the ferrofluid at various concentrations.
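
    The Einstein and Batchelor hard-sphere models mentioned above have standard closed forms (coefficients 2.5 and 6.2); a small sketch for comparing them against measured viscosities, with eta0 the base-fluid viscosity and phi the particle volume fraction:

      def einstein(eta0, phi):
          """Einstein dilute-suspension model: eta0 * (1 + 2.5 * phi)."""
          return eta0 * (1.0 + 2.5 * phi)

      def batchelor(eta0, phi):
          """Batchelor model, adding a second-order interaction term:
          eta0 * (1 + 2.5 * phi + 6.2 * phi**2)."""
          return eta0 * (1.0 + 2.5 * phi + 6.2 * phi ** 2)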

  17. Retaining Early Childhood Education Workers: A Review of the Empirical Literature

    ERIC Educational Resources Information Center

    Totenhagen, Casey J.; Hawkins, Stacy Ann; Casper, Deborah M.; Bosch, Leslie A.; Hawkey, Kyle R.; Borden, Lynne M.

    2016-01-01

    Low retention in the child care workforce is a persistent challenge that has been associated with negative outcomes for children, staff, and centers. This article reviews the empirical literature, identifying common correlates or predictors of retention for child care workers. Searches were conducted using several databases, and articles that…

  18. Empirical data and the variance-covariance matrix for the 1969 Smithsonian Standard Earth (2)

    NASA Technical Reports Server (NTRS)

    Gaposchkin, E. M.

    1972-01-01

    The empirical data used in the 1969 Smithsonian Standard Earth (2) are presented. The variance-covariance matrix, or the normal equations, used for correlation analysis, are considered. The format and contents of the matrix, available on magnetic tape, are described and a sample printout is given.

  19. Stable distribution and long-range correlation of Brent crude oil market

    NASA Astrophysics Data System (ADS)

    Yuan, Ying; Zhuang, Xin-tian; Jin, Xiu; Huang, Wei-qiang

    2014-11-01

    An empirical study of stable distributions and long-range correlations in the Brent crude oil market is presented. First, the empirical distribution of Brent crude oil returns can be fitted well by a stable distribution, which is significantly different from a normal distribution. Second, detrended fluctuation analysis of the Brent crude oil returns shows that there are long-range correlations in returns, implying that there are patterns or trends in returns that persist over time. Third, the same analysis shows that after the 2008 financial crisis, the Brent crude oil market became more persistent, implying that the crisis increased the frequency and strength of the interdependence and correlations between the financial time series. All of these findings may be used to improve current fractal theories.

  20. On the galaxy-halo connection in the EAGLE simulation

    NASA Astrophysics Data System (ADS)

    Desmond, Harry; Mao, Yao-Yuan; Wechsler, Risa H.; Crain, Robert A.; Schaye, Joop

    2017-10-01

    Empirical models of galaxy formation require assumptions about the correlations between galaxy and halo properties. These may be calibrated against observations or inferred from physical models such as hydrodynamical simulations. In this Letter, we use the EAGLE simulation to investigate the correlation of galaxy size with halo properties. We motivate this analysis by noting that the common assumption of angular momentum partition between baryons and dark matter in rotationally supported galaxies overpredicts both the spread in the stellar mass-size relation and the anticorrelation of size and velocity residuals, indicating a problem with the galaxy-halo connection it implies. We find the EAGLE galaxy population to perform significantly better on both statistics, and trace this success to the weakness of the correlations of galaxy size with halo mass, concentration and spin at fixed stellar mass. Using these correlations in empirical models will enable fine-grained aspects of galaxy scalings to be matched.

  1. An efficient genome-wide association test for mixed binary and continuous phenotypes with applications to substance abuse research.

    PubMed

    Buu, Anne; Williams, L Keoki; Yang, James J

    2018-03-01

    We propose a new genome-wide association test for mixed binary and continuous phenotypes that uses an efficient numerical method to estimate the empirical distribution of the Fisher's combination statistic under the null hypothesis. Our simulation study shows that the proposed method controls the type I error rate and also maintains its power at the level of the permutation method. More importantly, the computational efficiency of the proposed method is much higher than that of the permutation method. The simulation results also indicate that the power of the test increases when the genetic effect increases, the minor allele frequency increases, and the correlation between responses decreases. The statistical analysis of the database of the Study of Addiction: Genetics and Environment demonstrates that the proposed method, by combining multiple phenotypes, can increase the power of identifying markers that may not otherwise be chosen using marginal tests.
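
    A sketch of the core idea using plain Monte Carlo in place of the paper's efficient numerical method: draw correlated null z-scores, convert to p-values, and build the empirical null of the Fisher combination statistic. The correlation matrix of the per-phenotype tests is assumed given.

      import numpy as np
      from scipy import stats

      def fisher_stat(p):
          """Fisher's combination statistic T = -2 * sum(log p_i)."""
          return -2.0 * np.sum(np.log(p), axis=-1)

      def empirical_pvalue(p_obs, corr, n_sim=100_000, seed=0):
          """Monte Carlo null for T when the k per-phenotype tests are
          correlated (the chi-square(2k) reference no longer applies)."""
          rng = np.random.default_rng(seed)
          z = rng.multivariate_normal(np.zeros(corr.shape[0]), corr,
                                      size=n_sim)
          t_null = fisher_stat(2.0 * stats.norm.sf(np.abs(z)))
          return np.mean(t_null >= fisher_stat(np.asarray(p_obs)))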

  2. A consistent hierarchy of generalized kinetic equation approximations to the master equation applied to surface catalysis.

    PubMed

    Herschlag, Gregory J; Mitran, Sorin; Lin, Guang

    2015-06-21

    We develop a hierarchy of approximations to the master equation for systems that exhibit translational invariance and finite-range spatial correlation. Each approximation within the hierarchy is a set of ordinary differential equations that considers spatial correlations of varying lattice distance; the assumption is that the full system will have finite spatial correlations and thus the behavior of the models within the hierarchy will approach that of the full system. We provide evidence of this convergence in the context of one- and two-dimensional numerical examples. Lower levels within the hierarchy that consider shorter spatial correlations are shown to be up to three orders of magnitude faster than traditional kinetic Monte Carlo (KMC) methods for one-dimensional systems, while predicting similar system dynamics and steady states as KMC methods. We then test the hierarchy on a two-dimensional model for the oxidation of CO on RuO2(110), showing that low-order truncations of the hierarchy efficiently capture the essential system dynamics. By considering sequences of models in the hierarchy that account for longer spatial correlations, successive model predictions may be used to establish empirical approximations of error estimates. The hierarchy may be thought of as a class of generalized phenomenological kinetic models, since each element of the hierarchy approximates the master equation and the lowest level in the hierarchy is identical to a simple existing phenomenological kinetic model.

  3. An Empirical Correction Method for Improving off-Axes Response Prediction in Component Type Flight Mechanics Helicopter Models

    NASA Technical Reports Server (NTRS)

    Mansur, M. Hossein; Tischler, Mark B.

    1997-01-01

    Historically, component-type flight mechanics simulation models of helicopters have been unable to satisfactorily predict the off-axis responses: the roll response to pitch stick input and the pitch response to roll stick input. In the study presented here, simple first-order low-pass filtering of the elemental lift and drag forces was considered as a means of improving the correlation. The method was applied to a blade-element model of the AH-64 Apache, and responses of the modified model were compared with flight data in hover and forward flight. Results indicate that significant improvement in the off-axis responses can be achieved in hover. In forward flight, however, the best correlation in the longitudinal and lateral off-axis responses required different values of the filter time constant for each axis. A compromise value was selected and was shown to give good overall improvement in the off-axis responses. The paper describes both the method and the model used for its implementation, and presents results obtained in hover and forward flight.
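
    The correction amounts to passing each elemental force time history through a discrete first-order low-pass filter; a minimal sketch, with the time constant tau as the tuning parameter discussed in the abstract:

      import numpy as np

      def first_order_lowpass(x, dt, tau):
          """Discrete one-pole low-pass filter: the backward-Euler
          discretization of tau * dy/dt = x - y."""
          y = np.empty_like(np.asarray(x, dtype=float))
          y[0] = x[0]
          a = dt / (tau + dt)                    # smoothing coefficient
          for n in range(1, len(y)):
              y[n] = y[n - 1] + a * (x[n] - y[n - 1])
          return y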

  4. Rare k-mer DNA: Identification of sequence motifs and prediction of CpG island and promoter.

    PubMed

    Mohamed Hashim, Ezzeddin Kamil; Abdullah, Rosni

    2015-12-21

    Empirical analysis of k-mer DNA has proven to be an effective tool for finding unique patterns in DNA sequences, which can lead to the discovery of potential sequence motifs. In an extensive study of empirical k-mer DNA across hundreds of organisms, researchers found that unique multi-modal k-mer spectra occur only in the genomes of organisms from the tetrapod clade, which includes all mammals. The multi-modality is caused by the formation of the two lowest modes, whose k-mers are referred to as the rare k-mers. The suppression of the two lowest modes (the rare k-mers) can be attributed to the CG dinucleotides they contain. Apart from that, the rare k-mers are selectively distributed in certain genomic features: CpG Islands (CGI), promoters, 5' UTRs, and exons. We correlated the rare k-mers with hundreds of annotated features using several bioinformatic tools, performed further intrinsic rare k-mer analyses within the correlated features, and modeled the elucidated rare k-mer clustering feature into a classifier to predict the correlated CGI and promoter features. Our correlation results show that rare k-mers are highly associated with several annotated features: CGI, promoters, 5' UTRs, and open chromatin regions. Our intrinsic results show that rare k-mers have several unique topological, compositional, and clustering properties in CGI and promoter features. Finally, the performance of our RWC (rare-word clustering) method in predicting the CGI and promoter features ranked among the top three in eight of the CGI and promoter evaluations on eight benchmarked datasets. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
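
    A minimal k-mer spectrum counter illustrating where "rare k-mers" live (the low-multiplicity modes of the spectrum); function names are illustrative.

      from collections import Counter

      def kmer_spectrum(seq, k):
          """Count k-mers, then tabulate the spectrum: for each
          multiplicity m, the number of distinct k-mers seen m times.
          Rare k-mers populate the low-multiplicity modes."""
          counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
          return counts, Counter(counts.values())

      # Example: k-mers containing the CG dinucleotide tend to be rare in
      # tetrapod genomes, producing the low-count modes described above.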

  5. Data mining in forecasting PVT correlations of crude oil systems based on Type1 fuzzy logic inference systems

    NASA Astrophysics Data System (ADS)

    El-Sebakhy, Emad A.

    2009-09-01

    Pressure-volume-temperature properties are very important in reservoir engineering computations. There are many empirical approaches for predicting various PVT properties based on empirical correlations and statistical regression models. Over the last decade, researchers have utilized neural networks to develop more accurate PVT correlations. These achievements of neural networks opened the door for data mining techniques to play a major role in the oil and gas industry. Unfortunately, the developed neural network correlations are often limited, and global correlations are usually less accurate compared to local correlations. Recently, adaptive neuro-fuzzy inference systems have been proposed as a new intelligence framework for both prediction and classification based on a fuzzy clustering optimization criterion and ranking. This paper proposes neuro-fuzzy inference systems for estimating PVT properties of crude oil systems. This new framework is an efficient hybrid intelligence machine learning scheme for modeling the kind of uncertainty associated with vagueness and imprecision. We briefly describe the learning steps and the use of the Takagi-Sugeno-Kang model and the Gustafson-Kessel clustering algorithm with K detected clusters from the given database. The approach has featured in a wide range of medical, power control system, and business journals, often with promising results. A comparative study is carried out to compare the performance of this new framework with the most popular modeling techniques, such as neural networks, nonlinear regression, and empirical correlation algorithms. The results show that the performance of neuro-fuzzy systems is accurate and reliable and outperforms most of the existing forecasting techniques. Future work could apply neuro-fuzzy systems to clustering 3D seismic data, identification of lithofacies types, and other reservoir characterization tasks.

  6. Pleiotropy of cardiometabolic syndrome with obesity-related anthropometric traits determined using empirically derived kinships from the Busselton Health Study.

    PubMed

    Cadby, Gemma; Melton, Phillip E; McCarthy, Nina S; Almeida, Marcio; Williams-Blangero, Sarah; Curran, Joanne E; VandeBerg, John L; Hui, Jennie; Beilby, John; Musk, A W; James, Alan L; Hung, Joseph; Blangero, John; Moses, Eric K

    2018-01-01

    Over two billion adults are overweight or obese and therefore at an increased risk of cardiometabolic syndrome (CMS). Obesity-related anthropometric traits genetically correlated with CMS may provide insight into CMS aetiology. The aim of this study was to utilise an empirically derived genetic relatedness matrix to calculate heritabilities and genetic correlations between CMS and anthropometric traits to determine whether they share genetic risk factors (pleiotropy). We used genome-wide single nucleotide polymorphism (SNP) data on 4671 Busselton Health Study participants. Exploiting both known and unknown relatedness, empirical kinship probabilities were estimated using these SNP data. General linear mixed models implemented in SOLAR were used to estimate narrow-sense heritabilities (h2) and genetic correlations (rg) between 15 anthropometric and 9 CMS traits. Anthropometric traits were adjusted by body mass index (BMI) to determine whether the observed genetic correlation was independent of obesity. After adjustment for multiple testing, all CMS and anthropometric traits were significantly heritable (h2 range 0.18-0.57). We identified 50 significant genetic correlations (rg range: -0.37 to 0.75) between CMS and anthropometric traits. Five genetic correlations remained significant after adjustment for BMI [high density lipoprotein cholesterol (HDL-C) and waist-hip ratio; triglycerides and waist-hip ratio; triglycerides and waist-height ratio; non-HDL-C and waist-height ratio; insulin and iliac skinfold thickness]. This study provides evidence for the presence of potentially pleiotropic genes that affect both anthropometric and CMS traits, independently of obesity.

  7. Empirical Modeling of the Statistical Structure of Radio Signals from Satellites Moving over Mid- and High-Latitude Trajectories in the Southern Hemisphere

    NASA Astrophysics Data System (ADS)

    Fatkullin, M. N.; Solodovnikov, G. K.; Trubitsyn, V. M.

    2004-01-01

    The results of developing the empirical model of parameters of radio signals propagating in the inhomogeneous ionosphere at middle and high latitudes are presented. As the initial data we took the homogeneous data obtained from observations carried out at the Antarctic "Molodezhnaya" station by the method of continuous transmission probing of the ionosphere with signals of the satellite radionavigation "Transit" system at coherent frequencies of 150 and 400 MHz. The data relate to the summer season in the Southern hemisphere of the Earth in 1988-1989 during high (F > 160) solar activity. The behavior of the following statistical characteristics of radio signal parameters was analyzed: (a) the correlation interval of amplitude fluctuations at a frequency of 150 MHz (τkA); (b) the correlation interval of difference-phase fluctuations (τkϕ); and (c) the parameters characterizing the frequency spectra of amplitude (PA) and phase (Pϕ) fluctuations. A third-degree polynomial was used for modeling the propagation parameters. For all the propagation parameters indicated above, the coefficients of the third-degree polynomial were calculated as functions of local time and magnetic activity. The results of the calculations are tabulated.

  8. SMSynth: An Imagery Synthesis System for Soil Moisture Retrieval

    NASA Astrophysics Data System (ADS)

    Cao, Y.; Xu, L.; Peng, J.

    2018-04-01

    Soil moisture (SM) is an important variable in various research areas, such as weather and climate forecasting, agriculture, drought and flood monitoring and prediction, and human health. An ongoing challenge in estimating SM via synthetic aperture radar (SAR) is the development of SM retrieval methods; in particular, empirical models need large numbers of SM and soil roughness measurements as training samples, which are very difficult to acquire. It is therefore difficult to develop empirical models using real SAR imagery, and methods to synthesize SAR imagery are needed. To tackle this issue, an SM-based SAR imagery synthesis system named SMSynth is presented, which can simulate radar signals that are as realistic as possible with respect to real SAR imagery. In SMSynth, SAR backscatter coefficients for each soil type are simulated via the Oh model under a Bayesian framework, where spatial correlation is modeled by a Markov random field (MRF) model. The backscattering coefficients, simulated from the designed soil and sensor parameters, enter the Bayesian framework through the data likelihood; the soil and sensor parameters are set as realistically as possible for the circumstances on the ground and within the validity range of the Oh model. In this way, a complete and coherent Bayesian probabilistic framework is established. Experimental results show that SMSynth is capable of generating realistic SAR images that meet the need for large numbers of training samples for empirical models.

  9. An Alternate Method for Estimating Dynamic Height from XBT Profiles Using Empirical Vertical Modes

    NASA Technical Reports Server (NTRS)

    Lagerloef, Gary S. E.

    1994-01-01

    A technique is presented that applies modal decomposition to estimate dynamic height (0-450 db) from Expendable Bathythermograph (XBT) temperature profiles. Salinity-Temperature-Depth (STD) data are used to establish empirical relationships between vertically integrated temperature profiles and empirical dynamic height modes. These are then applied to XBT data to estimate dynamic height. A standard error of 0.028 dynamic meters is obtained for the waters of the Gulf of Alaska, an ocean region subject to substantial freshwater buoyancy forcing and with a T-S relationship that has considerable scatter. The residual error is a substantial improvement relative to the conventional T-S correlation technique when applied to this region. Systematic errors between estimated and true dynamic height were evaluated. The 20-year-long time series at Ocean Station P (50 deg N, 145 deg W) indicated weak variations in the error interannually, but not seasonally. There were no evident systematic alongshore variations in the error in the ocean boundary current regime near the perimeter of the Alaska gyre. The results prove satisfactory for the purpose of this work, which is to generate dynamic height from XBT data for co-analysis with satellite altimeter data, given that the altimeter height precision is likewise on the order of 2-3 cm. While the technique has not been applied to other ocean regions where the T-S relation has less scatter, it is suggested that it could provide some improvement over previously applied methods as well.

  10. Characteristics of propeller noise on an aircraft fuselage related to interior noise transmission

    NASA Technical Reports Server (NTRS)

    Mixson, J. S.; Barton, C. K.; Piersol, A. G.; Wilby, J. F.

    1979-01-01

    Exterior noise was measured on the fuselage of a twin-engine, light aircraft at four values of engine rpm in ground static tests and at forward speeds up to 36 m/s in taxi tests. Propeller noise levels, spectra, and correlations were determined using a horizontal array of seven flush-mounted microphones and a vertical array of four flush-mounted microphones in the propeller plane. The measured levels and spectra are compared with predictions based on empirical and analytical methods for static and taxi conditions. Trace wavelengths of the propeller noise field, obtained from point-to-point correlations, are compared with the aircraft sidewall structural dimensions, and some analytical results are presented that suggest the sensitivity of interior noise transmission to variations of the propeller noise characteristics.

  11. Empirical analysis on future-cash arbitrage risk with portfolio VaR

    NASA Astrophysics Data System (ADS)

    Chen, Rongda; Li, Cong; Wang, Weijin; Wang, Ze

    2014-03-01

    This paper constructs a positive arbitrage position by substituting a Chinese Exchange Traded Fund (ETF) portfolio for the spot index and estimating the arbitrage-free interval of futures with the latest trade data. An improved Delta-normal method, which replaces the simple linear correlation coefficient with a tail-dependence correlation coefficient, is then used to measure the VaR (Value-at-Risk) of the arbitrage position. Analysis of the VaR implies that the risk of future-cash arbitrage is less than that of investing entirely in either the futures or the spot market. According to the component VaR and the marginal VaR, the futures position should be increased and the spot position decreased appropriately to minimize the VaR, which minimizes risk subject to given revenues.
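
    Under the Delta-normal assumption, portfolio VaR follows directly from the position weights and the covariance matrix of returns, and marginal/component VaR decompose it by position. A minimal Python sketch with illustrative numbers (not the paper's data, and using a plain covariance rather than the tail-dependence-adjusted one):

      import numpy as np
      from scipy.stats import norm

      def delta_normal_var(w, cov, value, alpha=0.99):
          """Delta-normal VaR: loss quantile under jointly normal returns."""
          sigma_p = np.sqrt(w @ cov @ w)          # portfolio return std
          return value * norm.ppf(alpha) * sigma_p

      def component_var(w, cov, value, alpha=0.99):
          """Component VaR per position; the components sum to total VaR."""
          sigma_p = np.sqrt(w @ cov @ w)
          marginal = value * norm.ppf(alpha) * (cov @ w) / sigma_p
          return w * marginal

      # Toy arbitrage book: short index futures, long ETF spot portfolio
      w = np.array([-0.5, 0.5])
      cov = np.array([[4.0e-4, 3.6e-4],
                      [3.6e-4, 3.6e-4]])          # illustrative daily covariances
      print(delta_normal_var(w, cov, value=1e7))
      print(component_var(w, cov, value=1e7))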

  12. Competition in health insurance markets: limitations of current measures for policy analysis.

    PubMed

    Scanlon, Dennis P; Chernew, Michael; Swaminathan, Shailender; Lee, Woolton

    2006-12-01

    Health care reform proposals often rely on increased competition in health insurance markets to drive improved performance in health care costs, access, and quality. We examine a range of data issues related to the measures of health insurance competition used in empirical studies published from 1994-2004. The literature relies exclusively on market structure and penetration variables to measure competition. While these measures are correlated, the degree of correlation is modest, suggesting that choice of measure could influence empirical results. Moreover, certain measurement issues such as the lack of data on PPO enrollment, the treatment of small firms, and omitted market characteristics also could affect the conclusions in empirical studies. Importantly, other types of measures related to competition (e.g., the availability of information on price and outcomes, degree of entry barriers, etc.) are important from both a theoretical and policy perspective, but their impact on market outcomes has not been widely studied.

  13. Complete Ensemble Empirical Mode Decomposition: a Robust Signal Processing Tool to Identify Sequence Strata

    NASA Astrophysics Data System (ADS)

    Purba, H.; Musu, J. T.; Diria, S. A.; Permono, W.; Sadjati, O.; Sopandi, I.; Ruzi, F.

    2018-03-01

    Well logging data provide much geological information, and their trends resemble nonlinear or non-stationary signals. As well log data are recorded, external factors can interfere with or degrade the signal resolution. A sensitive signal analysis is required to improve the accuracy of logging interpretation, which is important for determining sequence stratigraphy. Complete Ensemble Empirical Mode Decomposition (CEEMD) is a nonlinear, non-stationary signal analysis method that decomposes a complex signal into a series of intrinsic mode functions (IMFs). The Gamma Ray and Spontaneous Potential well log parameters were decomposed into IMF-1 through IMF-10, and combining and correlating the modes supports physical interpretation. The method identifies stratigraphic and sequence cycles and provides an effective signal treatment for locating sequence interfaces. It was applied to the BRK-30 and BRK-13 well logging data. The results show that the combination of the IMF-5, IMF-6, and IMF-7 patterns represents short-term and middle-term sedimentation, while IMF-9 and IMF-10 represent long-term sedimentation, describing distal front and delta front facies, and inter-distributary mouth bar facies, respectively. Thus, CEEMD can clearly delineate the interfaces between different sedimentary layers and better identify the cycles of the stratigraphic base level.

  14. Calibrating Detailed Chemical Analysis of M dwarfs

    NASA Astrophysics Data System (ADS)

    Veyette, Mark; Muirhead, Philip Steven; Mann, Andrew; Brewer, John; Allard, France; Homeier, Derek

    2018-01-01

    The ability to perform detailed chemical analysis of Sun-like F-, G-, and K-type stars is a powerful tool with many applications, including studying the chemical evolution of the Galaxy, assessing membership in stellar kinematic groups, and constraining planet formation theories. Unfortunately, complications in modeling cooler stellar atmospheres have hindered similar analysis of M-dwarf stars. Large surveys of FGK abundances play an important role in developing methods to measure the compositions of M dwarfs by providing benchmark FGK stars that have widely separated M dwarf companions. These systems allow us to empirically calibrate metallicity-sensitive features in M dwarf spectra. However, current methods to measure metallicity in M dwarfs from moderate-resolution spectra are limited to overall metallicity and largely rely on astrophysical abundance correlations in stellar populations. In this talk, I will discuss how large, homogeneous catalogs of precise FGK abundances are crucial to advancing chemical analysis of M dwarfs beyond overall metallicity to direct measurements of individual elemental abundances. I will present a new method to analyze high-resolution NIR spectra of M dwarfs that employs an empirical calibration of synthetic M dwarf spectra to infer effective temperature, Fe abundance, and Ti abundance. This work is a step toward detailed chemical analysis of M dwarfs at a precision similar to that achieved for FGK stars.

  15. Experimental investigation of heat transfer coefficient of mini-channel PCHE (printed circuit heat exchanger)

    NASA Astrophysics Data System (ADS)

    Kwon, Dohoon; Jin, Lingxue; Jung, WooSeok; Jeong, Sangkwon

    2018-06-01

    The heat transfer coefficient of a mini-channel printed circuit heat exchanger (PCHE) with a counter-flow configuration is investigated. The PCHE used in the experiments has two layers (10 channels per layer) and a hydraulic diameter of 1.83 mm. Experiments are conducted under various cryogenic heat transfer conditions: single-phase, boiling, and condensation heat transfer. Heat transfer coefficients from each experiment are presented and compared with established correlations. For the single-phase experiments, a modified Dittus-Boelter correlation is proposed, which predicts the experimental results within 5% error over a Reynolds number range from 8500 to 17,000. In the boiling experiments, film boiling occurred predominantly due to the large temperature difference between the hot-side and cold-side fluids; an empirical correlation is proposed that predicts the experimental results within 20% error over a Reynolds number range from 2100 to 2500. For the condensation experiments, a modified Akers correlation is proposed, which predicts the experimental results within 10% error over a Reynolds number range from 3100 to 6200.
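
    Correlations of this family have the power-law form Nu = C Re^m Pr^n, from which the heat transfer coefficient follows as h = Nu k / D_h. A minimal Python sketch using the classic (unmodified) Dittus-Boelter coefficients, since the paper's fitted constants are not given in the abstract:

      def nusselt_dittus_boelter(re, pr, c=0.023, m=0.8, n=0.4):
          """Dittus-Boelter form Nu = C Re^m Pr^n; the paper fits modified
          coefficients to its own PCHE data (values here are the classic ones)."""
          return c * re**m * pr**n

      def heat_transfer_coeff(re, pr, k_fluid, d_h=1.83e-3):
          """h = Nu k / D_h for the 1.83 mm hydraulic-diameter channel."""
          return nusselt_dittus_boelter(re, pr) * k_fluid / d_h

      # Example: nitrogen-like gas near the reported single-phase Re range
      print(heat_transfer_coeff(re=1.0e4, pr=0.72, k_fluid=0.026))  # W/(m2 K)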

  16. Prediction of friction coefficients for gases

    NASA Technical Reports Server (NTRS)

    Taylor, M. F.

    1969-01-01

    Empirical relations are used for correlating laminar and turbulent friction coefficients for gases, with large variations in the physical properties, flowing through smooth tubes. These relations have been used to correlate friction coefficients for hydrogen, helium, nitrogen, carbon dioxide and air.

  17. Semi-empirical correlation for binary interaction parameters of the Peng-Robinson equation of state with the van der Waals mixing rules for the prediction of high-pressure vapor-liquid equilibrium.

    PubMed

    Fateen, Seif-Eddeen K; Khalil, Menna M; Elnabawy, Ahmed O

    2013-03-01

    The Peng-Robinson equation of state is widely used with the classical van der Waals mixing rules to predict vapor-liquid equilibria for systems containing hydrocarbons and related compounds. This model requires good values of the binary interaction parameter kij. In this work, we developed a semi-empirical correlation for kij partly based on the Huron-Vidal mixing rules. We obtained values for the adjustable parameters of the developed formula for over 60 binary systems and over 10 categories of components. The predictions of the new equation system were slightly better than the constant-kij model in most cases, except for 10 systems whose predictions were considerably improved with the new correlation.
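
    For context, the classical van der Waals one-fluid rules combine pure-component Peng-Robinson parameters as a_mix = sum_i sum_j x_i x_j sqrt(a_i a_j)(1 - kij) and b_mix = sum_i x_i b_i. A minimal Python sketch; the CO2/n-butane kij value is a placeholder, not one produced by the paper's correlation:

      import numpy as np

      R = 8.314  # J/(mol K)

      def pr_pure_params(tc, pc, omega, t):
          """Peng-Robinson pure-component a(T) and b."""
          kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
          alpha = (1.0 + kappa * (1.0 - np.sqrt(t / tc)))**2
          a = 0.45724 * R**2 * tc**2 / pc * alpha
          b = 0.07780 * R * tc / pc
          return a, b

      def vdw_mixing(x, a, b, kij):
          """Van der Waals one-fluid mixing rules with binary interaction
          parameters kij (symmetric matrix, zeros on the diagonal)."""
          x, a, b = map(np.asarray, (x, a, b))
          a_ij = np.sqrt(np.outer(a, a)) * (1.0 - kij)
          return x @ a_ij @ x, x @ b

      # Illustrative CO2(1)/n-butane(2) mixture at 350 K; kij is a placeholder
      a1, b1 = pr_pure_params(304.13, 7.377e6, 0.224, t=350.0)   # CO2
      a2, b2 = pr_pure_params(425.12, 3.796e6, 0.200, t=350.0)   # n-butane
      kij = np.array([[0.0, 0.13], [0.13, 0.0]])
      print(vdw_mixing([0.4, 0.6], [a1, a2], [b1, b2], kij))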

  18. Solar radiation over Egypt: Comparison of predicted and measured meteorological data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamel, M.A.; Shalaby, S.A.; Mostafa, S.S.

    1993-06-01

    Measurements of global solar irradiance on a horizontal surface at five meteorological stations in Egypt for the three years 1987, 1988, and 1989 are compared with their corresponding values computed by two independent methods. The first method is based on the Angstrom formula, which correlates relative solar irradiance H/H0 with the corresponding relative duration of bright sunshine n/N. Regional regression coefficients are obtained and used for prediction of global solar irradiance; good agreement with measurements is obtained. The second method employs an empirical relation in which sunshine duration and the noon altitude of the sun serve as inputs, together with an appropriate choice of zone parameters. This also gives good agreement with the measurements. Comparison shows that the first method gives the better fit to the experimental data.
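
    The Angstrom formula is a linear empirical correlation, H/H0 = a + b(n/N), so the regional coefficients reduce to a least-squares straight-line fit. A minimal Python sketch with illustrative monthly values (not the Egyptian station data):

      import numpy as np

      # Monthly relative sunshine n/N and relative irradiance H/H0 (illustrative)
      n_over_N = np.array([0.55, 0.62, 0.70, 0.78, 0.85, 0.90])
      H_over_H0 = np.array([0.48, 0.52, 0.57, 0.62, 0.66, 0.69])

      # Fit the Angstrom formula H/H0 = a + b (n/N) by least squares
      b, a = np.polyfit(n_over_N, H_over_H0, 1)   # slope first, intercept second
      print(f"a = {a:.3f}, b = {b:.3f}")
      print("predicted H/H0 at n/N = 0.8:", a + b * 0.8)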

  19. Multiobjective Optimization of Rocket Engine Pumps Using Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    Oyama, Akira; Liou, Meng-Sing

    2001-01-01

    A design optimization method for turbopumps of cryogenic rocket engines has been developed. Multiobjective Evolutionary Algorithm (MOEA) is used for multiobjective pump design optimizations. Performances of design candidates are evaluated by using the meanline pump flow modeling method based on the Euler turbine equation coupled with empirical correlations for rotor efficiency. To demonstrate the feasibility of the present approach, a single stage centrifugal pump design and multistage pump design optimizations are presented. In both cases, the present method obtains very reasonable Pareto-optimal solutions that include some designs outperforming the original design in total head while reducing input power by one percent. Detailed observation of the design results also reveals some important design criteria for turbopumps in cryogenic rocket engines. These results demonstrate the feasibility of the EA-based design optimization method in this field.
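
    The meanline evaluation rests on the Euler turbine equation, which gives the ideal head rise from the velocity triangles at the rotor inlet and outlet; the empirical efficiency correlations then discount it. A minimal Python sketch of the Euler head with illustrative velocities (not values from the paper's pump designs):

      def euler_head(u2, cu2, u1, cu1, g=9.81):
          """Euler turbine equation: ideal pump head from blade speeds (u)
          and tangential flow velocities (cu) at outlet (2) and inlet (1)."""
          return (u2 * cu2 - u1 * cu1) / g

      # Illustrative single-stage centrifugal pump numbers, in m/s
      print(euler_head(u2=300.0, cu2=150.0, u1=80.0, cu1=0.0), "m of head")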

  20. Recharge signal identification based on groundwater level observations.

    PubMed

    Yu, Hwa-Lung; Chu, Hone-Jay

    2012-10-01

    This study applied the method of rotated empirical orthogonal functions to directly decompose space-time groundwater level variations and determine potential recharge zones by investigating the correlation between the identified groundwater signals and observed local rainfall records. The approach is used to analyze the spatiotemporal process of piezometric heads estimated by the Bayesian maximum entropy method from monthly observations of 45 wells during 1999-2007 located in the Pingtung Plain of Taiwan. From the results, the primary potential recharge area is located in the proximal fan areas, where the recharge process accounts for 88% of the spatiotemporal variation of piezometric heads in the study area. The decomposition of groundwater levels associated with rainfall can provide information on the recharge process, since rainfall is an important contributor to groundwater recharge in semi-arid regions. Correlation analysis shows that the identified recharge closely follows the temporal variation of the local precipitation with a delay of 1-2 months in the study area.
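
    The study uses rotated EOFs; the minimal Python sketch below shows the unrotated decomposition via SVD of the space-time anomaly matrix, with synthetic data standing in for the 45-well record:

      import numpy as np

      def eof_decompose(field, n_modes=3):
          """Empirical orthogonal functions of a space-time matrix
          (rows = months, columns = wells), via SVD of the anomalies."""
          anomalies = field - field.mean(axis=0)
          u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
          pcs = u[:, :n_modes] * s[:n_modes]       # temporal amplitudes
          eofs = vt[:n_modes]                      # spatial patterns
          explained = s**2 / np.sum(s**2)
          return pcs, eofs, explained[:n_modes]

      rng = np.random.default_rng(0)
      heads = rng.standard_normal((108, 45)).cumsum(axis=0)   # synthetic heads
      pcs, eofs, frac = eof_decompose(heads)
      print("variance fraction of leading modes:", np.round(frac, 2))
      # The recharge signal would then be the PC whose lagged correlation
      # with local rainfall is strongest (the study found a 1-2 month delay).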

  1. Modeling long correlation times using additive binary Markov chains: Applications to wind generation time series.

    PubMed

    Weber, Juliane; Zachow, Christopher; Witthaut, Dirk

    2018-03-01

    Wind power generation exhibits a strong temporal variability, which is crucial for system integration in highly renewable power systems. Different methods exist to simulate wind power generation but they often cannot represent the crucial temporal fluctuations properly. We apply the concept of additive binary Markov chains to model a wind generation time series consisting of two states: periods of high and low wind generation. The only input parameter for this model is the empirical autocorrelation function. The two-state model is readily extended to stochastically reproduce the actual generation per period. To evaluate the additive binary Markov chain method, we introduce a coarse model of the electric power system to derive backup and storage needs. We find that the temporal correlations of wind power generation, the backup need as a function of the storage capacity, and the resting time distribution of high and low wind events for different shares of wind generation can be reconstructed.
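
    In the simplest (memory-one) limit, a binary chain for high/low generation periods is specified by two persistence probabilities; the paper's additive chains generalize this to long memories fitted from the empirical autocorrelation function. A minimal Python sketch of the memory-one case with placeholder probabilities:

      import numpy as np

      def simulate_two_state(p_stay_high, p_stay_low, n_steps, rng):
          """Memory-one binary chain: 1 = high wind generation, 0 = low."""
          s = np.empty(n_steps, dtype=int)
          s[0] = 1
          for t in range(1, n_steps):
              p_stay = p_stay_high if s[t - 1] == 1 else p_stay_low
              s[t] = s[t - 1] if rng.random() < p_stay else 1 - s[t - 1]
          return s

      rng = np.random.default_rng(42)
      states = simulate_two_state(0.95, 0.90, 10_000, rng)
      # Persistence probabilities near 1 lengthen high/low wind episodes
      print("fraction of high-wind periods:", states.mean())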

  2. Adaptive Filtration of Physiological Artifacts in EEG Signals in Humans Using Empirical Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Grubov, V. V.; Runnova, A. E.; Hramov, A. E.

    2018-05-01

    A new method for adaptive filtration of experimental EEG signals in humans and for the removal of different physiological artifacts is proposed. The algorithm includes empirical mode decomposition of the EEG, determination of the number of empirical modes to consider, analysis of the empirical modes and identification of those that contain artifacts, removal of these modes, and reconstruction of the EEG signal. The method was tested on experimental human EEG signals and demonstrated high efficiency in the removal of different types of physiological EEG artifacts.
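
    A minimal Python sketch of the decompose-drop-reconstruct pipeline, assuming the third-party PyEMD package (pip install EMD-signal); the artifact-mode indices are supplied by hand here, whereas the paper identifies them automatically:

      import numpy as np
      from PyEMD import EMD   # assumption: PyEMD package is available

      def remove_artifact_modes(eeg, artifact_idx):
          """Decompose one EEG channel into IMFs, drop the modes flagged
          as artifacts, and reconstruct the cleaned signal."""
          imfs = EMD().emd(eeg)                    # rows: IMF-1 ... IMF-n
          keep = [i for i in range(imfs.shape[0]) if i not in artifact_idx]
          return imfs[keep].sum(axis=0)

      t = np.linspace(0.0, 10.0, 2560)             # 10 s epoch at 256 Hz
      eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 1 * t)
      clean = remove_artifact_modes(eeg, artifact_idx={0})  # drop fastest mode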

  3. A Comparison of Full and Empirical Bayes Techniques for Inferring Sea Level Changes from Tide Gauge Records

    NASA Astrophysics Data System (ADS)

    Piecuch, C. G.; Huybers, P. J.; Tingley, M.

    2016-12-01

    Sea level observations from coastal tide gauges are some of the longest instrumental records of the ocean. However, these data can be noisy, biased, and gappy, featuring missing values, and reflecting land motion and local effects. Coping with these issues in a formal manner is a challenging task. Some studies use Bayesian approaches to estimate sea level from tide gauge records, making inference probabilistically. Such methods are typically empirically Bayesian in nature: model parameters are treated as known and assigned point values. But, in reality, parameters are not perfectly known. Empirical Bayes methods thus neglect a potentially important source of uncertainty, and so may overestimate the precision (i.e., underestimate the uncertainty) of sea level estimates. We consider whether empirical Bayes methods underestimate uncertainty in sea level from tide gauge data, comparing to a full Bayes method that treats parameters as unknowns to be solved for along with the sea level field. We develop a hierarchical algorithm that we apply to tide gauge data on the North American northeast coast over 1893-2015. The algorithm is run in full Bayes mode, solving for the sea level process and parameters, and in empirical mode, solving only for the process using fixed parameter values. Error bars on sea level from the empirical method are smaller than from the full Bayes method, and the relative discrepancies increase with time; the 95% credible interval on sea level values from the empirical Bayes method in 1910 and 2010 is 23% and 56% narrower, respectively, than from the full Bayes approach. To evaluate the representativeness of the credible intervals, empirical Bayes and full Bayes methods are applied to corrupted data of a known surrogate field. Using rank histograms to evaluate the solutions, we find that the full Bayes method produces generally reliable error bars, whereas the empirical Bayes method gives too-narrow error bars, such that the 90% credible interval only encompasses 70% of true process values. Results demonstrate that parameter uncertainty is an important source of process uncertainty, and advocate for the fully Bayesian treatment of tide gauge records in ocean circulation and climate studies.

  4. Advancing the detection of steady-state visual evoked potentials in brain-computer interfaces.

    PubMed

    Abu-Alqumsan, Mohammad; Peer, Angelika

    2016-06-01

    Spatial filtering has proved to be a powerful pre-processing step in the detection of steady-state visual evoked potentials (SSVEPs) and has boosted typical detection rates both in offline analysis and in online SSVEP-based brain-computer interface applications. State-of-the-art detection methods and the spatial filters used thereby share many common foundations, as they all build upon the second-order statistics of the acquired electroencephalographic (EEG) data, that is, its spatial autocovariance and its cross-covariance with what is assumed to be a pure SSVEP response. The present study aims at highlighting the similarities and differences between these methods. We consider the canonical correlation analysis (CCA) method as a basis for the theoretical and empirical (with real EEG data) analysis of the state-of-the-art detection methods and the spatial filters used thereby. We build upon the findings of this analysis and prior research and propose a new detection method (CVARS) that combines the power of the canonical variates and that of autoregressive spectral analysis in estimating the signal and noise power levels. We found that the multivariate synchronization index method and the maximum contrast combination method are variations of the CCA method. All three methods were found to provide relatively unreliable detections in low signal-to-noise ratio (SNR) regimes. CVARS and the minimum energy combination methods were found to provide better estimates for different SNR levels. Our theoretical and empirical results demonstrate that the proposed CVARS method outperforms other state-of-the-art detection methods when used in an unsupervised fashion. Furthermore, when used in a supervised fashion, a linear classifier learned from a short training session is able to estimate the hidden user intention, including the idle state (when the user is not attending to any stimulus), rapidly, accurately, and reliably.
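
    The CCA baseline correlates the multichannel EEG with sine/cosine references at each candidate stimulus frequency and picks the frequency with the largest canonical correlation. A minimal Python sketch using scikit-learn's CCA; the sampling rate, channel count, and frequencies are illustrative:

      import numpy as np
      from sklearn.cross_decomposition import CCA

      def cca_ssvep_score(eeg, freq, fs, n_harmonics=2):
          """Canonical correlation between EEG (samples x channels) and
          sine/cosine references at the stimulus frequency and harmonics."""
          t = np.arange(eeg.shape[0]) / fs
          refs = np.column_stack([f(2 * np.pi * (h + 1) * freq * t)
                                  for h in range(n_harmonics)
                                  for f in (np.sin, np.cos)])
          u, v = CCA(n_components=1).fit_transform(eeg, refs)
          return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

      fs = 250.0
      t = np.arange(0, 2, 1 / fs)
      rng = np.random.default_rng(5)
      eeg = np.column_stack([np.sin(2 * np.pi * 12.0 * t)
                             + rng.standard_normal(t.size)
                             for _ in range(8)])   # 8 channels, 12 Hz SSVEP
      scores = {f: cca_ssvep_score(eeg, f, fs) for f in (10.0, 12.0, 15.0)}
      print(max(scores, key=scores.get))           # expected: 12.0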

  5. Spurious cross-frequency amplitude-amplitude coupling in nonstationary, nonlinear signals

    NASA Astrophysics Data System (ADS)

    Yeh, Chien-Hung; Lo, Men-Tzung; Hu, Kun

    2016-07-01

    Recent studies of brain activities show that cross-frequency coupling (CFC) plays an important role in memory and learning. Many measures have been proposed to investigate the CFC phenomenon, including the correlation between the amplitude envelopes of two brain waves at different frequencies - cross-frequency amplitude-amplitude coupling (AAC). In this short communication, we describe how nonstationary, nonlinear oscillatory signals may produce spurious cross-frequency AAC. Utilizing the empirical mode decomposition, we also propose a new method for assessment of AAC that can potentially reduce the effects of nonlinearity and nonstationarity and, thus, help to avoid the detection of artificial AACs. We compare the performances of this new method and the traditional Fourier-based AAC method. We also discuss the strategies to identify potential spurious AACs.

  6. An Empirical Study of the Influence of the Concept of "Job-Hunting" on Graduates' Employment

    ERIC Educational Resources Information Center

    Chen, Chengwen; Hu, Guiying

    2008-01-01

    The concept of job-hunting is an important factor affecting university students' employment. This empirical study shows that while hunting for a job, graduates witness negative correlation between their expectation of the nature of work and the demand for occupational types and the accessibility to a post and monthly income; positive correlation…

  7. Solar-terrestrial predictions proceedings. Volume 4: Prediction of terrestrial effects of solar activity

    NASA Technical Reports Server (NTRS)

    Donnelly, R. E. (Editor)

    1980-01-01

    Papers on the prediction of ionospheric and radio propagation conditions based primarily on empirical or statistical relations are discussed. Predictions of sporadic E, spread F, and scintillations generally involve statistical or empirical methods. The correlation between solar activity and terrestrial seismic activity, and the possible relation between solar activity and biological effects, are also discussed.

  8. Rocksalt or cesium chloride: Investigating the relative stability of the cesium halide structures with random phase approximation based methods

    NASA Astrophysics Data System (ADS)

    Nepal, Niraj K.; Ruzsinszky, Adrienn; Bates, Jefferson E.

    2018-03-01

    The ground state structural and energetic properties for rocksalt and cesium chloride phases of the cesium halides were explored using the random phase approximation (RPA) and beyond-RPA methods to benchmark the nonempirical SCAN meta-GGA and its empirical dispersion corrections. The importance of nonadditivity and higher-order multipole moments of dispersion in these systems is discussed. RPA generally predicts the equilibrium volume for these halides within 2.4% of the experimental value, while beyond-RPA methods utilizing the renormalized adiabatic LDA (rALDA) exchange-correlation kernel are typically within 1.8%. The zero-point vibrational energy is small and shows that the stability of these halides is purely due to electronic correlation effects. The rAPBE kernel as a correction to RPA overestimates the equilibrium volume and could not predict the correct phase ordering in the case of cesium chloride, while the rALDA kernel consistently predicted results in agreement with the experiment for all of the halides. However, due to its reasonable accuracy with lower computational cost, SCAN+rVV10 proved to be a good alternative to the RPA-like methods for describing the properties of these ionic solids.

  9. The External Performance Appraisal of China Energy Regulation: An Empirical Study Using a TOPSIS Method Based on Entropy Weight and Mahalanobis Distance.

    PubMed

    Wang, Zheng-Xin; Li, Dan-Dan; Zheng, Hong-Hao

    2018-01-30

    In China's industrialization process, the effective regulation of energy and environment can promote the positive externality of energy consumption while reducing the negative externality, which is an important means for realizing the sustainable development of an economic society. The study puts forward an improved technique for order preference by similarity to an ideal solution based on entropy weight and Mahalanobis distance (briefly referred to as E-M-TOPSIS). The performance of the approach was verified to be satisfactory. Using the traditional and improved TOPSIS methods separately, the study carried out empirical appraisals of the external performance of China's energy regulation during 1999-2015. The results show that correlation between the performance indexes causes a significant difference between the appraisal results of E-M-TOPSIS and traditional TOPSIS. E-M-TOPSIS takes the correlation between indexes into account and generally softens the closeness degree compared with traditional TOPSIS. Moreover, it makes the relative closeness degree fluctuate within a small amplitude. The results conform to the practical condition of China's energy regulation, and therefore E-M-TOPSIS is well suited for the external performance appraisal of energy regulation. Additionally, the external economic performance and the social responsibility performance (including environmental and energy safety performances) based on E-M-TOPSIS exhibit significantly different fluctuation trends. The external economic performance fluctuates dramatically with a larger amplitude, while the social responsibility performance exhibits a relatively stable interval fluctuation. This indicates that, compared with the social responsibility performance, the fluctuation of the external economic performance is more sensitive to energy regulation.
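
    A TOPSIS variant of this kind has two ingredients: indicator weights from Shannon entropy and alternative-to-ideal distances measured in a Mahalanobis metric, so that correlated indicators are not double counted. A minimal Python sketch under stated assumptions (benefit-type indicators, vector normalization; the paper's exact normalization and weighting details may differ):

      import numpy as np

      def entropy_weights(X):
          """Entropy weighting: rows = years, columns = positive indicators."""
          P = X / X.sum(axis=0)
          k = 1.0 / np.log(X.shape[0])
          e = -k * np.sum(P * np.log(P + 1e-12), axis=0)   # entropy per indicator
          d = 1.0 - e                                      # diversification degree
          return d / d.sum()

      def em_topsis(X):
          """TOPSIS closeness with entropy weights and Mahalanobis distance."""
          w = entropy_weights(X)
          Z = X / np.linalg.norm(X, axis=0)                # vector normalization
          M = np.diag(w) @ np.linalg.pinv(np.cov(Z, rowvar=False)) @ np.diag(w)
          best, worst = Z.max(axis=0), Z.min(axis=0)
          dist = lambda z, ref: np.sqrt((z - ref) @ M @ (z - ref))
          d_best = np.array([dist(z, best) for z in Z])
          d_worst = np.array([dist(z, worst) for z in Z])
          return d_worst / (d_best + d_worst)              # relative closeness

      rng = np.random.default_rng(1)
      X = rng.uniform(1.0, 10.0, size=(17, 5))             # 17 years x 5 indicators
      print(np.round(em_topsis(X), 3))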

  10. Minimum spanning tree filtering of correlations for varying time scales and size of fluctuations

    NASA Astrophysics Data System (ADS)

    Kwapień, Jarosław; Oświęcimka, Paweł; Forczek, Marcin; Drożdż, Stanisław

    2017-05-01

    Based on a recently proposed q-dependent detrended cross-correlation coefficient, ρq [J. Kwapień, P. Oświęcimka, and S. Drożdż, Phys. Rev. E 92, 052815 (2015), 10.1103/PhysRevE.92.052815], we generalize the concept of the minimum spanning tree (MST) by introducing a family of q-dependent minimum spanning trees (qMSTs) that are selective to cross-correlations between different fluctuation amplitudes and different time scales of multivariate data. They inherit this ability directly from the coefficients ρq, which are processed here to construct a distance matrix serving as the input to the MST-constructing Kruskal's algorithm. The conventional MST with detrending corresponds in this context to q = 2. In order to illustrate their performance, we apply the qMSTs to sample empirical data from the American stock market and discuss the results. We show that the qMST graphs can complement ρq in disentangling "hidden" correlations that cannot be observed in the MST graphs based on ρDCCA, and therefore they can be useful in many areas where multivariate cross-correlations are of interest. As an example, we apply this method to empirical data from the stock market and show that by constructing the qMSTs for a spectrum of q values we obtain more information about the correlation structure of the data than by using q = 2 only. More specifically, we show that two sets of signals that differ from each other statistically can give comparable trees for q = 2, while only by using the trees for q ≠ 2 do we become able to distinguish between these sets. We also show that a family of qMSTs for a range of q expresses the diversity of correlations in a manner resembling multifractal analysis, where one computes a spectrum of the generalized fractal dimensions, the generalized Hurst exponents, or the multifractal singularity spectra: the more diverse the correlations are, the more variable the tree topology is for different q's. As regards the correlation structure of the stock market, our analysis shows that the stocks belonging to the same or similar industrial sectors are correlated via fluctuations of moderate amplitudes, while the largest fluctuations often happen to synchronize in stocks that do not necessarily belong to the same industry.

  11. Non-Normality and Testing that a Correlation Equals Zero

    ERIC Educational Resources Information Center

    Levy, Kenneth J.

    1977-01-01

    The importance of the assumption of normality for testing that a bivariate normal correlation equals zero is examined. Both empirical and theoretical evidence suggest that such tests are robust with respect to violation of the normality assumption. (Author/JKS)
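
    For reference, the test examined here is the usual t statistic for H0: rho = 0, t = r sqrt(n - 2) / sqrt(1 - r^2), with n - 2 degrees of freedom. A minimal Python sketch on simulated (here normal) data:

      import numpy as np
      from scipy import stats

      def corr_test(x, y):
          """Test H0: rho = 0 via t = r sqrt(n-2)/sqrt(1-r^2), df = n-2."""
          r, _ = stats.pearsonr(x, y)
          n = len(x)
          t = r * np.sqrt(n - 2) / np.sqrt(1 - r**2)
          p = 2 * stats.t.sf(abs(t), df=n - 2)
          return r, t, p

      rng = np.random.default_rng(7)
      x = rng.standard_normal(50)
      y = 0.3 * x + rng.standard_normal(50)    # weakly correlated pair
      print(corr_test(x, y))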

  12. Cross-sample entropy of foreign exchange time series

    NASA Astrophysics Data System (ADS)

    Liu, Li-Zhi; Qian, Xi-Yuan; Lu, Heng-Yao

    2010-11-01

    The correlation of foreign exchange rates in currency markets is investigated based on the empirical data of DKK/USD, NOK/USD, CAD/USD, JPY/USD, KRW/USD, SGD/USD, THB/USD and TWD/USD for the period from 1995 to 2002. The cross-SampEn (cross-sample entropy) method is used to compare the returns of every two exchange rate time series to assess their degree of asynchrony. The method for calculating the confidence interval of SampEn is extended and applied to cross-SampEn. The cross-SampEn and its confidence interval for every two of the exchange rate time series in the periods 1995-1998 (before the Asian currency crisis) and 1999-2002 (after the Asian currency crisis) are calculated. The results show that the cross-SampEn of every two of these exchange rates becomes higher after the Asian currency crisis, indicating a higher asynchrony between the exchange rates. Especially for Singapore, Thailand and Taiwan, the cross-SampEn values after the Asian currency crisis are significantly higher than those before the crisis. Comparison with the correlation coefficient shows that cross-SampEn is superior in describing the correlation between time series.

  13. Spatio-temporal correlations in models of collective motion ruled by different dynamical laws.

    PubMed

    Cavagna, Andrea; Conti, Daniele; Giardina, Irene; Grigera, Tomas S; Melillo, Stefania; Viale, Massimiliano

    2016-11-15

    Information transfer is an essential factor in determining the robustness of biological systems with distributed control. The most direct way to study the mechanisms ruling information transfer is to experimentally observe the propagation across the system of a signal triggered by some perturbation. However, this method may be inefficient for experiments in the field, as the possibilities to perturb the system are limited and empirical observations must rely on natural events. An alternative approach is to use spatio-temporal correlations to probe the information transfer mechanism directly from the spontaneous fluctuations of the system, without the need to have an actual propagating signal on record. Here we test this method on models of collective behaviour in their deeply ordered phase by using ground truth data provided by numerical simulations in three dimensions. We compare two models characterized by very different dynamical equations and information transfer mechanisms: the classic Vicsek model, describing an overdamped noninertial dynamics and the inertial spin model, characterized by an underdamped inertial dynamics. By using dynamic finite-size scaling, we show that spatio-temporal correlations are able to distinguish unambiguously the diffusive information transfer mechanism of the Vicsek model from the linear mechanism of the inertial spin model.

  14. Measuring and modeling correlations in multiplex networks.

    PubMed

    Nicosia, Vincenzo; Latora, Vito

    2015-09-01

    The interactions among the elementary components of many complex systems can be qualitatively different. Such systems are therefore naturally described in terms of multiplex or multilayer networks, i.e., networks where each layer stands for a different type of interaction between the same set of nodes. There is today a growing interest in understanding when and why a description in terms of a multiplex network is necessary and more informative than a single-layer projection. Here we contribute to this debate by presenting a comprehensive study of correlations in multiplex networks. Correlations in node properties, especially degree-degree correlations, have been thoroughly studied in single-layer networks. Here we extend this idea to investigate and characterize correlations between the different layers of a multiplex network. Such correlations are intrinsically multiplex, and we first study them empirically by constructing and analyzing several multiplex networks from the real world. In particular, we introduce various measures to characterize correlations in the activity of the nodes and in their degree at the different layers and between activities and degrees. We show that real-world networks exhibit indeed nontrivial multiplex correlations. For instance, we find cases where two layers of the same multiplex network are positively correlated in terms of node degrees, while other two layers are negatively correlated. We then focus on constructing synthetic multiplex networks, proposing a series of models to reproduce the correlations observed empirically and/or to assess their relevance.

  15. This Ad is for You: Targeting and the Effect of Alcohol Advertising on Youth Drinking.

    PubMed

    Molloy, Eamon

    2016-02-01

    Endogenous targeting of alcohol advertisements presents a challenge for empirically identifying a causal effect of advertising on drinking. Drinkers prefer a particular media; firms recognize this and target alcohol advertising at these media. This paper overcomes this challenge by utilizing novel data with detailed individual measures of media viewing and alcohol consumption and three separate empirical techniques, which represent significant improvements over previous methods. First, controls for the average audience characteristics of the media an individual views account for attributes of magazines and television programs alcohol firms may consider when deciding where to target advertising. A second specification directly controls for each television program and magazine a person views. The third method exploits variation in advertising exposure due to a 2003 change in an industry-wide rule that governs where firms may advertise. Although the unconditional correlation between advertising and drinking by youth (ages 18-24) is strong, models that include simple controls for targeting imply, at most, a modest advertising effect. Although the coefficients are estimated less precisely, estimates with models including more rigorous controls for targeting indicate no significant effect of advertising on youth drinking. Copyright © 2015 John Wiley & Sons, Ltd.

  16. Are relationships between pollen-ovule ratio and pollen and seed size explained by sex allocation?

    PubMed

    Burd, Martin

    2011-10-01

    Positive correlations between pollen-ovule ratio and seed size, and negative correlations between pollen-ovule ratio and pollen grain size have been noted frequently in a wide variety of angiosperm taxa. These relationships are commonly explained as a consequence of sex allocation on the basis of a simple model proposed by Charnov. Indeed, the theoretical expectation from the model has been the basis for interest in the empirical pattern. However, the predicted relationship is a necessary consequence of the mathematics of the model, which therefore has little explanatory power, even though its predictions are consistent with empirical results. The evolution of pollen-ovule ratios is likely to depend on selective factors affecting mating system, pollen presentation and dispensing, patterns of pollen receipt, pollen tube competition, female mate choice through embryo abortion, as well as genetic covariances among pollen, ovule, and seed size and other reproductive traits. To the extent the empirical correlations involving pollen-ovule ratios are interesting, they will need explanation in terms of a suite of selective factors. They are not explained simply by sex allocation trade-offs. © 2011 The Author(s). Evolution© 2011 The Society for the Study of Evolution.

  17. Comparison of the Various Methodologies Used in Studying Runoff and Sediment Load in the Yellow River Basin

    NASA Astrophysics Data System (ADS)

    Xu, M., III; Liu, X.

    2017-12-01

    In the past 60 years, both the runoff and the sediment load in the Yellow River Basin have shown significant decreasing trends owing to the influences of human activities and climate change. Quantifying the impact of each factor (e.g. precipitation, sediment trapping dams, pasture, terraces, etc.) on the runoff and sediment load is among the key issues for guiding the implementation of water and soil conservation measures and for predicting future trends. Hundreds of methods have been developed for studying the runoff and sediment load in the Yellow River Basin. Generally, these methods can be classified into empirical methods and physically based models. The empirical methods, including the hydrological method, the soil and water conservation method, etc., are widely used in Yellow River management engineering. These methods generally apply statistical analyses, such as regression analysis, to build empirical relationships between the main characteristic variables in a river basin. The elasticity method extensively used in hydrological research can also be classified as an empirical method, as it is mathematically equivalent to the hydrological method. Physically based models mainly include conceptual models and distributed models. The conceptual models are usually lumped models (e.g. the SYMHD model) and can be regarded as a transition between empirical models and distributed models. The literature shows that fewer studies have applied distributed models than empirical models, as the runoff and sediment load simulations from distributed models (e.g. the Digital Yellow Integrated Model, the Geomorphology-Based Hydrological Model, etc.) were usually unsatisfactory owing to the intensive human activities in the Yellow River Basin. Therefore, this study primarily summarizes the empirical models applied in the Yellow River Basin and theoretically analyzes the main causes of the significantly different results obtained with different empirical research methods. In addition, we put forward an assessment framework for research methods on runoff and sediment load variations in the Yellow River Basin from the points of view of input data, model structure, and output. The assessment framework was then applied to the Huangfuchuan River.

  18. Assessing differential expression in two-color microarrays: a resampling-based empirical Bayes approach.

    PubMed

    Li, Dongmei; Le Pape, Marc A; Parikh, Nisha I; Chen, Will X; Dye, Timothy D

    2013-01-01

    Microarrays are widely used for examining differential gene expression, identifying single nucleotide polymorphisms, and detecting methylation loci. Multiple testing methods in microarray data analysis aim at controlling both Type I and Type II error rates; however, real microarray data do not always fit their distribution assumptions. Smyth's ubiquitous parametric method, for example, inadequately accommodates violations of normality assumptions, resulting in inflated Type I error rates. The Significance Analysis of Microarrays, another widely used microarray data analysis method, is based on a permutation test and is robust to non-normally distributed data; however, its fold-change criteria are problematic and can critically alter the conclusion of a study as a result of compositional changes of the control data set in the analysis. We propose a novel approach combining resampling with empirical Bayes methods: the Resampling-based empirical Bayes Methods. This approach not only reduces false discovery rates for non-normally distributed microarray data, but is also impervious to the fold-change threshold, since no control data set selection is needed. Through simulation studies, sensitivities, specificities, total rejections, and false discovery rates are compared across Smyth's parametric method, the Significance Analysis of Microarrays, and the Resampling-based empirical Bayes Methods. Differences in false discovery rate control between the approaches are illustrated through a preterm delivery methylation study. The results show that the Resampling-based empirical Bayes Methods offer significantly higher specificity and lower false discovery rates compared to Smyth's parametric method when data are not normally distributed. The Resampling-based empirical Bayes Methods also offer higher statistical power than the Significance Analysis of Microarrays method when the proportion of significantly differentially expressed genes is large, for both normally and non-normally distributed data. Finally, the Resampling-based empirical Bayes Methods are generalizable to next generation sequencing RNA-seq data analysis.

  19. Improvement of the Correlative AFM and ToF-SIMS Approach Using an Empirical Sputter Model for 3D Chemical Characterization.

    PubMed

    Terlier, T; Lee, J; Lee, K; Lee, Y

    2018-02-06

    Technological progress has spurred the development of increasingly sophisticated analytical devices. The full characterization of structures in terms of sample volume and composition is now highly complex. Here, a highly improved solution for 3D characterization of samples, based on an advanced method for 3D data correction, is proposed. Traditionally, secondary ion mass spectrometry (SIMS) provides the chemical distribution of sample surfaces. Combining successive sputtering with 2D surface projections enables a 3D volume rendering to be generated. However, surface topography can distort the volume rendering by necessitating the projection of a nonflat surface onto a planar image. Moreover, the sputtering is highly dependent on the probed material. Local variation of composition affects the sputter yield and the beam-induced roughness, which in turn alters the 3D render. To circumvent these drawbacks, the correlation of atomic force microscopy (AFM) with SIMS has been proposed in previous studies as a solution for the 3D chemical characterization. To extend the applicability of this approach, we have developed a methodology using AFM-time-of-flight (ToF)-SIMS combined with an empirical sputter model, "dynamic-model-based volume correction", to universally correct 3D structures. First, the simulation of 3D structures highlighted the great advantages of this new approach compared with classical methods. Then, we explored the applicability of this new correction to two types of samples, a patterned metallic multilayer and a diblock copolymer film presenting surface asperities. In both cases, the dynamic-model-based volume correction produced an accurate 3D reconstruction of the sample volume and composition. The combination of AFM-SIMS with the dynamic-model-based volume correction improves the understanding of the surface characteristics. Beyond the useful 3D chemical information provided by dynamic-model-based volume correction, the approach permits us to enhance the correlation of chemical information from spectroscopic techniques with the physical properties obtained by AFM.

  1. Epicenter Location of Regional Seismic Events Using Love Wave and Rayleigh Wave Ambient Seismic Noise Green's Functions

    NASA Astrophysics Data System (ADS)

    Levshin, A. L.; Barmin, M. P.; Moschetti, M. P.; Mendoza, C.; Ritzwoller, M. H.

    2011-12-01

    We describe a novel method to locate regional seismic events based on exploiting Empirical Green's Functions (EGFs) that are produced from ambient seismic noise. Elastic EGFs between pairs of seismic stations are determined by cross-correlating long time series of ambient noise recorded at the two stations. The EGFs principally contain Rayleigh waves on the vertical-vertical cross-correlations and Love waves on the transverse-transverse cross-correlations. Earlier work (Barmin et al., "Epicentral location based on Rayleigh wave empirical Green's functions from ambient seismic noise", Geophys. J. Int., 2011) showed that group time delays observed on Rayleigh wave EGFs can be exploited to locate moderate-sized earthquakes to within about 1 km using USArray Transportable Array (TA) stations. The principal advantage of the method is that the ambient noise EGFs are affected by lateral variations in structure similarly to the earthquake signals, so the location is largely unbiased by 3-D structure. However, locations based on Rayleigh waves alone may be biased by more than 1 km if the earthquake depth is unknown but lies between 2 km and 7 km. This presentation is motivated by the fact that group time delays for Love waves are much less affected by earthquake depth than those for Rayleigh waves; thus, exploitation of Love wave EGFs may reduce location bias caused by uncertainty in event depth. The advantage of Love waves for locating seismic events, however, is mitigated by the fact that Love wave EGFs have a smaller SNR than Rayleigh wave EGFs. Here, we test the use of Love and Rayleigh wave EGFs between 5- and 15-s period to locate seismic events based on the USArray TA in the western US. We focus on locating aftershocks of the 2008 M 6.0 Wells earthquake, mining blasts in Wyoming and Montana, and small earthquakes near Norman, OK, and Dallas, TX, some of which may be triggered by hydrofracking or injection wells.

  2. Machine-Learning Inspired Seismic Phase Detection for Aftershocks of the 2008 MW7.9 Wenchuan Earthquake

    NASA Astrophysics Data System (ADS)

    Zhu, L.; Li, Z.; Li, C.; Wang, B.; Chen, Z.; McClellan, J. H.; Peng, Z.

    2017-12-01

    The spatial-temporal evolution of aftershocks is important for illuminating earthquake physics and for rapid response to devastating earthquakes. To improve aftershock catalogs of the 2008 MW7.9 Wenchuan earthquake in Sichuan, China, Alibaba Cloud and the China Earthquake Administration jointly launched a seismological contest in May 2017 [Fang et al., 2017]. This abstract describes how we handled this problem in the competition. We first used Short-Term Average/Long-Term Average (STA/LTA) and Kurtosis functions to obtain over 55,000 candidate phase picks (P or S). Based on the signal-to-noise ratio (SNR), about 40,000 phases (P or S) were selected. So far, these 40,000 phases have a hit rate of 40% among the manual picks. The causes include that (1) there exist false picks (neither P nor S), and (2) some P and S arrivals are mislabeled. To improve our results, we correlated the 40,000 phases over the continuous waveforms to recover the phases missed during the first pass, which resulted in 120,000 events. After constructing an affinity matrix based on the cross-correlations of the newly detected phases, subspace clustering methods [Vidal 2011] were applied to group those phases into separate subspaces. Initial results show good agreement between the empirical and clustered labels of P phases. Half of the empirical S phases are clustered into the P phase cluster. This may be a combined effect of (1) mislabeling isolated P phases as S phases and (2) clustering errors due to a small, incomplete sample pool. Phases that were falsely detected in the initial results can also be teased out. To better characterize P and S phases, our next step is to apply subspace clustering methods directly to the waveforms, instead of using the cross-correlation coefficients of detected phases. After that, supervised learning, e.g., a convolutional neural network, can be employed to improve the pick accuracy. Updated results will be presented at the meeting.
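
    The STA/LTA detector compares a short-term energy average with a long-term one and triggers where their ratio exceeds a threshold. A minimal, self-contained Python sketch with trailing windows and synthetic data (window lengths and threshold are illustrative, not the study's settings):

      import numpy as np

      def sta_lta(trace, nsta, nlta):
          """Trailing-window STA/LTA on the squared trace; ratio[j] refers
          to windows ending at sample j + nlta - 1."""
          cf = trace.astype(float) ** 2
          csum = np.concatenate(([0.0], np.cumsum(cf)))
          sta = (csum[nsta:] - csum[:-nsta]) / nsta    # short trailing average
          lta = (csum[nlta:] - csum[:-nlta]) / nlta    # long trailing average
          sta = sta[nlta - nsta:]                      # align to same end sample
          return sta / np.maximum(lta, 1e-12)

      rng = np.random.default_rng(3)
      trace = rng.standard_normal(20_000)
      trace[12_000:12_200] += 8.0 * rng.standard_normal(200)  # synthetic arrival
      ratio = sta_lta(trace, nsta=50, nlta=1_000)
      onsets = np.flatnonzero(ratio > 5.0) + 1_000 - 1         # back to samples
      print(onsets[:3])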

  3. Agent-based model with asymmetric trading and herding for complex financial systems.

    PubMed

    Chen, Jun-Jie; Zheng, Bo; Tan, Lei

    2013-01-01

    For complex financial systems, the negative and positive return-volatility correlations, i.e., the so-called leverage and anti-leverage effects, are particularly important for the understanding of the price dynamics. However, the microscopic origin of the leverage and anti-leverage effects is still not understood, and how to produce these effects in agent-based modeling remains open. On the other hand, in constructing microscopic models, a promising approach is to determine model parameters from empirical data rather than from statistical fitting of the results. To study the microscopic origin of the return-volatility correlation in financial systems, we take into account the individual and collective behaviors of investors in real markets, and construct an agent-based model. The agents are linked with each other and trade in groups, and particularly, two novel microscopic mechanisms, i.e., investors' asymmetric trading and herding in bull and bear markets, are introduced. Further, we propose effective methods to determine the key parameters in our model from historical market data. With the model parameters determined for six representative stock-market indices in the world, respectively, we obtain the corresponding leverage or anti-leverage effect from the simulation, and the effect is in agreement with the empirical one in amplitude and duration. At the same time, our model produces other features of the real markets, such as the fat-tail distribution of returns and the long-term correlation of volatilities. We reveal that for the leverage and anti-leverage effects, both the investors' asymmetric trading and herding are essential generation mechanisms. Among the six markets, however, investors' trading is approximately symmetric for the five markets which exhibit the leverage effect, and thus contributes very little there. These two microscopic mechanisms and the methods for the determination of the key parameters can be applied to other complex systems with similar asymmetries. The return-volatility correlation at the heart of the analysis can be estimated as sketched below.
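
    The leverage (or anti-leverage) effect is typically quantified by a lagged return-volatility correlation function; the minimal estimator below assumes the normalization of Bouchaud et al. (2001) and runs on synthetic i.i.d. returns, for which the correlation should be near zero.

```python
import numpy as np

def leverage_correlation(returns, max_lag):
    """L(tau) = <r_t * r_{t+tau}^2> / <r_t^2>^2.

    Negative L for tau > 0 is the leverage effect; positive L is the
    anti-leverage effect (normalization after Bouchaud et al., 2001).
    """
    r = np.asarray(returns, float)
    r = r - r.mean()
    z = (r ** 2).mean() ** 2
    lags = np.arange(1, max_lag + 1)
    return lags, np.array([(r[:-t] * r[t:] ** 2).mean() / z for t in lags])

# hypothetical usage on fat-tailed but independent synthetic returns
rng = np.random.default_rng(1)
ret = rng.standard_t(df=4, size=10000) * 0.01
lags, L = leverage_correlation(ret, 20)
print(np.round(L[:5], 3))   # ~0 here; systematically negative in real equity data
```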

  4. Mapping Diffuse Seismicity Using Empirical Matched Field Processing Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, J; Templeton, D C; Harris, D B

    The objective of this project is to detect and locate more microearthquakes using the empirical matched field processing (MFP) method than can be detected using only conventional earthquake detection techniques. We propose that empirical MFP can complement existing catalogs and techniques. We test our method on continuous seismic data collected at the Salton Sea Geothermal Field during November 2009 and January 2010. In the Southern California Earthquake Data Center (SCEDC) earthquake catalog, 619 events were identified in our study area during this time frame and our MFP technique identified 1094 events. Therefore, we believe that the empirical MFP method combined with conventional methods significantly improves the network detection ability in an efficient manner.

  5. Measurement versus prediction in the construction of patient-reported outcome questionnaires: can we have our cake and eat it?

    PubMed

    Smits, Niels; van der Ark, L Andries; Conijn, Judith M

    2017-11-02

    Two important goals when using questionnaires are (a) measurement: the questionnaire is constructed to assign numerical values that accurately represent the test taker's attribute, and (b) prediction: the questionnaire is constructed to give an accurate forecast of an external criterion. Construction methods aimed at measurement prescribe that items should be reliable. In practice, this leads to questionnaires with high inter-item correlations. By contrast, construction methods aimed at prediction typically prescribe that items have a high correlation with the criterion and low inter-item correlations. The latter approach has often been said to produce a paradox concerning the relation between reliability and validity [1-3], because it is often assumed that good measurement is a prerequisite of good prediction. We aim to answer four questions: (1) Why are measurement-based methods suboptimal for questionnaires that are used for prediction? (2) How should one construct a questionnaire that is used for prediction? (3) Do questionnaire-construction methods that optimize measurement and prediction lead to the selection of different items in the questionnaire? (4) Is it possible to construct a questionnaire that can be used for both measurement and prediction? An empirical data set consisting of scores of 242 respondents on questionnaire items measuring mental health is used to select items by means of two methods: a method that optimizes the predictive value of the scale (i.e., forecasting a clinical diagnosis), and a method that optimizes the reliability of the scale. We show that the two methods select different sets of items and that a scale constructed to meet one goal does not show optimal performance with respect to the other goal. The answers are as follows: (1) Because measurement-based methods tend to maximize inter-item correlations, which reduces predictive validity. (2) By selecting items that correlate highly with the criterion and weakly with the remaining items. (3) Yes, these methods may lead to different item selections. (4) For a single questionnaire: yes, but it is problematic because reliability cannot be estimated accurately. For a test battery: yes, but it is very costly. Implications for the construction of patient-reported outcome questionnaires are discussed, and the prediction-oriented selection strategy is sketched below.
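
    A minimal sketch of the prediction-oriented strategy in answer (2): greedily add the item with the highest criterion correlation and the lowest correlation with items already chosen. The simple score difference used as the trade-off is an assumption for illustration, not the authors' exact procedure; Cronbach's alpha is included for contrast with the reliability goal.

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha of a respondents-by-items score matrix."""
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum()
                          / X.sum(axis=1).var(ddof=1))

def select_for_prediction(X, y, n_items):
    """Greedy selection: high item-criterion correlation, low correlation
    with already-chosen items (equal weights are a modeling choice here)."""
    chosen = []
    for _ in range(n_items):
        best, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            r_crit = abs(np.corrcoef(X[:, j], y)[0, 1])
            r_items = max((abs(np.corrcoef(X[:, j], X[:, k])[0, 1])
                           for k in chosen), default=0.0)
            if r_crit - r_items > best_score:
                best, best_score = j, r_crit - r_items
        chosen.append(best)
    return chosen

# hypothetical usage: simulated item scores and a binary clinical criterion
rng = np.random.default_rng(2)
X = rng.normal(size=(242, 20))
y = ((X[:, :5].sum(axis=1) + rng.normal(size=242)) > 0).astype(float)
items = select_for_prediction(X, y, 5)
print("selected items:", items, " alpha:", round(cronbach_alpha(X[:, items]), 2))
```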

  6. ESTIMATION OF CHEMICAL TOXICITY TO WILDLIFE SPECIES USING INTERSPECIES CORRELATION MODELS

    EPA Science Inventory

    Ecological risks to wildlife are typically assessed using toxicity data for relatively few species and with limited understanding of differences in species sensitivity to contaminants. Empirical interspecies correlation models were derived from LD50 values for 49 wildlife speci...

  7. Equation of state for dense nucleonic matter from metamodeling. I. Foundational aspects

    NASA Astrophysics Data System (ADS)

    Margueron, Jérôme; Hoffmann Casali, Rudiney; Gulminelli, Francesca

    2018-02-01

    Metamodeling for the nucleonic equation of state (EOS), inspired by a Taylor expansion around the saturation density of symmetric nuclear matter, is proposed and parameterized in terms of the empirical parameters. The present knowledge of nuclear empirical parameters is first reviewed in order to estimate their average values and associated uncertainties, thus defining the parameter space of the metamodeling. They are divided into isoscalar and isovector types, and ordered according to their power in the density expansion. The goodness of the metamodeling is analyzed against the predictions of the original models. In addition, since no correlation among the empirical parameters is assumed a priori, all arbitrary density dependences can be explored, which might not be accessible in existing functionals. Spurious correlations due to the assumed functional form are also removed. This meta-EOS allows direct relations between the uncertainties on the empirical parameters and the density dependence of the nuclear equation of state and its derivatives, and the mapping between the two can be done with standard Bayesian techniques. A sensitivity analysis shows that the most influential empirical parameters are the isovector parameters Lsym and Ksym, and that laboratory constraints at supersaturation densities are essential to reduce the present uncertainties. The present metamodeling of the EOS for nuclear matter is proposed for further applications in neutron stars and supernova matter. The structure of the underlying density expansion is sketched below.
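
    The density expansion can be made concrete as follows. This is the standard second-order form around saturation (the paper carries each series to higher order), with x the relative deviation from the saturation density and δ the isospin asymmetry; the isoscalar series has no linear term because the pressure of symmetric matter vanishes at saturation.

```latex
% Second-order sketch of the meta-EOS around saturation:
% e(n, delta) is the energy per nucleon of nucleonic matter.
\begin{align}
  e(n,\delta) &\simeq e_{\mathrm{is}}(n) + \delta^{2}\, e_{\mathrm{iv}}(n),
  \quad
  x = \frac{n - n_{\mathrm{sat}}}{3\,n_{\mathrm{sat}}},
  \quad
  \delta = \frac{n_n - n_p}{n}, \\
  e_{\mathrm{is}}(n) &= E_{\mathrm{sat}}
    + \tfrac{1}{2}\,K_{\mathrm{sat}}\,x^{2} + \cdots, \\
  e_{\mathrm{iv}}(n) &= E_{\mathrm{sym}} + L_{\mathrm{sym}}\,x
    + \tfrac{1}{2}\,K_{\mathrm{sym}}\,x^{2} + \cdots
\end{align}
```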

  8. A New Ensemble Canonical Correlation Prediction Scheme for Seasonal Precipitation

    NASA Technical Reports Server (NTRS)

    Kim, Kyu-Myong; Lau, William K. M.; Li, Guilong; Shen, Samuel S. P.; Lau, William K. M. (Technical Monitor)

    2001-01-01

    This paper describes the fundamental theory of the ensemble canonical correlation (ECC) algorithm for seasonal climate forecasting. The algorithm is a statistical regression scheme based on maximal correlation between the predictor and predictand. The prediction error is estimated by a spectral method using the basis of empirical orthogonal functions. The ECC algorithm treats the predictors and predictands as continuous fields and is an improvement on traditional canonical correlation prediction. The improvements include the use of an area factor, estimation of the prediction error, and the optimal ensemble of multiple forecasts. The ECC is applied to seasonal forecasting over various parts of the world. The example presented here is for North America precipitation. The predictor is the sea surface temperature (SST) from different ocean basins. The Climate Prediction Center's reconstructed SST (1951-1999) is used as the predictor's historical data. The optimally interpolated global monthly precipitation is used as the predictand's historical data. Our forecast experiments show that the ECC algorithm renders very high skill and that the optimal ensemble is essential to achieving it. The core canonical-correlation regression step is sketched below.
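
    A minimal sketch of that core step, assuming scikit-learn; the predictor and predictand fields are synthetic stand-ins, and the full ECC algorithm adds the area factor, spectral error estimation, and optimal ensembling described above.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# hypothetical fields: 49 years of SST predictors (reduced to 10 EOF PCs)
# and seasonal precipitation at 50 grid points
rng = np.random.default_rng(3)
sst_pcs = rng.normal(size=(49, 10))
precip = sst_pcs @ rng.normal(size=(10, 50)) * 0.3 + rng.normal(size=(49, 50))

cca = CCA(n_components=3)                  # leading canonical pairs
cca.fit(sst_pcs[:-1], precip[:-1])         # train on all but the last year
u, v = cca.transform(sst_pcs[:-1], precip[:-1])
print("canonical correlations:",
      [round(np.corrcoef(u[:, k], v[:, k])[0, 1], 2) for k in range(3)])
pred = cca.predict(sst_pcs[-1:])           # regression-style forecast, held-out year
print("forecast shape:", pred.shape)
```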

  9. Summarizing slant perception with words and hands; an empirical alternative to correlations in Shaffer, McManama, Swank, Williams & Durgin (2014).

    PubMed

    Eves, Frank F

    2015-02-01

    The paper by Shaffer, McManama, Swank, Williams & Durgin (2014) uses correlations between palm-board and verbal estimates of geographical slant to argue against dissociation of the two measures. This paper reports the correlations between the verbal, visual and palm-board measures of geographical slant used by Proffitt and co-workers as a counterpoint to the analyses presented by Shaffer and colleagues. The data are for slant perception of staircases in a station (N=269), a shopping mall (N=229) and a civic square (N=109). In all three studies, modest correlations between the palm-board matches and the verbal reports were obtained. Multiple-regression analyses of potential contributors to verbal reports, however, indicated no unique association between verbal and palm-board measures. Data from three further studies (combined N=528) also show no evidence of any relationship. Shared method variance between visual and palm-board matches could account for the modest association between palm-boards and verbal reports. Copyright © 2015. Published by Elsevier B.V.

  10. On the Time Evolution of Gamma-Ray Burst Pulses: A Self-Consistent Description.

    PubMed

    Ryde; Svensson

    2000-01-20

    For the first time, the consequences of combining two well-established empirical relations that describe different aspects of the spectral evolution of observed gamma-ray burst (GRB) pulses are explored. These empirical relations are (1) the hardness-intensity correlation and (2) the hardness-photon fluence correlation. From these we find a self-consistent, quantitative, and compact description for the temporal evolution of pulse decay phases within a GRB light curve. In particular, we show that in the case in which the two empirical relations are both valid, the instantaneous photon flux (intensity) must behave as 1/(1 + t/τ), where τ is a time constant that can be expressed in terms of the parameters of the two empirical relations. The time evolution is fully defined by two initial constants and two parameters. We study a complete sample of 83 bright GRB pulses observed by the Compton Gamma-Ray Observatory and identify a major subgroup of GRB pulses (approximately 45%) which satisfy the spectral-temporal behavior described above. In particular, the decay phase follows a reciprocal law in time. It is unclear what physics causes such a decay phase.

  11. An Empirical Research on the Correlation between Human Capital and Career Success of Knowledge Workers in Enterprise

    NASA Astrophysics Data System (ADS)

    Guo, Wenchen; Xiao, Hongjun; Yang, Xi

    Human capital plays an important part in the employability of knowledge workers, and it is an important intangible asset of a company. This paper explores the correlation between human capital and the career success of knowledge workers. Based on a literature review, we identified a measuring tool for career success and modified it further; human capital was measured with a self-developed scale of high reliability and validity. After exploratory factor analysis, we suggest that human capital comprises four dimensions: education, work experience, learning ability, and training; career success comprises three dimensions: perceived internal competitiveness within the organization, perceived external competitiveness of the organization, and career satisfaction. The results of the empirical analysis indicate that there is a positive correlation between human capital and career success, and that human capital is an excellent predictor of career success beyond demographic variables.

  12. Daily Streamflow Predictions in an Ungauged Watershed in Northern California Using the Precipitation-Runoff Modeling System (PRMS): Calibration Challenges when nearby Gauged Watersheds are Hydrologically Dissimilar

    NASA Astrophysics Data System (ADS)

    Dhakal, A. S.; Adera, S.

    2017-12-01

    Accurate daily streamflow prediction in ungauged watersheds with sparse information is challenging. The ability of a hydrologic model calibrated using nearby gauged watersheds to predict streamflow accurately depends on hydrologic similarities between the gauged and ungauged watersheds. This study examines daily streamflow predictions using the Precipitation-Runoff Modeling System (PRMS) for the largely ungauged San Antonio Creek watershed, a 96 km2 sub-watershed of the Alameda Creek watershed in Northern California. The process-based PRMS model is being used to improve the accuracy of recent San Antonio Creek streamflow predictions generated by two empirical methods. Although San Antonio Creek watershed is largely ungauged, daily streamflow data exists for hydrologic years (HY) 1913 - 1930. PRMS was calibrated for HY 1913 - 1930 using streamflow data, modern-day land use and PRISM precipitation distribution, and gauged precipitation and temperature data from a nearby watershed. The PRMS model was then used to generate daily streamflows for HY 1996-2013, during which the watershed was ungauged, and hydrologic responses were compared to two nearby gauged sub-watersheds of Alameda Creek. Finally, the PRMS-predicted daily flows between HY 1996-2013 were compared to the two empirically-predicted streamflow time series: (1) the reservoir mass balance method and (2) correlation of historical streamflows from 80 - 100 years ago between San Antonio Creek and a nearby sub-watershed located in Alameda Creek. While the mass balance approach using reservoir storage and transfers is helpful for estimating inflows to the reservoir, large discrepancies in daily streamflow estimation can arise. Similarly, correlation-based predicted daily flows which rely on a relationship from flows collected 80-100 years ago may not represent current watershed hydrologic conditions. This study aims to develop a method of streamflow prediction in the San Antonio Creek watershed by examining PRMS's model outputs as well as empirically generated flow data for their use in water resources management decisions. PRMS is also being used to better understand the streamflow patterns in the San Antonio Creek watershed for a variety of antecedent soil moisture conditions as the creek is generally dry between late Spring and early Fall.

  13. Using beta coefficients to impute missing correlations in meta-analysis research: Reasons for caution.

    PubMed

    Roth, Philip L; Le, Huy; Oh, In-Sue; Van Iddekinge, Chad H; Bobko, Philip

    2018-06-01

    Meta-analysis has become a well-accepted method for synthesizing empirical research about a given phenomenon. Many meta-analyses focus on synthesizing correlations across primary studies, but some primary studies do not report correlations. Peterson and Brown (2005) suggested that researchers could use standardized regression weights (i.e., beta coefficients) to impute missing correlations. Indeed, their beta estimation procedures (BEPs) have been used in meta-analyses in a wide variety of fields. In this study, the authors evaluated the accuracy of BEPs in meta-analysis. We first examined how use of BEPs might affect results from a published meta-analysis. We then developed a series of Monte Carlo simulations that systematically compared the use of existing correlations (that were not missing) to data sets that incorporated BEPs (that impute missing correlations from corresponding beta coefficients). These simulations estimated ρ̄ (mean population correlation) and SDρ (true standard deviation) across a variety of meta-analytic conditions. Results from both the existing meta-analysis and the Monte Carlo simulations revealed that BEPs were associated with potentially large biases when estimating ρ̄ and even larger biases when estimating SDρ. Using only existing correlations often substantially outperformed use of BEPs and virtually never performed worse than BEPs. Overall, the authors urge a return to the standard practice of using only existing correlations in meta-analysis. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
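
    The simulation logic can be illustrated in a few lines. The sketch below treats the commonly cited imputation r̂ = .98β + .05λ (λ = 1 for non-negative β) as an assumption, mimics β with a partial regression weight from a two-predictor model, and compares the meta-analytic mean correlation with and without imputed values.

```python
import numpy as np

def simulate_meta(n_studies=30, n=100, rho=0.30, missing=0.5, seed=4):
    """Mean correlation from (a) only studies reporting r versus
    (b) also imputing missing r's from beta coefficients (BEP)."""
    rng = np.random.default_rng(seed)
    rs, imputed = [], []
    for _ in range(n_studies):
        # two correlated predictors; criterion has true correlation rho with x1
        x2 = rng.normal(size=n)
        x1 = 0.5 * x2 + np.sqrt(0.75) * rng.normal(size=n)
        y = rho * x1 + np.sqrt(1 - rho ** 2) * rng.normal(size=n)
        if rng.random() < missing:                      # study reports only betas
            beta = np.linalg.lstsq(np.c_[x1, x2], y, rcond=None)[0][0]
            imputed.append(0.98 * beta + 0.05 * (beta >= 0))  # assumed BEP form
        else:                                           # study reports r directly
            rs.append(np.corrcoef(x1, y)[0, 1])
    return np.mean(rs), np.mean(rs + imputed)

only_r, with_bep = simulate_meta()
print(f"existing r only: {only_r:.3f}   with BEP imputation: {with_bep:.3f}")
```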

  14. Within-individual correlations reveal link between a behavioral syndrome, condition and cortisol in free-ranging Belding's ground squirrels

    PubMed Central

    Brooks, Katherine C.; Mateo, Jill. M.

    2014-01-01

    Animals often exhibit consistent individual differences in behavior (i.e. animal personality) and correlations between behaviors (i.e. behavioral syndromes), yet the causes of those patterns of behavioral variation remain insufficiently understood. Many authors hypothesize that state-dependent behavior produces animal personality and behavioral syndromes. However, empirical studies assessing patterns of covariation among behavioral traits and state variables have produced mixed results. New statistical methods that partition correlations into between-individual and residual within-individual correlations offer an opportunity to more fully quantify relationships among behaviors and state variables to assess hypotheses of animal personality and behavioral syndromes. In a population of wild Belding's ground squirrels (Urocitellus beldingi) we repeatedly measured activity, exploration, and response to restraint behaviors alongside glucocorticoids and nutritional condition. We used multivariate mixed models to determine whether between-individual or within-individual correlations drive phenotypic relationships among traits. Squirrels had consistent individual differences for all five traits. At the between-individual level, activity and exploration were positively correlated whereas both traits negatively correlated with response to restraint, demonstrating a behavioral syndrome. At the within-individual level, condition negatively correlated with cortisol, activity and exploration. Importantly, although behavior is state-dependent, which may play a role in animal personality and behavioral syndromes, feedback between condition and behavior appears not to produce the consistent individual differences in behavior or the correlations between them. PMID:25598565

  15. A meta-analysis of factors affecting trust in human-robot interaction.

    PubMed

    Hancock, Peter A; Billings, Deborah R; Schaefer, Kristin E; Chen, Jessie Y C; de Visser, Ewart J; Parasuraman, Raja

    2011-10-01

    We evaluate and quantify the effects of human, robot, and environmental factors on perceived trust in human-robot interaction (HRI). To date, reviews of trust in HRI have been qualitative or descriptive. Our quantitative review provides a fundamental empirical foundation to advance both theory and practice. Meta-analytic methods were applied to the available literature on trust and HRI. A total of 29 empirical studies were collected, of which 10 met the selection criteria for correlational analysis and 11 for experimental analysis. These studies provided 69 correlational and 47 experimental effect sizes. The overall correlational effect size for trust was r = +0.26, with an experimental effect size of d = +0.71. The effects of human, robot, and environmental characteristics were examined with a particular focus on the robot dimensions of performance and attribute-based factors. The robot performance and attributes were the largest contributors to the development of trust in HRI. Environmental factors played only a moderate role. Factors related to the robot itself, specifically its performance, had the greatest current association with trust, and environmental factors were moderately associated. There was little evidence for effects of human-related factors. The findings provide quantitative estimates of human, robot, and environmental factors influencing HRI trust. Specifically, the current summary provides effect size estimates that are useful in establishing design and training guidelines with reference to robot-related factors of HRI trust. Furthermore, results indicate that improper trust calibration may be mitigated by the manipulation of robot design. However, many future research needs are identified.

  16. The AFDD International Dynamic Stall Workshop on Correlation of Dynamic Stall Models with 3-D Dynamic Stall Data

    NASA Technical Reports Server (NTRS)

    Tan, C. M.; Carr, L. W.

    1996-01-01

    A variety of empirical and computational fluid dynamics two-dimensional (2-D) dynamic stall models were compared to recently obtained three-dimensional (3-D) dynamic stall data in a workshop on modeling of 3-D dynamic stall of an unswept, rectangular wing, of aspect ratio 10. Dynamic stall test data both below and above the static stall angle-of-attack were supplied to the participants, along with a 'blind' case where only the test conditions were supplied in advance, with results being compared to experimental data at the workshop itself. Detailed graphical comparisons are presented in the report, which also includes discussion of the methods and the results. The primary conclusion of the workshop was that the 3-D effects of dynamic stall on the oscillating wing studied in the workshop can be reasonably reproduced by existing semi-empirical models once 2-D dynamic stall data have been obtained. The participants also emphasized the need for improved quantification of 2-D dynamic stall.

  17. Domain walls and ferroelectric reversal in corundum derivatives

    NASA Astrophysics Data System (ADS)

    Ye, Meng; Vanderbilt, David

    2017-01-01

    Domain walls are the topological defects that mediate polarization reversal in ferroelectrics, and they may exhibit quite different geometric and electronic structures compared to the bulk. Therefore, a detailed atomic-scale understanding of the static and dynamic properties of domain walls is of pressing interest. In this work, we use first-principles methods to study the structures of 180∘ domain walls, both in their relaxed state and along the ferroelectric reversal pathway, in ferroelectrics belonging to the family of corundum derivatives. Our calculations predict their orientation, formation energy, and migration energy and also identify important couplings between polarization, magnetization, and chirality at the domain walls. Finally, we point out a strong empirical correlation between the height of the domain-wall-mediated polarization reversal barrier and the local bonding environment of the mobile A cations as measured by bond-valence sums. Our results thus provide both theoretical and empirical guidance for future searches for ferroelectric candidates in materials of the corundum derivative family.

  18. Domain walls and ferroelectric reversal in corundum derivatives

    NASA Astrophysics Data System (ADS)

    Ye, Meng; Vanderbilt, David

    Domain walls are the topological defects that mediate polarization reversal in ferroelectrics, and they may exhibit quite different geometric and electronic structures compared to the bulk. Therefore, a detailed atomic-scale understanding of the static and dynamic properties of domain walls is of pressing interest. In this work, we use first-principles methods to study the structures of 180° domain walls, both in their relaxed state and along the ferroelectric reversal pathway, in ferroelectrics belonging to the family of corundum derivatives. Our calculations predict their orientation, formation energy, and migration energy, and also identify important couplings between polarization, magnetization, and chirality at the domain walls. Finally, we point out a strong empirical correlation between the height of the domain-wall mediated polarization reversal barrier and the local bonding environment of the mobile A cations as measured by bond valence sums. Our results thus provide both theoretical and empirical guidance to further search for ferroelectric candidates in materials of the corundum derivative family. The work is supported by ONR Grant N00014-12-1-1035.

  19. Feasibility of quasi-random band model in evaluating atmospheric radiance

    NASA Technical Reports Server (NTRS)

    Tiwari, S. N.; Mirakhur, N.

    1980-01-01

    The use of the quasi-random band model in evaluating upwelling atmospheric radiation is investigated. The spectral transmittance and total band absorptance are evaluated for selected molecular bands by using the line-by-line model, quasi-random band model, exponential sum fit method, and empirical correlations, and these are compared with the available experimental results. The atmospheric transmittance and upwelling radiance were calculated by using the line-by-line and quasi-random band models and were compared with the results of an existing program called LOWTRAN. The results obtained by the exponential sum fit and empirical relations were not in good agreement with experimental results and their use cannot be justified for atmospheric studies. The line-by-line model was found to be the best model for atmospheric applications, but it is not practical because of high computational costs. The results of the quasi-random band model compare well with the line-by-line and experimental results. The use of the quasi-random band model is recommended for evaluation of the atmospheric radiation.

  20. Robust Visual Tracking Revisited: From Correlation Filter to Template Matching.

    PubMed

    Liu, Fanghui; Gong, Chen; Huang, Xiaolin; Zhou, Tao; Yang, Jie; Tao, Dacheng

    2018-06-01

    In this paper, we propose a novel matching-based tracker by investigating the relationship between template matching and the recently popular correlation filter based trackers (CFTs). Compared to the correlation operation in CFTs, a sophisticated similarity metric termed mutual buddies similarity is proposed to exploit the relationship of multiple reciprocal nearest neighbors for target matching. By doing so, our tracker obtains powerful discriminative ability in distinguishing the target from the background, as demonstrated by both empirical and theoretical analyses. Besides, instead of utilizing a single template with the improper updating scheme of CFTs, we design a novel online template updating strategy named memory, which aims to select a certain amount of representative and reliable tracking results in history to construct the current stable and expressive template set. This scheme helps the proposed tracker to comprehensively understand target appearance variations and to recall stable past results. Both qualitative and quantitative evaluations on two benchmarks suggest that the proposed tracking method performs favorably against some recently developed CFTs and other competitive trackers.

  1. Efficiency and cross-correlation in equity market during global financial crisis: Evidence from China

    NASA Astrophysics Data System (ADS)

    Ma, Pengcheng; Li, Daye; Li, Shuo

    2016-02-01

    Using one-minute high-frequency data of the Shanghai Composite Index (SHCI) and the Shenzhen Composite Index (SZCI) (2007-2008), we employ the detrended fluctuation analysis (DFA) and the detrended cross-correlation analysis (DCCA) with a rolling window approach to observe the evolution of market efficiency and cross-correlation in the pre-crisis and crisis periods. Considering the fat-tail distribution of the return time series, a statistical test based on a shuffling method is conducted to verify the null hypothesis of no long-term dependence. Our empirical research yields three main findings. First, Shanghai equity market efficiency deteriorated while Shenzhen equity market efficiency improved with the advent of the financial crisis. Second, the highly positive dependence between SHCI and SZCI varies with time scale. Third, the financial crisis saw a significant increase of dependence between SHCI and SZCI at shorter time scales but no significant change at longer time scales, providing evidence of contagion and absence of interdependence during the crisis. The DFA estimator at the core of the analysis is sketched below.
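
    A minimal DFA sketch: integrate the demeaned series, detrend it in windows of size s, and record the residual fluctuation F(s). The slope of log F(s) against log s estimates the scaling exponent; a value near 0.5 indicates no long-term dependence, the property used above as a proxy for market efficiency.

```python
import numpy as np

def dfa(x, scales, order=1):
    """Detrended fluctuation analysis: returns F(s) for each window size s."""
    y = np.cumsum(x - np.mean(x))               # integrated profile
    F = []
    for s in scales:
        n = len(y) // s
        segs = y[:n * s].reshape(n, s)
        t = np.arange(s)
        # least-squares polynomial detrend within each window
        res = [seg - np.polyval(np.polyfit(t, seg, order), t) for seg in segs]
        F.append(np.sqrt(np.mean(np.concatenate(res) ** 2)))
    return np.asarray(F)

# hypothetical usage: i.i.d. returns should give an exponent close to 0.5
rng = np.random.default_rng(5)
x = rng.normal(size=20000)
scales = np.unique(np.logspace(1, 3, 12).astype(int))
H = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]
print("estimated scaling exponent:", round(H, 2))
```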

  2. Random versus maximum entropy models of neural population activity

    NASA Astrophysics Data System (ADS)

    Ferrari, Ulisse; Obuchi, Tomoyuki; Mora, Thierry

    2017-04-01

    The principle of maximum entropy provides a useful method for inferring statistical mechanics models from observations in correlated systems, and is widely used in a variety of fields where accurate data are available. While the assumptions underlying maximum entropy are intuitive and appealing, its adequacy for describing complex empirical data has been little studied in comparison to alternative approaches. Here, data from the collective spiking activity of retinal neurons is reanalyzed. The accuracy of the maximum entropy distribution constrained by mean firing rates and pairwise correlations is compared to a random ensemble of distributions constrained by the same observables. For most of the tested networks, maximum entropy approximates the true distribution better than the typical or mean distribution from that ensemble. This advantage improves with population size, with groups as small as eight being almost always better described by maximum entropy. Failure of maximum entropy to outperform random models is found to be associated with strong correlations in the population.

  3. Computer program for calculation of real gas turbulent boundary layers with variable edge entropy

    NASA Technical Reports Server (NTRS)

    Boney, L. R.

    1974-01-01

    A user's manual for a computer program which calculates real gas turbulent boundary layers with variable edge entropy on a blunt cone or flat plate at zero angle of attack is presented. An integral method is used. The method includes the effect of real gas in thermodynamic equilibrium and variable edge entropy. A modified Crocco enthalpy velocity relationship is used for the enthalpy profiles and an empirical correlation of the N-power law profile is used for the velocity profile. The skin-friction-coefficient expressions of Spalding and Chi and Van Driest are used in the solution of the momentum equation and in the heat-transfer predictions that use several modified forms of Reynolds analogy.

  4. Electrical and fluid transport in consolidated sphere packs

    NASA Astrophysics Data System (ADS)

    Zhan, Xin; Schwartz, Lawrence M.; Toksöz, M. Nafi

    2015-05-01

    We calculate geometrical and transport properties (electrical conductivity, permeability, specific surface area, and surface conductivity) of a family of model granular porous media from an image based representation of its microstructure. The models are based on the packing described by Finney and cover a wide range of porosities. Finite difference methods are applied to solve for electrical conductivity and hydraulic permeability. Two image processing methods are used to identify the pore-grain interface and to test correlations linking permeability to electrical conductivity. A three phase conductivity model is developed to compute surface conductivity associated with the grain-pore interface. Our results compare well against empirical models over the entire porosity range studied. We conclude by examining the influence of image resolution on our calculations.

  5. Automated PET-only quantification of amyloid deposition with adaptive template and empirically pre-defined ROI

    NASA Astrophysics Data System (ADS)

    Akamatsu, G.; Ikari, Y.; Ohnishi, A.; Nishida, H.; Aita, K.; Sasaki, M.; Yamamoto, Y.; Sasaki, M.; Senda, M.

    2016-08-01

    Amyloid PET is useful for early and/or differential diagnosis of Alzheimer’s disease (AD). Quantification of amyloid deposition using PET has been employed to improve diagnosis and to monitor AD therapy, particularly in research. Although MRI is often used for segmentation of gray matter and for spatial normalization into standard Montreal Neurological Institute (MNI) space where region-of-interest (ROI) template is defined, 3D MRI is not always available in clinical practice. The purpose of this study was to examine the feasibility of PET-only amyloid quantification with an adaptive template and a pre-defined standard ROI template that has been empirically generated from typical cases. A total of 68 subjects who underwent brain 11C-PiB PET were examined. The 11C-PiB images were non-linearly spatially normalized to the standard MNI T1 atlas using the same transformation parameters of MRI-based normalization. The automatic-anatomical-labeling-ROI (AAL-ROI) template was applied to the PET images. All voxel values were normalized by the mean value of cerebellar cortex to generate the SUVR-scaled images. Eleven typical positive images and eight typical negative images were normalized and averaged, respectively, and were used as the positive and negative template. Positive and negative masks which consist of voxels with SUVR  ⩾1.7 were extracted from both templates. Empirical PiB-prone ROI (EPP-ROI) was generated by subtracting the negative mask from the positive mask. The 11C-PiB image of each subject was non-rigidly normalized to the positive and negative template, respectively, and the one with higher cross-correlation was adopted. The EPP-ROI was then inversely transformed to individual PET images. We evaluated differences of SUVR between standard MRI-based method and PET-only method. We additionally evaluated whether the PET-only method would correctly categorize 11C-PiB scans as positive or negative. Significant correlation was observed between the SUVRs obtained with AAL-ROI and those with EPP-ROI when MRI-based normalization was used, the latter providing higher SUVR. When EPP-ROI was used, MRI-based method and PET-only method provided almost identical SUVR. All 11C-PiB scans were correctly categorized into positive and negative using a cutoff value of 1.7 as compared to visual interpretation. The 11C-PiB SUVR were 2.30  ±  0.24 and 1.25  ±  0.11 for the positive and negative images. PET-only amyloid quantification method with adaptive templates and EPP-ROI can provide accurate, robust and simple amyloid quantification without MRI.

  6. Testing for measurement invariance and latent mean differences across methods: interesting incremental information from multitrait-multimethod studies

    PubMed Central

    Geiser, Christian; Burns, G. Leonard; Servera, Mateu

    2014-01-01

    Models of confirmatory factor analysis (CFA) are frequently applied to examine the convergent validity of scores obtained from multiple raters or methods in so-called multitrait-multimethod (MTMM) investigations. We show that interesting incremental information about method effects can be gained from including mean structures and tests of MI across methods in MTMM models. We present a modeling framework for testing MI in the first step of a CFA-MTMM analysis. We also discuss the relevance of MI in the context of four more complex CFA-MTMM models with method factors. We focus on three recently developed multiple-indicator CFA-MTMM models for structurally different methods [the correlated traits-correlated (methods – 1), latent difference, and latent means models; Geiser et al., 2014a; Pohl and Steyer, 2010; Pohl et al., 2008] and one model for interchangeable methods (Eid et al., 2008). We demonstrate that some of these models require or imply MI by definition for a proper interpretation of trait or method factors, whereas others do not, and explain why MI may or may not be required in each model. We show that in the model for interchangeable methods, testing for MI is critical for determining whether methods can truly be seen as interchangeable. We illustrate the theoretical issues in an empirical application to an MTMM study of attention deficit and hyperactivity disorder (ADHD) with mother, father, and teacher ratings as methods. PMID:25400603

  7. Quantifying the process and outcomes of person-centered planning.

    PubMed

    Holburn, S; Jacobson, J W; Vietze, P M; Schwartz, A A; Sersen, E

    2000-09-01

    Although person-centered planning is a popular approach in the field of developmental disabilities, there has been little systematic assessment of its process and outcomes. To measure person-centered planning, we developed three instruments designed to assess its various aspects. We then constructed variables comprising both a Process and an Outcome Index using a combined rational-empirical method. Test-retest reliability and measures of internal consistency appeared adequate. Variable correlations and factor analysis were generally consistent with our conceptualization and resulting item and variable classifications. Practical implications for intervention integrity, program evaluation, and organizational performance are discussed.

  8. Emergence and temporal structure of Lead-Lag correlations in collective stock dynamics

    NASA Astrophysics Data System (ADS)

    Xia, Lisi; You, Daming; Jiang, Xin; Chen, Wei

    2018-07-01

    Understanding the correlations among stock returns is crucial for reducing the risk of investment in stock markets. As an important stylized correlation, lead-lag effect plays a major role in analyzing market volatility and deriving trading strategies. Here, we explore historical lead-lag relationships among stocks in the Chinese stock market. Strongly positive lagged correlations can be empirically observed. We demonstrate this lead-lag phenomenon is not constant but temporally emerges during certain periods. By introducing moving time window method, we transform the lead-lag dynamics into a series of asymmetric lagged correlation matrices. Dynamic lead-lag structures are uncovered in the form of temporal network structures. We find that the size of lead-lag group experienced a rapid drop during the year 2012, which signaled a re-balance of the stock market. On the daily timescale, we find the lead-lag structure exhibits several persistent patterns, which can be characterized by the Jaccard matrix. We show significant market events can be distinguished in the Jaccard matrix diagram. Taken together, we study an integration of all the temporal networks and identify several leading stock sectors, which are in accordance with the common Chinese economic fundamentals.

  9. Diametrical clustering for identifying anti-correlated gene clusters.

    PubMed

    Dhillon, Inderjit S; Marcotte, Edward M; Roshan, Usman

    2003-09-01

    Clustering genes based upon their expression patterns allows us to predict gene function. Most existing clustering algorithms cluster genes together when their expression patterns show high positive correlation. However, it has been observed that genes whose expression patterns are strongly anti-correlated can also be functionally similar. Biologically, this is not unintuitive: genes responding to the same stimuli, regardless of the nature of the response, are more likely to operate in the same pathways. We present a new diametrical clustering algorithm that explicitly identifies anti-correlated clusters of genes. Our algorithm proceeds by iteratively (i) re-partitioning the genes and (ii) computing the dominant singular vector of each gene cluster, each singular vector serving as the prototype of a 'diametric' cluster. We empirically show the effectiveness of the algorithm in identifying diametrical or anti-correlated clusters. Testing the algorithm on yeast cell cycle data, fibroblast gene expression data, and DNA microarray data from yeast mutants reveals that opposed cellular pathways can be discovered with this method. We present systems whose mRNA expression patterns, and likely their functions, oppose the yeast ribosome and proteasome, along with evidence for the inverse transcriptional regulation of a number of cellular systems. The two-step iteration is sketched below.
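
    A minimal sketch of the iteration, assuming rows of X are unit-normalized expression patterns; assigning each gene by the squared projection (x·v)² is what makes strongly anti-correlated genes land in the same cluster.

```python
import numpy as np

def diametrical_clustering(X, k, n_iter=50, seed=6):
    """Group vectors that are strongly correlated OR anti-correlated
    (after Dhillon et al., 2003). X: (genes, conditions)."""
    rng = np.random.default_rng(seed)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    V = X[rng.choice(len(X), size=k, replace=False)]   # initial prototypes
    for _ in range(n_iter):
        labels = np.argmax((X @ V.T) ** 2, axis=1)     # (i) re-partition
        for j in range(k):                             # (ii) dominant singular vector
            members = X[labels == j]
            if len(members):
                V[j] = np.linalg.svd(members, full_matrices=False)[2][0]
    return labels, V

# hypothetical usage: two anti-correlated expression families plus noise genes
rng = np.random.default_rng(7)
base = rng.normal(size=(1, 30))
X = np.vstack([base + 0.3 * rng.normal(size=(40, 30)),
               -base + 0.3 * rng.normal(size=(40, 30)),
               rng.normal(size=(40, 30))])
labels, _ = diametrical_clustering(X, k=2)
maj = (labels[:80] == np.bincount(labels[:80]).argmax()).mean()
print("fraction of the +/- pattern genes sharing a cluster:", round(maj, 2))
```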

  10. On the galaxy–halo connection in the EAGLE simulation

    DOE PAGES

    Desmond, Harry; Mao, Yao -Yuan; Wechsler, Risa H.; ...

    2017-06-13

    Empirical models of galaxy formation require assumptions about the correlations between galaxy and halo properties. These may be calibrated against observations or inferred from physical models such as hydrodynamical simulations. In this Letter, we use the EAGLE simulation to investigate the correlation of galaxy size with halo properties. We motivate this analysis by noting that the common assumption of angular momentum partition between baryons and dark matter in rotationally supported galaxies overpredicts both the spread in the stellar mass–size relation and the anticorrelation of size and velocity residuals, indicating a problem with the galaxy–halo connection it implies. We find the EAGLE galaxy population to perform significantly better on both statistics, and trace this success to the weakness of the correlations of galaxy size with halo mass, concentration and spin at fixed stellar mass. Thus, using these correlations in empirical models will enable fine-grained aspects of galaxy scalings to be matched.

  11. On the galaxy–halo connection in the EAGLE simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desmond, Harry; Mao, Yao -Yuan; Wechsler, Risa H.

    Empirical models of galaxy formation require assumptions about the correlations between galaxy and halo properties. These may be calibrated against observations or inferred from physical models such as hydrodynamical simulations. In this Letter, we use the EAGLE simulation to investigate the correlation of galaxy size with halo properties. We motivate this analysis by noting that the common assumption of angular momentum partition between baryons and dark matter in rotationally supported galaxies overpredicts both the spread in the stellar mass–size relation and the anticorrelation of size and velocity residuals, indicating a problem with the galaxy–halo connection it implies. We find the EAGLE galaxy population to perform significantly better on both statistics, and trace this success to the weakness of the correlations of galaxy size with halo mass, concentration and spin at fixed stellar mass. Thus, using these correlations in empirical models will enable fine-grained aspects of galaxy scalings to be matched.

  12. Empirical Correlations for the Solubility of Pressurant Gases in Cryogenic Propellants

    NASA Technical Reports Server (NTRS)

    Zimmerli, Gregory A.; Asipauskas, Marius; VanDresar, Neil T.

    2010-01-01

    We have analyzed data published by others reporting the solubility of helium in liquid hydrogen, oxygen, and methane, and of nitrogen in liquid oxygen, to develop empirical correlations for the mole fraction of these pressurant gases in the liquid phase as a function of temperature and pressure. The data, compiled and provided by NIST, are from a variety of sources and cover a large range of liquid temperatures and pressures. The correlations were developed to yield accurate estimates of the mole fraction of the pressurant gas in the cryogenic liquid at temperatures and pressures of interest to the propulsion community, yet the correlations developed are applicable over a much wider range. The mole fraction solubility of helium in all these liquids is less than 0.3% at the temperatures and pressures used in propulsion systems. When nitrogen is used as a pressurant for liquid oxygen, substantial contamination can result, though the diffusion into the liquid is slow.

  13. Implication of correlations among some common stability statistics - a Monte Carlo simulations.

    PubMed

    Piepho, H P

    1995-03-01

    Stability analysis of multilocation trials is often based on a mixed two-way model. Two stability measures in frequent use are the environmental variance (S_i^2) and the ecovalence (W_i). Under the two-way model the rank orders of the expected values of these two statistics are identical for a given set of genotypes. By contrast, empirical rank correlations among these measures are consistently low. This suggests that the two-way mixed model may not be appropriate for describing real data. To check this hypothesis, a Monte Carlo simulation was conducted. It revealed that the low empirical rank correlation among S_i^2 and W_i is most likely due to sampling errors. It is concluded that the observed low rank correlation does not invalidate the two-way model. The paper also discusses tests for homogeneity of S_i^2 as well as implications of the two-way model for the classification of stability statistics. Both statistics are computed from a genotype-by-environment table as sketched below.
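
    A minimal sketch on simulated yields: S_i^2 is the variance of genotype i across environments, and W_i is its contribution to the genotype-environment interaction sum of squares.

```python
import numpy as np

def stability_statistics(X):
    """Environmental variance S_i^2 and ecovalence W_i from a
    genotype-by-environment yield matrix X (rows = genotypes)."""
    gen_mean = X.mean(axis=1, keepdims=True)    # genotype means
    env_mean = X.mean(axis=0, keepdims=True)    # environment means
    grand = X.mean()
    S2 = X.var(axis=1, ddof=1)                  # environmental variance
    W = ((X - gen_mean - env_mean + grand) ** 2).sum(axis=1)   # ecovalence
    return S2, W

# hypothetical trial: 20 genotypes in 8 environments with environment effects
rng = np.random.default_rng(8)
X = rng.normal(5.0, 1.0, size=(20, 8)) + rng.normal(0, 0.5, size=(1, 8))
S2, W = stability_statistics(X)
# Spearman rank correlation via double argsort
rank = lambda v: np.argsort(np.argsort(v))
print("rank correlation of S2 and W:", round(np.corrcoef(rank(S2), rank(W))[0, 1], 2))
```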

  14. Correlation between multispectral photography and near-surface turbidities

    NASA Technical Reports Server (NTRS)

    Wertz, D. L.; Mealor, W. T.; Steele, M. L.; Pinson, J. W.

    1976-01-01

    Four-band multispectral photography obtained from an aerial platform at an altitude of about 10,000 feet has been utilized to measure near-surface turbidity at numerous sampling sites in the Ross Barnett Reservoir, Mississippi. Correlation of the photographs with turbidity measurements has been accomplished via an empirical mathematical model which depends upon visual color recognition when the composited photographs are examined on either an I squared S model 600 or a Spectral Data model 65 color-additive viewer. The mathematical model was developed utilizing least-squares, iterative, and standard statistical methods and includes a time-dependent term related to sun angle. This model is consistent with information obtained from two overflights of the target area - July 30, 1973 and October 30, 1973 - and now is being evaluated with regard to information obtained from a third overflight on November 8, 1974.

  15. Multifractal Detrended Fluctuation Analysis of Regional Precipitation Sequences Based on the CEEMDAN-WPT

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Cheng, Chen; Fu, Qiang; Liu, Chunlei; Li, Mo; Faiz, Muhammad Abrar; Li, Tianxiao; Khan, Muhammad Imran; Cui, Song

    2018-03-01

    In this paper, the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) algorithm is introduced into the complexity analysis of precipitation systems to improve on traditional complexity measures, which suffer from the mode mixing of Empirical Mode Decomposition (EMD) and the incomplete decomposition of Ensemble Empirical Mode Decomposition (EEMD). We combined CEEMDAN with the wavelet packet transform (WPT) and multifractal detrended fluctuation analysis (MF-DFA) to create the CEEMDAN-WPT-MFDFA, and used it to measure the complexity of the monthly precipitation sequences of 12 sub-regions in Harbin, Heilongjiang Province, China. The results show that there are significant differences in the monthly precipitation complexity of each sub-region in Harbin. The complexity of the northwest area of Harbin is the lowest and its predictability is the best. The complexity and predictability of the central and midwestern areas of Harbin are about average. The complexity of the southeast area of Harbin is higher than that of the northwest, central, and midwestern areas, and its predictability is worse. The complexity of Shuangcheng is the highest and its predictability is the worst of all the studied sub-regions. We used terrain and human activity as factors to analyze the causes of the local precipitation complexity. The results showed that the correlations between precipitation complexity and terrain are clear, whereas the correlations between precipitation complexity and human influence factors vary. The distribution of the precipitation complexity in this area may be generated by the superposed effects of human activities and natural factors such as terrain, general atmospheric circulation, land and sea location, and ocean currents. To evaluate the stability of the algorithm, the CEEMDAN-WPT-MFDFA was compared with the equal probability coarse graining LZC algorithm, fuzzy entropy, and wavelet entropy. The results show that the CEEMDAN-WPT-MFDFA was more stable than the three comparison methods under the influence of white and colored noise, which demonstrates its strong robustness to noise. The decomposition step is illustrated below.
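
    A minimal decomposition sketch assuming the open-source PyEMD package (pip install EMD-signal), whose CEEMDAN implementation follows Torres et al.; the series and parameters are hypothetical, and the paper's method feeds the resulting IMFs into WPT denoising and MF-DFA.

```python
import numpy as np
from PyEMD import CEEMDAN   # assumes the PyEMD package (pip install EMD-signal)

# hypothetical monthly precipitation-like series: annual cycle + trend + noise
rng = np.random.default_rng(12)
t = np.arange(600)
signal = 50 + 30 * np.sin(2 * np.pi * t / 12) + 0.02 * t + 5 * rng.normal(size=600)

ceemdan = CEEMDAN(trials=100)   # number of added-noise realizations in the ensemble
imfs = ceemdan(signal)          # rows are IMFs, ordered high to low frequency
print("number of IMFs:", imfs.shape[0])
```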

  16. Research on the method of information system risk state estimation based on clustering particle filter

    NASA Astrophysics Data System (ADS)

    Cui, Jia; Hong, Bei; Jiang, Xuepeng; Chen, Qinghua

    2017-05-01

    With the purpose of reinforcing the correlation analysis of risk assessment threat factors, a dynamic assessment method for safety risks based on particle filtering is proposed, with threat analysis at its core. Based on risk assessment standards, the method selects threat indicators, applies a particle filtering algorithm to calculate the influence weights of the threat indicators, and determines information system risk levels by combining them with state estimation theory. In order to improve the computational efficiency of the particle filtering algorithm, the k-means clustering algorithm is introduced: by clustering all particles and operating on the cluster centroids as representatives, the amount of computation is reduced. Empirical results indicate that the method can reasonably capture the mutual dependence and influence among risk elements. Under circumstances of limited information, it provides a scientific basis for formulating a risk management control strategy. A generic sketch of the clustered particle filter appears below.
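
    A generic sketch of the speed-up idea: run a bootstrap particle filter but evaluate the likelihood only at k-means centroids, reusing each centroid's weight for its member particles. The scalar random-walk model and all parameters are hypothetical; the paper applies the idea to threat indicators rather than a toy state.

```python
import numpy as np
from scipy.cluster.vq import kmeans2   # scipy >= 1.7 for the seed argument

def clustered_particle_filter(observations, n_particles=1000, k=20, seed=10):
    """Bootstrap particle filter for a 1-D random-walk state; the Gaussian
    likelihood is evaluated once per k-means centroid, not per particle."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(size=n_particles)
    estimates = []
    for z in observations:
        particles += rng.normal(scale=0.1, size=n_particles)   # propagate
        centroids, labels = kmeans2(particles[:, None], k, minit="++", seed=1)
        lik = np.exp(-0.5 * ((z - centroids[:, 0]) / 0.5) ** 2)
        weights = lik[labels]                                   # reuse per cluster
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))
        idx = rng.choice(n_particles, size=n_particles, p=weights)  # resample
        particles = particles[idx]
    return np.array(estimates)

# hypothetical usage: noisy observations of a drifting state
rng = np.random.default_rng(11)
truth = np.cumsum(rng.normal(scale=0.1, size=100))
obs = truth + rng.normal(scale=0.5, size=100)
est = clustered_particle_filter(obs)
print("RMSE:", round(np.sqrt(np.mean((est - truth) ** 2)), 3))
```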

  17. Personal sleep pattern visualization using sequence-based kernel self-organizing map on sound data.

    PubMed

    Wu, Hongle; Kato, Takafumi; Yamada, Tomomi; Numao, Masayuki; Fukui, Ken-Ichi

    2017-07-01

    We propose a method to discover sleep patterns via clustering of sound events recorded during sleep. The proposed method extends the conventional self-organizing map algorithm by kernelization and sequence-based technologies to obtain a fine-grained map that visualizes the distribution and changes of sleep-related events. We introduced features widely applied in sound processing and popular kernel functions to the proposed method to evaluate and compare performance. The proposed method provides a new aspect of sleep monitoring because the results demonstrate that sound events can be directly correlated to an individual's sleep patterns. In addition, by visualizing the transition of cluster dynamics, sleep-related sound events were found to relate to the various stages of sleep. Therefore, these results empirically warrant future study into the assessment of personal sleep quality using sound data. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Use of Principal Components Analysis and Kriging to Predict Groundwater-Sourced Rural Drinking Water Quality in Saskatchewan

    PubMed Central

    McLeod, Lianne; Bharadwaj, Lalita; Epp, Tasha; Waldner, Cheryl L.

    2017-01-01

    Groundwater drinking water supply surveillance data were accessed to summarize water quality delivered as public and private water supplies in southern Saskatchewan as part of an exposure assessment for epidemiologic analyses of associations between water quality and type 2 diabetes or cardiovascular disease. Arsenic in drinking water has been linked to a variety of chronic diseases and previous studies have identified multiple wells with arsenic above the drinking water standard of 0.01 mg/L; therefore, arsenic concentrations were of specific interest. Principal components analysis was applied to obtain principal component (PC) scores to summarize mixtures of correlated parameters identified as health standards and those identified as aesthetic objectives in the Saskatchewan Drinking Water Quality Standards and Objective. Ordinary, universal, and empirical Bayesian kriging were used to interpolate arsenic concentrations and PC scores in southern Saskatchewan, and the results were compared. Empirical Bayesian kriging performed best across all analyses, based on having the greatest number of variables for which the root mean square error was lowest. While all of the kriging methods appeared to underestimate high values of arsenic and PC scores, empirical Bayesian kriging was chosen to summarize large scale geographic trends in groundwater-sourced drinking water quality and assess exposure to mixtures of trace metals and ions. PMID:28914824

  19. Gene-Environment Interplay in Twin Models

    PubMed Central

    Hatemi, Peter K.

    2013-01-01

    In this article, we respond to Shultziner’s critique that argues that identical twins are more alike not because of genetic similarity, but because they select into more similar environments and respond to stimuli in comparable ways, and that these effects bias twin model estimates to such an extent that they are invalid. The essay further argues that the theory and methods that undergird twin models, as well as the empirical studies which rely upon them, are unaware of these potential biases. We correct this and other misunderstandings in the essay and find that gene-environment (GE) interplay is a well-articulated concept in behavior genetics and political science, operationalized as gene-environment correlation and gene-environment interaction. Both are incorporated into interpretations of the classical twin design (CTD) and estimated in numerous empirical studies through extensions of the CTD. We then conduct simulations to quantify the influence of GE interplay on estimates from the CTD. Due to the criticism’s mischaracterization of the CTD and GE interplay, combined with the absence of any empirical evidence to counter what is presented in the extant literature and this article, we conclude that the critique does not enhance our understanding of the processes that drive political traits, genetic or otherwise. PMID:24808718

  20. Use of Principal Components Analysis and Kriging to Predict Groundwater-Sourced Rural Drinking Water Quality in Saskatchewan.

    PubMed

    McLeod, Lianne; Bharadwaj, Lalita; Epp, Tasha; Waldner, Cheryl L

    2017-09-15

    Groundwater drinking water supply surveillance data were accessed to summarize water quality delivered as public and private water supplies in southern Saskatchewan as part of an exposure assessment for epidemiologic analyses of associations between water quality and type 2 diabetes or cardiovascular disease. Arsenic in drinking water has been linked to a variety of chronic diseases and previous studies have identified multiple wells with arsenic above the drinking water standard of 0.01 mg/L; therefore, arsenic concentrations were of specific interest. Principal components analysis was applied to obtain principal component (PC) scores to summarize mixtures of correlated parameters identified as health standards and those identified as aesthetic objectives in the Saskatchewan Drinking Water Quality Standards and Objective. Ordinary, universal, and empirical Bayesian kriging were used to interpolate arsenic concentrations and PC scores in southern Saskatchewan, and the results were compared. Empirical Bayesian kriging performed best across all analyses, based on having the greatest number of variables for which the root mean square error was lowest. While all of the kriging methods appeared to underestimate high values of arsenic and PC scores, empirical Bayesian kriging was chosen to summarize large scale geographic trends in groundwater-sourced drinking water quality and assess exposure to mixtures of trace metals and ions.

  1. Calculation of Host-Guest Binding Affinities Using a Quantum-Mechanical Energy Model.

    PubMed

    Muddana, Hari S; Gilson, Michael K

    2012-06-12

    The prediction of protein-ligand binding affinities is of central interest in computer-aided drug discovery, but it is still difficult to achieve a high degree of accuracy. Recent studies suggesting that available force fields may be a key source of error motivate the present study, which reports the first mining minima (M2) binding affinity calculations based on a quantum mechanical energy model, rather than an empirical force field. We apply a semi-empirical quantum-mechanical energy function, PM6-DH+, coupled with the COSMO solvation model, to 29 host-guest systems with a wide range of measured binding affinities. After correction for a systematic error, which appears to derive from the treatment of polar solvation, the computed absolute binding affinities agree well with experimental measurements, with a mean error 1.6 kcal/mol and a correlation coefficient of 0.91. These calculations also delineate the contributions of various energy components, including solute energy, configurational entropy, and solvation free energy, to the binding free energies of these host-guest complexes. Comparison with our previous calculations, which used empirical force fields, point to significant differences in both the energetic and entropic components of the binding free energy. The present study demonstrates successful combination of a quantum mechanical Hamiltonian with the M2 affinity method.
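
    A minimal sketch of the post-hoc evaluation described above: remove a constant systematic error from computed binding affinities, then report the mean unsigned error and the Pearson correlation. The affinities below are hypothetical stand-ins; the underlying M2/PM6-DH+ calculations are far beyond a short example.

        import numpy as np

        calc = np.array([-9.1, -7.4, -5.8, -11.0, -6.3, -8.2])  # computed, kcal/mol
        expt = np.array([-7.0, -5.6, -4.1, -9.2, -4.5, -6.4])   # measured, kcal/mol

        offset = np.mean(calc - expt)      # estimate of the systematic error
        corrected = calc - offset          # shift all computed affinities

        mue = np.mean(np.abs(corrected - expt))    # mean unsigned error
        r = np.corrcoef(corrected, expt)[0, 1]     # Pearson correlation
        print(f"offset = {offset:.2f} kcal/mol, MUE = {mue:.2f}, r = {r:.2f}")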

  2. Oil price and exchange rate co-movements in Asian countries: Detrended cross-correlation approach

    NASA Astrophysics Data System (ADS)

    Hussain, Muntazir; Zebende, Gilney Figueira; Bashir, Usman; Donghong, Ding

    2017-01-01

    Most empirical literature investigates the relation between oil prices and exchange rates through different models. These models measure the relationship on two time scales (long and short term) and often fail to observe the co-movement of these variables at other time scales. We apply a detrended cross-correlation approach (DCCA) to investigate the co-movements of the oil price and exchange rate in 12 Asian countries. This approach determines the co-movement of oil price and exchange rate at different time scales. The exchange rate and oil price time series exhibit unit roots, which makes their correlation and cross-correlation difficult to measure: results become spurious when periodic trends or unit roots are present in these series. The DCCA approach measures the cross-correlation at different time scales while controlling for the unit root problem. Our empirical results support the co-movement of oil prices and exchange rates and indicate a weak negative cross-correlation between oil price and exchange rate for most Asian countries included in our sample. The results have important monetary, fiscal, inflationary, and trade policy implications for these countries.
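
    For readers unfamiliar with DCCA, the sketch below computes a detrended cross-correlation coefficient (in the spirit of Zebende's rho_DCCA) for one box size n in pure NumPy; the paper sweeps n to compare scales, and the simulated series here merely stand in for oil prices and exchange rates.

        import numpy as np

        def rho_dcca(x, y, n):
            # Detrended cross-correlation coefficient at box size n.
            X = np.cumsum(x - np.mean(x))      # integrated profiles
            Y = np.cumsum(y - np.mean(y))
            t = np.arange(n)
            f2_xy, f2_xx, f2_yy = [], [], []
            for b in range(len(X) // n):       # non-overlapping boxes
                xs, ys = X[b*n:(b+1)*n], Y[b*n:(b+1)*n]
                rx = xs - np.polyval(np.polyfit(t, xs, 1), t)   # detrend box
                ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
                f2_xy.append(np.mean(rx * ry))
                f2_xx.append(np.mean(rx * rx))
                f2_yy.append(np.mean(ry * ry))
            return np.mean(f2_xy) / np.sqrt(np.mean(f2_xx) * np.mean(f2_yy))

        rng = np.random.default_rng(1)
        oil = np.cumsum(rng.normal(size=2000))                 # stand-in oil price
        fx = -0.3 * oil + np.cumsum(rng.normal(size=2000))     # weakly opposed FX rate
        print(f"rho_DCCA(n=32) = {rho_dcca(np.diff(oil), np.diff(fx), 32):.3f}")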

  3. Integrated ensemble noise-reconstructed empirical mode decomposition for mechanical fault detection

    NASA Astrophysics Data System (ADS)

    Yuan, Jing; Ji, Feng; Gao, Yuan; Zhu, Jun; Wei, Chenjun; Zhou, Yu

    2018-05-01

    A new branch of fault detection utilizes noise, such as by enhancing, adding, or estimating it, so as to improve the signal-to-noise ratio (SNR) and extract fault signatures. Within this branch, ensemble noise-reconstructed empirical mode decomposition (ENEMD) is a novel noise-utilization method that ameliorates mode mixing and denoises the intrinsic mode functions (IMFs). Despite its potential for detecting weak and multiple faults, the method still suffers from two major problems: a user-defined parameter and weak capability in high-SNR cases. Hence, integrated ensemble noise-reconstructed empirical mode decomposition is proposed to overcome these drawbacks; it is improved by two noise estimation techniques for different SNRs together with a noise estimation strategy. Independent of any manual setup, noise estimation by minimax thresholding is improved for the low-SNR case and is especially effective for signature enhancement. For approximating weak noise precisely, noise estimation by local reconfiguration using singular value decomposition (SVD) is proposed for the high-SNR case, which is particularly powerful for reducing mode mixing. Here, the sliding window for projecting the phase space is optimally designed by correlation minimization, and a reasonable singular order for the local reconfiguration to estimate the noise is determined from the inflection point of the increasing trend of normalized singular entropy. Furthermore, the noise estimation strategy, i.e., how to select between the two estimation techniques, including the critical case, is developed and discussed for different SNRs by means of the possible noise-only IMF family. The method is validated by repeatable simulations to demonstrate its overall performance and especially to confirm its capability for noise estimation. Finally, the method is applied to detect a local wear fault in a dual-axis stabilized platform and a gear crack in an operating electric locomotive to verify its effectiveness and feasibility.
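
    One ingredient of the pipeline above can be illustrated compactly: decompose with EEMD and screen out pseudo-components whose cross-correlation with the raw signal is low. This hedged sketch assumes the PyEMD package (pip install EMD-signal) and a synthetic signal; the SVD and minimax noise-estimation stages of the proposed method are not reproduced.

        import numpy as np
        from PyEMD import EEMD   # from the EMD-signal package

        rng = np.random.default_rng(2)
        t = np.linspace(0.0, 1.0, 1024)
        signal = (np.sin(2*np.pi*13*t) + 0.4*np.sin(2*np.pi*90*t)
                  + 0.3*rng.normal(size=t.size))   # synthetic fault-like signal

        imfs = EEMD(trials=50).eemd(signal)        # ensemble of 50 noise realizations

        # Keep IMFs whose correlation with the raw signal clears a threshold.
        corrs = [abs(np.corrcoef(imf, signal)[0, 1]) for imf in imfs]
        kept = [imf for imf, c in zip(imfs, corrs) if c > 0.1]
        denoised = np.sum(kept, axis=0)
        print(f"{len(imfs)} IMFs, {len(kept)} kept after correlation screening")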

  4. Semi-empirical spectrophotometric (SESp) method for the indirect determination of the ratio of cationic micellar binding constants of counterions X⁻ and Br⁻(K(X)/K(Br)).

    PubMed

    Khan, Mohammad Niyaz; Yusof, Nor Saadah Mohd; Razak, Norazizah Abdul

    2013-01-01

    The semi-empirical spectrophotometric (SESp) method, for the indirect determination of ion exchange constants (K(X)(Br)) of ion exchange processes occurring between counterions (X⁻ and Br⁻) at the cationic micellar surface, is described in this article. The method uses an anionic spectrophotometric probe molecule, N-(2-methoxyphenyl)phthalamate ion (1⁻), which measures the effects of varying concentrations of an inert inorganic or organic salt (Na(v)X, v = 1, 2) on the absorbance (A(ob)) at 310 nm of samples containing constant concentrations of 1⁻, NaOH and cationic micelles. The observed data fit satisfactorily to an empirical equation which gives the values of two empirical constants. These empirical constants lead to the determination of K(X)(Br) (= K(X)/K(Br), with K(X) and K(Br) representing the cationic micellar binding constants of counterions X⁻ and Br⁻). This method gives values of K(X)(Br) for both moderately hydrophobic and hydrophilic X⁻. The values of K(X)(Br) obtained by using this method are comparable with the corresponding values of K(X)(Br) obtained by the use of the semi-empirical kinetic (SEK) method for different moderately hydrophobic X⁻. The values of K(X)(Br) for X⁻ = Cl⁻ and 2,6-Cl₂C₆H₃CO₂⁻, obtained by the use of the SESp and SEK methods, are similar to those obtained by the use of other conventional methods.

  5. Trend extraction using empirical mode decomposition and statistical empirical mode decomposition: Case study: Kuala Lumpur stock market

    NASA Astrophysics Data System (ADS)

    Jaber, Abobaker M.

    2014-12-01

    Two nonparametric methods for prediction and modeling of financial time series signals are proposed. The proposed techniques are designed to handle non-stationary and non-linear behavior and to extract meaningful signals for reliable prediction. Using the Fourier transform (FT), the methods select the significant decomposed signals to be employed for signal prediction. The proposed techniques are developed by coupling the Holt-Winters method with empirical mode decomposition (EMD) and with its extension, statistical empirical mode decomposition (SEMD), which broadens the scope of EMD by smoothing. To show the performance of the proposed techniques, we analyze the daily closing price of the Kuala Lumpur stock market index.
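
    A minimal sketch of the EMD-plus-Holt-Winters coupling, under stated assumptions: PyEMD supplies the decomposition, statsmodels' ExponentialSmoothing forecasts each component, and the component forecasts are summed. The price series is simulated, not Kuala Lumpur index data, and SEMD's smoothing step is omitted.

        import numpy as np
        from PyEMD import EMD                      # from the EMD-signal package
        from statsmodels.tsa.holtwinters import ExponentialSmoothing

        rng = np.random.default_rng(3)
        price = 1500.0 + np.cumsum(rng.normal(0.0, 5.0, 400))  # stand-in daily closes

        emd = EMD()
        emd.emd(price)
        imfs, residue = emd.get_imfs_and_residue()

        horizon = 10
        forecast = np.zeros(horizon)
        for comp in list(imfs) + [residue]:        # forecast each component
            fit = ExponentialSmoothing(comp, trend="add").fit()
            forecast += fit.forecast(horizon)      # recombine by summation
        print("10-step-ahead forecast:", np.round(forecast, 1))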

  6. Aircraft Noise Prediction Program (ANOPP) Fan Noise Prediction for Small Engines

    NASA Technical Reports Server (NTRS)

    Hough, Joe W.; Weir, Donald S.

    1996-01-01

    The Fan Noise Module of ANOPP is used to predict the broadband noise and pure tones for axial flow compressors or fans. The module, based on the method developed by M. F. Heidmann, uses empirical functions to predict fan noise spectra as a function of frequency and polar directivity. Previous studies have determined the need to modify the module to better correlate measurements of fan noise from engines in the 3000- to 6000-pound thrust class. Additional measurements made by AlliedSignal have confirmed the need to revise the ANOPP fan noise method for smaller engines. This report describes the revisions to the fan noise method which have been verified with measured data from three separate AlliedSignal fan engines. Comparisons of the revised prediction show a significant improvement in overall and spectral noise predictions.

  7. Rate correlation for condensation of pure vapor on turbulent, subcooled liquid

    NASA Technical Reports Server (NTRS)

    Brown, J. Steven; Khoo, Boo Cheong; Sonin, Ain A.

    1990-01-01

    An empirical correlation is presented for the condensation of pure vapor on a subcooled, turbulent liquid with a shear-free interface. The correlation expresses the dependence of the condensation rate on fluid properties, on the liquid-side turbulence (which is imposed from below), and on the effects of buoyancy in the interfacial thermal layer. The correlation is derived from experiments with steam and water, but under conditions which simulate typical cryogenic fluids.

  8. Correlation of published data on the solubility of methane in H₂O-NaCl solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coco, L.T.; Johnson, A.E. Jr.; Bebout, D.G.

    1981-01-01

    A new correlation of the available published data for the solubility of methane in water was developed, based on fundamental thermodynamic relationships. An empirical relationship for the salting-out coefficient of NaCl for methane solubility in water was determined as a function of temperature. Root mean square and average deviations for the new correlation, the Haas correlation, and the revised Blount equation are compared.

  9. [Psychoanalysis and psychoanalytic oriented psychotherapy: differences and similarities].

    PubMed

    Rössler-Schülein, Hemma; Löffler-Stastka, Henriette

    2013-01-01

    Psychoanalysis, as well as the psychoanalytic psychotherapy derived from it, are efficient methods offered by the Austrian health care system for the treatment of anxiety, depression, personality disorders, and neurotic and somatoform disorders. Both methods apply similar basic treatment techniques; in practice they are therefore often differentiated pragmatically, by the frequency of sessions or the use of the couch, a distinction that appears vague in the light of empirical studies. This overview focuses on a potential differentiation: the objective and subjective dimensions of the indication process. Concerning the latter, it must be investigated whether reflective functioning and ego integration can be enhanced in the patient during the interaction process between patient and psychoanalyst. Empirical data underline the necessity of investigating the extent to which externalizing defence processes are used and of integrating such factors into the decision and indication process. Differing treatment aims offer another way to differentiate the two: psychoanalytic psychotherapy aims more at circumscribed problem foci, whereas the capability for self-reflection is one of the most prominent treatment effects of psychoanalysis, resulting in ongoing symptom reduction and resilience. The most prominent differentiation lies in the use of technical neutrality: within psychoanalytic psychotherapy, neutrality sometimes has to be suspended in order to stop severe acting out. Concerning the differentiation between psychoanalysis and psychoanalytic psychotherapy, there is empirical evidence that treatment efficacy is correlated not with the duration of the treatment but with the frequency of sessions. These results support the assumption that the dosage of specific and appropriate psychoanalytic techniques facilitates sustained therapeutic change.

  10. Correlations by the entrainment theory of thermodynamic effects for developed cavitation in venturis and comparisons with ogive data

    NASA Technical Reports Server (NTRS)

    Billet, M. L.; Holl, J. W.; Weir, D. S.

    1975-01-01

    A semi-empirical entrainment theory was employed to correlate the measured temperature depression, Delta T, in a developed cavity for a venturi. This theory correlates Delta T in terms of the dimensionless numbers of Nusselt, Reynolds, Froude, Weber, and Peclet, and the dimensionless cavity length, L/D. These correlations are then compared with similar correlations for zero and quarter caliber ogives. In addition, cavitation number data for both limited and developed cavitation in venturis are presented.

  11. Prediction of Partition Coefficients of Organic Compounds between SPME/PDMS and Aqueous Solution

    PubMed Central

    Chao, Keh-Ping; Lu, Yu-Ting; Yang, Hsiu-Wen

    2014-01-01

    Polydimethylsiloxane (PDMS) is commonly used as the coated polymer in the solid phase microextraction (SPME) technique. In this study, the partition coefficients of organic compounds between SPME/PDMS and the aqueous solution were compiled from literature sources. A correlation analysis for the partition coefficients was conducted to interpret the effect of the compounds' physicochemical properties and descriptors on the partitioning process. The PDMS-water partition coefficients were significantly correlated with the polarizability of the organic compounds (r = 0.977, p < 0.05). An empirical model, consisting of the polarizability, the molecular connectivity index, and an indicator variable, was developed to appropriately predict the partition coefficients of 61 organic compounds in the training set. The predictive ability of the empirical model was demonstrated by using it on a test set of 26 chemicals not included in the training set. The empirical model, which applies straightforwardly calculated molecular descriptors to estimate the PDMS-water partition coefficient, will contribute to the practical applications of the SPME technique. PMID:24534804
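
    The regression step above is ordinary multiple linear regression; a sketch with the same 61/26 train/test split follows. The descriptor values and coefficients are simulated placeholders, not the compiled literature data.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 87
        X = np.column_stack([
            rng.uniform(5.0, 25.0, n),   # polarizability (hypothetical units)
            rng.uniform(1.0, 8.0, n),    # molecular connectivity index
            rng.integers(0, 2, n),       # indicator variable
            np.ones(n),                  # intercept
        ])
        log_k = X @ np.array([0.12, 0.35, -0.40, -0.80]) + rng.normal(0.0, 0.2, n)

        train, test = np.arange(61), np.arange(61, 87)   # 61 train / 26 test
        beta, *_ = np.linalg.lstsq(X[train], log_k[train], rcond=None)
        pred = X[test] @ beta
        print(f"test-set Pearson r = {np.corrcoef(pred, log_k[test])[0, 1]:.3f}")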

  12. Empirical microeconomics action functionals

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Du, Xin; Tanputraman, Winson

    2015-06-01

    A statistical generalization of microeconomics has been made in Baaquie (2013), where the market price of every traded commodity, at each instant of time, is considered to be an independent random variable. The dynamics of commodity market prices is modeled by an action functional, and the focus of this paper is to empirically determine the action functionals for different commodities. The correlation functions of the model are defined using a Feynman path integral. The model is calibrated using the unequal time correlation of the market commodity prices as well as their cubic and quartic moments using a perturbation expansion. The consistency of the perturbation expansion is verified by a numerical evaluation of the path integral. Nine commodities drawn from the energy, metal and grain sectors are studied and their market behavior is described by the model to an accuracy of over 90% using only six parameters. The paper empirically establishes the existence of the action functional for commodity prices that was postulated to exist in Baaquie (2013).

  13. An Empirical Investigation of the Proposition that 'School Is Work': A Comparison of Personality-Performance Correlations in School and Work Settings

    ERIC Educational Resources Information Center

    Lounsbury, John W.; Gibson, Lucy W.; Sundstrom, Eric; Wilburn, Denise; Loveland, James M.

    2004-01-01

    An empirical test of Munson and Rubenstein's (1992) assertion that 'school is work' compared a sample of students in a high school with a sample of workers in a manufacturing plant in the same metropolitan area. Data from both samples included scores on six personality traits--Conscientiousness, Agreeableness, Openness, Emotional Stability,…

  14. Internalized Heterosexism: Measurement, Psychosocial Correlates, and Research Directions

    ERIC Educational Resources Information Center

    Szymanski, Dawn M.; Kashubeck-West, Susan; Meyer, Jill

    2008-01-01

    This article provides an integrated critical review of the literature on internalized heterosexism/internalized homophobia (IH), its measurement, and its psychosocial correlates. It describes the psychometric properties of six published measures used to operationalize the construct of IH. It also critically reviews empirical studies on correlates…

  15. Empirical research in medical ethics: how conceptual accounts on normative-empirical collaboration may improve research practice.

    PubMed

    Salloch, Sabine; Schildmann, Jan; Vollmann, Jochen

    2012-04-13

    The methodology of medical ethics during the last few decades has shifted from a predominant use of normative-philosophical analyses to an increasing involvement of empirical methods. The articles which have been published in the course of this so-called 'empirical turn' can be divided into conceptual accounts of empirical-normative collaboration and studies which use socio-empirical methods to investigate ethically relevant issues in concrete social contexts. A considered reference to normative research questions can be expected from good quality empirical research in medical ethics. However, a significant proportion of empirical studies currently published in medical ethics lacks such linkage between the empirical research and the normative analysis. In the first part of this paper, we will outline two typical shortcomings of empirical studies in medical ethics with regard to a link between normative questions and empirical data: (1) The complete lack of normative analysis, and (2) cryptonormativity and a missing account with regard to the relationship between 'is' and 'ought' statements. Subsequently, two selected concepts of empirical-normative collaboration will be presented and how these concepts may contribute to improve the linkage between normative and empirical aspects of empirical research in medical ethics will be demonstrated. Based on our analysis, as well as our own practical experience with empirical research in medical ethics, we conclude with a sketch of concrete suggestions for the conduct of empirical research in medical ethics. High quality empirical research in medical ethics is in need of a considered reference to normative analysis. In this paper, we demonstrate how conceptual approaches of empirical-normative collaboration can enhance empirical research in medical ethics with regard to the link between empirical research and normative analysis.

  16. Empirical research in medical ethics: How conceptual accounts on normative-empirical collaboration may improve research practice

    PubMed Central

    2012-01-01

    Background The methodology of medical ethics during the last few decades has shifted from a predominant use of normative-philosophical analyses to an increasing involvement of empirical methods. The articles which have been published in the course of this so-called 'empirical turn' can be divided into conceptual accounts of empirical-normative collaboration and studies which use socio-empirical methods to investigate ethically relevant issues in concrete social contexts. Discussion A considered reference to normative research questions can be expected from good quality empirical research in medical ethics. However, a significant proportion of empirical studies currently published in medical ethics lacks such linkage between the empirical research and the normative analysis. In the first part of this paper, we will outline two typical shortcomings of empirical studies in medical ethics with regard to a link between normative questions and empirical data: (1) The complete lack of normative analysis, and (2) cryptonormativity and a missing account with regard to the relationship between 'is' and 'ought' statements. Subsequently, two selected concepts of empirical-normative collaboration will be presented and how these concepts may contribute to improve the linkage between normative and empirical aspects of empirical research in medical ethics will be demonstrated. Based on our analysis, as well as our own practical experience with empirical research in medical ethics, we conclude with a sketch of concrete suggestions for the conduct of empirical research in medical ethics. Summary High quality empirical research in medical ethics is in need of a considered reference to normative analysis. In this paper, we demonstrate how conceptual approaches of empirical-normative collaboration can enhance empirical research in medical ethics with regard to the link between empirical research and normative analysis. PMID:22500496

  17. Depression assessment after traumatic brain injury: an empirically based classification method.

    PubMed

    Seel, Ronald T; Kreutzer, Jeffrey S

    2003-11-01

    To describe the patterns of depression in patients with traumatic brain injury (TBI), to evaluate the psychometric properties of the Neurobehavioral Functioning Inventory (NFI) Depression Scale, and to classify empirically NFI Depression Scale scores. Depressive symptoms were characterized by using the NFI Depression Scale, the Beck Depression Inventory (BDI), and the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) Depression Scale. An outpatient clinic within a Traumatic Brain Injury Model Systems center. A demographically diverse sample of 172 outpatients with TBI, evaluated between 1996 and 2000. Not applicable. The NFI, BDI, and MMPI-2 Depression Scale. The Cronbach alpha, analysis of variance, Pearson correlations, and canonical discriminant function analysis were used to examine the psychometric properties of the NFI Depression Scale. Patients with TBI most frequently reported problems with frustration (81%), restlessness (73%), rumination (69%), boredom (66%), and sadness (66%) with the NFI Depression Scale. The percentages of patients classified as depressed with the BDI and the NFI Depression Scale were 37% and 30%, respectively. The Cronbach alpha for the NFI Depression Scale was .93, indicating a high degree of internal consistency. As hypothesized, NFI Depression Scale scores correlated highly with BDI (r=.765) and MMPI-2 Depression Scale T scores (r=.752). The NFI Depression Scale did not correlate significantly with the MMPI-2 Hypomania Scale, thus showing discriminant validity. Normal and clinically depressed BDI scores were most likely to be accurately predicted by the NFI Depression Scale, with 81% and 87% of grouped cases, respectively, correctly classified. Normal and depressed MMPI-2 Depression Scale scores were accurately predicted by the NFI Depression Scale, with 75% and 83% of grouped cases correctly classified, respectively. Patients' NFI Depression Scale scores were mapped to the corresponding BDI categories, and 3 NFI score classifications emerged: minimally depressed (13-28), borderline depressed (29-42), and clinically depressed (43-65). Our study provided further evidence that screening for depression should be a standard component of TBI assessment protocols. Between 30% and 38% of patients with TBI were classified as depressed with the NFI Depression Scale and the BDI, respectively. Our findings also provided empirical evidence that the NFI Depression Scale is a useful tool for classifying postinjury depression.
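
    Two small, self-contained pieces of the psychometrics above can be shown directly: Cronbach's alpha from an item-score matrix, and mapping a total NFI Depression Scale score onto the three empirically derived bands (13-28, 29-42, 43-65). The simulated items (13 items scored 1-5) are placeholders, not patient data.

        import numpy as np

        def cronbach_alpha(items):
            # items: (n_respondents, n_items) matrix of item scores.
            k = items.shape[1]
            item_var = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_var / total_var)

        def classify_nfi(score):
            # Bands reported above: 13-28, 29-42, 43-65.
            if score <= 28:
                return "minimally depressed"
            if score <= 42:
                return "borderline depressed"
            return "clinically depressed"

        rng = np.random.default_rng(5)
        latent = rng.normal(size=(172, 1))                  # shared trait
        items = np.clip(np.round(2.5 + latent + rng.normal(0.0, 0.7, (172, 13))), 1, 5)
        total = int(items.sum(axis=1)[0])
        print(f"alpha = {cronbach_alpha(items):.2f}")
        print(f"score {total} -> {classify_nfi(total)}")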

  18. Drag and stability characteristics of a variety of reefed and unreefed parachute configurations at Mach 1.80 with an empirical correlation for supersonic Mach numbers

    NASA Technical Reports Server (NTRS)

    Couch, L. M.

    1975-01-01

    An investigation was conducted at Mach 1.80 in the Langley 4-foot supersonic pressure tunnel to determine the effects of variation in reefing ratio and geometric porosity on the drag and stability characteristics of four basic canopy types deployed in the wake of a cone-cylinder forebody. The basic designs included cross, hemisflo, disk-gap-band, and extended-skirt canopies; however, modular cross and standard flat canopies and a ballute were also investigated. An empirical correlation was determined which provides a fair estimation of the drag coefficients in transonic and supersonic flow for parachutes of specified geometric porosity and reefing ratio.

  19. Application of empirical and mechanistic-empirical pavement design procedures to Mn/ROAD concrete pavement test sections

    DOT National Transportation Integrated Search

    1997-05-01

    Current pavement design procedures are based principally on empirical approaches. The current trend toward developing more mechanistic-empirical type pavement design methods led Minnesota to develop the Minnesota Road Research Project (Mn/ROAD), a lo...

  20. A cohort study evaluation of maternal PCB exposure related to time to pregnancy in daughters.

    PubMed

    Gennings, Chris; Carrico, Caroline; Factor-Litvak, Pam; Krigbaum, Nickilou; Cirillo, Piera M; Cohn, Barbara A

    2013-08-20

    Polychlorinated biphenyls (PCBs) remain ubiquitous environmental contaminants. Developmental exposures are suspected to impact reproduction. Analysis of mixtures of PCBs is problematic because the components have a complex correlation structure; combined with limited sample sizes, this makes standard regression strategies problematic. We compared the results of a novel, empirical method to those based on categorization of PCB compounds by (1) hypothesized biological activity previously proposed and widely applied, and (2) degree of ortho-substitution (mono, di, tri), in a study of the relation between maternal serum PCBs and daughters' time to pregnancy (TTP). We measured PCBs in maternal serum samples collected in the early postpartum in 289 daughters in the Child Health and Development Studies birth cohort. We queried time to pregnancy in these daughters 28-31 years later. We applied a novel weighted quantile sum approach to find the bad-actor compounds in the PCB mixture found in maternal serum. The approach includes empirical estimation of the weights through a bootstrap step which accounts for the variation in the estimated weights. Bootstrap analyses indicated the dominant functionality groups associated with longer TTP were the dioxin-like, anti-estrogenic group (average weight, 22%) and PCBs not previously classified by biological activity (54%). In contrast, the unclassified PCBs were not important in the association with shorter TTP, where the anti-estrogenic groups and the PB-inducers group played a more important role (60% and 23%, respectively). The highly chlorinated PCBs (average weight, 89%) were mostly associated with longer TTP; in contrast, the degree of chlorination was less discriminating for shorter TTP. Finally, PCB 56 showed the strongest relationship with TTP, with a weight of 47%. Our empirical approach found some associations previously identified by the two classification schemes, but also identified other bad actors. This empirical method can generate hypotheses about mixture effects and mechanisms and overcomes some of the limitations of standard regression techniques.
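
    A hedged sketch of the weighted quantile sum idea: quartile-score each component, estimate non-negative weights that sum to one inside a linear model, and bootstrap the fit to gauge weight stability. The outcome here is a continuous stand-in (e.g., log TTP), simpler than the study's analysis, and all data are simulated.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import rankdata

        rng = np.random.default_rng(6)
        n, p = 289, 6
        corr = 0.5*np.eye(p) + 0.5*np.ones((p, p))        # correlated components
        pcbs = np.exp(rng.normal(size=(n, p)) @ np.linalg.cholesky(corr).T)

        q = (rankdata(pcbs, axis=0) - 1) * 4 // n         # quartile scores 0..3
        true_w = np.array([0.5, 0.3, 0.2, 0.0, 0.0, 0.0])
        y = 0.4*(q @ true_w) + rng.normal(0.0, 1.0, n)    # continuous stand-in outcome

        def fit_wqs(qm, ym):
            # Estimate weights with w >= 0 and sum(w) = 1 inside a linear model.
            def sse(th):
                b0, b1, w = th[0], th[1], th[2:]
                return np.sum((ym - b0 - b1*(qm @ w))**2)
            x0 = np.r_[0.0, 0.1, np.full(p, 1.0/p)]
            res = minimize(sse, x0, method="SLSQP",
                           bounds=[(None, None)]*2 + [(0, 1)]*p,
                           constraints=({"type": "eq",
                                         "fun": lambda th: th[2:].sum() - 1},))
            return res.x[2:]

        boot = [fit_wqs(q[idx], y[idx])
                for idx in (rng.integers(0, n, n) for _ in range(100))]
        print("mean bootstrap weights:", np.round(np.mean(boot, axis=0), 2))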

  1. Strong anticipation and long-range cross-correlation: Application of detrended cross-correlation analysis to human behavioral data

    NASA Astrophysics Data System (ADS)

    Delignières, Didier; Marmelat, Vivien

    2014-01-01

    In this paper, we analyze empirical data, accounting for coordination processes between complex systems (bimanual coordination, interpersonal coordination, and synchronization with a fractal metronome), by using a recently proposed method: detrended cross-correlation analysis (DCCA). This work is motivated by the strong anticipation hypothesis, which supposes that coordination between complex systems is not achieved on the basis of local adaptations (i.e., correction, predictions), but results from a more global matching of complexity properties. Indeed, recent experiments have evidenced a very close correlation between the scaling properties of the series produced by two coordinated systems, despite a quite weak local synchronization. We hypothesized that strong anticipation should result in the presence of long-range cross-correlations between the series produced by the two systems. Results allow a detailed analysis of the effects of coordination on the fluctuations of the series produced by the two systems. In the long term, series tend to present similar scaling properties, with clear evidence of long-range cross-correlation. Short-term results strongly depend on the nature of the task. Simulation studies allow disentangling the respective effects of noise and short-term coupling processes on DCCA results, and suggest that the matching of long-term fluctuations could be the result of short-term coupling processes.

  2. A comparison of two indices for the intraclass correlation coefficient.

    PubMed

    Shieh, Gwowen

    2012-12-01

    In the present study, we examined the behavior of two indices for measuring the intraclass correlation in the one-way random effects model: the prevailing ICC(1) (Fisher, 1938) and the corrected eta-squared (Bliese & Halverson, 1998). These two procedures differ both in their methods of estimating the variance components that define the intraclass correlation coefficient and in their bias and mean squared error when estimating it. In contrast with the natural unbiased principle used to construct ICC(1), in the present study it was analytically shown that the corrected eta-squared estimator is identical to the maximum likelihood estimator and the pairwise estimator under equal group sizes. Moreover, the empirical results obtained from the present Monte Carlo simulation study across various group structures revealed the mutual dominance relationship between their truncated versions for negative values. The corrected eta-squared estimator performs better than the ICC(1) estimator when the underlying population intraclass correlation coefficient is small. Conversely, ICC(1) has a clear advantage over the corrected eta-squared for medium and large magnitudes of the population intraclass correlation coefficient. The conceptual description and numerical investigation provide guidelines to help researchers choose between the two indices for more accurate reliability analysis in multilevel research.
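
    The prevailing ICC(1) estimator discussed above has a compact ANOVA form; the sketch below computes it from between- and within-group mean squares under equal group sizes. The corrected eta-squared variant differs in how the variance components are estimated and is not reproduced here.

        import numpy as np

        def icc1(groups):
            # groups: (k, m) array, k groups of m observations each.
            k, m = groups.shape
            grand = groups.mean()
            msb = m * np.sum((groups.mean(axis=1) - grand)**2) / (k - 1)
            msw = np.sum((groups - groups.mean(axis=1, keepdims=True))**2) / (k*(m - 1))
            return (msb - msw) / (msb + (m - 1) * msw)

        rng = np.random.default_rng(7)
        effects = rng.normal(0.0, 0.5, (30, 1))           # true ICC = 0.25/1.25 = 0.2
        data = effects + rng.normal(0.0, 1.0, (30, 10))
        print(f"ICC(1) estimate = {icc1(data):.3f}")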

  3. Relationship between Divergent Thinking and Intelligence: An Empirical Study of the Threshold Hypothesis with Chinese Children

    PubMed Central

    Shi, Baoguo; Wang, Lijing; Yang, Jiahui; Zhang, Mengpin; Xu, Li

    2017-01-01

    The threshold hypothesis is a classical and notable explanation for the relationship between creativity and intelligence. However, few empirical examinations of this theory exist, and the results are inconsistent. To test this hypothesis, this study investigated the relationship between divergent thinking (DT) and intelligence with a sample of 568 Chinese children aged between 11 and 13 years, using testing and questionnaire methods. The study focused on the breakpoint of intelligence and the moderating effect of openness on the relationship between intelligence and DT. The findings were as follows: (1) there was a breakpoint at the intelligence quotient (IQ) of 109.20 when investigating the relationship between either DT fluency or DT flexibility and intelligence. Another breakpoint was detected at the IQ of 116.80 concerning the correlation between originality and intelligence. The breakpoint of the relation between the composite score of creativity and intelligence occurred at the IQ of 110.10. (2) Openness to experience had a moderating effect on the correlation between the indicators of creativity and intelligence below the breakpoint. Above this point, however, the effect was not significant. The results suggested a relationship between DT and intelligence among Chinese children which conforms to the threshold hypothesis. In addition, it remains necessary to explore the personality factors accounting for individual differences in the relationship between DT and intelligence. PMID:28275361
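
    A minimal sketch of breakpoint detection in the spirit of the threshold hypothesis: grid-search the IQ value at which a two-piece linear fit of a divergent-thinking score on IQ minimizes the residual sum of squares. The data are simulated, and the study's actual segmented-regression procedure may differ.

        import numpy as np

        rng = np.random.default_rng(8)
        iq = rng.normal(105.0, 12.0, 568)
        dt = np.where(iq < 110, 0.8*(iq - 80), 0.8*30) + rng.normal(0.0, 4.0, 568)

        def sse_piecewise(bp):
            # Fit separate lines below/above bp; return total squared error.
            total = 0.0
            for mask in (iq < bp, iq >= bp):
                coef = np.polyfit(iq[mask], dt[mask], 1)
                total += np.sum((dt[mask] - np.polyval(coef, iq[mask]))**2)
            return total

        grid = np.arange(95.0, 125.0, 0.5)
        best = grid[np.argmin([sse_piecewise(b) for b in grid])]
        print(f"estimated breakpoint: IQ = {best:.1f}")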

  4. Prediction of Agglomeration, Fouling, and Corrosion Tendency of Fuels in CFB Co-Combustion

    NASA Astrophysics Data System (ADS)

    Barišić, Vesna; Zabetta, Edgardo Coda; Sarkki, Juha

    Prediction of the agglomeration, fouling, and corrosion tendency of fuels is essential to the design of any CFB boiler. Over the years, tools have been successfully developed at Foster Wheeler to help with such predictions for most commercial fuels. However, changes in the fuel market and the ever-growing demand for co-combustion capabilities pose a continuous need for development. This paper presents results from recently upgraded models used at Foster Wheeler to predict the agglomeration, fouling, and corrosion tendency of a variety of fuels and mixtures. The models, the subject of this paper, are semi-empirical computer tools that combine the theoretical basics of agglomeration/fouling/corrosion phenomena with empirical correlations. The correlations are derived from Foster Wheeler's experience in fluidized beds, including nearly 10,000 fuel samples and over 1,000 tests in about 150 CFB units. In these models, fuels are evaluated based on their classification and on their chemical and physical properties from standard analyses (proximate, ultimate, fuel ash composition, etc.) alongside Foster Wheeler's own characterization methods. Mixtures are then evaluated taking into account the component fuels. This paper presents the predictive capabilities of the agglomeration/fouling/corrosion probability models for selected fuels and mixtures fired at full scale. The selected fuels include coals and different types of biomass. The models are capable of predicting the behavior of most fuels and mixtures, and also offer possibilities for further improvement.

  5. Employment Condition, Economic Deprivation and Self-Evaluated Health in Europe: Evidence from EU-SILC 2009-2012.

    PubMed

    Bacci, Silvia; Pigini, Claudia; Seracini, Marco; Minelli, Liliana

    2017-02-03

    Background: The mixed empirical evidence about employment conditions (i.e., permanent vs. temporary job, full-time vs. part-time job) as well as unemployment has motivated the development of conceptual models with the aim of assessing the pathways leading to effects of employment status on health. Alongside physically and psychologically riskier working conditions, one channel stems from the possibly severe economic deprivation faced by temporary workers. We investigate whether economic deprivation is able to partly capture the effect of employment status on Self-evaluated Health Status (SHS). Methods: Our analysis is based on the European Union Statistics on Income and Living Conditions (EU-SILC) survey, for a balanced sample from 26 countries from 2009 to 2012. We estimate a correlated random-effects logit model for the SHS that accounts for the ordered nature of the dependent variable and the longitudinal structure of the data. Results and Discussion: Material deprivation and economic strain are able to partly account for the negative effects on SHS from precarious and part-time employment as well as from unemployment, which, however, exhibits a significant independent negative association with SHS. Conclusions: Some of the indicators used to proxy economic deprivation are significant predictors of SHS, and their correlation with the employment condition is such that it should not be neglected in empirical analysis, when available, in addition to monetary income.

  6. Employment Condition, Economic Deprivation and Self-Evaluated Health in Europe: Evidence from EU-SILC 2009–2012

    PubMed Central

    Bacci, Silvia; Pigini, Claudia; Seracini, Marco; Minelli, Liliana

    2017-01-01

    Background: The mixed empirical evidence about employment conditions (i.e., permanent vs. temporary job, full-time vs. part-time job) as well as unemployment has motivated the development of conceptual models with the aim of assessing the pathways leading to effects of employment status on health. Alongside physically and psychologically riskier working conditions, one channel stems from the possibly severe economic deprivation faced by temporary workers. We investigate whether economic deprivation is able to partly capture the effect of employment status on Self-evaluated Health Status (SHS). Methods: Our analysis is based on the European Union Statistics on Income and Living Conditions (EU-SILC) survey, for a balanced sample from 26 countries from 2009 to 2012. We estimate a correlated random-effects logit model for the SHS that accounts for the ordered nature of the dependent variable and the longitudinal structure of the data. Results and Discussion: Material deprivation and economic strain are able to partly account for the negative effects on SHS from precarious and part-time employment as well as from unemployment, which, however, exhibits a significant independent negative association with SHS. Conclusions: Some of the indicators used to proxy economic deprivation are significant predictors of SHS, and their correlation with the employment condition is such that it should not be neglected in empirical analysis, when available, in addition to monetary income. PMID:28165375
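
    A hedged sketch of the correlated random-effects ordered logit: following the Mundlak device, within-person means of the time-varying covariates are added as regressors before fitting an ordered logit with statsmodels' OrderedModel. The panel below is simulated, not EU-SILC, and the variable names are hypothetical.

        import numpy as np
        import pandas as pd
        from statsmodels.miscmodels.ordinal_model import OrderedModel

        rng = np.random.default_rng(9)
        n_id, n_t = 300, 4
        df = pd.DataFrame({
            "pid": np.repeat(np.arange(n_id), n_t),
            "deprivation": rng.normal(size=n_id * n_t),
            "unemployed": rng.integers(0, 2, n_id * n_t),
        })
        latent = (-0.5*df["deprivation"] - 0.7*df["unemployed"]
                  + rng.logistic(size=len(df)))
        df["shs"] = pd.cut(latent, [-np.inf, -1, 0, 1, np.inf], labels=False)

        # Mundlak device: within-person means proxy the correlated random effect.
        for col in ("deprivation", "unemployed"):
            df[col + "_mean"] = df.groupby("pid")[col].transform("mean")

        exog = df[["deprivation", "unemployed", "deprivation_mean", "unemployed_mean"]]
        res = OrderedModel(df["shs"], exog, distr="logit").fit(method="bfgs", disp=False)
        print(res.params.round(3))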

  7. A simulation study to quantify the impacts of exposure ...

    EPA Pesticide Factsheets

    Background: Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of air pollution and health. Methods: ZIP-code level estimates of exposure for six pollutants (CO, NOx, EC, PM2.5, SO4, O3) from 1999 to 2002 in the Atlanta metropolitan area were used to calculate spatial, population (i.e., ambient versus personal), and total exposure measurement error. Empirically determined covariance of pollutant concentration pairs and the associated measurement errors were used to simulate true exposure (exposure without error) from observed exposure. Daily emergency department visits for respiratory diseases were simulated using a Poisson time-series model with a main pollutant RR = 1.05 per interquartile range, and a null association for the copollutant (RR = 1). Monte Carlo experiments were used to evaluate the impacts of correlated exposure errors of different copollutant pairs. Results: Substantial attenuation of RRs due to exposure error was evident in nearly all copollutant pairs studied, ranging from 10 to 40% attenuation for spatial error, 3–85% for population error, and 31–85% for total error. When CO, NOx or EC is the main pollutant, we demonstrated the possibility of false positives, specifically identifying significant, positive associations for copollutants.
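
    The Monte Carlo design above can be miniaturized as follows: simulate a true exposure and a correlated copollutant, generate Poisson counts with RR = 1.05 per IQR for the main pollutant and a null copollutant, add correlated measurement error, and refit. All numbers are illustrative stand-ins, not the Atlanta estimates.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(10)
        n_days = 1460
        true_x = rng.normal(size=n_days)                    # main pollutant (scaled)
        true_z = 0.6*true_x + rng.normal(0.0, 0.8, n_days)  # correlated copollutant
        iqr = np.subtract(*np.percentile(true_x, [75, 25]))

        lam = np.exp(np.log(50.0) + (np.log(1.05)/iqr)*true_x + 0.0*true_z)
        y = rng.poisson(lam)                                # daily ED visit counts

        # Correlated measurement error attenuates RRs and can transfer effects.
        err = rng.multivariate_normal([0, 0], [[0.5, 0.3], [0.3, 0.5]], n_days)
        obs = np.column_stack([true_x, true_z]) + err

        fit = sm.GLM(y, sm.add_constant(obs), family=sm.families.Poisson()).fit()
        print("fitted RR per IQR:", np.round(np.exp(fit.params[1:] * iqr), 3))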

  8. Intelligent diagnosis of short hydraulic signal based on improved EEMD and SVM with few low-dimensional training samples

    NASA Astrophysics Data System (ADS)

    Zhang, Meijun; Tang, Jian; Zhang, Xiaoming; Zhang, Jiaojiao

    2016-03-01

    The highly accurate classification ability of an intelligent diagnosis method often requires a large number of training samples with high-dimensional eigenvectors, and the characteristics of the signal need to be extracted accurately. Although the existing EMD (empirical mode decomposition) and EEMD (ensemble empirical mode decomposition) are suitable for processing non-stationary and non-linear signals, their decomposition accuracy becomes very poor when a short signal, such as a hydraulic impact signal, is concerned. An improved EEMD is proposed specifically for short hydraulic impact signals. The improvements of this new EEMD are mainly reflected in four aspects: self-adaptive de-noising based on EEMD, signal extension based on SVM (support vector machine), extreme-point center fitting based on cubic spline interpolation, and pseudo-component exclusion based on cross-correlation analysis. After the energy eigenvector is extracted from the result of the improved EEMD, fault pattern recognition based on SVM with a small number of low-dimensional training samples is studied. Finally, the diagnosis ability of the improved EEMD+SVM method is compared with the EEMD+SVM and EMD+SVM methods, and its diagnosis accuracy is distinctly higher than that of the other two methods whether the dimension of the eigenvectors is low or high. The improved EEMD is well suited to the decomposition of short signals, such as hydraulic impact signals, and its combination with SVM has high ability for the diagnosis of hydraulic impact faults.
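
    The energy-eigenvector-plus-SVM stage lends itself to a short sketch: decompose each signal into IMFs, use normalized IMF energies as a low-dimensional feature vector, and train an SVM on few samples. It assumes PyEMD and scikit-learn, uses plain EMD rather than the paper's improved EEMD, and the signals are synthetic.

        import numpy as np
        from PyEMD import EMD                    # from the EMD-signal package
        from sklearn.svm import SVC

        rng = np.random.default_rng(11)
        t = np.linspace(0.0, 1.0, 512)

        def sample(fault):
            freq = 25.0 if fault else 60.0       # crude class-dependent content
            return np.sin(2*np.pi*freq*t) + 0.5*rng.normal(size=t.size)

        def energy_eigenvector(signal, n_feats=5):
            imfs = EMD().emd(signal)[:n_feats]
            e = np.array([np.sum(imf**2) for imf in imfs])
            e = np.pad(e, (0, n_feats - len(e)))
            return e / np.sqrt(np.sum(e**2))     # normalized energy eigenvector

        X = np.array([energy_eigenvector(sample(k % 2 == 0)) for k in range(20)])
        y = np.array([k % 2 for k in range(20)])
        clf = SVC(kernel="rbf").fit(X[:14], y[:14])   # few training samples
        print("held-out accuracy:", clf.score(X[14:], y[14:]))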

  9. Combining Structural Modeling with Ensemble Machine Learning to Accurately Predict Protein Fold Stability and Binding Affinity Effects upon Mutation

    PubMed Central

    Garcia Lopez, Sebastian; Kim, Philip M.

    2014-01-01

    Advances in sequencing have led to a rapid accumulation of mutations, some of which are associated with diseases. However, to draw mechanistic conclusions, a biochemical understanding of these mutations is necessary. For coding mutations, accurate prediction of significant changes in either the stability of proteins or their affinity to their binding partners is required. Traditional methods have used semi-empirical force fields, while newer methods employ machine learning of sequence and structural features. Here, we show how combining both of these approaches leads to a marked boost in accuracy. We introduce ELASPIC, a novel ensemble machine learning approach that is able to predict stability effects upon mutation in both, domain cores and domain-domain interfaces. We combine semi-empirical energy terms, sequence conservation, and a wide variety of molecular details with a Stochastic Gradient Boosting of Decision Trees (SGB-DT) algorithm. The accuracy of our predictions surpasses existing methods by a considerable margin, achieving correlation coefficients of 0.77 for stability, and 0.75 for affinity predictions. Notably, we integrated homology modeling to enable proteome-wide prediction and show that accurate prediction on modeled structures is possible. Lastly, ELASPIC showed significant differences between various types of disease-associated mutations, as well as between disease and common neutral mutations. Unlike pure sequence-based prediction methods that try to predict phenotypic effects of mutations, our predictions unravel the molecular details governing the protein instability, and help us better understand the molecular causes of diseases. PMID:25243403
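
    As a hedged stand-in for the ensemble-learning idea above, the sketch below feeds simulated energy-like and conservation-like features to scikit-learn's gradient boosting (with subsampling, i.e., stochastic gradient boosting) and scores it by Pearson correlation; it is not the ELASPIC pipeline.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(12)
        n = 1000
        feats = rng.normal(size=(n, 6))   # energy terms, conservation, etc. (fake)
        ddg = (feats @ np.array([0.8, 0.5, -0.4, 0.3, 0.0, 0.2])
               + 0.3*feats[:, 0]*feats[:, 1] + rng.normal(0.0, 0.5, n))

        X_tr, X_te, y_tr, y_te = train_test_split(feats, ddg, random_state=0)
        model = GradientBoostingRegressor(subsample=0.7, random_state=0)
        model.fit(X_tr, y_tr)
        r = np.corrcoef(model.predict(X_te), y_te)[0, 1]
        print(f"test Pearson r = {r:.2f}")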

  10. Denoising of Raman spectroscopy for biological samples based on empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    León-Bejarano, Fabiola; Ramírez-Elías, Miguel; Mendez, Martin O.; Dorantes-Méndez, Guadalupe; Rodríguez-Aranda, Ma. Del Carmen; Alba, Alfonso

    Raman spectroscopy of biological samples suffers from undesirable noise and fluorescence generated by biomolecular excitation. Reducing these types of noise is a fundamental task in recovering the valuable information about the sample under analysis. This paper proposes the application of empirical mode decomposition (EMD) for noise elimination. EMD is a parameter-free and adaptive signal processing method useful for the analysis of nonstationary signals. EMD performance was compared with the commonly used Vancouver Raman algorithm (VRA) on artificial (Teflon), synthetic (vitamin E and paracetamol), and biological (mouse brain and human nails) Raman spectra. The correlation coefficient (ρ) was used as the performance measure. Results on synthetic data showed a better performance of EMD (ρ=0.52) at high noise levels compared with VRA (ρ=0.19). With simulated fluorescence added to the artificial material, both methods recovered a similar fluorescence shape (ρ=0.95 for VRA and ρ=0.93 for EMD). For synthetic data, Raman spectra of vitamin E were used and the results showed a good performance for both methods (ρ=0.95 for EMD and ρ=0.99 for VRA). Finally, on biological data, EMD and VRA displayed similar behavior (ρ=0.85 for EMD and ρ=0.96 for VRA), but with the advantage that EMD maintains small-amplitude Raman peaks. The results suggest that EMD could be an effective method for denoising biological Raman spectra; it is able to retain information and correctly eliminate the fluorescence without parameter tuning.

  11. Relationship between pore geometric characteristics and SIP/NMR parameters observed for mudstones

    NASA Astrophysics Data System (ADS)

    Robinson, J.; Slater, L. D.; Keating, K.; Parker, B. L.; Robinson, T.

    2017-12-01

    The reliable estimation of permeability remains one of the most challenging problems in hydrogeological characterization. Cost-effective, non-invasive geophysical methods such as spectral induced polarization (SIP) and nuclear magnetic resonance (NMR) offer an alternative to traditional sampling methods, as they are sensitive to the mineral surfaces and pore spaces that control permeability. We performed extensive physical characterization and SIP and NMR geophysical measurements on fractured rock cores extracted from a mudstone site in an effort to compare (1) the pore size characterization determined from traditional and geophysical methods and (2) the performance of permeability models based on these methods. We focus on two physical characterizations that are well correlated with hydraulic properties: the pore-volume-normalized surface area (Spor) and an interconnected pore diameter (Λ). We find that the SIP polarization magnitude and relaxation time are better correlated with Spor than with Λ; the best correlation of these SIP measures for our sample dataset was found with Spor divided by the electrical formation factor (F). NMR parameters are, similarly, better correlated with Spor than with Λ. We implement previously proposed mechanistic and empirical permeability models using SIP and NMR parameters. A sandstone-calibrated SIP model using the polarization magnitude does not perform well, while a SIP model using a mean relaxation time performs better, in part by more fully accounting for the effects of fluid chemistry. A sandstone-calibrated NMR permeability model using an average measure of the relaxation time does not perform well, presumably due to small pore sizes which are either not connected or contain water of limited mobility. An NMR model based on the laboratory-determined bound versus mobile portions of the relaxation distribution performed reasonably well. While limitations exist, there are many opportunities to use geophysical data to predict permeability in mudstone formations.

  12. Droplet breakup in accelerating gas flows. Part 2: Secondary atomization

    NASA Technical Reports Server (NTRS)

    Zajac, L. J.

    1973-01-01

    An experimental investigation to determine the effects of an accelerating gas flow on the atomization characteristics of liquid sprays was conducted. The sprays were produced by impinging two liquid jets. The liquid was molten wax and the gas was nitrogen. The use of molten wax allowed for a quantitative measure of the resulting dropsize distribution. The results of this study indicate that a significant amount of droplet breakup will occur as a result of the action of the gas on the liquid droplets. Empirical correlations are presented in terms of the parameters that were found to affect the mass median dropsize most significantly: the orifice diameter, the liquid injection velocity, and the maximum gas velocity. An empirical correlation for the normalized dropsize distribution is also presented. These correlations are in a form that may be readily incorporated into existing combustion model computer codes for the purpose of calculating rocket engine combustion performance.

  13. Oxidation stability of biodiesel fuels and blends using the Rancimat and PetroOXY methods. Effect of 4-allyl-2,6-dimethoxyphenol and catechol as biodiesel additives on oxidation stability

    NASA Astrophysics Data System (ADS)

    Botella, Lucía; Bimbela, Fernando; Martín, Lorena; Arauzo, Jesús; Sanchez, Jose Luis

    2014-07-01

    In the present work, several fatty acid methyl esters (FAME) have been synthesized from various fatty acid feedstocks: used frying olive oil, pork fat, soybean, rapeseed, sunflower, and coconut. The oxidation stabilities of the biodiesel samples and of several blends have been measured simultaneously by both the Rancimat method, accepted by the EN14112 standard, and the PetroOXY method, prEN16091 standard, with the aim of finding a correlation between the two methodologies. Other biodiesel properties such as composition, cold filter plugging point (CFPP), flash point (FP), and kinematic viscosity have also been analyzed using standard methods in order to further characterize the biodiesel produced. In addition, the effect on the biodiesel properties of using 4-allyl-2,6-dimethoxyphenol and catechol as additives in biodiesel blends with rapeseed and with soybean has also been analyzed. The use of both antioxidants results in a considerable improvement in the oxidation stability of both types of biodiesel, especially using catechol. Adding catechol loads as low as 0.05% (m/m) in blends with soybean biodiesel and as low as 0.10% (m/m) in blends with rapeseed biodiesel is sufficient for the oxidation stabilities to comply with the restrictions established by the European EN14214 standard. An empirical linear equation is proposed to correlate the oxidation stability measured by the two methods, PetroOXY and Rancimat. It has been found that the presence of either catechol or 4-allyl-2,6-dimethoxyphenol as an additive affects the correlation observed.

  14. Implementations of geographically weighted lasso in spatial data with multicollinearity (Case study: Poverty modeling of Java Island)

    NASA Astrophysics Data System (ADS)

    Setiyorini, Anis; Suprijadi, Jadi; Handoko, Budhi

    2017-03-01

    Geographically Weighted Regression (GWR) is a regression model that takes into account the effect of spatial heterogeneity. In applications of GWR, inference on regression coefficients is often of interest, as is estimation and prediction of the response variable. Empirical studies have demonstrated that local correlation between explanatory variables can lead to estimated regression coefficients in GWR that are strongly correlated, a condition named multicollinearity. This results in large standard errors for the estimated regression coefficients and is hence problematic for inference on relationships between variables. Geographically Weighted Lasso (GWL) is a method capable of dealing with spatial heterogeneity and local multicollinearity in spatial data sets. GWL is a further development of GWR that adds a LASSO (Least Absolute Shrinkage and Selection Operator) constraint to the parameter estimation. In this study, GWL is applied with a fixed exponential kernel weights matrix to build a poverty model of Java Island, Indonesia. The results of applying GWL to the poverty datasets show that this method stabilizes regression coefficients in the presence of multicollinearity and produces lower prediction and estimation error of the response variable than GWR does.
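
    A minimal sketch of the geographically weighted lasso under stated assumptions: at each location, observations are weighted by a fixed exponential kernel of distance and a lasso is fitted with those weights (scikit-learn's Lasso accepts sample weights). Coordinates, covariates, and the bandwidth are simulated placeholders, not the Java Island data.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(13)
        n, p = 120, 4
        coords = rng.uniform(0.0, 10.0, (n, 2))        # district centroids
        X = rng.normal(size=(n, p))
        beta0 = 1.0 + 0.2*coords[:, 0]                 # spatially varying effect
        y = beta0*X[:, 0] + 0.5*X[:, 1] + rng.normal(0.0, 0.5, n)

        def gwl_at(i, bandwidth=2.0, alpha=0.05):
            d = np.linalg.norm(coords - coords[i], axis=1)
            w = np.exp(-d / bandwidth)                 # fixed exponential kernel
            return Lasso(alpha=alpha).fit(X, y, sample_weight=w).coef_

        print("local coefficients at site 0:", np.round(gwl_at(0), 2))
        print("local coefficients at site 50:", np.round(gwl_at(50), 2))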

  15. Airspace Dimension Assessment with nanoparticles reflects lung density as quantified by MRI

    PubMed Central

    Jakobsson, Jonas K; Löndahl, Jakob; Olsson, Lars E; Diaz, Sandra; Zackrisson, Sophia; Wollmer, Per

    2018-01-01

    Background: Airspace Dimension Assessment with inhaled nanoparticles is a novel method to determine distal airway morphology. This is the first empirical study using Airspace Dimension Assessment with nanoparticles (AiDA) to estimate distal airspace radius. The technology is relatively simple and potentially accessible in clinical outpatient settings. Method: Nineteen never-smoking volunteers performed nanoparticle inhalation tests at multiple breath-hold times, and the difference in nanoparticle concentration of inhaled and exhaled gas was measured. An exponential decay curve was fitted to the concentration of recovered nanoparticles, and airspace dimensions were assessed from the half-life of the decay. Pulmonary tissue density was measured using magnetic resonance imaging (MRI). Results: The distal airspace radius measured by AiDA correlated with lung tissue density as measured by MRI (ρ = −0.584; p = 0.0086). The linear intercept of the logarithm of the exponential decay curve correlated with forced expiratory volume in one second (FEV1) (ρ = 0.549; p = 0.0149). Conclusion: The AiDA method shows potential to be developed into a tool to assess conditions involving changes in distal airways, eg, emphysema. The intercept may reflect airway properties; this finding should be further investigated.
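
    The decay-curve step described above reduces to a small fit: an exponential is fitted to the recovered nanoparticle fraction versus breath-hold time, and the half-life is read off. The numbers below are made up, and the mapping from half-life to an effective airspace radius is not reproduced.

        import numpy as np
        from scipy.optimize import curve_fit

        t_hold = np.array([5.0, 7.0, 10.0, 15.0, 20.0])       # breath-holds, s
        recovered = np.array([0.62, 0.51, 0.38, 0.22, 0.13])  # exhaled fraction

        def decay(t, c0, k):
            return c0 * np.exp(-k * t)

        (c0, k), _ = curve_fit(decay, t_hold, recovered, p0=(1.0, 0.1))
        print(f"fitted half-life = {np.log(2)/k:.1f} s (k = {k:.3f} 1/s)")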

  16. Factors influencing suspended solids concentrations in activated sludge settling tanks.

    PubMed

    Kim, Y; Pipes, W O

    1999-05-31

    A significant fraction of the total mass of sludge in an activated sludge process may be in the settling tanks if the sludge has a high sludge volume index (SVI) or when a hydraulic overload occurs during a rainstorm. Under those conditions, an accurate estimate of the amount of sludge in the settling tanks is needed in order to calculate the mean cell residence time or to determine the capacity of the settling tanks to store sludge. Determination of the amount of sludge in the settling tanks requires estimation of the average concentration of suspended solids in the layer of sludge (XSB) in the bottom of the settling tanks. A widely used reference recommends averaging the concentrations of suspended solids in the mixed liquor (X) and in the underflow (Xu) from the settling tanks (XSB = 0.5(X + Xu)). This method does not take into consideration other pertinent information available to an operator. This is a report of a field study which had the objective of developing a more accurate method for estimating XSB in the bottom of the settling tanks. By correlation analysis, it was found that only 44% of the variation in the measured XSB is related to the sum of X and Xu. XSB is also influenced by the SVI, the zone settling velocity at X, and the overflow and underflow rates of the settling tanks. The method of averaging X and Xu tends to overestimate XSB. A new empirical estimation technique for XSB was developed. The estimation technique uses dimensionless ratios, i.e., the ratio of XSB to Xu, the ratio of the overflow rate to the sum of the underflow rate and the initial settling velocity of the mixed liquor, and sludge compaction expressed as a ratio (dimensionless SVI). The empirical model is compared with the method of averaging X and Xu for the entire range of sludge depths in the settling tanks and for SVI values between 100 and 300 ml/g. Since the empirical model uses dimensionless ratios, the regression parameters are also dimensionless and the model can be readily adopted for other activated sludge processes. A simplified version of the empirical model provides an estimate of XSB as a function of X, Xu, and SVI and can be used by an operator when flow conditions are normal. Copyright 1999 Elsevier Science B.V.
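
    The two estimation approaches contrasted above can be written down directly: the averaging rule XSB = 0.5(X + Xu), and the dimensionless ratios on which the empirical model regresses XSB/Xu. The paper's fitted regression constants are not given in the abstract, so only the predictors are formed; the particular SVI nondimensionalization shown is an assumption.

        def xsb_average(x, x_u):
            # Reference rule of thumb: mean of mixed-liquor and underflow solids.
            return 0.5 * (x + x_u)

        def xsb_predictors(x, x_u, overflow, underflow, isv, svi):
            # Dimensionless ratios used by the empirical model. The fitted
            # regression constants are not reproduced here; the SVI
            # nondimensionalization (SVI*X/1000, a settled volume fraction,
            # with X in g/L and SVI in ml/g) is an assumption.
            flow_ratio = overflow / (underflow + isv)
            svi_dimless = svi * x / 1000.0
            return flow_ratio, svi_dimless

        print(f"averaging estimate: {xsb_average(3.0, 8.0):.1f} g/L")
        print("model predictors:", xsb_predictors(3.0, 8.0, 1.0, 0.5, 2.0, 150.0))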

  17. Fight the power: the limits of empiricism and the costs of positivistic rigor.

    PubMed

    Indick, William

    2002-01-01

    A summary of the influence of positivistic philosophy and empiricism on the field of psychology is followed by a critique of the empirical method. The dialectic process is advocated as an alternative method of inquiry. The main advantage of the dialectic method is that it is open to any logical argument, including empirical hypotheses, but unlike empiricism, it does not automatically reject arguments that are not based on observable data. Evolutionary and moral psychology are discussed as examples of important fields of study that could benefit from types of arguments that frequently do not conform to the empirical standards of systematic observation and falsifiability of hypotheses. A dialectic method is shown to be a suitable perspective for those fields of research, because it allows for logical arguments that are not empirical and because it fosters a functionalist perspective, which is indispensable for both evolutionary and moral theories. It is suggested that all psychologists may gain from adopting a dialectic approach, rather than restricting themselves to empirical arguments alone.

  18. The Use of Empirical Methods for Testing Granular Materials in Analogue Modelling

    PubMed Central

    Montanari, Domenico; Agostini, Andrea; Bonini, Marco; Corti, Giacomo; Del Ventisette, Chiara

    2017-01-01

    The behaviour of a granular material is mainly dependent on its frictional properties, angle of internal friction, and cohesion, which, together with material density, are the key factors to be considered during the scaling procedure of analogue models. The frictional properties of a granular material are usually investigated by means of technical instruments such as a Hubbert-type apparatus and ring shear testers, which allow for investigating the response of the tested material to a wide range of applied stresses. Here we explore the possibility of determining material properties by means of different empirical methods applied to mixtures of quartz and K-feldspar sand. Empirical methods have the great advantage of measuring the properties of an analogue material under the experimental conditions, which are strongly sensitive to the handling techniques. Finally, the results obtained from the empirical methods were compared with ring shear tests carried out on the same materials and show satisfactory agreement. PMID:28772993

  19. An Empirical Examination of the Anomie Theory of Drug Use.

    ERIC Educational Resources Information Center

    Dull, R. Thomas

    1983-01-01

    Investigated the relationship between anomie theory, as measured by Srole's Anomie Scale, and self-admitted drug use in an adult population (N=1,449). Bivariate cross-comparison correlations indicated anomie was significantly correlated with several drug variables, but these associations were extremely weak and of little explanatory value.…

  20. Physical Activity and Psychological Correlates during an After-School Running Club

    ERIC Educational Resources Information Center

    Kahan, David; McKenzie, Thomas L.

    2018-01-01

    Background: After-school programs (ASPs) have the potential to contribute to moderate-to-vigorous physical activity (MVPA), but there is limited empirical evidence to guide their development and implementation. Purpose: This study assessed the replication of an elementary school running program and identified psychological correlates of children's…

  1. The Effectiveness of Using Limited Gauge Measurements for Bias Adjustment of Satellite-Based Precipitation Estimation over Saudi Arabia

    NASA Astrophysics Data System (ADS)

    Alharbi, Raied; Hsu, Kuolin; Sorooshian, Soroosh; Braithwaite, Dan

    2018-01-01

    Precipitation is a key input variable for hydrological and climate studies. Rain gauges are capable of providing reliable precipitation measurements at point scale. However, the uncertainty of rain measurements increases when the rain gauge network is sparse. Satellite-based precipitation estimations appear to be an alternative source of precipitation measurements, but they are influenced by systematic bias. In this study, a method for removing the bias from the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) over a region with a sparse rain gauge network is investigated. The method consists of monthly empirical quantile mapping, climate classification, and the inverse distance weighting method. Daily PERSIANN-CCS data are used to test the capability of the method for removing the bias over Saudi Arabia during the period 2010 to 2016. The first six years (2010-2015) are used for calibration and 2016 is used for validation. The results show that, over the validation year, the yearly correlation coefficient was enhanced by 12%, the yearly mean bias was reduced by 93%, and the root mean square error was reduced by 73%. The correlation coefficient, the mean bias, and the root mean square error show that the proposed method removes the bias in PERSIANN-CCS effectively and can be applied to other regions where the rain gauge network is sparse.
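
    A minimal sketch of the monthly empirical quantile mapping step, assuming co-located daily satellite and gauge values from the calibration years have already been grouped by calendar month; the climate classification and inverse-distance interpolation steps are omitted, and the function name is illustrative.

    ```python
    import numpy as np

    def quantile_map(sat_calib, gauge_calib, sat_new, n_quantiles=99):
        """Map new satellite estimates onto the gauge distribution by matching
        empirical CDFs built from the calibration period."""
        probs = np.linspace(0.01, 0.99, n_quantiles)
        sat_q = np.quantile(sat_calib, probs)      # satellite empirical quantiles
        gauge_q = np.quantile(gauge_calib, probs)  # gauge empirical quantiles
        # A new value is assigned the gauge value at its satellite quantile level.
        return np.interp(sat_new, sat_q, gauge_q)
    ```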

  2. The External Performance Appraisal of China Energy Regulation: An Empirical Study Using a TOPSIS Method Based on Entropy Weight and Mahalanobis Distance

    PubMed Central

    Li, Dan-Dan; Zheng, Hong-Hao

    2018-01-01

    In China’s industrialization process, the effective regulation of energy and environment can promote the positive externality of energy consumption while reducing the negative externality, which is an important means of realizing the sustainable development of an economic society. The study puts forward an improved technique for order preference by similarity to an ideal solution based on entropy weight and Mahalanobis distance (referred to as E-M-TOPSIS), and verifies that its performance is satisfactory. Using the traditional and improved TOPSIS methods separately, the study carried out empirical appraisals of the external performance of China’s energy regulation during 1999-2015. The results show that the correlation between the performance indexes causes a significant difference between the appraisal results of E-M-TOPSIS and traditional TOPSIS. E-M-TOPSIS takes the correlation between indexes into account and generally softens the closeness degree compared with traditional TOPSIS, making the relative closeness degree fluctuate within a small amplitude. The results conform to the practical condition of China’s energy regulation, and E-M-TOPSIS is therefore well suited to the external performance appraisal of energy regulation. Additionally, the external economic performance and the social responsibility performance (including environmental and energy safety performances) based on E-M-TOPSIS exhibit significantly different fluctuation trends. The external economic performance fluctuates dramatically with a large amplitude, while the social responsibility performance exhibits a relatively stable interval fluctuation. This indicates that, compared to the social responsibility performance, the external economic performance is more sensitive to energy regulation. PMID:29385781
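
    A hedged sketch of the E-M-TOPSIS construction for benefit-type criteria: entropy weights derived from the normalized decision matrix, and Mahalanobis distances to the ideal and anti-ideal solutions so that correlated indexes are not double-counted. The exact normalization and weighting used in the paper may differ.

    ```python
    import numpy as np

    def em_topsis(X):
        """X: (n alternatives, m benefit criteria). Returns relative closeness."""
        P = X / X.sum(axis=0)                              # column-normalize
        k = -1.0 / np.log(X.shape[0])
        entropy = k * (P * np.log(P + 1e-12)).sum(axis=0)  # entropy per criterion
        w = (1.0 - entropy) / (1.0 - entropy).sum()        # entropy weights
        W = np.diag(w)
        S_inv = np.linalg.inv(np.cov(X, rowvar=False))     # handles correlated indexes
        ideal, anti = X.max(axis=0), X.min(axis=0)

        def mdist(a, b):
            d = a - b
            return np.sqrt(d @ W @ S_inv @ W @ d)

        d_pos = np.array([mdist(x, ideal) for x in X])
        d_neg = np.array([mdist(x, anti) for x in X])
        return d_neg / (d_pos + d_neg)                     # closeness in [0, 1]
    ```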

  3. Empirical deck for phased construction and widening [summary].

    DOT National Transportation Integrated Search

    2017-06-01

    The most common method used to design and analyze bridge decks, termed the traditional method, treats a deck slab as if it were made of strips supported by inflexible girders. An alternative, the empirical method, treats the deck slab as a ...

  4. Confirmatory factor analysis of the Child Oral Health Impact Profile (Korean version).

    PubMed

    Cho, Young Il; Lee, Soonmook; Patton, Lauren L; Kim, Hae-Young

    2016-04-01

    Empirical support for the factor structure of the Child Oral Health Impact Profile (COHIP) has not been fully established. The purposes of this study were to evaluate the factor structure of the Korean version of the COHIP (COHIP-K) empirically using confirmatory factor analysis (CFA) based on the theoretical framework and then to assess whether any of the factors in the structure could be grouped into a simpler single second-order factor. Data were collected through self-reported COHIP-K responses from a representative community sample of 2,236 Korean children, 8-15 yr of age. Because a large inter-factor correlation of 0.92 was estimated in the original five-factor structure, the two strongly correlated factors were combined into one factor, resulting in a four-factor structure. The revised four-factor model showed a reasonable fit with appropriate inter-factor correlations. Additionally, the second-order model with four sub-factors was reasonable with sufficient fit and showed equal fit to the revised four-factor model. A cross-validation procedure confirmed the appropriateness of the findings. Our analysis empirically supported a four-factor structure of COHIP-K, a summarized second-order model, and the use of an integrated summary COHIP score. © 2016 Eur J Oral Sci.

  5. A DEIM Induced CUR Factorization

    DTIC Science & Technology

    2015-09-18

    We derive a CUR approximate matrix factorization based on the Discrete Empirical Interpolation Method (DEIM). For a given matrix A, such a factorization provides a ..., and is compared with CUR approximations based on leverage scores.
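
    A compact sketch of the DEIM-CUR construction described above, assuming a dense matrix small enough for a full SVD: DEIM greedily selects row and column indices from the leading singular vectors, and the middle factor is computed from pseudoinverses.

    ```python
    import numpy as np

    def deim_indices(U):
        """Greedy DEIM point selection from an orthonormal basis U (n x k)."""
        idx = [int(np.argmax(np.abs(U[:, 0])))]
        for j in range(1, U.shape[1]):
            # Interpolate column j at the chosen indices; the largest residual
            # entry becomes the next interpolation point.
            c = np.linalg.solve(U[np.ix_(idx, range(j))], U[idx, j])
            r = U[:, j] - U[:, :j] @ c
            idx.append(int(np.argmax(np.abs(r))))
        return np.array(idx)

    def deim_cur(A, k):
        """Rank-k CUR approximation A ~ C @ U_mid @ R with DEIM-selected indices."""
        U_svd, _, Vt = np.linalg.svd(A, full_matrices=False)
        rows = deim_indices(U_svd[:, :k])
        cols = deim_indices(Vt.T[:, :k])
        C, R = A[:, cols], A[rows, :]
        U_mid = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
        return C, U_mid, R
    ```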

  6. Role of local network oscillations in resting-state functional connectivity.

    PubMed

    Cabral, Joana; Hugues, Etienne; Sporns, Olaf; Deco, Gustavo

    2011-07-01

    Spatio-temporally organized low-frequency fluctuations (<0.1 Hz), observed in the BOLD fMRI signal during rest, suggest the existence of underlying network dynamics that emerge spontaneously from intrinsic brain processes. Furthermore, significant correlations between distinct anatomical regions-or functional connectivity (FC)-have led to the identification of several widely distributed resting-state networks (RSNs). These slow dynamics appear to be highly structured by anatomical connectivity, but the mechanism behind them and their relationship with neural activity, particularly in the gamma frequency range, remain largely unknown. Indeed, direct measurements of neuronal activity have revealed similar large-scale correlations, particularly in slow power fluctuations of local field potential gamma frequency range oscillations. To address these questions, we investigated neural dynamics in a large-scale model of the human brain's neural activity. A key ingredient of the model was a structural brain network defined by empirically derived long-range brain connectivity together with the corresponding conduction delays. A neural population, assumed to spontaneously oscillate in the gamma frequency range, was placed at each network node. When these oscillatory units are integrated in the network, they behave as weakly coupled oscillators. The time-delayed interaction between nodes is described by the Kuramoto model of phase oscillators, a biologically based model of coupled oscillatory systems. For a realistic setting of axonal conduction speed, we show that time-delayed network interaction leads to the emergence of slow neural activity fluctuations whose patterns correlate significantly with the empirically measured FC. The best agreement of the simulated FC with the empirically measured FC is found for a set of parameters where subsets of nodes tend to synchronize although the network is not globally synchronized. Inside such clusters, the simulated BOLD signal between nodes is found to be correlated, instantiating the empirically observed RSNs. Between clusters, patterns of positive and negative correlations are observed, as described in experimental studies. These results are found to be robust with respect to a biologically plausible range of model parameters. In conclusion, our model suggests how resting-state neural activity can originate from the interplay between the local neural dynamics and the large-scale structure of the brain. Copyright © 2011 Elsevier Inc. All rights reserved.
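
    The coupling rule of such a model can be written as theta_i' = omega + K * sum_j C_ij * sin(theta_j(t - D_ij) - theta_i(t)). Below is a minimal Euler-integration sketch, assuming a coupling matrix C and a delay matrix D (fibre length divided by conduction speed) are given; parameter values are placeholders, not those of the study.

    ```python
    import numpy as np

    def simulate_delayed_kuramoto(C, D, f=40.0, K=1.0, dt=1e-4, T=1.0):
        """Phases of n delay-coupled oscillators with natural frequency f (Hz)."""
        n = C.shape[0]
        lags = np.maximum((D / dt).astype(int), 1)   # delays in integration steps
        max_lag, steps = int(lags.max()), int(T / dt)
        hist = np.random.uniform(0, 2 * np.pi, (steps + max_lag + 1, n))
        for t in range(max_lag + 1, steps + max_lag + 1):
            theta = hist[t - 1]
            delayed = hist[t - 1 - lags, np.arange(n)]   # theta_j(t - D_ij)
            coupling = (C * np.sin(delayed - theta[:, None])).sum(axis=1)
            hist[t] = theta + dt * (2 * np.pi * f + K * coupling)
        return hist[max_lag + 1:]
    ```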

  7. Quantifying the range of cross-correlated fluctuations using a q-L dependent AHXA coefficient

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Wang, Lin; Chen, Yuming

    2018-03-01

    Recently, based on analogous height cross-correlation analysis (AHXA), a cross-correlation coefficient ρ×(L) has been proposed to quantify the levels of cross-correlation on different temporal scales for bivariate series. A limitation of this coefficient is that it cannot capture the full information about cross-correlations across fluctuation amplitudes. In fact, it only detects the cross-correlation at a specific order of fluctuation, which might neglect important information inherited from other order fluctuations. To overcome this disadvantage, in this work, based on the scaling of the qth-order covariance with time delay L, we define a two-parameter cross-correlation coefficient ρq(L) to detect and quantify the range and level of cross-correlations. The new coefficient gives rise to a ρq(L) surface, which not only quantifies the level of cross-correlations but also allows us to identify the range of fluctuation amplitudes that are correlated in two given signals. Applications to the classical ARFIMA models and the binomial multifractal series illustrate the feasibility of this new coefficient ρq(L). In addition, a statistical test is proposed to assess the existence of cross-correlations between two given series. Applying our method to empirical data from the 1999-2000 California electricity market, we find that the California power crisis in 2000 destroyed the cross-correlation between the price and load series but did not affect the correlation of the load series during and before the crisis.

  8. Improving public transportation systems with self-organization: A headway-based model and regulation of passenger alighting and boarding.

    PubMed

    Carreón, Gustavo; Gershenson, Carlos; Pineda, Luis A

    2017-01-01

    The equal headway instability-the fact that a configuration with regular time intervals between vehicles tends to be volatile-is a common regulation problem in public transportation systems. An unsatisfactory regulation results in low efficiency and possible collapses of the service. Computational simulations have shown that self-organizing methods can regulate the headway adaptively beyond the theoretical optimum. In this work, we develop a computer simulation for metro systems fed with real data from the Mexico City Metro to test the current regulatory method with a novel self-organizing approach. The current model considers overall system's data such as minimum and maximum waiting times at stations, while the self-organizing method regulates the headway in a decentralized manner using local information such as the passenger's inflow and the positions of neighboring trains. The simulation shows that the self-organizing method improves the performance over the current one as it adapts to environmental changes at the timescale they occur. The correlation between the simulation of the current model and empirical observations carried out in the Mexico City Metro provides a base to calculate the expected performance of the self-organizing method in case it is implemented in the real system. We also performed a pilot study at the Balderas station to regulate the alighting and boarding of passengers through guide signs on platforms. The analysis of empirical data shows a delay reduction of the waiting time of trains at stations. Finally, we provide recommendations to improve public transportation systems.
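
    A hedged sketch of what a decentralized headway rule of this kind can look like: each train chooses its dwell time at a station from local quantities only (passengers waiting to board and the gaps to the neighboring trains). The rule and its parameters are illustrative; the paper's method is calibrated to Mexico City Metro data.

    ```python
    def dwell_time(waiting_passengers, gap_behind, gap_ahead,
                   base=20.0, board_rate=0.5, k=10.0):
        """Dwell time (s): longer when many passengers wait or the follower is
        far behind; shorter when the follower closes in (anti-bunching)."""
        service = base + board_rate * waiting_passengers
        correction = k * (gap_behind - gap_ahead) / (gap_behind + gap_ahead)
        return max(service + correction, 0.0)
    ```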

  9. Improving public transportation systems with self-organization: A headway-based model and regulation of passenger alighting and boarding

    PubMed Central

    Gershenson, Carlos; Pineda, Luis A.

    2017-01-01

    The equal headway instability—the fact that a configuration with regular time intervals between vehicles tends to be volatile—is a common regulation problem in public transportation systems. An unsatisfactory regulation results in low efficiency and possible collapses of the service. Computational simulations have shown that self-organizing methods can regulate the headway adaptively beyond the theoretical optimum. In this work, we develop a computer simulation for metro systems fed with real data from the Mexico City Metro to test the current regulatory method with a novel self-organizing approach. The current model considers overall system’s data such as minimum and maximum waiting times at stations, while the self-organizing method regulates the headway in a decentralized manner using local information such as the passenger’s inflow and the positions of neighboring trains. The simulation shows that the self-organizing method improves the performance over the current one as it adapts to environmental changes at the timescale they occur. The correlation between the simulation of the current model and empirical observations carried out in the Mexico City Metro provides a base to calculate the expected performance of the self-organizing method in case it is implemented in the real system. We also performed a pilot study at the Balderas station to regulate the alighting and boarding of passengers through guide signs on platforms. The analysis of empirical data shows a delay reduction of the waiting time of trains at stations. Finally, we provide recommendations to improve public transportation systems. PMID:29287120

  10. [Mobbing: a meta-analysis and integrative model of its antecedents and consequences].

    PubMed

    Topa Cantisano, Gabriela; Depolo, Marco; Morales Domínguez, J Francisco

    2007-02-01

    Although mobbing has been extensively studied, empirical research has not led to firm conclusions regarding its antecedents and consequences, at both personal and organizational levels. An extensive literature search yielded 86 empirical studies with 93 samples. The correlation matrix obtained through meta-analytic techniques was used to test a structural equation model. Results supported hypotheses regarding organizational environmental factors as the main predictors of mobbing.

  11. An empirical approach for estimating stress-coupling lengths for marine-terminating glaciers

    USGS Publications Warehouse

    Enderlin, Ellyn; Hamilton, Gordon S.; O'Neel, Shad; Bartholomaus, Timothy C.; Morlighem, Mathieu; Holt, John W.

    2016-01-01

    Here we present a new empirical method to estimate the SCL for marine-terminating glaciers using high-resolution observations. We use the empirically-determined periodicity in resistive stress oscillations as a proxy for the SCL. Application of our empirical method to two well-studied tidewater glaciers (Helheim Glacier, SE Greenland, and Columbia Glacier, Alaska, USA) demonstrates that SCL estimates obtained using this approach are consistent with theory (i.e., can be parameterized as a function of the ice thickness) and with prior, independent SCL estimates. In order to accurately resolve stress variations, we suggest that similar empirical stress-coupling parameterizations be employed in future analyses of glacier dynamics.

  12. Testing a single regression coefficient in high dimensional linear models

    PubMed Central

    Zhong, Ping-Shou; Li, Runze; Wang, Hansheng; Tsai, Chih-Ling

    2017-01-01

    In linear regression models with high dimensional data, the classical z-test (or t-test) for testing the significance of each single regression coefficient is no longer applicable. This is mainly because the number of covariates exceeds the sample size. In this paper, we propose a simple and novel alternative by introducing the Correlated Predictors Screening (CPS) method to control for predictors that are highly correlated with the target covariate. Accordingly, the classical ordinary least squares approach can be employed to estimate the regression coefficient associated with the target covariate. In addition, we demonstrate that the resulting estimator is consistent and asymptotically normal even if the random errors are heteroscedastic. This enables us to apply the z-test to assess the significance of each covariate. Based on the p-value obtained from testing the significance of each covariate, we further conduct multiple hypothesis testing by controlling the false discovery rate at the nominal level. Then, we show that the multiple hypothesis testing achieves consistent model selection. Simulation studies and empirical examples are presented to illustrate the finite sample performance and the usefulness of the proposed method, respectively. PMID:28663668
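
    A rough sketch of the CPS-then-test idea under simplifying assumptions: screen for the d predictors most correlated with the target covariate, partial them out, and apply a z-test with a heteroscedasticity-robust variance. The choice of d and the exact variance estimator used in the paper are not reproduced here.

    ```python
    import numpy as np
    from scipy import stats

    def cps_ztest(X, y, target, d=10):
        """Test the coefficient of X[:, target] in a high-dimensional regression."""
        n = X.shape[0]
        xt = X[:, target]
        corr = np.abs(np.corrcoef(X, rowvar=False)[target])
        corr[target] = -np.inf
        controls = np.argsort(corr)[-d:]             # screening step
        Z = np.column_stack([np.ones(n), X[:, controls]])
        rx = xt - Z @ np.linalg.lstsq(Z, xt, rcond=None)[0]   # partial out controls
        ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
        beta = (rx @ ry) / (rx @ rx)                 # OLS on the residuals
        eps = ry - beta * rx
        var = (rx**2 * eps**2).sum() / (rx @ rx)**2  # White-type robust variance
        z = beta / np.sqrt(var)
        return beta, z, 2 * stats.norm.sf(abs(z))    # estimate, z-stat, p-value
    ```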

  13. Testing a single regression coefficient in high dimensional linear models.

    PubMed

    Lan, Wei; Zhong, Ping-Shou; Li, Runze; Wang, Hansheng; Tsai, Chih-Ling

    2016-11-01

    In linear regression models with high dimensional data, the classical z-test (or t-test) for testing the significance of each single regression coefficient is no longer applicable. This is mainly because the number of covariates exceeds the sample size. In this paper, we propose a simple and novel alternative by introducing the Correlated Predictors Screening (CPS) method to control for predictors that are highly correlated with the target covariate. Accordingly, the classical ordinary least squares approach can be employed to estimate the regression coefficient associated with the target covariate. In addition, we demonstrate that the resulting estimator is consistent and asymptotically normal even if the random errors are heteroscedastic. This enables us to apply the z-test to assess the significance of each covariate. Based on the p-value obtained from testing the significance of each covariate, we further conduct multiple hypothesis testing by controlling the false discovery rate at the nominal level. Then, we show that the multiple hypothesis testing achieves consistent model selection. Simulation studies and empirical examples are presented to illustrate the finite sample performance and the usefulness of the proposed method, respectively.

  14. Closed loop oscillating heat pipe as heating device for copper plate

    NASA Astrophysics Data System (ADS)

    Kamonpet, Patrapon; Sangpen, Waranphop

    2017-04-01

    In manufacturing parts by molding, temperature uniformity of the mold is crucial to part quality. Studies have been carried out in search of effective methods for controlling the mold temperature, and the use of heat pipes is one effective way to bring the temperature of the molding area to the right uniform level. Recently, the oscillating heat pipe has been developed and its applications are very promising. A semi-empirical correlation for the closed-loop oscillating heat pipe (CLOHP), with a standard deviation of ±30%, was used to design the CLOHP in this study. The CLOHP was placed in a copper plate at some distance from the plate surface and allowed to heat the plate up to the set surface temperature while the plate temperature was recorded. It is found that the CLOHP can be used effectively as a heat source to transfer heat to a copper plate with excellent temperature distribution. The standard deviations of the heat rate in all experiments fall well within ±30% of the correlation used.

  15. Uncovering collective listening habits and music genres in bipartite networks.

    PubMed

    Lambiotte, R; Ausloos, M

    2005-12-01

    In this paper, we analyze web-downloaded data on people sharing their music libraries, which we use as their individual musical signatures. The system is represented by a bipartite network whose nodes are the music groups and the listeners. A music group's audience size follows a power law, whereas an individual's music library size follows an exponential distribution with deviations at small values. In order to extract structures from the network, we focus on correlation matrices, which we filter by removing the least correlated links. This percolation-based filtering method reveals the emergence of social communities and music genres, which are visualized by a branching representation. Evidence of collective listening habits that do not fit the neat genres defined by the music industry indicates an alternative way of classifying listeners and music groups. The structure of the network is also studied by a more refined method based upon a random walk exploration of its properties. Finally, a personal identification-community imitation model for growing bipartite networks is outlined, following Potts ingredients. Simulation results reproduce the empirical data quite well.
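
    A minimal sketch of the correlation-filtering step on the bipartite data, assuming a binary users-by-groups ownership matrix: compute user-user correlations and keep only links above a percolation-style threshold.

    ```python
    import numpy as np

    def filtered_user_network(B, threshold=0.5):
        """B: binary (users x groups) matrix; returns a thresholded adjacency."""
        C = np.corrcoef(B)            # user-user correlations of library vectors
        np.fill_diagonal(C, 0.0)
        return (C > threshold).astype(int)
    ```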

  16. Uncovering collective listening habits and music genres in bipartite networks

    NASA Astrophysics Data System (ADS)

    Lambiotte, R.; Ausloos, M.

    2005-12-01

    In this paper, we analyze web-downloaded data on people sharing their music libraries, which we use as their individual musical signatures. The system is represented by a bipartite network whose nodes are the music groups and the listeners. A music group’s audience size follows a power law, whereas an individual’s music library size follows an exponential distribution with deviations at small values. In order to extract structures from the network, we focus on correlation matrices, which we filter by removing the least correlated links. This percolation-based filtering method reveals the emergence of social communities and music genres, which are visualized by a branching representation. Evidence of collective listening habits that do not fit the neat genres defined by the music industry indicates an alternative way of classifying listeners and music groups. The structure of the network is also studied by a more refined method based upon a random walk exploration of its properties. Finally, a personal identification-community imitation model for growing bipartite networks is outlined, following Potts ingredients. Simulation results reproduce the empirical data quite well.

  17. Determination of dynamic fracture toughness using a new experimental technique

    NASA Astrophysics Data System (ADS)

    Cady, Carl M.; Liu, Cheng; Lovato, Manuel L.

    2015-09-01

    In other studies, dynamic fracture toughness has been measured using Charpy impact and modified Hopkinson bar techniques. In this paper, results are shown for the measurement of fracture toughness using a new test geometry, with crack propagation velocities ranging from ~0.15 mm/s to 2.5 m/s. Digital image correlation (DIC) is the technique used to measure both the strain and the crack growth rates. The boundary of the crack is determined using the correlation coefficient generated during image analysis, and with interframe timing the crack growth rate and crack opening can be determined. A comparison of static and dynamic loading experiments is made for brittle polymeric materials. The analysis technique presented by Sammis et al. [1] is a semi-empirical solution; however, additional Linear Elastic Fracture Mechanics analysis of the strain fields generated as part of the DIC analysis allows for the more commonly used method resembling the crack tip opening displacement (CTOD) experiment. It should be noted that this technique was developed because limited amounts of material were available and crack growth rates were too fast for a standard CTOD method.

  18. Structural and spectroscopic characterization of methyl isocyanate, methyl cyanate, methyl fulminate, and acetonitrile N-oxide using highly correlated ab initio methods.

    PubMed

    Dalbouha, S; Senent, M L; Komiha, N; Domínguez-Gómez, R

    2016-09-28

    Various astrophysically relevant molecules obeying the empirical formula C2H3NO are characterized using explicitly correlated coupled cluster methods (CCSD(T)-F12). Rotational and rovibrational parameters are provided for four isomers: methyl isocyanate (CH3NCO), methyl cyanate (CH3OCN), methyl fulminate (CH3ONC), and acetonitrile N-oxide (CH3CNO). A CH3CON transition state is inspected. A variational procedure is employed to explore the far infrared region because some species present non-rigidity. Second order perturbation theory is used for the determination of anharmonic frequencies and rovibrational constants, and to predict Fermi resonances. Three species, methyl cyanate, methyl fulminate, and CH3CON, show a unique methyl torsion hindered by energy barriers. In methyl isocyanate, the methyl group barrier is so low that the internal top can be considered a free rotor. On the other hand, acetonitrile N-oxide presents a linear skeleton, C3v symmetry, and free internal rotation. Its equilibrium geometry depends strongly on electron correlation. The remaining isomers present a bent skeleton. Divergences between theoretical rotational constants and previous parameters fitted from observed lines for methyl isocyanate are discussed on the basis of the relevant rovibrational interaction and the quasi-linearity of the molecular skeleton.

  19. Bias corrections of GOSAT SWIR XCO2 and XCH4 with TCCON data and their evaluation using aircraft measurement data

    NASA Astrophysics Data System (ADS)

    Inoue, Makoto; Morino, Isamu; Uchino, Osamu; Nakatsuru, Takahiro; Yoshida, Yukio; Yokota, Tatsuya; Wunch, Debra; Wennberg, Paul O.; Roehl, Coleen M.; Griffith, David W. T.; Velazco, Voltaire A.; Deutscher, Nicholas M.; Warneke, Thorsten; Notholt, Justus; Robinson, John; Sherlock, Vanessa; Hase, Frank; Blumenstock, Thomas; Rettinger, Markus; Sussmann, Ralf; Kyrö, Esko; Kivi, Rigel; Shiomi, Kei; Kawakami, Shuji; De Mazière, Martine; Arnold, Sabrina G.; Feist, Dietrich G.; Barrow, Erica A.; Barney, James; Dubey, Manvendra; Schneider, Matthias; Iraci, Laura T.; Podolske, James R.; Hillyard, Patrick W.; Machida, Toshinobu; Sawa, Yousuke; Tsuboi, Kazuhiro; Matsueda, Hidekazu; Sweeney, Colm; Tans, Pieter P.; Andrews, Arlyn E.; Biraud, Sebastien C.; Fukuyama, Yukio; Pittman, Jasna V.; Kort, Eric A.; Tanaka, Tomoaki

    2016-08-01

    We describe a method for removing systematic biases of column-averaged dry air mole fractions of CO2 (XCO2) and CH4 (XCH4) derived from short-wavelength infrared (SWIR) spectra of the Greenhouse gases Observing SATellite (GOSAT). We conduct correlation analyses between the GOSAT biases and simultaneously retrieved auxiliary parameters, and use the resulting regressions to bias-correct the GOSAT data, removing these spurious correlations. Data from the Total Carbon Column Observing Network (TCCON) were used as reference values for this regression analysis. To evaluate the effectiveness of this correction method, the uncorrected and corrected GOSAT data were compared to independent XCO2 and XCH4 data derived from aircraft measurements taken for the Comprehensive Observation Network for TRace gases by AIrLiner (CONTRAIL) project, the National Oceanic and Atmospheric Administration (NOAA), the US Department of Energy (DOE), the National Institute for Environmental Studies (NIES), the Japan Meteorological Agency (JMA), the HIAPER Pole-to-Pole Observations (HIPPO) program, and the GOSAT validation aircraft observation campaign over Japan. These comparisons demonstrate that the empirically derived bias correction improves the agreement between GOSAT XCO2/XCH4 and the aircraft data. Finally, we present spatial distributions and temporal variations of the derived GOSAT biases.
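
    A hedged sketch of the regression-based correction: fit the GOSAT-minus-TCCON differences as a linear function of the auxiliary retrieval parameters, then subtract the fitted bias from every sounding. Which auxiliary parameters enter the regression is specific to the retrieval and is assumed given here.

    ```python
    import numpy as np

    def fit_bias_model(aux, diff):
        """aux: (n, k) auxiliary parameters; diff: GOSAT minus TCCON values."""
        A = np.column_stack([np.ones(len(aux)), aux])
        coef, *_ = np.linalg.lstsq(A, diff, rcond=None)
        return coef

    def apply_bias_correction(x_retrieved, aux, coef):
        A = np.column_stack([np.ones(len(aux)), aux])
        return x_retrieved - A @ coef   # remove the fitted systematic bias
    ```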

  20. Structural and spectroscopic characterization of methyl isocyanate, methyl cyanate, methyl fulminate, and acetonitrile N-oxide using highly correlated ab initio methods

    NASA Astrophysics Data System (ADS)

    Dalbouha, S.; Senent, M. L.; Komiha, N.; Domínguez-Gómez, R.

    2016-09-01

    Various astrophysically relevant molecules obeying the empirical formula C2H3NO are characterized using explicitly correlated coupled cluster methods (CCSD(T)-F12). Rotational and rovibrational parameters are provided for four isomers: methyl isocyanate (CH3NCO), methyl cyanate (CH3OCN), methyl fulminate (CH3ONC), and acetonitrile N-oxide (CH3CNO). A CH3CON transition state is inspected. A variational procedure is employed to explore the far infrared region because some species present non-rigidity. Second order perturbation theory is used for the determination of anharmonic frequencies and rovibrational constants, and to predict Fermi resonances. Three species, methyl cyanate, methyl fulminate, and CH3CON, show a unique methyl torsion hindered by energy barriers. In methyl isocyanate, the methyl group barrier is so low that the internal top can be considered a free rotor. On the other hand, acetonitrile N-oxide presents a linear skeleton, C3v symmetry, and free internal rotation. Its equilibrium geometry depends strongly on electron correlation. The remaining isomers present a bent skeleton. Divergences between theoretical rotational constants and previous parameters fitted from observed lines for methyl isocyanate are discussed on the basis of the relevant rovibrational interaction and the quasi-linearity of the molecular skeleton.

  1. Imaging subsurface hydrothermal structure using a dense geophone array in Yellowstone

    NASA Astrophysics Data System (ADS)

    Wu, S. M.; Lin, F. C.; Farrell, J.; Smith, R. B.

    2016-12-01

    The recent development of ambient noise cross-correlation and the availability of large-N seismic arrays allow for the study of detailed shallow crustal structure. In this study, we apply multi-component noise cross-correlation to explore shallow hydrothermal structure near Old Faithful geyser in Yellowstone National Park using a temporary geophone array. The array was composed of 133 three-component 5-Hz geophones and was deployed for two weeks during November 2015. The average station spacing is 50 meters and the full aperture of the array is around 1 km, with good azimuthal and spatial coverage. The Upper Geyser Basin, where Old Faithful is located, has the largest concentration of geysers in the world. This unique, active hydrothermal environment, and hence the extremely inhomogeneous noise source distribution, makes it difficult to construct empirical Green's functions with the traditional noise cross-correlation method. In this presentation, we show examples of the constructed cross-correlation functions and demonstrate their spatial and temporal relationships with known hydrothermal activity. We also demonstrate how useful seismic signals can be extracted from these cross-correlation functions and used for subsurface imaging. In particular, we will discuss the existence of a recharge cavity beneath Old Faithful revealed by the noise cross-correlations. In addition, we investigated temporal variations in structure based on time-lapse noise cross-correlations; these preliminary results will also be discussed.

  2. Automatic Classification of Extensive Aftershock Sequences Using Empirical Matched Field Processing

    NASA Astrophysics Data System (ADS)

    Gibbons, Steven J.; Harris, David B.; Kværna, Tormod; Dodge, Douglas A.

    2013-04-01

    The aftershock sequences that follow large earthquakes create considerable problems for data centers attempting to produce comprehensive event bulletins in near real-time. The greatly increased number of events which require processing can overwhelm analyst resources and reduce the capacity for analyzing events of monitoring interest. This exacerbates a potentially reduced detection capability at key stations, due to the noise generated by the sequence, and a deterioration in the quality of the fully automatic preliminary event bulletins caused by the difficulty in associating the vast numbers of closely spaced arrivals over the network. Considerable success has been enjoyed by waveform correlation methods for the automatic identification of groups of events belonging to the same geographical source region, facilitating the more time-efficient analysis of event ensembles as opposed to individual events. There are, however, formidable challenges associated with the automation of correlation procedures. The signal generated by a very large earthquake seldom correlates well enough with the signals generated by far smaller aftershocks for a correlation detector to produce statistically significant triggers at the correct times. Correlation between events within clusters of aftershocks is significantly better, although the issues of when and how to initiate new pattern detectors are still being investigated. Empirical Matched Field Processing (EMFP) is a highly promising method for detecting event waveforms suitable as templates for correlation detectors. EMFP is a quasi-frequency-domain technique that calibrates the spatial structure of a wavefront crossing a seismic array in a collection of narrow frequency bands. The amplitude and phase weights that result are applied in a frequency-domain beamforming operation that compensates for scattering and refraction effects not properly modeled by plane-wave beams. It has been demonstrated to outperform waveform correlation as a classifier of ripple-fired mining blasts since the narrowband procedure is insensitive to differences in the source-time functions. For sequences in which the spectral content and time-histories of the signals from the main shock and aftershocks vary greatly, the spatial structure calibrated by EMFP is an invariant that permits reliable detection of events in the specific source region. Examples from the 2005 Kashmir and 2011 Van earthquakes demonstrate how EMFP templates from the main events detect arrivals from the aftershock sequences with high sensitivity and exceptionally low false alarm rates. Classical waveform correlation detectors are demonstrated to fail for these examples. Even arrivals with SNR below unity can produce significant EMFP triggers as the spatial pattern of the incoming wavefront is identified, leading to robust detections at a greater number of stations and potentially more reliable automatic bulletins. False EMFP triggers are readily screened by scanning a space of phase shifts relative to the imposed template. EMFP has the potential to produce a rapid and robust overview of the evolving aftershock sequence such that correlation and subspace detectors can be applied semi-autonomously, with well-chosen parameter specifications, to identify and classify clusters of very closely spaced aftershocks.

  3. Towards a universal method for calculating hydration free energies: a 3D reference interaction site model with partial molar volume correction.

    PubMed

    Palmer, David S; Frolov, Andrey I; Ratkova, Ekaterina L; Fedorov, Maxim V

    2010-12-15

    We report a simple universal method to systematically improve the accuracy of hydration free energies calculated using an integral equation theory of molecular liquids, the 3D reference interaction site model. A strong linear correlation is observed between the difference of the experimental and (uncorrected) calculated hydration free energies and the calculated partial molar volume for a data set of 185 neutral organic molecules from different chemical classes. By using the partial molar volume as a linear empirical correction to the calculated hydration free energy, we obtain predictions of hydration free energies in excellent agreement with experiment (R = 0.94, σ = 0.99 kcal mol^-1) for a test set of 120 organic molecules.
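
    The correction itself is a linear fit with two coefficients. A sketch, assuming arrays of uncorrected 3D-RISM energies, experimental values, and calculated partial molar volumes for the training set:

    ```python
    import numpy as np

    def fit_pmv_correction(dg_calc, dg_exp, pmv):
        """Regress the error of the uncorrected energies on partial molar volume."""
        a, b = np.polyfit(pmv, dg_exp - dg_calc, 1)   # error ~ a*V + b
        return a, b

    def corrected_dg(dg_calc, pmv, a, b):
        return dg_calc + a * pmv + b                  # PMV-corrected prediction
    ```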

  4. Bayesian analysis of stochastic volatility-in-mean model with leverage and asymmetrically heavy-tailed error using generalized hyperbolic skew Student’s t-distribution

    PubMed Central

    Leão, William L.; Chen, Ming-Hui

    2017-01-01

    A stochastic volatility-in-mean model with correlated errors using the generalized hyperbolic skew Student-t (GHST) distribution provides a robust alternative to the parameter estimation for daily stock returns in the absence of normality. An efficient Markov chain Monte Carlo (MCMC) sampling algorithm is developed for parameter estimation. The deviance information, the Bayesian predictive information and the log-predictive score criterion are used to assess the fit of the proposed model. The proposed method is applied to an analysis of the daily stock return data from the Standard & Poor’s 500 index (S&P 500). The empirical results reveal that the stochastic volatility-in-mean model with correlated errors and GH-ST distribution leads to a significant improvement in the goodness-of-fit for the S&P 500 index returns dataset over the usual normal model. PMID:29333210

  5. Identifying the perceptive users for online social systems

    PubMed Central

    Liu, Xiao-Lu; Guo, Qiang; Han, Jing-Ti

    2017-01-01

    In this paper, the perceptive user, who can identify high-quality objects early in their lifespan, is introduced. By tracking the ratings given to rewarded objects, we present a method to quantify user perceptibility, defined as the capability of a user to identify these objects early in their lifespan. Moreover, we investigate the behavior patterns of perceptive users along three dimensions: user activity, correlation characteristics of user rating series, and user reputation. The experimental results for the empirical networks indicate that high-perceptibility users show significantly different behavior patterns from the others: they have larger degree, stronger correlation of rating series, and higher reputation. Furthermore, in view of the hysteresis in finding the rewarded objects, we present a general framework for identifying high-perceptibility users based on user behavior patterns. The experimental results show that this work is helpful for deeply understanding the collective behavior patterns of online users. PMID:28704382

  6. Exploring the influence of social activity on scientific career

    NASA Astrophysics Data System (ADS)

    Xie, Zonglin; Xie, Zheng; Li, Jianping; Yang, Qian

    2018-06-01

    For researchers, does activity in academic society influence their careers? In scientometrics, activity can be expressed through the number of collaborators, and scientific careers through the numbers of publications and citations of authors. We provide empirical evidence from four datasets of representative journals and explore the correlations between each pair of the three indices. Using a hypothetical extraction method, we divide authors into patterns that reflect different degrees of preference for social activity, according to their contributions to the correlation between the number of collaborators and the number of papers. Furthermore, we choose two of the patterns, one sociable and one unsociable, and compare both the expected values and the distributions of publications and citations for authors between the sociable and unsociable patterns. Finally, we conclude that social activity can help authors improve their academic output and obtain recognition.

  7. Identifying the perceptive users for online social systems.

    PubMed

    Liu, Jian-Guo; Liu, Xiao-Lu; Guo, Qiang; Han, Jing-Ti

    2017-01-01

    In this paper, the perceptive user, who can identify high-quality objects early in their lifespan, is introduced. By tracking the ratings given to rewarded objects, we present a method to quantify user perceptibility, defined as the capability of a user to identify these objects early in their lifespan. Moreover, we investigate the behavior patterns of perceptive users along three dimensions: user activity, correlation characteristics of user rating series, and user reputation. The experimental results for the empirical networks indicate that high-perceptibility users show significantly different behavior patterns from the others: they have larger degree, stronger correlation of rating series, and higher reputation. Furthermore, in view of the hysteresis in finding the rewarded objects, we present a general framework for identifying high-perceptibility users based on user behavior patterns. The experimental results show that this work is helpful for deeply understanding the collective behavior patterns of online users.

  8. Bayesian analysis of stochastic volatility-in-mean model with leverage and asymmetrically heavy-tailed error using generalized hyperbolic skew Student's t-distribution.

    PubMed

    Leão, William L; Abanto-Valle, Carlos A; Chen, Ming-Hui

    2017-01-01

    A stochastic volatility-in-mean model with correlated errors using the generalized hyperbolic skew Student-t (GHST) distribution provides a robust alternative to the parameter estimation for daily stock returns in the absence of normality. An efficient Markov chain Monte Carlo (MCMC) sampling algorithm is developed for parameter estimation. The deviance information, the Bayesian predictive information and the log-predictive score criterion are used to assess the fit of the proposed model. The proposed method is applied to an analysis of the daily stock return data from the Standard & Poor's 500 index (S&P 500). The empirical results reveal that the stochastic volatility-in-mean model with correlated errors and GH-ST distribution leads to a significant improvement in the goodness-of-fit for the S&P 500 index returns dataset over the usual normal model.

  9. Instantaneous Respiratory Estimation from Thoracic Impedance by Empirical Mode Decomposition.

    PubMed

    Wang, Fu-Tai; Chan, Hsiao-Lung; Wang, Chun-Li; Jian, Hung-Ming; Lin, Sheng-Hsiung

    2015-07-07

    Impedance plethysmography provides a way to measure respiratory activity by sensing the change of thoracic impedance caused by inspiration and expiration. This measurement imposes little pressure on the body and uses the human body as the sensor, thereby reducing the need for adjustments as body position changes and making it suitable for long-term or ambulatory monitoring. The empirical mode decomposition (EMD) can decompose a signal into several intrinsic mode functions (IMFs) that disclose nonstationary components as well as stationary components and, similarly, capture respiratory episodes from thoracic impedance. However, upper-body movements usually produce motion artifacts that are not easily removed by digital filtering. Moreover, large motion artifacts prevent the EMD from decomposing respiratory components. In this paper, motion artifacts are detected and replaced by data mirrored from the prior and posterior segments before EMD processing. A novel intrinsic respiratory reconstruction index that considers both global and local properties of IMFs is proposed to define respiration-related IMFs for respiration reconstruction and instantaneous respiratory estimation. Based on experiments comprising a series of static and dynamic physical activities, our results showed that, for small window sizes, the proposed method yielded higher cross correlations between respiratory frequencies estimated from thoracic impedance and those from oronasal airflow than the Fourier transform-based method.
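
    A simplified sketch of the reconstruction step using the PyEMD package (assumed available as the "EMD-signal" distribution): decompose the impedance trace and keep the IMFs whose dominant frequency falls in a plausible respiratory band. The paper's reconstruction index combines global and local IMF properties; dominant frequency alone is a stand-in.

    ```python
    import numpy as np
    from PyEMD import EMD   # pip install EMD-signal

    def respiratory_component(signal, fs, band=(0.1, 0.5)):
        """Sum the IMFs whose dominant frequency lies within the given band (Hz)."""
        imfs = EMD().emd(signal)
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        keep = []
        for imf in imfs:
            f_dom = freqs[np.argmax(np.abs(np.fft.rfft(imf)))]
            if band[0] <= f_dom <= band[1]:
                keep.append(imf)
        return np.sum(keep, axis=0) if keep else np.zeros_like(signal)
    ```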

  10. Instantaneous Respiratory Estimation from Thoracic Impedance by Empirical Mode Decomposition

    PubMed Central

    Wang, Fu-Tai; Chan, Hsiao-Lung; Wang, Chun-Li; Jian, Hung-Ming; Lin, Sheng-Hsiung

    2015-01-01

    Impedance plethysmography provides a way to measure respiratory activity by sensing the change of thoracic impedance caused by inspiration and expiration. This measurement imposes little pressure on the body and uses the human body as the sensor, thereby reducing the need for adjustments as body position changes and making it suitable for long-term or ambulatory monitoring. The empirical mode decomposition (EMD) can decompose a signal into several intrinsic mode functions (IMFs) that disclose nonstationary components as well as stationary components and, similarly, capture respiratory episodes from thoracic impedance. However, upper-body movements usually produce motion artifacts that are not easily removed by digital filtering. Moreover, large motion artifacts prevent the EMD from decomposing respiratory components. In this paper, motion artifacts are detected and replaced by data mirrored from the prior and posterior segments before EMD processing. A novel intrinsic respiratory reconstruction index that considers both global and local properties of IMFs is proposed to define respiration-related IMFs for respiration reconstruction and instantaneous respiratory estimation. Based on experiments comprising a series of static and dynamic physical activities, our results showed that, for small window sizes, the proposed method yielded higher cross correlations between respiratory frequencies estimated from thoracic impedance and those from oronasal airflow than the Fourier transform-based method. PMID:26198231

  11. Empirical prediction of peak pressure levels in anthropogenic impulsive noise. Part I: Airgun arrays signals.

    PubMed

    Galindo-Romero, Marta; Lippert, Tristan; Gavrilov, Alexander

    2015-12-01

    This paper presents an empirical linear equation to predict peak pressure level of anthropogenic impulsive signals based on its correlation with the sound exposure level. The regression coefficients are shown to be weakly dependent on the environmental characteristics but governed by the source type and parameters. The equation can be applied to values of the sound exposure level predicted with a numerical model, which provides a significant improvement in the prediction of the peak pressure level. Part I presents the analysis for airgun arrays signals, and Part II considers the application of the empirical equation to offshore impact piling noise.
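
    The relation is a straight line between the two level metrics. A sketch of fitting and applying it, with placeholder variable names (measured SEL/peak pairs for one source type, then modelled SEL values):

    ```python
    import numpy as np

    def fit_peak_from_sel(sel_measured, peak_measured):
        """Fit peak pressure level (dB) as a linear function of SEL (dB)."""
        slope, intercept = np.polyfit(sel_measured, peak_measured, 1)
        return slope, intercept

    def predict_peak(sel_modelled, slope, intercept):
        return slope * sel_modelled + intercept   # apply to modelled SEL values
    ```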

  12. Developmental Associations between Short-Term Variability and Long-Term Changes: Intraindividual Correlation of Positive and Negative Affect in Daily Life and Cognitive Aging

    ERIC Educational Resources Information Center

    Hülür, Gizem; Hoppmann, Christiane A.; Ram, Nilam; Gerstorf, Denis

    2015-01-01

    Conceptual notions and empirical evidence suggest that the intraindividual correlation (iCorr) of positive affect (PA) and negative affect (NA) is a meaningful characteristic of affective functioning. PA and NA are typically negatively correlated within-person. Previous research has found that the iCorr of PA and NA is relatively stable over time…

  13. Limits of the memory coefficient in measuring correlated bursts

    NASA Astrophysics Data System (ADS)

    Jo, Hang-Hyun; Hiraoka, Takayuki

    2018-03-01

    Temporal inhomogeneities in event sequences of natural and social phenomena have been characterized in terms of interevent times and correlations between interevent times. The inhomogeneities of interevent times have been extensively studied, while the correlations between interevent times, often called correlated bursts, are far from being fully understood. For measuring the correlated bursts, two relevant approaches were suggested, i.e., memory coefficient and burst size distribution. Here a burst size denotes the number of events in a bursty train detected for a given time window. Empirical analyses have revealed that the larger memory coefficient tends to be associated with the heavier tail of the burst size distribution. In particular, empirical findings in human activities appear inconsistent, such that the memory coefficient is close to 0, while burst size distributions follow a power law. In order to comprehend these observations, by assuming the conditional independence between consecutive interevent times, we derive the analytical form of the memory coefficient as a function of parameters describing interevent time and burst size distributions. Our analytical result can explain the general tendency of the larger memory coefficient being associated with the heavier tail of burst size distribution. We also find that the apparently inconsistent observations in human activities are compatible with each other, indicating that the memory coefficient has limits to measure the correlated bursts.
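
    The memory coefficient referred to above is the Pearson correlation between consecutive interevent times; a direct implementation:

    ```python
    import numpy as np

    def memory_coefficient(interevent_times):
        """Pearson correlation of (tau_1..tau_{n-1}) with (tau_2..tau_n)."""
        t = np.asarray(interevent_times, dtype=float)
        t1, t2 = t[:-1], t[1:]
        return ((t1 - t1.mean()) * (t2 - t2.mean())).mean() / (t1.std() * t2.std())
    ```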

  14. Empirical analysis of web-based user-object bipartite networks

    NASA Astrophysics Data System (ADS)

    Shang, Ming-Sheng; Lü, Linyuan; Zhang, Yi-Cheng; Zhou, Tao

    2010-05-01

    Understanding the structure and evolution of web-based user-object networks is a significant task since they play a crucial role in e-commerce nowadays. This letter reports an empirical analysis of two large-scale web sites, audioscrobbler.com and del.icio.us, where users are connected with music groups and bookmarks, respectively. The degree distributions and degree-degree correlations for both users and objects are reported. We propose a new index, named collaborative similarity, to quantify the diversity of tastes based on collaborative selection. Accordingly, the correlation between degree and selection diversity is investigated. We report some novel phenomena that well characterize the selection mechanism of web users and outline the relevance of these phenomena to the information recommendation problem.

  15. Sexual orientation beliefs: their relationship to anti-gay attitudes and biological determinist arguments.

    PubMed

    Hegarty, P; Pratto, F

    2001-01-01

    Previous studies which have measured beliefs about sexual orientation with either a single item, or a one-dimensional scale are discussed. In the present study beliefs were observed to vary along two dimensions: the "immutability" of sexual orientation and the "fundamentality" of a categorization of persons as heterosexuals and homosexuals. While conceptually related, these two dimensions were empirically distinct on several counts. They were negatively correlated with each other. Condemning attitudes toward lesbians and gay men were correlated positively with fundamentality but negatively with immutability. Immutability, but not fundamentality, affected the assimilation of a biological determinist argument. The relationship between sexual orientation beliefs and anti-gay prejudice is discussed and suggestions for empirical studies of sexual orientation beliefs are presented.

  16. The Nature of Procrastination: A Meta-Analytic and Theoretical Review of Quintessential Self-Regulatory Failure

    ERIC Educational Resources Information Center

    Steel, Piers

    2007-01-01

    Procrastination is a prevalent and pernicious form of self-regulatory failure that is not entirely understood. Hence, the relevant conceptual, theoretical, and empirical work is reviewed, drawing upon correlational, experimental, and qualitative findings. A meta-analysis of procrastination's possible causes and effects, based on 691 correlations,…

  17. Introducing Scale Analysis by Way of a Pendulum

    ERIC Educational Resources Information Center

    Lira, Ignacio

    2007-01-01

    Empirical correlations are a practical means of providing approximate answers to problems in physics whose exact solution is otherwise difficult to obtain. The correlations relate quantities that are deemed to be important in the physical situation to which they apply, and can be derived from experimental data by means of dimensional and/or scale…

  18. Success Avoidant Motivation and Behavior; Its Development Correlates and Situational Determinants. Final Report.

    ERIC Educational Resources Information Center

    Horner, Matina S.

    This paper reports on a successful attempt to understand success avoidant motivation and behavior by the development of an empirically sophisticated scoring system of success avoidant motivation and the observation of its behavioral correlates and situational determinants. Like most of the work on achievement motivation, the study was carried out…

  19. Large-Scale Studies on the Transferability of General Problem-Solving Skills and the Pedagogic Potential of Physics

    ERIC Educational Resources Information Center

    Mashood, K. K.; Singh, Vijay A.

    2013-01-01

    Research suggests that problem-solving skills are transferable across domains. This claim, however, needs further empirical substantiation. We suggest correlation studies as a methodology for making preliminary inferences about transfer. The correlation of the physics performance of students with their performance in chemistry and mathematics in…

  20. Correlates of the MMPI-2-RF in a College Setting

    ERIC Educational Resources Information Center

    Forbey, Johnathan D.; Lee, Tayla T. C.; Handel, Richard W.

    2010-01-01

    The current study examined empirical correlates of scores on Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF; A. Tellegen & Y. S. Ben-Porath, 2008; Y. S. Ben-Porath & A. Tellegen, 2008) scales in a college setting. The MMPI-2-RF and six criterion measures (assessing anger, assertiveness, sex roles, cognitive…

  1. [Association between obesity and DNA methylation among the 7-16 year-old twins].

    PubMed

    Li, C X; Gao, Y; Gao, W J; Yu, C Q; Lyu, J; Lyu, R R; Duan, J L; Sun, Y; Guo, X H; Wang, S F; Zhou, B; Wang, G; Cao, W H; Li, L M

    2018-04-10

    Objective: On a whole-genome scale, we explored the correlation between obesity-related traits and DNA methylation sites, based on discordant monozygotic twin pairs. Methods: A total of 90 pairs of 6-17 year-old twins were recruited in the Chaoyang, Yanqing and Fangshan districts of Beijing in 2016. Information on the twins was gathered through a self-designed questionnaire and from physical examination, including height, weight and waist circumference of the subjects under study. DNA methylation was detected on the Illumina Human Methylation EPIC BeadChip. R 3.3.1 was used to read the DNA methylation signals, with quality control on samples and probes. The ebayes function of an empirical Bayes paired moderated t-test was used to identify differentially methylated CpG sites (DMCs), and the varFit function of an empirical Bayes paired moderated Levene test was used to identify differentially variable CpG sites (DVCs) between the obese and normal groups. Results: According to the obesity discordance criteria, we collected 23 pairs of twins (age range 7 to 16 years), including 12 male pairs. A total of 817 471 qualified CpG loci were included in the genome-wide correlation analysis. At a significance level of FDR<0.05, no positive sites met this standard. The DMC with the smallest P value (1.26E-06) was CpG site cg05684382 on chromosome 12; the DVC with the smallest P value (6.44E-06) was CpG site cg26188191 in the CMIP gene on chromosome 16. Conclusions: In this study, we analyzed genome-wide DNA methylation and its correlation with obesity traits. After multiple testing corrections, no positive sites were found to be associated with obesity. However, results from the correlation analysis suggest that sites cg05684382 (chr: 12) and cg26188191 (chr: 16) might play a role in the development of obesity. This study provides a methodologic reference for studies on discordant twins.

  2. Non-empirical exchange-correlation parameterizations based on exact conditions from correlated orbital theory.

    PubMed

    Haiduke, Roberto Luiz A; Bartlett, Rodney J

    2018-05-14

    Some of the exact conditions provided by the correlated orbital theory are employed to propose new non-empirical parameterizations for exchange-correlation functionals from Density Functional Theory (DFT). This reparameterization process is based on range-separated functionals with 100% exact exchange for long-range interelectronic interactions. The functionals developed here, CAM-QTP-02 and LC-QTP, show mitigated self-interaction error, correctly predict vertical ionization potentials as the negative of eigenvalues for occupied orbitals, and provide nice excitation energies, even for challenging charge-transfer excited states. Moreover, some improvements are observed for reaction barrier heights with respect to the other functionals belonging to the quantum theory project (QTP) family. Finally, the most important achievement of these new functionals is an excellent description of vertical electron affinities (EAs) of atoms and molecules as the negative of appropriate virtual orbital eigenvalues. In this case, the mean absolute deviations for EAs in molecules are smaller than 0.10 eV, showing that physical interpretation can indeed be ascribed to some unoccupied orbitals from DFT.

  3. Non-empirical exchange-correlation parameterizations based on exact conditions from correlated orbital theory

    NASA Astrophysics Data System (ADS)

    Haiduke, Roberto Luiz A.; Bartlett, Rodney J.

    2018-05-01

    Some of the exact conditions provided by the correlated orbital theory are employed to propose new non-empirical parameterizations for exchange-correlation functionals from Density Functional Theory (DFT). This reparameterization process is based on range-separated functionals with 100% exact exchange for long-range interelectronic interactions. The functionals developed here, CAM-QTP-02 and LC-QTP, show mitigated self-interaction error, correctly predict vertical ionization potentials as the negative of eigenvalues for occupied orbitals, and provide nice excitation energies, even for challenging charge-transfer excited states. Moreover, some improvements are observed for reaction barrier heights with respect to the other functionals belonging to the quantum theory project (QTP) family. Finally, the most important achievement of these new functionals is an excellent description of vertical electron affinities (EAs) of atoms and molecules as the negative of appropriate virtual orbital eigenvalues. In this case, the mean absolute deviations for EAs in molecules are smaller than 0.10 eV, showing that physical interpretation can indeed be ascribed to some unoccupied orbitals from DFT.

  4. Bayesian Group Bridge for Bi-level Variable Selection.

    PubMed

    Mallick, Himel; Yi, Nengjun

    2017-06-01

    A Bayesian bi-level variable selection method (BAGB: Bayesian Analysis of Group Bridge) is developed for regularized regression and classification. This new development is motivated by grouped data, where generic variables can be divided into multiple groups, with variables in the same group being mechanistically related or statistically correlated. As an alternative to frequentist group variable selection methods, BAGB incorporates structural information among predictors through a group-wise shrinkage prior. Posterior computation proceeds via an efficient MCMC algorithm. In addition to the usual ease-of-interpretation of hierarchical linear models, the Bayesian formulation produces valid standard errors, a feature that is notably absent in the frequentist framework. Empirical evidence of the attractiveness of the method is illustrated by extensive Monte Carlo simulations and real data analysis. Finally, several extensions of this new approach are presented, providing a unified framework for bi-level variable selection in general models with flexible penalties.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Capone, D.G.; Penhale, P.A.; Oremland, R.S.

    N₂ (C₂H₂) fixation and primary production were measured in communities of Thalassia testudinum at two sites in Bimini Harbor (Bahamas). Production was determined by uptake of [¹⁴C]NaHCO₃, by leaf growth measurements, and by applying an empirical formula based on leaf dimensions. The last two methods gave similar results but the ¹⁴C method gave higher values. Anaerobic sediment N₂ fixation supplied about 1/4 to 1/2 of the nitrogen demand for leaf production (by the leaf growth method), and there was a significant correlation between N₂ fixation and CO₂ fixation rates when all components of the communities were considered (macrophyte, phyllosphere epiphytes, and detrital leaves). N₂ fixation is important to production in Thalassia communities, and the plant and its leaf epiphytes may be distinct entities in terms of nitrogen and carbon metabolism.

  6. An operational GLS model for hydrologic regression

    USGS Publications Warehouse

    Tasker, Gary D.; Stedinger, J.R.

    1989-01-01

    Recent Monte Carlo studies have documented the value of generalized least squares (GLS) procedures to estimate empirical relationships between streamflow statistics and physiographic basin characteristics. This paper presents a number of extensions of the GLS method that deal with realities and complexities of regional hydrologic data sets that were not addressed in the simulation studies. These extensions include: (1) a more realistic model of the underlying model errors; (2) smoothed estimates of cross correlation of flows; (3) procedures for including historical flow data; (4) diagnostic statistics describing leverage and influence for GLS regression; and (5) the formulation of a mathematical program for evaluating future gaging activities. ?? 1989.
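
    The GLS estimator at the core of this procedure has a closed form. A minimal Python sketch follows, assuming the error covariance matrix Omega (whose off-diagonal terms would come from the smoothed cross-correlations of concurrent flows) is supplied by the caller; names are illustrative.

      import numpy as np

      def gls_estimate(X, y, Omega):
          # beta = (X' W X)^(-1) X' W y, with W = Omega^(-1)
          W = np.linalg.inv(Omega)
          XtW = X.T @ W
          A = XtW @ X
          beta = np.linalg.solve(A, XtW @ y)
          cov_beta = np.linalg.inv(A)           # sampling covariance of beta
          return beta, cov_beta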

  7. Sexual selection and sex linkage.

    PubMed

    Kirkpatrick, Mark; Hall, David W

    2004-04-01

    Some animal groups, such as birds, seem prone to extreme forms of sexual selection. One contributing factor may be sex linkage of genes affecting male displays and female preferences. Here we show that sex linkage can have substantial effects on the genetic correlation between these traits and consequently for Fisher's runaway and the good-genes mechanisms of sexual selection. Under some kinds of sex linkage (e.g. Z-linked preferences), a runaway is more likely than under autosomal inheritance, while under others (e.g., X-linked preferences and autosomal displays), the good-genes mechanism is particularly powerful. These theoretical results suggest empirical tests based on the comparative method.

  8. Surprising performance for vibrational frequencies of the distinguishable clusters with singles and doubles (DCSD) and MP2.5 approximations

    NASA Astrophysics Data System (ADS)

    Kesharwani, Manoj K.; Sylvetsky, Nitai; Martin, Jan M. L.

    2017-11-01

    We show that the DCSD (distinguishable clusters with all singles and doubles) correlation method permits the calculation of vibrational spectra at near-CCSD(T) quality but at no more than CCSD cost, and with comparatively inexpensive analytical gradients. For systems dominated by a single reference configuration, even MP2.5 is a viable alternative, at MP3 cost. MP2.5 performance for vibrational frequencies is comparable to double hybrids such as DSD-PBEP86-D3BJ, but without resorting to empirical parameters. DCSD is also quite suitable for computing zero-point vibrational energies in computational thermochemistry.
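
    The MP2.5 model itself is simple arithmetic on top of MP2 and MP3: the third-order correction is scaled by one half, which is the same as averaging the two correlation energies. A trivial sketch, with the component energies assumed to come from any electronic-structure code:

      def mp2_5_energy(e_mp2_corr, e_mp3_corr):
          # E(MP2.5) = E(MP2) + 0.5 * E3 = 0.5 * (E(MP2) + E(MP3))
          return 0.5 * (e_mp2_corr + e_mp3_corr)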

  9. Disorders without borders: current and future directions in the meta-structure of mental disorders.

    PubMed

    Carragher, Natacha; Krueger, Robert F; Eaton, Nicholas R; Slade, Tim

    2015-03-01

    Classification is the cornerstone of clinical diagnostic practice and research. However, the extant psychiatric classification systems are not well supported by research evidence. In particular, extensive comorbidity among putatively distinct disorders flags an urgent need for fundamental changes in how we conceptualize psychopathology. Over the past decade, research has coalesced on an empirically based model that suggests many common mental disorders are structured according to two correlated latent dimensions: internalizing and externalizing. We review and discuss the development of a dimensional-spectrum model which organizes mental disorders in an empirically based manner. We also touch upon changes in the DSM-5 and put forward recommendations for future research endeavors. Our review highlights substantial empirical support for the empirically based internalizing-externalizing model of psychopathology, which provides a parsimonious means of addressing comorbidity. As future research goals, we suggest that the field would benefit from: expanding the meta-structure of psychopathology to include additional disorders, development of empirically based thresholds, inclusion of a developmental perspective, and intertwining genomic and neuroscience dimensions with the empirical structure of psychopathology.

  10. The Naïve Overfitting Index Selection (NOIS): A new method to optimize model complexity for hyperspectral data

    NASA Astrophysics Data System (ADS)

    Rocha, Alby D.; Groen, Thomas A.; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Willemen, Louise

    2017-11-01

    The growing number of narrow spectral bands in hyperspectral remote sensing improves the capacity to describe and predict biological processes in ecosystems. But it also poses a challenge to fitting empirical models based on such high-dimensional data, which often contain correlated and noisy predictors. As sample sizes for training and validating empirical models do not seem to be increasing at the same rate, overfitting has become a serious concern. Overly complex models lead to overfitting by capturing more than the underlying relationship, including random noise in the data. Many regression techniques claim to overcome these problems by using different strategies to constrain complexity, such as limiting the number of terms in the model, creating latent variables, or shrinking parameter coefficients. This paper proposes a new method, named Naïve Overfitting Index Selection (NOIS), which makes use of artificially generated spectra to quantify relative model overfitting and to select an optimal model complexity supported by the data. The robustness of this new method is assessed by comparing it to a traditional model selection based on cross-validation. The optimal model complexity is determined for seven different regression techniques, such as partial least squares regression, support vector machine, artificial neural network and tree-based regressions, using five hyperspectral datasets. The NOIS method selects less complex models, which present accuracies similar to the cross-validation method. The NOIS method reduces the chance of overfitting, thereby avoiding models that produce accurate predictions valid only for the data used and too complex to support inferences about the underlying process.
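
    A crude sketch of the NOIS idea, with partial least squares standing in for the regression technique: refit the model on artificial pure-noise spectra of the same shape and treat the apparent fit on noise as the overfitting attributable to that complexity level. The published procedure differs in detail; function and parameter names are illustrative.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.metrics import r2_score

      def noise_overfit_index(X, y, n_components, n_draws=50, seed=0):
          rng = np.random.default_rng(seed)
          model = PLSRegression(n_components=n_components)
          r2_real = r2_score(y, model.fit(X, y).predict(X).ravel())
          r2_noise = []
          for _ in range(n_draws):
              X_art = rng.normal(size=X.shape)      # artificial "spectra"
              r2_noise.append(
                  r2_score(y, model.fit(X_art, y).predict(X_art).ravel()))
          # fit achievable on pure noise ~ overfitting at this complexity
          return r2_real - np.mean(r2_noise)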

  11. Advancing the detection of steady-state visual evoked potentials in brain-computer interfaces

    NASA Astrophysics Data System (ADS)

    Abu-Alqumsan, Mohammad; Peer, Angelika

    2016-06-01

    Objective. Spatial filtering has proved to be a powerful pre-processing step in the detection of steady-state visual evoked potentials (SSVEPs) and has boosted typical detection rates both in offline analysis and in online SSVEP-based brain-computer interface applications. State-of-the-art detection methods and the spatial filters used thereby share many common foundations, as they all build upon the second-order statistics of the acquired electroencephalographic (EEG) data, that is, its spatial autocovariance and its cross-covariance with what is assumed to be a pure SSVEP response. The present study aims at highlighting the similarities and differences between these methods. Approach. We consider the canonical correlation analysis (CCA) method as a basis for the theoretical and empirical (with real EEG data) analysis of the state-of-the-art detection methods and the spatial filters used thereby. We build upon the findings of this analysis and prior research and propose a new detection method (CVARS) that combines the power of the canonical variates and that of autoregressive spectral analysis in estimating the signal and noise power levels. Main results. We found that the multivariate synchronization index method and the maximum contrast combination method are variations of the CCA method. All three methods were found to provide relatively unreliable detections in low signal-to-noise ratio (SNR) regimes. CVARS and the minimum energy combination method were found to provide better estimates across different SNR levels. Significance. Our theoretical and empirical results demonstrate that the proposed CVARS method outperforms other state-of-the-art detection methods when used in an unsupervised fashion. Furthermore, when used in a supervised fashion, a linear classifier learned from a short training session is able to estimate the hidden user intention, including the idle state (when the user is not attending to any stimulus), rapidly, accurately and reliably.
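
    The baseline CCA detector discussed above correlates the multichannel EEG with sine/cosine references at each candidate stimulation frequency and its harmonics. A minimal scikit-learn sketch (a simplified illustration, not the CVARS method proposed in the paper):

      import numpy as np
      from sklearn.cross_decomposition import CCA

      def cca_ssvep_score(eeg, freq, fs, n_harmonics=2):
          # eeg: (n_samples, n_channels); refs: sines/cosines at harmonics
          t = np.arange(eeg.shape[0]) / fs
          refs = np.column_stack([f(2 * np.pi * h * freq * t)
                                  for h in range(1, n_harmonics + 1)
                                  for f in (np.sin, np.cos)])
          u, v = CCA(n_components=1).fit_transform(eeg, refs)
          return np.corrcoef(u.ravel(), v.ravel())[0, 1]

      # detection: pick the candidate frequency with the largest score, e.g.
      # target = max(freqs, key=lambda f: cca_ssvep_score(eeg, f, fs))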

  12. Social inequality, lifestyles and health - a non-linear canonical correlation analysis based on the approach of Pierre Bourdieu.

    PubMed

    Grosse Frie, Kirstin; Janssen, Christian

    2009-01-01

    Based on the theoretical and empirical approach of Pierre Bourdieu, a multivariate non-linear method is introduced as an alternative way to analyse the complex relationships between social determinants and health. The analysis is based on face-to-face interviews with 695 randomly selected respondents aged 30 to 59. Variables regarding socio-economic status, life circumstances, lifestyles, health-related behaviour and health were chosen for the analysis. In order to determine whether the respondents can be differentiated and described based on these variables, a non-linear canonical correlation analysis (OVERALS) was performed. The results can be described in three dimensions; the eigenvalues add up to a fit of 1.444, which can be interpreted as approximately 50% of explained variance. The three-dimensional space illustrates correspondences between variables and provides a framework for interpretation based on latent dimensions, which can be described by age, education, income and gender. Using non-linear canonical correlation analysis, health characteristics can be analysed in conjunction with socio-economic conditions and lifestyles. Based on Bourdieu's theoretical approach, the complex correlations between these variables can be more substantially interpreted and presented.

  13. Causes of coal-miner absenteeism. Information Circular/1987

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peters, R.H.; Randolph, R.F.

    The Bureau of Mines report describes several significant problems associated with absenteeism among underground coal miners. The vast empirical literature on employee absenteeism is reviewed, and a conceptual model of the factors that cause absenteeism among miners is presented. Portions of the model were empirically tested by performing correlational and multiple regression analyses on data collected from a group of 64 underground coal miners. The results of these tests are presented and discussed.

  14. GPS-Derived Precipitable Water Compared with the Air Force Weather Agency’s MM5 Model Output

    DTIC Science & Technology

    2002-03-26

    and fewer than 100 sensors are available throughout Europe, while the receiver density is currently comparable to the upper-air sounding network... profiles from 38 upper-air sites throughout Europe. Based on these empirical formulae and simplifications, Bevis (1992) determined that the error... Alaska, using Bevis' (1992) empirical correlation based on 8,718 radiosonde calculations over 2 years. Other studies have been conducted in Europe and…

  15. Perceived sexual harassment at work: meta-analysis and structural model of antecedents and consequences.

    PubMed

    Topa Cantisano, Gabriela; Morales Domínguez, J F; Depolo, Marco

    2008-05-01

    Although sexual harassment has been extensively studied, empirical research has not led to firm conclusions about its antecedents and consequences, at both the personal and organizational levels. An extensive literature search yielded 42 empirical studies with 60 samples. The correlation matrix obtained through meta-analytic techniques was used to test a structural equation model. Results supported the hypotheses regarding organizational environmental factors as the main predictors of harassment.

  16. Do foreign exchange and equity markets co-move in Latin American region? Detrended cross-correlation approach

    NASA Astrophysics Data System (ADS)

    Bashir, Usman; Yu, Yugang; Hussain, Muntazir; Zebende, Gilney F.

    2016-11-01

    This paper investigates the dynamics of the relationship between foreign exchange markets and stock markets through time-varying co-movements. We analyzed monthly time series of Latin American countries for the period from 1991 to 2015. Furthermore, we apply Granger causality to verify the direction of causality between foreign exchange and stock markets, and the detrended cross-correlation approach (ρDCCA) to detect co-movements at different time scales. Our empirical results suggest a positive cross-correlation between exchange rates and stock prices for all Latin American countries. The findings reveal two clear patterns of correlation. First, Brazil and Argentina have positive correlations over both short and long time frames. Second, the remaining countries are negatively correlated at shorter time scales, gradually moving to positive. This paper contributes to the field in three ways. First, we verified the co-movements of exchange rates and stock prices, which were rarely discussed in previous empirical studies. Second, the ρDCCA coefficient is a robust and powerful methodology for measuring cross-correlation when dealing with non-stationary time series. Third, most previous studies employed one or two time scales using co-integration and vector autoregressive approaches, so not much is known about the co-movements between foreign exchange and stock markets at varying time scales. The ρDCCA coefficient facilitates understanding of these co-movements in greater depth.
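
    A minimal sketch of the ρDCCA coefficient, using non-overlapping windows and linear detrending of the integrated series (published formulations often use overlapping boxes; this is a simplification):

      import numpy as np

      def rho_dcca(x, y, window):
          X = np.cumsum(x - np.mean(x))     # integrated (profile) series
          Y = np.cumsum(y - np.mean(y))
          t = np.arange(window)
          f2x = f2y = fxy = 0.0
          for k in range(len(X) // window):
              s = slice(k * window, (k + 1) * window)
              rx = X[s] - np.polyval(np.polyfit(t, X[s], 1), t)  # residuals
              ry = Y[s] - np.polyval(np.polyfit(t, Y[s], 1), t)
              f2x += np.mean(rx ** 2)
              f2y += np.mean(ry ** 2)
              fxy += np.mean(rx * ry)
          # detrended covariance normalized by the two detrended variances
          return fxy / np.sqrt(f2x * f2y)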

  17. A novel method to identify pathways associated with renal cell carcinoma based on a gene co-expression network

    PubMed Central

    RUAN, XIYUN; LI, HONGYUN; LIU, BO; CHEN, JIE; ZHANG, SHIBAO; SUN, ZEQIANG; LIU, SHUANGQING; SUN, FAHAI; LIU, QINGYONG

    2015-01-01

    The aim of the present study was to develop a novel method for identifying pathways associated with renal cell carcinoma (RCC) based on a gene co-expression network. A framework was established where a co-expression network was derived from the database as well as various co-expression approaches. First, the backbone of the network, based on differentially expressed (DE) genes between RCC patients and normal controls, was constructed with the Search Tool for the Retrieval of Interacting Genes/Proteins (STRING) database. The differentially co-expressed links were detected by Pearson's correlation, the empirical Bayesian (EB) approach and Weighted Gene Co-expression Network Analysis (WGCNA). The co-expressed gene pairs were merged by a rank-based algorithm. We obtained 842, 371, 2,883 and 1,595 co-expressed gene pairs from the co-expression networks of the STRING database, Pearson's correlation, the EB method and WGCNA, respectively. Two hundred and eighty-one differentially co-expressed (DC) gene pairs were obtained from the merged network using this novel method. Pathway enrichment analysis based on the Kyoto Encyclopedia of Genes and Genomes (KEGG) database and the network enrichment analysis (NEA) method were performed to verify the feasibility of the merged method. Results of the KEGG and NEA pathway analyses showed that the network was associated with RCC. The suggested method was computationally efficient for identifying pathways associated with RCC and is a useful complement to traditional co-expression analysis. PMID:26058425

  18. Job Stress and Presenteeism among Chinese Healthcare Workers: The Mediating Effects of Affective Commitment

    PubMed Central

    Ma, Mingxu; Li, Yaxin; Tian, Huilin; Deng, Jianwei

    2017-01-01

    Background: Presenteeism affects the performance of healthcare workers. This study examined associations between job stress, affective commitment, and presenteeism among healthcare workers. Methods: To investigate the relationship between job stress, affective commitment, and presenteeism, structural equation modeling was used to analyze a sample of 1392 healthcare workers from 11 Class A tertiary hospitals in eastern, central, and western China. The mediating effect of affective commitment on the association between job stress and presenteeism was examined with the Sobel test. Results: Job stress was high and the level of presenteeism was moderate among healthcare workers. Challenge stress and hindrance stress were strongly correlated (β = 0.62; p < 0.05). Affective commitment was significantly and directly inversely correlated with presenteeism (β = −0.27; p < 0.001). Challenge stress was significantly positively correlated with affective commitment (β = 0.15; p < 0.001) but not with presenteeism. Hindrance stress was significantly inversely correlated with affective commitment (β = −0.40; p < 0.001) but was significantly positively correlated with presenteeism (β = 0.26; p < 0.001). Conclusions: This study provides important empirical data on presenteeism among healthcare workers. Presenteeism can be addressed by increasing affective commitment and challenge stress and by limiting hindrance stress among healthcare workers in China. PMID:28850081
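
    The Sobel test used here has a simple closed form: the indirect effect is the product of the two path coefficients, divided by an approximate standard error. A small sketch, where a is the stress-to-commitment path and b the commitment-to-presenteeism path, each with its standard error:

      from math import sqrt
      from scipy.stats import norm

      def sobel_test(a, se_a, b, se_b):
          # z-statistic for the indirect (mediated) effect a * b
          z = (a * b) / sqrt(b**2 * se_a**2 + a**2 * se_b**2)
          p = 2 * norm.sf(abs(z))
          return z, p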

  19. Bias, precision and statistical power of analysis of covariance in the analysis of randomized trials with baseline imbalance: a simulation study

    PubMed Central

    2014-01-01

    Background Analysis of variance (ANOVA), change-score analysis (CSA) and analysis of covariance (ANCOVA) respond differently to baseline imbalance in randomized controlled trials. However, no empirical studies appear to have quantified the differential bias and precision of estimates derived from these methods of analysis, and their relative statistical power, in relation to combinations of levels of key trial characteristics. This simulation study therefore examined the relative bias, precision and statistical power of these three analyses using simulated trial data. Methods 126 hypothetical trial scenarios were evaluated (126 000 datasets), each with continuous data simulated by using a combination of levels of: treatment effect; pretest-posttest correlation; direction and magnitude of baseline imbalance. The bias, precision and power of each method of analysis were calculated for each scenario. Results Compared to the unbiased estimates produced by ANCOVA, both ANOVA and CSA are subject to bias, in relation to pretest-posttest correlation and the direction of baseline imbalance. Additionally, ANOVA and CSA are less precise than ANCOVA, especially when pretest-posttest correlation ≥ 0.3. When groups are balanced at baseline, ANCOVA is at least as powerful as the other analyses. Apparently greater power of ANOVA and CSA at certain imbalances is achieved in respect of a biased treatment effect. Conclusions Across a range of correlations between pre- and post-treatment scores and at varying levels and direction of baseline imbalance, ANCOVA remains the optimum statistical method for the analysis of continuous outcomes in RCTs, in terms of bias, precision and statistical power. PMID:24712304
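
    A toy version of one such scenario, showing how ANCOVA, change-score analysis and ANOVA can be compared under deliberate baseline imbalance (parameter values are illustrative, not those used in the study):

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n, rho, effect, imbalance = 200, 0.6, 0.5, 0.3
      group = np.repeat([0.0, 1.0], n // 2)
      pre = rng.normal(size=n) + imbalance * group     # baseline imbalance
      post = rho * pre + np.sqrt(1 - rho**2) * rng.normal(size=n) + effect * group

      ancova = sm.OLS(post, sm.add_constant(np.column_stack([group, pre]))).fit()
      csa = sm.OLS(post - pre, sm.add_constant(group)).fit()
      anova = sm.OLS(post, sm.add_constant(group)).fit()
      # treatment-effect estimates; only ANCOVA is unbiased here
      print(ancova.params[1], csa.params[1], anova.params[1])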

  20. Modified complementary ensemble empirical mode decomposition and intrinsic mode functions evaluation index for high-speed train gearbox fault diagnosis

    NASA Astrophysics Data System (ADS)

    Chen, Dongyue; Lin, Jianhui; Li, Yanping

    2018-06-01

    Complementary ensemble empirical mode decomposition (CEEMD) was developed to address the mode-mixing problem of the Empirical Mode Decomposition (EMD) method. Compared to ensemble empirical mode decomposition (EEMD), the CEEMD method reduces residue noise in the signal reconstruction. Both CEEMD and EEMD need a sufficiently large ensemble number to reduce the residue noise, which incurs a high computational cost. Moreover, the selection of intrinsic mode functions (IMFs) for further analysis usually depends on experience. A modified CEEMD method and an IMF evaluation index are proposed with the aim of reducing the computational cost and selecting IMFs automatically. A simulated signal and in-service high-speed train gearbox vibration signals are employed to validate the proposed method. The results demonstrate that the modified CEEMD can decompose the signal efficiently at lower computational cost, and the IMF evaluation index can select the meaningful IMFs automatically.
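
    Decompositions of this family are available in the third-party PyEMD package (installed as EMD-signal; the API may differ across versions). The energy-based screening below is purely illustrative and is not the evaluation index proposed in the paper:

      import numpy as np
      from PyEMD import CEEMDAN       # assumes the EMD-signal package

      t = np.linspace(0.0, 1.0, 2048)
      sig = np.sin(2 * np.pi * 30 * t) + 0.5 * np.sin(2 * np.pi * 300 * t)

      imfs = CEEMDAN(trials=50)(sig)  # ensemble size trades cost for noise
      energy = np.mean(imfs ** 2, axis=1)
      selected = imfs[energy > 0.05 * energy.max()]   # crude IMF screening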

  1. Understanding similarity of groundwater systems with empirical copulas

    NASA Astrophysics Data System (ADS)

    Haaf, Ezra; Kumar, Rohini; Samaniego, Luis; Barthel, Roland

    2016-04-01

    Within the classification framework for groundwater systems that aims at identifying similarity of hydrogeological systems and transferring information from a well-observed to an ungauged system (Haaf and Barthel, 2015; Haaf and Barthel, 2016), we propose a copula-based method for describing groundwater-system similarity. Copulas are an emerging method in the hydrological sciences that make it possible to model the dependence structure of two groundwater level time series independently of the effects of their marginal distributions. This study builds on Samaniego et al. (2010), which described an approach for calculating dissimilarity measures from bivariate empirical copula densities of streamflow time series; streamflow is subsequently predicted in ungauged basins by transferring properties from similar catchments. The proposed approach is innovative because copula-based similarity has not yet been applied to groundwater systems. Here we estimate the pairwise dependence structure of 600 wells in Southern Germany using 10 years of weekly groundwater level observations. Based on these empirical copulas, dissimilarity measures are estimated, such as the copula's lower- and upper-corner cumulated probabilities and the copula-based Spearman's rank correlation, as proposed by Samaniego et al. (2010). For the characterization of groundwater systems, copula-based metrics are compared with dissimilarities obtained from precipitation signals corresponding to the presumed area of influence of each groundwater well. This promising approach provides a new tool for advancing similarity-based classification of groundwater system dynamics. Haaf, E., Barthel, R., 2015. Methods for assessing hydrogeological similarity and for classification of groundwater systems on the regional scale, EGU General Assembly 2015, Vienna, Austria. Haaf, E., Barthel, R., 2016. An approach for classification of hydrogeological systems at the regional scale based on groundwater hydrographs, EGU General Assembly 2016, Vienna, Austria. Samaniego, L., Bardossy, A., Kumar, R., 2010. Streamflow prediction in ungauged catchments using copula-based dissimilarity measures. Water Resources Research, 46. DOI:10.1029/2008wr007695
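
    One ingredient mentioned above, the lower-corner cumulated probability of the empirical copula, reduces to counting joint exceedances on the rank scale. A minimal sketch:

      import numpy as np
      from scipy.stats import rankdata

      def lower_corner_probability(x, y, q=0.1):
          # pseudo-observations: ranks rescaled into (0, 1)
          n = len(x)
          u = rankdata(x) / (n + 1)
          v = rankdata(y) / (n + 1)
          # empirical copula mass in the lower corner, C_n(q, q)
          return np.mean((u <= q) & (v <= q))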

  2. From moonlight to movement and synchronized randomness: Fourier and wavelet analyses of animal location time series data

    PubMed Central

    Polansky, Leo; Wittemyer, George; Cross, Paul C.; Tambling, Craig J.; Getz, Wayne M.

    2011-01-01

    High-resolution animal location data are increasingly available, requiring analytical approaches and statistical tools that can accommodate the temporal structure and transient dynamics (non-stationarity) inherent in natural systems. Traditional analyses often assume uncorrelated or weakly correlated temporal structure in the velocity (net displacement) time series constructed using sequential location data. We propose that frequency and time–frequency domain methods, embodied by Fourier and wavelet transforms, can serve as useful probes in early investigations of animal movement data, stimulating new ecological insight and questions. We introduce a novel movement model with time-varying parameters to study these methods in an animal movement context. Simulation studies show that the spectral signature given by these methods provides a useful approach for statistically detecting and characterizing temporal dependency in animal movement data. In addition, our simulations provide a connection between the spectral signatures observed in empirical data with null hypotheses about expected animal activity. Our analyses also show that there is not a specific one-to-one relationship between the spectral signatures and behavior type and that departures from the anticipated signatures are also informative. Box plots of net displacement arranged by time of day and conditioned on common spectral properties can help interpret the spectral signatures of empirical data. The first case study is based on the movement trajectory of a lion (Panthera leo) that shows several characteristic daily activity sequences, including an active–rest cycle that is correlated with moonlight brightness. A second example based on six pairs of African buffalo (Syncerus caffer) illustrates the use of wavelet coherency to show that their movements synchronize when they are within ∼1 km of each other, even when individual movement was best described as an uncorrelated random walk, providing an important spatial baseline of movement synchrony and suggesting that local behavioral cues play a strong role in driving movement patterns. We conclude with a discussion about the role these methods may have in guiding appropriately flexible probabilistic models connecting movement with biotic and abiotic covariates. PMID:20503882

  3. Multiset canonical correlations analysis and multispectral, truly multitemporal remote sensing data.

    PubMed

    Nielsen, Allan Aasbjerg

    2002-01-01

    This paper describes two- and multiset canonical correlations analysis (CCA) for data fusion, multisource, multiset, or multitemporal exploratory data analysis. These techniques transform multivariate multiset data into new orthogonal variables called canonical variates (CVs) which, when applied in remote sensing, exhibit ever-decreasing similarity (as expressed by correlation measures) over sets consisting of 1) spectral variables at fixed points in time (R-mode analysis), or 2) temporal variables with fixed wavelengths (T-mode analysis). The CVs are invariant to linear and affine transformations of the original variables within sets, which means, for example, that the R-mode CVs are insensitive to changes over time in the offset and gain of a measuring device. In a case study, CVs are calculated from Landsat Thematic Mapper (TM) data with six spectral bands over six consecutive years. Both R- and T-mode CVs clearly exhibit the desired characteristic: they show maximum similarity for the low-order canonical variates and minimum similarity for the high-order canonical variates. These characteristics are seen both visually and in objective measures. The results from the multiset CCA R- and T-mode analyses are very different. This difference is ascribed to the noise structure in the data. The CCA methods are related to partial least squares (PLS) methods, and this paper very briefly describes multiset CCA-based multiset PLS. The CCA methods can also be applied as multivariate extensions to empirical orthogonal functions (EOF) techniques. Multiset CCA is well-suited for inclusion in geographical information systems (GIS).

  4. IDF relationships using bivariate copula for storm events in Peninsular Malaysia

    NASA Astrophysics Data System (ADS)

    Ariff, N. M.; Jemain, A. A.; Ibrahim, K.; Wan Zin, W. Z.

    2012-11-01

    Intensity-duration-frequency (IDF) curves are used in many hydrologic designs for the purposes of water management and flood prevention. The IDF curves available in Malaysia are those obtained from the univariate analysis approach, which only considers the intensity of rainfall at fixed time intervals. As several rainfall variables are correlated with each other, such as intensity and duration, this paper aims to derive IDF points for storm events in Peninsular Malaysia by means of bivariate frequency analysis. This is achieved by utilizing the relationship between storm intensities and durations using the copula method. Four types of copulas, namely the Ali-Mikhail-Haq (AMH), Frank, Gaussian and Farlie-Gumbel-Morgenstern (FGM) copulas, are considered because the correlation between storm intensity, I, and duration, D, is negative and these copulas are appropriate when the relationship between the variables is negative. The correlations are attained by means of Kendall's τ estimation. The analysis was performed on twenty rainfall stations with hourly data across Peninsular Malaysia. Using Akaike's Information Criterion (AIC) for testing goodness-of-fit, both the Frank and Gaussian copulas are found to be suitable to represent the relationship between I and D. The IDF points found by the copula method are compared to the IDF curves yielded by the typical IDF empirical formula of the univariate approach. This study indicates that storm intensities obtained from both methods are in agreement with each other for any given storm duration and for various return periods.

  5. Content, Social, and Metacognitive Statements: An Empirical Study Comparing Human-Human and Human-Computer Tutorial Dialogue

    DTIC Science & Technology

    2010-01-01

    for each participant using the formula gain = (posttest − pretest)/(1 − pretest). 6.2 Content-Learning Correlations: The summary of language statistics... differences also affect which factors are correlated with learning gain and user satisfaction. We argue that ITS designers should pay particular attention to strategies for dealing…

  6. Estimating air chemical emissions from research activities using stack measurement data.

    PubMed

    Ballinger, Marcel Y; Duchsherer, Cheryl J; Woodruff, Rodger K; Larson, Timothy V

    2013-03-01

    Current methods of estimating air emissions from research and development (R&D) activities use a wide range of release fractions or emission factors with bases ranging from empirical to semi-empirical. Although considered conservative, the uncertainties and confidence levels of the existing methods have not been reported. Chemical emissions were estimated from sampling data taken from four research facilities over 10 years. The approach was to use a Monte Carlo technique to create distributions of annual emission estimates for target compounds detected in source test samples. Distributions were created for each year and building sampled for compounds with sufficient detection frequency to qualify for the analysis. The results using the Monte Carlo technique without applying a filter to remove negative emission values showed almost all distributions spanning zero, and 40% of the distributions having a negative mean. This indicates that emissions are so low as to be indistinguishable from building background. Application of a filter to allow only positive values in the distribution provided a more realistic value for emissions and increased the distribution mean by an average of 16%. Release fractions were calculated by dividing the emission estimates by a building chemical inventory quantity. Two variations were used for this quantity: chemical usage, and chemical usage plus one-half standing inventory. Filters were applied so that only release fraction values from zero to one were included in the resulting distributions. Release fractions had a wide range among chemicals and among data sets for different buildings and/or years for a given chemical. Regressions of release fractions to molecular weight and vapor pressure showed weak correlations. Similarly, regressions of mean emissions to chemical usage, chemical inventory, molecular weight, and vapor pressure also gave weak correlations. These results highlight the difficulties in estimating emissions from R&D facilities using chemical inventory data. Air emissions from research operations are difficult to estimate because of the changing nature of research processes and the small quantity and wide variety of chemicals used. Analysis of stack measurements taken over multiple facilities and a 10-year period using a Monte Carlo technique provided a method to quantify the low emissions and to estimate release fractions based on chemical inventories. The variation in release fractions did not correlate well with factors investigated, confirming the complexities in estimating R&D emissions.
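
    A schematic version of the Monte Carlo step with the positive-value filter described above; the variable names and the emission arithmetic are illustrative, whereas the study works from facility-specific stack sampling data and flows:

      import numpy as np

      rng = np.random.default_rng(7)

      def annual_emission_draws(conc_means, conc_sds, flow, hours, n=10_000):
          # sample plausible concentrations for each measured period
          draws = rng.normal(conc_means, conc_sds, size=(n, len(conc_means)))
          annual = draws.mean(axis=1) * flow * hours   # conc x flow x time
          return annual[annual > 0]    # keep positive values only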

  7. Empirical comparison of local structural similarity indices for collaborative-filtering-based recommender systems

    NASA Astrophysics Data System (ADS)

    Zhang, Qian-Ming; Shang, Ming-Sheng; Zeng, Wei; Chen, Yong; Lü, Linyuan

    2010-08-01

    Collaborative filtering is one of the most successful recommendation techniques, which can effectively predict the possible future likes of users based on their past preferences. The key problem of this method is how to define the similarity between users. A standard approach is using the correlation between the ratings that two users give to a set of objects, such as the Cosine index and the Pearson correlation coefficient. However, the cost of computing this kind of index is relatively high, making it impractical for very large systems. To solve this problem, in this paper we introduce six local-structure-based similarity indices and compare their performances with the above two benchmark indices. Experimental results on two data sets demonstrate that the structure-based similarity indices overall outperform the Pearson correlation coefficient. When the data is dense, the structure-based indices can perform competitively with the Cosine index, at lower computational complexity. Furthermore, when the data is sparse, the structure-based indices give even better results than the Cosine index.

  8. Spatial and spectral interpolation of ground-motion intensity measure observations

    USGS Publications Warehouse

    Worden, Charles; Thompson, Eric M.; Baker, Jack W.; Bradley, Brendon A.; Luco, Nicolas; Wilson, David

    2018-01-01

    Following a significant earthquake, ground‐motion observations are available for a limited set of locations and intensity measures (IMs). Typically, however, it is desirable to know the ground motions for additional IMs and at locations where observations are unavailable. Various interpolation methods are available, but because IMs or their logarithms are normally distributed, spatially correlated, and correlated with each other at a given location, it is possible to apply the conditional multivariate normal (MVN) distribution to the problem of estimating unobserved IMs. In this article, we review the MVN and its application to general estimation problems, and then apply the MVN to the specific problem of ground‐motion IM interpolation. In particular, we present (1) a formulation of the MVN for the simultaneous interpolation of IMs across space and IM type (most commonly, spectral response at different oscillator periods) and (2) the inclusion of uncertain observation data in the MVN formulation. These techniques, in combination with modern empirical ground‐motion models and correlation functions, provide a flexible framework for estimating a variety of IMs at arbitrary locations.
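
    The conditional MVN update at the heart of this framework has the standard closed form. A minimal sketch, with the mean vectors assumed to come from a ground-motion model and the covariance blocks from spatial/spectral correlation functions:

      import numpy as np

      def conditional_mvn(mu1, mu2, S11, S12, S22, x2):
          # block 1: unobserved IMs; block 2: observed (log) IMs x2
          K = S12 @ np.linalg.inv(S22)
          mu_cond = mu1 + K @ (x2 - mu2)     # conditional mean
          S_cond = S11 - K @ S12.T           # conditional covariance
          return mu_cond, S_cond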

  9. Revisiting crash spatial heterogeneity: A Bayesian spatially varying coefficients approach.

    PubMed

    Xu, Pengpeng; Huang, Helai; Dong, Ni; Wong, S C

    2017-01-01

    This study was performed to investigate the spatially varying relationships between crash frequency and related risk factors. A Bayesian spatially varying coefficients model was elaborately introduced as a methodological alternative to simultaneously account for the unstructured and spatially structured heterogeneity of the regression coefficients in predicting crash frequencies. The proposed method was appealing in that the parameters were modeled via a conditional autoregressive prior distribution, which involved a single set of random effects and a spatial correlation parameter with extreme values corresponding to pure unstructured or pure spatially correlated random effects. A case study using a three-year crash dataset from the Hillsborough County, Florida, was conducted to illustrate the proposed model. Empirical analysis confirmed the presence of both unstructured and spatially correlated variations in the effects of contributory factors on severe crash occurrences. The findings also suggested that ignoring spatially structured heterogeneity may result in biased parameter estimates and incorrect inferences, while assuming the regression coefficients to be spatially clustered only is probably subject to the issue of over-smoothness. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Testing homogeneity of proportion ratios for stratified correlated bilateral data in two-arm randomized clinical trials.

    PubMed

    Pei, Yanbo; Tian, Guo-Liang; Tang, Man-Lai

    2014-11-10

    Stratified data analysis is an important research topic in many biomedical studies and clinical trials. In this article, we develop five test statistics for testing the homogeneity of proportion ratios for stratified correlated bilateral binary data based on an equal correlation model assumption. Bootstrap procedures based on these test statistics are also considered. To evaluate the performance of these statistics and procedures, we conduct Monte Carlo simulations to study their empirical sizes and powers under various scenarios. Our results suggest that the procedure based on score statistic performs well generally and is highly recommended. When the sample size is large, procedures based on the commonly used weighted least square estimate and logarithmic transformation with Mantel-Haenszel estimate are recommended as they do not involve any computation of maximum likelihood estimates requiring iterative algorithms. We also derive approximate sample size formulas based on the recommended test procedures. Finally, we apply the proposed methods to analyze a multi-center randomized clinical trial for scleroderma patients. Copyright © 2014 John Wiley & Sons, Ltd.

  11. Agro-ecology, household economics and malaria in Uganda: empirical correlations between agricultural and health outcomes

    PubMed Central

    2014-01-01

    Background: This paper establishes empirical evidence relating the agriculture and health sectors in Uganda. The analysis explores linkages between agricultural management, malaria and implications for improving community health outcomes in rural Uganda. The goal of this exploratory work is to expand the evidence base for collaboration between the agricultural and health sectors in Uganda. Methods: The paper presents an analysis of data from the 2006 Uganda National Household Survey using a parametric multivariate Two-Limit Tobit model to identify correlations between agro-ecological variables, including geographically joined daily seasonal precipitation records, and household-level malaria risk. The analysis of agricultural and environmental factors as they affect household malaria rates, disaggregated by age group, is inspired by a complementary review of the existing agricultural malaria literature indicating a gap in evidence with respect to agricultural management as a form of malaria vector management. Crop choices and agricultural management practices may contribute to vector control through the simultaneous effects of reducing malaria transmission, improving housing and nutrition through income gains, and reducing insecticide resistance in both malaria vectors and agricultural pests. Results: The econometric results show the existence of statistically significant correlations between crops, such as sweet potatoes/yams, beans, millet and sorghum, and household malaria risk. Local environmental factors are also influential: daily maximum temperature is negatively correlated with malaria, while daily minimum temperature is positively correlated with malaria, confirming that trends in the broader literature are applicable to the Ugandan context. Conclusions: Although not necessarily causative, the findings provide sufficient evidence to warrant purposefully designed work to test for agriculture-health causation in vector management. A key constraint to modeling the agricultural basis of malaria transmission is the lack of data integrating both the health and agricultural information necessary to satisfy the differing methodologies used by the two sectors. A national platform for collaboration between the agricultural and health sectors could help align programs to achieve better measurements of agricultural interactions with vector reproduction and evaluate the potential for agricultural policy and programs to support rural malaria control. PMID:24990158

  12. Methodology for the study of the boiling crisis in a nuclear fuel bundle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crecy, F. de; Juhel, D.

    1995-09-01

    The boiling crisis is one of the phenomena limiting the available power from a nuclear power plant. It has been widely studied for decades, and numerous data, models, correlations and tables are now available in the literature. Taking a general view of previous work in this field, there are several ways of tackling the subject. Mechanistic models try to represent the two-phase flow topology and the interaction between different sublayers; they must be validated by comparison with basic experiments, such as DEBORA, which aim to obtain detailed information on the two-phase flow pattern in a pure and simple geometry. This allows better knowledge of the so-called "intrinsic effect". These models are not yet acceptable for nuclear use. As the geometry of the rod bundles and grids has a tremendous effect on the Critical Heat Flux (CHF), it is mandatory to have more precise results for a given fuel rod bundle in a restricted range of parameters: this leads to the empirical approach, using empirical CHF predictors (tables, correlations, splines, etc.). One of the key points of such a method is obtaining local thermohydraulic values, that is to say, the evaluation of the so-called "mixing effect". This is done by a subchannel analysis code or equivalent, which can be qualified on two kinds of experiments: overall flow measurements in a subchannel, such as HYDROMEL in single-phase flow or GRAZIELLA in two-phase flow, or detailed measurements inside a subchannel, such as AGATE. Nevertheless, the final qualification of a specific nuclear fuel, i.e. the synthesis of these mechanistic and empirical approaches, intrinsic and mixing effects, etc., must be achieved on a global test such as OMEGA. This is the strategy used in France by CEA and its partners FRAMATOME and EdF.

  13. Chronic Fatigue Syndrome and Myalgic Encephalomyelitis: Toward An Empirical Case Definition

    PubMed Central

    Jason, Leonard A.; Kot, Bobby; Sunnquist, Madison; Brown, Abigail; Evans, Meredyth; Jantke, Rachel; Williams, Yolonda; Furst, Jacob; Vernon, Suzanne D.

    2015-01-01

    Current case definitions of Myalgic Encephalomyelitis (ME) and chronic fatigue syndrome (CFS) have been based on consensus methods, but empirical methods could be used to identify core symptoms and thereby improve reliability. In the present study, several methods (i.e., continuous symptom scores, and theoretically and empirically derived symptom cutoff scores) were used to identify the core symptoms best differentiating patients from controls. In addition, data mining with decision trees was conducted. Our study found a small number of core symptoms that have good sensitivity and specificity, including fatigue, post-exertional malaise, a neurocognitive symptom, and unrefreshing sleep. Outcomes from these analyses suggest that using empirically selected symptoms can help guide the creation of a more reliable case definition. PMID:26029488

  14. Developing Empirical Lightning Cessation Forecast Guidance for the Cape Canaveral Air Force Station and Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Stano, Geoffrey T.; Fuelberg, Henry E.; Roeder, William P.

    2010-01-01

    This research addresses the 45th Weather Squadron's (45WS) need for improved guidance regarding lightning cessation at Cape Canaveral Air Force Station and Kennedy Space Center (KSC). KSC's Lightning Detection and Ranging (LDAR) network was the primary observational tool to investigate both cloud-to-ground and intracloud lightning. Five statistical and empirical schemes were created from LDAR, sounding, and radar parameters derived from 116 storms. Four of the five schemes were unsuitable for operational use, since lightning advisories would be canceled prematurely, leading to safety risks to personnel. These included a correlation and regression tree analysis, three variants of multiple linear regression, event time trending, and the time delay from the greatest height of the maximum dBZ value to the last flash. These schemes failed to adequately forecast the maximum interval, the greatest time between any two flashes in a storm. The majority of storms had a maximum interval of less than 10 min, which biased the schemes toward small values. Success was achieved with the percentile method (PM), by separating the maximum interval into percentiles for the 100 dependent storms.

  15. An empirically derived short form of the Hypoglycaemia Fear Survey II.

    PubMed

    Grabman, J; Vajda Bailey, K; Schmidt, K; Cariou, B; Vaur, L; Madani, S; Cox, D; Gonder-Frederick, L

    2017-04-01

    To develop an empirically derived short version of the Hypoglycaemia Fear Survey II that still accurately measures fear of hypoglycaemia. Item response theory methods were used to generate an 11-item version of the Hypoglycaemia Fear Survey from a sample of 487 people with Type 1 or Type 2 diabetes mellitus. Subsequently, this scale was tested on a sample of 2718 people with Type 1 or insulin-treated Type 2 diabetes taking part in DIALOG, a large observational prospective study of hypoglycaemia in France. The short form of the Hypoglycaemia Fear Survey II matched the factor structure of the long form for respondents with both Type 1 and Type 2 diabetes, while maintaining adequate internal reliability on the total scale and all three subscales. The two forms were highly correlated on both the total scale and each subscale (Pearson's R > 0.89). The short form of the Hypoglycaemia Fear Survey II is an important first step in more efficiently measuring fear of hypoglycaemia. Future prospective studies are needed for further validity testing and exploring the survey's applicability to different populations. © 2016 Diabetes UK.

  16. An empirical analysis of strategy implementation process and performance of construction companies

    NASA Astrophysics Data System (ADS)

    Zaidi, F. I.; Zawawi, E. M. A.; Nordin, R. M.; Ahnuar, E. M.

    2018-02-01

    Strategy implementation is known as the action stage and is considered the most difficult stage in strategic planning. Strategy implementation can influence the whole texture of a company, including its performance. The aim of this research is to establish the empirical relationship between the strategy implementation process and the performance of construction companies. This research used a quantitative method via a questionnaire survey. Respondents were G7 construction companies in Klang Valley, Selangor. Pearson correlation analysis indicates a strong positive relationship between the strategy implementation process and construction companies' performance. The most important part of the strategy implementation process is providing sufficient training for employees, which directly influences the companies' profit growth and employees' growth. These results will benefit top management in construction companies when conducting strategy implementation. This research may not reflect the whole construction industry in Malaysia; future research may extend to small- and medium-grade contractors and perhaps to other areas of Malaysia.

  17. COUSCOus: improved protein contact prediction using an empirical Bayes covariance estimator.

    PubMed

    Rawi, Reda; Mall, Raghvendra; Kunji, Khalid; El Anbari, Mohammed; Aupetit, Michael; Ullah, Ehsan; Bensmail, Halima

    2016-12-15

    The post-genomic era, with its wealth of sequences, gave rise to a broad range of protein residue-residue contact detecting methods. Although various coevolution methods such as PSICOV, DCA and plmDCA provide correct contact predictions, they do not completely overlap. Hence, new approaches and improvements of existing methods are needed to motivate further development and progress in the field. We present a new contact detecting method, COUSCOus, by combining the best shrinkage approach, the empirical Bayes covariance estimator, with GLasso. Using the original PSICOV benchmark dataset, COUSCOus achieves mean accuracies of 0.74, 0.62 and 0.55 for the top L/10 predicted long, medium and short range contacts, respectively. In addition, COUSCOus attains mean areas under the precision-recall curves of 0.25, 0.29 and 0.30 for long, medium and short contacts, and outperforms PSICOV. We also observed that COUSCOus outperforms PSICOV with respect to the Matthews correlation coefficient criterion on the full list of residue contacts. Furthermore, COUSCOus achieves on average a 10% gain in prediction accuracy compared to PSICOV on an independent test set composed of CASP11 protein targets. Finally, we showed that when using a simple random forest meta-classifier, combining contact detecting techniques and sequence-derived features, PSICOV predictions should be replaced by the more accurate COUSCOus predictions. We conclude that the consideration of superior covariance shrinkage approaches will boost several research fields that apply the GLasso procedure, among them the residue-residue contact prediction presented here as well as fields such as gene network reconstruction.
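
    The two core steps, a shrunk covariance estimate fed into the graphical lasso, can be sketched with scikit-learn; here Ledoit-Wolf shrinkage stands in for the paper's empirical Bayes covariance estimator, and the alignment is assumed to be numerically encoded:

      import numpy as np
      from sklearn.covariance import graphical_lasso, ledoit_wolf

      def coupling_matrix(features, alpha=0.01):
          # features: (n_sequences, n_positions) numeric alignment encoding
          cov, _ = ledoit_wolf(features)        # shrinkage covariance
          _, precision = graphical_lasso(cov, alpha)
          return precision   # large off-diagonal entries suggest contacts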

  18. Spectroscopic determination of leaf biochemistry using band-depth analysis of absorption features and stepwise multiple linear regression

    USGS Publications Warehouse

    Kokaly, R.F.; Clark, R.N.

    1999-01-01

    We develop a new method for estimating the biochemistry of plant material using spectroscopy. Normalized band depths calculated from the continuum-removed reflectance spectra of dried and ground leaves were used to estimate their concentrations of nitrogen, lignin, and cellulose. Stepwise multiple linear regression was used to select wavelengths in the broad absorption features centered at 1.73 µm, 2.10 µm, and 2.30 µm that were highly correlated with the chemistry of samples from eastern U.S. forests. Band depths of absorption features at these wavelengths were found to also be highly correlated with the chemistry of four other sites. A subset of data from the eastern U.S. forest sites was used to derive linear equations that were applied to the remaining data to successfully estimate their nitrogen, lignin, and cellulose concentrations. Correlations were highest for nitrogen (R2 from 0.75 to 0.94). The consistent results indicate the possibility of establishing a single equation capable of estimating the chemical concentrations in a wide variety of species from the reflectance spectra of dried leaves. The extension of this method to remote sensing was investigated. The effects of leaf water content, sensor signal-to-noise and bandpass, atmospheric effects, and background soil exposure were examined. Leaf water was found to be the greatest challenge to extending this empirical method to the analysis of fresh whole leaves and complete vegetation canopies. The influence of leaf water on reflectance spectra must be removed to within 10%. Other effects were reduced by continuum removal and normalization of band depths. If the effects of leaf water can be compensated for, it might be possible to extend this method to remote sensing data acquired by imaging spectrometers to give estimates of nitrogen, lignin, and cellulose concentrations over large areas for use in ecosystem studies.
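
    The preprocessing chain described above (linear continuum removal, band-depth normalization) is straightforward to sketch. Below is a minimal Python illustration; the toy spectrum, wavelength window, and function name are invented for demonstration and are not the authors' data or code.

        import numpy as np

        def normalized_band_depths(wavelengths, reflectance, lo, hi):
            """Remove a linear continuum over [lo, hi]; return normalized band depths."""
            mask = (wavelengths >= lo) & (wavelengths <= hi)
            w, r = wavelengths[mask], reflectance[mask]
            continuum = np.interp(w, [w[0], w[-1]], [r[0], r[-1]])  # line between shoulders
            depth = 1.0 - r / continuum        # band depth of the continuum-removed spectrum
            return depth / depth.max()         # normalize to the band-center depth

        # Toy spectrum with an absorption feature near 2.10 um.
        wl = np.linspace(2.0, 2.2, 101)
        refl = 0.5 - 0.1 * np.exp(-((wl - 2.10) / 0.02) ** 2)
        print(normalized_band_depths(wl, refl, 2.0, 2.2).max())  # 1.0 at the band center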

  19. Childhood Traumatic Grief: A Multi-Site Empirical Examination of the Construct and Its Correlates

    ERIC Educational Resources Information Center

    Brown, Elissa J.; Amaya-Jackson, Lisa; Cohen, Judith; Handel, Stephanie; De Bocanegra, Heike Thiel; Zatta, Eileen; Goodman, Robin F.; Mannarino, Anthony

    2008-01-01

    This study evaluated the construct of childhood traumatic grief (CTG) and its correlates through a multi-site assessment of 132 bereaved children and adolescents. Youth completed a new measure of the characteristics, attributions, and reactions to exposure to death (CARED), as well as measures of CTG, posttraumatic stress disorder (PTSD),…

  20. Prevalence and Socio-Demographic Correlates of Psychological Distress among Students at an Australian University

    ERIC Educational Resources Information Center

    Larcombe, Wendy; Finch, Sue; Sore, Rachel; Murray, Christina M.; Kentish, Sandra; Mulder, Raoul A.; Lee-Stecum, Parshia; Baik, Chi; Tokatlidis, Orania; Williams, David A.

    2016-01-01

    This research contributes to the empirical literature on university student mental well-being by investigating the prevalence and socio-demographic correlates of severe levels of psychological distress. More than 5000 students at a metropolitan Australian university participated in an anonymous online survey in 2013 that included the short form of…

  1. Correlates of Conduct Problems and Depression Comorbidity in Elementary School Boys and Girls Receiving Special Educational Services

    ERIC Educational Resources Information Center

    Poirier, Martine; Déry, Michèle; Toupin, Jean; Verlaan, Pierrette; Lemelin, Jean-Pascal; Jagiellowicz, Jadzia

    2015-01-01

    There is limited empirical research on the correlates of conduct problems (CP) and depression comorbidity during childhood. This study investigated 479 elementary school children (48.2% girls). It compared children with comorbidity to children with CP only, depression only, and control children on individual, academic, social, and family…

  2. Visual Skills and Chinese Reading Acquisition: A Meta-Analysis of Correlation Evidence

    ERIC Educational Resources Information Center

    Yang, Ling-Yan; Guo, Jian-Peng; Richman, Lynn C.; Schmidt, Frank L.; Gerken, Kathryn C.; Ding, Yi

    2013-01-01

    This paper used meta-analysis to synthesize the relation between visual skills and Chinese reading acquisition based on the empirical results from 34 studies published from 1991 to 2011. We obtained 234 correlation coefficients from 64 independent samples, with a total of 5,395 participants. The meta-analysis revealed that visual skills as a…

  3. Exponential Correlation of IQ and the Wealth of Nations

    ERIC Educational Resources Information Center

    Dickerson, Richard E.

    2006-01-01

    Plots of mean IQ and per capita real Gross Domestic Product for groups of 81 and 185 nations, as collected by Lynn and Vanhanen, are best fitted by an exponential function of the form: GDP = a * 10^(b*IQ), where "a" and "b" are empirical constants. Exponential fitting yields markedly higher correlation coefficients than either linear or…
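
    Fitting the quoted exponential form reduces to linear regression on log10(GDP). A minimal sketch, with synthetic data standing in for the Lynn and Vanhanen tables:

        import numpy as np

        rng = np.random.default_rng(0)
        iq = rng.uniform(70, 105, 81)
        gdp = 2.0 * 10 ** (0.03 * iq) * rng.lognormal(0, 0.2, 81)  # noisy exponential

        b, log_a = np.polyfit(iq, np.log10(gdp), 1)   # slope and intercept in log space
        a = 10 ** log_a
        r = np.corrcoef(iq, np.log10(gdp))[0, 1]      # correlation of the log-linear fit
        print(f"a={a:.3f}, b={b:.4f}, r={r:.3f}")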

  4. Correlates of parent-youth discordance about youth-witnessed violence: a brief report.

    PubMed

    Lewis, Terri; Thompson, Richard; Kotch, Jonathan B; Proctor, Laura J; Litrownik, Alan J; English, Diana J; Runyan, Desmond K; Wiley, Tisha R; Dubowitz, Howard

    2013-01-01

    Studies have consistently demonstrated a lack of agreement between youth and parent reports regarding youth-witnessed violence (YWV). However, little empirical investigation has been conducted on the correlates of disagreement. Concordance between youth and parents about YWV was examined in 766 parent-youth dyads from the Longitudinal Studies of Child Abuse and Neglect (LONGSCAN). Results showed that significantly more youth (42%) than parents (15%) reported YWV. Among the dyads in which at least one informant reported YWV (N = 344), we assessed whether youth delinquency, parental monitoring, parent-child relationship quality, history of child maltreatment, income, and parental depression were predictive of parent-youth concordance. Findings indicated that youth engagement in delinquent activities was higher in the groups in which the youth reported violence exposure. More empirical study is needed to assess correlates of agreement in high-risk youth to better inform associations found between exposures and outcomes as well as practice and policy for violence-exposed youth.

  5. Improving Evapotranspiration Estimates Using Multi-Platform Remote Sensing

    NASA Astrophysics Data System (ADS)

    Knipper, Kyle; Hogue, Terri; Franz, Kristie; Scott, Russell

    2016-04-01

    Understanding the linkages between energy and water cycles through evapotranspiration (ET) is uniquely challenging given its dependence on a range of climatological parameters and surface/atmospheric heterogeneity. A number of methods have been developed to estimate ET either from primarily remote-sensing observations, in-situ measurements, or a combination of the two. However, the scale of many of these methods may be too large to provide needed information about the spatial and temporal variability of ET that can occur over regions with acute or chronic land cover change and precipitation driven fluxes. The current study aims to improve the spatial and temporal variability of ET utilizing only satellite-based observations by incorporating a potential evapotranspiration (PET) methodology with satellite-based down-scaled soil moisture estimates in southern Arizona, USA. Initially, soil moisture estimates from AMSR2 and SMOS are downscaled to 1km through a triangular relationship between MODIS land surface temperature (MYD11A1), vegetation indices (MOD13Q1/MYD13Q1), and brightness temperature. Downscaled soil moisture values are then used to scale PET to actual ET (AET) at a daily, 1km resolution. Derived AET estimates are compared to observed flux tower estimates, the North American Land Data Assimilation System (NLDAS) model output (i.e. Variable Infiltration Capacity (VIC) Macroscale Hydrologic Model, Mosaic Model, and Noah Model simulations), the Operational Simplified Surface Energy Balance Model (SSEBop), and a calibrated empirical ET model created specifically for the region. Preliminary results indicate a strong increase in correlation when the downscaling technique is applied to the original AMSR2 and SMOS soil moisture values, with the added benefit of being able to decipher small scale heterogeneity in soil moisture (riparian versus desert grassland). AET results show strong correlations with relatively low error and bias when compared to flux tower estimates. In addition, AET results show improved bias relative to that reported by SSEBop, with similar correlations and errors when compared to the empirical ET model. Spatial patterns of estimated AET are representative of the basin's elevation and vegetation characteristics, with improved spatial resolution and temporal heterogeneity when compared to previous models.
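
    The abstract does not spell out the PET-to-AET scaling function, so the sketch below assumes a simple linear soil-moisture stress factor between wilting point and field capacity; all parameter values are illustrative placeholders, not the study's calibration.

        import numpy as np

        def aet_from_pet(pet, sm, sm_wilt=0.05, sm_fc=0.30):
            """AET = PET * f(SM), with a linear stress factor f clipped to [0, 1]."""
            stress = (sm - sm_wilt) / (sm_fc - sm_wilt)
            return pet * np.clip(stress, 0.0, 1.0)

        pet = np.array([5.0, 5.0, 5.0])    # mm/day from any PET formulation
        sm = np.array([0.02, 0.15, 0.40])  # downscaled volumetric soil moisture
        print(aet_from_pet(pet, sm))       # [0.  2.  5.]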

  6. A formal and data-based comparison of measures of motor-equivalent covariation.

    PubMed

    Verrel, Julius

    2011-09-15

    Different analysis methods have been developed for assessing motor-equivalent organization of movement variability. In the uncontrolled manifold (UCM) method, the structure of variability is analyzed by comparing goal-equivalent and non-goal-equivalent variability components at the level of elemental variables (e.g., joint angles). In contrast, in the covariation by randomization (CR) approach, motor-equivalent organization is assessed by comparing variability at the task level between empirical and decorrelated surrogate data. UCM effects can be due to both covariation among elemental variables and selective channeling of variability to elemental variables with low task sensitivity ("individual variation"), suggesting a link between the UCM and CR method. However, the precise relationship between the notion of covariation in the two approaches has not been analyzed in detail yet. Analysis of empirical and simulated data from a study on manual pointing shows that in general the two approaches are not equivalent, but the respective covariation measures are highly correlated (ρ > 0.7) for two proposed definitions of covariation in the UCM context. For one-dimensional task spaces, a formal comparison is possible and in fact the two notions of covariation are equivalent. In situations in which individual variation does not contribute to UCM effects, for which necessary and sufficient conditions are derived, this entails the equivalence of the UCM and CR analysis. Implications for the interpretation of UCM effects are discussed. Copyright © 2011 Elsevier B.V. All rights reserved.
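
    The UCM variance partition that the comparison rests on can be sketched compactly: project trial-to-trial deviations of the elemental variables onto the null space of a linearized task Jacobian and onto its orthogonal complement, then compare per-dimension variances. The Jacobian and data below are toy values, not the study's pointing data.

        import numpy as np

        # Toy 1-D task with three elemental variables (e.g. joint angles).
        J = np.array([[1.0, 1.0, 1.0]])
        Vt = np.linalg.svd(J)[2]
        task_basis = Vt[:1].T        # direction that changes the task value
        null_basis = Vt[1:].T        # directions that leave the task unchanged (the UCM)

        rng = np.random.default_rng(1)
        theta = rng.normal(size=(200, 3))                    # trial-to-trial deviations
        theta -= 0.8 * (theta @ task_basis) @ task_basis.T   # channel variability into the UCM

        # Per-dimension variance parallel vs. orthogonal to the UCM.
        v_ucm = (theta @ null_basis).var(axis=0).sum() / null_basis.shape[1]
        v_ort = (theta @ task_basis).var(axis=0).sum() / task_basis.shape[1]
        print(v_ucm, v_ort)   # v_ucm > v_ort indicates goal-equivalent structure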

  7. Why do generic drugs fail to achieve an adequate market share in Greece? Empirical findings and policy suggestions.

    PubMed

    Balasopoulos, T; Charonis, A; Athanasakis, K; Kyriopoulos, J; Pavi, E

    2017-03-01

    Since 2010, the memoranda of understanding were implemented in Greece as a measure of fiscal adjustment. Public pharmaceutical expenditure was one of the main focuses of this implementation. Numerous policies, targeted on pharma spending, reduced the pharmaceutical budget by 60.5%. Yet, generics' penetration in Greece remained among the lowest among OECD countries. This study aims to highlight the factors that affect the perceptions of the population on generic drugs and to suggest effective policy measures. The empirical analysis is based on a national cross-sectional survey that was conducted through a sample of 2003 individuals, representative of the general population. Two ordinal logistic regression models were constructed in order to identify the determinants that affect the respondents' beliefs on the safety and the effectiveness of generic drugs. The empirical findings presented a positive and statistically significant correlation with income, bill payment difficulties, safety and effectiveness of drugs, prescription and dispensing preferences, and the views toward pharmaceutical companies. Also, age and trust toward the medical community have a positive and statistically significant correlation with the perception of the safety of generic drugs. Policy interventions are suggested on the basis of the empirical results in three major categories: (a) information campaigns, (b) incentives for doctors and pharmacists, and (c) strengthening the bioequivalence control framework and the dissemination of results. Copyright © 2017 Elsevier B.V. All rights reserved.
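
    A hedged sketch of one such ordinal logistic regression, assuming statsmodels' OrderedModel (available in recent statsmodels releases); the predictors and data are synthetic stand-ins for the survey items:

        import numpy as np
        import pandas as pd
        from statsmodels.miscmodels.ordinal_model import OrderedModel

        rng = np.random.default_rng(3)
        n = 500
        X = pd.DataFrame({
            "income": rng.normal(0, 1, n),
            "age": rng.normal(0, 1, n),
            "trust_medical": rng.normal(0, 1, n),
        })
        # Latent perception score, discretized into an ordered response.
        latent = (0.5 * X["income"] + 0.3 * X["age"]
                  + 0.4 * X["trust_medical"] + rng.logistic(size=n))
        y = pd.cut(latent, bins=[-np.inf, -1, 1, np.inf],
                   labels=["low", "mid", "high"], ordered=True)

        res = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)
        print(res.summary())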

  8. Preequating with Empirical Item Characteristic Curves: An Observed-Score Preequating Method

    ERIC Educational Resources Information Center

    Zu, Jiyun; Puhan, Gautam

    2014-01-01

    Preequating is in demand because it reduces score reporting time. In this article, we evaluated an observed-score preequating method: the empirical item characteristic curve (EICC) method, which makes preequating without item response theory (IRT) possible. EICC preequating results were compared with a criterion equating and with IRT true-score…

  9. A Comparison of Two Scoring Methods for an Automated Speech Scoring System

    ERIC Educational Resources Information Center

    Xi, Xiaoming; Higgins, Derrick; Zechner, Klaus; Williamson, David

    2012-01-01

    This paper compares two alternative scoring methods--multiple regression and classification trees--for an automated speech scoring system used in a practice environment. The two methods were evaluated on two criteria: construct representation and empirical performance in predicting human scores. The empirical performance of the two scoring models…

  10. A numerical method for computing unsteady 2-D boundary layer flows

    NASA Technical Reports Server (NTRS)

    Krainer, Andreas

    1988-01-01

    A numerical method for computing unsteady two-dimensional boundary layers in incompressible laminar and turbulent flows is described and applied to a single airfoil changing its incidence angle in time. The solution procedure adopts a first order panel method with a simple wake model to solve for the inviscid part of the flow, and an implicit finite difference method for the viscous part of the flow. Both procedures integrate in time in a step-by-step fashion, in the course of which each step involves the solution of the elliptic Laplace equation and the solution of the parabolic boundary layer equations. The Reynolds shear stress term of the boundary layer equations is modeled by an algebraic eddy viscosity closure. The location of transition is predicted by an empirical data correlation originating from Michel. Since transition and turbulence modeling are key factors in the prediction of viscous flows, their accuracy has a dominant influence on the overall results.
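
    Michel's transition correlation, in the form commonly quoted from Cebeci and Smith's fit, flags transition where the momentum-thickness Reynolds number first exceeds 1.174 * (1 + 22400/Re_x) * Re_x^0.46. A minimal sketch; the Blasius flat-plate estimate of Re_theta below is an illustrative stand-in for a computed boundary layer:

        import numpy as np

        def michel_threshold(re_x):
            """Transition threshold on Re_theta (Cebeci-Smith fit of Michel's data)."""
            return 1.174 * (1.0 + 22400.0 / re_x) * re_x ** 0.46

        re_x = np.linspace(1e5, 3e6, 50)
        re_theta = 0.664 * np.sqrt(re_x)       # laminar flat-plate (Blasius) estimate
        transition = re_theta >= michel_threshold(re_x)

        if transition.any():
            idx = np.argmax(transition)        # first station where the criterion is met
            print(f"transition predicted near Re_x = {re_x[idx]:.2e}")
        else:
            print("no transition in range")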

  11. Record statistics of financial time series and geometric random walks

    NASA Astrophysics Data System (ADS)

    Sabir, Behlool; Santhanam, M. S.

    2014-09-01

    The study of record statistics of correlated series in physics, such as random walks, is gaining momentum, and several analytical results have been obtained in the past few years. In this work, we study the record statistics of correlated empirical data for which random walk models have relevance. We obtain results for the record statistics of select stock market data and the geometric random walk, primarily through simulations. We show that the distribution of the age of records is a power law with the exponent α lying in the range 1.5≤α≤1.8. Further, the longest record ages follow the Fréchet distribution of extreme value theory. The record statistics of geometric random walk series are in good agreement with those obtained from empirical stock data.
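
    The simulation side of such a study is easy to reproduce in outline: generate a geometric random walk, record the times at which the running maximum is broken, and collect the ages between successive records. Drift, volatility, and length below are illustrative, not fitted to any market:

        import numpy as np

        rng = np.random.default_rng(2)
        log_returns = rng.normal(0.0005, 0.01, 5000)
        price = 100.0 * np.exp(np.cumsum(log_returns))   # geometric random walk

        record_times = [0]
        for t in range(1, len(price)):
            if price[t] > price[record_times[-1]]:       # new running maximum
                record_times.append(t)

        ages = np.diff(record_times)                     # waiting times between records
        print(len(record_times), "records; longest age", ages.max())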

  12. From empirical Bayes to full Bayes : methods for analyzing traffic safety data.

    DOT National Transportation Integrated Search

    2004-10-24

    Traffic safety engineers are among the early adopters of Bayesian statistical tools for analyzing crash data. As in many other areas of application, empirical Bayes methods were their first choice, perhaps because they represent an intuitively ap...
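
    For context, the classic empirical Bayes safety estimate (Hauer's method) combines a safety performance function's prediction mu with the observed count x, weighting by the negative binomial overdispersion parameter. A worked sketch with illustrative numbers:

        mu = 2.4   # predicted crashes/year at similar sites (from an SPF)
        x = 5      # observed crashes at this site in one year
        k = 0.8    # overdispersion parameter of the negative binomial model

        w = 1.0 / (1.0 + mu / k)        # weight on the SPF prediction
        eb = w * mu + (1.0 - w) * x     # shrink the observation toward the SPF
        print(f"w={w:.3f}, EB estimate={eb:.2f} crashes/year")  # w=0.250, 4.35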

  13. Comparison of two interpolation methods for empirical mode decomposition based evaluation of radiographic femur bone images.

    PubMed

    Udhayakumar, Ganesan; Sujatha, Chinnaswamy Manoharan; Ramakrishnan, Swaminathan

    2013-01-01

    Analysis of bone strength in radiographic images is an important component of estimating bone quality in diseases such as osteoporosis. Here, conventional radiographic femur bone images are used to analyze trabecular architecture with the bi-dimensional empirical mode decomposition method. Surface interpolation of the local maxima and minima points of an image is a crucial part of bi-dimensional empirical mode decomposition, and the choice of appropriate interpolation depends on the specific structure of the problem. In this work, two interpolation methods for bi-dimensional empirical mode decomposition are analyzed to characterize the trabecular architecture of femur bone radiographic images. The trabecular bone regions of normal and osteoporotic femur bone images (N = 40), recorded under standard conditions, are used for this study. The compressive and tensile strength regions of the images are delineated using pre-processing procedures. The delineated images are decomposed into their corresponding intrinsic mode functions using interpolation methods, namely radial basis function multiquadric and hierarchical b-spline techniques. Results show that bi-dimensional empirical mode decomposition analyses using both interpolations are able to represent architectural variations in femur bone radiographic images. As the strength of the bone depends on architectural variation in addition to bone mass, this study appears to be clinically useful.
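
    The surface-interpolation step inside bi-dimensional EMD can be sketched with SciPy's legacy Rbf interface: fit a multiquadric radial basis surface through an image's local maxima to form the upper envelope. The image below is synthetic, and the peak-picking rule is a simplification of what a full BEMD implementation would use:

        import numpy as np
        from scipy.interpolate import Rbf
        from scipy.ndimage import maximum_filter

        rng = np.random.default_rng(4)
        img = rng.normal(size=(64, 64))

        # Local maxima of a 5x5 neighborhood serve as envelope knots.
        peaks = (img == maximum_filter(img, size=5))
        ry, rx = np.nonzero(peaks)

        rbf = Rbf(rx, ry, img[ry, rx], function="multiquadric")
        gy, gx = np.mgrid[0:64, 0:64]
        upper_envelope = rbf(gx.ravel(), gy.ravel()).reshape(64, 64)
        print(upper_envelope.shape)   # (64, 64) smooth envelope surface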

  14. Estimated correlation matrices and portfolio optimization

    NASA Astrophysics Data System (ADS)

    Pafka, Szilárd; Kondor, Imre

    2004-11-01

    Correlations of returns on various assets play a central role in financial theory and also in many practical applications. From a theoretical point of view, the main interest lies in the proper description of the structure and dynamics of correlations, whereas for the practitioner the emphasis is on the ability of the models to provide adequate inputs for the numerous portfolio and risk management procedures used in the financial industry. The theory of portfolios, initiated by Markowitz, has suffered from the “curse of dimensions” from the very outset. Over the past decades a large number of different techniques have been developed to tackle this problem and reduce the effective dimension of large bank portfolios, but the efficiency and reliability of these procedures are extremely hard to assess or compare. In this paper, we propose a model (simulation)-based approach which can be used for the systematic testing of all these dimensional reduction techniques. To illustrate the usefulness of our framework, we develop several toy models that display some of the main characteristic features of empirical correlations and generate artificial time series from them. Then, we regard these time series as empirical data and reconstruct the corresponding correlation matrices, which will inevitably contain a certain amount of noise, due to the finiteness of the time series. Next, we apply several correlation matrix estimators and dimension reduction techniques introduced in the literature and/or applied in practice. Because in our artificial world the only source of error is the finite length of the time series, and the “true” model (hence also the “true” correlation matrix) is precisely known, we can, in sharp contrast with empirical studies, precisely compare the performance of the various noise reduction techniques. One of our recurrent observations is that the recently introduced filtering technique based on random matrix theory performs consistently well in all the investigated cases. Based on this experience, we believe that our simulation-based approach can also be useful for the systematic investigation of several related problems of current interest in finance.
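
    The random-matrix-theory filter singled out at the end can be sketched in a few lines: eigenvalues of the sample correlation matrix below the Marchenko-Pastur upper edge (1 + sqrt(N/T))^2 are treated as noise and flattened to their mean. Synthetic returns stand in for market data:

        import numpy as np

        rng = np.random.default_rng(5)
        N, T = 50, 500                              # assets, observations
        returns = rng.normal(size=(T, N))

        C = np.corrcoef(returns, rowvar=False)
        eigval, eigvec = np.linalg.eigh(C)

        q = N / T
        lambda_max = (1 + np.sqrt(q)) ** 2          # Marchenko-Pastur upper edge
        noise = eigval < lambda_max
        eigval_f = eigval.copy()
        eigval_f[noise] = eigval[noise].mean()      # flatten the noise band

        C_filtered = eigvec @ np.diag(eigval_f) @ eigvec.T
        np.fill_diagonal(C_filtered, 1.0)
        print(np.sum(~noise), "eigenvalues kept above the MP edge")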

  15. xEMD procedures as a data - Assisted filtering method

    NASA Astrophysics Data System (ADS)

    Machrowska, Anna; Jonak, Józef

    2018-01-01

    The article presents the possibility of using the Empirical Mode Decomposition (EMD), Ensemble Empirical Mode Decomposition (EEMD), Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and Improved Complete Ensemble Empirical Mode Decomposition (ICEEMD) algorithms for mechanical system condition monitoring applications. Results of the xEMD procedures applied to vibration signals from a system in different states of wear are presented.
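
    A hedged usage sketch, assuming the PyEMD package (published on PyPI as EMD-signal), which implements the EMD, EEMD, and CEEMDAN variants named above; the test signal is synthetic:

        import numpy as np
        from PyEMD import EMD, EEMD, CEEMDAN

        t = np.linspace(0, 1, 1000)
        signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

        imfs_emd = EMD()(signal)          # plain EMD
        imfs_eemd = EEMD()(signal)        # ensemble EMD (noise-assisted)
        imfs_ceemdan = CEEMDAN()(signal)  # complete ensemble EMD with adaptive noise
        print(imfs_emd.shape, imfs_eemd.shape, imfs_ceemdan.shape)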

  16. Probabilistic analysis of tsunami hazards

    USGS Publications Warehouse

    Geist, E.L.; Parsons, T.

    2006-01-01

    Determining the likelihood of a disaster is a key component of any comprehensive hazard assessment. This is particularly true for tsunamis, even though most tsunami hazard assessments have in the past relied on scenario or deterministic type models. We discuss probabilistic tsunami hazard analysis (PTHA) from the standpoint of integrating computational methods with empirical analysis of past tsunami runup. PTHA is derived from probabilistic seismic hazard analysis (PSHA), with the main difference being that PTHA must account for far-field sources. The computational methods rely on numerical tsunami propagation models rather than empirical attenuation relationships as in PSHA in determining ground motions. Because a number of source parameters affect local tsunami runup height, PTHA can become complex and computationally intensive. Empirical analysis can function in one of two ways, depending on the length and completeness of the tsunami catalog. For site-specific studies where there is sufficient tsunami runup data available, hazard curves can primarily be derived from empirical analysis, with computational methods used to highlight deficiencies in the tsunami catalog. For region-wide analyses and sites where there are little to no tsunami data, a computationally based method such as Monte Carlo simulation is the primary method to establish tsunami hazards. Two case studies that describe how computational and empirical methods can be integrated are presented for Acapulco, Mexico (site-specific) and the U.S. Pacific Northwest coastline (region-wide analysis).
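
    The region-wide Monte Carlo approach can be outlined in a few lines: draw event counts from a Poisson process, sample magnitudes from a Gutenberg-Richter-like law, map them to runup, and tabulate annual exceedance rates. Every relationship and rate below is an illustrative placeholder, not a calibrated hazard model:

        import numpy as np

        rng = np.random.default_rng(6)
        years, rate = 10_000, 0.05          # synthetic catalog length, events/year
        n = rng.poisson(rate * years)

        # Gutenberg-Richter-like magnitudes above Mw 7 (b-value 1), with a
        # placeholder magnitude-to-runup scaling and lognormal scatter.
        mags = 7.0 - np.log10(rng.uniform(size=n))
        runup = 10 ** (0.5 * (mags - 7.0)) * rng.lognormal(0.0, 0.3, n)

        levels = np.linspace(0.5, 10.0, 20)
        annual_rate = np.array([(runup >= h).sum() / years for h in levels])
        for h, r in zip(levels[::5], annual_rate[::5]):
            print(f"runup >= {h:4.1f} m: {r:.4f} / yr")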

  17. Large-Scale Test of Dynamic Correlation Processors: Implications for Correlation-Based Seismic Pipelines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dodge, D. A.; Harris, D. B.

    Correlation detectors are of considerable interest to the seismic monitoring communities because they offer reduced detection thresholds and combine detection, location and identification functions into a single operation. They appear to be ideal for applications requiring screening of frequent repeating events. However, questions remain about how broadly empirical correlation methods are applicable. We describe the effectiveness of banks of correlation detectors in a system that combines traditional power detectors with correlation detectors in terms of efficiency, which we define to be the fraction of events detected by the correlators. This paper elaborates and extends the concept of a dynamic correlation detection framework – a system which autonomously creates correlation detectors from event waveforms detected by power detectors – and reports observed performance on a network of arrays in terms of efficiency. We performed a large scale test of dynamic correlation processors on an 11 terabyte global dataset using 25 arrays in the single frequency band 1-3 Hz. The system found over 3.2 million unique signals and produced 459,747 screened detections. A very satisfying result is that, on average, efficiency grows with time and, after nearly 16 years of operation, exceeds 47% for events observed over all distance ranges and approaches 70% for near regional and 90% for local events. This observation suggests that future pipeline architectures should make extensive use of correlation detectors, principally for decluttering observations of local and near-regional events. Our results also suggest that future operations based on correlation detection will require commodity large-scale computing infrastructure, since the numbers of correlators in an autonomous system can grow into the hundreds of thousands.
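
    The core operation inside each correlation detector is a normalized cross-correlation of a template against the continuous trace, with a detection declared above a threshold. A minimal sketch on synthetic data; a production pipeline would use FFT-based correlation and multi-channel stacking:

        import numpy as np

        rng = np.random.default_rng(7)
        template = rng.normal(size=200)
        trace = rng.normal(size=5000)
        trace[3000:3200] += 3.0 * template   # a buried repeat of the template

        def ncc(trace, template):
            """Normalized cross-correlation of the template against every window."""
            n = len(template)
            t = (template - template.mean()) / template.std()
            out = np.empty(len(trace) - n + 1)
            for i in range(len(out)):
                w = trace[i:i + n]
                out[i] = np.dot(t, (w - w.mean()) / w.std()) / n
            return out

        cc = ncc(trace, template)
        detections = np.flatnonzero(cc > 0.6)   # illustrative threshold
        print("peak", round(cc.max(), 2), "at sample", int(cc.argmax()))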

  19. Investigations of the pushability behavior of cardiovascular angiographic catheters.

    PubMed

    Bloss, Peter; Rothe, Wolfgang; Wünsche, Peter; Werner, Christian; Rothe, Alexander; Kneissl, Georg Dieter; Burger, Wolfram; Rehberg, Elisabeth

    2003-01-01

    The placement of angiographic catheters into the vascular system is a routine procedure in modern clinical practice. Objective evaluation protocols, based on measurable physical quantities correlated with empirical clinical findings, are not yet available but are of utmost importance for catheter manufacturers for in-house product screening and optimization. In this context, we present an assessment of multiple mechanical and surface catheter properties, such as static and kinetic friction, bending stiffness, microscopic surface topology, surface roughness, and surface free energy, and their interrelation. The theoretical framework, a description of the experimental methods, and extensive data measured on several different catheters are provided, and in conclusion a testing procedure is defined. Although this procedure is based on the measurement of several physical quantities, it can be easily implemented by commercial catheter-testing laboratories as it relies on relatively low-cost standard methods.

  20. The momentum transfer of incompressible turbulent separated flow due to cavities with steps

    NASA Technical Reports Server (NTRS)

    White, R. E.; Norton, D. J.

    1977-01-01

    An experimental study was conducted using a plate test bed with a turbulent boundary layer to determine the momentum transfer to the faces of step/cavity combinations on the plate. Experimental data were obtained from configurations including an isolated configuration and an array of blocks in tile patterns. A momentum transfer correlation model of the pressure forces on an isolated step/cavity was developed from the experimental results to relate flow and geometry parameters. The experiments reveal that isolated step/cavity excrescences do not have a unique and unifying parameter group, due in part to cavity depth effects and in part to width parameter scale effects. Drag predictions for tile patterns made with a kinetic-pressure empirical method agree well with the experimental results. Trends were not, however, predicted by a method based on variable roughness density phenomenology.
