Hong, Peilong; Li, Liming; Liu, Jianji; Zhang, Guoquan
2016-03-29
Young's double-slit or two-beam interference is of fundamental importance for understanding various interference effects, in which the stationary phase difference between the two beams plays the key role in first-order coherence. In contrast, in high-order optical coherence it is the statistical behavior of the optical phase that plays the key role. In this article, by employing a fundamental interfering configuration with two classical point sources, we showed that the high-order optical coherence between two classical point sources can be actively designed by controlling the statistical behavior of the relative phase difference between the two sources. Synchronous-position Nth-order subwavelength interference with an effective wavelength of λ/M was demonstrated, in which λ is the wavelength of the point sources and M is an integer not larger than N. Interestingly, we found that the synchronous-position Nth-order interference fringe fingerprints the statistical trace of the random phase fluctuation of the two classical point sources; it therefore provides an effective way to characterize the statistical properties of phase fluctuations for incoherent light sources.
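A minimal numerical sketch (not from the paper) of the effect described above: two point sources whose relative phase φ is restricted to M equally likely discrete values produce no first-order fringes, while the synchronous-position Nth-order intensity moment oscillates as cos(MΔ), i.e., with an effective wavelength of λ/M. All parameter values are illustrative.

```python
import numpy as np

# Two-source interference with a controlled random relative phase phi:
# I(x) = |E1 + E2|^2 = 2 + 2*cos(delta + phi), delta = k*d*x/L.
rng = np.random.default_rng(0)
delta = np.linspace(0, 4 * np.pi, 400)           # geometric phase difference
N, M = 3, 3                                      # correlation order, phase levels
phi = rng.integers(0, M, 20000) * 2 * np.pi / M  # discrete random phase

I = 2 + 2 * np.cos(delta[:, None] + phi[None, :])   # instantaneous intensity
G1 = I.mean(axis=1)          # first-order average: flat, no fringes
GN = (I ** N).mean(axis=1)   # Nth-order moment: fringes with period 2*pi/M

vis = lambda g: (g.max() - g.min()) / (g.max() + g.min())
print(f"1st-order visibility: {vis(G1):.3f}   {N}th-order visibility: {vis(GN):.3f}")
# GN oscillates as cos(M*delta): an effective wavelength of lambda/M.
```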
Adaptive interference cancel filter for evoked potential using high-order cumulants.
Lin, Bor-Shyh; Lin, Bor-Shing; Chong, Fok-Ching; Lai, Feipei
2004-01-01
This paper presents evoked potential (EP) processing using an adaptive interference cancellation (AIC) filter with second- and higher-order cumulants. In the conventional ensemble-averaging method, experiments must be repeated many times to record the required data. Recently, the AIC structure with second-order statistics has proved more efficient for EP processing than traditional averaging, but it is sensitive to both the reference-signal statistics and the choice of step size. We therefore propose a higher-order-statistics-based AIC method to overcome these drawbacks. The study was conducted on somatosensory EPs corrupted with EEG, using a gradient-type algorithm in the AIC method. Comparisons of AIC filters based on second-, third-, and fourth-order statistics are also presented. We observed that the AIC filter with third-order statistics converges better for EP processing and is not sensitive to the selection of step size and reference input.
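For orientation, here is a minimal second-order (LMS-style) adaptive interference canceller of the kind the abstract takes as its baseline; the proposed method replaces the second-order error-times-reference gradient with a cumulant-based one. Signals, filter length, and step size below are invented for illustration.

```python
import numpy as np

# Baseline AIC structure: primary = EP + interference, reference ~ interference.
rng = np.random.default_rng(1)
n = 5000
ep = np.sin(2 * np.pi * 7 * np.arange(n) / n)   # toy "evoked potential"
eeg = rng.normal(0, 1, n)                        # interfering EEG source
primary = ep + 0.8 * eeg
reference = eeg + 0.1 * rng.normal(0, 1, n)      # noisy reference channel

L, mu = 8, 1e-3
w, out = np.zeros(L), np.zeros(n)
for k in range(L, n):
    x = reference[k - L:k][::-1]
    e = primary[k] - w @ x       # canceller output = EP estimate
    w += mu * e * x              # second-order (LMS) gradient step
    out[k] = e
print("residual interference power:", np.mean((out[L:] - ep[L:]) ** 2))
```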
Response properties of ON-OFF retinal ganglion cells to high-order stimulus statistics.
Xiao, Lei; Gong, Han-Yan; Gong, Hai-Qing; Liang, Pei-Ji; Zhang, Pu-Ming
2014-10-17
The visual stimulus statistics are the fundamental parameters that provide the reference for studying visual coding rules. In this study, multi-electrode extracellular recording experiments were designed and implemented on bullfrog retinal ganglion cells to explore the neural response properties to changes in stimulus statistics. Changes in low-order stimulus statistics, such as intensity and contrast, were clearly reflected in the neuronal firing rate. However, it was difficult to distinguish changes in high-order statistics, such as skewness and kurtosis, based on the neuronal firing rate alone. The neuronal temporal filtering and sensitivity characteristics were further analyzed. We observed that the peak-to-peak amplitude of the temporal filter and the neuronal sensitivity, obtained from either neuronal ON spikes or OFF spikes, could exhibit significant changes when the high-order stimulus statistics were changed. These results indicate that in the retina, the neuronal response properties may be reliable and powerful in carrying some complex and subtle visual information.
NASA Astrophysics Data System (ADS)
Qi, D.; Majda, A.
2017-12-01
A low-dimensional reduced-order statistical closure model is developed for quantifying the uncertainty in statistical sensitivity and intermittency in the principal model directions with largest variability in high-dimensional turbulent systems and turbulent transport models. Imperfect model sensitivity is improved through a recent mathematical strategy for calibrating model errors in a training phase, where information theory and linear statistical response theory are combined in a systematic fashion to achieve optimal model performance. The reduced-order method builds on a self-consistent mathematical framework for general systems with quadratic nonlinearity, in which crucial high-order statistics are approximated by a systematic model calibration procedure. Model efficiency is improved through additional damping and noise corrections that replace the expensive energy-conserving nonlinear interactions. Model errors due to the imperfect nonlinear approximation are corrected by tuning the model parameters using linear response theory with an information metric in a training phase before prediction. A statistical energy principle is adopted to introduce a global scaling factor in characterizing the higher-order moments in a consistent way to improve model sensitivity. Stringent models of barotropic and baroclinic turbulence are used to demonstrate the feasibility of the reduced-order methods. Principal statistical responses in mean and variance can be captured by the reduced-order models with accuracy and efficiency. The reduced-order models are also used to capture a crucial passive tracer field advected by the baroclinic turbulent flow. It is demonstrated that crucial statistical quantities, such as the tracer spectrum and the fat tails in the tracer probability density functions at the most important large scales, can be captured efficiently and accurately by the reduced-order tracer model in various dynamical regimes of the flow field with distinct statistical structures.
Correlative weighted stacking for seismic data in the wavelet domain
Zhang, S.; Xu, Y.; Xia, J.; ,
2004-01-01
Horizontal stacking plays a crucial role in modern seismic data processing, for it not only compresses random noise and multiple reflections, but also provides foundational data for subsequent migration and inversion. However, a number of examples have shown that random noise in adjacent traces exhibits correlation and coherence. Average stacking, and weighted stacking based on the conventional correlation function, both produce false events caused by such noise. Wavelet transforms and high-order statistics are very useful tools in modern signal processing: the multiresolution analysis of wavelet theory can decompose a signal on different scales, and high-order correlation functions can suppress correlated noise against which the conventional correlation function is powerless. Based on the theory of wavelet transforms and high-order statistics, a high-order correlative weighted stacking (HOCWS) technique is presented in this paper. Its essence is to stack common-midpoint gathers after normal-moveout correction using weights calculated from high-order correlative statistics in the wavelet domain. Synthetic examples demonstrate its advantages in improving the signal-to-noise (S/N) ratio and suppressing correlated random noise.
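A hedged sketch of the HOCWS idea (the paper's exact weighting is not reproduced here): traces of an NMO-corrected gather are decomposed with a discrete wavelet transform, each trace's subband is weighted by a normalized fourth-order cross-moment with a robust pilot stack, and the weighted stack is reconstructed. The wavelet choice, level, and moment definition are assumptions.

```python
import numpy as np
import pywt

def hocws(gather, wavelet="db4", level=3):
    """Weighted stack with per-subband fourth-order correlation weights."""
    pilot = np.median(gather, axis=0)                  # robust pilot trace
    pilot_c = pywt.wavedec(pilot, wavelet, level=level)
    trace_c = [pywt.wavedec(t, wavelet, level=level) for t in gather]
    stacked = []
    for b in range(len(pilot_c)):                      # each subband
        band = np.stack([c[b] for c in trace_c])       # (ntraces, ncoef)
        m4 = np.mean((band * pilot_c[b]) ** 2, axis=1) # 4th-order cross-moment
        w = m4 / (m4.sum() + 1e-12)
        stacked.append((w[:, None] * band).sum(axis=0))
    return pywt.waverec(stacked, wavelet)[: gather.shape[1]]

rng = np.random.default_rng(2)
signal = np.sin(2 * np.pi * np.linspace(0, 4, 256))
gather = signal + 0.7 * rng.normal(size=(24, 256))     # toy CMP gather
print("error variance, plain mean :", np.var(gather.mean(0) - signal))
print("error variance, HOCWS      :", np.var(hocws(gather) - signal))
```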
Two-Dimensional Hermite Filters Simplify the Description of High-Order Statistics of Natural Images.
Hu, Qin; Victor, Jonathan D
2016-09-01
Natural image statistics play a crucial role in shaping biological visual systems, understanding their function and design principles, and designing effective computer-vision algorithms. High-order statistics are critical for conveying local features, but they are challenging to study, largely because they are so numerous and varied. Here, via the use of two-dimensional Hermite (TDH) functions, we identify a covert symmetry in the high-order statistics of natural images that simplifies this task. This symmetry emerges from the structure of TDH functions, which form an orthogonal set organized into a hierarchy of ranks. Specifically, we find that the shape (skewness and kurtosis) of the distribution of filter coefficients depends only on the projection of the function onto a one-dimensional subspace specific to each rank. The characterization of natural image statistics provided by TDH filter coefficients reflects both their phase and amplitude structure, and we suggest an intuitive interpretation for the special subspace within each rank.
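A sketch of the basic objects involved (parameters are illustrative, and a Laplacian random field stands in for a natural image): rank-r TDH filters are built as products of 1-D Hermite functions with index sum r, and the skewness/kurtosis of their coefficient distributions is what the paper shows depends only on a rank-specific 1-D projection.

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from scipy.stats import skew, kurtosis

def hermite_fn(n, x, sigma=1.0):
    """1-D Hermite function H_n(x/s) times a Gaussian, unit L2 norm."""
    c = np.zeros(n + 1); c[n] = 1.0
    h = hermval(x / sigma, c) * np.exp(-x**2 / (2 * sigma**2))
    return h / np.sqrt(np.sum(h**2) * (x[1] - x[0]))

x = np.linspace(-4, 4, 33)
rank = 2
rng = np.random.default_rng(3)
image = rng.laplace(size=(256, 256))    # stand-in for a natural image

for j in range(rank + 1):               # all TDH filters of total rank 2
    f = np.outer(hermite_fn(j, x), hermite_fn(rank - j, x))
    coeffs = [np.sum(image[r:r+33, c0:c0+33] * f)     # patch filter outputs
              for r in range(0, 224, 33) for c0 in range(0, 224, 33)]
    print(f"filter ({j},{rank-j}): skew={skew(coeffs):+.2f} "
          f"kurtosis={kurtosis(coeffs):+.2f}")
```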
NASA Astrophysics Data System (ADS)
Rubtsov, Vladimir; Kapralov, Sergey; Chalyk, Iuri; Ulianova, Onega; Ulyanov, Sergey
2013-02-01
Statistical properties of laser speckles formed in the skin and mucosa of the colon have been analyzed and compared. It is demonstrated that the first- and second-order statistics of "skin" speckles and "mucous" speckles are quite different, and that speckles formed in mucosa are non-Gaussian. The layered structure of the colon mucosa causes the formation of speckled speckles. First- and second-order statistics of speckled speckles are reviewed in this paper, and the statistical properties of Fresnel and Fraunhofer doubly scattered and cascade speckles are described. The non-Gaussian statistics of biospeckles may lead to high localization of coherent-light intensity in human tissue during laser surgery. A way of suppressing highly localized non-Gaussian speckles is suggested.
Local image statistics: maximum-entropy constructions and perceptual salience
Victor, Jonathan D.; Conte, Mary M.
2012-01-01
The space of visual signals is high-dimensional, and natural visual images have a highly complex statistical structure. While many studies suggest that only a limited number of image statistics are used for perceptual judgments, a full understanding of visual function requires analysis not only of the impact of individual image statistics but also of how they interact. In natural images, these statistical elements (luminance distributions, correlations of low and high order, edges, occlusions, etc.) are intermixed, and their effects are difficult to disentangle. Thus, there is a need for constructing stimuli in which one or more statistical elements are introduced in a controlled fashion, so that their individual and joint contributions can be analyzed. With this as motivation, we present algorithms to construct synthetic images in which local image statistics—including luminance distributions, pair-wise correlations, and higher-order correlations—are explicitly specified and all other statistics are determined implicitly by maximum entropy. We then apply this approach to measure the sensitivity of the human visual system to local image statistics and to sample their interactions. PMID:22751397
Feature extraction and classification algorithms for high dimensional data
NASA Technical Reports Server (NTRS)
Lee, Chulhee; Landgrebe, David
1993-01-01
Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed, and the relationship between thresholds and the error caused by the truncation is investigated. Next, an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from the decision boundaries. A characteristic of the proposed method arises from the observation that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem, and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances, as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used for both parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of second-order statistics in analyzing high dimensional data is recognized, and by investigating the characteristics of such data, a reason why second-order statistics must be taken into account is suggested. This in turn creates a need to represent the second-order statistics, so a method to visualize statistics using a color code is proposed. By representing statistics with color coding, one can easily extract and compare the first- and second-order statistics.
Pattern statistics on Markov chains and sensitivity to parameter estimation
Nuel, Grégory
2006-10-17
Background: In order to compute pattern statistics in computational biology, a Markov model is commonly used to take the sequence composition into account; its parameters must usually be estimated. The aim of this paper is to determine how sensitive these statistics are to parameter estimation, and what the consequences of this variability are for pattern studies (finding the most over-represented words in a genome, the most significant words common to a set of sequences,...). Results: In the particular case where pattern statistics (overlap counting only) are computed through binomial approximations, we use the delta method to give an explicit expression for σ, the standard deviation of a pattern statistic. This result is validated using simulations, and a simple pattern study is also considered. Conclusion: We establish that the use of high-order Markov models can easily lead to major mistakes due to the high sensitivity of pattern statistics to parameter estimation. PMID:17044916
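For reference, the generic delta-method variance the abstract alludes to (notation ours, not the paper's): if the estimated Markov parameters are asymptotically normal, a smooth pattern statistic inherits a normal approximation with

```latex
% Delta method: \hat\theta \approx \mathcal{N}(\theta, \Sigma/n), S = f(\hat\theta)
\sigma^2(S) \;\approx\; \frac{1}{n}\,\nabla f(\theta)^{\top}\,\Sigma\,\nabla f(\theta),
\qquad \sigma(S) = \sqrt{\sigma^2(S)} .
```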
High order statistical signatures from source-driven measurements of subcritical fissile systems
NASA Astrophysics Data System (ADS)
Mattingly, John Kelly
1998-11-01
This research focuses on the development and application of high order statistical analyses applied to measurements performed with subcritical fissile systems driven by an introduced neutron source. The signatures presented are derived from counting statistics of the introduced source and radiation detectors that observe the response of the fissile system. It is demonstrated that successively higher order counting statistics possess progressively higher sensitivity to reactivity. Consequently, these signatures are more sensitive to changes in the composition, fissile mass, and configuration of the fissile assembly. Furthermore, it is shown that these techniques are capable of distinguishing the response of the fissile system to the introduced source from its response to any internal or inherent sources. This ability combined with the enhanced sensitivity of higher order signatures indicates that these techniques will be of significant utility in a variety of applications. Potential applications include enhanced radiation signature identification of weapons components for nuclear disarmament and safeguards applications and augmented nondestructive analysis of spent nuclear fuel. In general, these techniques expand present capabilities in the analysis of subcritical measurements.
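A toy illustration (not the report's actual estimator) of why higher-order counting statistics are more sensitive: reduced factorial moments of the counts-per-gate distribution equal 1 at every order for a pure Poisson source but grow with order for a multiplying (clustered) system, here mimicked by a mixed-Poisson process.

```python
import numpy as np

def reduced_factorial_moments(counts, kmax=4):
    """m_k = E[n(n-1)...(n-k+1)] / E[n]^k; equals 1 for Poisson counts."""
    n = np.asarray(counts, dtype=float)
    out = {}
    for k in range(1, kmax + 1):
        fk = np.ones_like(n)
        for j in range(k):
            fk = fk * (n - j)
        out[k] = round(fk.mean() / n.mean() ** k, 3)
    return out

rng = np.random.default_rng(4)
source_only = rng.poisson(3.0, 100_000)                   # no multiplication
multiplying = rng.poisson(rng.gamma(2.0, 1.5, 100_000))   # toy clustered counts
print("Poisson     :", reduced_factorial_moments(source_only))
print("multiplying :", reduced_factorial_moments(multiplying))
```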
High Order Entropy-Constrained Residual VQ for Lossless Compression of Images
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen
1995-01-01
High order entropy coding is a powerful technique for exploiting high order statistical dependencies. However, the exponentially high complexity associated with such a method often discourages its use. In this paper, an entropy-constrained residual vector quantization method is proposed for lossless compression of images. The method consists of first quantizing the input image using a high order entropy-constrained residual vector quantizer and then coding the residual image using a first order entropy coder. The distortion measure used in the entropy-constrained optimization is essentially the first order entropy of the residual image. Experimental results show very competitive performance.
On the application of Rice's exceedance statistics to atmospheric turbulence.
NASA Technical Reports Server (NTRS)
Chen, W. Y.
1972-01-01
Discrepancies produced by the application of Rice's exceedance statistics to atmospheric turbulence are examined. First- and second-order densities from several data sources have been measured for this purpose. Particular care was taken to select segments of turbulence with stationary mean and variance over the entire segment. Results show that even for a stationary segment of turbulence, the process is still highly non-Gaussian, despite the Gaussian appearance of its first-order distribution. The data also indicate strongly non-Gaussian second-order distributions. It is therefore concluded that even stationary atmospheric turbulence with a normal first-order distribution cannot be considered a Gaussian process, and consequently the application of Rice's exceedance statistics should be approached with caution.
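A small sketch of the comparison being made (synthetic data, not the paper's measurements): count empirical up-crossings of level u and compare with the Gaussian (Rice) prediction ν(u) = ν(0)·exp(−u²/2σ²); a process with a Gaussian-looking histogram but non-Gaussian second-order structure breaks the prediction at high levels.

```python
import numpy as np

def upcrossings(x, u):
    return np.sum((x[:-1] < u) & (x[1:] >= u))

rng = np.random.default_rng(5)
k = np.ones(20) / 20                                   # smoothing kernel
gauss = np.convolve(rng.normal(size=200_000), k, "valid")
nongauss = gauss * np.convolve(rng.normal(size=200_000), k, "valid")

for x, name in [(gauss, "Gaussian"), (nongauss, "non-Gaussian")]:
    s, n0 = x.std(), upcrossings(x, 0.0)
    for u in (1.0, 2.0, 3.0):
        ratio = upcrossings(x, u * s) / (n0 * np.exp(-u**2 / 2))
        print(f"{name:12s} u={u:.0f}s  measured/Rice = {ratio:5.2f}")
```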
NASA Technical Reports Server (NTRS)
Zimmerman, G. A.; Olsen, E. T.
1992-01-01
Noise power estimation in the High-Resolution Microwave Survey (HRMS) sky survey element is considered as an example of a constant false alarm rate (CFAR) signal detection problem. Order-statistic-based noise power estimators for CFAR detection are considered in terms of required estimator accuracy and estimator dynamic range. By limiting the dynamic range of the value to be estimated, the performance of an order-statistic estimator can be matched by simpler techniques requiring only a single pass over the data. Simple threshold-and-count techniques are examined, and it is shown how several parallel threshold-and-count estimation devices can be used to expand the dynamic range to meet HRMS system requirements with minimal hardware complexity. An input/output (I/O) efficient limited-precision order-statistic estimator with wide but limited dynamic range is also examined.
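A minimal sketch of the threshold-and-count idea (thresholds and noise model are invented): a bank of counters with staggered thresholds approximates an order statistic, here the median, in a single pass; parallel banks covering successive decades extend the dynamic range cheaply.

```python
import numpy as np

def threshold_and_count_median(samples, thresholds):
    """One-pass median estimate: threshold whose exceedance count is ~50%."""
    counts = np.array([(samples > t).sum() for t in thresholds])
    i = np.argmin(np.abs(counts - len(samples) / 2))
    return thresholds[i]

rng = np.random.default_rng(6)
noise_power = rng.chisquare(2, 100_000) * 0.7        # toy noise-power samples
banks = np.concatenate([np.linspace(10.0**e, 10.0**(e + 1), 16)
                        for e in (-1, 0, 1)])        # three-decade range
print("threshold-and-count estimate:", threshold_and_count_median(noise_power, banks))
print("exact sample median         :", np.median(noise_power))
```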
Data Literacy is Statistical Literacy
ERIC Educational Resources Information Center
Gould, Robert
2017-01-01
Past definitions of statistical literacy should be updated in order to account for the greatly amplified role that data now play in our lives. Experience working with high-school students in an innovative data science curriculum has shown that teaching statistical literacy, augmented by data literacy, can begin early.
Estimation of integral curves from high angular resolution diffusion imaging (HARDI) data.
Carmichael, Owen; Sakhanenko, Lyudmila
2015-05-15
We develop statistical methodology for HARDI, a popular brain imaging technique, based on the high-order tensor model of Özarslan and Mareci [10]. We investigate how uncertainty in the imaging procedure propagates through all levels of the model: signals, tensor fields, vector fields, and fibers. We construct asymptotically normal estimators of the integral curves, or fibers, which allow us to trace the fibers together with confidence ellipsoids. The procedure is computationally intense, as it blends linear-algebra concepts for high-order tensors with asymptotic statistical analysis. The theoretical results are illustrated on simulated and real datasets. This work generalizes the statistical methodology proposed for low angular resolution diffusion tensor imaging by Carmichael and Sakhanenko [3] to several fibers per voxel. It is also pioneering statistical work on tractography from HARDI data: it avoids the typical limitations of deterministic tractography methods while delivering the same information as probabilistic tractography methods. Our method is computationally cheap, and it provides a well-founded mathematical and statistical framework in which diverse functionals on fibers, directions, and tensors can be studied in a systematic and rigorous way. PMID:25937674
A Virtual Study of Grid Resolution on Experiments of a Highly-Resolved Turbulent Plume
NASA Astrophysics Data System (ADS)
Maisto, Pietro M. F.; Marshall, Andre W.; Gollner, Michael J.; Fire Protection Engineering Department Collaboration
2017-11-01
An accurate representation of sub-grid-scale turbulent mixing is critical for modeling fire plumes and smoke transport. In this study, PLIF and PIV diagnostics are used with the saltwater modeling technique to provide highly resolved instantaneous field measurements in unconfined turbulent plumes, useful for statistical analysis, physical insight, and model validation. The effect of resolution was investigated by applying a virtual interrogation window of varying size to the high-resolution field measurements. Motivated by LES low-pass filtering concepts, the high-resolution experimental data in this study can be analyzed within the interrogation windows (i.e., statistics at the sub-grid scale) and on the interrogation windows (i.e., statistics at the resolved scale). A dimensionless resolution threshold (L/D*) criterion was determined for achieving converged statistics of the filtered measurements. This criterion was then used to establish the relative importance of large- and small-scale turbulence phenomena while investigating specific scales of the turbulent flow. First-order data sets start to collapse at a resolution of 0.3D*, while second- and higher-order statistical moments require an interrogation window size of 0.2D*.
NASA Astrophysics Data System (ADS)
Zhou, Anran; Xie, Weixin; Pei, Jihong
2018-06-01
Accurate detection of maritime targets in infrared imagery under various sea-clutter conditions is always a challenging task. The fractional Fourier transform (FRFT) extends the Fourier transform to fractional orders and carries richer spatial-frequency information. By combining it with high-order statistic filtering, a new ship detection method is proposed. First, the proper range of the angle parameter is determined to make the ship components and the background easier to separate. Second, a new high-order statistic curve (HOSC) at each fractional frequency point is designed. It is proved that the maximal peak interval of the HOSC reflects the target information, while points outside the interval reflect the background, and that the HOSC value for the ship is much larger than that for the sea clutter. The curve's maximal target peak interval is then located and extracted by bandpass filtering in the fractional Fourier domain; since the HOSC value outside the peak interval decreases rapidly to 0, the background is effectively suppressed. Finally, the detection result is obtained by double-threshold segmentation and a target-region selection method. The results show that the proposed method is excellent for maritime target detection under heavy clutter.
Black Females in High School: A Statistical Educational Profile
ERIC Educational Resources Information Center
Muhammad, Crystal Gafford; Dixson, Adrienne D.
2008-01-01
In life as in literature, both the mainstream public and the Black community writ large overlook Black female experiences, both adolescent and adult. In order to contribute to the knowledge base regarding this population, we present through our study a statistical portrait of Black females in high school. To do so, we present an analysis of…
Order statistics applied to the most massive and most distant galaxy clusters
NASA Astrophysics Data System (ADS)
Waizmann, J.-C.; Ettori, S.; Bartelmann, M.
2013-06-01
In this work, we present an analytic framework for calculating the individual and joint distributions of the nth most massive or nth highest redshift galaxy cluster for a given survey characteristic, allowing us to formulate Λ cold dark matter (ΛCDM) exclusion criteria. We show that the cumulative distribution functions steepen with increasing order, giving them a higher constraining power with respect to extreme value statistics. Additionally, we find that the order statistics in mass (being dominated by clusters at lower redshifts) are sensitive to the matter density and the normalization of the matter fluctuations, whereas the order statistics in redshift are particularly sensitive to the geometric evolution of the Universe. For a fixed cosmology, both order statistics are efficient probes of the functional shape of the mass function at the high-mass end. To allow a quick assessment of both order statistics, we provide fits as a function of the survey area that allow percentile estimation with an accuracy better than 2 per cent. Furthermore, we discuss the joint distributions in the two-dimensional case and find that the combination of the largest and the second largest observation is most likely to be realized with similar values and a broadly peaked joint distribution. When combining the largest observation with higher orders, a larger gap between the observations becomes more likely, and when combining higher orders in general, the joint probability density function peaks more strongly. Having introduced the theory, we apply the order-statistical analysis to the South Pole Telescope (SPT) massive cluster sample and the Meta-Catalogue of X-ray detected Clusters of galaxies (MCXC) and find that the 10 most massive clusters in the sample are consistent with ΛCDM and the Tinker mass function. For the order statistics in redshift, we find a discrepancy between the data and the theoretical distributions, which could in principle indicate a deviation from the standard cosmology. However, we attribute this deviation to the uncertainty in the modelling of the SPT survey selection function. In turn, by assuming the ΛCDM reference cosmology, order statistics can also be utilized for consistency checks of the completeness of the observed sample and of the modelling of the survey selection function.
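The core order-statistics relation used by such analyses is elementary and easy to evaluate; a hedged sketch with a toy Pareto-tail parent CDF standing in for the halo mass function:

```python
import numpy as np
from scipy.stats import binom

def cdf_kth_largest(F_m, N, k):
    """P(kth largest of N iid draws <= m) = P(at most k-1 draws exceed m)."""
    return binom.cdf(k - 1, N, 1.0 - F_m)

m = np.linspace(1.0, 100.0, 2000)    # mass, arbitrary units
F = 1.0 - m ** -2.0                   # toy parent CDF (Pareto tail)
N = 1000                              # objects in the survey
for k in (1, 2, 5, 10):
    med = m[np.searchsorted(cdf_kth_largest(F, N, k), 0.5)]
    print(f"median mass of {k:2d}th most massive: {med:5.1f}")
# The distributions steepen (medians drop and tighten) with increasing order k.
```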
Koike-Akino, Toshiaki; Duan, Chunjie; Parsons, Kieran; Kojima, Keisuke; Yoshida, Tsuyoshi; Sugihara, Takashi; Mizuochi, Takashi
2012-07-02
Fiber nonlinearity has become a major limiting factor in realizing ultra-high-speed optical communications. We propose a fractionally spaced equalizer that exploits trained high-order statistics to deal with data-pattern-dependent nonlinear impairments in fiber-optic communications. Computer simulation reveals that the proposed 3-tap equalizer improves the Q-factor by more than 2 dB for long-haul transmissions over a 5,230 km distance at a 40 Gbps data rate. We also demonstrate that the joint use of digital backpropagation (DBP) and the proposed equalizer offers an additional 1-2 dB performance improvement due to the channel-shortening gain. Performance for high-speed transmissions of 100 Gbps and beyond is evaluated as well.
Restoration of MRI Data for Field Nonuniformities using High Order Neighborhood Statistics
Hadjidemetriou, Stathis; Studholme, Colin; Mueller, Susanne; Weiner, Michael; Schuff, Norbert
2007-01-01
MRI at high magnetic fields (>3.0 T) is complicated by strong inhomogeneous radio-frequency fields, sometimes termed the "bias field". These lead to nonuniformity of image intensity, greatly complicating further analysis such as registration and segmentation. Existing methods for bias field correction are effective for 1.5 T or 3.0 T MRI but are not completely satisfactory for higher-field data. This paper develops an effective bias field correction for high-field MRI based on the assumption that the nonuniformity varies smoothly in space. The nonuniformity is quantified and unmixed using high-order neighborhood statistics of intensity co-occurrences, computed within spherical windows of limited size over the entire image. The restoration is iterative and makes use of a novel stable stopping criterion that depends on the scaled entropy of the co-occurrence statistics, a non-monotonic function of the iterations: the Shannon entropy of the co-occurrence statistics normalized to the effective dynamic range of the image. The algorithm restores whole-head data, is robust to the intense nonuniformities present in high-field acquisitions, and is robust to variations in anatomy. It significantly improves bias field correction in comparison to N3 on phantom 1.5 T head data and on 4 T human head data. PMID:18193095
NASA Astrophysics Data System (ADS)
Goodman, J. W.
This book is based on the thesis that some training in the area of statistical optics should be included as a standard part of any advanced optics curriculum. Random variables are discussed, taking into account definitions of probability and random variables, distribution functions and density functions, an extension to two or more random variables, statistical averages, transformations of random variables, sums of real random variables, Gaussian random variables, complex-valued random variables, and random phasor sums. Other subjects examined are related to random processes, some first-order properties of light waves, the coherence of optical waves, some problems involving high-order coherence, effects of partial coherence on imaging systems, imaging in the presence of randomly inhomogeneous media, and fundamental limits in photoelectric detection of light. Attention is given to deterministic versus statistical phenomena and models, the Fourier transform, and the fourth-order moment of the spectrum of a detected speckle image.
Making statistical inferences about software reliability
NASA Technical Reports Server (NTRS)
Miller, Douglas R.
1988-01-01
Failure times of software undergoing random debugging can be modelled as order statistics of independent but nonidentically distributed exponential random variables. Using this model inferences can be made about current reliability and, if debugging continues, future reliability. This model also shows the difficulty inherent in statistical verification of very highly reliable software such as that used by digital avionics in commercial aircraft.
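A small simulation of the stated model (bug count and rate distribution are invented): failure times are order statistics of independent exponentials with per-bug rates, and the quantity inference targets is the residual failure rate after k fixes.

```python
import numpy as np

rng = np.random.default_rng(7)
lam = rng.gamma(0.5, 2e-4, size=50)            # per-bug failure rates (assumed)
t_fail = np.sort(rng.exponential(1.0 / lam))   # ordered observed failure times

print("first/last failure:", t_fail[0], t_fail[-1])
# Crude illustration using the true rates (what inference must estimate):
# high-rate bugs tend to fail first, so the slowest bugs dominate late on.
for k in (10, 30, 49):
    residual = np.sort(lam)[: lam.size - k].sum()
    print(f"after ~{k:2d} fixes, residual failure rate ~ {residual:.2e}")
```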
Subband Image Coding with Jointly Optimized Quantizers
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Chung, Wilson C.; Smith Mark J. T.
1995-01-01
An iterative design algorithm for the joint design of complexity- and entropy-constrained subband quantizers and associated entropy coders is proposed. Unlike conventional subband design algorithms, the proposed algorithm does not require the use of various bit allocation algorithms. Multistage residual quantizers are employed here because they provide greater control of the complexity-performance tradeoffs, and also because they allow efficient and effective high-order statistical modeling. The resulting subband coder exploits statistical dependencies within subbands, across subbands, and across stages, mainly through complexity-constrained high-order entropy coding. Experimental results demonstrate that the complexity-rate-distortion performance of the new subband coder is exceptional.
Sapsis, Themistoklis P; Majda, Andrew J
2013-08-20
A framework for low-order predictive statistical modeling and uncertainty quantification in turbulent dynamical systems is developed here. These reduced-order, modified quasilinear Gaussian (ROMQG) algorithms apply to turbulent dynamical systems in which there is significant linear instability or linear nonnormal dynamics in the unperturbed system and energy-conserving nonlinear interactions that transfer energy from the unstable modes to the stable modes where dissipation occurs, resulting in a statistical steady state; such turbulent dynamical systems are ubiquitous in geophysical and engineering turbulence. The ROMQG method involves constructing a low-order, nonlinear, dynamical system for the mean and covariance statistics in the reduced subspace that has the unperturbed statistics as a stable fixed point and optimally incorporates the indirect effect of non-Gaussian third-order statistics for the unperturbed system in a systematic calibration stage. This calibration procedure is achieved through information involving only the mean and covariance statistics for the unperturbed equilibrium. The performance of the ROMQG algorithm is assessed on two stringent test cases: the 40-mode Lorenz 96 model mimicking midlatitude atmospheric turbulence and two-layer baroclinic models for high-latitude ocean turbulence with over 125,000 degrees of freedom. In the Lorenz 96 model, the ROMQG algorithm with just a single mode captures the transient response to random or deterministic forcing. For the baroclinic ocean turbulence models, the inexpensive ROMQG algorithm with 252 modes, less than 0.2% of the total, captures the nonlinear response of the energy, the heat flux, and even the one-dimensional energy and heat flux spectra.
Improved Adaptive LSB Steganography Based on Chaos and Genetic Algorithm
NASA Astrophysics Data System (ADS)
Yu, Lifang; Zhao, Yao; Ni, Rongrong; Li, Ting
2010-12-01
We propose a novel steganographic method in JPEG images with high performance. Firstly, we propose an improved adaptive LSB steganography, which can achieve high capacity while preserving the first-order statistics. Secondly, in order to minimize visual degradation of the stego image, we shuffle the bit order of the message based on chaos, with parameters selected by a genetic algorithm. Shuffling the message's bit order provides a new way to improve the performance of steganography. Experimental results show that our method outperforms classical steganographic methods in image quality, while preserving the characteristics of the histogram and providing high capacity.
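A sketch of the chaos-keyed shuffle (map, seed, and parameters are illustrative; in the paper a genetic algorithm searches the chaos parameters): a logistic-map trajectory defines a keyed permutation of the message bits, which the receiver inverts with the same key.

```python
import numpy as np

def chaotic_permutation(n_bits, mu=3.99, x0=0.387):
    """Permutation keyed by logistic-map parameters (mu, x0)."""
    x, seq = x0, np.empty(n_bits)
    for i in range(n_bits):
        x = mu * x * (1.0 - x)        # logistic map iteration
        seq[i] = x
    return np.argsort(seq)

msg = np.random.default_rng(8).integers(0, 2, 64)
perm = chaotic_permutation(msg.size)
shuffled = msg[perm]                              # embed these bits
inverse = np.empty_like(perm); inverse[perm] = np.arange(perm.size)
assert np.array_equal(shuffled[inverse], msg)     # receiver recovers message
```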
NASA Astrophysics Data System (ADS)
Qi, Di
Turbulent dynamical systems are ubiquitous in science and engineering. Uncertainty quantification (UQ) in turbulent dynamical systems is a grand challenge where the goal is to obtain statistical estimates for key physical quantities. In the development of a proper UQ scheme for systems characterized by both a high-dimensional phase space and a large number of instabilities, significant model errors compared with the true natural signal are always unavoidable due to both the imperfect understanding of the underlying physical processes and the limited computational resources available. One central issue in contemporary research is the development of a systematic methodology for reduced order models that can recover the crucial features both with model fidelity in statistical equilibrium and with model sensitivity in response to perturbations. In the first part, we discuss a general mathematical framework to construct statistically accurate reduced-order models that have skill in capturing the statistical variability in the principal directions of a general class of complex systems with quadratic nonlinearity. A systematic hierarchy of simple statistical closure schemes, which are built through new global statistical energy conservation principles combined with statistical equilibrium fidelity, are designed and tested for UQ of these problems. Second, the capacity of imperfect low-order stochastic approximations to model extreme events in a passive scalar field advected by turbulent flows is investigated. The effects in complicated flow systems are considered including strong nonlinear and non-Gaussian interactions, and much simpler and cheaper imperfect models with model error are constructed to capture the crucial statistical features in the stationary tracer field. Several mathematical ideas are introduced to improve the prediction skill of the imperfect reduced-order models. Most importantly, empirical information theory and statistical linear response theory are applied in the training phase for calibrating model errors to achieve optimal imperfect model parameters; and total statistical energy dynamics are introduced to improve the model sensitivity in the prediction phase especially when strong external perturbations are exerted. The validity of reduced-order models for predicting statistical responses and intermittency is demonstrated on a series of instructive models with increasing complexity, including the stochastic triad model, the Lorenz '96 model, and models for barotropic and baroclinic turbulence. The skillful low-order modeling methods developed here should also be useful for other applications such as efficient algorithms for data assimilation.
Sensors and signal processing for high accuracy passenger counting : final report.
DOT National Transportation Integrated Search
2009-03-05
It is imperative for a transit system to track statistics about its ridership in order to plan bus routes. There exists a wide variety of methods for obtaining these statistics that range from relying on the driver to count people to utilizing came...
A High-Resolution Capability for Large-Eddy Simulation of Jet Flows
NASA Technical Reports Server (NTRS)
DeBonis, James R.
2011-01-01
A large-eddy simulation (LES) code that utilizes high-resolution numerical schemes is described and applied to a compressible jet flow. The code is written in a general manner such that the accuracy/resolution of the simulation can be selected by the user. Time discretization is performed using a family of low-dispersion Runge-Kutta schemes, selectable from first- to fourth-order. Spatial discretization is performed using central differencing schemes. Both standard schemes, second- to twelfth-order (3 to 13 point stencils) and Dispersion Relation Preserving schemes from 7 to 13 point stencils are available. The code is written in Fortran 90 and uses hybrid MPI/OpenMP parallelization. The code is applied to the simulation of a Mach 0.9 jet flow. Four-stage third-order Runge-Kutta time stepping and the 13 point DRP spatial discretization scheme of Bogey and Bailly are used. The high resolution numerics used allows for the use of relatively sparse grids. Three levels of grid resolution are examined, 3.5, 6.5, and 9.2 million points. Mean flow, first-order turbulent statistics and turbulent spectra are reported. Good agreement with experimental data for mean flow and first-order turbulent statistics is shown.
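As a sketch of where such stencils come from (standard Taylor-matched central differences; DRP schemes instead optimize the same coefficients for low dispersion over a wavenumber band): the coefficients of an npts-point first-derivative stencil solve a small Vandermonde system.

```python
import numpy as np

def central_stencil(npts):
    """Coefficients c with f'(x_i) ~ sum_j c[j]*f(x_{i+off[j]}) / dx."""
    assert npts % 2 == 1
    off = np.arange(npts) - npts // 2
    A = np.vander(off, npts, increasing=True).T   # A[m, j] = off[j]**m
    b = np.zeros(npts); b[1] = 1.0                # match d/dx, kill other orders
    return off, np.linalg.solve(A, b)

off, c = central_stencil(7)   # 7-point stencil: the standard 6th-order scheme
print(dict(zip(off.tolist(), np.round(c, 4))))
# {-3: -0.0167, -2: 0.15, -1: -0.75, 0: 0.0, 1: 0.75, 2: -0.15, 3: 0.0167}
```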
NASA Astrophysics Data System (ADS)
Berg, Jacob; Patton, Edward G.; Sullivan, Peter S.
2017-11-01
The effect of mesh resolution and size on shear-driven atmospheric boundary layers in a stably stratified environment is investigated with the NCAR pseudo-spectral LES model (J. Atmos. Sci. v68, p2395, 2011 and J. Atmos. Sci. v73, p1815, 2016). The model applies FFTs in the two horizontal directions and finite differencing in the vertical direction. With vanishing heat flux at the surface and a capping inversion entraining potential temperature into the boundary layer, the situation is often called the conditionally neutral atmospheric boundary layer (ABL). Due to its relevance in high-wind applications such as wind-power meteorology, we emphasize second-order statistics important for wind turbines, including spectral information. The simulations range from mesh sizes of 64³ to 1024³ grid points. Due to the non-stationarity of the problem, different simulations are compared at equal eddy-turnover times. Whereas grid convergence is mostly achieved in the middle portion of the ABL, second-order statistics close to the surface, where the presence of the ground limits the growth of the energy-containing eddies, are not converged on the studied meshes. Higher-order structure functions also reveal non-Gaussian statistics that are highly dependent on the resolution.
Representation of Probability Density Functions from Orbit Determination using the Particle Filter
NASA Technical Reports Server (NTRS)
Mashiku, Alinda K.; Garrison, James; Carpenter, J. Russell
2012-01-01
Statistical orbit determination enables us to obtain estimates of the state and the statistical information of its region of uncertainty. In order to obtain an accurate representation of the probability density function (PDF) that incorporates higher-order statistical information, we propose the use of nonlinear estimation methods such as the Particle Filter. The Particle Filter (PF) is capable of providing a PDF representation of the state estimates whose accuracy depends on the number of particles or samples used. For this method to be applicable to real-case scenarios, we need a way of accurately representing the PDF in a compressed manner with little information loss. Hence we propose using Independent Component Analysis (ICA) as a non-Gaussian dimension-reduction method capable of maintaining the higher-order statistical information obtained using the PF. Methods such as Principal Component Analysis (PCA) utilize only up to second-order statistics and hence will not suffice to maintain maximum information content. Both PCA and ICA are applied to two scenarios, a highly eccentric orbit with a lower a priori uncertainty covariance and a less eccentric orbit with a higher a priori uncertainty covariance, to illustrate the capability of ICA relative to PCA.
Parameter estimation and order selection for an empirical model of VO2 on-kinetics.
Alata, O; Bernard, O
2007-04-27
In humans, VO2 on-kinetics are noisy numerical signals that reflect the pulmonary oxygen-exchange kinetics at the onset of exercise. They are empirically modelled as a sum of an offset and delayed exponentials. The number of delayed exponentials, i.e., the order of the model, is commonly supposed to be 1 for low-intensity exercises and 2 for high-intensity exercises. As no ground truth has ever been provided to validate these postulates, physiologists still need statistical methods to verify their hypotheses about the number of exponentials of the VO2 on-kinetics, especially in the case of high-intensity exercises. Our objectives are first to develop accurate methods for estimating the parameters of the model at a fixed order, and then to propose statistical tests for selecting the appropriate order. In this paper, we evaluate, on simulated data, the performance of simulated annealing for estimating the model parameters and of information criteria for selecting the order. The simulated data are generated with both single-exponential and double-exponential models and corrupted by white Gaussian noise, and performance is assessed at various signal-to-noise ratios (SNR). Regarding parameter estimation, results show that the confidence of the estimated parameters improves as the SNR of the fitted response increases. Regarding model selection, results show that information criteria are well-adapted statistical criteria for selecting the number of exponentials.
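A hedged sketch of the order-selection step (model forms, constants, and the AIC variant are illustrative, not the paper's exact setup): fit mono- and double-exponential models by least squares and compare an information criterion.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono(t, A, tau, td):
    return A * (1 - np.exp(-(t - td) / tau)) * (t > td)

def dbl(t, A1, t1, d1, A2, t2, d2):
    return mono(t, A1, t1, d1) + mono(t, A2, t2, d2)

rng = np.random.default_rng(9)
t = np.arange(0.0, 360.0)
y = dbl(t, 1.5, 25, 10, 0.6, 120, 90) + rng.normal(0, 0.08, t.size)  # toy VO2

for f, p0 in [(mono, [1, 30, 5]), (dbl, [1, 30, 5, 0.5, 100, 80])]:
    p, _ = curve_fit(f, t, y, p0=p0, maxfev=20000)
    rss = np.sum((y - f(t, *p)) ** 2)
    aic = t.size * np.log(rss / t.size) + 2 * len(p)   # lower is better
    print(f"{f.__name__}: AIC = {aic:.1f}")
```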
Many-Body Localization and Thermalization in Quantum Statistical Mechanics
NASA Astrophysics Data System (ADS)
Nandkishore, Rahul; Huse, David A.
2015-03-01
We review some recent developments in the statistical mechanics of isolated quantum systems. We provide a brief introduction to quantum thermalization, paying particular attention to the eigenstate thermalization hypothesis (ETH) and the resulting single-eigenstate statistical mechanics. We then focus on a class of systems that fail to quantum thermalize and whose eigenstates violate the ETH: These are the many-body Anderson-localized systems; their long-time properties are not captured by the conventional ensembles of quantum statistical mechanics. These systems can forever locally remember information about their local initial conditions and are thus of interest for possibilities of storing quantum information. We discuss key features of many-body localization (MBL) and review a phenomenology of the MBL phase. Single-eigenstate statistical mechanics within the MBL phase reveal dynamically stable ordered phases, and phase transitions among them, that are invisible to equilibrium statistical mechanics and can occur at high energy and low spatial dimensionality, where equilibrium ordering is forbidden.
More than words: Adults learn probabilities over categories and relationships between them.
Hudson Kam, Carla L
2009-04-01
This study examines whether human learners can acquire statistics over abstract categories and their relationships to each other. Adult learners were exposed to miniature artificial languages containing variation in the ordering of the Subject, Object, and Verb constituents. Different orders (e.g. SOV, VSO) occurred in the input with different frequencies, but the occurrence of one order versus another was not predictable. Importantly, the language was constructed such that participants could only match the overall input probabilities if they were tracking statistics over abstract categories, not over individual words. At test, participants reproduced the probabilities present in the input with a high degree of accuracy. Closer examination revealed that learners were matching the probabilities associated with individual verbs rather than with the category as a whole. However, individual nouns had no impact on the word orders produced. Thus, participants learned the probabilities of a particular ordering of the abstract grammatical categories Subject and Object associated with each verb. The results suggest that statistical learning mechanisms are capable of tracking relationships between abstract linguistic categories in addition to individual items.
Nasri Nasrabadi, Mohammad Reza; Razavi, Seyed Hadi
2010-04-01
In this work, we applied statistical experimental design to a fed-batch process for optimization of tricarboxylic acid (TCA) cycle intermediates in order to achieve high-level production of canthaxanthin from Dietzia natronolimnaea HS-1 cultured in beet molasses. A fractional factorial design (screening test) was first conducted on five TCA cycle intermediates. Of the five intermediates investigated via the screening tests, alpha-ketoglutarate, oxaloacetate, and succinate were selected based on their statistically significant (P<0.05) and positive effects on canthaxanthin production. These significant factors were then optimized by means of response surface methodology (RSM) to achieve high-level production of canthaxanthin. The experimental results of the RSM were fitted with a second-order polynomial equation by means of a multiple regression technique to identify the relationship between canthaxanthin production and the three TCA cycle intermediates. By means of this statistical design under a fed-batch process, the optimum conditions required to achieve the highest level of canthaxanthin (13,172 ± 25 µg l⁻¹) were determined as follows: alpha-ketoglutarate, 9.69 mM; oxaloacetate, 8.68 mM; succinate, 8.51 mM.
Lambert, Nathaniel D.; Pankratz, V. Shane; Larrabee, Beth R.; Ogee-Nwankwo, Adaeze; Chen, Min-hsin; Icenogle, Joseph P.
2014-01-01
Rubella remains a social and economic burden due to the high incidence of congenital rubella syndrome (CRS) in some countries. For this reason, an accurate and efficient high-throughput measure of antibody response to vaccination is an important tool. In order to measure rubella-specific neutralizing antibodies in a large cohort of vaccinated individuals, a high-throughput immunocolorimetric system was developed. Statistical interpolation models were applied to the resulting titers to refine quantitative estimates of neutralizing antibody titers relative to the assayed neutralizing antibody dilutions. This assay, including the statistical methods developed, can be used to assess the neutralizing humoral immune response to rubella virus and may be adaptable for assessing the response to other viral vaccines and infectious agents. PMID:24391140
Second-order near-wall turbulence closures - A review
NASA Technical Reports Server (NTRS)
So, R. M. C.; Lai, Y. G.; Zhang, H. S.; Hwang, B. C.
1991-01-01
Advances in second-order near-wall turbulence closures are summarized. All closures under consideration are based on high-Reynolds-number models. Most near-wall closures proposed to date attempt to modify the high-Reynolds-number models for the dissipation function and the pressure redistribution term so that the resultant models are applicable all the way to the wall. The asymptotic behavior of the near-wall closures is examined and compared with the proper near-wall behavior of the exact Reynolds-stress equations. It is found that three second-order near-wall closures give the best correlations with simulated turbulence statistics; however, their predictions of near-wall Reynolds-stress budgets are incorrect. A proposed modification to the dissipation-rate equation remedies part of these predictions. It is concluded that further improvements are required if a complete replication of all the turbulence properties and Reynolds-stress budgets by a statistical turbulence model is desired.
The Developing Infant Creates a Curriculum for Statistical Learning.
Smith, Linda B; Jayaraman, Swapnaa; Clerkin, Elizabeth; Yu, Chen
2018-04-01
New efforts are using head cameras and eye-trackers worn by infants to capture everyday visual environments from the point of view of the infant learner. From this vantage point, the training sets for statistical learning develop as the sensorimotor abilities of the infant develop, yielding a series of ordered datasets for visual learning that differ in content and structure between timepoints but are highly selective at each timepoint. These changing environments may constitute a developmentally ordered curriculum that optimizes learning across many domains. Future advances in computational models will be necessary to connect the developmentally changing content and statistics of infant experience to the internal machinery that does the learning.
Equilibrium statistical-thermal models in high-energy physics
NASA Astrophysics Data System (ADS)
Tawfik, Abdel Nasser
2014-05-01
We review some recent highlights from the applications of statistical-thermal models to different experimental measurements and to lattice QCD thermodynamics over the last decade. We start with a short review of the historical milestones on the path to constructing statistical-thermal models for heavy-ion physics. We discovered that Heinz Koppe had formulated, in 1948, an almost complete recipe for the statistical-thermal models. In 1950, Enrico Fermi generalized this statistical approach: he started with a general cross-section formula and inserted into it simplifying assumptions about the matrix element of the interaction process, assumptions that reflect many features of high-energy reactions dominated by the density of final states in phase space. In 1964, Hagedorn systematically analyzed high-energy phenomena using all the tools of statistical physics and introduced the concept of a limiting temperature based on the statistical bootstrap model. It turns out that many-particle systems can quite often be studied with the help of statistical-thermal methods. The analysis of yield multiplicities in high-energy collisions gives overwhelming evidence for chemical equilibrium in the final state. Strange particles may be an exception, as they are suppressed at lower beam energies; however, their relative yields fulfill statistical equilibrium as well. We review the equilibrium statistical-thermal models for particle production, fluctuations, and collective flow in heavy-ion experiments, and their reproduction of lattice QCD thermodynamics at vanishing and finite chemical potential. During the last decade, five conditions have been suggested to describe the universal behavior of the chemical freeze-out parameters. Higher-order moments of multiplicity are also discussed; they offer deep insights into particle production and critical fluctuations, and we therefore use them to describe the freeze-out parameters and to suggest the location of the QCD critical endpoint. Various extensions have been proposed to take into account possible deviations from the ideal hadron gas. We highlight various types of interactions, dissipative properties, and location dependences (spatial rapidity). Furthermore, we review three models combining hadronic with partonic phases: the quasi-particle model, the linear sigma model with Polyakov potentials, and the compressible bag model.
NASA Astrophysics Data System (ADS)
Zavaletta, Vanessa A.; Bartholmai, Brian J.; Robb, Richard A.
2007-03-01
Diffuse lung diseases, such as idiopathic pulmonary fibrosis (IPF), can be characterized and quantified by analysis of volumetric high-resolution CT scans of the lungs. These data sets typically have dimensions of 512 x 512 x 400. It is too subjective and labor-intensive for a radiologist to analyze each slice and quantify regional abnormalities manually. Thus, computer-aided techniques are necessary, particularly texture-analysis techniques that classify the various lung tissue types. Second- and higher-order statistics, which relate the spatial variation of the intensity values, are good discriminatory features for various textures. The intensity values in lung CT scans range over [-1024, 1024]. Calculating second-order statistics over this full range is too computationally intensive, so the data are typically binned into 16 or 32 gray levels. There are more effective ways of binning the gray-level range to improve classification: an optimal and very efficient way to nonlinearly bin the histogram is to use a dynamic programming algorithm. The objective of this paper is to show that nonlinear binning using dynamic programming is computationally efficient and improves the discriminatory power of the second- and higher-order statistics for more accurate quantification of diffuse lung disease.
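A sketch of the dynamic-programming binning (objective and toy data are ours): partition the gray-level histogram into K contiguous bins minimizing total count-weighted within-bin variance, in O(K·n²) for n distinct levels.

```python
import numpy as np

def dp_binning(levels, counts, K):
    """Optimal contiguous K-binning of a histogram (min within-bin variance)."""
    n = len(levels)
    w = np.concatenate([[0.0], np.cumsum(counts)])
    s = np.concatenate([[0.0], np.cumsum(counts * levels)])
    q = np.concatenate([[0.0], np.cumsum(counts * levels**2)])
    def cost(i, j):                      # weighted variance of levels[i:j]
        cw, cs = w[j] - w[i], s[j] - s[i]
        return (q[j] - q[i]) - cs * cs / cw if cw > 0 else 0.0
    D = np.full((K + 1, n + 1), np.inf); D[0, 0] = 0.0
    cut = np.zeros((K + 1, n + 1), dtype=int)
    for k in range(1, K + 1):
        for j in range(k, n + 1):
            for i in range(k - 1, j):
                c = D[k - 1, i] + cost(i, j)
                if c < D[k, j]:
                    D[k, j], cut[k, j] = c, i
    edges, j = [], n                     # backtrack the bin boundaries
    for k in range(K, 0, -1):
        edges.append(j); j = cut[k, j]
    return sorted(edges)

rng = np.random.default_rng(10)
hu = (np.clip(rng.normal(-700, 150, 20000), -1024, 1024) // 8 * 8).astype(int)
levels, counts = np.unique(hu, return_counts=True)
edges = dp_binning(levels.astype(float), counts.astype(float), 8)
print("bin upper edges (HU):", [int(levels[e - 1]) for e in edges])
```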
NASA Astrophysics Data System (ADS)
Wan, Xiaoqing; Zhao, Chunhui; Wang, Yanchun; Liu, Wu
2017-11-01
This paper proposes a novel classification paradigm for hyperspectral images (HSI) using feature-level fusion and deep learning-based methodologies. Operation is carried out in three main steps. First, during a pre-processing stage, wave atoms are introduced into a bilateral filter to smooth the HSI; this strategy effectively attenuates noise while restoring texture information. Meanwhile, high-quality spectral-spatial features can be extracted from the HSI by simultaneously taking geometric closeness and photometric similarity among pixels into consideration. Second, higher-order statistics techniques are introduced into hyperspectral data classification, for the first time, to characterize the phase correlations of spectral curves. Third, multifractal spectrum features are extracted to characterize the singularities and self-similarities of spectral shapes. Feature-level fusion is then applied to the extracted spectral-spatial features along with the higher-order statistics and multifractal spectrum features. Finally, a stacked sparse autoencoder is utilized to learn more abstract and invariant high-level features from the multiple feature sets, and a random forest classifier is employed to perform supervised fine-tuning and classification. Experimental results on two real hyperspectral data sets demonstrate that the proposed method outperforms some traditional alternatives.
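As a simplified stand-in for the higher-order-statistics step, the sketch below appends per-pixel skewness and kurtosis of each spectral curve to the raw spectrum before fusion. The paper's actual features target phase correlations of the spectra, so this moment-based variant is an assumption for illustration only.

```python
import numpy as np
from scipy import stats

def hos_features(cube):
    """cube: (rows, cols, bands) hyperspectral image."""
    skew = stats.skew(cube, axis=2)          # 3rd-order moment per spectrum
    kurt = stats.kurtosis(cube, axis=2)      # 4th-order moment per spectrum
    return np.dstack([cube, skew[..., None], kurt[..., None]])

cube = np.random.default_rng(1).random((10, 10, 64))
print(hos_features(cube).shape)              # (10, 10, 66)
```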
Data Sharing and the Development of the Cleveland Clinic Statistical Education Dataset Repository
ERIC Educational Resources Information Center
Nowacki, Amy S.
2013-01-01
Examples are highly sought by both students and teachers. This is particularly true as many statistical instructors aim to engage their students and increase active participation. While simulated datasets are functional, they lack real perspective and the intricacies of actual data. In order to obtain real datasets, the principal investigator of a…
Design of order statistics filters using feedforward neural networks
NASA Astrophysics Data System (ADS)
Maslennikova, Yu. S.; Bochkarev, V. V.
2016-08-01
In recent years, significant progress has been made in the development of nonlinear data processing techniques. Such techniques are widely used in digital data filtering and image enhancement. Many of the most effective nonlinear filters are based on order statistics; the widely used median filter is the best-known order-statistic filter. A generalized form of these filters can be presented based on Lloyd's statistics. Filters based on order statistics have excellent robustness properties in the presence of impulsive noise. In this paper, we present a special approach for the synthesis of order-statistics filters using artificial neural networks. Optimal Lloyd's statistics are used for selecting the initial weights of the neural network. The adaptive properties of neural networks provide opportunities to optimize order-statistics filters for data with asymmetric distribution functions. Different examples demonstrate the properties and performance of the presented approach.
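The filter family itself is compact to sketch: an L-filter outputs a linear combination of the sorted samples in a sliding window, and its weights can be adapted by gradient descent against a reference signal. The sketch below assumes uniform initial weights and a plain LMS update; the paper's Lloyd-optimal initialization and network architecture are not reproduced.

```python
import numpy as np

def adaptive_l_filter(noisy, clean, win=5, lr=1e-3, epochs=10):
    half = win // 2
    w = np.full(win, 1.0 / win)               # uniform start (assumption)
    x = np.pad(noisy, half, mode="edge")
    for _ in range(epochs):
        for i in range(len(noisy)):
            window = np.sort(x[i:i + win])    # order statistics of the window
            e = clean[i] - w @ window         # LMS error vs. the reference
            w += lr * e * window              # gradient step on the weights
    return w

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 6 * np.pi, 500))
noisy = clean + 0.1 * rng.standard_t(df=2, size=500)   # heavy-tailed noise
print(np.round(adaptive_l_filter(noisy, clean), 3))    # weights de-emphasize extreme ranks
```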
NASA Astrophysics Data System (ADS)
Adams, T.; Batra, P.; Bugel, L.; Camilleri, L.; Conrad, J. M.; de Gouvêa, A.; Fisher, P. H.; Formaggio, J. A.; Jenkins, J.; Karagiorgi, G.; Kobilarcik, T. R.; Kopp, S.; Kyle, G.; Loinaz, W. A.; Mason, D. A.; Milner, R.; Moore, R.; Morfín, J. G.; Nakamura, M.; Naples, D.; Nienaber, P.; Olness, F. I.; Owens, J. F.; Pate, S. F.; Pronin, A.; Seligman, W. G.; Shaevitz, M. H.; Schellman, H.; Schienbein, I.; Syphers, M. J.; Tait, T. M. P.; Takeuchi, T.; Tan, C. Y.; van de Water, R. G.; Yamamoto, R. K.; Yu, J. Y.
We extend the physics case for a new high-energy, ultra-high statistics neutrino scattering experiment, NuSOnG (Neutrino Scattering On Glass), to address a variety of issues including precision QCD measurements, extraction of structure functions, and the derived parton distribution functions (PDFs). This experiment uses a Tevatron-based neutrino beam to obtain a sample of deep inelastic scattering (DIS) events which is over two orders of magnitude larger than past samples. We outline an innovative method for fitting the structure functions using a parametrized energy shift which yields reduced systematic uncertainties. High statistics measurements, in combination with improved systematics, will enable NuSOnG to perform discerning tests of fundamental Standard Model parameters as we search for deviations which may hint at "Beyond the Standard Model" physics.
Δim-lacunary statistical convergence of order α
NASA Astrophysics Data System (ADS)
Altınok, Hıfsı; Et, Mikail; Işık, Mahmut
2018-01-01
The purpose of this work is to introduce the concepts of Δim-lacunary statistical convergence of order α and lacunary strongly (Δim, p)-convergence of order α. We establish some connections between lacunary strongly (Δim, p)-convergence of order α and Δim-lacunary statistical convergence of order α. It is shown that if a sequence is lacunary strongly (Δim, p)-summable of order α, then it is Δim-lacunary statistically convergent of order α.
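The abstract does not restate the definitions, so, for orientation, here is the standard form of lacunary statistical convergence of order α from this literature (a hedged paraphrase; the exact meaning of Δim in the paper may differ from the generic difference operator shown):

```latex
% For a lacunary sequence \theta = (k_r) with intervals I_r = (k_{r-1}, k_r]
% and h_r = k_r - k_{r-1}, a sequence x = (x_k) is lacunary statistically
% convergent of order \alpha to L when, for every \varepsilon > 0,
\lim_{r \to \infty} \frac{1}{h_r^{\alpha}}
  \Bigl| \bigl\{ k \in I_r : |x_k - L| \ge \varepsilon \bigr\} \Bigr| = 0 ,
\qquad I_r = (k_{r-1}, k_r], \quad h_r = k_r - k_{r-1}.
% The \Delta-type variant applies this to the differenced sequence
% \Delta^m x_k = \Delta^{m-1} x_k - \Delta^{m-1} x_{k+1}.
```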
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chasapis, Alexandros; Matthaeus, W. H.; Parashar, T. N.
Using data from the Magnetospheric Multiscale (MMS) and Cluster missions obtained in the solar wind, we examine second-order and fourth-order structure functions at varying spatial lags normalized to ion inertial scales. The analysis includes direct two-spacecraft results and single-spacecraft results employing the familiar Taylor frozen-in flow approximation. Several familiar statistical results, including the spectral distribution of energy and the scale-dependent kurtosis, are extended down to unprecedented spatial scales of ∼6 km, approaching electron scales. The Taylor approximation is also confirmed at those small scales, although small deviations are present in the kinetic range. The kurtosis is seen to attain very high values at sub-proton scales, supporting the previously reported suggestion that monofractal behavior may be due to high-frequency plasma waves at kinetic scales.
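Both statistics named here are direct to compute from a single time series under the Taylor hypothesis. A minimal sketch, with a toy random-walk signal standing in for spacecraft field data and an arbitrary lag grid:

```python
import numpy as np

def structure_functions(b, lags):
    s2, s4 = [], []
    for lag in lags:
        db = b[lag:] - b[:-lag]       # increments at this spatial lag
        s2.append(np.mean(db ** 2))   # second-order structure function
        s4.append(np.mean(db ** 4))   # fourth-order structure function
    s2, s4 = np.array(s2), np.array(s4)
    return s2, s4, s4 / s2 ** 2       # scale-dependent kurtosis

rng = np.random.default_rng(1)
b = np.cumsum(rng.standard_normal(100_000))            # toy fluctuation series
lags = np.unique(np.logspace(0, 4, 30).astype(int))
s2, s4, kurt = structure_functions(b, lags)
print(kurt[:5])   # ~3 for Gaussian increments; larger values mean intermittency
```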
Rough-pipe flows and the existence of fully developed turbulence
NASA Astrophysics Data System (ADS)
Gioia, G.; Chakraborty, Pinaki; Bombardelli, Fabián A.
2006-03-01
It is widely believed that at high Reynolds number (Re) all turbulent flows approach a limiting state of "fully developed turbulence" in which the statistics of the velocity fluctuations are independent of Re. Nevertheless, direct measurements of the velocity fluctuations have failed to yield firm empirical evidence that even the second-order structure function becomes independent of Re at high Re, let alone structure functions of higher order. Here we relate the friction coefficient (f) of rough-pipe flows to the second-order structure function. Then we show that in light of experimental measurements of f our results yield unequivocal evidence that the second-order structure function becomes independent of Re at high Re, compatible with the existence of fully developed turbulence.
Synchronization from Second Order Network Connectivity Statistics
Zhao, Liqiong; Beverlin, Bryce; Netoff, Theoden; Nykamp, Duane Q.
2011-01-01
We investigate how network structure can influence the tendency for a neuronal network to synchronize, or its synchronizability, independent of the dynamical model for each neuron. The synchrony analysis takes advantage of the framework of second-order networks, which defines four second-order connectivity statistics based on the relative frequency of two-connection network motifs. The analysis identifies two of these statistics, convergent connections and chain connections, as highly influencing the synchrony. Simulations verify that synchrony decreases with the frequency of convergent connections and increases with the frequency of chain connections. These trends persist in simulations with multiple models for the neuron dynamics and with different types of networks. Surprisingly, divergent connections, which determine the fraction of shared inputs, do not strongly influence the synchrony. The critical role of chains, rather than divergent connections, in influencing synchrony can be explained by their increasing the effective coupling strength. The decrease of synchrony with convergent connections is primarily due to the resulting heterogeneity in firing rates. PMID:21779239
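These motif statistics can be counted directly from an adjacency matrix. The sketch below estimates reciprocal, convergent, divergent and chain two-edge motif frequencies for a random directed graph; the convention A[i, j] = 1 for a connection from i to j and the triple-count normalizations are assumptions, since conventions vary across papers.

```python
import numpy as np

def second_order_stats(A):
    """A[i, j] = 1 means a connection from neuron i to neuron j (assumed)."""
    n = A.shape[0]
    triples = n * (n - 1) * (n - 2)
    indeg, outdeg = A.sum(axis=0), A.sum(axis=1)
    recip = np.trace(A @ A) / (n * (n - 1))               # reciprocal pairs
    conv = ((indeg ** 2).sum() - indeg.sum()) / triples   # shared-target pairs
    div = ((outdeg ** 2).sum() - outdeg.sum()) / triples  # shared-source pairs
    chain = ((A @ A).sum() - np.trace(A @ A)) / triples   # two-step chains
    return A.mean(), recip, conv, div, chain

rng = np.random.default_rng(2)
A = (rng.random((200, 200)) < 0.05).astype(float)
np.fill_diagonal(A, 0)                 # no self-connections
print(second_order_stats(A))
```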
Statistics of spatial derivatives of velocity and pressure in turbulent channel flow
NASA Astrophysics Data System (ADS)
Vreman, A. W.; Kuerten, J. G. M.
2014-08-01
Statistical profiles of the first- and second-order spatial derivatives of velocity and pressure are reported for turbulent channel flow at Reτ = 590. The statistics were extracted from a high-resolution direct numerical simulation. To quantify the anisotropic behavior of fine-scale structures, the variances of the derivatives are compared with the theoretical values for isotropic turbulence. It is shown that appropriate combinations of first- and second-order velocity derivatives lead to (directional) viscous length scales without explicit occurrence of the viscosity in the definitions. To quantify the non-Gaussian and intermittent behavior of fine-scale structures, higher-order moments and probability density functions of spatial derivatives are reported. Absolute skewnesses and flatnesses of several spatial derivatives display high peaks in the near wall region. In the logarithmic and central regions of the channel flow, all first-order derivatives appear to be significantly more intermittent than in isotropic turbulence at the same Taylor Reynolds number. Since the nine variances of first-order velocity derivatives are the distinct elements of the turbulence dissipation, the budgets of these nine variances are shown, together with the budget of the turbulence dissipation. The comparison of the budgets in the near-wall region indicates that the normal derivative of the fluctuating streamwise velocity (∂u'/∂y) plays a more important role than other components of the fluctuating velocity gradient. The small-scale generation term formed by triple correlations of fluctuations of first-order velocity derivatives is analyzed. A typical mechanism of small-scale generation near the wall (around y+ = 1), the intensification of positive ∂u'/∂y by local strain fluctuation (compression in normal and stretching in spanwise direction), is illustrated and discussed.
Restoration of MRI data for intensity non-uniformities using local high order intensity statistics
Hadjidemetriou, Stathis; Studholme, Colin; Mueller, Susanne; Weiner, Michael; Schuff, Norbert
2008-01-01
MRI at high magnetic fields (>3.0 T) is complicated by strong inhomogeneous radio-frequency fields, sometimes termed the “bias field”. These lead to non-biological intensity non-uniformities across the image. They can complicate further image analysis such as registration and tissue segmentation. Existing methods for intensity uniformity restoration have been optimized for 1.5 T, but they are less effective for 3.0 T MRI, and not at all satisfactory for higher fields. Also, many of the existing restoration algorithms require a brain template or use a prior atlas, which can restrict their practicality. In this study an effective intensity uniformity restoration algorithm has been developed based on non-parametric statistics of high order local intensity co-occurrences. These statistics are restored with a non-stationary Wiener filter. The algorithm also assumes a smooth non-uniformity and is stable. It does not require a prior atlas and is robust to variations in anatomy. In geriatric brain imaging it is robust to variations such as enlarged ventricles and low contrast to noise ratio. The co-occurrence statistics improve robustness to whole head images with pronounced non-uniformities present in high field acquisitions. Its significantly improved performance and lower time requirements have been demonstrated by comparing it to the very commonly used N3 algorithm on BrainWeb MR simulator images as well as on real 4 T human head images. PMID:18621568
Increasing the statistical significance of entanglement detection in experiments.
Jungnitsch, Bastian; Niekamp, Sönke; Kleinmann, Matthias; Gühne, Otfried; Lu, He; Gao, Wei-Bo; Chen, Yu-Ao; Chen, Zeng-Bing; Pan, Jian-Wei
2010-05-28
Entanglement is often verified by a violation of an inequality like a Bell inequality or an entanglement witness. Considerable effort has been devoted to the optimization of such inequalities in order to obtain a high violation. We demonstrate theoretically and experimentally that such an optimization does not necessarily lead to a better entanglement test, if the statistical error is taken into account. Theoretically, we show for different error models that reducing the violation of an inequality can improve the significance. Experimentally, we observe this phenomenon in a four-photon experiment, testing the Mermin and Ardehali inequality for different levels of noise. Furthermore, we provide a way to develop entanglement tests with high statistical significance.
Dickerson, James H.; Krejci, Alex J.; Mendoza-Garcia, Adriana; ...
2015-08-01
Ordered assemblies of nanoparticles remain challenging to fabricate, yet could open the door to many potential applications of nanomaterials. Here, we demonstrate that, using electrophoretic deposition, locally ordered arrays of nanoparticles can be extended to produce long-range order among the constituents. Voronoi tessellations along with multiple statistical analyses show dramatic increases in order compared with previously reported assemblies formed through electric field-assisted assembly. Based on subsequent physical measurements of the nanoparticles and the deposition system, the underlying mechanisms that generate the increased order are inferred.
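One common Voronoi-based order metric is the fraction of six-sided cells, which approaches 1 for a hexagonally ordered monolayer. A minimal sketch for a 2-D point pattern (the specific metric is an illustrative choice; the paper applies several statistical analyses):

```python
import numpy as np
from scipy.spatial import Voronoi

def six_sided_fraction(points):
    vor = Voronoi(points)
    sides = []
    for region_idx in vor.point_region:
        region = vor.regions[region_idx]
        if -1 in region or len(region) == 0:   # skip unbounded (edge) cells
            continue
        sides.append(len(region))
    return np.mean(np.array(sides) == 6)

rng = np.random.default_rng(3)
disordered = rng.random((500, 2))
print(six_sided_fraction(disordered))   # ~0.3 for a Poisson point pattern
```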
Removal of impulse noise clusters from color images with local order statistics
NASA Astrophysics Data System (ADS)
Ruchay, Alexey; Kober, Vitaly
2017-09-01
This paper proposes a novel algorithm for restoring images corrupted with clusters of impulse noise. The noise clusters often occur when the probability of impulse noise is very high. The proposed noise removal algorithm consists of detection of bulky impulse noise in three color channels with local order statistics followed by removal of the detected clusters by means of vector median filtering. With the help of computer simulation we show that the proposed algorithm is able to effectively remove clustered impulse noise. The performance of the proposed algorithm is compared in terms of image restoration metrics with that of common successful algorithms.
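The removal stage, vector median filtering, picks the window pixel whose color vector minimizes the summed distance to all other pixels in the window, so impulse outliers are never selected. A minimal sketch, assuming the L1 norm (the paper may use a different distance):

```python
import numpy as np

def vector_median(window_pixels):
    """window_pixels: (k, 3) array of RGB vectors from one filter window."""
    d = np.abs(window_pixels[:, None, :] - window_pixels[None, :, :]).sum(axis=(1, 2))
    return window_pixels[np.argmin(d)]   # pixel closest to all others

rng = np.random.default_rng(4)
win = rng.integers(0, 256, size=(9, 3))  # a 3x3 window, flattened
win[0] = [255, 0, 255]                   # an impulse-like outlier
print(vector_median(win))                # the outlier is never selected
```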
Residual confounding explains the association between high parity and child mortality.
Kozuki, Naoko; Sonneveldt, Emily; Walker, Neff
2013-01-01
This study used data from recent Demographic and Health Surveys (DHS) to examine the impact of high parity on under-five and neonatal mortality. The analyses used various techniques to attempt to eliminate selection issues, including stratification of analyses by mothers' completed fertility. We analyzed DHS datasets from 47 low- and middle-income countries. We only used data from women who were age 35 or older at the time of survey to have a measure of their completed fertility. We ran log-binomial regression by country to calculate relative risk between parity and both under-five and neonatal mortality, controlled for wealth quintile, maternal education, urban versus rural residence, maternal age at first birth, calendar year (to control for possible time trends), and birth interval. We then controlled for maternal background characteristics even further by using mothers' completed fertility as a proxy measure. We found a statistically significant association between high parity and child mortality. However, this association is most likely not physiological, and can be largely attributed to the difference in background characteristics of mothers who complete reproduction with high fertility versus low fertility. Children of high completed fertility mothers have statistically significantly increased risk of death compared to children of low completed fertility mothers at every birth order, even after controlling for available confounders (i.e. among children of birth order 1, adjusted RR of under-five mortality 1.58, 95% CI: 1.42, 1.76). There appear to be residual confounders that put children of high completed fertility mothers at higher risk, regardless of birth order. When we examined the association between parity and under-five mortality among mothers with high completed fertility, it remained statistically significant, but negligible in magnitude (i.e. adjusted RR of under-five mortality 1.03, 95% CI: 1.02-1.05). Our analyses strongly suggest that the observed increased risk of mortality associated with high parity births is not driven by a physiological link between parity and mortality. We found that at each birth order, children born to women who have high fertility at the end of their reproductive period are at significantly higher mortality risk than children of mothers who have low fertility, even after adjusting for available confounders. With each unit increase in birth order, a larger proportion of births at the population level belongs to mothers with these adverse characteristics correlated with high fertility. Hence it appears as if mortality rates go up with increasing parity, but not for physiological reasons.
Ritchie, Marylyn D.; Hahn, Lance W.; Roodi, Nady; Bailey, L. Renee; Dupont, William D.; Parl, Fritz F.; Moore, Jason H.
2001-01-01
One of the greatest challenges facing human geneticists is the identification and characterization of susceptibility genes for common complex multifactorial human diseases. This challenge is partly due to the limitations of parametric-statistical methods for detection of gene effects that are dependent solely or partially on interactions with other genes and with environmental exposures. We introduce multifactor-dimensionality reduction (MDR) as a method for reducing the dimensionality of multilocus information, to improve the identification of polymorphism combinations associated with disease risk. The MDR method is nonparametric (i.e., no hypothesis about the value of a statistical parameter is made), is model-free (i.e., it assumes no particular inheritance model), and is directly applicable to case-control and discordant-sib-pair studies. Using simulated case-control data, we demonstrate that MDR has reasonable power to identify interactions among two or more loci in relatively small samples. When it was applied to a sporadic breast cancer case-control data set, in the absence of any statistically significant independent main effects, MDR identified a statistically significant high-order interaction among four polymorphisms from three different estrogen-metabolism genes. To our knowledge, this is the first report of a four-locus interaction associated with a common complex multifactorial disease. PMID:11404819
High-Order Local Pooling and Encoding Gaussians Over a Dictionary of Gaussians.
Li, Peihua; Zeng, Hui; Wang, Qilong; Shiu, Simon C K; Zhang, Lei
2017-07-01
Local pooling (LP) in configuration (feature) space proposed by Boureau et al. explicitly restricts similar features to be aggregated, which can preserve as much discriminative information as possible. At the time it appeared, this method combined with sparse coding achieved competitive classification results with only a small dictionary. However, its performance lags far behind the state-of-the-art results as only the zero-order information is exploited. Inspired by the success of high-order statistical information in existing advanced feature coding or pooling methods, we make an attempt to address the limitation of LP. To this end, we present a novel method called high-order LP (HO-LP) to leverage the information higher than the zero-order one. Our idea is intuitively simple: we compute the first- and second-order statistics per configuration bin and model them as a Gaussian. Accordingly, we employ a collection of Gaussians as visual words to represent the universal probability distribution of features from all classes. Our problem is naturally formulated as encoding Gaussians over a dictionary of Gaussians as visual words. This problem, however, is challenging since the space of Gaussians is not a Euclidean space but forms a Riemannian manifold. We address this challenge by mapping Gaussians into the Euclidean space, which enables us to perform coding with common Euclidean operations rather than complex and often expensive Riemannian operations. Our HO-LP preserves the advantages of the original LP: pooling only similar features and using a small dictionary. Meanwhile, it achieves very promising performance on standard benchmarks, with either conventional, hand-engineered features or deep learning-based features.
Examining the effects of birth order on personality
Rohrer, Julia M.; Egloff, Boris; Schmukle, Stefan C.
2015-01-01
This study examined the long-standing question of whether a person’s position among siblings has a lasting impact on that person’s life course. Empirical research on the relation between birth order and intelligence has convincingly documented that performances on psychometric intelligence tests decline slightly from firstborns to later-borns. By contrast, the search for birth-order effects on personality has not yet resulted in conclusive findings. We used data from three large national panels from the United States (n = 5,240), Great Britain (n = 4,489), and Germany (n = 10,457) to resolve this open research question. This database allowed us to identify even very small effects of birth order on personality with sufficiently high statistical power and to investigate whether effects emerge across different samples. We furthermore used two different analytical strategies by comparing siblings with different birth-order positions (i) within the same family (within-family design) and (ii) between different families (between-family design). In our analyses, we confirmed the expected birth-order effect on intelligence. We also observed a significant decline of a 10th of a SD in self-reported intellect with increasing birth-order position, and this effect persisted after controlling for objectively measured intelligence. Most important, however, we consistently found no birth-order effects on extraversion, emotional stability, agreeableness, conscientiousness, or imagination. On the basis of the high statistical power and the consistent results across samples and analytical designs, we must conclude that birth order does not have a lasting effect on broad personality traits outside of the intellectual domain. PMID:26483461
Wong, Cheuk-Yin; Wilk, Grzegorz; Cirto, Leonardo J. L.; ...
2015-06-22
Transverse spectra of both jets and hadrons obtained in high-energy $pp$ and $p\bar{p}$ collisions at central rapidity exhibit power-law behavior of $1/p_T^n$ at high $p_T$. The power index $n$ is 4-5 for jet production and is slightly greater for hadron production. Furthermore, the hadron spectra spanning over 14 orders of magnitude down to the lowest $p_T$ region in $pp$ collisions at LHC can be adequately described by a single nonextensive statistical mechanical distribution that is widely used in other branches of science. This suggests indirectly the dominance of the hard-scattering process over essentially the whole $p_T$ region at central rapidity in $pp$ collisions at LHC. We show here direct evidence of such a dominance of the hard-scattering process by investigating the power index of UA1 jet spectra over an extended $p_T$ region and the two-particle correlation data of the STAR and PHENIX Collaborations in high-energy $pp$ and $p\bar{p}$ collisions at central rapidity. We then study how the showering of the hard-scattering product partons alters the power index of the hadron spectra and leads to a hadron distribution that can be cast into a single-particle nonextensive statistical mechanical distribution. Lastly, because of such a connection, the nonextensive statistical mechanical distribution can be considered as a lowest-order approximation of the hard scattering of partons followed by the subsequent process of parton showering that turns the jets into hadrons, in high-energy $pp$ and $p\bar{p}$ collisions.
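For orientation, the nonextensive distribution referred to here is commonly written in a Tsallis/Hagedorn-type form (an illustrative parametrization; the paper's exact normalization and variables may differ):

```latex
% Illustrative nonextensive (Tsallis-type) transverse-momentum distribution:
\frac{dN}{p_T \, dp_T} \;\propto\; \left( 1 + \frac{p_T}{nT} \right)^{-n},
% which tends to the exponential (Boltzmann) form e^{-p_T/T} as n -> infinity
% and to the power law p_T^{-n} at large p_T, matching the two limits above.
```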
Statistical characterization of short wind waves from stereo images of the sea surface
NASA Astrophysics Data System (ADS)
Mironov, Alexey; Yurovskaya, Maria; Dulov, Vladimir; Hauser, Danièle; Guérin, Charles-Antoine
2013-04-01
We propose a methodology to extract short-scale statistical characteristics of the sea surface topography by means of stereo image reconstruction. The possibilities and limitations of the technique are discussed and tested on a data set acquired from an oceanographic platform in the Black Sea. The analysis shows that reconstruction of the topography based on the stereo method is an efficient way to derive non-trivial statistical properties of short- and intermediate-scale surface waves (say, from 1 centimeter to 1 meter). Most technical issues pertaining to this type of dataset (limited range of scales, lacunarity of data or irregular sampling) can be partially overcome by appropriate processing of the available points. The proposed technique also allows one to avoid linear interpolation, which dramatically corrupts the properties of retrieved surfaces. The processing technique requires that the field of elevations be polynomially detrended, which has the effect of filtering out the large scales; hence the statistical analysis can only address the small-scale components of the sea surface. The precise cut-off wavelength, which is approximately half the patch size, can be obtained by applying a high-pass frequency filter to the reference gauge time records. The results obtained for the one- and two-point statistics of small-scale elevations are shown to be consistent, at least in order of magnitude, with the corresponding gauge measurements as well as other experimental measurements available in the literature. The calculation of the structure functions provides a powerful tool to investigate spectral and statistical properties of the field of elevations. An experimental parametrization of the third-order structure function, the so-called skewness function, is one of the most important and original outcomes of this study. This function is of primary importance in analytical scattering models of the sea surface and was up to now unavailable in field conditions. Due to the lack of precise reference measurements for the small-scale wave field, we could not quantify exactly the accuracy of the retrieval technique. However, it appeared clearly that the obtained accuracy is good enough for the estimation of second-order statistical quantities (such as the correlation function), acceptable for third-order quantities (such as the skewness function) and insufficient for fourth-order quantities (such as the kurtosis). Therefore, the stereo technique at the present stage should not be thought of as a self-contained universal tool to characterize the surface statistics. Instead, it should be used in conjunction with other well-calibrated but sparse reference measurements (such as wave gauges) for cross-validation and calibration. It then completes the statistical analysis inasmuch as it provides a snapshot of the three-dimensional field and allows for the evaluation of higher-order spatial statistics.
High throughput nonparametric probability density estimation.
Farmer, Jenny; Jacobs, Donald
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and over-fitting the data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
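The order-statistics idea behind the diagnostic has a compact form: if a trial CDF F is correct, the sorted values u_(k) = F(x_(k)) behave like uniform order statistics, u_(k) ~ Beta(k, n+1-k), so residuals scaled by the Beta mean and standard deviation should resemble standardized noise. A minimal sketch (the paper's scoring function itself is not reproduced):

```python
import numpy as np
from scipy import stats

def scaled_quantile_residuals(x, trial_cdf):
    u = np.sort(trial_cdf(x))                  # should be uniform order stats
    n = len(u)
    k = np.arange(1, n + 1)
    mean = k / (n + 1)                         # Beta(k, n+1-k) mean
    var = k * (n - k + 1) / ((n + 1) ** 2 * (n + 2))
    return (u - mean) / np.sqrt(var)

rng = np.random.default_rng(5)
x = rng.normal(size=2000)
good = scaled_quantile_residuals(x, stats.norm.cdf)        # correct model
bad = scaled_quantile_residuals(x, stats.norm(0, 2).cdf)   # wrong scale
print(np.abs(good).max(), np.abs(bad).max())               # bad >> good
```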
Lagrangian single-particle turbulent statistics through the Hilbert-Huang transform.
Huang, Yongxiang; Biferale, Luca; Calzavarini, Enrico; Sun, Chao; Toschi, Federico
2013-04-01
The Hilbert-Huang transform is applied to analyze single-particle Lagrangian velocity data from numerical simulations of hydrodynamic turbulence. The velocity trajectory is described in terms of a set of intrinsic mode functions C_i(t) and of their instantaneous frequencies ω_i(t). On the basis of this decomposition we define the ω-conditioned statistical moments of the C_i modes, named q-order Hilbert spectra (HS). We show that such quantities have enhanced scaling properties as compared to traditional Fourier-transform- or correlation-based (structure function) statistical indicators, thus providing better insights into the turbulent energy transfer process. We present clear empirical evidence that the energy-like quantity, i.e., the second-order HS, displays a linear scaling in time in the inertial range, as expected from a dimensional analysis. We also measure high-order moment scaling exponents in a direct way, without resorting to the extended self-similarity procedure. This leads to an estimate of the Lagrangian structure function exponents which is consistent with the multifractal prediction in the Lagrangian frame as proposed by Biferale et al. [Phys. Rev. Lett. 93, 064502 (2004)].
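The decomposition pipeline is reproducible with standard tools. The sketch below uses the PyEMD package (an assumed dependency) for the empirical mode decomposition and scipy's Hilbert transform to get mode amplitudes and instantaneous frequencies, then forms one ω-conditioned second-order moment; the frequency bin is illustrative.

```python
import numpy as np
from PyEMD import EMD              # assumed dependency (pip install EMD-signal)
from scipy.signal import hilbert

t = np.linspace(0, 10, 4096)
v = np.sin(2 * np.pi * 1.5 * t) + 0.5 * np.sin(2 * np.pi * 7 * t) \
    + 0.1 * np.random.default_rng(6).standard_normal(t.size)

imfs = EMD().emd(v)                # intrinsic mode functions C_i(t)
dt = t[1] - t[0]
for i, c in enumerate(imfs):
    a = hilbert(c)                 # analytic signal of one mode
    amp, phase = np.abs(a), np.unwrap(np.angle(a))
    freq = np.gradient(phase, dt) / (2 * np.pi)   # instantaneous frequency
    mask = (freq > 1.0) & (freq < 2.0)            # one frequency bin (illustrative)
    if mask.any():
        print(f"IMF {i}: <A^2 | 1-2 Hz> = {np.mean(amp[mask] ** 2):.3f}")
```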
Morphological representation of order-statistics filters.
Charif-Chefchaouni, M; Schonfeld, D
1995-01-01
We propose a comprehensive theory for the morphological bounds on order-statistics filters (and their repeated iterations). Conditions are derived for morphological openings and closings to serve as bounds (lower and upper, respectively) on order-statistics filters (and their repeated iterations). Under various assumptions, morphological open-closings and close-openings are also shown to serve as (tighter) bounds (lower and upper, respectively) on iterations of order-statistics filters. Simulations of the application of the results presented to image restoration are finally provided.
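A special case of these bounds is easy to verify numerically: with a flat structuring element of the same odd size as the median window, the opening lower-bounds and the closing upper-bounds the median filter. A minimal check of this textbook special case (the paper's conditions are considerably more general):

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing, median_filter

rng = np.random.default_rng(7)
x = np.cumsum(rng.standard_normal(1000))   # a rough 1-D test signal
size = 7                                   # same window for all three filters
opened = grey_opening(x, size=size)        # dilation of the erosion
closed = grey_closing(x, size=size)        # erosion of the dilation
med = median_filter(x, size=size)
# Pointwise ordering: opening <= median <= closing.
print(np.all(opened <= med + 1e-12), np.all(med <= closed + 1e-12))
```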
Removal of EMG and ECG artifacts from EEG based on wavelet transform and ICA.
Zhou, Weidong; Gotman, Jean
2004-01-01
In this study, the methods of wavelet threshold de-noising and independent component analysis (ICA) are introduced. ICA is a signal processing technique based on high-order statistics and is used to separate independent components from measurements. The extended ICA algorithm does not need to calculate the higher-order statistics explicitly, converges quickly, and can be used to separate sub-Gaussian and super-Gaussian sources. A pre-whitening procedure is performed to de-correlate the mixed signals before extracting sources. The experimental results indicate that the electromyogram (EMG) and electrocardiogram (ECG) artifacts in the electroencephalogram (EEG) can be removed by a combination of wavelet threshold de-noising and ICA.
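The two-stage idea sketches naturally with standard libraries: wavelet threshold de-noising per channel, then ICA unmixing with selected components zeroed before back-projection. In the sketch below the artifact component is chosen by hand as a placeholder; in practice selection would use kurtosis or correlation with ECG/EMG reference channels.

```python
import numpy as np
import pywt
from sklearn.decomposition import FastICA

def wavelet_denoise(x, wavelet="db4", level=4):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise scale estimate
    thr = sigma * np.sqrt(2 * np.log(len(x)))               # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

rng = np.random.default_rng(8)
T, C = 2048, 8
mixed = rng.standard_normal((T, C)).cumsum(axis=0)          # stand-in for EEG channels
den = np.column_stack([wavelet_denoise(mixed[:, c]) for c in range(C)])

ica = FastICA(n_components=C, random_state=0)
sources = ica.fit_transform(den)           # independent components
sources[:, 0] = 0.0                        # zero a (hypothetical) artifact component
cleaned = ica.inverse_transform(sources)   # back-project to channel space
print(cleaned.shape)
```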
High-throughput electrical measurement and microfluidic sorting of semiconductor nanowires.
Akin, Cevat; Feldman, Leonard C; Durand, Corentin; Hus, Saban M; Li, An-Ping; Hui, Ho Yee; Filler, Michael A; Yi, Jingang; Shan, Jerry W
2016-05-24
Existing nanowire electrical characterization tools are not only expensive and reliant on sophisticated facilities, but far too slow to enable statistical characterization of highly variable samples. They are also generally not compatible with further sorting and processing of nanowires. Here, we demonstrate a high-throughput, solution-based electro-orientation-spectroscopy (EOS) method, which is capable of automated electrical characterization of individual nanowires by direct optical visualization of their alignment behavior under spatially uniform electric fields of different frequencies. We demonstrate that EOS can quantitatively characterize the electrical conductivities of nanowires over a 6-order-of-magnitude range (10^-5 to 10 S m^-1, corresponding to typical carrier densities of 10^10-10^16 cm^-3), with different fluids used to suspend the nanowires. By implementing EOS in a simple microfluidic device, continuous electrical characterization is achieved, and the sorting of nanowires is demonstrated as a proof-of-concept. With measurement speeds two orders of magnitude faster than direct-contact methods, the automated EOS instrument enables for the first time the statistical characterization of highly variable 1D nanomaterials.
Towards bridging the gap between climate change projections and maize producers in South Africa
NASA Astrophysics Data System (ADS)
Landman, Willem A.; Engelbrecht, Francois; Hewitson, Bruce; Malherbe, Johan; van der Merwe, Jacobus
2018-05-01
Multi-decadal regional projections of future climate change are introduced into a linear statistical model in order to produce an ensemble of austral mid-summer maximum temperature simulations for southern Africa. The statistical model uses atmospheric thickness fields from a high-resolution (0.5° × 0.5°) reanalysis-forced simulation as predictors in order to develop a linear recalibration model which represents the relationship between atmospheric thickness fields and gridded maximum temperatures across the region. The regional climate model, the conformal-cubic atmospheric model (CCAM), projects maximum temperature increases over southern Africa on the order of 4 °C under low mitigation towards the end of the century, or even higher. The statistical recalibration model is able to replicate these increasing temperatures, and the atmospheric thickness-maximum temperature relationship is shown to be stable under future climate conditions. Since dry land crop yields are not explicitly simulated by climate models but are sensitive to maximum temperature extremes, the effect of projected maximum temperature change on dry land crops of the Witbank maize production district of South Africa, assuming other factors remain unchanged, is then assessed by employing a statistical approach similar to the one used for maximum temperature projections.
The (mis)reporting of statistical results in psychology journals.
Bakker, Marjan; Wicherts, Jelte M
2011-09-01
In order to study the prevalence, nature (direction), and causes of reporting errors in psychology, we checked the consistency of reported test statistics, degrees of freedom, and p values in a random sample of high- and low-impact psychology journals. In a second study, we established the generality of reporting errors in a random sample of recent psychological articles. Our results, on the basis of 281 articles, indicate that around 18% of statistical results in the psychological literature are incorrectly reported. Inconsistencies were more common in low-impact journals than in high-impact journals. Moreover, around 15% of the articles contained at least one statistical conclusion that proved, upon recalculation, to be incorrect; that is, recalculation rendered the previously significant result insignificant, or vice versa. These errors were often in line with researchers' expectations. We classified the most common errors and contacted authors to shed light on the origins of the errors.
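The consistency check itself is mechanical: recompute the p value from the reported test statistic and degrees of freedom, then compare with the reported value. A minimal sketch for t tests (the function name and tolerance are illustrative):

```python
from scipy import stats

def check_t_report(t, df, reported_p, two_sided=True, tol=0.01):
    """Recompute the p value for a reported t statistic and flag mismatches."""
    p = 2 * stats.t.sf(abs(t), df) if two_sided else stats.t.sf(abs(t), df)
    return p, abs(p - reported_p) <= tol

# e.g. an article reporting "t(28) = 2.20, p < .01": the recomputation gives
# p ~ .036, so the reported p value is inconsistent with the test statistic.
print(check_t_report(2.20, 28, 0.01))
```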
Tamashiro, M N; Barbetta, C; Germano, R; Henriques, V B
2011-09-01
We propose a statistical model to account for the gel-fluid anomalous phase transitions in charged bilayer- or lamellae-forming ionic lipids. The model Hamiltonian comprises effective attractive interactions to describe neutral-lipid membranes as well as the effect of electrostatic repulsions of the discrete ionic charges on the lipid headgroups. The latter can be counterion dissociated (charged) or counterion associated (neutral), while the lipid acyl chains may be in gel (low-temperature or high-lateral-pressure) or fluid (high-temperature or low-lateral-pressure) states. The system is modeled as a lattice gas with two distinct particle types--each one associated, respectively, with the polar-headgroup and the acyl-chain states--which can be mapped onto an Ashkin-Teller model with the inclusion of cubic terms. The model displays a rich thermodynamic behavior in terms of the chemical potential of counterions (related to added salt concentration) and lateral pressure. In particular, we show the existence of semidissociated thermodynamic phases related to the onset of charge order in the system. This type of order stems from spatially ordered counterion association to the lipid headgroups, in which charged and neutral lipids alternate in a checkerboard-like order. Within the mean-field approximation, we predict that the acyl-chain order-disorder transition is discontinuous, with the first-order line ending at a critical point, as in the neutral case. Moreover, the charge order gives rise to continuous transitions, with the associated second-order lines joining the aforementioned first-order line at critical end points. We explore the thermodynamic behavior of some physical quantities, like the specific heat at constant lateral pressure and the degree of ionization, associated with the fraction of charged lipid headgroups.
Investigation of the delay time distribution of high power microwave surface flashover
NASA Astrophysics Data System (ADS)
Foster, J.; Krompholz, H.; Neuber, A.
2011-01-01
Characterizing and modeling the statistics associated with the initiation of gas breakdown has proven to be difficult due to a variety of rather unexplored phenomena involved. Experimental conditions for high power microwave window breakdown at pressures on the order of 100 to several hundred torr are complex: there are few to no naturally occurring free electrons in the breakdown region. The initial electron generation rate, from an external source, for example, is time dependent, and so is the charge carrier amplification in the increasing radio frequency (RF) field amplitude with a rise time of 50 ns, which can be on the same order as the breakdown delay time. The probability of reaching a critical electron density within a given time period is composed of the statistical waiting time for the appearance of initiating electrons in the high-field region and the build-up of an avalanche with an inherent statistical distribution of the electron number. High power microwave breakdown and its delay time are of critical importance, since they limit transmission through necessary windows, especially for high power, high altitude, low pressure applications. The delay time distribution of pulsed high power microwave surface flashover has been examined for nitrogen and argon as test gases at pressures ranging from 60 to 400 torr, with and without external UV illumination. A model has been developed for predicting the discharge delay time under these conditions. The results provide indications that field-induced electron generation, other than standard field emission, plays a dominant role, which might be valid for other gas discharge types as well.
NASA Astrophysics Data System (ADS)
Cincotti, Silvano; Ponta, Linda; Raberto, Marco; Scalas, Enrico
2005-05-01
In this paper, empirical analyses and computational experiments are presented on high-frequency data for a double-auction (book) market. The main objective of the paper is to generalize the order waiting-time process in order to properly model such empirical evidence. The empirical study is performed on the best bid and best ask data of 7 U.S. financial markets, for 30-stock time series. In particular, statistical properties of trading waiting times have been analyzed and the quality of fits is evaluated by suitable statistical tests, i.e., by comparing empirical distributions with theoretical models. Starting from the statistical studies on real data, attention has been focused on the reproducibility of such results in an artificial market. The computational experiments have been performed within the Genoa Artificial Stock Market. In the market model, heterogeneous agents trade one risky asset in exchange for cash. Agents have zero intelligence and issue random limit or market orders depending on their budget constraints. The price is cleared by means of a limit order book. The order generation is modelled with a renewal process. Based on empirical trading estimates, the distribution of waiting times between two consecutive orders is modelled by a mixture of exponential processes. Results show that the empirical waiting-time distribution can be considered as a generalization of a Poisson process. Moreover, the renewal process can approximate real data, and its implementation in the artificial stock market can reproduce the trading activity in a realistic way.
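The waiting-time model is simple to simulate: draw interarrival times from a mixture of exponentials, which generalizes the single-rate Poisson process. A minimal sketch with illustrative weights and rates:

```python
import numpy as np

rng = np.random.default_rng(9)

def waiting_times(n, weights=(0.7, 0.3), rates=(1.0, 0.1)):
    """Interarrival times from a two-component exponential mixture."""
    comp = rng.choice(len(weights), size=n, p=weights)   # pick a component
    return rng.exponential(1.0 / np.asarray(rates)[comp])

w = waiting_times(100_000)
# For a pure Poisson process the coefficient of variation is exactly 1;
# a mixture of exponentials is over-dispersed (CV > 1).
print(w.std() / w.mean())
```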
Laser diagnostics of native cervix dabs with human papilloma virus in high carcinogenic risk
NASA Astrophysics Data System (ADS)
Peresunko, O. P.; Karpenko, Ju. G.; Burkovets, D. N.; Ivashko, P. V.; Nikorych, A. V.; Yermolenko, S. B.; Gruia, Ion; Gruia, M. J.
2015-11-01
The results of experimental studies of the coordinate distributions of Mueller matrix elements for the following types of cervical scraping tissue are presented: normal tissue, low- to high-grade dysplasia (CIN1-CIN3), and adenocarcinoma of high, medium and low levels of differentiation (G1-G3). A rationale is given for choosing the statistical moments of the 1st-4th orders of the polarized coherent radiation field, transformed by interaction with the oncologically modified "epithelium-stroma" biological layers, as a quantitative criterion for the polarimetric optical differentiation of the state of human biological tissues. The analysis of the obtained Mueller matrix elements by statistical and correlation methods, systematized by the types of tissues studied, is accomplished. The results for images of the Mueller matrix element m34 for the pathology of low-grade dysplasia (CIN2), together with the results of its statistical and correlation analysis, are presented.
λ(Δim)-statistical convergence of order α
NASA Astrophysics Data System (ADS)
Colak, Rifat; Et, Mikail; Altin, Yavuz
2017-09-01
In this study, using the generalized difference operator Δim and a non-decreasing sequence λ = (λn) of positive numbers tending to ∞ such that λ_{n+1} ≤ λ_n + 1 and λ_1 = 1, we introduce the concepts of λ(Δim)-statistical convergence of order α (α ∈ (0, 1]) and strong λ(Δim)-Cesàro summability of order α (α > 0). We establish some connections between λ(Δim)-statistical convergence of order α and strong λ(Δim)-Cesàro summability of order α. It is shown that if a sequence is strongly λ(Δim)-Cesàro summable of order α, then it is λ(Δim)-statistically convergent of order β in case 0 < α ≤ β ≤ 1.
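For orientation, the standard form of λ-statistical convergence of order α underlying these definitions is sketched below (a hedged paraphrase; in the paper it is applied to the Δim-differenced sequence rather than to x itself):

```latex
% With I_n = [n - \lambda_n + 1, n], a sequence x = (x_k) is \lambda-statistically
% convergent of order \alpha to L when, for every \varepsilon > 0,
\lim_{n \to \infty} \frac{1}{\lambda_n^{\alpha}}
  \Bigl| \bigl\{ k \in I_n : |x_k - L| \ge \varepsilon \bigr\} \Bigr| = 0 .
```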
Robust Combining of Disparate Classifiers Through Order Statistics
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Ghosh, Joydeep
2001-01-01
Integrating the outputs of multiple classifiers via combiners or meta-learners has led to substantial improvements in several difficult pattern recognition problems. In this article we investigate a family of combiners based on order statistics, for robust handling of situations where there are large discrepancies in the performance of individual classifiers. Based on a mathematical modeling of how the decision boundaries are affected by order-statistic combiners, we derive expressions for the reductions in error expected when simple output combination methods based on the median, the maximum and, in general, the ith order statistic are used. Furthermore, we analyze the trim and spread combiners, both based on linear combinations of the ordered classifier outputs, and show that in the presence of uneven classifier performance they often provide substantial gains over both linear and simple order-statistics combiners. Experimental results on both real-world data and standard public-domain data sets corroborate these findings.
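The combiners under study are one-liners over the sorted ensemble outputs. A minimal sketch of median, maximum, ith-order-statistic and trimmed-mean combining of per-class scores (the spread combiner and the error analysis are not reproduced):

```python
import numpy as np

def combine(scores, method="median", i=None, trim=1):
    """scores: (n_classifiers, n_classes) posterior estimates for one sample."""
    s = np.sort(scores, axis=0)                # order statistics per class
    if method == "median":
        return np.median(scores, axis=0)
    if method == "max":
        return s[-1]
    if method == "ith":
        return s[i]                            # i-th order statistic
    if method == "trim":                       # drop `trim` lowest and highest
        return s[trim:len(s) - trim].mean(axis=0)

scores = np.array([[0.7, 0.2, 0.1],
                   [0.6, 0.3, 0.1],
                   [0.1, 0.8, 0.1]])           # one unreliable classifier
print(combine(scores, "median"))               # robust to the outlier row
```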
Berman, Jesse D; Peters, Thomas M; Koehler, Kirsten A
2018-05-28
To design a method that uses preliminary hazard mapping data to optimize the number and location of sensors within a network for a long-term assessment of occupational concentrations, while preserving temporal variability, accuracy, and precision of predicted hazards. Particle number concentrations (PNCs) and respirable mass concentrations (RMCs) were measured with direct-reading instruments in a large heavy-vehicle manufacturing facility at 80-82 locations during 7 mapping events, stratified by day and season. Using kriged hazard mapping, a statistical approach identified optimal orders for removing locations to capture temporal variability and high prediction precision of PNC and RMC concentrations. We compared optimal-removal, random-removal, and least-optimal-removal orders to bound prediction performance. The temporal variability of PNC was found to be higher than RMC with low correlation between the two particulate metrics (ρ = 0.30). Optimal-removal orders resulted in more accurate PNC kriged estimates (root mean square error [RMSE] = 49.2) at sample locations compared with random-removal order (RMSE = 55.7). For estimates at locations having concentrations in the upper 10th percentile, the optimal-removal order preserved average estimated concentrations better than random- or least-optimal-removal orders (P < 0.01). However, estimated average concentrations using an optimal-removal were not statistically different than random-removal when averaged over the entire facility. No statistical difference was observed for optimal- and random-removal methods for RMCs that were less variable in time and space than PNCs. Optimized removal performed better than random-removal in preserving high temporal variability and accuracy of hazard map for PNC, but not for the more spatially homogeneous RMC. These results can be used to reduce the number of locations used in a network of static sensors for long-term monitoring of hazards in the workplace, without sacrificing prediction performance.
Detector noise statistics in the non-linear regime
NASA Technical Reports Server (NTRS)
Shopbell, P. L.; Bland-Hawthorn, J.
1992-01-01
The statistical behavior of an idealized linear detector in the presence of threshold and saturation levels is examined. It is assumed that the noise is governed by the statistical fluctuations in the number of photons emitted by the source during an exposure. Since physical detectors cannot have infinite dynamic range, our model illustrates that all devices have non-linear regimes, particularly at high count rates. The primary effect is a decrease in the statistical variance about the mean signal due to a portion of the expected noise distribution being removed via clipping. Higher order statistical moments are also examined, in particular, skewness and kurtosis. In principle, the expected distortion in the detector noise characteristics can be calibrated using flatfield observations with count rates matched to the observations. For this purpose, some basic statistical methods that utilize Fourier analysis techniques are described.
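The clipping effect is easy to reproduce: saturate Poisson counts at a full-well level and the variance drops below the Poisson expectation while skewness and kurtosis shift. A minimal sketch with an illustrative saturation level and count rates:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
full_well = 120                                  # hypothetical saturation level
for mean_rate in (50, 100, 115):
    counts = np.clip(rng.poisson(mean_rate, 1_000_000), 0, full_well)
    print(mean_rate,
          counts.var() / mean_rate,              # 1 for an ideal Poisson signal
          stats.skew(counts),                    # higher-order moments distort
          stats.kurtosis(counts))                # as the mean nears saturation
```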
Adaptation to changes in higher-order stimulus statistics in the salamander retina.
Tkačik, Gašper; Ghosh, Anandamohan; Schneidman, Elad; Segev, Ronen
2014-01-01
Adaptation in the retina is thought to optimize the encoding of natural light signals into sequences of spikes sent to the brain. While adaptive changes in retinal processing to the variations of the mean luminance level and second-order stimulus statistics have been documented before, no such measurements have been performed when higher-order moments of the light distribution change. We therefore measured the ganglion cell responses in the tiger salamander retina to controlled changes in the second (contrast), third (skew) and fourth (kurtosis) moments of the light intensity distribution of spatially uniform temporally independent stimuli. The skew and kurtosis of the stimuli were chosen to cover the range observed in natural scenes. We quantified adaptation in ganglion cells by studying linear-nonlinear models that capture well the retinal encoding properties across all stimuli. We found that the encoding properties of retinal ganglion cells change only marginally when higher-order statistics change, compared to the changes observed in response to the variation in contrast. By analyzing optimal coding in LN-type models, we showed that neurons can maintain a high information rate without large dynamic adaptation to changes in skew or kurtosis. This is because, for uncorrelated stimuli, spatio-temporal summation within the receptive field averages away non-gaussian aspects of the light intensity distribution.
How does the rigid-lid assumption affect LES simulation results in high Reynolds number flows?
NASA Astrophysics Data System (ADS)
Khosronejad, Ali; Farhadzadeh, Ali; SBU Collaboration
2017-11-01
This research is motivated by the work of Kara et al., JHE, 2015. They employed LES to model flow around a model abutment at a Re number of 27,000. They showed that first-order turbulence characteristics obtained under the rigid-lid (RL) assumption compare fairly well with those of the level-set (LS) method. Concerning the second-order statistics, however, their simulation results showed a significant dependence on the method used to describe the free surface. This finding can have important implications for open channel flow modeling. The Reynolds number of typical open channel flows, however, can be much larger than that of Kara et al.'s test case. Herein, we replicate the reported study, augmenting the geometric and hydraulic scales to reach a Re number one order of magnitude larger (~200,000). The Virtual Flow Simulator (VFS-Geophysics) model in its LES mode is used to simulate the test case using both the RL and LS methods. The computational results are validated using measured flow and free-surface data from our laboratory experiments. Our goal is to investigate the effects of the RL assumption on both first-order and second-order statistics at the high Reynolds numbers that occur in natural waterways. Acknowledgment: Computational resources are provided by the Center of Excellence in Wireless & Information Technology (CEWIT) of Stony Brook University.
Sb2Te3 and Its Superlattices: Optimization by Statistical Design.
Behera, Jitendra K; Zhou, Xilin; Ranjan, Alok; Simpson, Robert E
2018-05-02
The objective of this work is to demonstrate the usefulness of fractional factorial design for optimizing the crystal quality of chalcogenide van der Waals (vdW) crystals. We statistically analyze the growth parameters of highly c-axis-oriented Sb2Te3 crystals and Sb2Te3-GeTe phase-change vdW heterostructured superlattices. The statistical significance of the growth parameters of temperature, pressure, power, buffer material, and buffer layer thickness was found by fractional factorial design and response surface analysis. Temperature, pressure, power, and their second-order interactions are the major factors that significantly influence the quality of the crystals. Additionally, using tungsten rather than molybdenum as a buffer layer significantly enhances the crystal quality. Fractional factorial design minimizes the number of experiments that are necessary to find the optimal growth conditions, resulting in an order-of-magnitude improvement in the crystal quality. We highlight that statistical design-of-experiment methods, which are more commonly used in product design, should be considered more broadly by those designing and optimizing materials.
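A fractional factorial design for five two-level factors can be built in a few lines: take the full 2^4 design and generate the fifth column from the defining relation E = ABCD, giving a 16-run resolution-V design. The factor names follow the abstract; the generator choice is the textbook one and is an assumption here:

```python
import itertools
import numpy as np

factors = ["temperature", "pressure", "power", "buffer_material", "buffer_thickness"]
base = np.array(list(itertools.product([-1, 1], repeat=4)))   # full 2^4 design
E = base.prod(axis=1, keepdims=True)                          # generator E = ABCD
design = np.hstack([base, E])                                 # 2^(5-1), resolution V
print(len(design), "runs")
for row in design[:4]:                                        # first few run settings
    print(dict(zip(factors, row)))
```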
Discipline and Order in American High Schools. Contractor Report.
ERIC Educational Resources Information Center
DiPrete, Thomas A.; And Others
Discipline and misbehavior in American high schools are the focus of this analysis of data from the first wave (1980) of a longitudinal study of over 30,000 sophomores and over 28,000 seniors. A summary of the findings shows that differences between urban and other schools are usually statistically insignificant when other school and student…
High-order statistics of Weber local descriptors for image representation.
Han, Xian-Hua; Chen, Yen-Wei; Xu, Gang
2015-06-01
Highly discriminant visual features play a key role in different image classification applications. This study aims to realize a method for extracting highly discriminant features from images by exploring a robust local descriptor inspired by Weber's law. The investigated local descriptor is based on the fact that human perception of a pattern depends not only on the absolute intensity of the stimulus but also on the relative variance of the stimulus. Therefore, we first transform the original stimulus (the images in our study) into a differential excitation domain according to Weber's law, and then explore a local patch, called a micro-Texton, in the transformed domain as the Weber local descriptor (WLD). Furthermore, we propose to employ a parametric probability process to model the Weber local descriptors, and extract higher-order statistics of the model parameters for image representation. The proposed strategy can adaptively characterize the WLD space using a generative probability model, and then learn the parameters for better fitting the training space, which leads to a more discriminant representation for images. In order to validate the efficiency of the proposed strategy, we apply it to three different image classification applications: texture, food image, and HEp-2 cell pattern recognition. The results validate that our proposed strategy has advantages over state-of-the-art approaches.
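A minimal sketch of the first stage described above, assuming the usual form of the WLD differential excitation (arctan of the summed neighbor differences over the center intensity); the function name and toy input are illustrative, and the paper's probabilistic modeling of micro-Textons is omitted.

```python
import numpy as np

# Differential-excitation transform behind the Weber local descriptor (WLD):
# for each pixel, the summed intensity differences to its 8 neighbors divided
# by the center intensity, squashed by arctan (the Weber ratio).
def differential_excitation(img, eps=1e-6):
    img = img.astype(float)
    pad = np.pad(img, 1, mode="edge")
    diff_sum = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy or dx:
                diff_sum += pad[1 + dy:1 + dy + img.shape[0],
                                1 + dx:1 + dx + img.shape[1]] - img
    return np.arctan(diff_sum / (img + eps))  # bounded excitation map

excitation = differential_excitation(np.random.randint(0, 256, (32, 32)))
print(excitation.min(), excitation.max())
```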
NASA Astrophysics Data System (ADS)
Han, Hao; Zhang, Hao; Wei, Xinzhou; Moore, William; Liang, Zhengrong
2016-03-01
In this paper, we propose a low-dose computed tomography (LdCT) image reconstruction method aided by prior knowledge learned from previous high-quality or normal-dose CT (NdCT) scans. The well-established statistical penalized weighted least squares (PWLS) algorithm was adopted for image reconstruction, where the penalty term was formulated by a texture-based Gaussian Markov random field (gMRF) model. The NdCT scan was first segmented into different tissue types by a feature vector quantization (FVQ) approach. Then, for each tissue type, a set of tissue-specific coefficients for the gMRF penalty was statistically learned from the NdCT image via multiple linear regression analysis. We also propose a scheme to adaptively select the order of the gMRF model for coefficient prediction. The tissue-specific gMRF patterns learned from the NdCT image were finally used to form an adaptive MRF penalty for the PWLS reconstruction of the LdCT image. The proposed texture-adaptive PWLS image reconstruction algorithm was shown to be more effective at preserving image textures than the conventional PWLS image reconstruction algorithm, and we further demonstrated the gain of high-order MRF modeling for texture-preserved LdCT PWLS image reconstruction.
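A minimal sketch of the PWLS objective at the core of this method, minimizing (y − Ax)ᵀW(y − Ax) + βxᵀRx by gradient descent; the toy system matrix and identity penalty stand in for the CT forward model and the learned tissue-specific gMRF penalty.

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.normal(size=(80, 40))              # toy system matrix (stand-in for CT)
x_true = rng.normal(size=40)
y = A @ x_true + 0.1 * rng.normal(size=80)
W = np.eye(80)                             # statistical weights (inverse variance)
R = np.eye(40)                             # quadratic penalty (gMRF in the paper)

beta, x = 0.1, np.zeros(40)
for _ in range(500):
    # gradient of (y - Ax)'W(y - Ax) + beta * x'Rx
    grad = -2 * A.T @ W @ (y - A @ x) + 2 * beta * R @ x
    x -= 1e-3 * grad
print(np.linalg.norm(x - x_true))          # reconstruction error
```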
Higher Order Cumulant Studies of Ocean Surface Random Fields from Satellite Altimeter Data
NASA Technical Reports Server (NTRS)
Cheng, B.
1996-01-01
Statistics of second and higher order have been used to study ocean processes for many years and occupy an appreciable part of the research literature on physical oceanography. They in turn form part of a much larger field of study in statistical fluid mechanics.
Compensation for first-order polarization-mode dispersion by using a novel tunable compensator
NASA Astrophysics Data System (ADS)
Qiu, Feng; Ning, Tigang; Pei, Shanshan; Xing, Yujun; Jian, Shuisheng
2005-01-01
Polarization-related impairments have become a critical issue for high-data-rate optical systems, particularly when considering polarization-mode dispersion (PMD). Consequently, compensation of PMD, especially first-order PMD, is necessary to maintain adequate performance in long-haul systems at bit rates of 10 Gb/s or beyond. In this paper, we demonstrate automatic and tunable compensation for first-order polarization-mode dispersion. Furthermore, we report a statistical assessment of this tunable compensator at 10 Gbit/s. Experimental results, including bit error rate measurements, agree well with theory, demonstrating the compensator's efficiency at 10 Gbit/s. The first-order PMD was at most 274 ps before compensation and below 7 ps after compensation.
A perceptual space of local image statistics.
Victor, Jonathan D; Thengone, Daniel J; Rizvi, Syed M; Conte, Mary M
2015-12-01
Local image statistics are important for visual analysis of textures, surfaces, and form. There are many kinds of local statistics, including those that capture luminance distributions, spatial contrast, oriented segments, and corners. While sensitivity to each of these kinds of statistics has been well studied, much less is known about visual processing when multiple kinds of statistics are relevant, in large part because the dimensionality of the problem is high and different kinds of statistics interact. To approach this problem, we focused on binary images on a square lattice - a reduced set of stimuli which nevertheless taps many kinds of local statistics. In this 10-parameter space, we determined psychophysical thresholds to each kind of statistic (16 observers) and all of their pairwise combinations (4 observers). Sensitivities and isodiscrimination contours were consistent across observers. Isodiscrimination contours were elliptical, implying a quadratic interaction rule, which in turn determined ellipsoidal isodiscrimination surfaces in the full 10-dimensional space, and made predictions for sensitivities to complex combinations of statistics. These predictions, including the prediction of a combination of statistics that was metameric to random, were verified experimentally. Finally, check size had only a mild effect on sensitivities over the range from 2.8 to 14 min, but sensitivities to second- and higher-order statistics were substantially lower at 1.4 min. In sum, local image statistics form a perceptual space that is highly stereotyped across observers, in which different kinds of statistics interact according to simple rules. Copyright © 2015 Elsevier Ltd. All rights reserved.
Universal Entropy of Word Ordering Across Linguistic Families
Montemurro, Marcelo A.; Zanette, Damián H.
2011-01-01
Background: The language faculty is probably the most distinctive feature of our species, and endows us with a unique ability to exchange highly structured information. In written language, information is encoded by the concatenation of basic symbols under grammatical and semantic constraints. As is also the case in other natural information carriers, the resulting symbolic sequences show a delicate balance between order and disorder. That balance is determined by the interplay between the diversity of symbols and their specific ordering in the sequences. Here we used entropy to quantify the contribution of different organizational levels to the overall statistical structure of language. Methodology/Principal Findings: We computed a relative entropy measure to quantify the degree of ordering in word sequences from languages belonging to several linguistic families. While a direct estimation of the overall entropy of language yielded values that varied for the different families considered, the relative entropy quantifying word ordering presented an almost constant value for all those families. Conclusions/Significance: Our results indicate that despite the differences in the structure and vocabulary of the languages analyzed, the impact of word ordering in the structure of language is a statistical linguistic universal. PMID:21603637
Tsatrafyllis, N; Kominis, I K; Gonoskov, I A; Tzallas, P
2017-04-27
High-order harmonics in the extreme-ultraviolet spectral range, resulting from the strong-field laser-atom interaction, have been used in a broad range of fascinating applications in all states of matter. In the majority of these studies the harmonic generation process is described using semi-classical theories which treat the electromagnetic field of the driving laser pulse classically without taking into account its quantum nature. In addition, for the measurement of the generated harmonics, all the experiments require diagnostics in the extreme-ultraviolet spectral region. Here by treating the driving laser field quantum mechanically we reveal the quantum-optical nature of the high-order harmonic generation process by measuring the photon number distribution of the infrared light exiting the harmonic generation medium. It is found that the high-order harmonics are imprinted in the photon number distribution of the infrared light and can be recorded without the need of a spectrometer in the extreme-ultraviolet.
Yu, Xiaojin; Liu, Pei; Min, Jie; Chen, Qiguang
2009-01-01
To explore the application of regression on order statistics (ROS) in estimating nondetects for food exposure assessment, ROS was applied to a cadmium residue data set from global food contaminant monitoring; the mean residue was estimated using SAS programming and compared with the results from substitution methods. The results show that the ROS method clearly outperforms substitution methods, being robust and convenient for subsequent analysis. Regression on order statistics is worth adopting, but more work is needed on the details of applying the method.
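As an illustration of the method evaluated above, here is a minimal ROS sketch in Python rather than SAS: detected values are regressed on normal order-statistic quantiles (assuming lognormality and a single detection limit, both simplifying assumptions) and the nondetects are imputed from the fitted line.

```python
import numpy as np
from scipy import stats

def ros_mean(detects, n_nondetects):
    """Regression-on-order-statistics mean with imputed nondetects."""
    n = len(detects) + n_nondetects
    pp = np.arange(1, n + 1) / (n + 1)          # Weibull plotting positions
    z = stats.norm.ppf(pp)                      # normal quantiles for all ranks
    z_det = z[n_nondetects:]                    # quantiles of detected ranks
    slope, intercept, *_ = stats.linregress(z_det, np.log(np.sort(detects)))
    imputed = np.exp(intercept + slope * z[:n_nondetects])  # fill nondetects
    return np.mean(np.concatenate([imputed, np.sort(detects)]))

# Example: five detected cadmium residuals and three nondetects
print(ros_mean(np.array([0.12, 0.18, 0.25, 0.40, 0.55]), 3))
```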
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kooperman, Gabriel J.; Pritchard, Michael S.; Burt, Melissa A.
Changes in the character of rainfall are assessed using a holistic set of statistics based on rainfall frequency and amount distributions in climate change experiments with three conventional and superparameterized versions of the Community Atmosphere Model (CAM and SPCAM). Previous work has shown that high-order statistics of present-day rainfall intensity are significantly improved with superparameterization, especially in regions of tropical convection. Globally, the two modeling approaches project a similar future increase in mean rainfall, especially across the Inter-Tropical Convergence Zone (ITCZ) and at high latitudes, but over land, SPCAM predicts a smaller mean change than CAM. Changes in high-order statistics are similar at high latitudes in the two models but diverge at lower latitudes. In the tropics, SPCAM projects a large intensification of moderate and extreme rain rates in regions of organized convection associated with the Madden Julian Oscillation, ITCZ, monsoons, and tropical waves. In contrast, this signal is missing in all versions of CAM, which are found to be prone to predicting increases in the amount but not intensity of moderate rates. Predictions from SPCAM exhibit a scale-insensitive behavior with little dependence on horizontal resolution for extreme rates, while lower resolution (~2°) versions of CAM are not able to capture the response simulated with higher resolution (~1°). Furthermore, moderate rain rates analyzed by the “amount mode” and “amount median” are found to be especially telling as a diagnostic for evaluating climate model performance and tracing future changes in rainfall statistics to tropical wave modes in SPCAM.
Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.
2002-01-01
An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 3-D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
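A minimal sketch of the approximate moment propagation described above, under the paper's assumption of independent normal inputs; finite differences stand in for the code's analytic sensitivity derivatives, and the toy output function is illustrative. The same procedure applies to the companion quasi 1-D study later in this section.

```python
import numpy as np

# Approximate first/second-order statistical moments of a scalar output f(x)
# with independent normal inputs, from sensitivity derivatives at the mean.
def approx_moments(f, mu, sigma, eps=1e-4):
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    g, h = np.zeros_like(mu), np.zeros_like(mu)
    f0 = f(mu)
    for i in range(mu.size):                 # finite-difference sensitivities
        e = np.zeros_like(mu); e[i] = eps
        fp, fm = f(mu + e), f(mu - e)
        g[i] = (fp - fm) / (2 * eps)         # first-order derivative
        h[i] = (fp - 2 * f0 + fm) / eps**2   # diagonal second-order derivative
    mean_f = f0 + 0.5 * np.sum(h * sigma**2)      # second-order mean estimate
    var_f = np.sum((g * sigma)**2)                # first-order variance estimate
    return mean_f, var_f

# Example with a toy "CFD output"
print(approx_moments(lambda x: x[0]**2 + np.sin(x[1]), [1.0, 0.5], [0.1, 0.2]))
```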
NASA Astrophysics Data System (ADS)
Rooms, F.; Camet, S.; Curis, J. F.
2010-02-01
A new deformable mirror technology is presented. Based on magnetic actuators, these deformable mirrors feature record strokes (more than +/- 45 μm of astigmatism and focus correction) with an optimized temporal behavior. Furthermore, the development has been made in order to have a large density of actuators within a small clear aperture (typically 52 actuators within a diameter of 9.0 mm). We present the key benefits of this technology for vision science: simultaneous correction of low- and high-order aberrations, AO-SLO images without artifacts due to membrane vibration, optimized control, etc. Using recent papers published by Doble, Thibos and Miller, we show the performance that can be achieved by various configurations using a statistical approach. The typical distribution of wavefront aberrations (both low-order aberrations (LOA) and high-order aberrations (HOA)) has been computed and the correction applied by the mirror. We compare two configurations of deformable mirrors (52 and 97 actuators) and highlight the influence of the number of actuators on the fitting error, the photon noise error and the effective bandwidth of correction.
Bolding, Simon R.; Cleveland, Mathew Allen; Morel, Jim E.
2016-10-21
In this paper, we have implemented a new high-order low-order (HOLO) algorithm for solving thermal radiative transfer problems. The low-order (LO) system is based on the spatial and angular moments of the transport equation and a linear-discontinuous finite-element spatial representation, producing equations similar to the standard S2 equations. The LO solver is fully implicit in time and efficiently resolves the nonlinear temperature dependence at each time step. The high-order (HO) solver utilizes exponentially convergent Monte Carlo (ECMC) to give a globally accurate solution for the angular intensity to a fixed-source pure-absorber transport problem. This global solution is used to compute consistency terms, which require the HO and LO solutions to converge toward the same solution. The use of ECMC allows for the efficient reduction of statistical noise in the Monte Carlo solution, reducing inaccuracies introduced through the LO consistency terms. Finally, we compare results with an implicit Monte Carlo code for one-dimensional gray test problems and demonstrate the efficiency of ECMC over standard Monte Carlo in this HOLO algorithm.
Reischauer, Carolin; Patzwahl, René; Koh, Dow-Mu; Froehlich, Johannes M; Gutzeit, Andreas
2018-04-01
To evaluate whole-lesion volumetric texture analysis of apparent diffusion coefficient (ADC) maps for assessing treatment response in prostate cancer bone metastases, texture analysis was performed in 12 treatment-naïve patients with 34 metastases before treatment and at one, two, and three months after the initiation of androgen deprivation therapy. Four first-order and 19 second-order statistical texture features were computed on the ADC maps in each lesion at every time point. Repeatability, inter-patient variability, and changes in the feature values under therapy were investigated. Spearman's rank correlation coefficients were calculated across time to demonstrate the relationship between the texture features and serum prostate specific antigen (PSA) levels. With few exceptions, the texture features exhibited moderate to high precision. At the same time, Friedman's tests revealed that all first-order and second-order statistical texture features changed significantly in response to therapy, with the majority showing significant changes at all post-treatment time points relative to baseline. Bivariate analysis detected significant correlations between the great majority of texture features and serum PSA levels; in particular, three first-order and six second-order statistical features showed strong correlations with serum PSA levels across time. The findings in the present work indicate that whole-tumor volumetric texture analysis may be utilized for response assessment in prostate cancer bone metastases. The approach may be used as a complementary measure for treatment monitoring in conjunction with averaged ADC values. Copyright © 2018 Elsevier B.V. All rights reserved.
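A minimal sketch of the two feature families used here (and in the other texture entries in this section), assuming scikit-image's graycomatrix/graycoprops for the second-order GLCM features; the feature choices and toy patch are illustrative, not the paper's exact 4 + 19 set.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Toy quantized "ADC map" patch with 32 gray levels
patch = np.random.randint(0, 32, (64, 64), dtype=np.uint8)

# First-order statistics: computed directly from the intensity histogram
first_order = {
    "mean": patch.mean(), "std": patch.std(),
    "skew": ((patch - patch.mean())**3).mean() / patch.std()**3,
}

# Second-order statistics: derived from the gray-level co-occurrence matrix
glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                    levels=32, symmetric=True, normed=True)
second_order = {p: graycoprops(glcm, p).mean()
                for p in ("contrast", "homogeneity", "energy", "correlation")}
print(first_order, second_order)
```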
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albano Farias, L.; Stephany, J.
2010-12-15
We analyze the statistics of observables in continuous-variable (CV) quantum teleportation in the formalism of the characteristic function. We derive expressions for average values of output-state observables, in particular, cumulants which are additive in terms of the input state and the resource of teleportation. Working with a general class of teleportation resources, the squeezed-bell-like states, which may be optimized in a free parameter for better teleportation performance, we discuss the relation between resources optimal for fidelity and those optimal for different observable averages. We obtain the values of the free parameter of the squeezed-bell-like states which optimize the central momenta and cumulants up to fourth order. For the cumulants, the distortion between in and out states due to teleportation depends only on the resource. We obtain optimal parameters Δ_(2)^opt and Δ_(4)^opt for the second- and fourth-order cumulants, which do not depend on the squeezing of the resource. The second-order central momenta, which are equal to the second-order cumulants, and the photon number average are also optimized by the resource with Δ_(2)^opt. We show that the optimal fidelity resource, which has been found previously to depend on the characteristics of the input, approaches for high squeezing the resource that optimizes the second-order momenta. A similar behavior is obtained for the resource that optimizes the photon statistics, which is treated here using the sum of the squared differences in photon probabilities of input versus output states as the distortion measure. This is interpreted naturally to mean that the distortions associated with second-order momenta dominate the behavior of the output state for large squeezing of the resource. Optimal fidelity resources and optimal photon statistics resources are compared, and it is shown that for mixtures of Fock states both resources are equivalent.
Exponential order statistic models of software reliability growth
NASA Technical Reports Server (NTRS)
Miller, D. R.
1985-01-01
Failure times of a software reliability growth process are modeled as order statistics of independent, nonidentically distributed exponential random variables. The Jelinski-Moranda, Goel-Okumoto, Littlewood, Musa-Okumoto Logarithmic, and Power Law models are all special cases of Exponential Order Statistic Models, but many additional examples exist as well. Various characterizations, properties and examples of this class of models are developed and presented.
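A minimal sketch of the model class: failure times as the sorted values of independent exponential lifetimes. With equal rates this reduces to the Jelinski-Moranda special case; the fault count and rates below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Exponential order statistic model: N latent faults, each with an
# exponential lifetime; the observed failure process is their order statistics.
N = 50
rates = np.full(N, 0.2)                    # equal rates -> Jelinski-Moranda
lifetimes = rng.exponential(1.0 / rates)   # nonidentical rates also allowed
failure_times = np.sort(lifetimes)         # reliability-growth failure process
print(failure_times[:5])                   # first five observed failures
```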
High intensity click statistics from a 10 × 10 avalanche photodiode array
NASA Astrophysics Data System (ADS)
Kröger, Johannes; Ahrens, Thomas; Sperling, Jan; Vogel, Werner; Stolz, Heinrich; Hage, Boris
2017-11-01
Photon-number measurements are a fundamental technique for the discrimination and characterization of quantum states of light. Going beyond the abilities of state-of-the-art devices, we present measurements with an array of 100 avalanche photodiodes exposed to photon numbers ranging from well below to significantly above one photon per diode. Despite each single diode only discriminating between zero and non-zero photon numbers, we were able to extract a second-order moment, which acts as a nonclassicality indicator. We demonstrate a vast enhancement of the applicable intensity range, by two orders of magnitude, relative to the standard application of such devices. It turns out that the probabilistic mapping of arbitrary photon numbers onto a finite number of registered clicks is not per se a disadvantage compared with true photon counters. Such detector arrays can bridge the gap between single-photon and linear detection by investigation of the click statistics, without the necessity of photon-statistics reconstruction.
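A minimal sketch of extracting a second-order click-statistics indicator from such an array, using the sub-binomial parameter of Sperling, Vogel, and Agarwal as an assumed concrete choice (Q_B < 0 flags nonclassicality); the click data are simulated, not measured.

```python
import numpy as np

N = 100                                       # diodes in the array
rng = np.random.default_rng(1)
clicks = rng.binomial(N, 0.3, size=100_000)   # coherent-like click counts

# Sub-binomial parameter (assumed form): Q_B = N*Var(c)/(mean(c)*(N-mean(c))) - 1
m, v = clicks.mean(), clicks.var()
QB = N * v / (m * (N - m)) - 1
print(f"Q_B = {QB:+.4f}")                     # ~0 for binomial click statistics
```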
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pingenot, J; Rieben, R; White, D
2004-12-06
We present a computational study of signal propagation and attenuation of a 200 MHz dipole antenna in a cave environment. The cave is modeled as a straight and lossy random rough wall. To simulate a broad frequency band, the full wave Maxwell equations are solved directly in the time domain via a high order vector finite element discretization using the massively parallel CEM code EMSolve. The simulation is performed for a series of random meshes in order to generate statistical data for the propagation and attenuation properties of the cave environment. Results for the power spectral density and phase of the electric field vector components are presented and discussed.
MRI textures as outcome predictor for Gamma Knife radiosurgery on vestibular schwannoma
NASA Astrophysics Data System (ADS)
Langenhuizen, P. P. J. H.; Legters, M. J. W.; Zinger, S.; Verheul, H. B.; Leenstra, S.; de With, P. H. N.
2018-02-01
Vestibular schwannomas (VS) are benign brain tumors that can be treated with high-precision focused radiation with the Gamma Knife in order to stop tumor growth. Outcome prediction of Gamma Knife radiosurgery (GKRS) treatment can help in determining whether GKRS will be effective on an individual patient basis. However, at present, prognostic factors of tumor control after GKRS for VS are largely unknown, and only clinical factors, such as size of the tumor at treatment and pre-treatment growth rate of the tumor, have been considered thus far. This research aims at outcome prediction of GKRS by means of quantitative texture feature analysis on conventional MRI scans. We compute first-order statistics and features based on gray-level co-occurrence (GLCM) and run-length matrices (RLM), and employ support vector machines and decision trees for classification. In a clinical dataset consisting of 20 tumors showing treatment failure and 20 tumors exhibiting treatment success, we found that the second-order statistical metrics distilled from GLCM and RLM are suitable for describing texture, but are slightly outperformed by simple first-order statistics, like mean, standard deviation and median. The obtained prediction accuracy is about 85%, but a final choice of the best feature can only be made after performing more extensive analyses on larger datasets. In any case, this work provides suitable texture measures for successful prediction of GKRS treatment outcome for VS.
Application of higher-order cepstral techniques in problems of fetal heart signal extraction
NASA Astrophysics Data System (ADS)
Sabry-Rizk, Madiha; Zgallai, Walid; Hardiman, P.; O'Riordan, J.
1996-10-01
Recently, cepstral analysis based on second-order statistics and homomorphic filtering techniques has been used in the adaptive decomposition of overlapping (or otherwise) and noise-contaminated ECG complexes of mothers and fetuses, obtained by transabdominal surface electrodes connected to a monitoring instrument, an interface card, and a PC. Differential time delays of fetal heart beats, measured from a reference point located on the maternal complex after transformation to cepstral domains, are first obtained, followed by fetal heart rate variability computations. Homomorphic filtering in the complex cepstral domain and the subsequent transformation to the time domain result in fetal complex recovery. However, three problems have been identified with second-order-based cepstral techniques that this paper seeks to rectify. These are (1) errors resulting from the phase unwrapping algorithms and leading to fetal complex perturbation; (2) the unavoidable conversion of noise statistics from Gaussianity to non-Gaussianity due to the highly non-linear nature of the homomorphic transform, which warrants stringent noise cancellation routines; (3) due to the aforementioned problems in (1) and (2), it is difficult to adaptively optimize windows to include all individual fetal complexes in the time domain based on amplitude thresholding routines in the complex cepstral domain (i.e., the task of 'zooming' in on weak fetal complexes requires more processing time). The use of a third-order-based high-resolution differential cepstrum technique results in recovery of delays on the order of 120 milliseconds.
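A minimal sketch of the second-order complex cepstrum underlying the homomorphic approach, including the phase-unwrapping step the authors identify as error-prone; the toy echo signal is illustrative, with the echo delay recovered as a cepstral peak.

```python
import numpy as np

# Complex cepstrum: c = IFFT(log|X| + j*unwrapped_phase(X)). The phase
# unwrapping here is exactly the step that perturbs recovered complexes
# when it fails in noisy recordings.
def complex_cepstrum(x):
    X = np.fft.fft(x)
    log_X = np.log(np.abs(X)) + 1j * np.unwrap(np.angle(X))
    return np.fft.ifft(log_X).real

# Toy signal: a pulse plus a delayed, attenuated echo; the echo appears
# as a cepstral peak at its delay (in samples).
n = np.arange(1024)
pulse = np.exp(-0.05 * n) * np.sin(0.3 * n)
x = pulse + 0.5 * np.roll(pulse, 120)
c = complex_cepstrum(x)
print(np.argmax(np.abs(c[50:500])) + 50)   # ~120-sample echo delay
```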
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baers, L.B.; Gutierrez, T.R.; Mendoza, R.A.
1993-08-01
The second (conventional variance or Campbell signal) ⟨x²⟩, the third ⟨x³⟩, and the modified fourth-order ⟨x⁴⟩ − 3⟨x²⟩², etc., central signal moments associated with the amplified (K) and filtered currents [i₁, i₂, x = K·(i₂ − ⟨i₂⟩)] from two electrodes of an ex-core neutron-sensitive fission detector have been measured versus the reactor power of the 1 MW TRIGA reactor in Mexico City. Two channels of a high-speed (400 kHz) multiplexing data sampler and A/D converter with 12-bit resolution and a one-megaword buffer memory were used. The data were further retrieved into a PC, and estimates for auto- and cross-correlation moments up to the fifth order, coherence (⟨x₁x₂⟩/√(⟨x₁²⟩⟨x₂²⟩)), skewness (⟨x³⟩/(√⟨x²⟩)³), excess (⟨x⁴⟩/⟨x²⟩² − 3), etc. were calculated off-line. A five-mode operation of the detector was achieved, including the conventional counting rates and currents, in agreement with theory and the authors' previous results with analogue techniques. The signals were proportional to the neutron flux and reactor power in some flux ranges. The suppression of background noise is improved and the lower limit of the measurement range is extended as the order of the moment is increased, in agreement with theory. On the other hand, the statistical uncertainty is increased. At increasing flux levels it was statistically more difficult to obtain flux estimates based on the higher-order (≥3) moments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vidal-Codina, F., E-mail: fvidal@mit.edu; Nguyen, N.C., E-mail: cuongng@mit.edu; Giles, M.B., E-mail: mike.giles@maths.ox.ac.uk
We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
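A minimal two-level sketch of ingredient (3), the multilevel variance reduction idea: estimate E[s_hi] as E[s_lo] + E[s_hi − s_lo], spending most samples on the cheap surrogate; the toy functions stand in for the reduced basis and HDG solvers.

```python
import numpy as np

rng = np.random.default_rng(2)

def s_hi(z): return np.sin(z) + 0.01 * z**2      # "high-fidelity" output
def s_lo(z): return np.sin(z)                    # correlated cheap surrogate

z_many = rng.normal(size=100_000)                # many cheap samples
z_few = rng.normal(size=500)                     # few expensive samples

# Two-level estimator: cheap mean plus a small correction term whose
# variance is low because s_hi and s_lo are strongly correlated.
estimate = s_lo(z_many).mean() + (s_hi(z_few) - s_lo(z_few)).mean()
crude = s_hi(rng.normal(size=500)).mean()        # same high-fidelity budget
print(estimate, crude)
```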
Li, Yaohang; Liu, Hui; Rata, Ionel; Jakobsson, Eric
2013-02-25
The rapidly increasing number of protein crystal structures available in the Protein Data Bank (PDB) has made statistical analyses feasible in studying complex high-order inter-residue correlations. In this paper, we report a context-based secondary structure potential (CSSP) for assessing the quality of predicted protein secondary structures generated by various prediction servers. CSSP is a sequence-position-specific knowledge-based potential generated with the potentials-of-mean-force approach, in which high-order inter-residue interactions are taken into consideration. The CSSP potential is effective in identifying secondary structure predictions of good quality. In 56% of the targets in the CB513 benchmark, the optimal CSSP potential is able to recognize the native secondary structure, or a prediction with Q3 accuracy higher than 90%, as the best scored among the predicted secondary structures generated by 10 popular secondary structure prediction servers. In more than 80% of the CB513 targets, the predicted secondary structures with the lowest CSSP potential values yield higher than 80% Q3 accuracy. Similar performance of CSSP is found on the CASP9 targets as well. Moreover, our computational results also show that the CSSP potential using triplets outperforms the CSSP potential using doublets and is currently better than the CSSP potential using quartets.
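A minimal sketch of a potentials-of-mean-force style triplet score in the spirit of CSSP; the counts, uniform reference, and scoring function are toy stand-ins, since the real potential is sequence-position-specific and learned from the PDB.

```python
import math
from collections import Counter

# Toy observed triplet counts over the 3-state alphabet H/E/C
triplet_counts = Counter({"HHH": 900, "HHE": 50, "EEE": 700, "CCC": 800,
                          "HEC": 20, "ECH": 15})
total = sum(triplet_counts.values())
ref = 1.0 / 27                                   # uniform 3-state reference

def cssp_like_score(ss):
    """Pseudo-energy of a secondary-structure string: lower is better."""
    e = 0.0
    for i in range(len(ss) - 2):
        p = triplet_counts.get(ss[i:i + 3], 1) / total   # observed frequency
        e -= math.log(p / ref)                           # mean-force term
    return e

print(cssp_like_score("HHHHHH"), cssp_like_score("HECHEC"))
```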
Approach for Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III; Green, Lawrence L.
2001-01-01
This paper presents an implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 1-D Euler CFD (computational fluid dynamics) code. Given uncertainties in statistically independent, random, normally distributed input variables, a first- and second-order statistical moment matching procedure is performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, the moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
Mehrparvar, Amir Houshang; Mirmohammadi, Seyyed Jalil; Hafezi, Rahmatollah; Mostaghaci, Mehrdad; Davari, Mohammad Hossein
2015-05-01
Anthropometric dimensions of end users should be measured in order to create a basis for the manufacture of different products. This study was designed to measure some static anthropometric dimensions in Iranian high school students, considering ethnic differences. Nineteen static anthropometric dimensions of high school students were measured and compared among different Iranian ethnicities (Fars, Turk, Kurd, Lor, Baluch, and Arab) and between genders. In this study, 9,476 subjects (4,703 boys and 4,773 girls) aged 15 to 18 years from six ethnicities were assessed. The difference among ethnicities was statistically significant for all dimensions (p values < .001 for each dimension). This study showed statistically significant differences in 19 static anthropometric dimensions among high school students with regard to gender, age, and ethnicity. © 2014, Human Factors and Ergonomics Society.
Multi-pulse multi-delay (MPMD) multiple access modulation for UWB
Dowla, Farid U.; Nekoogar, Faranak
2007-03-20
A new modulation scheme in UWB communications is introduced. This modulation technique utilizes multiple orthogonal transmitted-reference pulses for UWB channelization. The proposed UWB receiver samples the second-order statistical function at both zero and non-zero lags and matches the samples to stored second-order statistical functions, thus sampling and matching the shape of second-order statistical functions rather than just the shape of the received pulses.
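A minimal sketch of the receiver idea: estimate the second-order statistical function (here, the autocorrelation) at zero and non-zero lags and match its shape against stored templates; the pulses, lags, and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
lags = [0, 1, 2, 3, 5, 8]

def autocorr(x, lags):
    """Estimate the second-order statistical function at the given lags."""
    x = x - x.mean()
    return np.array([x[: len(x) - k] @ x[k:] for k in lags]) / len(x)

# Two orthogonal reference pulses (illustrative stand-ins for UWB pulses)
t = np.linspace(0, 1, 256)
pulses = {0: np.sin(40 * np.pi * t) * np.exp(-4 * t),
          1: np.sin(80 * np.pi * t) * np.exp(-4 * t)}
stored = {b: autocorr(p, lags) for b, p in pulses.items()}  # receiver templates

received = pulses[1] + 0.3 * rng.normal(size=t.size)        # noisy symbol
r = autocorr(received, lags)
decided = min(stored, key=lambda b: np.sum((stored[b] - r) ** 2))
print("decoded symbol:", decided)
```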
ERIC Educational Resources Information Center
Green, Jeffrey J.; Stone, Courtenay C.; Zegeye, Abera; Charles, Thomas A.
2009-01-01
Because statistical analysis requires the ability to use mathematics, students typically are required to take one or more prerequisite math courses prior to enrolling in the business statistics course. Despite these math prerequisites, however, many students find it difficult to learn business statistics. In this study, we use an ordered probit…
Is fertility falling in Zimbabwe?
Udjo, E O
1996-01-01
With a contraceptive prevalence rate of 43% among currently married women, unequalled in sub-Saharan Africa, Zimbabwe's Central Statistical Office (1989) observed that fertility has declined sharply in recent years. Using data from several surveys on Zimbabwe, especially the birth histories of the Zimbabwe Demographic and Health Survey, this study examines fertility trends in Zimbabwe. The results show that the fertility decline in Zimbabwe is modest and that the decline is concentrated among high-order births. Multivariate analysis did not show a statistically significant effect of contraception on fertility, partly because a high proportion of Zimbabwean women in the reproductive age group never use contraception due to prevailing pronatalist attitudes in the country.
Is Statistical Learning Constrained by Lower Level Perceptual Organization?
Emberson, Lauren L.; Liu, Ran; Zevin, Jason D.
2013-01-01
In order for statistical information to aid in complex developmental processes such as language acquisition, learning from higher-order statistics (e.g. across successive syllables in a speech stream to support segmentation) must be possible while perceptual abilities (e.g. speech categorization) are still developing. The current study examines how perceptual organization interacts with statistical learning. Adult participants were presented with multiple exemplars from novel, complex sound categories designed to reflect some of the spectral complexity and variability of speech. These categories were organized into sequential pairs and presented such that higher-order statistics, defined based on sound categories, could support stream segmentation. Perceptual similarity judgments and multi-dimensional scaling revealed that participants only perceived three perceptual clusters of sounds and thus did not distinguish the four experimenter-defined categories, creating a tension between lower level perceptual organization and higher-order statistical information. We examined whether the resulting pattern of learning is more consistent with statistical learning being “bottom-up,” constrained by the lower levels of organization, or “top-down,” such that higher-order statistical information of the stimulus stream takes priority over the perceptual organization, and perhaps influences perceptual organization. We consistently find evidence that learning is constrained by perceptual organization. Moreover, participants generalize their learning to novel sounds that occupy a similar perceptual space, suggesting that statistical learning occurs based on regions of or clusters in perceptual space. Overall, these results reveal a constraint on learning of sound sequences, such that statistical information is determined based on lower level organization. These findings have important implications for the role of statistical learning in language acquisition. PMID:23618755
A new image encryption algorithm based on the fractional-order hyperchaotic Lorenz system
NASA Astrophysics Data System (ADS)
Wang, Zhen; Huang, Xia; Li, Yu-Xia; Song, Xiao-Na
2013-01-01
We propose a new image encryption algorithm based on the fractional-order hyperchaotic Lorenz system. In the process of generating a key stream, the system parameters and the derivative order are embedded in the proposed algorithm to enhance security. The algorithm is detailed in terms of security analyses, including correlation analysis, information entropy analysis, run statistic analysis, mean-variance gray value analysis, and key sensitivity analysis. The experimental results demonstrate that the proposed image encryption scheme has the advantages of a large key space and high security for practical image encryption.
A Developmental Approach to Machine Learning?
Smith, Linda B.; Slone, Lauren K.
2017-01-01
Visual learning depends on both the algorithms and the training material. This essay considers the natural statistics of infant- and toddler-egocentric vision. These natural training sets for human visual object recognition are very different from the training data fed into machine vision systems. Rather than equal experiences with all kinds of things, toddlers experience extremely skewed distributions with many repeated occurrences of a very few things. And though highly variable when considered as a whole, individual views of things are experienced in a specific order – with slow, smooth visual changes moment-to-moment, and developmentally ordered transitions in scene content. We propose that the skewed, ordered, biased visual experiences of infants and toddlers are the training data that allow human learners to develop a way to recognize everything, both the pervasively present entities and the rarely encountered ones. The joint consideration of real-world statistics for learning by researchers of human and machine learning seems likely to bring advances in both disciplines. PMID:29259573
NASA Astrophysics Data System (ADS)
Barnea, A. Ronny; Cheshnovsky, Ori; Even, Uzi
2018-02-01
Interference experiments have been paramount in our understanding of quantum mechanics and are frequently the basis of testing the superposition principle in the framework of quantum theory. In recent years, several studies have challenged the nature of wave-function interference from the perspective of Born's rule—namely, the manifestation of so-called high-order interference terms in a superposition generated by diffraction of the wave functions. Here we present an experimental test of multipath interference in the diffraction of metastable helium atoms, with large-number counting statistics, comparable to photon-based experiments. We use a variation of the original triple-slit experiment and accurate single-event counting techniques to provide a new experimental bound of 2.9 ×10-5 on the statistical deviation from the commonly approximated null third-order interference term in Born's rule for matter waves. Our value is on the order of the maximal contribution predicted for multipath trajectories by Feynman path integrals.
2018-01-01
Natural hazards (events that may cause actual disasters) are established in the literature as major causes of various massive and destructive problems worldwide. The occurrences of earthquakes, floods and heat waves affect millions of people through several impacts. These include cases of hospitalisation, loss of lives and economic challenges. The focus of this study was on reducing the risk of disasters that occur because of extremely high temperatures and heat waves. Modelling average maximum daily temperature (AMDT) supports disaster risk reduction and may also help countries prepare for extreme heat. This study discusses the use of the r largest order statistics approach of extreme value theory for modelling AMDT over a period of 11 years, that is, 2000–2010. A generalised extreme value distribution for r largest order statistics is fitted to the annual maxima in order to study the behaviour of the r largest order statistics. The method of maximum likelihood is used in estimating the target parameters, and the frequency of occurrence of the hottest days is assessed. The study presents a case study of South Africa in which data for the non-winter season (September–April of each year) are used. The meteorological data used are the AMDT collected by the South African Weather Service and provided by Eskom. The estimate of the shape parameter reveals evidence of a Weibull class as an appropriate distribution for modelling AMDT in South Africa. The extreme quantiles for specified return periods are estimated using the quantile function, and the best model is chosen through the use of the deviance statistic with the support of graphical diagnostic tools. The Entropy Difference Test (EDT) is used as a specification test for diagnosing the fit of the models to the data.
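A minimal sketch of the r = 1 special case, fitting a GEV distribution to simulated annual maxima with scipy (the full r-largest-order-statistics likelihood is not in scipy); note that scipy's shape convention makes positive shape values correspond to the Weibull class found in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Simulated stand-in for 11 years of non-winter daily maximum temperatures
daily = rng.normal(25, 4, size=(11, 240))        # 11 years x 240 days
annual_max = daily.max(axis=1)

shape, loc, scale = stats.genextreme.fit(annual_max)
# scipy's sign convention: shape > 0 corresponds to a bounded upper tail
# (Weibull class), the behaviour the study reports for AMDT.
ret_level_50 = stats.genextreme.ppf(1 - 1 / 50, shape, loc, scale)
print(f"shape={shape:.3f}, 50-year return level={ret_level_50:.1f} degC")
```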
The use of higher-order statistics in rapid object categorization in natural scenes.
Banno, Hayaki; Saiki, Jun
2015-02-04
We can rapidly and efficiently recognize many types of objects embedded in complex scenes. What information supports this object recognition is a fundamental question for understanding our visual processing. We investigated the eccentricity-dependent role of shape and statistical information for ultrarapid object categorization, using the higher-order statistics proposed by Portilla and Simoncelli (2000). Synthesized textures computed by their algorithms have the same higher-order statistics as the originals, while the global shapes were destroyed. We used the synthesized textures to manipulate the availability of shape information separately from the statistics. We hypothesized that shape makes a greater contribution to central vision than to peripheral vision and that statistics show the opposite pattern. Results did not show contributions clearly biased by eccentricity. Statistical information demonstrated a robust contribution not only in peripheral but also in central vision. For shape, the results supported the contribution in both central and peripheral vision. Further experiments revealed some interesting properties of the statistics. They are available for a limited time, attributable to the presence or absence of animals without shape, and predict how easily humans detect animals in original images. Our data suggest that when facing the time constraint of categorical processing, higher-order statistics underlie our significant performance for rapid categorization, irrespective of eccentricity. © 2015 ARVO.
NASA Astrophysics Data System (ADS)
Law, Yan Nei; Lieng, Monica Keiko; Li, Jingmei; Khoo, David Aik-Aun
2014-03-01
Breast cancer is the most common cancer and second leading cause of cancer death among women in the US. The relative survival rate is lower among women with a more advanced stage at diagnosis. Early detection through screening is vital. Mammography is the most widely used and only proven screening method for reliably and effectively detecting abnormal breast tissues. In particular, mammographic density is one of the strongest breast cancer risk factors, after age and gender, and can be used to assess the future risk of disease before individuals become symptomatic. A reliable method for automatic density assessment would be beneficial and could assist radiologists in the evaluation of mammograms. To address this problem, we propose a density classification method which uses statistical features from different parts of the breast. Our method is composed of three parts: breast region identification, feature extraction and building ensemble classifiers for density assessment. It explores the potential of the features extracted from second and higher order statistical information for mammographic density classification. We further investigate the registration of bilateral pairs and time-series of mammograms. The experimental results on 322 mammograms demonstrate that (1) a classifier using features from dense regions has higher discriminative power than a classifier using only features from the whole breast region; (2) these high-order features can be effectively combined to boost the classification accuracy; (3) a classifier using these statistical features from dense regions achieves 75% accuracy, which is a significant improvement from 70% accuracy obtained by the existing approaches.
Chemistry and Students with Blindness: The Hurdles Are Not What You Think
ERIC Educational Resources Information Center
Lewis, Amy L. Micklos
2012-01-01
Statistics have shown that individuals with disabilities are underrepresented in the science, technology, engineering, and mathematics (STEM) fields. This work focused on exploring how three students with blindness enrolled in a full-inclusion high-school chemistry class experienced and conceptualized content in order to inform educators,…
De Groote, Sandra L; Blecic, Deborah D; Martin, Kristin
2013-04-01
Libraries require efficient and reliable methods to assess journal use. Vendors provide complete counts of articles retrieved from their platforms. However, if a journal is available on multiple platforms, several sets of statistics must be merged. Link-resolver reports merge data from all platforms into one report but only record partial use because users can access library subscriptions from other paths. Citation data are limited to publication use. Vendor, link-resolver, and local citation data were examined to determine correlation. Because link-resolver statistics are easy to obtain, the study library especially wanted to know if they correlate highly with the other measures. Vendor, link-resolver, and local citation statistics for the study institution were gathered for health sciences journals. Spearman rank-order correlation coefficients were calculated. There was a high positive correlation between all three data sets, with vendor data commonly showing the highest use. However, a small percentage of titles showed anomalous results. Link-resolver data correlate well with vendor and citation data, but due to anomalies, low link-resolver data would best be used to suggest titles for further evaluation using vendor data. Citation data may not be needed as it correlates highly with other measures.
An Improved 360 Degree and Order Model of Venus Topography
NASA Technical Reports Server (NTRS)
Rappaport, Nicole J.; Konopliv, Alex S.; Kucinskas, Algis B.; Ford, Peter G.
1999-01-01
We present an improved 360 degree and order spherical harmonic solution for Venus' topography. The new model uses the most recent set of Venus altimetry data with spacecraft positions derived from a recent high resolution gravity model. Geometric analysis indicates that the offset between the center of mass and center of figure of Venus is about 10 times smaller than that for the Earth, the Moon, or Mars. Statistical analyses confirm that the RMS topography follows a power law over the central part of the spectrum. Compared to the previous topography model, the new model is more highly correlated with Venus' harmonic gravity field.
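A minimal sketch of the statistical check described, computing a degree-variance (RMS) spectrum from spherical harmonic coefficients and fitting a power law over the central degrees; the random coefficients are stand-ins for the actual 360 degree and order solution.

```python
import numpy as np

lmax = 360
rng = np.random.default_rng(6)
# Per-degree power: sum of squared coefficients over the 2l+1 orders;
# random coefficients with a decaying scale stand in for the real model.
power = np.array([np.sum(rng.normal(0, (l + 1) ** -1.5, 2 * l + 1) ** 2)
                  for l in range(1, lmax + 1)])

l = np.arange(1, lmax + 1)
mid = (l > 20) & (l < 200)                      # central part of the spectrum
slope, intercept = np.polyfit(np.log(l[mid]), np.log(power[mid]), 1)
print(f"fitted power-law exponent: {slope:.2f}")
```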
Estimating order statistics of network degrees
NASA Astrophysics Data System (ADS)
Chu, J.; Nadarajah, S.
2018-01-01
We model the order statistics of network degrees of big data sets by a range of generalised beta distributions. A three parameter beta distribution due to Libby and Novick (1982) is shown to give the best overall fit for at least four big data sets. The fit of this distribution is significantly better than the fit suggested by Olhede and Wolfe (2012) across the whole range of order statistics for all four data sets.
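A minimal sketch of the fitting exercise, with a standard two-parameter beta standing in for the Libby and Novick three-parameter form and a simulated heavy-tailed degree sequence standing in for the big data sets.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

degrees = rng.zipf(2.5, size=10_000)                   # toy degree sequence
order_stats = np.sort(degrees) / (degrees.max() + 1)   # map into (0, 1)

# Fit a beta law to the normalized order statistics (loc/scale held fixed)
a, b, loc, scale = stats.beta.fit(order_stats, floc=0, fscale=1)
print(f"fitted beta parameters: a={a:.3f}, b={b:.3f}")
```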
Behavior of Filters and Smoothers for Strongly Nonlinear Dynamics
NASA Technical Reports Server (NTRS)
Zhu, Yanqui; Cohn, Stephen E.; Todling, Ricardo
1999-01-01
The Kalman filter is the optimal filter in the presence of known Gaussian error statistics and linear dynamics. Filter extension to nonlinear dynamics is nontrivial in the sense of appropriately representing high-order moments of the statistics. Monte Carlo, ensemble-based methods have been advocated as the methodology for representing high-order moments without any questionable closure assumptions. Investigation along these lines has been conducted for highly idealized dynamics such as the strongly nonlinear Lorenz model as well as more realistic models of the ocean and atmosphere. A few relevant issues in this context are related to the number of ensemble members necessary to properly represent the error statistics and the modifications to the usual filter equations needed to allow for a correct update of the ensemble members. The ensemble technique has also been applied to the problem of smoothing, for which similar questions apply. Ensemble smoother examples, however, seem to be quite puzzling in that resulting state estimates are worse than for their filter analogues. In this study, we use concepts in probability theory to revisit the ensemble methodology for filtering and smoothing in data assimilation. We use the Lorenz model to test and compare the behavior of a variety of implementations of ensemble filters. We also implement ensemble smoothers that are able to perform better than their filter counterparts. A discussion of the feasibility of these techniques for large data assimilation problems will be given at the time of the conference.
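A minimal sketch of the ensemble update at the heart of such filters, a stochastic EnKF analysis step with perturbed observations; the identity observation operator and toy ensemble are illustrative, not the Lorenz-model experiments of the study.

```python
import numpy as np

def enkf_update(ensemble, y, obs_std, rng):
    """Stochastic EnKF analysis step: ensemble (n_members, n_state), obs y."""
    H = np.eye(ensemble.shape[1])                 # observe the full state
    Xf = ensemble - ensemble.mean(0)
    Pf = Xf.T @ Xf / (len(ensemble) - 1)          # sample forecast covariance
    R = obs_std**2 * np.eye(len(y))
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    perturbed = y + obs_std * rng.normal(size=(len(ensemble), len(y)))
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

rng = np.random.default_rng(5)
ens = rng.normal(0, 2, size=(100, 3))             # prior ensemble
post = enkf_update(ens, np.array([1.0, -1.0, 0.5]), 0.5, rng)
print(post.mean(0))                               # analysis mean near the obs
```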
NASA Astrophysics Data System (ADS)
Giner-Sanz, J. J.; Ortega, E. M.; Pérez-Herranz, V.
2018-03-01
The internal resistance of a PEM fuel cell depends on the operating conditions and on the current delivered by the cell. This work's goal is to obtain a semiempirical model able to reproduce the effect of the operating current on the internal resistance of an individual cell of a commercial PEM fuel cell stack, and to perform a statistical analysis in order to study the effect of the operating temperature and the inlet humidities on the parameters of the model. First, the internal resistance of the individual fuel cell operating under different conditions was experimentally measured for different DC currents, using the high-frequency intercept of the impedance spectra. Then, a semiempirical model based on Springer and co-workers' model was proposed. This model is able to successfully reproduce the experimental trends. Subsequently, the curves of resistance versus DC current obtained for different operating conditions were fitted to the semiempirical model, and an analysis of variance (ANOVA) was performed in order to determine which factors have a statistically significant effect on each model parameter. Finally, a response surface method was applied in order to obtain a regression model.
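A minimal sketch of the fitting step, regressing measured resistances on DC current with scipy's curve_fit; the saturating functional form is an assumption for illustration, not the paper's Springer-based model, and the data are simulated.

```python
import numpy as np
from scipy.optimize import curve_fit

def r_model(i, r0, a, b):
    # Assumed illustrative form: resistance rises and saturates with current
    return r0 + a * (1 - np.exp(-b * i))

i = np.linspace(0.5, 20, 25)                    # operating DC currents (A)
rng = np.random.default_rng(8)
r_meas = r_model(i, 4.0, 1.5, 0.2) + 0.05 * rng.normal(size=i.size)

popt, pcov = curve_fit(r_model, i, r_meas, p0=[3.0, 1.0, 0.1])
print(popt)   # fitted (r0, a, b); repeat per temperature/humidity for ANOVA
```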
Image encryption based on a delayed fractional-order chaotic logistic system
NASA Astrophysics Data System (ADS)
Wang, Zhen; Huang, Xia; Li, Ning; Song, Xiao-Na
2012-05-01
A new image encryption scheme is proposed based on a delayed fractional-order chaotic logistic system. In the process of generating a key stream, the time-varying delay and fractional derivative are embedded in the proposed scheme to improve the security. Such a scheme is described in detail with security analyses including correlation analysis, information entropy analysis, run statistic analysis, mean-variance gray value analysis, and key sensitivity analysis. Experimental results show that the newly proposed image encryption scheme possesses high security.
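A toy illustration of the keystream-XOR idea only, with an ordinary (integer-order, undelayed) logistic map standing in for the delayed fractional-order system, which would require a fractional-derivative solver:

```python
import numpy as np

def logistic_keystream(n, x0=0.3141, r=3.9999):
    """Generate n key bytes from quantized logistic-map iterates."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1 - x)                 # chaotic iteration
        out[i] = int(x * 256) % 256         # quantize to a key byte
    return out

img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)   # toy "image"
ks = logistic_keystream(img.size).reshape(img.shape)
cipher = img ^ ks                            # encrypt by XOR with keystream
assert np.array_equal(cipher ^ ks, img)      # decryption restores the image
```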
A Stochastic Fractional Dynamics Model of Space-time Variability of Rain
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Travis, James E.
2013-01-01
Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, that allows a concise description of the second moment statistics of rain at any prescribed space-time averaging scale. The model is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and in Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to the second moment statistics of radar data. The model predictions are then found to fit the second moment statistics of the gauge data reasonably well without any further adjustment.
Kinjo, Erika Reime; Rodríguez, Pedro Xavier Royero; Dos Santos, Bianca Araújo; Higa, Guilherme Shigueto Vilar; Ferraz, Mariana Sacrini Ayres; Schmeltzer, Christian; Rüdiger, Sten; Kihara, Alexandre Hiroaki
2018-05-01
Epilepsy is a disorder of the brain characterized by the predisposition to generate recurrent unprovoked seizures, which involves reshaping of neuronal circuitries based on intense neuronal activity. In this review, we first detail the regulation of plasticity-associated genes, such as ARC, GAP-43, PSD-95, synapsin, and synaptophysin. Indeed, reshaping of neuronal connectivity after the primary, acute epileptogenic event increases the excitability of the temporal lobe. Herein, we also discuss the heterogeneity of neuronal populations regarding the number of synaptic connections, which in the theoretical field is commonly referred to as degree. Employing an integrate-and-fire neuronal model, we determined that in addition to increased synaptic strength, degree correlations might play essential and unsuspected roles in the control of network activity. Indeed, assortativity, which can be described as a condition where high-degree correlations are observed, increases the excitability of neural networks. In this review, we summarize recent topics in the field, and data are discussed according to newly developed or unusual tools, as provided by mathematical graph analysis and high-order statistics. With this, we are able to present new foundations for the pathological activity observed in temporal lobe epilepsy.
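A minimal sketch of the degree-correlation measure discussed, using NetworkX's degree assortativity coefficient on generic stand-in graphs; positive values (hubs wired to hubs) correspond to the assortative condition the review links to increased excitability.

```python
import networkx as nx

# Hub-dominated network: preferential attachment tends to be disassortative
g = nx.barabasi_albert_graph(1000, 3, seed=0)
print("BA graph:", nx.degree_assortativity_coefficient(g))

# Small-world network with mild degree heterogeneity for comparison
h = nx.watts_strogatz_graph(1000, 6, 0.1, seed=0)
print("WS graph:", nx.degree_assortativity_coefficient(h))
```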
NASA Astrophysics Data System (ADS)
Liu, Ke; Nissinen, Jaakko; Slager, Robert-Jan; Wu, Kai; Zaanen, Jan
2016-10-01
The physics of nematic liquid crystals has been the subject of intensive research since the late 19th century. However, the focus of this pursuit has been centered around uniaxial and biaxial nematics associated with constituents bearing a D∞h or D2h symmetry, respectively. In view of general symmetries, however, these are singularly special since nematic order can in principle involve any point-group symmetry. Given the progress in tailoring nanoparticles with particular shapes and interactions, this vast family of "generalized nematics" might become accessible in the laboratory. Little is known because the order parameter theories associated with the highly symmetric point groups are remarkably complicated, involving tensor order parameters of high rank. Here, we show that the generic features of the statistical physics of such systems can be studied in a highly flexible and efficient fashion using a mathematical tool borrowed from high-energy physics: discrete non-Abelian gauge theory. Explicitly, we construct a family of lattice gauge models encapsulating nematic ordering of general three-dimensional point-group symmetries. We find that the most symmetrical generalized nematics are subjected to thermal fluctuations of unprecedented severity. As a result, novel forms of fluctuation phenomena become possible. In particular, we demonstrate that a vestigial phase carrying no more than chiral order becomes ubiquitous departing from high point-group symmetry chiral building blocks, such as I, O, and T symmetric matter.
A Stochastic Fractional Dynamics Model of Rainfall Statistics
NASA Astrophysics Data System (ADS)
Kundu, Prasun; Travis, James
2013-04-01
Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, which allows a concise description of the second moment statistics of rain at any prescribed space-time averaging scale. The model is designed to faithfully reflect the scale dependence and is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. The main restriction is the assumption that the statistics of the precipitation field are spatially homogeneous, isotropic, and stationary in time. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida, and on Kwajalein Atoll, Marshall Islands, in the tropical Pacific. We estimate the parameters by tuning them to the second moment statistics of the radar data. The model predictions are then found to fit the second moment statistics of the gauge data reasonably well without any further adjustment. Some data sets, containing periods of nonstationary behavior with occasional anomalously correlated rain events, present a challenge for the model.
Statistical methodologies for the control of dynamic remapping
NASA Technical Reports Server (NTRS)
Saltz, J. H.; Nicol, D. M.
1986-01-01
Following an initial mapping of a problem onto a multiprocessor machine or computer network, system performance often deteriorates with time. In order to maintain high performance, it may be necessary to remap the problem. The decision to remap must take into account measurements of performance deterioration, the cost of remapping, and the estimated benefits achieved by remapping. We examine the tradeoff between the costs and the benefits of remapping for two qualitatively different kinds of problems: one in which performance deteriorates gradually, and one in which it deteriorates suddenly. We consider a variety of policies for governing when to remap. In order to evaluate these policies, statistical models of problem behaviors are developed. Simulation results are presented which compare simple policies with computationally expensive optimal decision policies; these results demonstrate that for each problem type, the proposed simple policies are effective and robust.
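The cost/benefit trade-off described above can be illustrated with a toy simulation. Everything here, the drift model, the costs, and the threshold policy, is a hypothetical stand-in for the paper's statistical models:

```python
import random

def simulate(policy_threshold, remap_cost=50.0, steps=1000, drift=0.2, seed=0):
    """Toy gradual-deterioration model: load imbalance grows by a random
    drift each step; remapping resets it but costs `remap_cost` time units.
    All numbers are illustrative."""
    rng = random.Random(seed)
    imbalance, total_cost = 0.0, 0.0
    for _ in range(steps):
        imbalance += rng.uniform(0.0, 2 * drift)  # performance deterioration
        total_cost += imbalance                   # pay the imbalance each step
        if imbalance > policy_threshold:          # simple "remap when degraded" policy
            total_cost += remap_cost
            imbalance = 0.0
    return total_cost

# Sweeping the threshold exposes the cost/benefit trade-off:
# remap too eagerly and the remap cost dominates; too lazily and
# the accumulated imbalance does.
for thr in (5.0, 10.0, 20.0, 40.0):
    print(thr, round(simulate(thr), 1))
```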
Collective flow measurements with HADES in Au+Au collisions at 1.23A GeV
NASA Astrophysics Data System (ADS)
Kardan, Behruz; Hades Collaboration
2017-11-01
HADES has a large acceptance combined with good mass resolution and therefore allows the study of dielectron and hadron production in heavy-ion collisions with unprecedented precision. With the statistics of seven billion Au+Au collisions at 1.23A GeV recorded in 2012, the investigation of higher-order flow harmonics is possible. At the BEVALAC and at SIS18, directed and elliptic flow have been measured for pions, charged kaons, protons, neutrons, and fragments, but higher-order harmonics have not yet been studied. They provide additional important information on the properties of the dense hadronic medium produced in heavy-ion collisions. We present here a high-statistics, multidifferential measurement of v1 and v2 for protons in Au+Au collisions at 1.23A GeV.
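For readers unfamiliar with flow harmonics: v_n is the n-th Fourier coefficient of the azimuthal particle distribution relative to the reaction plane. A minimal sketch on synthetic data, ignoring the event-plane resolution corrections a real HADES analysis requires:

```python
import numpy as np

def flow_coefficients(phi, psi_rp, nmax=2):
    """Estimate v_n = <cos(n (phi - Psi_RP))> from particle azimuths `phi`
    and the reaction-plane angle `psi_rp`."""
    return [np.mean(np.cos(n * (phi - psi_rp))) for n in range(1, nmax + 1)]

# Toy event sample drawn from dN/dphi ~ 1 + 2 v1 cos(dphi) + 2 v2 cos(2 dphi)
rng = np.random.default_rng(0)
dphi = rng.uniform(-np.pi, np.pi, 200_000)
w = 1 + 2 * (-0.2) * np.cos(dphi) + 2 * 0.05 * np.cos(2 * dphi)
keep = rng.uniform(0, w.max(), dphi.size) < w   # acceptance-rejection sampling
v1, v2 = flow_coefficients(dphi[keep], 0.0)
print(f"v1 = {v1:+.3f}, v2 = {v2:+.3f}")        # recovers ~(-0.20, +0.05)
```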
NASA Astrophysics Data System (ADS)
Lin, Shu; Wang, Rui; Xia, Ning; Li, Yongdong; Liu, Chunliang
2018-01-01
Statistical multipactor theories are critical prediction approaches for multipactor breakdown determination. However, these approaches still require a trade-off between calculation efficiency and accuracy. This paper presents an improved stationary statistical theory for efficient threshold analysis of two-surface multipactor. A general integral equation over the distribution function of the electron emission phase, with both single-sided and double-sided impacts considered, is formulated. The modeling results indicate that the improved stationary statistical theory not only matches the accuracy of multipactor threshold calculation of the nonstationary statistical theory, but also achieves high calculation efficiency. By using this improved stationary statistical theory, the total time consumed in calculating full multipactor susceptibility zones of parallel plates can be decreased by as much as a factor of four relative to the nonstationary statistical theory. It also shows that the effect of single-sided impacts is indispensable for accurate multipactor prediction of coaxial lines and is even more significant for high-order multipactor. Finally, the influence of secondary emission yield (SEY) properties on the multipactor threshold is further investigated. It is observed that the first cross energy and the energy range between the first cross and the SEY maximum both play a significant role in determining the multipactor threshold, which agrees with numerical simulation results in the literature.
f-lacunary statistical convergence of order (α, β)
NASA Astrophysics Data System (ADS)
Sengul, Hacer; Isik, Mahmut; Et, Mikail
2017-09-01
The main purpose of this paper is to introduce the concepts of f-lacunary statistical convergence of order (α, β) and strong f-lacunary summability of order (α, β) of sequences of real numbers for 0 <α ≤ β ≤ 1, where f is an unbounded modulus.
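As a hedged reconstruction (the paper's exact definition may differ in detail), combining Çolak-style order-(α, β) statistical convergence with Aizpuru-style modulus functions suggests a definition of the following form:

```latex
% Hedged reconstruction; the paper's precise formulation may differ.
\[
  S_\theta^{\alpha,\beta}(f)\text{-}\lim x_k = L
  \iff
  \lim_{r\to\infty}
  \frac{f\!\left(\bigl|\{\,k\in I_r : |x_k-L|\ge\varepsilon\,\}\bigr|^{\beta}\right)}
       {f\!\left(h_r^{\alpha}\right)} = 0
  \quad\text{for every } \varepsilon>0,
\]
% where $\theta=(k_r)$ is a lacunary sequence, $I_r=(k_{r-1},k_r]$,
% $h_r=k_r-k_{r-1}$, and $f$ is an unbounded modulus.
```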
Massive parallelization of serial inference algorithms for a complex generalized linear model
Suchard, Marc A.; Simpson, Shawn E.; Zorych, Ivan; Ryan, Patrick; Madigan, David
2014-01-01
Following a series of high-profile drug safety disasters in recent years, many countries are redoubling their efforts to ensure the safety of licensed medical products. Large-scale observational databases such as claims databases or electronic health record systems are attracting particular attention in this regard, but present significant methodological and computational concerns. In this paper we show how high-performance statistical computation, including graphics processing units (GPUs), relatively inexpensive, highly parallel computing devices, can enable complex methods in large databases. We focus on optimization and massive parallelization of cyclic coordinate descent approaches to fit a conditioned generalized linear model involving tens of millions of observations and thousands of predictors in a Bayesian context. We find orders-of-magnitude improvement in overall run-time. Coordinate descent approaches are ubiquitous in high-dimensional statistics and the algorithms we propose open up exciting new methodological possibilities with the potential to significantly improve drug safety. PMID:25328363
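The serial kernel being parallelized here is ordinary cyclic coordinate descent. A minimal dense-data sketch for L2-penalized logistic regression; the paper's sparse data structures, priors, and GPU reductions are not reproduced:

```python
import numpy as np

def ccd_logistic(X, y, l2=1.0, sweeps=50):
    """Cyclic coordinate descent: one Newton step per coordinate,
    sweeping j = 0..p-1 repeatedly. The expensive parts (the per-
    coordinate gradient/Hessian reductions) are what GPUs parallelize."""
    n, p = X.shape
    beta = np.zeros(p)
    eta = X @ beta
    for _ in range(sweeps):
        for j in range(p):
            mu = 1.0 / (1.0 + np.exp(-eta))           # fitted probabilities
            g = X[:, j] @ (mu - y) + l2 * beta[j]     # gradient wrt beta_j
            h = (X[:, j] ** 2) @ (mu * (1 - mu)) + l2 # Hessian wrt beta_j
            step = g / h
            beta[j] -= step
            eta -= step * X[:, j]                     # keep predictor in sync
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (rng.uniform(size=1000) < 1 / (1 + np.exp(-(X @ [1, -2, 0, 0.5, 0])))).astype(float)
print(np.round(ccd_logistic(X, y), 2))  # recovers roughly (1, -2, 0, 0.5, 0)
```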
López-Carr, David; Pricope, Narcisa G.; Aukema, Juliann E.; Jankowska, Marta M.; Funk, Christopher C.; Husak, Gregory J.; Michaelsen, Joel C.
2014-01-01
We present an integrative measure of exposure and sensitivity components of vulnerability to climatic and demographic change for the African continent in order to identify “hot spots” of high potential population vulnerability. Getis-Ord Gi* spatial clustering analyses reveal statistically significant locations of spatio-temporal precipitation decline coinciding with high population density and increase. Statistically significant areas are evident, particularly across central, southern, and eastern Africa. The highly populated Lake Victoria basin emerges as a particularly salient hot spot. People located in the regions highlighted in this analysis suffer exceptionally high exposure to negative climate change impacts (as populations increase on lands with decreasing rainfall). Results may help inform further hot spot mapping and related research on demographic vulnerabilities to climate change. Results may also inform more suitable geographical targeting of policy interventions across the continent.
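Getis-Ord Gi* hot-spot statistics of this kind can be computed with the PySAL stack. A sketch on synthetic gridded data, assuming the libpysal and esda packages are available; the data and the significance cutoff are illustrative:

```python
import numpy as np
from libpysal.weights import lat2W
from esda.getisord import G_Local

# Toy raster standing in for a vulnerability index, with a planted hot spot.
rng = np.random.default_rng(1)
grid = rng.gamma(2.0, 1.0, size=(20, 20))
grid[5:9, 5:9] += 4.0

w = lat2W(20, 20, rook=False)              # queen-contiguity weights on the lattice
gi = G_Local(grid.ravel(), w, star=True)   # Gi* includes the focal cell itself

hot = (gi.Zs > 1.96).reshape(20, 20)       # ~5% significance by z-score
print(hot[5:9, 5:9].mean())                # most planted cells are flagged
```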
The unnatural order of things: A history of the high school science sequence
NASA Astrophysics Data System (ADS)
Robbins, Dennis M.
Historical studies of US high school science education are rare. This study examines the historical origins of a unique characteristic of the secondary science curriculum, the Biology-Chemistry-Physics (B-C-P) order of courses. Statements from scientists, educators and the media claim that B-C-P has been the traditional curriculum sequence for over a century and can be traced back to the influential educational commission known as the Committee of Ten (CoT) of 1893. This study examines the history of the ordering of high school science subjects over the last 150 years. The reports and primary documents of important national educational commissions, such as the CoT, were searched for their recommendations on secondary science, particularly on course ordering. These recommendations were then compared to national, state and local statistical data on subject offerings and student enrollments to measure the effect of these national commissions on school policy. This study concludes that the Committee of Ten did not create B-C-P. The CoT made six recommendations; five placed Physics before Chemistry (P-C). The one recommendation for C-P met with strong disagreement because it was thought an illogical order. Biology as a "uniform" course did not exist at this time, so the CoT made no recommendations for its grade placement. Statistical data show that B-C-P evolved over many decades. From 1860 up to 1920 most schools used a P-C curriculum, believing Physics was a foundational prerequisite of Chemistry. Biology was introduced in the early 1900s and assumed a position before the physical sciences. Through the 1920s, Chemistry and Physics were placed equally likely in the 11th or 12th grades, with Biology in the 10th grade. After World War II, B-C-P became the dominant pattern, exhibited in over 90% of schools. But up to that point no educational body or national commission had recommended B-C-P. The Biology-Chemistry-Physics order of courses is a product of many historical accidents and not the result of educational planning for the US high school curriculum.
Quantum order, entanglement and localization in many-body systems
NASA Astrophysics Data System (ADS)
Khemani, Vedika
The interplay of disorder and interactions can have remarkable effects on the physics of quantum systems. A striking example is provided by the long conjectured--and recently confirmed--phenomenon of many-body localization. Many-body localized (MBL) phases violate foundational assumptions about ergodicity and thermalization in interacting systems, and represent a new frontier for non-equilibrium quantum statistical mechanics. We start with a study of the dynamical response of MBL phases to time-dependent perturbations. We find that an asymptotically slow, local perturbation induces a highly non-local response, a surprising result for a localized insulator. A complementary calculation in the linear-response regime elucidates the structure of many-body resonances contributing to the dynamics of this phase. We then turn to a study of quantum order in MBL systems. It was shown that localization can allow novel high-temperature phases and phase transitions that are disallowed in equilibrium. We extend this idea of "localization protected order" to the case of symmetry-protected topological phases and to the elucidation of phase structure in periodically driven Floquet systems. We show that Floquet systems can display nontrivial phases, some of which show a novel form of correlated spatiotemporal order and are absolutely stable to all generic perturbations. The next part of the thesis addresses the role of quantum entanglement, broadly speaking. Remarkably, it was shown that even highly-excited MBL eigenstates have low area-law entanglement. We exploit this feature to develop tensor-network based algorithms for efficiently computing and representing highly-excited MBL eigenstates. We then switch gears from disordered, localized systems and examine the entanglement Hamiltonian and its low energy spectrum from a statistical mechanical lens, particularly focusing on issues of universality and thermalization. We close with two miscellaneous results on topologically ordered phases. The first studies the nonequilibrium "Kibble-Zurek" dynamics resulting from driving a system through a phase transition from a topologically ordered phase to a trivial one at a finite rate. The second shows that the four-state Potts model on the pyrochlore lattice exhibits a "Coulomb phase" characterized by three emergent gauge fields.
DOT National Transportation Integrated Search
1998-01-01
These statistics are broken down for each country into four sets of tables: I. State of the orderbook, II. Ships completed, III. New orders, and IV. Specifications in compensation tonnage. Statistics for the United States and the United Kingdom can b...
Statistics based sampling for controller and estimator design
NASA Astrophysics Data System (ADS)
Tenne, Dirk
The purpose of this research is the development of statistical design tools for robust feed-forward/feedback controllers and nonlinear estimators. This dissertation is threefold and addresses the aforementioned topics: nonlinear estimation, target tracking, and robust control. To develop statistically robust controllers and nonlinear estimation algorithms, research has been performed to extend existing techniques, which propagate the statistics of the state, to achieve higher order accuracy. The so-called unscented transformation has been extended to capture higher order moments. Furthermore, higher order moment update algorithms based on a truncated power series have been developed. The proposed techniques are tested on various benchmark examples. Furthermore, the unscented transformation has been utilized to develop a three-dimensional geometrically constrained target tracker. The proposed planar circular prediction algorithm has been developed in a local coordinate framework, which is amenable to extension of the tracking algorithm to three-dimensional space. This tracker combines the predictions of a circular prediction algorithm and a constant velocity filter by utilizing the Covariance Intersection. This combined prediction can be updated with the subsequent measurement using a linear estimator. The proposed technique is illustrated on a 3D benchmark trajectory, which includes coordinated turns and straight line maneuvers. The third part of this dissertation addresses the design of controllers that incorporate knowledge of parametric uncertainties and their distributions. The parameter distributions are approximated by a finite set of points which are calculated by the unscented transformation. This set of points is used to design robust controllers which minimize a statistical performance measure of the plant over the domain of uncertainty, consisting of a combination of the mean and variance. The proposed technique is illustrated on three benchmark problems. The first relates to the design of prefilters for a linear and a nonlinear spring-mass-dashpot system, and the second applies a feedback controller to a hovering helicopter. Lastly, the statistical robust controller design is applied to a concurrent feed-forward/feedback controller structure for a high-speed low-tension tape drive.
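The baseline the dissertation extends is the standard unscented transformation, which propagates a mean and covariance through a nonlinearity via 2n+1 sigma points. A minimal sketch of that baseline only; the higher-order moment extensions are not shown, and the kappa parameterization is one common choice:

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=0.0):
    """Propagate (mean, cov) through the nonlinearity f using the
    standard 2n+1 sigma points; exact to 2nd order in the Taylor
    expansion (3rd for symmetric densities)."""
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)     # matrix square root
    sigma = np.vstack([mean, mean + S.T, mean - S.T])   # 2n+1 points
    w = np.full(2 * n + 1, 0.5 / (n + kappa))
    w[0] = kappa / (n + kappa)
    Y = np.array([f(s) for s in sigma])           # push points through f
    m = w @ Y
    P = (w[:, None] * (Y - m)).T @ (Y - m)
    return m, P

# Example: polar-to-Cartesian, a classic mildly nonlinear map.
m, P = unscented_transform(np.array([1.0, 0.5]), np.eye(2) * 0.1,
                           lambda x: np.array([x[0] * np.cos(x[1]),
                                               x[0] * np.sin(x[1])]))
print(m, P, sep="\n")
```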
World Refugee Crisis: Winning the Game. Facts for Action #6.
ERIC Educational Resources Information Center
Oxfam America, Boston, MA.
Definitions, statistics, and problems of world refugees are presented in this document for high school global education classes. Although various agencies have determined different definitions of the term, the authors consider as refugees all those forced to flee their native land in order to survive. For most refugees the attraction of a higher…
Low statistical power in biomedical science: a review of three human research domains.
Dumas-Mallet, Estelle; Button, Katherine S; Boraud, Thomas; Gonon, Francois; Munafò, Marcus R
2017-02-01
Studies with low statistical power increase the likelihood that a statistically significant finding represents a false positive result. We conducted a review of meta-analyses of studies investigating the association of biological, environmental or cognitive parameters with neurological, psychiatric and somatic diseases, excluding treatment studies, in order to estimate the average statistical power across these domains. Taking the effect size indicated by a meta-analysis as the best estimate of the likely true effect size, and assuming a threshold for declaring statistical significance of 5%, we found that approximately 50% of studies have statistical power in the 0-10% or 11-20% range, well below the minimum of 80% that is often considered conventional. Studies with low statistical power appear to be common in the biomedical sciences, at least in the specific subject areas captured by our search strategy. However, we also observe evidence that this depends in part on research methodology, with candidate gene studies showing very low average power and studies using cognitive/behavioural measures showing high average power. This warrants further investigation.
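The arithmetic behind such power estimates is routine. For instance, with statsmodels one can check that a standardized effect of 0.3 with 30 subjects per group yields power near 20%, squarely in the range the review reports, and that roughly 175 per group are needed for the conventional 80% (the effect size and sample sizes below are illustrative, not the review's data):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# Power of a two-sample t-test at alpha = 0.05 for a small effect
print(analysis.power(effect_size=0.3, nobs1=30, alpha=0.05))         # ~0.2
# Per-group sample size needed to reach the conventional 80% power
print(analysis.solve_power(effect_size=0.3, power=0.8, alpha=0.05))  # ~175
```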
Low statistical power in biomedical science: a review of three human research domains
Dumas-Mallet, Estelle; Button, Katherine S.; Boraud, Thomas; Gonon, Francois
2017-01-01
Studies with low statistical power increase the likelihood that a statistically significant finding represents a false positive result. We conducted a review of meta-analyses of studies investigating the association of biological, environmental or cognitive parameters with neurological, psychiatric and somatic diseases, excluding treatment studies, in order to estimate the average statistical power across these domains. Taking the effect size indicated by a meta-analysis as the best estimate of the likely true effect size, and assuming a threshold for declaring statistical significance of 5%, we found that approximately 50% of studies have statistical power in the 0–10% or 11–20% range, well below the minimum of 80% that is often considered conventional. Studies with low statistical power appear to be common in the biomedical sciences, at least in the specific subject areas captured by our search strategy. However, we also observe evidence that this depends in part on research methodology, with candidate gene studies showing very low average power and studies using cognitive/behavioural measures showing high average power. This warrants further investigation. PMID:28386409
First principles statistical mechanics of alloys and magnetism
NASA Astrophysics Data System (ADS)
Eisenbach, Markus; Khan, Suffian N.; Li, Ying Wai
Modern high performance computing resources are enabling the exploration of the statistical physics of phase spaces with increasing size and higher fidelity of the Hamiltonian of the systems. For selected systems, this now allows the combination of Density Functional based first principles calculations with classical Monte Carlo methods for parameter free, predictive thermodynamics of materials. We combine our locally self-consistent real space multiple scattering method for solving the Kohn-Sham equation with Wang-Landau Monte Carlo calculations (WL-LSMS). In the past we have applied this method to the calculation of Curie temperatures in magnetic materials. Here we will present direct calculations of the chemical order-disorder transitions in alloys. We present our calculated transition temperature for the chemical ordering in CuZn and the temperature dependence of the short-range order parameter and specific heat. Finally we will present the extension of the WL-LSMS method to magnetic alloys, thus allowing the investigation of the interplay of magnetism, structure and chemical order in ferrous alloys. This research was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Science and Engineering Division and it used Oak Ridge Leadership Computing Facility resources at Oak Ridge National Laboratory.
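The Wang-Landau half of WL-LSMS can be illustrated on a toy Hamiltonian. Below is a sketch for a small 2D Ising model, where the flat-histogram loop estimates the density of states g(E); in WL-LSMS the energies would instead come from the first-principles LSMS solver. Lattice size, flatness criterion, and stopping threshold are arbitrary choices (production runs push the modification factor far smaller, e.g. ~1e-8):

```python
import numpy as np

def wang_landau(L=8, f_final=1e-4, flatness=0.8, seed=0):
    """Wang-Landau estimate of log g(E) for a periodic 2D Ising model."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(L, L))
    nbr = lambda s, i, j: (s[(i+1) % L, j] + s[(i-1) % L, j]
                           + s[i, (j+1) % L] + s[i, (j-1) % L])
    E = -sum(spins[i, j] * (spins[(i+1) % L, j] + spins[i, (j+1) % L])
             for i in range(L) for j in range(L))
    energies = np.arange(-2 * L * L, 2 * L * L + 1, 4)  # allowed energies
    idx = {e: k for k, e in enumerate(energies)}
    lng = np.zeros(energies.size)                       # running log g(E)
    hist = np.zeros(energies.size)
    lnf = 1.0
    while lnf > f_final:
        for _ in range(10_000):
            i, j = rng.integers(L, size=2)
            dE = 2 * spins[i, j] * nbr(spins, i, j)
            # Accept with probability min(1, g(E_old)/g(E_new)).
            if np.log(rng.random()) < lng[idx[E]] - lng[idx[E + dE]]:
                spins[i, j] *= -1
                E += dE
            lng[idx[E]] += lnf
            hist[idx[E]] += 1
        visited = hist > 0
        if hist[visited].min() > flatness * hist[visited].mean():
            hist[:] = 0       # histogram is flat: refine the modification factor
            lnf /= 2
    return energies, lng

energies, lng = wang_landau()
```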
An Analysis of High School Students' Performance on Five Integrated Science Process Skills
NASA Astrophysics Data System (ADS)
Beaumont-Walters, Yvonne; Soyibo, Kola
2001-02-01
This study determined Jamaican high school students' level of performance on five integrated science process skills and whether there were statistically significant differences in their performance linked to their gender, grade level, school location, school type, student type and socio-economic background (SEB). The 305 subjects comprised 133 males, 172 females, 146 ninth graders, 159 10th graders, 150 traditional and 155 comprehensive high school students, 164 students from the Reform of Secondary Education (ROSE) project and 141 non-ROSE students, 166 urban and 139 rural students, and 110 students from a high SEB and 195 from a low SEB. Data were collected with the authors' constructed integrated science process skills test. The results indicated that the subjects' mean score was low and unsatisfactory; their performance in decreasing order was: interpreting data, recording data, generalising, formulating hypotheses and identifying variables. There were statistically significant differences in their performance based on their grade level, school type, student type, and SEB, in favour of the 10th graders, traditional high school students, ROSE students and students from a high SEB. There was a positive, statistically significant and fairly strong relationship between their performance and school type, but weak relationships among their student type, grade level and SEB and performance.
Kooperman, Gabriel J.; Pritchard, Michael S.; Burt, Melissa A.; ...
2016-09-26
Changes in the character of rainfall are assessed using a holistic set of statistics based on rainfall frequency and amount distributions in climate change experiments with three conventional and superparameterized versions of the Community Atmosphere Model (CAM and SPCAM). Previous work has shown that high-order statistics of present-day rainfall intensity are significantly improved with superparameterization, especially in regions of tropical convection. Globally, the two modeling approaches project a similar future increase in mean rainfall, especially across the Inter-Tropical Convergence Zone (ITCZ) and at high latitudes, but over land, SPCAM predicts a smaller mean change than CAM. Changes in high-order statistics are similar at high latitudes in the two models but diverge at lower latitudes. In the tropics, SPCAM projects a large intensification of moderate and extreme rain rates in regions of organized convection associated with the Madden Julian Oscillation, ITCZ, monsoons, and tropical waves. In contrast, this signal is missing in all versions of CAM, which are found to be prone to predicting increases in the amount but not intensity of moderate rates. Predictions from SPCAM exhibit a scale-insensitive behavior with little dependence on horizontal resolution for extreme rates, while lower resolution (~2°) versions of CAM are not able to capture the response simulated with higher resolution (~1°). Furthermore, moderate rain rates analyzed by the "amount mode" and "amount median" are found to be especially telling as a diagnostic for evaluating climate model performance and tracing future changes in rainfall statistics to tropical wave modes in SPCAM.
Ventral and dorsal streams for choosing word order during sentence production
Thothathiri, Malathi; Rattinger, Michelle
2015-01-01
Proficient language use requires speakers to vary word order and choose between different ways of expressing the same meaning. Prior statistical associations between individual verbs and different word orders are known to influence speakers’ choices, but the underlying neural mechanisms are unknown. Here we show that distinct neural pathways are used for verbs with different statistical associations. We manipulated statistical experience by training participants in a language containing novel verbs and two alternative word orders (agent-before-patient, AP; patient-before-agent, PA). Some verbs appeared exclusively in AP, others exclusively in PA, and yet others in both orders. Subsequently, we used sparse sampling neuroimaging to examine the neural substrates as participants generated new sentences in the scanner. Behaviorally, participants showed an overall preference for AP order, but also increased PA order for verbs experienced in that order, reflecting statistical learning. Functional activation and connectivity analyses revealed distinct networks underlying the increased PA production. Verbs experienced in both orders during training preferentially recruited a ventral stream, indicating the use of conceptual processing for mapping meaning to word order. In contrast, verbs experienced solely in PA order recruited dorsal pathways, indicating the use of selective attention and sensorimotor integration for choosing words in the right order. These results show that the brain tracks the structural associations of individual verbs and that the same structural output may be achieved via ventral or dorsal streams, depending on the type of regularities in the input. PMID:26621706
Effective potentials in nonlinear polycrystals and quadrature formulae
NASA Astrophysics Data System (ADS)
Michel, Jean-Claude; Suquet, Pierre
2017-08-01
This study presents a family of estimates for effective potentials in nonlinear polycrystals. Noting that these potentials are given as averages, several quadrature formulae are investigated to express these integrals of nonlinear functions of local fields in terms of the moments of these fields. Two of these quadrature formulae reduce to known schemes, including a recent proposition (Ponte Castañeda 2015 Proc. R. Soc. A 471, 20150665 (doi:10.1098/rspa.2015.0665)) obtained by completely different means. Other formulae are also reviewed that make use of statistical information on the fields beyond their first and second moments. These quadrature formulae are applied to the estimation of effective potentials in polycrystals governed by two potentials, by means of a reduced-order model proposed by the authors (non-uniform transformation field analysis). It is shown how the quadrature formulae improve on the tangent second-order approximation in porous crystals at high stress triaxiality. It is found that, in order to retrieve a satisfactory accuracy for highly nonlinear porous crystals under high stress triaxiality, a quadrature formula of higher order is required.
Effective potentials in nonlinear polycrystals and quadrature formulae.
Michel, Jean-Claude; Suquet, Pierre
2017-08-01
This study presents a family of estimates for effective potentials in nonlinear polycrystals. Noting that these potentials are given as averages, several quadrature formulae are investigated to express these integrals of nonlinear functions of local fields in terms of the moments of these fields. Two of these quadrature formulae reduce to known schemes, including a recent proposition (Ponte Castañeda 2015 Proc. R. Soc. A 471, 20150665 (doi:10.1098/rspa.2015.0665)) obtained by completely different means. Other formulae are also reviewed that make use of statistical information on the fields beyond their first and second moments. These quadrature formulae are applied to the estimation of effective potentials in polycrystals governed by two potentials, by means of a reduced-order model proposed by the authors (non-uniform transformation field analysis). It is shown how the quadrature formulae improve on the tangent second-order approximation in porous crystals at high stress triaxiality. It is found that, in order to retrieve a satisfactory accuracy for highly nonlinear porous crystals under high stress triaxiality, a quadrature formula of higher order is required.
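The core idea, replacing the average of a nonlinear function by a quadrature over a few points chosen to match low-order field moments, can be shown generically. A sketch with a symmetric two-point rule matching the first and second moments; this is a generic stand-in, not the paper's specific formulae:

```python
import numpy as np

def two_point_average(w, mean, second_moment):
    """Approximate <w(e)> from the first two moments of e: the points
    e± = m ± s with weights 1/2 reproduce both moments exactly."""
    s = np.sqrt(second_moment - mean**2)   # standard deviation from moments
    return 0.5 * (w(mean + s) + w(mean - s))

w = lambda e: np.abs(e)**3.5 / 3.5          # power-law potential (illustrative)
rng = np.random.default_rng(0)
e = rng.lognormal(0.0, 0.5, 10**6)          # stand-in "local field" samples
exact = np.mean(w(e))
tangent = w(e.mean())                       # first-moment-only ("tangent") estimate
quad = two_point_average(w, e.mean(), np.mean(e**2))
print(exact, tangent, quad)                 # the quadrature improves on tangent
```

For a convex potential the tangent estimate systematically undershoots (Jensen's inequality), which is why bringing in the second moment, and for strongly nonlinear cases higher moments, improves the estimate.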
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnett, Alex H.; Betcke, Timo; School of Mathematics, University of Manchester, Manchester, M13 9PL
2007-12-15
We report the first large-scale statistical study of very high-lying eigenmodes (quantum states) of the mushroom billiard proposed by L. A. Bunimovich [Chaos 11, 802 (2001)]. The phase space of this mixed system is unusual in that it has a single regular region and a single chaotic region, and no KAM hierarchy. We verify Percival's conjecture to high accuracy (1.7%). We propose a model for dynamical tunneling and show that it predicts well the chaotic components of predominantly regular modes. Our model explains our observed density of such superpositions dying as E^(-1/3) (E is the eigenvalue). We compare eigenvalue spacing distributions against Random Matrix Theory expectations, using 16 000 odd modes (an order of magnitude more than any existing study). We outline new variants of mesh-free boundary collocation methods which enable us to achieve high accuracy and high mode numbers (~10^5) orders of magnitude faster than with competing methods.
A LES-based Eulerian-Lagrangian approach to predict the dynamics of bubble plumes
NASA Astrophysics Data System (ADS)
Fraga, Bruño; Stoesser, Thorsten; Lai, Chris C. K.; Socolofsky, Scott A.
2016-01-01
An approach for Eulerian-Lagrangian large-eddy simulation of bubble plume dynamics is presented and its performance evaluated. The main numerical novelties consist in defining the gas-liquid coupling based on the bubble size to mesh resolution ratio (Dp/Δx) and the interpolation between Eulerian and Lagrangian frameworks through the use of delta functions. The model's performance is thoroughly validated for a bubble plume in a cubic tank in initially quiescent water using experimental data obtained from high-resolution ADV and PIV measurements. The predicted time-averaged velocities and second-order statistics show good agreement with the measurements, including the reproduction of the anisotropic nature of the plume's turbulence. Further, the predicted Eulerian and Lagrangian velocity fields, second-order turbulence statistics and interfacial gas-liquid forces are quantified and discussed as well as the visualization of the time-averaged primary and secondary flow structure in the tank.
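A common concrete choice for such Eulerian-Lagrangian coupling is a regularized delta kernel. Peskin's 4-point function is sketched below as one plausible example; the paper's actual kernel and its Dp/Δx-dependent switching are not reproduced:

```python
import numpy as np

def peskin_delta(r):
    """Peskin's 4-point regularized delta; `r` is the bubble-to-node
    distance in grid units. One standard kernel choice, illustrative only."""
    r = np.abs(np.asarray(r, dtype=float))
    phi = np.zeros_like(r)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r < 2.0)
    phi[m1] = (3 - 2 * r[m1] + np.sqrt(1 + 4 * r[m1] - 4 * r[m1]**2)) / 8
    phi[m2] = (5 - 2 * r[m2] - np.sqrt(-7 + 12 * r[m2] - 4 * r[m2]**2)) / 8
    return phi

# Spread a bubble's reaction force onto the 1-D nodes around x_p = 3.3.
x_nodes = np.arange(8.0)
weights = peskin_delta(x_nodes - 3.3)
print(weights, weights.sum())   # weights sum to 1 (discrete delta property)
```

The same weights interpolate the Eulerian velocity to the bubble location, which keeps the two-way momentum exchange conservative.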
De Groote, Sandra L.; Blecic, Deborah D.; Martin, Kristin
2013-01-01
Objective: Libraries require efficient and reliable methods to assess journal use. Vendors provide complete counts of articles retrieved from their platforms. However, if a journal is available on multiple platforms, several sets of statistics must be merged. Link-resolver reports merge data from all platforms into one report but only record partial use because users can access library subscriptions from other paths. Citation data are limited to publication use. Vendor, link-resolver, and local citation data were examined to determine correlation. Because link-resolver statistics are easy to obtain, the study library especially wanted to know if they correlate highly with the other measures. Methods: Vendor, link-resolver, and local citation statistics for the study institution were gathered for health sciences journals. Spearman rank-order correlation coefficients were calculated. Results: There was a high positive correlation between all three data sets, with vendor data commonly showing the highest use. However, a small percentage of titles showed anomalous results. Discussion and Conclusions: Link-resolver data correlate well with vendor and citation data, but due to anomalies, low link-resolver data would best be used to suggest titles for further evaluation using vendor data. Citation data may not be needed as it correlates highly with other measures. PMID:23646026
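The headline statistic here is a Spearman rank-order correlation, which is reproduced in a few lines below; the per-journal counts are invented for illustration:

```python
from scipy.stats import spearmanr

# Hypothetical per-journal annual use counts from the three sources.
vendor        = [5200, 310, 47, 1800, 95, 12, 640]
link_resolver = [ 900,  80, 10,  400, 30,  2, 150]
citations     = [  60,   4,  1,   25,  3,  0,  11]

rho_vl, p_vl = spearmanr(vendor, link_resolver)
rho_vc, p_vc = spearmanr(vendor, citations)
print(f"vendor~link-resolver rho={rho_vl:.2f}, vendor~citation rho={rho_vc:.2f}")
```

Because Spearman works on ranks, it captures exactly the "do the sources order journals the same way" question, even though vendor counts run much higher than the other two measures.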
[The development of hospital medical supplies information management system].
Cao, Shaoping; Gu, Hongqing; Zhang, Peng; Wang, Qiang
2010-05-01
To computerize the information management of medical materials in order to improve the efficiency of medical supplies consumption and to develop a new technical approach to hospital material support. A hospital material management information system was developed using C#.NET and JAVA, with dedicated management modules, generation of various statistical reports, and standard operating procedures. The system is convenient, functional, and robust, with fluent statistical functions. It provides complete, up-to-date dynamic information on hospital supplies and serves as a modern and effective tool for hospital materials management.
The Behavior of Filters and Smoothers for Strongly Nonlinear Dynamics
NASA Technical Reports Server (NTRS)
Zhu, Yanqiu; Cohn, Stephen E.; Todling, Ricardo
1999-01-01
The Kalman filter is the optimal filter in the presence of known Gaussian error statistics and linear dynamics. Extending the filter to nonlinear dynamics is nontrivial in the sense of appropriately representing high order moments of the statistics. Monte Carlo, ensemble-based, methods have been advocated as the methodology for representing high order moments without any questionable closure assumptions (e.g., Miller 1994). Investigation along these lines has been conducted for highly idealized dynamics such as the strongly nonlinear Lorenz (1963) model as well as more realistic models of the oceans (Evensen and van Leeuwen 1996) and atmosphere (Houtekamer and Mitchell 1998). A few relevant issues in this context are the number of ensemble members necessary to properly represent the error statistics, and the modifications to the usual filter equations necessary to allow for a correct update of the ensemble members (Burgers 1998). The ensemble technique has also been applied to the problem of smoothing, for which similar questions apply. Ensemble smoother examples, however, are quite puzzling in that the state estimates are worse than for their filter analogue (Evensen 1997). In this study, we use concepts in probability theory to revisit the ensemble methodology for filtering and smoothing in data assimilation. We use the Lorenz (1963) model to test and compare the behavior of a variety of implementations of ensemble filters. We also implement ensemble smoothers that are able to perform better than their filter counterparts. A discussion of the feasibility of these techniques for large data assimilation problems will be given at the time of the conference.
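A minimal sketch of an ensemble Kalman filter on the Lorenz (1963) model, using the perturbed-observation analysis update of Burgers et al. (1998) that the abstract alludes to. Ensemble size, observation network, assimilation interval, and the forward-Euler integrator are arbitrary choices for illustration:

```python
import numpy as np

def lorenz63(x, dt=0.01, sigma=10.0, rho=28.0, beta=8/3):
    """One forward-Euler step of the Lorenz (1963) system (coarse,
    but adequate for a demo; a serious study would use RK4)."""
    dx = np.array([sigma * (x[1] - x[0]),
                   x[0] * (rho - x[2]) - x[1],
                   x[0] * x[1] - beta * x[2]])
    return x + dt * dx

def enkf_step(ensemble, obs, H, r_std, rng):
    """Stochastic (perturbed-observation) EnKF analysis update."""
    N = ensemble.shape[0]
    Xf = ensemble - ensemble.mean(0)
    Pf = Xf.T @ Xf / (N - 1)                          # sample forecast covariance
    S = H @ Pf @ H.T + r_std**2 * np.eye(H.shape[0])
    K = Pf @ H.T @ np.linalg.inv(S)                   # Kalman gain
    perturbed = obs + rng.normal(0, r_std, (N, H.shape[0]))
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

rng = np.random.default_rng(0)
H = np.array([[1.0, 0.0, 0.0]])                       # observe x only
truth = np.array([1.0, 1.0, 1.0])
ens = truth + rng.normal(0, 2.0, (20, 3))             # 20 members
for t in range(1000):
    truth = lorenz63(truth)
    ens = np.apply_along_axis(lorenz63, 1, ens)
    if t % 25 == 0:                                   # assimilate every 25 steps
        y = H @ truth + rng.normal(0, 1.0, 1)
        ens = enkf_step(ens, y, H, 1.0, rng)
print("final mean error:", np.linalg.norm(ens.mean(0) - truth))
```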
A stochastic fractional dynamics model of space-time variability of rain
NASA Astrophysics Data System (ADS)
Kundu, Prasun K.; Travis, James E.
2013-09-01
Rain varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, which allows a concise description of the second moment statistics of rain at any prescribed space-time averaging scale. The model is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida, and on the Kwajalein Atoll, Marshall Islands, in the tropical Pacific. We estimate the parameters by tuning them to fit the second moment statistics of radar data at the smaller spatiotemporal scales. The model predictions are then found to fit the second moment statistics of the gauge data reasonably well at these scales without any further adjustment.
Image statistics underlying natural texture selectivity of neurons in macaque V4
Okazawa, Gouki; Tajima, Satohiro; Komatsu, Hidehiko
2015-01-01
Our daily visual experiences are inevitably linked to recognizing the rich variety of textures. However, how the brain encodes and differentiates a plethora of natural textures remains poorly understood. Here, we show that many neurons in macaque V4 selectively encode sparse combinations of higher-order image statistics to represent natural textures. We systematically explored neural selectivity in a high-dimensional texture space by combining texture synthesis and efficient-sampling techniques. This yielded parameterized models for individual texture-selective neurons. The models provided parsimonious but powerful predictors of each neuron's preferred textures using a sparse combination of image statistics. As a whole population, the neuronal tuning was distributed in a way suitable for categorizing textures and quantitatively predicts human ability to discriminate textures. Together, we suggest that the collective representation of visual image statistics in V4 plays a key role in organizing natural texture perception. PMID:25535362
Reducing high-order perineal laceration during operative vaginal delivery.
Hirsch, Emmet; Haney, Elaine I; Gordon, Trent E J; Silver, Richard K
2008-06-01
This study was undertaken to assess the impact of a focused intervention on reducing high-order (third and fourth degree) perineal lacerations during operative vaginal delivery. The following recommendations for clinical management were promulgated by departmental lectures, distribution of pertinent articles and manuals, training of physicians, and prominent display of an instructional poster: (1) increased utilization of vacuum extraction over forceps delivery; (2) conversion of occiput posterior to anterior positions before delivery; (3) performance of mediolateral episiotomy if episiotomy was deemed necessary; (4) flexion of the fetal head and maintenance of axis traction; (5) early disarticulation of forceps; and (6) reduced maternal effort at expulsion. Peer comparison was encouraged by provision of individual and departmental statistics. Clinical data were extracted from the labor and delivery database and the medical record. One hundred fifteen operative vaginal deliveries occurred in the 3 quarters preceding the intervention, compared with 100 afterward (P = .36). High-order laceration with operative vaginal delivery declined from 41% to 26% (P = .02), coincident with increased use of vacuum (16% vs 29% of operative vaginal deliveries, P = .02); fewer high-order lacerations after episiotomy (63% vs 22%, P = .003); a nonsignificant reduction in performance of episiotomy (30% vs 23%, P = .22); and a nonsignificant increase in mediolateral episiotomy (14% vs 30% of episiotomies, P = .19). Introduction of formal practice recommendations and performance review was associated with diminished high-order perineal injury with operative vaginal delivery.
ERIC Educational Resources Information Center
Barnes, Jessica J.; Woolrich, Mark W.; Baker, Kate; Colclough, Giles L.; Astle, Duncan E.
2016-01-01
Functional connectivity is the statistical association of neuronal activity time courses across distinct brain regions, supporting specific cognitive processes. This coordination of activity is likely to be highly important for complex aspects of cognition, such as the communication of fluctuating task goals from higher-order control regions to…
A High Precision Prediction Model Using Hybrid Grey Dynamic Model
ERIC Educational Resources Information Center
Li, Guo-Dong; Yamaguchi, Daisuke; Nagai, Masatake; Masuda, Shiro
2008-01-01
In this paper, we propose a new prediction analysis model which combines the first-order, one-variable Grey differential equation model (abbreviated as the GM(1,1) model) from grey system theory and the time series Autoregressive Integrated Moving Average (ARIMA) model from statistics theory. We abbreviate the combined GM(1,1) ARIMA model as ARGM(1,1)…
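The GM(1,1) half of the hybrid is classical and compact. A sketch of the grey model alone; the ARIMA coupling that defines ARGM(1,1) is not shown, and the data are invented:

```python
import numpy as np

def gm11_forecast(x, horizon=3):
    """Classical GM(1,1): fit dx1/dt + a*x1 = b on the accumulated series
    x1 (AGO), then difference the fitted x1 back to the original scale."""
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)                                # accumulated generating operation
    z = 0.5 * (x1[1:] + x1[:-1])                     # background values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]  # least squares for (a, b)
    k = np.arange(len(x) + horizon)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a # time-response function
    return np.diff(x1_hat, prepend=0.0)              # IAGO: back to original scale

data = [102, 110, 123, 131, 142]                     # invented short series
print(np.round(gm11_forecast(data), 1))              # fitted values + 3-step forecast
```

In the hybrid scheme, the residuals left by such a grey fit are exactly what an ARIMA component is well suited to model.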
Ten Years Later: Locating and Interviewing Children of Drug Abusers
ERIC Educational Resources Information Center
Haggerty, Kevin P.; Fleming, Charles B.; Catalano, Richard F.; Petrie, Renee S.; Rubin, Ronald J.; Grassley, Mary H.
2008-01-01
Longitudinal studies require high follow-up rates in order to maintain statistical power, reduce bias, and enhance the generalizability of results. This study reports on locating and survey completion for a 10-year follow-up of the Focus on Families project, an investigation of 130 families headed by parents who were enrolled in methadone…
ERIC Educational Resources Information Center
Koziol, Natalie A.; Bovaird, James A.
2018-01-01
Evaluations of measurement invariance provide essential construct validity evidence--a prerequisite for seeking meaning in psychological and educational research and ensuring fair testing procedures in high-stakes settings. However, the quality of such evidence is partly dependent on the validity of the resulting statistical conclusions. Type I or…
Factors contributing to academic achievement: a Bayesian structure equation modelling study
NASA Astrophysics Data System (ADS)
Payandeh Najafabadi, Amir T.; Omidi Najafabadi, Maryam; Farid-Rohani, Mohammad Reza
2013-06-01
In Iran, high school graduates enter university after taking a very difficult entrance exam called the Konkoor. Therefore, only the top-performing students are admitted by universities to continue their bachelor's education in statistics. Surprisingly, most of these students statistically fall into the following categories: (1) do not succeed in their education despite their excellent performance on the Konkoor and in high school; (2) graduate with a grade point average (GPA) that is considerably lower than their high school GPA; (3) continue their master's education in majors other than statistics; and (4) try to find jobs unrelated to statistics. This article employs the well-known and powerful statistical technique, Bayesian structural equation modelling (SEM), to study the academic success of recent graduates who have studied statistics at Shahid Beheshti University in Iran. This research: (i) considered academic success as a latent variable, measured by GPA and other indicators of academic success in the target population; (ii) employed Bayesian SEM, which works properly for small sample sizes and ordinal variables; (iii) developed, based on the literature, five main factors that affect academic success; and (iv) considered several standard psychological tests and measured characteristics such as 'self-esteem' and 'anxiety'. We then study the impact of such factors on the academic success of the target population. Six factors that positively impact student academic success were identified in the following order of relative impact (from greatest to least): 'Teaching-Evaluation', 'Learner', 'Environment', 'Family', 'Curriculum' and 'Teaching Knowledge'.
Impaired Statistical Learning in Developmental Dyslexia
Thiessen, Erik D.; Holt, Lori L.
2015-01-01
Purpose: Developmental dyslexia (DD) is commonly thought to arise from phonological impairments. However, an emerging perspective is that a more general procedural learning deficit, not specific to phonological processing, may underlie DD. The current study examined whether individuals with DD are capable of extracting statistical regularities across sequences of passively experienced speech and nonspeech sounds. Such statistical learning is believed to be domain-general, to draw upon procedural learning systems, and to relate to language outcomes. Method: DD and control groups were familiarized with a continuous stream of syllables or sine-wave tones, the ordering of which was defined by high or low transitional probabilities across adjacent stimulus pairs. Participants subsequently judged two 3-stimulus test items with either high or low statistical coherence as being the most similar to the sounds heard during familiarization. Results: As with control participants, the DD group was sensitive to the transitional probability structure of the familiarization materials, as evidenced by above-chance performance. However, the performance of participants with DD was significantly poorer than controls across linguistic and nonlinguistic stimuli. In addition, reading-related measures were significantly correlated with statistical learning performance for both speech and nonspeech material. Conclusion: Results are discussed in light of procedural learning impairments among participants with DD. PMID:25860795
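The transitional-probability structure used in such familiarization streams is simple to compute. A toy sketch with made-up two-letter "syllables" (not the study's stimuli):

```python
from collections import Counter

stream = "tupiro golabu bidaku tupiro bidaku golabu".replace(" ", "")
syll = [stream[i:i + 2] for i in range(0, len(stream), 2)]  # 2-letter syllables

pair = Counter(zip(syll, syll[1:]))
first = Counter(syll[:-1])
tp = {p: c / first[p[0]] for p, c in pair.items()}  # P(next | current)

# Within-word transitions (e.g. tu->pi) come out high; transitions across
# word boundaries come out low, the cue statistical learners exploit.
print(sorted(tp.items(), key=lambda kv: -kv[1])[:5])
```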
Ma, Shengsheng; Zheng, Dongjian; Lin, Ling; Meng, Fanjian; Yuan, Yonggang
2015-03-01
To compare vision quality following phacoemulsification cataract extraction and implantation of a Big Bag or Akreos Adapt intraocular lens (IOL) in patients diagnosed with high myopia complicated with cataract. This was a randomized prospective controlled study. Patients with high myopia complicated with cataract, with axial length ≥ 28 mm and corneal astigmatism ≤ 1D, were enrolled and randomly divided into the Big Bag and Akreos Adapt IOL groups. All patients underwent phacoemulsification cataract extraction and lens implantation. At 3 months after surgery, intraocular high-order aberration was measured by a Tracey-iTrace wavefront aberrometer at a pupil diameter of 5 mm in an absolutely dark room and statistically compared between the two groups. Images of the anterior segment of the eyes were photographed with a Scheimpflug camera using a Pentacam three-dimensional anterior segment analyzer. The tilt and decentration of the IOL were calculated with Image-Pro Plus 6.0 imaging analysis software and statistically compared between the two groups. In total, 127 patients (127 eyes), including 52 males and 75 females, were enrolled in this study. The total high-order aberration and coma in the Akreos Adapt group (59 eyes) were significantly higher than those in the Big Bag group (P < 0.05). The clover and spherical aberration did not differ between the two groups (P > 0.05). The horizontal and vertical decentration were significantly smaller in the Big Bag group than in the Akreos Adapt group (both P < 0.05), whereas the tilt of the IOL did not significantly differ between the two groups (P > 0.05). Both Big Bag and Akreos Adapt IOLs possess relatively good intraocular stability when implanted in patients with high myopia. Compared with the Akreos Adapt IOL, the Big Bag IOL presents smaller intraocular high-order aberration. Coma is the major difference between the two groups.
Reliability of high-power QCW arrays
NASA Astrophysics Data System (ADS)
Feeler, Ryan; Junghans, Jeremy; Remley, Jennifer; Schnurbusch, Don; Stephens, Ed
2010-02-01
Northrop Grumman Cutting Edge Optronics has developed a family of arrays for high-power QCW operation. These arrays are built using CTE-matched heat sinks and hard solder in order to maximize the reliability of the devices. A summary of a recent life test is presented in order to quantify the reliability of QCW arrays and associated laser gain modules. A statistical analysis of the raw lifetime data is presented in order to quantify the data in such a way that is useful for laser system designers. The life tests demonstrate the high level of reliability of these arrays in a number of operating regimes. For single-bar arrays, a MTTF of 19.8 billion shots is predicted. For four-bar samples, a MTTF of 14.6 billion shots is predicted. In addition, data representing a large pump source is analyzed and shown to have an expected lifetime of 13.5 billion shots. This corresponds to an expected operational lifetime of greater than ten thousand hours at repetition rates less than 370 Hz.
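The shots-to-hours conversion is a one-line check. Assuming continuous operation at the stated repetition rate,

```latex
\[
  t \;=\; \frac{N_{\text{shots}}}{f}
    \;=\; \frac{13.5\times10^{9}\ \text{shots}}
               {370\ \text{Hz}\times 3600\ \text{s/h}}
    \;\approx\; 1.0\times10^{4}\ \text{h},
\]
```

consistent with the quoted operational lifetime of greater than ten thousand hours below 370 Hz.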
Algorithm for computing descriptive statistics for very large data sets and the exa-scale era
NASA Astrophysics Data System (ADS)
Beekman, Izaak
2017-11-01
An algorithm for Single-point, Parallel, Online, Converging Statistics (SPOCS) is presented. It is suited for in situ analysis that traditionally would be relegated to post-processing, and can be used to monitor statistical convergence and estimate the error/residual in the quantity, which is useful for uncertainty quantification too. Today, data may be generated at an overwhelming rate by numerical simulations and by proliferating sensing apparatuses in experiments and engineering applications. Monitoring descriptive statistics in real time lets costly computations and experiments be gracefully aborted if an error has occurred, and monitoring the level of statistical convergence allows them to be run for the shortest amount of time required to obtain good results. This algorithm extends work by Pébay (Sandia Report SAND2008-6212). Pébay's algorithms are recast into a converging delta formulation, with provably favorable properties. The mean, variance, covariances and arbitrary higher order statistical moments are computed in one pass. The algorithm is tested using Sillero, Jiménez, & Moser's (2013, 2014) publicly available UPM high Reynolds number turbulent boundary layer data set, demonstrating numerical robustness, efficiency and other favorable properties.
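The single-pass update/merge machinery underlying such converging statistics is Welford's recurrence together with the pairwise combination formulas of Chan and Pébay. A sketch for mean and variance only (higher moments follow the same pattern; the class and variable names are ours, not the paper's):

```python
import numpy as np

class RunningStats:
    """Single-pass (Welford-style) mean/variance with a pairwise merge."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0  # m2: sum of squared deviations

    def push(self, x):
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)            # uses the *updated* mean

    def merge(self, other):
        """Combine two partial results, e.g. from two MPI ranks."""
        n = self.n + other.n
        d = other.mean - self.mean
        self.mean += d * other.n / n
        self.m2 += other.m2 + d * d * self.n * other.n / n
        self.n = n

    @property
    def variance(self):
        return self.m2 / (self.n - 1)

rng = np.random.default_rng(0)
data = rng.normal(3.0, 2.0, 100_000)
a, b = RunningStats(), RunningStats()
for x in data[:60_000]: a.push(x)
for x in data[60_000:]: b.push(x)
a.merge(b)
print(a.mean, a.variance)   # matches data.mean() and data.var(ddof=1)
```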
Capturing rogue waves by multi-point statistics
NASA Astrophysics Data System (ADS)
Hadjihosseini, A.; Wächter, Matthias; Hoffmann, N. P.; Peinke, J.
2016-01-01
As an example of a complex system with extreme events, we investigate ocean wave states exhibiting rogue waves. We present a statistical method of data analysis based on multi-point statistics which, for the first time, allows extreme rogue wave events to be captured in a statistically satisfactory manner. The key to the success of the approach is mapping the complexity of multi-point data onto the statistics of hierarchically ordered height increments for different time scales, for which we can show that a stochastic cascade process with Markov properties is governed by a Fokker-Planck equation. Conditional probabilities as well as the Fokker-Planck equation itself can be estimated directly from the available observational data. With this stochastic description, surrogate data sets can in turn be generated, which makes it possible to work out arbitrary statistical features of the complex sea state in general, and of extreme rogue wave events in particular. The results also open up new perspectives for forecasting the occurrence probability of extreme rogue wave events, and even for forecasting the occurrence of individual rogue waves based on precursory dynamics.
NASA Astrophysics Data System (ADS)
De Michelis, Paola; Consolini, Giuseppe; Tozzi, Roberta; Marcucci, Maria Federica
2017-10-01
This paper attempts to explore the statistical scaling features of high-latitude geomagnetic field fluctuations at Swarm altitude. Data for this study are low-resolution (1 Hz) magnetic data recorded by the vector field magnetometer on board Swarm A satellite over 1 year (from 15 April 2014 to 15 April 2015). The first- and second-order structure function scaling exponents and the degree of intermittency of the fluctuations of the intensity of the horizontal component of the magnetic field at high northern latitudes have been evaluated for different interplanetary magnetic field orientations in the GSM Y-Z plane and seasons. In the case of the first-order structure function scaling exponent, a comparison between the average spatial distributions of the obtained values and the statistical convection patterns obtained using a Super Dual Auroral Radar Network dynamic model (CS10 model) has been also considered. The obtained results support the idea that the knowledge of the scaling features of the geomagnetic field fluctuations can help in the characterization of the different ionospheric turbulence regimes of the medium crossed by Swarm A satellite. This study shows that different turbulent regimes of the geomagnetic field fluctuations exist in the regions characterized by a double-cell convection pattern and in those regions near the border of the convective structures.
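Structure-function scaling exponents of the kind analyzed here are estimated from log-log fits of S_q(tau) = <|x(t+tau) - x(t)|^q> against the lag tau. A minimal sketch, validated on a Brownian-like signal for which zeta_q = q/2 is known exactly (lags and signal length are arbitrary choices):

```python
import numpy as np

def structure_exponents(x, qs=(1, 2), taus=(1, 2, 4, 8, 16, 32)):
    """Estimate zeta_q from S_q(tau) ~ tau**zeta_q via a log-log fit
    over the given lags (in samples)."""
    logS = np.array([[np.log(np.mean(np.abs(x[t:] - x[:-t])**q)) for t in taus]
                     for q in qs])
    logt = np.log(taus)
    return [np.polyfit(logt, row, 1)[0] for row in logS]  # fitted slopes

# Test signal: cumulative sum of Gaussian noise (Brownian motion),
# for which (zeta_1, zeta_2) should come out near (0.5, 1.0).
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=200_000))
print(np.round(structure_exponents(x), 2))
```

Departures of zeta_q from the linear-in-q behavior, and in particular the gap between the fitted zeta_2 and 2*zeta_1, are what quantify the intermittency discussed in the paper.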
A Novel Approach for Adaptive Signal Processing
NASA Technical Reports Server (NTRS)
Chen, Ya-Chin; Juang, Jer-Nan
1998-01-01
Adaptive linear predictors have been used extensively in practice in a wide variety of forms. In the main, their theoretical development is based upon the assumption of stationarity of the signals involved, particularly with respect to the second order statistics. On this basis, the well-known normal equations can be formulated. If high-order statistical stationarity is assumed, then the equivalent normal equations involve high-order signal moments. In either case, the cross moments (second or higher) are needed. This renders the adaptive prediction procedure non-blind. A novel procedure for blind adaptive prediction has been proposed, and considerable implementation work has been carried out over the past year. The approach is based upon a suitable interpretation of blind equalization methods that satisfy the constant modulus property and offers significant departures from the standard prediction methods. These blind adaptive algorithms are derived by formulating Lagrange equivalents from mechanisms of constrained optimization. In this report, other new update algorithms are derived from the fundamental concepts of advanced system identification to carry out the proposed blind adaptive prediction. The results of the work can be extended to a number of control-related problems, such as disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. The applications implemented are in speech processing, such as coding and synthesis. Simulations are included to verify the novel modelling method.
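A minimal sketch of the constant-modulus idea in its common stochastic-gradient (Godard/CMA) form; the report's Lagrangian-constrained variants differ in how the update is derived, and the channel, step size, and tap count below are invented for illustration:

```python
import numpy as np

def cma_equalize(x, taps=11, mu=1e-3, R2=1.0):
    """Constant modulus algorithm: adapt FIR weights w to drive |y|^2
    toward the modulus R2, using no reference signal (blind)."""
    w = np.zeros(taps); w[taps // 2] = 1.0        # center-spike initialization
    y = np.zeros_like(x)
    for n in range(taps, len(x)):
        u = x[n - taps:n][::-1]                   # regressor (most recent first)
        y[n] = w @ u
        e = y[n] * (y[n]**2 - R2)                 # gradient term of the CM cost
        w -= mu * e * u                           # LMS-style blind update
    return y, w

# Binary +/-1 source through a mild unknown channel, then blind equalization.
rng = np.random.default_rng(0)
s = rng.choice([-1.0, 1.0], 20_000)
x = np.convolve(s, [1.0, 0.4, -0.2])[:len(s)]
y, w = cma_equalize(x)
print("modulus dispersion before/after:",
      np.var(x**2 - 1), np.var(y[200:]**2 - 1))  # dispersion shrinks
```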
Statistical physics of multicomponent alloys using KKR-CPA
Khan, Suffian N.; Staunton, Julie B.; Stocks, George Malcolm
2016-02-16
We apply variational principles from statistical physics and the Landau theory of phase transitions to multicomponent alloys using the multiple-scattering theory of Korringa-Kohn-Rostoker (KKR) and the coherent potential approximation (CPA). This theory is a multicomponent generalization of the S(2) theory of binary alloys developed by G. M. Stocks, J. B. Staunton, D. D. Johnson and others. It is highly relevant to the chemical phase stability of high-entropy alloys as it predicts the kind and size of finite-temperature chemical fluctuations. In doing so it includes effects of rearranging charge and other electronics due to changing site occupancies. When chemical fluctuations grow without bound an absolute instability occurs and a second-order order-disorder phase transition may be inferred. The S(2) theory is predicated on the fluctuation-dissipation theorem; thus we derive the linear response of the CPA medium to perturbations in site-dependent chemical potentials in great detail. The theory lends itself to a natural interpretation in terms of competing effects: entropy driving disorder and favorable pair interactions driving atomic ordering. Moreover, to further clarify interpretation we present results for representative ternary alloys CuAgAu, NiPdPt, RhPdAg, and CoNiCu within a frozen charge (or band-only) approximation. These results include the so-called Onsager mean field correction that extends the temperature range for which the theory is valid.
Wiley, Jeffrey B.; Curran, Janet H.
2003-01-01
Methods for estimating daily mean flow-duration statistics for seven regions in Alaska and low-flow frequencies for one region, southeastern Alaska, were developed from daily mean discharges for streamflow-gaging stations in Alaska and conterminous basins in Canada. The 15-, 10-, 9-, 8-, 7-, 6-, 5-, 4-, 3-, 2-, and 1-percent duration flows were computed for the October-through-September water year for 222 stations in Alaska and conterminous basins in Canada. The 98-, 95-, 90-, 85-, 80-, 70-, 60-, and 50-percent duration flows were computed for the individual months of July, August, and September for 226 stations in Alaska and conterminous basins in Canada. The 98-, 95-, 90-, 85-, 80-, 70-, 60-, and 50-percent duration flows were computed for the season July-through-September for 65 stations in southeastern Alaska. The 7-day, 10-year and 7-day, 2-year low-flow frequencies for the season July-through-September were computed for 65 stations for most of southeastern Alaska. Low-flow analyses were limited to particular months or seasons in order to omit winter low flows, when ice effects reduce the quality of the records and validity of statistical assumptions. Regression equations for estimating the selected high-flow and low-flow statistics for the selected months and seasons for ungaged sites were developed from an ordinary-least-squares regression model using basin characteristics as independent variables. Drainage area and precipitation were significant explanatory variables for high flows, and drainage area, precipitation, mean basin elevation, and area of glaciers were significant explanatory variables for low flows. The estimating equations can be used at ungaged sites in Alaska and conterminous basins in Canada where streamflow regulation, streamflow diversion, urbanization, and natural damming and releasing of water do not affect the streamflow data for the given month or season. Standard errors of estimate ranged from 15 to 56 percent for high-duration flow statistics, 25 to greater than 500 percent for monthly low-duration flow statistics, 32 to 66 percent for seasonal low-duration flow statistics, and 53 to 64 percent for low-flow frequency statistics.
Helicity statistics in homogeneous and isotropic turbulence and turbulence models
NASA Astrophysics Data System (ADS)
Sahoo, Ganapati; De Pietro, Massimo; Biferale, Luca
2017-02-01
We study the statistical properties of helicity in direct numerical simulations of fully developed homogeneous and isotropic turbulence and in a class of turbulence shell models. We consider correlation functions based on combinations of vorticity and velocity increments that are not invariant under mirror symmetry. We also study the scaling properties of high-order structure functions based on the moments of the velocity increments projected on a subset of modes with either positive or negative helicity (chirality). We show that mirror symmetry is recovered at small scales, i.e., chiral terms are subleading and they are well captured by a dimensional argument plus anomalous corrections. These findings are also supported by a high-Reynolds-number study of helical shell models with the same chiral symmetry as the Navier-Stokes equations.
Romanyk, Dan L; George, Andrew; Li, Yin; Heo, Giseon; Carey, Jason P; Major, Paul W
2016-05-01
To investigate the influence of a rotational second-order bracket-archwire misalignment on the loads generated during third-order torque procedures. Specifically, torque in the second- and third-order directions was considered. An orthodontic torque simulator (OTS) was used to simulate the third-order torque between Damon Q brackets and 0.019 × 0.025-inch stainless steel archwires. Second-order misalignments were introduced in 0.5° increments from a neutral position, 0.0°, up to 3.0° of misalignment. A sample size of 30 brackets was used for each misalignment. The archwire was then rotated in the OTS from its neutral position up to 30° in 3° increments and then unloaded in the same increments. At each position, all forces and torques were recorded. Repeated-measures analysis of variance was used to determine if the second-order misalignments significantly affected torque values in the second- and third-order directions. From statistical analysis of the experimental data, it was found that the only statistically significant differences in third-order torque between a misaligned state and the neutral position occurred for 2.5° and 3.0° of misalignment, with mean differences of 2.54 Nmm and 2.33 Nmm, respectively. In addition, in pairwise comparisons of second-order torque for each misalignment increment, statistical differences were observed in all comparisons except for 0.0° vs 0.5° and 1.5° vs 2.0°. The introduction of a second-order misalignment during third-order torque simulation resulted in statistically significant differences in both second- and third-order torque response; however, the former is arguably clinically insignificant.
Statistical mechanics in the context of special relativity.
Kaniadakis, G
2002-11-01
In Ref. [Physica A 296, 405 (2001)], starting from the one-parameter deformation of the exponential function $\exp_\kappa(x)=(\sqrt{1+\kappa^2 x^2}+\kappa x)^{1/\kappa}$, a statistical mechanics has been constructed which reduces to the ordinary Boltzmann-Gibbs statistical mechanics as the deformation parameter $\kappa$ approaches zero. The distribution $f=\exp_\kappa(-\beta E+\beta\mu)$ obtained within this statistical mechanics shows a power-law tail and depends on the unspecified parameter $\beta$, containing all the information about the temperature of the system. On the other hand, the entropic form $S_\kappa=\int d^3p\,\bigl(c_\kappa f^{1+\kappa}+c_{-\kappa} f^{1-\kappa}\bigr)$, which after maximization produces the distribution $f$ and reduces to the standard Boltzmann-Shannon entropy $S_0$ as $\kappa\to 0$, contains the coefficient $c_\kappa$ whose expression involves, besides the Boltzmann constant, another unspecified parameter $\alpha$. In the present effort we show that $S_\kappa$ is the unique existing entropy obtained by a continuous deformation of $S_0$ that preserves unaltered its fundamental properties of concavity, additivity, and extensivity. These properties of $S_\kappa$ permit us to determine unequivocally the values of the above-mentioned parameters $\beta$ and $\alpha$. Subsequently, we explain the origin of the deformation mechanism introduced by $\kappa$ and show that this deformation emerges naturally within Einstein's special relativity. Furthermore, we extend the theory in order to treat statistical systems in a time-dependent and relativistic context. We then show that it is possible to determine, within a self-consistent scheme in special relativity, the value of the free parameter $\kappa$, which turns out to depend on the light speed $c$ and reduces to zero as $c\to\infty$, recovering in this way the ordinary statistical mechanics and thermodynamics. The statistical mechanics presented here contains no free parameters, preserves unaltered the mathematical and epistemological structure of ordinary statistical mechanics, and is suitable to describe a very large class of experimentally observed phenomena in low- and high-energy physics and in the natural, economic, and social sciences. Finally, in order to test the correctness and predictability of the theory, as a working example we consider the cosmic-ray spectrum, which spans 13 decades in energy and 33 decades in flux, finding high-quality agreement between our predictions and observed data.
A second-order Budyko-type parameterization of land-surface hydrology
NASA Technical Reports Server (NTRS)
Andreou, S. A.; Eagleson, P. S.
1982-01-01
A simple, second-order parameterization of the water fluxes at a land surface, for use as the appropriate boundary condition in general circulation models of the global atmosphere, was developed. The derived parameterization incorporates the high nonlinearities in the relationship between the near-surface soil moisture and the evaporation, runoff, and percolation fluxes. Based on the one-dimensional statistical-dynamical derivation of the annual water balance, it makes the transition to short-term prediction of the moisture fluxes through a Taylor expansion around the average annual soil moisture. The suggested parameterization is compared with other existing techniques and available measurements. A thermodynamic coupling is applied in order to obtain estimates of the surface ground temperature.
Zooming in on vibronic structure by lowest-value projection reconstructed 4D coherent spectroscopy
NASA Astrophysics Data System (ADS)
Harel, Elad
2018-05-01
A fundamental goal of chemical physics is an understanding of microscopic interactions in liquids at and away from equilibrium. In principle, this microscopic information is accessible by high-order and high-dimensionality nonlinear optical measurements. Unfortunately, the time required to execute such experiments increases exponentially with the dimensionality, while the signal decreases exponentially with the order of the nonlinearity. Recently, we demonstrated a non-uniform acquisition method based on radial sampling of the time-domain signal [W. O. Hutson et al., J. Phys. Chem. Lett. 9, 1034 (2018)]. The four-dimensional spectrum was then reconstructed by filtered back-projection using an inverse Radon transform. Here, we demonstrate an alternative reconstruction method based on the statistical analysis of different back-projected spectra which results in a dramatic increase in sensitivity and at least a 100-fold increase in dynamic range compared to conventional uniform sampling and Fourier reconstruction. These results demonstrate that alternative sampling and reconstruction methods enable applications of increasingly high-order and high-dimensionality methods toward deeper insights into the vibronic structure of liquids.
The discrimination of sea ice types using SAR backscatter statistics
NASA Technical Reports Server (NTRS)
Shuchman, Robert A.; Wackerman, Christopher C.; Maffett, Andrew L.; Onstott, Robert G.; Sutherland, Laura L.
1989-01-01
X-band (HH) synthetic aperture radar (SAR) data of sea ice collected during the Marginal Ice Zone Experiment in March and April of 1987 were statistically analyzed with respect to discriminating open water, first-year ice, multiyear ice, and Odden. Odden are large expanses of nilas ice that rapidly form in the Greenland Sea and transform into pancake ice. A first-order statistical analysis indicated that mean versus variance can segment out open water and first-year ice, and skewness versus modified skewness can segment the Odden and multiyear categories. In addition to first-order statistics, a model has been generated for the distribution function of the SAR ice data. Segmentation of ice types was also attempted using textural measurements; in this case, the general co-occurrence matrix was evaluated. The textural method did not generate better results than the first-order statistical approach.
Generalized statistical convergence of order β for sequences of fuzzy numbers
NASA Astrophysics Data System (ADS)
Altınok, Hıfsı; Karakaş, Abdulkadir; Altın, Yavuz
2018-01-01
In the present paper, we introduce the concepts of Δm-statistical convergence of order β for sequences of fuzzy numbers and strongly Δm-summable of order β for sequences of fuzzy numbers by using a modulus function f and taking supremum on metric d for 0 < β ≤ 1 and give some inclusion relations between them.
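For orientation, the classical (crisp) notion that the fuzzy-number version above generalizes can be stated as follows; this is standard background, not the paper's exact definition:

```latex
% A real sequence (x_k) is statistically convergent of order beta
% (0 < beta <= 1) to L when the density of indices where x_k strays
% from L by at least eps vanishes at rate n^beta:
\[
\lim_{n\to\infty}\frac{1}{n^{\beta}}
\bigl|\{\,k\le n : |x_k-L|\ge\varepsilon\,\}\bigr| = 0
\qquad\text{for every }\varepsilon>0 .
\]
```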
NASA Astrophysics Data System (ADS)
Matsubara, Takahiko
2003-02-01
We formulate a general method for perturbative evaluations of statistics of smoothed cosmic fields and provide useful formulae for application of the perturbation theory to various statistics. This formalism is an extensive generalization of the method used by Matsubara, who derived a weakly nonlinear formula of the genus statistic in a three-dimensional density field. After describing the general method, we apply the formalism to a series of statistics, including genus statistics, level-crossing statistics, Minkowski functionals, and a density extrema statistic, regardless of the dimensions in which each statistic is defined. The relation between the Minkowski functionals and other geometrical statistics is clarified. These statistics can be applied to several cosmic fields, including three-dimensional density field, three-dimensional velocity field, two-dimensional projected density field, and so forth. The results are detailed for second-order theory of the formalism. The effect of the bias is discussed. The statistics of smoothed cosmic fields as functions of rescaled threshold by volume fraction are discussed in the framework of second-order perturbation theory. In CDM-like models, their functional deviations from linear predictions plotted against the rescaled threshold are generally much smaller than that plotted against the direct threshold. There is still a slight meatball shift against rescaled threshold, which is characterized by asymmetry in depths of troughs in the genus curve. A theory-motivated asymmetry factor in the genus curve is proposed.
NASA Astrophysics Data System (ADS)
Mosby, Matthew; Matouš, Karel
2015-12-01
Three-dimensional simulations capable of resolving the large range of spatial scales, from the failure-zone thickness up to the size of the representative unit cell, in damage mechanics problems of particle reinforced adhesives are presented. We show that resolving this wide range of scales in complex three-dimensional heterogeneous morphologies is essential in order to apprehend fracture characteristics, such as strength, fracture toughness and shape of the softening profile. Moreover, we show that computations that resolve essential physical length scales capture the particle size-effect in fracture toughness, for example. In the vein of image-based computational materials science, we construct statistically optimal unit cells containing hundreds to thousands of particles. We show that these statistically representative unit cells are capable of capturing the first- and second-order probability functions of a given data-source with better accuracy than traditional inclusion packing techniques. In order to accomplish these large computations, we use a parallel multiscale cohesive formulation and extend it to finite strains including damage mechanics. The high-performance parallel computational framework is executed on up to 1024 processing cores. A mesh convergence and a representative unit cell study are performed. Quantifying the complex damage patterns in simulations consisting of tens of millions of computational cells and millions of highly nonlinear equations requires data-mining the parallel simulations, and we propose two damage metrics to quantify the damage patterns. A detailed study of volume fraction and filler size on the macroscopic traction-separation response of heterogeneous adhesives is presented.
A note on generalized Genome Scan Meta-Analysis statistics
Koziol, James A; Feng, Anne C
2005-01-01
Background: Wise et al. introduced a rank-based statistical technique for meta-analysis of genome scans, the Genome Scan Meta-Analysis (GSMA) method. Levinson et al. recently described two generalizations of the GSMA statistic: (i) a weighted version of the GSMA statistic, so that different studies could be ascribed different weights for analysis; and (ii) an order statistic approach, reflecting the fact that a GSMA statistic can be computed for each chromosomal region or bin width across the various genome scan studies. Results: We provide an Edgeworth approximation to the null distribution of the weighted GSMA statistic, examine the limiting distribution of the GSMA statistics under the order statistic formulation, and quantify the relevance of the pairwise correlations of the GSMA statistics across different bins on this limiting distribution. We also remark on aggregate criteria and multiple testing for determining significance of GSMA results. Conclusion: Theoretical considerations detailed herein can lead to clarification and simplification of testing criteria for generalizations of the GSMA statistic. PMID:15717930
Bettenbühl, Mario; Rusconi, Marco; Engbert, Ralf; Holschneider, Matthias
2012-01-01
Complex biological dynamics often generate sequences of discrete events which can be described as a Markov process. The order of the underlying Markovian stochastic process is fundamental for characterizing statistical dependencies within sequences. As an example for this class of biological systems, we investigate the Markov order of sequences of microsaccadic eye movements from human observers. We calculate the integrated likelihood of a given sequence for various orders of the Markov process and use this in a Bayesian framework for statistical inference on the Markov order. Our analysis shows that data from most participants are best explained by a first-order Markov process. This is compatible with recent findings of a statistical coupling of subsequent microsaccade orientations. Our method might prove to be useful for a broad class of biological systems.
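A minimal sketch of this kind of Bayesian model-order comparison, assuming Dirichlet priors on the transition probabilities and a toy symbol sequence (the authors' data, priors, and implementation details are not reproduced here):

```python
# Hedged sketch of Bayesian Markov-order comparison for a symbol
# sequence: each candidate order m gets an integrated likelihood with
# Dirichlet(alpha) priors on its transition probabilities. Priors,
# names, and the toy data are assumptions, not the authors' setup.
import numpy as np
from collections import Counter
from scipy.special import gammaln

def log_evidence(seq, order, alphabet, alpha=1.0):
    """Integrated likelihood of seq under an order-`order` Markov chain."""
    K = len(alphabet)
    counts = Counter()
    for i in range(order, len(seq)):
        counts[(tuple(seq[i - order:i]), seq[i])] += 1
    logp = 0.0
    for c in {ctx for (ctx, _) in counts}:
        n = [counts[(c, s)] for s in alphabet]
        logp += (gammaln(K * alpha) - K * gammaln(alpha)
                 + sum(gammaln(v + alpha) for v in n)
                 - gammaln(sum(n) + K * alpha))
    return logp

rng = np.random.default_rng(0)
seq = list(rng.integers(0, 2, size=500))   # toy binary sequence
for m in range(4):                         # compare orders 0..3
    print(m, round(log_evidence(seq, m, alphabet=(0, 1)), 2))
```

Because the Dirichlet prior lets the transition probabilities be marginalized analytically, each candidate order is scored without any parameter fitting.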
Ivković, Miloš; Kuceyeski, Amy; Raj, Ashish
2012-01-01
Whole brain weighted connectivity networks were extracted from high resolution diffusion MRI data of 14 healthy volunteers. A statistically robust technique was proposed for the removal of questionable connections. Unlike most previous studies our methods are completely adapted for networks with arbitrary weights. Conventional statistics of these weighted networks were computed and found to be comparable to existing reports. After a robust fitting procedure using multiple parametric distributions it was found that the weighted node degree of our networks is best described by the normal distribution, in contrast to previous reports which have proposed heavy tailed distributions. We show that post-processing of the connectivity weights, such as thresholding, can influence the weighted degree asymptotics. The clustering coefficients were found to be distributed either as gamma or power-law distribution, depending on the formula used. We proposed a new hierarchical graph clustering approach, which revealed that the brain network is divided into a regular base-2 hierarchical tree. Connections within and across this hierarchy were found to be uncommonly ordered. The combined weight of our results supports a hierarchically ordered view of the brain, whose connections have heavy tails, but whose weighted node degrees are comparable. PMID:22761649
Comparative analysis of positive and negative attitudes toward statistics
NASA Astrophysics Data System (ADS)
Ghulami, Hassan Rahnaward; Ab Hamid, Mohd Rashid; Zakaria, Roslinazairimah
2015-02-01
Many statistics lecturers and statistics education researchers are interested in their students' attitudes toward statistics during the statistics course. A positive attitude toward statistics is vital because it encourages students to take an interest in the course and to master its core content, whereas students with negative attitudes toward statistics may feel discouraged, especially in group assignments, be at risk of failure, become highly emotional, and struggle to move forward. This study therefore investigates students' attitudes toward learning statistics. Six latent constructs were used to measure students' attitudes toward learning statistics: affect, cognitive competence, value, difficulty, interest, and effort. The questionnaire was adopted and adapted from the reliable and validated Survey of Attitudes Toward Statistics (SATS) instrument. The study was conducted among undergraduate engineering students at Universiti Malaysia Pahang (UMP); the respondents were students taking the applied statistics course in different faculties. The analysis shows that the questionnaire is acceptable and that the proposed relationships among the constructs hold. Students reported making a full effort to master the statistics course, finding the course enjoyable, and being confident in their intellectual capacity, and they held more positive than negative attitudes toward learning statistics. In conclusion, positive attitudes were mostly exhibited in the affect, cognitive competence, value, interest, and effort constructs, while negative attitudes were mostly exhibited in the difficulty construct.
Fractional viscoelasticity of soft elastomers and auxetic foams
NASA Astrophysics Data System (ADS)
Solheim, Hannah; Stanisauskis, Eugenia; Miles, Paul; Oates, William
2018-03-01
Dielectric elastomers are commonly implemented in adaptive structures due to their unique capabilities for real time control of a structure's shape, stiffness, and damping. These active polymers are often used in applications where actuator control or dynamic tunability are important, making an accurate understanding of the viscoelastic behavior critical. This challenge is complicated as these elastomers often operate over a broad range of deformation rates. Whereas research has demonstrated success in applying a nonlinear viscoelastic constitutive model to characterize the behavior of Very High Bond (VHB) 4910, robust predictions of the viscoelastic response over the entire range of time scales is still a significant challenge. An alternative formulation for viscoelastic modeling using fractional order calculus has shown significant improvement in predictive capabilities. While fractional calculus has been explored theoretically in the field of linear viscoelasticity, limited experimental validation and statistical evaluation of the underlying phenomena have been considered. In the present study, predictions across several orders of magnitude in deformation rates are validated against data using a single set of model parameters. Moreover, we illustrate the fractional order is material dependent by running complementary experiments and parameter estimation on the elastomer VHB 4949 as well as an auxetic foam. All results are statistically validated using Bayesian uncertainty methods to obtain posterior densities for the fractional order as well as the hyperelastic parameters.
NASA Astrophysics Data System (ADS)
Lopez, S. R.; Hogue, T. S.
2011-12-01
Global climate models (GCMs) are primarily used to generate historical and future large-scale circulation patterns at a coarse resolution (typically on the order of 50,000 km2) and fail to capture climate variability at the ground level due to localized surface influences (e.g., topography, marine layer, land cover). Their inability to accurately resolve these processes has led to the development of numerous 'downscaling' techniques. The goal of this study is to enhance statistical downscaling of daily precipitation and temperature for regions with heterogeneous land cover and topography. Our analysis was divided into two periods, historical (1961-2000) and contemporary (1980-2000), and tested using sixteen predictand combinations from four GCMs (GFDL CM2.0, GFDL CM2.1, CNRM-CM3, and MRI-CGCM2 3.2a). The Southern California area was separated into five county regions: Santa Barbara, Ventura, Los Angeles, Orange, and San Diego. Principal component analysis (PCA) was performed on ground-based observations in order to (1) reduce the number of redundant gauges and minimize dimensionality and (2) cluster gauges that behave statistically similarly for post-analysis (see the sketch below). Post-PCA analysis included extensive testing of predictor-predictand relationships using an enhanced canonical correlation analysis (ECCA). The ECCA includes obtaining the optimal predictand sets for all models within each spatial domain (county), as governed by daily and monthly overall statistics. Results show all models maintain mean annual and monthly behavior within each county, and daily statistics are improved. The level of improvement depends strongly on the vegetation extent within each county and the land-to-ocean ratio within the GCM spatial grid. Utilizing the entire historical period also leads to better statistical representation of observed daily precipitation. The validated ECCA technique is being applied to future climate scenarios distributed by the IPCC in order to provide forcing data for regional hydrologic models and assess future water resources in the Southern California region.
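A minimal sketch of the PCA gauge-reduction step on a synthetic (days x gauges) matrix; the array shapes, 90% variance threshold, and cluster count are illustrative assumptions, not the study's settings:

```python
# Hedged sketch of the gauge-reduction step: PCA on a (days x gauges)
# observation matrix, then clustering of the gauge loadings to group
# statistically similar gauges. Shapes, the variance threshold, and
# the cluster count are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 1.0, size=(3650, 40))     # ten years, 40 gauges
obs = (obs - obs.mean(axis=0)) / obs.std(axis=0)

pca = PCA(n_components=0.9)                    # keep 90% of variance
pca.fit(obs)
loadings = pca.components_.T                   # (gauges x components)

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(loadings)
print(pca.n_components_, np.bincount(labels))
```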
An order statistics approach to the halo model for galaxies
NASA Astrophysics Data System (ADS)
Paul, Niladri; Paranjape, Aseem; Sheth, Ravi K.
2017-04-01
We use the halo model to explore the implications of assuming that galaxy luminosities in groups are randomly drawn from an underlying luminosity function. We show that even the simplest of such order statistics models - one in which this luminosity function p(L) is universal - naturally produces a number of features associated with previous analyses based on the 'central plus Poisson satellites' hypothesis. These include the monotonic relation of mean central luminosity with halo mass, the lognormal distribution around this mean and the tight relation between the central and satellite mass scales. In stark contrast to observations of galaxy clustering, however, this model predicts no luminosity dependence of large-scale clustering. We then show that an extended version of this model, based on the order statistics of a halo mass dependent luminosity function p(L|m), is in much better agreement with the clustering data as well as satellite luminosities, but systematically underpredicts central luminosities. This brings into focus the idea that central galaxies constitute a distinct population that is affected by different physical processes than are the satellites. We model this physical difference as a statistical brightening of the central luminosities, over and above the order statistics prediction. The magnitude gap between the brightest and second brightest group galaxy is predicted as a by-product, and is also in good agreement with observations. We propose that this order statistics framework provides a useful language in which to compare the halo model for galaxies with more physically motivated galaxy formation models.
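A minimal sketch of the basic order-statistics idea, with a toy exponential luminosity function standing in for the paper's calibrated p(L) or p(L|m):

```python
# Minimal sketch: draw each group's galaxy luminosities i.i.d. from a
# universal p(L), call the maximum the central, and record the gap to
# the second-brightest. The exponential p(L), group counts, and seed
# are stand-ins, not the paper's calibrated choices.
import numpy as np

rng = np.random.default_rng(2)
n_groups, mean_occupancy = 10000, 8

centrals, gaps = [], []
for _ in range(n_groups):
    n = max(rng.poisson(mean_occupancy), 2)    # at least two members
    L = np.sort(rng.exponential(scale=1.0, size=n))
    centrals.append(L[-1])                     # brightest = "central"
    gaps.append(np.log10(L[-1] / L[-2]))       # magnitude-gap analogue

print(round(np.mean(centrals), 3), round(np.median(gaps), 3))
```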
Arismendi, Ivan; Johnson, Sherri L.; Dunham, Jason B.
2015-01-01
Statistics of central tendency and dispersion may not capture relevant or desired characteristics of the distribution of continuous phenomena and, thus, they may not adequately describe temporal patterns of change. Here, we present two methodological approaches that can help to identify temporal changes in environmental regimes. First, we use higher-order statistical moments (skewness and kurtosis) to examine potential changes of empirical distributions at decadal extents. Second, we adapt a statistical procedure combining a non-metric multidimensional scaling technique and higher density region plots to detect potentially anomalous years. We illustrate the use of these approaches by examining long-term stream temperature data from minimally and highly human-influenced streams. In particular, we contrast predictions about thermal regime responses to changing climates and human-related water uses. Using these methods, we effectively diagnose years with unusual thermal variability and patterns in variability through time, as well as spatial variability linked to regional and local factors that influence stream temperature. Our findings highlight the complexity of responses of thermal regimes of streams and reveal their differential vulnerability to climate warming and human-related water uses. The two approaches presented here can be applied with a variety of other continuous phenomena to address historical changes, extreme events, and their associated ecological responses.
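A minimal sketch of the first approach on synthetic daily temperatures, computing skewness and kurtosis per decade; the series, trend, and windows are illustrative only:

```python
# Hedged sketch: per-decade skewness and kurtosis of a synthetic daily
# stream-temperature record with widening variability, to flag
# distributional change that the mean alone would miss.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(3)
years = np.repeat(np.arange(1980, 2020), 365)
temp = 10 + 5 * np.sin(2 * np.pi * np.arange(years.size) / 365)
temp = temp + rng.normal(0, 1 + (years - 1980) * 0.02)

for start in (1980, 1990, 2000, 2010):
    sel = (years >= start) & (years < start + 10)
    print(start, round(skew(temp[sel]), 3), round(kurtosis(temp[sel]), 3))
```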
Sensation seeking and smoking behaviors among adolescents in the Republic of Korea.
Hwang, Heejin; Park, Sunhee
2015-06-01
This study aimed to explore the relationship between the four components of sensation seeking (i.e., disinhibition, thrill and adventure seeking, experience seeking, and boredom susceptibility) and three types of smoking behavior (i.e., non-smoking, experimental smoking, and current smoking) among high school students in the Republic of Korea. Multivariate multinomial logistic regression analysis was performed using two models. In Model 1, the four subscales of sensation seeking were used as covariates, and in Model 2, other control factors (i.e., characteristics related to demographics, individuals, family, school, and friends) were added to Model 1 in order to adjust for their effects. In Model 1, the impact of disinhibition on experimental smoking and current smoking was statistically significant. In Model 2, the influence of disinhibition on both of these smoking behaviors remained statistically significant after controlling for all the other covariates. Also, the effect of thrill and adventure seeking on experimental smoking was statistically significant. The two statistically significant subscales of sensation seeking were positively associated with the risk of smoking behaviors. According to extant literature and current research, sensation seeking, particularly disinhibition, is strongly associated with smoking among youth. Therefore, sensation seeking should be measured among adolescents to identify those who are at greater risk of smoking and to develop more effective intervention strategies in order to curb the smoking epidemic among youth. Copyright © 2015 Elsevier Ltd. All rights reserved.
Biological conservation law as an emerging functionality in dynamical neuronal networks.
Podobnik, Boris; Jusup, Marko; Tiganj, Zoran; Wang, Wen-Xu; Buldú, Javier M; Stanley, H Eugene
2017-11-07
Scientists strive to understand how functionalities, such as conservation laws, emerge in complex systems. Living complex systems in particular create high-ordered functionalities by pairing up low-ordered complementary processes, e.g., one process to build and the other to correct. We propose a network mechanism that demonstrates how collective statistical laws can emerge at a macro (i.e., whole-network) level even when they do not exist at a unit (i.e., network-node) level. Drawing inspiration from neuroscience, we model a highly stylized dynamical neuronal network in which neurons fire either randomly or in response to the firing of neighboring neurons. A synapse connecting two neighboring neurons strengthens when both of these neurons are excited and weakens otherwise. We demonstrate that during this interplay between the synaptic and neuronal dynamics, when the network is near a critical point, both recurrent spontaneous and stimulated phase transitions enable the phase-dependent processes to replace each other and spontaneously generate a statistical conservation law-the conservation of synaptic strength. This conservation law is an emerging functionality selected by evolution and is thus a form of biological self-organized criticality in which the key dynamical modes are collective.
Kretz, Florian T A; Tandogan, Tamer; Khoramnia, Ramin; Auffarth, Gerd U
2015-01-01
AIM: To evaluate the quality of vision with respect to high-order aberrations and straylight perception after implantation of an aspheric, aberration-correcting, monofocal intraocular lens (IOL). METHODS: Twenty-one patients (34 eyes) aged 50 to 83y underwent cataract surgery with implantation of an aspheric, aberration-correcting IOL (Tecnis ZCB00, Abbott Medical Optics). Three months after surgery they were examined for uncorrected (UDVA) and corrected distance visual acuity (CDVA), contrast sensitivity (CS) under photopic and mesopic conditions with and without a glare source, ocular high-order aberrations (HOA, Zywave II), and retinal straylight (C-Quant). RESULTS: Postoperatively, patients achieved a CDVA of 0.0 logMAR or better in 97.1% of eyes. Mean values of high-order aberrations were +0.02±0.27 (primary coma components) and -0.04±0.16 (spherical aberration term). Straylight values of the C-Quant were 1.35±0.44 log, which is within the normal range of age-matched phakic patients. The CS measurements under mesopic and photopic conditions, with and without glare, did not show any statistical significance in the patient group observed (P≥0.28). CONCLUSION: The implantation of an aspheric aberration-correcting monofocal IOL after cataract surgery resulted in very low residual higher-order aberration (HOA) and normal straylight. PMID:26309872
Applications of physical methods in high-frequency futures markets
NASA Astrophysics Data System (ADS)
Bartolozzi, M.; Mellen, C.; Chan, F.; Oliver, D.; Di Matteo, T.; Aste, T.
2007-12-01
In the present work we demonstrate the application of different physical methods to high-frequency or tick-by-tick financial time series data. In particular, we calculate the Hurst exponent and inverse statistics for the price time series taken from a range of futures indices. Additionally, we show that in a limit order book the relaxation times of an imbalanced book state with more demand or supply can be described by stretched exponential laws analogous to those seen in many physical systems.
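A minimal sketch of one common Hurst-exponent estimate, using the scaling of increment standard deviations on a synthetic random walk; the paper's exact estimator may differ:

```python
# Hedged sketch of a Hurst-exponent estimate via the scaling of
# increment standard deviations, std[x(t+tau) - x(t)] ~ tau^H.
# A plain uncorrelated random walk should give H close to 0.5.
import numpy as np

rng = np.random.default_rng(4)
logprice = np.cumsum(rng.normal(size=100000))  # toy tick-level series

lags = np.unique(np.logspace(0, 3, 20).astype(int))
sigma = [np.std(logprice[lag:] - logprice[:-lag]) for lag in lags]
H = np.polyfit(np.log(lags), np.log(sigma), 1)[0]  # slope = H
print(round(H, 3))
```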
Spring-Pearson, Senanu M; Stone, Joshua K; Doyle, Adina; Allender, Christopher J; Okinaka, Richard T; Mayo, Mark; Broomall, Stacey M; Hill, Jessica M; Karavis, Mark A; Hubbard, Kyle S; Insalaco, Joseph M; McNew, Lauren A; Rosenzweig, C Nicole; Gibbons, Henry S; Currie, Bart J; Wagner, David M; Keim, Paul; Tuanyok, Apichai
2015-01-01
The pangenomic diversity in Burkholderia pseudomallei is high, with approximately 5.8% of the genome consisting of genomic islands. Genomic islands are known hotspots for recombination driven primarily by site-specific recombination associated with tRNAs. However, recombination rates in other portions of the genome are also high, a feature we expected to disrupt gene order. We analyzed the pangenome of 37 isolates of B. pseudomallei and demonstrate that the pangenome is 'open', with approximately 136 new genes identified with each new genome sequenced, and that the global core genome consists of 4568±16 homologs. Genes associated with metabolism were statistically overrepresented in the core genome, and genes associated with mobile elements, disease, and motility were primarily associated with accessory portions of the pangenome. The frequency distribution of genes present in between 1 and 37 of the genomes analyzed matches well with a model of genome evolution in which 96% of the genome has very low recombination rates but 4% of the genome recombines readily. Using homologous genes among pairs of genomes, we found that gene order was highly conserved among strains, despite the high recombination rates previously observed. High rates of gene transfer and recombination are incompatible with retaining gene order unless these processes are either highly localized to specific sites within the genome, or are characterized by symmetrical gene gain and loss. Our results demonstrate that both processes occur: localized recombination introduces many new genes at relatively few sites, and recombination throughout the genome generates the novel multi-locus sequence types previously observed while preserving gene order.
Liu, Ke; Nissinen, Jaakko; Slager, Robert -Jan; ...
2016-10-31
The physics of nematic liquid crystals has been the subject of intensive research since the late 19th century. However, the focus of this pursuit has been centered around uniaxial and biaxial nematics associated with constituents bearing a D∞h or D2h symmetry, respectively. In view of general symmetries, however, these are singularly special, since nematic order can in principle involve any point-group symmetry. Given the progress in tailoring nanoparticles with particular shapes and interactions, this vast family of "generalized nematics" might become accessible in the laboratory. Little is known because the order parameter theories associated with the highly symmetric point groups are remarkably complicated, involving tensor order parameters of high rank. Here, we show that the generic features of the statistical physics of such systems can be studied in a highly flexible and efficient fashion using a mathematical tool borrowed from high-energy physics: discrete non-Abelian gauge theory. Explicitly, we construct a family of lattice gauge models encapsulating nematic ordering of general three-dimensional point-group symmetries. We find that the most symmetrical generalized nematics are subjected to thermal fluctuations of unprecedented severity. As a result, novel forms of fluctuation phenomena become possible. In particular, we demonstrate that a vestigial phase carrying no more than chiral order becomes ubiquitous departing from high point-group symmetry chiral building blocks, such as I, O, and T symmetric matter.
Mastropasqua, L; Toto, L; Zuppardi, E; Nubile, M; Carpineto, P; Di Nicola, M; Ballone, E
2006-01-01
To evaluate the refractive and aberrometric outcome of wavefront-guided photorefractive keratectomy (PRK) compared to standard PRK in myopic patients. Fifty-six eyes of 56 patients were included in the study and were randomly divided into two groups. The study group consisted of 28 eyes with a mean spherical equivalent (SE) of -2.25+/-0.76 diopters (D) (range: -1.5 to -3.5 D) treated with wavefront-guided PRK using the Zywave ablation profile and the Bausch & Lomb Technolas 217z excimer laser (Zyoptix system), and the control group included 28 eyes with a SE of -2.35+/-1.01 D (range: -1.5 to -3.5 D) treated with standard PRK (PlanoScan ablation) using the same laser. A Zywave aberrometer was used to analyze and calculate the root-mean-square (RMS) of total high-order aberrations (HOA) and the Zernike coefficients of third and fourth order before and after surgery (over a 6-month follow-up period) in both groups. Preoperative and postoperative SE, uncorrected visual acuity (UCVA), and best-corrected visual acuity (BCVA) were evaluated in all cases. There was a high correlation between achieved and intended correction. The differences between the two treatment groups were not statistically significant for UCVA, BCVA, or SE cycloplegic refraction. Postoperatively, the RMS value of high-order aberrations increased in both groups. At the 6-month control, it had increased on average by a factor of 1.17 in the Zyoptix PRK group and 1.54 in the PlanoScan PRK group (p=0.22). In the Zyoptix group there was a decrease in coma aberration, while in the PlanoScan group this third-order aberration increased. The difference between postoperative and preoperative values between the two groups was statistically significant for coma aberration (p=0.013). No statistically significant difference was observed for spherical-like aberration between the two groups. In the study group, eyes with a low amount of preoperative aberrations (HOA RMS lower than the median value; <0.28 microm) showed an increase in HOA RMS, while eyes with RMS higher than 0.28 microm showed a decrease (p<0.05). Zyoptix wavefront-guided PRK is as safe and efficacious for the correction of myopia and myopic astigmatism as PlanoScan PRK. Moreover, this technique induces a smaller increase in third-order coma aberration compared to standard PRK. The use of Zyoptix wavefront-guided PRK is particularly indicated in eyes with higher preoperative RMS values.
Shoari, Niloofar; Dubé, Jean-Sébastien; Chenouri, Shoja'eddin
2015-11-01
In environmental studies, concentration measurements frequently fall below detection limits of measuring instruments, resulting in left-censored data. Some studies employ parametric methods such as the maximum likelihood estimator (MLE), robust regression on order statistic (rROS), and gamma regression on order statistic (GROS), while others suggest a non-parametric approach, the Kaplan-Meier method (KM). Using examples of real data from a soil characterization study in Montreal, we highlight the need for additional investigations that aim at unifying the existing literature. A number of studies have examined this issue; however, those considering data skewness and model misspecification are rare. These aspects are investigated in this paper through simulations. Among other findings, results show that for low skewed data, the performance of different statistical methods is comparable, regardless of the censoring percentage and sample size. For highly skewed data, the performance of the MLE method under lognormal and Weibull distributions is questionable; particularly, when the sample size is small or censoring percentage is high. In such conditions, MLE under gamma distribution, rROS, GROS, and KM are less sensitive to skewness. Related to model misspecification, MLE based on lognormal and Weibull distributions provides poor estimates when the true distribution of data is misspecified. However, the methods of rROS, GROS, and MLE under gamma distribution are generally robust to model misspecifications regardless of skewness, sample size, and censoring percentage. Since the characteristics of environmental data (e.g., type of distribution and skewness) are unknown a priori, we suggest using MLE based on gamma distribution, rROS and GROS. Copyright © 2015 Elsevier Ltd. All rights reserved.
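A simplified sketch of the ROS idea for a single detection limit, assuming a lognormal model; production implementations (e.g., Helsel's robust ROS) handle multiple limits and ties:

```python
# Simplified sketch of regression on order statistics (ROS) for
# left-censored data with one detection limit DL, assuming lognormal
# data. This illustrates the core idea only, not a full rROS method.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
true = rng.lognormal(mean=1.0, sigma=0.8, size=100)
DL = 2.0
detected = np.sort(true[true >= DL])
n, n_cens = true.size, int(np.sum(true < DL))

pp = (np.arange(1, n + 1) - 0.375) / (n + 0.25)   # Blom plotting positions
q = norm.ppf(pp)                                  # normal quantiles
slope, intercept = np.polyfit(q[n_cens:], np.log(detected), 1)

imputed = np.exp(intercept + slope * q[:n_cens])  # fill censored ranks
combined = np.concatenate([imputed, detected])
print(round(combined.mean(), 3), round(true.mean(), 3))
```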
A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.
Lin, Johnny; Bentler, Peter M
2012-01-01
Goodness-of-fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra-Bentler's mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra-Bentler's statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic.
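One hedged way to picture such an adjustment (illustrative moment matching, not the paper's derivation) is to choose a scale a and degrees of freedom d so that aT matches a chi-square in both mean and skewness:

```latex
% Illustrative moment matching (not the paper's exact statistic): pick
% a scale a and degrees of freedom d so that aT agrees with chi^2_d in
% mean and skewness; p-values then refer aT to chi^2_d.
\[
\operatorname{skew}\!\left(\chi^2_d\right)=\sqrt{8/d}
\;\Longrightarrow\;
\hat d=\frac{8}{\widehat{\operatorname{skew}}(T)^2},
\qquad
\hat a=\frac{\hat d}{\widehat{E}(T)},
\qquad
\hat a\,T\;\mathrel{\dot\sim}\;\chi^2_{\hat d}.
\]
```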
Castro Sanchez, Amparo Yovanna; Aerts, Marc; Shkedy, Ziv; Vickerman, Peter; Faggiano, Fabrizio; Salamina, Guiseppe; Hens, Niel
2013-03-01
The hepatitis C virus (HCV) and the human immunodeficiency virus (HIV) are a clear threat to public health, with high prevalences especially in high-risk groups such as injecting drug users. People with HIV infection who are also infected by HCV suffer from a more rapid progression to HCV-related liver disease and have an increased risk for cirrhosis and liver cancer. Quantifying the impact of HIV and HCV co-infection is therefore of great importance. We propose a new joint mathematical model accounting for co-infection with the two viruses in the context of injecting drug users (IDUs). Statistical concepts and methods are used to assess the model from a statistical perspective, in order to get further insights into: (i) the comparison and selection of optional model components, (ii) the unknown values of the numerous model parameters, (iii) the parameters to which the model is most 'sensitive' and (iv) the combinations or patterns of values in the high-dimensional parameter space which are most supported by the data. Data from a longitudinal study of heroin users in Italy are used to illustrate the application of the proposed joint model and its statistical assessment. The parameters associated with contact rates (sharing syringes) and the transmission rates per syringe-sharing event are shown to play a major role. Copyright © 2013 Elsevier B.V. All rights reserved.
Teaching Statistics in Integration with Psychology
ERIC Educational Resources Information Center
Wiberg, Marie
2009-01-01
The aim was to revise a statistics course in order to get the students motivated to learn statistics and to integrate statistics more throughout a psychology course. Further, we wish to make students become more interested in statistics and to help them see the importance of using statistics in psychology research. To achieve this goal, several…
Niche harmony search algorithm for detecting complex disease associated high-order SNP combinations.
Tuo, Shouheng; Zhang, Junying; Yuan, Xiguo; He, Zongzhen; Liu, Yajun; Liu, Zhaowen
2017-09-14
Genome-wide association studies are especially challenging for detecting high-order disease-causing models due to model diversity, possibly low or even absent marginal effects of the model, and extraordinary search and computation requirements. In this paper, we propose a niche harmony search (HS) algorithm in which joint entropy is utilized as a heuristic factor to guide the search for models with low or no marginal effect, and two computationally lightweight scores are selected to evaluate and adapt to diverse disease models. In order to obtain all possible suspected pathogenic models, a niche technique is merged with HS; it serves as a taboo region to keep HS from becoming trapped in local search. From the resultant set of candidate SNP-combinations, we use the G-test statistic to test for true positives (see the sketch below). Experiments were performed on twenty typical simulation datasets, twelve with models having marginal effects and eight with models having none. Our results indicate that the proposed algorithm has very high detection power for searching suspected disease models in the first stage, and that it is superior to some typical existing approaches in both detection power and CPU runtime for all these datasets. Application to age-related macular degeneration (AMD) demonstrates that our method is promising for detecting high-order disease-causing models.
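A minimal sketch of the G-test filtering step on an illustrative case/control-by-genotype table; the counts and table shape are assumptions, not data from the paper:

```python
# Hedged sketch of a G-test of independence between a candidate SNP
# combination's genotype classes and case/control status. The 2x3
# table below is illustrative only.
import numpy as np
from scipy.stats import chi2

table = np.array([[120,  90, 40],    # cases per genotype class
                  [100, 110, 60]])   # controls per genotype class

expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
G = 2.0 * np.sum(table * np.log(table / expected))
dof = (table.shape[0] - 1) * (table.shape[1] - 1)
print(G, chi2.sf(G, dof))            # statistic and p-value
```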
Dynamo onset as a first-order transition: lessons from a shell model for magnetohydrodynamics.
Sahoo, Ganapati; Mitra, Dhrubaditya; Pandit, Rahul
2010-03-01
We carry out systematic and high-resolution studies of dynamo action in a shell model for magnetohydrodynamic (MHD) turbulence over wide ranges of the magnetic Prandtl number Pr_M and the magnetic Reynolds number Re_M. Our study suggests that it is natural to think of dynamo onset as a nonequilibrium first-order phase transition between two different turbulent, but statistically steady, states. The ratio of the magnetic and kinetic energies is a convenient order parameter for this transition. By using this order parameter, we obtain the stability diagram (or nonequilibrium phase diagram) for dynamo formation in our MHD shell model in the (Pr_M^-1, Re_M) plane. The dynamo boundary, which separates dynamo and no-dynamo regions, appears to have a fractal character. We obtain a hysteretic behavior of the order parameter across this boundary and suggestions of nucleation-type phenomena.
The Attitude of Iranian Nurses About Do Not Resuscitate Orders
Mogadasian, Sima; Abdollahzadeh, Farahnaz; Rahmani, Azad; Ferguson, Caleb; Pakanzad, Fermisk; Pakpour, Vahid; Heidarzadeh, Hamid
2014-01-01
Background: Do not resuscitate (DNR) orders are one of many challenging issues in end-of-life care. Previous research has not investigated Muslim nurses' attitudes towards DNR orders. Aims: This study aims to investigate the attitude of Iranian nurses towards DNR orders and determine the role of religious sects in forming attitudes. Materials and Methods: In this descriptive-comparative study, 306 nurses from five hospitals affiliated to Tabriz University of Medical Sciences (TUOMS) in East Azerbaijan Province and three hospitals in Kurdistan province participated. Data were gathered using a survey on attitudes towards DNR orders. Data were analyzed using Statistical Package for the Social Sciences (SPSS Inc., Chicago, IL) software, examining descriptive and inferential statistics. Results: Participants expressed willingness to learn more about DNR orders and highlighted the importance of respecting patients and their families in DNR orders. In contrast, on many key items participants reported negative attitudes towards DNR orders. There were statistically significant differences between the attitudes of Shiite and Sunni nurses on two items. Conclusions: Iranian nurses, regardless of their religious sect, reported negative attitudes towards many aspects of DNR orders. It may be possible to change the attitude of Iranian nurses towards DNR through education. PMID:24600178
Carriot, Jérome; Jamali, Mohsen; Cullen, Kathleen E; Chacron, Maurice J
2017-01-01
There is accumulating evidence that the brain's neural coding strategies are constrained by natural stimulus statistics. Here we investigated the statistics of the time varying envelope (i.e. a second-order stimulus attribute that is related to variance) of rotational and translational self-motion signals experienced by human subjects during everyday activities. We found that envelopes can reach large values across all six motion dimensions (~450 deg/s for rotations and ~4 G for translations). Unlike results obtained in other sensory modalities, the spectral power of envelope signals decreased slowly for low (< 2 Hz) and more sharply for high (>2 Hz) temporal frequencies and thus was not well-fit by a power law. We next compared the spectral properties of envelope signals resulting from active and passive self-motion, as well as those resulting from signals obtained when the subject is absent (i.e. external stimuli). Our data suggest that different mechanisms underlie deviation from scale invariance in rotational and translational self-motion envelopes. Specifically, active self-motion and filtering by the human body cause deviation from scale invariance primarily for translational and rotational envelope signals, respectively. Finally, we used well-established models in order to predict the responses of peripheral vestibular afferents to natural envelope stimuli. We found that irregular afferents responded more strongly to envelopes than their regular counterparts. Our findings have important consequences for understanding the coding strategies used by the vestibular system to process natural second-order self-motion signals.
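A minimal sketch of one standard way to extract and characterize such an envelope (Hilbert magnitude plus a Welch spectrum); the authors' exact pipeline is not reproduced here, and the sampling rate and modulation are assumptions:

```python
# Hedged sketch: time-varying envelope of a toy "self-motion" signal
# via the Hilbert magnitude, then its power spectrum with Welch's
# method. fs, duration, and the 0.2 Hz modulation are assumptions.
import numpy as np
from scipy.signal import hilbert, welch

fs = 100.0
t = np.arange(0, 600, 1 / fs)
rng = np.random.default_rng(6)
motion = rng.normal(size=t.size) * (1 + 0.5 * np.sin(2 * np.pi * 0.2 * t))

envelope = np.abs(hilbert(motion))          # second-order attribute
f, pxx = welch(envelope, fs=fs, nperseg=4096, detrend="constant")
print(f[np.argmax(pxx[1:]) + 1])            # expect a peak near 0.2 Hz
```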
Dong, J; Hayakawa, Y; Kober, C
2014-01-01
When metallic prosthetic appliances and dental fillings are present in the oral cavity, metal-induced streak artefacts are unavoidable in CT images. The aim of this study was to develop a method for artefact reduction using statistical reconstruction on multidetector row CT images. Adjacent CT images often depict similar anatomical structures. Therefore, images with weak artefacts were first reconstructed using projection data from an artefact-free image in a neighbouring thin slice. Images with moderate and strong artefacts were then processed continuously in sequence by successive iterative restoration, where the projection data were generated from the adjacent reconstructed slice. First, the basic maximum likelihood-expectation maximization (MLEM) algorithm was applied (a sketch follows below). Next, the ordered subset-expectation maximization (OS-EM) algorithm was examined. Alternatively, a small region of interest setting was designated. Finally, a general-purpose graphics processing unit machine was applied in both situations. The algorithms reduced the metal-induced streak artefacts on multidetector row CT images when the sequential processing method was applied. The ordered subset-expectation maximization and small region of interest reduced the processing duration without apparent detriment. A general-purpose graphics processing unit realized high performance. A statistical reconstruction method was applied for streak artefact reduction. The alternative algorithms applied were effective. Both software and hardware tools, such as ordered subset-expectation maximization, small region of interest and general-purpose graphics processing unit, achieved fast artefact correction.
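A minimal sketch of the MLEM update at the core of such statistical reconstruction, with a toy system matrix standing in for a real CT projector; an OS-EM variant would cycle the same update over subsets of the projection rows:

```python
# Hedged sketch of the MLEM update used in statistical reconstruction:
#   x <- x / (A^T 1) * A^T( y / (A x) )
# A here is a toy random system matrix, not a real CT projector.
import numpy as np

rng = np.random.default_rng(7)
A = rng.random((200, 50))                 # (projection bins x pixels)
x_true = rng.random(50)
y = rng.poisson(A @ x_true * 50) / 50.0   # noisy measured projections

x = np.ones(50)                           # non-negative initial image
sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
for _ in range(100):
    x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```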
Characterization of binary string statistics for syntactic landmine detection
NASA Astrophysics Data System (ADS)
Nasif, Ahmed O.; Mark, Brian L.; Hintz, Kenneth J.
2011-06-01
Syntactic landmine detection has been proposed to detect and classify non-metallic landmines using ground penetrating radar (GPR). In this approach, the GPR return is processed to extract characteristic binary strings for landmine and clutter discrimination. In our previous work, we discussed the preprocessing methodology by which the amplitude information of the GPR A-scan signal can be effectively converted into binary strings, which identify the impedance discontinuities in the signal. In this work, we study the statistical properties of the binary string space. In particular, we develop a Markov chain model to characterize the observed bit sequence of the binary strings. The state is defined as the number of consecutive zeros between two ones in the binarized A-scans. Since the strings are highly sparse (the number of zeros is much greater than the number of ones), defining the state this way leads to a smaller number of states than defining each bit as a state. The number of states is further reduced by quantizing the number of consecutive zeros. In order to identify the correct order of the Markov model, the mean square difference (MSD) between the transition matrices of mine strings and non-mine strings is calculated up to order four using training data. The results show that order one or two maximizes this MSD. The transition probabilities of the chain can then be used to compute the likelihood of any given string. Such a model can be used to identify characteristic landmine strings during the training phase. These developments on modeling and characterizing the string statistics can potentially be part of a real-time landmine detection algorithm that identifies landmines and clutter in an adaptive fashion.
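The construction described above can be sketched in a few lines. In the following Python fragment, synthetic binary strings stand in for real binarized GPR A-scans, and the bin edges, helper names and MSD comparison are illustrative choices, not the authors' exact ones:

```python
import numpy as np

def run_length_states(bits, bins=(0, 2, 5, 10, np.inf)):
    """Map a sparse binary string to a sequence of quantized states, where
    each state is the (binned) number of consecutive zeros between two ones."""
    ones = np.flatnonzero(bits)
    gaps = np.diff(ones) - 1                  # zeros between successive ones
    return np.digitize(gaps, bins[1:-1])      # quantize gap lengths into states

def transition_matrix(states, n_states):
    """First-order Markov transition matrix estimated by pair counting."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts),
                     where=row_sums > 0)

# Compare mine-like vs. clutter-like strings via the mean square difference
# (MSD) between their estimated transition matrices.
rng = np.random.default_rng(0)
mine = (rng.random(2000) < 0.08).astype(int)      # synthetic stand-ins
clutter = (rng.random(2000) < 0.03).astype(int)
P_mine = transition_matrix(run_length_states(mine), 4)
P_clut = transition_matrix(run_length_states(clutter), 4)
print(f"MSD between transition matrices: {np.mean((P_mine - P_clut)**2):.4f}")
```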
Simulated performance of an order statistic threshold strategy for detection of narrowband signals
NASA Technical Reports Server (NTRS)
Satorius, E.; Brady, R.; Deich, W.; Gulkis, S.; Olsen, E.
1988-01-01
The application of order statistics to signal detection is becoming an increasingly active area of research. This is due to the inherent robustness of rank estimators in the presence of large outliers that would significantly degrade more conventional mean-level-based detection systems. A detection strategy is presented in which the threshold estimate is obtained using order statistics. The performance of this algorithm in the presence of simulated interference and broadband noise is evaluated. In this way, the robustness of the proposed strategy in the presence of the interference can be fully assessed as a function of the interference, noise, and detector parameters.
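The abstract does not spell out the estimator's exact form, so the sketch below uses the standard order-statistic thresholding idea (as in OS-CFAR detectors): the threshold over a sliding reference window is a scaled k-th order statistic, which stays robust when a few large outliers would inflate a mean-level estimate. All parameter values here are placeholders:

```python
import numpy as np

def os_threshold(window, k, scale):
    """Order-statistic threshold: a scaled k-th order statistic of the
    reference window. Robust to a few large outliers, unlike a mean level."""
    return scale * np.partition(window, k)[k]

rng = np.random.default_rng(1)
noise = rng.exponential(1.0, 10_000)
noise[::500] += 50.0                  # inject strong interferers
win, k = 32, int(0.75 * 32)           # window length and rank used
detections = [i for i in range(win, len(noise))
              if noise[i] > os_threshold(noise[i - win:i], k, scale=6.0)]
print(len(detections), "threshold crossings")
```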
Estimating procedure times for surgeries by determining location parameters for the lognormal model.
Spangler, William E; Strum, David P; Vargas, Luis G; May, Jerrold H
2004-05-01
We present an empirical study of methods for estimating the location parameter of the lognormal distribution. Our results identify the best order statistic to use, and indicate that using the best order statistic instead of the median may lead to less frequent incorrect rejection of the lognormal model, more accurate critical value estimates, and higher goodness-of-fit. Using simulation data, we constructed and compared two models for identifying the best order statistic, one based on conventional nonlinear regression and the other using a data mining/machine learning technique. Better surgical procedure time estimates may lead to improved surgical operations.
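The abstract does not give the estimator family, but a classical closed-form location (threshold) estimator for the three-parameter lognormal combines the sample minimum, maximum and median. The sketch below (an assumed form with hypothetical helper names, not necessarily the paper's) simply makes the middle rank adjustable, which is the quantity the study optimizes:

```python
import numpy as np
from scipy import stats

def location_from_order_stat(x, frac=0.5):
    """Closed-form location estimate for the 3-parameter lognormal, with the
    middle order statistic at rank ~frac*n; frac=0.5 recovers the classical
    median-based estimator gamma = (x1*xn - xmid^2)/(x1 + xn - 2*xmid)."""
    x = np.sort(x)
    lo, hi = x[0], x[-1]
    mid = x[int(np.ceil(frac * (len(x) - 1)))]
    return (lo * hi - mid ** 2) / (lo + hi - 2.0 * mid)

rng = np.random.default_rng(2)
sample = 10.0 + stats.lognorm.rvs(s=0.6, size=500, random_state=rng)
for frac in (0.25, 0.5, 0.75):        # compare candidate ranks
    print(frac, round(location_from_order_stat(sample, frac), 3))
```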
NASA Astrophysics Data System (ADS)
Kerr, Laura T.; Adams, Aine; O'Dea, Shirley; Domijan, Katarina; Cullen, Ivor; Hennelly, Bryan M.
2014-05-01
Raman microspectroscopy can be applied to the urinary bladder for highly accurate classification and diagnosis of bladder cancer. The technique can be applied in vitro to bladder epithelial cells obtained from urine cytology, or in vivo as an "optical biopsy" to provide results in real time with higher sensitivity and specificity than current clinical methods. However, there exists a high degree of variability across experimental parameters which needs to be standardised before the technique can be utilised in an everyday clinical environment. In this study, we investigate different laser wavelengths (473 nm and 532 nm), sample substrates (glass, fused silica and calcium fluoride) and multivariate statistical methods in order to gain insight into how these various experimental parameters impact the sensitivity and specificity of Raman cytology.
Effects of heat-treatment and explosive brisance on fragmentation of high strength steel
NASA Astrophysics Data System (ADS)
Stolken, James; Kumar, Mukul; Gold, Vladimir; Baker, Ernest; Lawrence Livermore National Laboratory Collaboration; Armament Research, Development and Engineering Collaboration
2011-06-01
Tubes of AISI-4340 steel were heat-treated to three distinct microstructures, resulting in nominal hardness values of 25 Rc, 38 Rc and 48 Rc. The specimens were then explosively fragmented using TNT and PETN. The experiments were conducted in a contained firing facility with high fragment-collection efficiency. Statistical analyses of the recovered fragments were performed: fragment rank-order statistics and generalized goodness-of-fit tests were used to characterize the fragment mass distributions. These analyses indicated significant interaction effects between the heat treatment (and the resulting microstructure) and the explosive brisance. The role of the microstructure in relation to yield strength and toughness will also be discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Influence of Computational Drop Representation in LES of a Droplet-Laden Mixing Layer
NASA Technical Reports Server (NTRS)
Bellan, Josette; Radhakrishnan, Senthilkumaran
2013-01-01
Multiphase turbulent flows are encountered in many practical applications, including turbine engines and natural phenomena involving particle dispersion. Numerical computations of multiphase turbulent flows are important because they provide a cheaper alternative to performing experiments during an engine design process and because they can provide predictions of pollutant dispersion. Two-phase flows contain millions and sometimes billions of particles. For flows with volumetrically dilute particle loading, the most accurate method of numerically simulating the flow is direct numerical simulation (DNS) of the governing equations, in which all scales of the flow, including the small scales responsible for the overwhelming amount of dissipation, are resolved. DNS, however, has a high computational cost and cannot be used in engineering design applications where iterations among several design conditions are necessary. Because of this cost, numerical simulations of such flows cannot track all of the physical drops. The objective of this work is to quantify the influence of the number of computational drops and of grid spacing on the accuracy of predicted flow statistics, and to identify, if possible, the minimum, or otherwise the optimal, number of computational drops that provides minimal error in flow prediction. For this purpose, several large eddy simulations (LES) of a mixing layer with evaporating drops have been performed using coarse, medium, and fine grid spacings and computational, rather than physical, drops. To define computational drops, an integer NR is introduced that represents the ratio of the number of existing physical drops to the desired number of computational drops; for example, if NR=8, a computational drop represents 8 physical drops in the flow field. The desired number of computational drops is determined by the available computational resources; the larger NR is, the less computationally intensive the simulation. A set of first-order and second-order flow statistics, and of drop statistics, is extracted from the LES predictions and compared to results obtained by filtering a DNS database. First-order statistics, such as the Favre-averaged streamwise velocity, the Favre-averaged vapor mass fraction, and the drop streamwise velocity, are predicted accurately independent of the number of computational drops and grid spacing. Second-order flow statistics depend both on the number of computational drops and on grid spacing. The scalar variance and turbulent vapor flux are predicted accurately by the fine-mesh LES only when NR is less than 32, and by the coarse-mesh LES reasonably accurately for all NR values. This is attributed to the fact that when the grid spacing is coarsened, the number of drops in a computational cell must not be significantly lower than that in the DNS.
Advanced statistical energy analysis
NASA Astrophysics Data System (ADS)
Heron, K. H.
1994-09-01
A high-frequency theory (advanced statistical energy analysis (ASEA)) is developed which takes account of the mechanism of tunnelling, uses a ray theory approach to track the power flowing around a plate or a beam network, and then uses statistical energy analysis (SEA) to take care of any residual power. ASEA divides the energy of each sub-system into energy that is freely available for transfer to other sub-systems and energy that is fixed within the sub-system. ASEA can be interpreted as a series of mathematical models, the first of which is identical to standard SEA; subsequent higher-order models converge on an accurate prediction. Using a structural assembly of six rods as an example, ASEA is shown to converge onto the exact results while SEA is shown to overpredict by up to 60 dB.
Full counting statistics of conductance for disordered systems
NASA Astrophysics Data System (ADS)
Fu, Bin; Zhang, Lei; Wei, Yadong; Wang, Jian
2017-09-01
Quantum transport is a stochastic process in nature. As a result, the conductance is fully characterized by its average value and fluctuations, i.e., by full counting statistics (FCS). Since disorder is inevitable in nanoelectronic devices, it is important to understand how FCS behaves in disordered systems. The traditional approach to fluctuations or cumulants of conductance uses a diagrammatic perturbation expansion of the Green's function within the coherent potential approximation (CPA), which is extremely complicated, especially for high-order cumulants. In this paper, we develop a theoretical formalism based on the nonequilibrium Green's function by directly taking the disorder average of the generating function of the FCS of conductance within the CPA. This is done by mapping the problem into higher dimensions so that the functional dependence of the generating function on the Green's function becomes linear and the diagrammatic perturbation expansion is no longer needed. Our theory is very simple and allows us to calculate cumulants of conductance at any desired order efficiently. As an application, we calculate the cumulants of conductance up to fifth order for disordered systems in the presence of Anderson and binary disorder. Our numerical results for the cumulants of conductance show remarkable agreement with those obtained by brute-force calculation.
Summary Statistics for Homemade "Play Dough" -- Data Acquired at LLNL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kallman, J S; Morales, K E; Whipple, R E
Using x-ray computerized tomography (CT), we have characterized the x-ray linear attenuation coefficients (LAC) of a homemade Play Dough(TM)-like material, designated as PDA. Table 1 gives the first-order statistics for each of four CT measurements, estimated with a Gaussian kernel density estimator (KDE) analysis. The mean values of the LAC range from a high of about 2700 LMHU_D at 100 kVp to a low of about 1200 LMHU_D at 300 kVp. The standard deviation of each measurement is around 10% to 15% of the mean. The entropy covers the range from 6.0 to 7.4. Ordinarily, we would model the LAC of the material and compare the modeled values to the measured values. In this case, however, we did not have the detailed chemical composition of the material and therefore did not model the LAC. Using a method recently proposed by Lawrence Livermore National Laboratory (LLNL), we estimate the value of the effective atomic number, Z_eff, to be near 10. LLNL prepared about 50 mL of the homemade 'Play Dough' in a polypropylene vial and firmly compressed it immediately prior to the x-ray measurements. We used the computer program IMGREC to reconstruct the CT images. The values of the key parameters used in the data capture and image reconstruction are given in this report. Additional details may be found in the experimental SOP and a separate document. To characterize the statistical distribution of LAC values in each CT image, we first isolated an 80% central-core segment of volume elements ('voxels') lying completely within the specimen, away from the walls of the polypropylene vial. All of the voxels within this central core, including those comprised of voids and inclusions, are included in the statistics. We then calculated the mean value, standard deviation and entropy for (a) the four image segments and for (b) their digital gradient images. (A digital gradient image of a given image was obtained by taking the absolute value of the difference between the initial image and that same image offset by one voxel horizontally, parallel to the rows of the x-ray detector array.) The statistics of the initial image of LAC values are called 'first-order statistics'; those of the gradient image, 'second-order statistics.'
NASA Astrophysics Data System (ADS)
Erfanifard, Y.; Rezayan, F.
2014-10-01
Vegetation heterogeneity biases second-order summary statistics, e.g. Ripley's K-function, applied for spatial pattern analysis in ecology. Second-order investigation based on Ripley's K-function and related statistics (the L- and pair correlation function g) is widely used in ecology to develop hypotheses on underlying processes by characterizing spatial patterns of vegetation. The aim of this study was to demonstrate the effects of the underlying heterogeneity of wild pistachio (Pistacia atlantica Desf.) trees on second-order summary statistics of point pattern analysis in a part of the Zagros woodlands, Iran. The spatial distribution of 431 wild pistachio trees was accurately mapped in a 40 ha stand in the Wild Pistachio & Almond Research Site, Fars province, Iran. Three commonly used second-order summary statistics (the K-, L-, and g-functions) were applied to analyse their spatial pattern. The two-sample Kolmogorov-Smirnov goodness-of-fit test showed that the observed pattern significantly followed an inhomogeneous Poisson process null model in the study region. The results also showed that the heterogeneous pattern of wild pistachio trees biased the homogeneous forms of the K-, L-, and g-functions, indicating a stronger aggregation of the trees at scales of 0-50 m than actually existed, and indicating aggregation at scales of 150-200 m where the trees were in fact regularly distributed. Consequently, we showed that heterogeneity of point patterns may bias the results of homogeneous second-order summary statistics, and we suggest applying inhomogeneous summary statistics with related null models for the spatial pattern analysis of heterogeneous vegetation.
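For readers unfamiliar with these statistics, a naive homogeneous Ripley's K estimate (without the edge correction a real analysis would include) and the derived L(r) - r diagnostic can be sketched as follows; the point pattern here is synthetic complete spatial randomness (CSR), not the mapped pistachio stand:

```python
import numpy as np

def ripley_k(points, area, radii):
    """Naive homogeneous Ripley's K (no edge correction):
    K(r) = (area / (n*(n-1))) * #{ordered pairs with distance <= r}."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d = d[~np.eye(n, dtype=bool)]            # drop self-distances
    return np.array([area * np.sum(d <= r) / (n * (n - 1)) for r in radii])

rng = np.random.default_rng(3)
pts = rng.uniform(0, 500, size=(431, 2))     # CSR stand-in, 500 m square window
radii = np.linspace(1, 50, 25)
k_hat = ripley_k(pts, area=500 * 500, radii=radii)
l_hat = np.sqrt(k_hat / np.pi) - radii       # L(r) - r, ~0 under CSR
print(np.round(l_hat[:5], 2))
```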
Self-Other Differentiation and the Mother-Child Relationship: The Effects of Sex and Birth Order.
Olver, Rose R; Aries, Elizabeth; Batgos, Joanna
1989-09-01
We examined the effects of sex and birth order on the experience of a separate sense of self and its connection to mothers' differential involvement with sons and daughters. Results from two samples of college-age students are reported. Men showed a more separate sense of self than women, and mothers were reported to be more highly involved with and intrusive in the lives of their daughters than their sons. Results for birth order were consistent though not always statistically significant: first-born women had the least separate sense of self and reported the greatest degree of maternal involvement and intrusiveness. Implications of these findings for developmental theory and criteria of psychological well-being are discussed.
NASA Astrophysics Data System (ADS)
Yarloo, H.; Langari, A.; Vaezi, A.
2018-02-01
We enquire into quasi many-body localization in topologically ordered states of matter, revolving around the case of the Kitaev toric code on the ladder geometry, where different types of anyonic defects carry different masses induced by environmental errors. Our study verifies that the presence of anyons generates a complex energy landscape solely through braiding statistics, which suffices to suppress the diffusion of defects in such a clean, multicomponent anyonic liquid. This nonergodic dynamics suggests a promising scenario for the investigation of quasi many-body localization. Computing standard diagnostics provides evidence that a typical initial inhomogeneity of anyons gives birth to glassy dynamics with an exponentially diverging time scale of the full relaxation. Our results unveil how self-generated disorder ameliorates the vulnerability of topological order away from equilibrium. This setting provides a new platform which paves the way toward impeding logical errors by self-localization of anyons in a generic, high-energy state, originating exclusively in their exotic statistics.
Quantifying memory in complex physiological time-series.
Shirazi, Amir H; Raoufy, Mohammad R; Ebadi, Haleh; De Rui, Michele; Schiff, Sami; Mazloom, Roham; Hajizadeh, Sohrab; Gharibzadeh, Shahriar; Dehpour, Ahmad R; Amodio, Piero; Jafari, G Reza; Montagnese, Sara; Mani, Ali R
2013-01-01
In a time-series, memory is a statistical feature that lasts for a period of time and distinguishes the time-series from a random, or memory-less, process. In the present study, the concept of "memory length" was used to define the time period, or scale over which rare events within a physiological time-series do not appear randomly. The method is based on inverse statistical analysis and provides empiric evidence that rare fluctuations in cardio-respiratory time-series are 'forgotten' quickly in healthy subjects while the memory for such events is significantly prolonged in pathological conditions such as asthma (respiratory time-series) and liver cirrhosis (heart-beat time-series). The memory length was significantly higher in patients with uncontrolled asthma compared to healthy volunteers. Likewise, it was significantly higher in patients with decompensated cirrhosis compared to those with compensated cirrhosis and healthy volunteers. We also observed that the cardio-respiratory system has simple low order dynamics and short memory around its average, and high order dynamics around rare fluctuations.
Mahapatra, Dwarikanath; Schueffler, Peter; Tielbeek, Jeroen A W; Buhmann, Joachim M; Vos, Franciscus M
2013-10-01
Increasing incidence of Crohn's disease (CD) in the Western world has made its accurate diagnosis an important medical challenge. The current reference standard for diagnosis, colonoscopy, is time-consuming and invasive while magnetic resonance imaging (MRI) has emerged as the preferred noninvasive procedure over colonoscopy. Current MRI approaches assess rate of contrast enhancement and bowel wall thickness, and rely on extensive manual segmentation for accurate analysis. We propose a supervised learning method for the identification and localization of regions in abdominal magnetic resonance images that have been affected by CD. Low-level features like intensity and texture are used with shape asymmetry information to distinguish between diseased and normal regions. Particular emphasis is laid on a novel entropy-based shape asymmetry method and higher-order statistics like skewness and kurtosis. Multi-scale feature extraction renders the method robust. Experiments on real patient data show that our features achieve a high level of accuracy and perform better than two competing methods.
Spatial interpolation of monthly mean air temperature data for Latvia
NASA Astrophysics Data System (ADS)
Aniskevich, Svetlana
2016-04-01
Temperature data with high spatial resolution are essential for appropriate and qualitative analysis of local characteristics. Nowadays the surface observation station network in Latvia consists of 22 stations recording daily air temperature, so in order to analyze very specific and local features in the spatial distribution of temperature values across the whole of Latvia, a high-quality spatial interpolation method is required. Until now, inverse distance weighted interpolation was used for the interpolation of air temperature data at the meteorological and climatological service of the Latvian Environment, Geology and Meteorology Centre, and no additional topographical information was taken into account. This method made it almost impossible to reasonably assess the actual temperature gradient and distribution between the observation points. During this project a new interpolation method was applied and tested, considering auxiliary explanatory parameters. In order to spatially interpolate monthly mean temperature values, kriging with external drift was used over a grid of 1 km resolution, with candidate covariates including 5 km mean elevation, continentality, distance from the Gulf of Riga and the Baltic Sea, the biggest lakes and rivers, and population density. Based on a complex analysis of the situation, mean elevation and continentality were chosen as the most appropriate of these parameters. In order to validate the interpolation results, several statistical indicators of the differences between predicted and observed values were used. Overall, the introduced model visually and statistically outperforms the previous interpolation method and provides a meteorologically reasonable result, taking into account factors that influence the spatial distribution of the monthly mean temperature.
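Kriging with external drift (KED) augments the ordinary kriging system with unbiasedness constraints for the drift covariates. Below is a minimal dense-solver sketch, assuming an exponential covariance with placeholder variogram parameters (the study's fitted values are not given in the abstract):

```python
import numpy as np

def ked_predict(xy, y, drift, xy0, drift0, sill=1.0, corr_range=50_000.0):
    """Kriging with external drift: solve the saddle system
    [C F; F' 0][w; mu] = [c0; f0] and predict w @ y.
    Exponential covariance C(h) = sill * exp(-h / corr_range)."""
    h = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    C = sill * np.exp(-h / corr_range)
    F = np.column_stack([np.ones(len(xy)), drift])     # constant + drift terms
    n, p = len(xy), F.shape[1]
    A = np.zeros((n + p, n + p))
    A[:n, :n], A[:n, n:], A[n:, :n] = C, F, F.T
    h0 = np.linalg.norm(xy - xy0, axis=-1)
    b = np.concatenate([sill * np.exp(-h0 / corr_range), [1.0, drift0]])
    w = np.linalg.solve(A, b)[:n]
    return w @ y

# Toy usage: 22 stations (as in Latvia), elevation as the external drift.
rng = np.random.default_rng(4)
xy = rng.uniform(0, 400_000, size=(22, 2))                 # metres
elev = rng.uniform(0, 300, size=22)
temp = 6.0 - 0.0065 * elev + rng.normal(0, 0.3, size=22)   # lapse-rate toy signal
print(round(ked_predict(xy, temp, elev, xy[0] + 1000.0, elev[0]), 2))
```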
[Analysis of master degree thesis of otolaryngology head and neck surgery in Xinjiang].
Ayiheng, Qukuerhan; Niliapaer, Alimu; Yalikun, Yasheng
2010-12-01
To understand the basic situation and the development of the knowledge structure and abilities of master's degree students of Otolaryngology Head and Neck Surgery in the Xinjiang region, in order to provide a reference for further improving the quality of postgraduate education. Forty-six Otolaryngology master's degree theses from 1998-2009 in the Xinjiang region were reviewed at random in terms of type, range of subject selection and statistical methods, in order to analyze their advantages and characteristics and to suggest solutions for their disadvantages. Of the 46 degree theses, nine (19.57%) were scientific-degree theses and 37 (80.43%) were clinical professional-degree theses; five were experimental research papers, 30 were clinical research papers, 10 were combined clinical and experimental research papers, and one was an experimental epidemiology research paper. The theses covered diseases from every subspecialty of ENT and employed a variety of statistical methods. The average number of references per thesis was 37.46, of which 19.55 were foreign-language references, and references from the most recent 5 years averaged 13.57. The postgraduate students came from four ethnic groups, and their tutors had a high professional teaching level. Future theses should focus on clinical research into common ENT diseases, apply advanced research methods and make full use of the latest literature, rely on high-level tutoring and the training of students of various nationalities, and pursue innovative basic research that reflects the characteristics of the subject while avoiding excessive duplication of research.
Akin, Cevat; Yi, Jingang; Feldman, Leonard C.; ...
2015-05-05
Nanowires of the same composition, even those fabricated within the same batch, often exhibit electrical conductivities that vary by orders of magnitude. Unfortunately, existing electrical characterization methods are time-consuming, making the statistical survey of highly variable samples essentially impractical. Here, we demonstrate a contactless, solution-based method to efficiently measure the electrical conductivity of 1D nanomaterials based on their transient alignment behavior in ac electric fields of different frequencies. Comparison with direct transport measurements by probe-based scanning tunneling microscopy shows that electro-orientation spectroscopy can quantitatively measure nanowire conductivity over a 5-order-of-magnitude range, 10^-5 to 1 Ω^-1 m^-1 (corresponding to resistivities in the range 10^2 to 10^7 Ω·cm). With this method, we statistically characterize the conductivity of a variety of nanowires and find significant variability in silicon nanowires grown by metal-assisted chemical etching from the same wafer. We also find that the active carrier concentration of n-type silicon nanowires is greatly reduced by surface traps and that surface passivation increases the effective conductivity by an order of magnitude. Moreover, this simple method makes electrical characterization of insulating and semiconducting 1D nanomaterials far more efficient and accessible to more researchers than current approaches. Electro-orientation spectroscopy also has the potential to be integrated with other solution-based methods for the high-throughput sorting and manipulation of 1D nanomaterials for postgrowth device assembly.
Some limit theorems for ratios of order statistics from uniform random variables.
Xu, Shou-Fang; Miao, Yu
2017-01-01
In this paper, we study the ratios of order statistics based on samples drawn from uniform distribution and establish some limit properties such as the almost sure central limit theorem, the large deviation principle, the Marcinkiewicz-Zygmund law of large numbers and complete convergence.
ERIC Educational Resources Information Center
Zacharakis, Jeff; Wang, Haiyan; Patterson, Margaret Becker; Andersen, Lori
2015-01-01
This research analyzed linked high-quality state data from K-12, adult education, and postsecondary state datasets in order to better understand the association between student demographics and successful completion of a postsecondary program. Due to the relatively small sample size compared to the large number of features, we analyzed the data…
High quality GaAs single photon emitters on Si substrate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bietti, S.; Sanguinetti, S.; Cavigli, L.
2013-12-04
We describe a method for the direct epitaxial growth of a single photon emitter, based on GaAs quantum dots fabricated by droplet epitaxy, working at liquid-nitrogen temperatures on Si substrates. The achievement of quantum photon statistics up to T = 80 K is directly proved by antibunching in the second-order correlation function, as measured with a Hanbury Brown and Twiss interferometer.
Parametric Study of Decay of Homogeneous Isotropic Turbulence Using Large Eddy Simulation
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Rumsey, Christopher L.; Rubinstein, Robert; Balakumar, Ponnampalam; Zang, Thomas A.
2012-01-01
Numerical simulations of decaying homogeneous isotropic turbulence are performed with both low-order and high-order spatial discretization schemes. The turbulent Mach and Reynolds numbers for the simulations are 0.2 and 250, respectively. For the low-order schemes we use either second-order central or third-order upwind-biased differencing. For higher-order approximations we apply weighted essentially non-oscillatory (WENO) schemes, both with linear and nonlinear weights. There are two objectives in this preliminary effort to investigate possible schemes for large eddy simulation (LES). One is to explore the capability of a widely used low-order computational fluid dynamics (CFD) code to perform LES computations. The other is to determine the effect of higher-order accuracy (fifth, seventh, and ninth order) achieved with high-order upwind-biased WENO-based schemes. Turbulence statistics, such as kinetic energy, dissipation, and skewness, along with the energy spectra from simulations of the decaying turbulence problem, are used to assess and compare the various numerical schemes. In addition, results from the best performing schemes are compared with those from a spectral scheme. The effects of grid density, ranging from 32^3 to 192^3, on the computations are also examined. The fifth-order WENO-based scheme is found to be too dissipative, especially on the coarser grids. However, with the seventh-order and ninth-order WENO-based schemes we observe a significant improvement in accuracy relative to the lower-order LES schemes, as revealed by the computed peak in the energy dissipation and by the energy spectrum.
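The turbulence diagnostics named above are standard; as a generic illustration (not the paper's code), the shell-averaged kinetic-energy spectrum and the velocity-derivative skewness of a triply periodic field can be computed as follows:

```python
import numpy as np

def energy_spectrum(u, v, w):
    """Shell-averaged kinetic-energy spectrum E(k) on an N^3 periodic grid,
    via FFT and binning of modal energy over integer wavenumber shells."""
    n = u.shape[0]
    e = sum(np.abs(np.fft.fftn(c) / n**3) ** 2 for c in (u, v, w)) * 0.5
    k = np.fft.fftfreq(n, d=1.0 / n)                   # integer wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2).round().astype(int).ravel()
    return np.bincount(kmag, weights=e.ravel(), minlength=n)[: n // 2]

def velocity_derivative_skewness(u, dx=1.0):
    """Skewness of du/dx, a standard decaying-turbulence diagnostic."""
    dudx = np.gradient(u, dx, axis=0)
    return np.mean(dudx**3) / np.mean(dudx**2) ** 1.5

rng = np.random.default_rng(5)
u, v, w = (rng.standard_normal((32, 32, 32)) for _ in range(3))
print(energy_spectrum(u, v, w)[:4], velocity_derivative_skewness(u))
```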
Teruel, Jose R; Goa, Pål E; Sjøbakk, Torill E; Østlie, Agnes; Fjøsne, Hans E; Bathen, Tone F
2016-05-01
To compare "standard" diffusion-weighted imaging and diffusion tensor imaging (DTI) of 2nd and 4th order for the differentiation of malignant and benign breast lesions. Seventy-one patients were imaged at 3 Tesla with a 16-channel breast coil. A diffusion-weighted MRI sequence including b = 0 and b = 700 s/mm^2 in 30 directions was obtained for all patients. The image data were fitted to three different diffusion models: an isotropic model (apparent diffusion coefficient, ADC), a 2nd-order tensor model (the standard model used for DTI), and a 4th-order tensor model with increased degrees of freedom to describe anisotropy. The ability of the fitted parameters in the different models to differentiate between malignant and benign tumors was analyzed. Seventy-two breast lesions were analyzed, of which 38 were malignant and 34 benign. ADC (using any model) presented the highest ability to discriminate malignant from benign tumors, with a receiver operating characteristic area under the curve (AUC) of 0.968, and sensitivity and specificity of 94.1% and 94.7%, respectively, for a 1.33 × 10^-3 mm^2/s cutoff. Anisotropy measurements differed between malignant and benign tumors with high statistical significance (P < 0.001), but with lower discriminative ability than ADC (AUC of 0.896 and 0.897 for fractional anisotropy and generalized anisotropy, respectively). A statistically significant difference was found between generalized anisotropy and fractional anisotropy for cancers (P < 0.001) but not for benign lesions (P = 0.87). While anisotropy parameters have the potential to provide additional value for breast applications as demonstrated in this study, ADC exhibited the highest power to differentiate malignant from benign breast tumors. © 2015 Wiley Periodicals, Inc.
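The key quantities here are simple to compute per voxel. A sketch of the ADC estimate from the two b-values and of fractional anisotropy from the 2nd-order tensor eigenvalues (toy signal values, with b assumed in s/mm^2):

```python
import numpy as np

def adc(s0, sb, b=700.0):
    """Apparent diffusion coefficient from the b=0 and b=700 s/mm^2
    signals: ADC = ln(S0/Sb) / b, in mm^2/s."""
    return np.log(s0 / sb) / b

def fractional_anisotropy(evals):
    """FA from the three eigenvalues of a 2nd-order diffusion tensor."""
    l = np.asarray(evals, dtype=float)
    md = l.mean()                                 # mean diffusivity
    return np.sqrt(1.5 * np.sum((l - md) ** 2) / np.sum(l ** 2))

# Toy voxel: ADC near the 1.33e-3 mm^2/s cutoff reported in the study.
print(f"{adc(1000.0, 400.0):.2e} mm^2/s")
print(f"FA = {fractional_anisotropy([1.8e-3, 1.1e-3, 0.9e-3]):.3f}")
```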
NASA Astrophysics Data System (ADS)
Vermeire, B. C.; Witherden, F. D.; Vincent, P. E.
2017-04-01
First- and second-order accurate numerical methods, implemented for CPUs, underpin the majority of industrial CFD solvers. Whilst this technology has proven very successful at solving steady-state problems via a Reynolds Averaged Navier-Stokes approach, its utility for undertaking scale-resolving simulations of unsteady flows is less clear. High-order methods for unstructured grids and GPU accelerators have been proposed as an enabling technology for unsteady scale-resolving simulations of flow over complex geometries. In this study we systematically compare accuracy and cost of the high-order Flux Reconstruction solver PyFR running on GPUs and the industry-standard solver STAR-CCM+ running on CPUs when applied to a range of unsteady flow problems. Specifically, we perform comparisons of accuracy and cost for isentropic vortex advection (EV), decay of the Taylor-Green vortex (TGV), turbulent flow over a circular cylinder, and turbulent flow over an SD7003 aerofoil. We consider two configurations of STAR-CCM+: a second-order configuration, and a third-order configuration, where the latter was recommended by CD-adapco for more effective computation of unsteady flow problems. Results from both PyFR and STAR-CCM+ demonstrate that third-order schemes can be more accurate than second-order schemes for a given cost e.g. going from second- to third-order, the PyFR simulations of the EV and TGV achieve 75× and 3× error reduction respectively for the same or reduced cost, and STAR-CCM+ simulations of the cylinder recovered wake statistics significantly more accurately for only twice the cost. Moreover, advancing to higher-order schemes on GPUs with PyFR was found to offer even further accuracy vs. cost benefits relative to industry-standard tools.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bolding, Simon R.; Cleveland, Mathew Allen; Morel, Jim E.
In this paper, we have implemented a new high-order low-order (HOLO) algorithm for solving thermal radiative transfer problems. The low-order (LO) system is based on the spatial and angular moments of the transport equation and a linear-discontinuous finite-element spatial representation, producing equations similar to the standard S_2 equations. The LO solver is fully implicit in time and efficiently resolves the nonlinear temperature dependence at each time step. The high-order (HO) solver utilizes exponentially convergent Monte Carlo (ECMC) to give a globally accurate solution for the angular intensity to a fixed-source pure-absorber transport problem. This global solution is used to compute consistency terms, which require the HO and LO solutions to converge toward the same solution. The use of ECMC allows for the efficient reduction of statistical noise in the Monte Carlo solution, reducing inaccuracies introduced through the LO consistency terms. Finally, we compare results with an implicit Monte Carlo code for one-dimensional gray test problems and demonstrate the efficiency of ECMC over standard Monte Carlo in this HOLO algorithm.
Effect of third-order aberrations on dynamic accommodation.
López-Gil, Norberto; Rucker, Frances J; Stark, Lawrence R; Badar, Mustanser; Borgovan, Theodore; Burke, Sean; Kruger, Philip B
2007-03-01
We investigate the potential for the third-order aberrations coma and trefoil to provide a signed cue to accommodation. It is first demonstrated theoretically (with some assumptions) that the point spread function is insensitive to the sign of spherical defocus in the presence of odd-order aberrations. In an experimental investigation, the accommodation response to a sinusoidal change in vergence (1-3 D, 0.2 Hz) of a monochromatic stimulus was obtained with a dynamic infrared optometer. Measurements were obtained in 10 young visually normal individuals with and without custom contact lenses that induced low and high values of r.m.s. trefoil (0.25, 1.03 µm) and coma (0.34, 0.94 µm). Despite variation between subjects, we did not find any statistically significant increase or decrease in the accommodative gain for low levels of trefoil and coma, although effects approached or reached significance for the high levels of trefoil and coma. Theoretical and experimental results indicate that the presence of Zernike third-order aberrations on the eye does not seem to play a crucial role in the dynamics of the accommodation response.
Divergence of activity expansions: Is it actually a problem?
NASA Astrophysics Data System (ADS)
Ushcats, M. V.; Bulavin, L. A.; Sysoev, V. M.; Ushcats, S. Yu.
2017-12-01
For realistic interaction models, which include both molecular attraction and repulsion (e.g., Lennard-Jones, modified Lennard-Jones, Morse, and square-well potentials), the asymptotic behavior of the virial expansions for pressure and density in powers of activity has been studied taking power terms of high orders into account on the basis of the known finite-order irreducible integrals as well as the recent approximations of infinite irreducible series. Even in the divergence region (at subcritical temperatures), this behavior stays thermodynamically adequate (in contrast to the behavior of the virial equation of state with the same set of irreducible integrals) and corresponds to the beginning of the first-order phase transition: the divergence yields the jump (discontinuity) in density at constant pressure and chemical potential. In general, it provides a statistical explanation of the condensation phenomenon, but for liquid or solid states, the physically proper description (which can turn the infinite discontinuity into a finite jump of density) still needs further study of high-order cluster integrals and, especially, their real dependence on the system volume (density).
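For reference, the activity (Mayer cluster) expansions discussed above take the standard form below, with cluster integrals b_n expressed through the irreducible integrals β_m (standard statistical-mechanics notation, not the authors' specific approximations of the infinite irreducible series):

```latex
% Activity expansions for pressure and density; z is the activity.
\frac{P}{k_B T} = \sum_{n \ge 1} b_n z^n ,
\qquad
\rho = z\,\frac{\partial}{\partial z}\frac{P}{k_B T} = \sum_{n \ge 1} n\, b_n z^n ,
\qquad
b_n = \frac{1}{n^2} \sum_{\{k_m\}:\, \sum_m m k_m = n-1} \;
      \prod_m \frac{(n \beta_m)^{k_m}}{k_m!} .
```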
On the Lighthill relationship and sound generation from isotropic turbulence
NASA Technical Reports Server (NTRS)
Zhou, Ye; Praskovsky, Alexander; Oncley, Steven
1994-01-01
In 1952, Lighthill developed a theory for determining the sound generated by the turbulent motion of a fluid. With some statistical assumptions, Proudman applied this theory to estimate the acoustic power of isotropic turbulence. Recently, Lighthill established a simple relationship that relates the fourth-order retarded time and space covariance of his stress tensor to the corresponding second-order covariance and the turbulent flatness factor, without making statistical assumptions, for a homogeneous turbulence. Lilley revisited Proudman's work and applied the Lighthill relationship to evaluate directly the radiated acoustic power from isotropic turbulence. After choosing the time-separation dependence in the two-point velocity time and space covariance based on insights gained from direct numerical simulations, Lilley concluded that the Proudman constant is determined by the turbulent flatness factor and the second-order spatial velocity covariance. In order to estimate the Proudman constant at high Reynolds numbers, we analyzed a unique data set of measurements in a large wind tunnel and in the atmospheric surface layer that covers a range of Taylor-microscale Reynolds numbers 2.0 × 10^3 ≤ R_λ ≤ 12.7 × 10^3. Our measurements demonstrate that the Lighthill relationship is a good approximation, providing additional support for Lilley's approach. The flatness factor is found to be between 2.7 and 3.3, and the second-order spatial velocity covariance is obtained. Based on these experimental data, the Proudman constant is estimated to be 0.68-3.68.
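The Gaussian (quasi-normal) benchmark behind the Lighthill relationship follows from Isserlis' theorem for jointly Gaussian variables; the measured flatness factors of 2.7 to 3.3 (Gaussian value: 3) gauge the departure from this limit:

```latex
% Quasi-normal fourth-moment factorization for jointly Gaussian u, u':
\langle u^2 u'^2 \rangle
  = \langle u^2 \rangle \langle u'^2 \rangle + 2 \langle u u' \rangle^2 ,
\qquad
T_4 \equiv \frac{\langle u^4 \rangle}{\langle u^2 \rangle^2} = 3
\ \text{(Gaussian value)} .
```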
NASA Astrophysics Data System (ADS)
Josey, C.; Forget, B.; Smith, K.
2017-12-01
This paper introduces two families of A-stable algorithms for the integration of y′ = F(y, t) y: the extended predictor-corrector (EPC) and the exponential-linear (EL) methods. The structure of the algorithm families is described, and the method of derivation of the coefficients is presented. The new algorithms are then tested on a simple deterministic problem and on a Monte Carlo isotopic evolution problem. The EPC family is shown to be only second order for systems of ODEs. However, the EPC-RK45 algorithm had the highest accuracy on the Monte Carlo test, requiring at least a factor of 2 fewer function evaluations than a second-order predictor-corrector method (center extrapolation / center midpoint method) to achieve a given accuracy in the Gd-157 concentration. Members of the EL family can be derived to at least fourth order. The EL3 and EL4 algorithms presented are shown to be third and fourth order, respectively, on the systems-of-ODEs test. In the Monte Carlo test, these methods did not overtake the accuracy of the EPC methods before statistical uncertainty dominated the error. The statistical properties of the algorithms were also analyzed during the Monte Carlo problem. The new methods are shown to yield smaller standard deviations on final quantities than the reference predictor-corrector method, by up to a factor of 1.4.
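The EPC coefficients themselves are not given in the abstract; the sketch below shows only the generic shape of an exponential predictor-corrector step for y′ = F(y, t) y using the matrix exponential, with a toy depletion-like matrix (all names and parameters are illustrative):

```python
import numpy as np
from scipy.linalg import expm

def exp_pc_step(F, y, t, h):
    """One generic exponential predictor-corrector step for y' = F(y, t) y:
    predict with the matrix exponential frozen at t, then correct with the
    exponential of the averaged matrix (a midpoint-like rule). A sketch of
    the idea, not the paper's EPC-RK45 coefficients."""
    y_pred = expm(h * F(y, t)) @ y                 # predictor
    A_avg = 0.5 * (F(y, t) + F(y_pred, t + h))     # corrector matrix
    return expm(h * A_avg) @ y

def F(y, t):
    # Toy transmutation matrix with a mildly time-dependent loss term.
    return np.array([[-1.0, 0.0],
                     [ 1.0, -0.1 * (1 + 0.5 * np.sin(t))]])

y, t, h = np.array([1.0, 0.0]), 0.0, 0.1
for _ in range(100):
    y = exp_pc_step(F, y, t, h)
    t += h
print(np.round(y, 6))
```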
Wigner surmises and the two-dimensional homogeneous Poisson point process.
Sakhr, Jamal; Nieminen, John M
2006-04-01
We derive a set of identities that relate the higher-order interpoint spacing statistics of the two-dimensional homogeneous Poisson point process to the Wigner surmises for the higher-order spacing distributions of eigenvalues from the three classical random matrix ensembles. We also report a remarkable identity that equates the second-nearest-neighbor spacing statistics of the points of the Poisson process and the nearest-neighbor spacing statistics of complex eigenvalues from Ginibre's ensemble of 2 × 2 complex non-Hermitian random matrices.
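The simplest instance of these identities is worth stating explicitly: the nearest-neighbour spacing distribution of the 2D homogeneous Poisson process, scaled to unit mean spacing, is the Rayleigh density, which coincides with the Wigner surmise for the Gaussian orthogonal ensemble:

```latex
% NN spacing of the 2D Poisson process (unit mean) = GOE Wigner surmise:
P(s) = \frac{\pi s}{2}\, e^{-\pi s^2 / 4} .
```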
Collins, Ryan L; Hu, Ting; Wejse, Christian; Sirugo, Giorgio; Williams, Scott M; Moore, Jason H
2013-02-18
Identifying high-order genetic associations with non-additive (i.e. epistatic) effects in population-based studies of common human diseases is a computational challenge. Multifactor dimensionality reduction (MDR) is a machine learning method that was designed specifically for this problem. The goal of the present study was to apply MDR to mining high-order epistatic interactions in a population-based genetic study of tuberculosis (TB). The study used a previously published data set consisting of 19 candidate single-nucleotide polymorphisms (SNPs) in 321 pulmonary TB cases and 347 healthy controls from Guinea-Bissau in Africa. The ReliefF algorithm was applied first to generate a smaller set of the five most informative SNPs. MDR with 10-fold cross-validation was then applied to all possible combinations of two, three, four and five SNPs. The MDR model with the best testing accuracy (TA) consisted of SNPs rs2305619, rs187084, and rs11465421 (TA = 0.588) in PTX3, TLR9 and DC-SIGN, respectively. A general 1000-fold permutation test of the null hypothesis of no association confirmed the statistical significance of the model (p = 0.008). An additional 1000-fold permutation test, designed specifically to test the linear null hypothesis that the association effects are only additive, confirmed the presence of non-additive (i.e. nonlinear) or epistatic effects (p = 0.013). An independent information-gain measure corroborated these results with a third-order epistatic interaction that was stronger than any lower-order association. We have identified statistically significant evidence for a three-way epistatic interaction that is associated with susceptibility to TB. This interaction is stronger than any previously described one-way or two-way associations. This study highlights the importance of using machine learning methods that are designed to embrace, rather than ignore, the complexity of common diseases such as TB. We recommend that future studies of the genetics of TB take into account the possibility that high-order epistatic interactions might play an important role in disease susceptibility.
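The core MDR step, labelling each multilocus genotype cell high or low risk by its case:control ratio on the training fold and scoring that labelling on the test fold, can be sketched as follows. The genotypes are synthetic and the helper names hypothetical; real analyses use the published MDR software together with permutation testing:

```python
import numpy as np
from itertools import combinations

def mdr_cv_accuracy(X, y, combo, folds=10, seed=0):
    """Testing accuracy of one MDR model: genotype cells of the SNP
    combination are labelled high/low risk on the training fold, then
    that labelling classifies the test fold."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    accs = []
    for f in range(folds):
        test = idx[f::folds]
        train = np.setdiff1d(idx, test)
        cells = [tuple(r) for r in X[np.ix_(train, combo)]]
        overall = y[train].mean() / (1 - y[train].mean())   # case:control ratio
        risk = {}
        for c in set(cells):
            m = np.array([ct == c for ct in cells])
            cases = y[train][m].sum()
            ctrls = (~y[train][m].astype(bool)).sum()
            risk[c] = (cases / max(ctrls, 1)) > overall      # high-risk cell?
        pred = [risk.get(tuple(r), False) for r in X[np.ix_(test, combo)]]
        accs.append(np.mean(np.array(pred) == y[test].astype(bool)))
    return np.mean(accs)

rng = np.random.default_rng(6)
X = rng.integers(0, 3, size=(668, 5))     # 5 ReliefF-filtered SNPs, coded 0/1/2
y = rng.integers(0, 2, size=668)          # roughly 321 cases + 347 controls
best = max((c for k in (2, 3) for c in combinations(range(5), k)),
           key=lambda c: mdr_cv_accuracy(X, y, c))
print("best combination:", best)
```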
Uncertainties in obtaining high reliability from stress-strength models
NASA Technical Reports Server (NTRS)
Neal, Donald M.; Matthews, William T.; Vangel, Mark G.
1992-01-01
There has been recent interest in determining high statistical reliability in risk assessment of aircraft components. The potential consequences of incorrectly assuming a particular statistical distribution for the stress or strength data used in obtaining high reliability values are identified. The reliability is computed as the probability of the strength being greater than the stress over the range of stress values; this method is often referred to as the stress-strength model. A sensitivity analysis was performed comparing reliability results in order to evaluate the effects of assuming specific statistical distributions, considering both known population distributions and distributions that differed slightly from the known ones. Results showed substantial differences in reliability estimates even for almost undetectable differences in the assumed distributions. These differences represent a potential problem in using the stress-strength model for high reliability computations, since in practice it is impossible to ever know the exact (population) distribution. An alternative reliability computation procedure is examined, involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates, and results indicated that the method can provide conservative bounds when computing high reliability.
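The sensitivity described above is easy to reproduce by Monte Carlo: two strength models with nearly identical means and standard deviations but different lower tails yield failure probabilities that differ by orders of magnitude. A sketch with placeholder distribution parameters:

```python
import numpy as np
from scipy import stats

def reliability(strength_dist, stress_dist, n=2_000_000, seed=7):
    """Monte Carlo estimate of R = P(strength > stress),
    the stress-strength model discussed above."""
    rng = np.random.default_rng(seed)
    return np.mean(strength_dist.rvs(n, random_state=rng) >
                   stress_dist.rvs(n, random_state=rng))

stress = stats.norm(50, 5)
strength_normal = stats.norm(80, 5)
# A Weibull strength model matched to roughly the same mean (~80) and
# sd (~5); g1 and sd1 are the mean and sd of a standard Weibull(c=3).
g1, sd1 = 0.8930, 0.3245
scale = 5.0 / sd1
strength_weibull = stats.weibull_min(c=3.0, loc=80.0 - scale * g1, scale=scale)

print(reliability(strength_normal, stress))    # near-identical moments,
print(reliability(strength_weibull, stress))   # very different tail behavior
```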
Reconnection properties in Kelvin-Helmholtz instabilities
NASA Astrophysics Data System (ADS)
Vernisse, Y.; Lavraud, B.; Eriksson, S.; Gershman, D. J.; Dorelli, J.; Pollock, C. J.; Giles, B. L.; Aunai, N.; Avanov, L. A.; Burch, J.; Chandler, M. O.; Coffey, V. N.; Dargent, J.; Ergun, R.; Farrugia, C. J.; Genot, V. N.; Graham, D.; Hasegawa, H.; Jacquey, C.; Kacem, I.; Khotyaintsev, Y. V.; Li, W.; Magnes, W.; Marchaudon, A.; Moore, T. E.; Paterson, W. R.; Penou, E.; Phan, T.; Retino, A.; Schwartz, S. J.; Saito, Y.; Sauvaud, J. A.; Schiff, C.; Torbert, R. B.; Wilder, F. D.; Yokota, S.
2017-12-01
Kelvin-Helmholtz instabilities provide natural laboratories for studying strong guide field reconnection processes. In particular, unlike at the usual dayside magnetopause, the conditions across the magnetopause in KH vortices are quasi-symmetric, with small differences in beta and magnetic shear angle. We study these properties by means of a statistical analysis of the high-resolution data of the Magnetospheric Multiscale mission. Several Kelvin-Helmholtz instability events past the terminator plane and a long-lasting dayside instability event were used to produce this statistical analysis. Early results show consistency between the data and theory. In addition, the results emphasize the importance of the thickness of the magnetopause as a driver of magnetic reconnection in low magnetic shear events.
GPU-computing in econophysics and statistical physics
NASA Astrophysics Data System (ADS)
Preis, T.
2011-03-01
A recent trend in computer science and related fields is general purpose computing on graphics processing units (GPUs), which can yield impressive performance. With multiple cores connected by high memory bandwidth, today's GPUs offer resources for non-graphics parallel processing. This article provides a brief introduction to the field of GPU computing and includes examples. In particular, computationally expensive analyses employed in a financial market context are coded on a graphics card architecture, which leads to a significant reduction in computing time. In order to demonstrate the wide range of possible applications, a standard model in statistical physics, the Ising model, is ported to a graphics card architecture as well, resulting in large speedup values.
Supplementing land-use statistics with landscape metrics: some methodological considerations.
Herzog, F; Lausch, A
2001-11-01
Landscape monitoring usually relies on land-use statistics which reflect the share of land-use/land-cover types. In order to understand the functioning of landscapes, landscape pattern must be considered as well. Indicators which address the spatial configuration of landscapes are therefore needed. The suitability of landscape metrics, which are computed from the type, geometry and arrangement of patches, is examined. Two case studies in a surface mining region show that landscape metrics capture landscape structure but are highly dependent on the data model and on the methods of data analysis. For landscape metrics to become part of policy-relevant sets of environmental indicators, standardised procedures for their computation from remote sensing images must be developed.
Petsch, Harold E.
1979-01-01
Statistical summaries of daily streamflow data for 189 stations west of the Continental Divide in Colorado are presented in this report. Duration tables, high-flow sequence tables, and low-flow sequence tables provide information about daily mean discharge. The mean, variance, standard deviation, skewness, and coefficient of variation are provided for monthly and annual flows. Percentages of average flow are provided for monthly flows, and first-order serial-correlation coefficients are provided for annual flows. The text explains the nature and derivation of the data and illustrates applications of the tabulated information by examples. The data may be used by agencies and individuals engaged in water studies. (USGS)
A statistical survey of heat input parameters into the cusp thermosphere
NASA Astrophysics Data System (ADS)
Moen, J. I.; Skjaeveland, A.; Carlson, H. C.
2017-12-01
Based on three winters of observational data, we present the ionosphere parameters deemed most critical to realistic space weather ionosphere and thermosphere representation and prediction in regions impacted by variability in the cusp. The CHAMP spacecraft revealed large variability in cusp thermosphere densities, measuring frequent satellite drag enhancements of up to a factor of two. The community recognizes a clear need for more realistic representation of plasma flows and electron densities near the cusp. Existing average-value models produce order-of-magnitude errors in these parameters, resulting in large underestimations of predicted drag. We fill this knowledge gap with a statistics-based specification of these key parameters over their range of observed values. The EISCAT Svalbard Radar (ESR) tracks plasma flow Vi, electron density Ne, and electron and ion temperatures Te and Ti, with consecutive 2-3 minute windshield-wipe scans of 1000 × 500 km areas. This allows mapping the maximum Ti of a large area within or near the cusp with high temporal resolution. In magnetic field-aligned mode the radar can measure high-resolution profiles of these plasma parameters. By deriving statistics for Ne and Ti, we enable derivation of thermosphere heating deposition under background and under frictional-drag-dominated magnetic reconnection conditions. We separate our Ne and Ti profiles into quiescent and enhanced states, which are not closely correlated, owing to the spatial structure of the reconnection foot point. Use of our data-based parameter inputs can make order-of-magnitude corrections to the input data driving thermosphere models, enabling removal of previous twofold drag errors.
Nonparametric Bayesian predictive distributions for future order statistics
Richard A. Johnson; James W. Evans; David W. Green
1999-01-01
We derive the predictive distribution for a specified order statistic, determined from a future random sample, under a Dirichlet process prior. Two variants of the approach are treated and some limiting cases studied. A practical application to monitoring the strength of lumber is discussed including choices of prior expectation and comparisons made to a Bayesian...
Determination of astaxanthin in Haematococcus pluvialis by first-order derivative spectrophotometry.
Liu, Xiao Juan; Wu, Ying Hua; Zhao, Li Chao; Xiao, Su Yao; Zhou, Ai Mei; Liu, Xin
2011-01-01
A highly selective, convenient, and precise method, first-order derivative spectrophotometry, was applied to the determination of astaxanthin in Haematococcus pluvialis. Ethyl acetate and ethanol (1:1, v/v) were found to be the best extraction solvent tested, owing to their high efficiency and low toxicity compared with nine other organic solvents. Astaxanthin coexisting with chlorophyll and beta-carotene was analyzed by first-order derivative spectrophotometry in order to optimize the conditions for the determination of astaxanthin. The results show that when the derivative is read at 432 nm, the interfering substances can be eliminated. The dynamic linear range was 2.0-8.0 µg/mL, with a correlation coefficient of 0.9916. The detection threshold was 0.41 µg/mL. The RSD for the determination of astaxanthin was in the range of 0.01-0.06%, and recoveries were 98.1-108.0%. A statistical comparison between first-order derivative spectrophotometry and HPLC by t-test did not exceed the critical values, revealing no significant differences between the two methods. First-order derivative spectrophotometry thus proved to be a rapid and convenient method for the determination of astaxanthin in H. pluvialis that can eliminate the negative effects resulting from the coexistence of astaxanthin with chlorophyll and beta-carotene.
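The zero-crossing logic behind reading the first derivative at 432 nm can be illustrated with synthetic Gaussian bands (not real spectra): an interferent band centred at 432 nm contributes no first-derivative signal at its own centre, so the derivative there tracks the analyte alone:

```python
import numpy as np

def first_derivative_value(wavelengths_nm, absorbance, at_nm=432.0):
    """First-derivative spectrum dA/dlambda via finite differences,
    read out at the interferent's zero-crossing wavelength."""
    deriv = np.gradient(absorbance, wavelengths_nm)
    return np.interp(at_nm, wavelengths_nm, deriv)

wl = np.linspace(350, 600, 501)
conc = np.array([2.0, 4.0, 6.0, 8.0])                        # microg/mL
peak = lambda a, mu, w: a * np.exp(-0.5 * ((wl - mu) / w) ** 2)
# Analyte band away from 432 nm plus a fixed interferent centred at 432 nm.
signals = [peak(c, 478, 30) + peak(1.5, 432, 25) for c in conc]
d_values = [first_derivative_value(wl, s) for s in signals]
slope, intercept = np.polyfit(conc, d_values, 1)             # linear calibration
print(f"dA/dlambda(432 nm) = {slope:.4f}*C + {intercept:.4f}")
```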
Monitoring the soil degradation by Metastatistical Analysis
NASA Astrophysics Data System (ADS)
Oleschko, K.; Gaona, C.; Tarquis, A.
2009-04-01
The effectiveness of the fractal toolbox in capturing the critical behavior of soil structural patterns during chemical and physical degradation has been documented by our numerous experiments (Oleschko et al., 2008a, 2008b). The spatio-temporal dynamics of these patterns was measured and mapped with high precision in terms of fractal descriptors. All tested fractal techniques were able to detect statistically significant differences in structure between the perfect spongy and massive patterns of uncultivated and sodium-saline agricultural soils, respectively. For instance, the Hurst exponent, extracted from Chernozem micromorphological images and from time series of physical and mechanical properties measured in situ, detected the decrease in roughness (and therefore the increase in H, from 0.17 to 0.30 for images) derived from the loss of original structural complexity. The combined use of different fractal descriptors brings statistical precision to the quantification of natural system degradation and provides a means for objective soil structure comparison (Oleschko et al., 2000). The ability of fractal parameters to capture critical behavior and phase transitions was documented for contrasting situations ranging from Andosol deforestation and erosion to Vertisol fracturing and consolidation. The Hurst exponent is used to measure the type of persistence and the degree of complexity of structural dynamics. We conclude that there is an urgent need to select and adopt a standardized toolbox for fractal analysis and complexity measures in the Earth sciences. We propose to use second-order (meta-)statistics as subtle measures of complexity (Atmanspacher et al., 1997). A high degree of correlation was documented between the fractal descriptors and the high-order statistical descriptors (the four central moments of a stochastic variable's distribution) used for system heterogeneity and variability analysis. We propose to call this combined fractal/statistical toolbox Metastatistical Analysis and recommend it for projects directed at soil degradation monitoring. References: 1. Oleschko, K., Figueroa, B.S., Miranda, M.E., Vuelvas, M.A., Solleiro, E.R., 2000. Soil & Tillage Research 55, 43. 2. Oleschko, K., Korvin, G., Figueroa, S.B., Vuelvas, M.A., Balankin, A., Flores, L., Carreño, D., 2003. Fractal radar scattering from soil. Physical Review E 67, 041403. 3. Zamora-Castro, S., Oleschko, K., Flores, L., Ventura, E. Jr., Parrot, J.-F., 2008. Fractal mapping of pore and solid attributes. Vadose Zone Journal 7(2), 473-492. 4. Oleschko, K., Korvin, G., Muñoz, A., Velásquez, J., Miranda, M.E., Carreon, D., Flores, L., Martínez, M., Velásquez-Valle, M., Brambilla, F., Parrot, J.-F., Ronquillo, G., 2008. Fractal mapping of soil moisture content from remote sensed multi-scale data. Nonlinear Processes in Geophysics 15, 711-725. 5. Atmanspacher, H., Räth, Ch., Wiedenmann, G., 1997. Statistics and meta-statistics in the concept of complexity. Physica A 234, 819-829.
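Of the descriptors named above, the Hurst exponent is the one most often recomputed from scratch. A rescaled-range (R/S) sketch for time series follows; this is one of several estimators and not necessarily the variant used in the study:

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Rescaled-range (R/S) estimate of the Hurst exponent H: the slope
    of log(R/S) against log(window size). H is close to 0.5 for
    uncorrelated noise; rising H signals loss of structural complexity,
    as in the degradation trend (0.17 -> 0.30) reported above."""
    x = np.asarray(x, float)
    sizes = np.unique(np.logspace(np.log10(min_chunk),
                                  np.log10(len(x) // 2), 12).astype(int))
    rs = []
    for s in sizes:
        chunks = x[: len(x) // s * s].reshape(-1, s)
        dev = chunks - chunks.mean(axis=1, keepdims=True)
        z = np.cumsum(dev, axis=1)              # cumulative deviation
        r = z.max(axis=1) - z.min(axis=1)       # range per window
        sd = chunks.std(axis=1)
        rs.append(np.mean(r[sd > 0] / sd[sd > 0]))
    H, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return H

rng = np.random.default_rng(8)
print(round(hurst_rs(rng.standard_normal(4096)), 2))   # expect roughly 0.5
```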
Statistics of contractive cracking patterns [frozen soil-water rheology]
NASA Technical Reports Server (NTRS)
Noever, David A.
1991-01-01
The statistics of contractive soil cracking patterns are analyzed using statistical crystallography. An underlying hierarchy of order is found to span four orders of magnitude in characteristic pattern length. Strict mathematical requirements determine the two-dimensional (2D) topology, such that random partitioning of space yields a predictable statistical geometry for polygons. For all lengths, Aboav's and Lewis's laws are verified; this result is consistent both with the need to fill 2D space and, most significantly, with energy carried not by the patterns' interior but by their boundaries. Together, this suggests a common mechanism of formation for both micro- and macro-freezing patterns.
On the streaming model for redshift-space distortions
NASA Astrophysics Data System (ADS)
Kuruvilla, Joseph; Porciani, Cristiano
2018-06-01
The streaming model describes the mapping between real and redshift space for 2-point clustering statistics. Its key element is the probability density function (PDF) of line-of-sight pairwise peculiar velocities. Following a kinetic-theory approach, we derive the fundamental equations of the streaming model for ordered and unordered pairs. In the first case, we recover the classic equation while we demonstrate that modifications are necessary for unordered pairs. We then discuss several statistical properties of the pairwise velocities for DM particles and haloes by using a suite of high-resolution N-body simulations. We test the often used Gaussian ansatz for the PDF of pairwise velocities and discuss its limitations. Finally, we introduce a mixture of Gaussians which is known in statistics as the generalised hyperbolic distribution and show that it provides an accurate fit to the PDF. Once inserted in the streaming equation, the fit yields an excellent description of redshift-space correlations at all scales that vastly outperforms the Gaussian and exponential approximations. Using a principal-component analysis, we reduce the complexity of our model for large redshift-space separations. Our results increase the robustness of studies of anisotropic galaxy clustering and are useful for extending them towards smaller scales in order to test theories of gravity and interacting dark-energy models.
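For reference, the classic streaming equation for ordered pairs, in the form commonly written in the literature (notation assumed here: ξ_r is the real-space correlation function and P the line-of-sight pairwise velocity PDF with velocities expressed in comoving distance units; the abstract's point is that this form needs modification for unordered pairs):

```latex
1 + \xi_s(s_\perp, s_\parallel)
  = \int_{-\infty}^{\infty}
    \left[ 1 + \xi_r\!\left(\sqrt{s_\perp^2 + y^2}\,\right) \right]
    \mathcal{P}\!\left( v_\parallel = s_\parallel - y \,\middle|\, r \right) dy
```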
A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis
Lin, Johnny; Bentler, Peter M.
2012-01-01
Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra-Bentler's mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds a new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra-Bentler's statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness in small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at very small sample sizes offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic. PMID:23144511
Exact extreme-value statistics at mixed-order transitions.
Bar, Amir; Majumdar, Satya N; Schehr, Grégory; Mukamel, David
2016-05-01
We study extreme-value statistics for spatially extended models exhibiting mixed-order phase transitions (MOT). These are phase transitions that exhibit features common to both first-order (discontinuity of the order parameter) and second-order (diverging correlation length) transitions. We consider here the truncated inverse-distance-squared Ising model, which is a prototypical model exhibiting MOT, and study analytically the extreme-value statistics of the domain lengths. The lengths of the domains are identically distributed random variables except for the global constraint that their sum equals the total system size L. In addition, the number of such domains is also a fluctuating variable, not fixed. In the paramagnetic phase, we show that the distribution of the largest domain length l_{max} converges, in the large-L limit, to a Gumbel distribution. However, at the critical point (for a certain range of parameters) and in the ferromagnetic phase, we show that the fluctuations of l_{max} are governed by novel distributions, which we compute exactly. Our main analytical results are verified by numerical simulations.
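A quick numerical illustration of the paramagnetic-phase result, namely that a suitably standardized largest domain length approaches a Gumbel law; exponential domain lengths are used here as a generic stand-in for the model's actual distribution:

```python
import numpy as np
from scipy import stats

# Draw many i.i.d. exponential "domain lengths" and standardize the maximum.
rng = np.random.default_rng(1)
n_domains, n_trials = 500, 20000
lmax = rng.exponential(scale=1.0, size=(n_trials, n_domains)).max(axis=1)

# Classical normalization for exponential maxima: subtract log(n).
z = lmax - np.log(n_domains)
ks = stats.kstest(z, stats.gumbel_r.cdf)
print(f"KS distance to Gumbel: {ks.statistic:.3f}")  # small => good agreement
```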
On use of the multistage dose-response model for assessing laboratory animal carcinogenicity
Nitcheva, Daniella; Piegorsch, Walter W.; West, R. Webster
2007-01-01
We explore how well a statistical multistage model describes dose-response patterns in laboratory animal carcinogenicity experiments from a large database of quantal response data. The data are collected from the U.S. EPA’s publicly available IRIS data warehouse and examined statistically to determine how often higher-order values in the multistage predictor yield significant improvements in explanatory power over lower-order values. Our results suggest that the addition of a second-order parameter to the model only improves the fit about 20% of the time, while adding even higher-order terms apparently does not contribute to the fit at all, at least with the study designs we captured in the IRIS database. Also included is an examination of statistical tests for assessing significance of higher-order terms in a multistage dose-response model. It is noted that bootstrap testing methodology appears to offer greater stability for performing the hypothesis tests than a more-common, but possibly unstable, “Wald” test. PMID:17490794
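A minimal sketch of the quantal multistage model and an order-comparison test of the kind discussed above; the dose groups and tumor counts are hypothetical, and a full analysis would bootstrap the statistic, as the paper recommends, rather than rely on its asymptotic distribution:

```python
import numpy as np
from scipy.optimize import minimize

# Quantal multistage model: P(d) = 1 - exp(-(q0 + q1*d + ... )), q_i >= 0,
# fit by binomial maximum likelihood. Doses and counts are hypothetical.
dose = np.array([0.0, 0.5, 1.0, 2.0])
n = np.array([50, 50, 50, 50])          # animals per group
tumors = np.array([2, 5, 9, 20])        # responders per group

def negloglik(q, order):
    lam = sum(q[i] * dose ** i for i in range(order + 1))
    p = np.clip(1.0 - np.exp(-lam), 1e-12, 1 - 1e-12)
    return -(tumors * np.log(p) + (n - tumors) * np.log(1 - p)).sum()

def fit(order):
    res = minimize(negloglik, x0=np.full(order + 1, 0.1), args=(order,),
                   bounds=[(0, None)] * (order + 1), method="L-BFGS-B")
    return res.fun

# Likelihood-ratio style comparison of first- vs second-order predictors;
# the constrained (q_i >= 0) fit puts parameters on the boundary, which is
# why the paper prefers a bootstrap test to a Wald test.
lr = 2.0 * (fit(1) - fit(2))
print(f"LR statistic for adding the second-order term: {lr:.3f}")
```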
The characteristics of low-speed streaks in the near-wall region of a turbulent boundary layer
NASA Astrophysics Data System (ADS)
Smith, C. R.; Metzler, S. P.
1983-04-01
The discovery of an instantaneous spanwise velocity distribution consisting of alternating zones of high- and low-speed fluid which develop in the viscous sublayer and extend into the logarithmic region was one of the first clues to the existence of an ordered structure within a turbulent boundary layer. The present investigation is concerned with quantitative flow-visualization results obtained with the aid of a high-speed video flow visualization system which permits the detailed visual examination of both the statistics and the characteristics of low-speed streaks over a much wider range of Reynolds numbers than has been possible before. Attention is given to streak appearance, mean streak spacing, the spanwise distribution of streaks, streak persistence, and aspects of streak merging and intermittency. The results indicate that the statistical characteristics of the spanwise spacing of low-speed streaks are essentially invariant with Reynolds number.
Measuring and partitioning the high-order linkage disequilibrium by multiple order Markov chains.
Kim, Yunjung; Feng, Sheng; Zeng, Zhao-Bang
2008-05-01
A map of the background levels of disequilibrium between nearby markers can be useful for association mapping studies. In order to assess the background levels of linkage disequilibrium (LD), multilocus LD measures are more advantageous than pairwise LD measures because the combined analysis of pairwise LD measures is not adequate to detect simultaneous allele associations among multiple markers. Various multilocus LD measures based on haplotypes have been proposed. However, most of these measures provide a single index of association among multiple markers and do not reveal the complex patterns and different levels of LD structure. In this paper, we employ non-homogeneous, multiple-order Markov chain models as a statistical framework to measure and partition the LD among multiple markers into components due to different orders of marker associations. Using a sliding window of multiple markers on phased haplotype data, we compute corresponding likelihoods for different Markov chain (MC) orders in each window. The log-likelihood difference between the lowest MC order model (MC0) and the highest MC order model in each window is used as a measure of the total LD, or the overall deviation from gametic equilibrium, for the window. Then, we partition the total LD into lower-order disequilibria and estimate the effects from two-, three-, and higher-order disequilibria. The relationship between different orders of LD and the log-likelihood difference involving two different orders of MC models is explored. By applying our method to the phased haplotype data in the ENCODE regions of the HapMap project, we are able to identify high/low multilocus LD regions. Our results reveal that most of the LD in the HapMap data is attributable to LD between adjacent pairs of markers across the whole region. LD between adjacent pairs of markers appears to be more significant in high multilocus LD regions than in low multilocus LD regions. We also find that as the multilocus total LD increases, the effects of high-order LD tend to get weaker due to the lack of observed multilocus haplotypes. The overall estimates of first-, second-, third-, and fourth-order LD across the ENCODE regions are 64%, 23%, 9%, and 3%, respectively.
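A toy sketch of the core computation, window log-likelihoods under Markov chains of different orders, whose difference measures total LD; the plug-in count estimator and the random "haplotypes" below are simplifications of the paper's non-homogeneous MC framework:

```python
import numpy as np

def mc_loglik(hap, order):
    """Log-likelihood of 0/1 haplotypes in a window under a Markov chain of
    the given order, with conditional probabilities estimated by plug-in
    counts at each position. `hap` is (n_haplotypes, n_markers)."""
    n, m = hap.shape
    ll = 0.0
    for t in range(m):
        k = min(order, t)
        # Encode the k-marker history as an integer context per haplotype.
        ctx = np.zeros(n, dtype=int)
        for j in range(k):
            ctx = ctx * 2 + hap[:, t - k + j]
        for c in np.unique(ctx):
            sel = hap[ctx == c, t]
            for allele in (0, 1):
                cnt = (sel == allele).sum()
                if cnt:
                    ll += cnt * np.log(cnt / sel.size)
    return ll

rng = np.random.default_rng(2)
window = rng.integers(0, 2, size=(200, 5))   # toy phased haplotypes
# Total LD measure for the window: highest-order model versus MC0.
print(mc_loglik(window, 4) - mc_loglik(window, 0))
```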
NASA Astrophysics Data System (ADS)
Brizzi, S.; Sandri, L.; Funiciello, F.; Corbi, F.; Piromallo, C.; Heuret, A.
2018-03-01
The observed maximum magnitude of subduction megathrust earthquakes is highly variable worldwide. One key question is which conditions, if any, favor the occurrence of giant earthquakes (Mw ≥ 8.5). Here we carry out a multivariate statistical study in order to investigate the factors affecting the maximum magnitude of subduction megathrust earthquakes. We find that the trench-parallel extent of subduction zones and the thickness of trench sediments provide the largest discriminating capability between subduction zones that have experienced giant earthquakes and those having significantly lower maximum magnitude. Monte Carlo simulations show that the observed spatial distribution of giant earthquakes cannot be explained by pure chance to a statistically significant level. We suggest that the combination of a long subduction zone with thick trench sediments likely promotes the great lateral rupture propagation characteristic of almost all giant earthquakes.
Mickenautsch, Steffen; Yengopal, Veerasamy
2015-01-01
Purpose: Traditionally, reviews of the dental literature claim that resin composite restorations are superior to glass-ionomer fillings in terms of restoration failures in posterior permanent teeth. The aim of this systematic review is to answer the clinical question whether conventional high-viscosity glass-ionomer restorations, in patients with single- and/or multi-surface cavities in posterior permanent teeth, indeed have a higher failure rate than direct hybrid resin composite restorations. Methods: Eight databases were searched up to December 02, 2013. Trials were assessed for bias risk, between-dataset heterogeneity, and statistical sample size power. Effect sizes were computed and statistically compared. A total of 55 citations were identified through the systematic literature search; of these, 46 were excluded. No trials directly comparing high-viscosity glass-ionomer with resin composite restorations head-to-head were found. Three trials of high-viscosity glass-ionomer versus amalgam and three trials of resin composite versus amalgam restorations could be included, for adjusted indirect comparison only. Results: The available evidence suggests no difference beyond the play of chance in the failure rates of the two restoration types, but it is limited by the lack of head-to-head comparisons and an insufficient number of trials, as well as by high risk of bias and between-dataset heterogeneity. The current clinical evidence is too poor to justify superiority claims regarding the failure rates of either restoration type. Sufficiently large, parallel-group, randomised controlled trials with high internal validity are needed in order to justify any clinically meaningful judgment on this topic. PMID:26962372
Comparing international crash statistics
DOT National Transportation Integrated Search
1999-12-01
In order to examine national developments in traffic safety, crash statistics from several of the more ... United States. Data obtained from the Fatality Analysis Reporting System (FARS) and the Internati...
Regional Monitoring of Cervical Cancer.
Crisan-Vida, Mihaela; Lupse, Oana Sorina; Stoicu-Tivadar, Lacramioara; Salvari, Daniela; Catanet, Radu; Bernad, Elena
2017-01-01
Cervical cancer is one of the most important causes of death in women of fertile age in Romania. In order to discover high-risk situations in the first stages of the disease it is important to enhance prevention actions, and ICT, specifically cloud computing and Big Data, currently supports such activities. The national screening program uses an information system that, based on data from different medical units, gives feedback related to the women's healthcare status and provides statistics and reports. In order to ensure continuity of care, it has been updated with HL7 CDA support and cloud computing. The current paper presents the solution and several results.
Lee, L.; Helsel, D.
2005-01-01
Trace contaminants in water, including metals and organics, often are measured at sufficiently low concentrations to be reported only as values below the instrument detection limit. Interpretation of these "less thans" is complicated when multiple detection limits occur. Statistical methods for multiply censored, or multiple-detection limit, datasets have been developed for medical and industrial statistics, and can be employed to estimate summary statistics or model the distributions of trace-level environmental data. We describe S-language-based software tools that perform robust linear regression on order statistics (ROS). The ROS method has been evaluated as one of the most reliable procedures for developing summary statistics of multiply censored data. It is applicable to any dataset that has 0 to 80% of its values censored. These tools are a part of a software library, or add-on package, for the R environment for statistical computing. This library can be used to generate ROS models and associated summary statistics, plot modeled distributions, and predict exceedance probabilities of water-quality standards. © 2005 Elsevier Ltd. All rights reserved.
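A simplified ROS sketch for a single detection limit (the R implementation handles multiple limits with Helsel-Cohn plotting positions); the values and the limit below are hypothetical:

```python
import numpy as np
from scipy import stats

# Detected values (sorted ascending) plus four "<0.5" nondetects.
detected = np.array([0.8, 1.1, 1.9, 2.4, 3.6, 5.0])
n_censored = 4
n = len(detected) + n_censored

# Weibull plotting positions for the full ordered sample; with a single
# detection limit the censored values occupy the lowest ranks.
pp = np.arange(1, n + 1) / (n + 1.0)
q = stats.norm.ppf(pp)

# Regress log(detected) on the normal quantiles of the detected ranks...
slope, intercept, *_ = stats.linregress(q[n_censored:], np.log(detected))
# ...then impute the censored observations from the fitted line.
imputed = np.exp(intercept + slope * q[:n_censored])

full = np.concatenate([imputed, detected])
print(f"ROS mean = {full.mean():.3f}, std = {full.std(ddof=1):.3f}")
```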
Prototyping with Data Dictionaries for Requirements Analysis.
1985-03-01
statistical packages and software for screen layout. These items work at a higher level than another category of prototyping tool, program generators... Program generators are software packages which, when given specifications, produce source listings, usually in a high order language such as COBOL...with users, and this will not happen if he must stop to develop a detailed program. [Ref. 241] Hardware as well as software should be considered in
Nondestructive ultrasonic testing of materials
Hildebrand, Bernard P.
1994-01-01
Reflection wave forms obtained from aged and unaged material samples can be compared in order to indicate trends toward age-related flaws. Statistical comparison of a large number of data points from such wave forms can indicate changes in the microstructure of the material due to aging. The process is useful for predicting when flaws may occur in structural elements of high risk structures such as nuclear power plants, airplanes, and bridges.
Routine Discovery of Complex Genetic Models using Genetic Algorithms
Moore, Jason H.; Hahn, Lance W.; Ritchie, Marylyn D.; Thornton, Tricia A.; White, Bill C.
2010-01-01
Simulation studies are useful in various disciplines for a number of reasons including the development and evaluation of new computational and statistical methods. This is particularly true in human genetics and genetic epidemiology where new analytical methods are needed for the detection and characterization of disease susceptibility genes whose effects are complex, nonlinear, and partially or solely dependent on the effects of other genes (i.e. epistasis or gene-gene interaction). Despite this need, the development of complex genetic models that can be used to simulate data is not always intuitive. In fact, only a few such models have been published. We have previously developed a genetic algorithm approach to discovering complex genetic models in which two single nucleotide polymorphisms (SNPs) influence disease risk solely through nonlinear interactions. In this paper, we extend this approach for the discovery of high-order epistasis models involving three to five SNPs. We demonstrate that the genetic algorithm is capable of routinely discovering interesting high-order epistasis models in which each SNP influences risk of disease only through interactions with the other SNPs in the model. This study opens the door for routine simulation of complex gene-gene interactions among SNPs for the development and evaluation of new statistical and computational approaches for identifying common, complex multifactorial disease susceptibility genes. PMID:20948983
Carriot, Jérome; Jamali, Mohsen; Cullen, Kathleen E.
2017-01-01
There is accumulating evidence that the brain’s neural coding strategies are constrained by natural stimulus statistics. Here we investigated the statistics of the time varying envelope (i.e. a second-order stimulus attribute that is related to variance) of rotational and translational self-motion signals experienced by human subjects during everyday activities. We found that envelopes can reach large values across all six motion dimensions (~450 deg/s for rotations and ~4 G for translations). Unlike results obtained in other sensory modalities, the spectral power of envelope signals decreased slowly for low (< 2 Hz) and more sharply for high (>2 Hz) temporal frequencies and thus was not well-fit by a power law. We next compared the spectral properties of envelope signals resulting from active and passive self-motion, as well as those resulting from signals obtained when the subject is absent (i.e. external stimuli). Our data suggest that different mechanisms underlie deviation from scale invariance in rotational and translational self-motion envelopes. Specifically, active self-motion and filtering by the human body cause deviation from scale invariance primarily for translational and rotational envelope signals, respectively. Finally, we used well-established models in order to predict the responses of peripheral vestibular afferents to natural envelope stimuli. We found that irregular afferents responded more strongly to envelopes than their regular counterparts. Our findings have important consequences for understanding the coding strategies used by the vestibular system to process natural second-order self-motion signals. PMID:28575032
Lin, Karl K; Rahman, Mohammad A
2018-05-21
Interest has been expressed in using a joint test procedure that requires the results of both a trend test and a pairwise comparison test between the control and high-dose groups to be statistically significant simultaneously, at the levels of significance recommended for the separate tests in the FDA 2001 draft guidance for industry, in order for the drug effect on the development of an individual tumor type to be considered statistically significant. Results of our simulation studies show a serious consequence of using this joint test procedure in the final interpretation of the carcinogenicity potential of a new drug: if the levels of significance recommended for the separate tests are used, the false negative rate is greatly inflated through a large decrease in the false positive rate. The inflation can be as high as 204.5% of the false negative rate obtained when the trend test alone is required to be statistically significant. To correct the problem, new sets of levels of significance have been developed for those who want to use the joint test in reviews of carcinogenicity studies.
Liu, Ruiming; Liu, Erqi; Yang, Jie; Zeng, Yong; Wang, Fanglin; Cao, Yuan
2007-11-01
Fukunaga-Koontz transform (FKT), stemming from principal component analysis (PCA), is used in many pattern recognition and image-processing fields. It cannot capture the higher-order statistical properties of natural images, so its detection performance is not satisfactory. PCA has been extended into kernel PCA in order to capture higher-order statistics. However, thus far no researchers have definitively proposed a kernel FKT (KFKT) or studied its detection performance. For accurately detecting potential small targets in infrared images, we first extend FKT into KFKT to capture the higher-order statistical properties of images. Then a framework based on Kalman prediction and KFKT, which can automatically detect and track small targets, is developed. Experimental results show that KFKT outperforms FKT and that the proposed framework is capable of automatically detecting and tracking infrared point targets.
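A sketch of the linear FKT that KFKT generalizes (the kernel version replaces these inner products with kernel evaluations, which is not shown here); the data are random stand-ins:

```python
import numpy as np

def fkt_basis(x1, x2):
    """Linear Fukunaga-Koontz transform: a shared eigenbasis in which the
    two classes' second-moment matrices have eigenvalues summing to one.
    x1, x2 are (n_samples, n_features) for target / background classes."""
    r1 = x1.T @ x1 / len(x1)
    r2 = x2.T @ x2 / len(x2)
    lam, phi = np.linalg.eigh(r1 + r2)
    keep = lam > 1e-10                      # guard against null directions
    w = phi[:, keep] / np.sqrt(lam[keep])   # whitening of r1 + r2
    s1 = w.T @ r1 @ w
    mu, v = np.linalg.eigh(s1)              # eigenvalues near 1: class-1 energy
    return w @ v, mu

rng = np.random.default_rng(3)
targets = rng.normal(size=(100, 8)) * np.linspace(2, 0.5, 8)
background = rng.normal(size=(500, 8))
basis, mu = fkt_basis(targets, background)
print(mu)  # mu ~ 1 directions capture targets, mu ~ 0 capture background
```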
Empirical performance of interpolation techniques in risk-neutral density (RND) estimation
NASA Astrophysics Data System (ADS)
Bahaludin, H.; Abdullah, M. H.
2017-03-01
The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. Firstly, the empirical performance is evaluated using statistical analysis based on the implied mean and the implied variance of the RND. Secondly, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection purposes. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial and smoothing spline. The results for the LOOCV pricing error show that interpolation using a fourth-order polynomial provides the best fit to option prices, with the lowest error.
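A minimal sketch of LOOCV-based selection between the two polynomial orders; for brevity the error is computed on a toy implied-volatility smile rather than on repriced options, which is what the study actually measures:

```python
import numpy as np

# Hypothetical strikes and implied-volatility smile.
strikes = np.array([90., 95., 100., 105., 110., 115., 120.])
vols = np.array([.24, .21, .19, .18, .185, .20, .22])

def loocv_error(degree):
    """RMS leave-one-out error of a polynomial fit of the given degree."""
    errs = []
    for i in range(len(strikes)):
        mask = np.arange(len(strikes)) != i
        coeffs = np.polyfit(strikes[mask], vols[mask], degree)
        errs.append(np.polyval(coeffs, strikes[i]) - vols[i])
    return np.sqrt(np.mean(np.square(errs)))

for degree in (2, 4):  # second- vs fourth-order polynomial, as in the study
    print(degree, loocv_error(degree))
```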
Optical diagnosis of cervical cancer by higher order spectra and boosting
NASA Astrophysics Data System (ADS)
Pratiher, Sawon; Mukhopadhyay, Sabyasachi; Barman, Ritwik; Pratiher, Souvik; Pradhan, Asima; Ghosh, Nirmalya; Panigrahi, Prasanta K.
2017-03-01
In this contribution, we report the application of higher order statistical moments with decision-tree and ensemble-based learning methodologies to the development of diagnostic algorithms for optical diagnosis of cancer. The classification results were compared to those obtained with independent feature extractors such as linear discriminant analysis (LDA). The methodology using higher order statistics as classifier features with boosting achieves higher specificity and sensitivity while being much faster than other time-frequency domain based methods.
On the Stability of Jump-Linear Systems Driven by Finite-State Machines with Markovian Inputs
NASA Technical Reports Server (NTRS)
Patilkulkarni, Sudarshan; Herencia-Zapana, Heber; Gray, W. Steven; Gonzalez, Oscar R.
2004-01-01
This paper presents two mean-square stability tests for a jump-linear system driven by a finite-state machine with a first-order Markovian input process. The first test is based on conventional Markov jump-linear theory and avoids the use of any higher-order statistics. The second test is developed directly using the higher-order statistics of the machine's output process. The two approaches are illustrated with a simple model for a recoverable computer control system.
Test order in teacher-rated behavior assessments: Is counterbalancing necessary?
Kooken, Janice; Welsh, Megan E; McCoach, D Betsy; Miller, Faith G; Chafouleas, Sandra M; Riley-Tillman, T Chris; Fabiano, Gregory
2017-01-01
Counterbalancing treatment order in experimental research design is well established as an option to reduce threats to internal validity, but in educational and psychological research, the effect of varying the order of multiple tests to a single rater has not been examined and is rarely adhered to in practice. The current study examines the effect of test order on measures of student behavior by teachers as raters utilizing data from a behavior measure validation study. Using multilevel modeling to control for students nested within teachers, the effect of rating an earlier measure on the intercept or slope of a later behavior assessment was statistically significant in 22% of predictor main effects for the spring test period. Test order effects had potential for high stakes consequences with differences large enough to change risk classification. Results suggest that researchers and practitioners in classroom settings using multiple measures evaluate the potential impact of test order. Where possible, they should counterbalance when the risk of an order effect exists and report justification for the decision to not counterbalance. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
How daylight influences high-order chromatic descriptors in natural images.
Ojeda, Juan; Nieves, Juan Luis; Romero, Javier
2017-07-01
Despite the global and local daylight changes naturally occurring in natural scenes, the human visual system usually adapts quite well to those changes, developing a stable color perception. Nevertheless, the influence of daylight in modeling natural image statistics is not fully understood and has received little attention. The aim of this work was to analyze the influence of daylight changes on different high-order chromatic descriptors (i.e., color volume, color gamut, and number of discernible colors) derived from 350 color images, which were rendered under 108 natural illuminants with Correlated Color Temperatures (CCT) from 2735 to 25,889 K. Results suggest that chromatic and luminance information is almost constant and does not depend on the CCT of the illuminant for values above 14,000 K. Nevertheless, differences between the red-green and blue-yellow image components were found below that CCT, with most of the statistical descriptors analyzed showing local extremes in the range 2950-6300 K. Uniform regions and areas of the images attracting observers' attention were also considered in this analysis and were characterized by their patchiness index and their saliency maps. While the patchiness index does not show a clear dependence on CCT, it is remarkable that a significant reduction in the number of discernible colors (58% on average) was found when the images were masked with their corresponding saliency maps. Our results suggest that chromatic diversity, as defined in terms of discernible colors, can be strongly reduced when an observer scans a natural scene. These findings support the idea that a reduction in the number of discernible colors will guide visual saliency and attention. Whatever model mediates the neural representation of natural images, it is clear that natural image statistics should take into account those local maxima and minima depending on the daylight illumination, as well as the reduction in the number of discernible colors when salient regions are considered.
A New Statistic for Evaluating Item Response Theory Models for Ordinal Data. CRESST Report 839
ERIC Educational Resources Information Center
Cai, Li; Monroe, Scott
2014-01-01
We propose a new limited-information goodness of fit test statistic C[subscript 2] for ordinal IRT models. The construction of the new statistic lies formally between the M[subscript 2] statistic of Maydeu-Olivares and Joe (2006), which utilizes first and second order marginal probabilities, and the M*[subscript 2] statistic of Cai and Hansen…
NASA Astrophysics Data System (ADS)
Raghupathy, Arun; Ghia, Karman; Ghia, Urmila
2008-11-01
Compact Thermal Models (CTMs) to represent IC packages have traditionally been developed using the DELPHI-based (DEvelopment of Libraries of PHysical models for an Integrated design) methodology. The drawbacks of this method are presented, and an alternative method is proposed. A reduced-order model that provides the complete thermal information accurately with fewer computational resources can be used effectively in system-level simulations. Proper Orthogonal Decomposition (POD), a statistical method, can be used to reduce the order of the degrees of freedom or variables of the computations for such a problem. POD along with Galerkin projection allows us to create reduced-order models that reproduce the characteristics of the system with a considerable reduction in computational resources while maintaining a high level of accuracy. The goal of this work is to show that this method can be applied to obtain a boundary-condition-independent reduced-order thermal model for complex components. The methodology is applied to the 1D transient heat equation.
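A minimal POD/Galerkin sketch on a hypothetical discretized 1D heat equation, the same pipeline in miniature: collect snapshots, extract modes by SVD, project the operator, and integrate the reduced system:

```python
import numpy as np

# Hypothetical full model dT/dt = A T + b (1D diffusion with a heat input).
n = 200
off = np.ones(n - 1)
A = (np.diag(-2.0 * np.ones(n)) + np.diag(off, 1) + np.diag(off, -1)) * 50.0
b = np.zeros(n); b[0] = 50.0

# Collect snapshots of the full model with explicit Euler.
T = np.zeros(n); dt = 1e-4
snaps = []
for _ in range(2000):
    T = T + dt * (A @ T + b)
    snaps.append(T.copy())
X = np.array(snaps).T                 # snapshot matrix (n, n_snapshots)

# POD modes are the left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(X, full_matrices=False)
r = 5                                 # keep a handful of modes
Ur = U[:, :r]

# Galerkin projection: reduced operators act on r coefficients only.
Ar, br = Ur.T @ A @ Ur, Ur.T @ b
a = np.zeros(r)
for _ in range(2000):
    a = a + dt * (Ar @ a + br)
print(np.max(np.abs(Ur @ a - T)))     # reduced vs full final state
```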
Hydrometeorological and statistical analyses of heavy rainfall in Midwestern USA
NASA Astrophysics Data System (ADS)
Thorndahl, S.; Smith, J. A.; Krajewski, W. F.
2012-04-01
During the last two decades the midwestern states of the United States of America have been heavily afflicted by flood-producing rainfall. Several of these storms seem to have similar hydrometeorological properties in terms of pattern, track, evolution, life cycle, clustering, etc., which raises the question of whether it is possible to derive general characteristics of the space-time structures of these heavy storms. This is important in order to understand hydrometeorological features, e.g. how storms evolve and with what frequency we can expect extreme storms to occur. In the literature, most studies of extreme rainfall are based on point measurements (rain gauges). However, with high-resolution, high-quality radar observation periods now exceeding two decades, it is possible to carry out long-term spatio-temporal statistical analyses of extremes. This makes it possible to link return periods to distributed rainfall estimates and to study the precipitation structures which cause floods. However, statistical frequency analyses of rainfall based on radar observations introduce challenges, in converting radar reflectivity observations to "true" rainfall, that do not arise in traditional analyses of rain gauge data. It is, for example, difficult to distinguish reflectivity of high-intensity rain from reflectivity of other hydrometeors such as hail, especially with the single-polarization radars used in this study. Furthermore, reflectivity from the bright band (melting layer) should be discarded and anomalous propagation should be corrected in order to produce valid statistics of extreme radar rainfall. Other challenges include combining observations from several radars into one mosaic, bias correction against rain gauges, range correction, Z-R relationships, etc. The present study analyzes radar rainfall observations from 1996 to 2011 based on the American NEXRAD network of radars over an area covering parts of Iowa, Wisconsin, Illinois, and Lake Michigan. The radar observations are processed using Hydro-NEXRAD algorithms in order to produce rainfall estimates with a spatial resolution of 1 km and a temporal resolution of 15 min. The rainfall estimates are bias-corrected on a daily basis using a network of rain gauges. Besides a thorough evaluation of the different challenges in investigating heavy rain described above, the study includes suggestions for frequency analysis methods as well as studies of the hydrometeorological features of single events.
Uncertainty Analysis and Order-by-Order Optimization of Chiral Nuclear Interactions
Carlsson, Boris; Forssen, Christian; Fahlin Strömberg, D.; ...
2016-02-24
Chiral effective field theory (χEFT) provides a systematic approach to describe low-energy nuclear forces. Moreover, χEFT is able to provide well-founded estimates of statistical and systematic uncertainties, although this unique advantage has not yet been fully exploited. We fill this gap by performing an optimization and statistical analysis of all the low-energy constants (LECs) up to next-to-next-to-leading order. Our optimization protocol corresponds to a simultaneous fit to scattering and bound-state observables in the pion-nucleon, nucleon-nucleon, and few-nucleon sectors, thereby utilizing the full model capabilities of χEFT. Finally, we study the effect on other observables by demonstrating forward-error-propagation methods that can easily be adopted by future works. We employ mathematical optimization and implement automatic differentiation to attain efficient and machine-precise first- and second-order derivatives of the objective function with respect to the LECs. This is also vital for the regression analysis. We use power-counting arguments to estimate the systematic uncertainty that is inherent to χEFT and we construct chiral interactions at different orders with quantified uncertainties. Statistical error propagation is compared with Monte Carlo sampling, showing that statistical errors are in general small compared to systematic ones. In conclusion, we find that a simultaneous fit to different sets of data is critical to (i) identify the optimal set of LECs, (ii) capture all relevant correlations, (iii) reduce the statistical uncertainty, and (iv) attain order-by-order convergence in χEFT. Furthermore, certain systematic uncertainties in the few-nucleon sector are shown to get substantially magnified in the many-body sector, in particular when varying the cutoff in the chiral potentials. The methodology and results presented in this paper open a new frontier for uncertainty quantification in ab initio nuclear theory.
Robust functional statistics applied to Probability Density Function shape screening of sEMG data.
Boudaoud, S; Rix, H; Al Harrach, M; Marin, F
2014-01-01
Recent studies pointed out possible shape modifications of the Probability Density Function (PDF) of surface electromyographic (sEMG) data in several contexts such as fatigue and muscle force increase. Following this idea, criteria have been proposed to monitor these shape modifications, mainly using High Order Statistics (HOS) parameters like skewness and kurtosis. In experimental conditions, these parameters are confronted with small sample sizes in the estimation process, which induce errors in the estimated HOS parameters and restrain real-time, precise sEMG PDF shape monitoring. Recently, a functional formalism, the Core Shape Model (CSM), has been used to analyse shape modifications of PDF curves. In this work, taking inspiration from the CSM method, robust functional statistics are proposed to emulate both skewness and kurtosis behaviors. These functional statistics combine kernel density estimation and PDF shape distances to evaluate shape modifications even in the presence of small sample sizes. The proposed statistics are then tested, using Monte Carlo simulations, on both normal and log-normal PDFs that mimic the observed sEMG PDF shape behavior during muscle contraction. According to the obtained results, the functional statistics seem to be more robust than HOS parameters to small-sample effects and more accurate in sEMG PDF shape screening applications.
NASA Astrophysics Data System (ADS)
Sanatkhani, Soroosh; Menon, Prahlad G.
2018-03-01
Left atrial appendage (LAA) is the source of 91% of the thrombi in patients with atrial arrhythmias (~2.3 million US adults), turning this region into a potential threat for stroke. LAA geometries have been clinically categorized into four appearance groups, viz. Cauliflower, Cactus, Chicken-Wing and WindSock, based on visual appearance in 3D volume visualizations of contrast-enhanced computed tomography (CT) imaging, and have further been correlated with stroke risk by considering clinical mortality statistics. However, such classification from visual appearance is limited by human subjectivity and is not sophisticated enough to address all the characteristics of the geometries. Quantification of LAA geometry metrics can give a more repeatable and reliable estimate of the characteristics of the LAA which correspond with stasis risk, and in turn cardioembolic risk. We present an approach to quantify the appearance of the LAA in patients with atrial fibrillation (AF) using a weighted set of baseline eigen-modes of LAA appearance variation, as a means to objectify classification of patient-specific LAAs into the four accepted clinical appearance groups. Clinical images of 16 patients with AF (4 per LAA appearance category) were identified and visualized as volume images. All the volume images were rigidly reoriented in order to be spatially co-registered, normalized in terms of intensity, resampled, and finally reshaped appropriately to carry out principal component analysis (PCA), in order to parametrize the LAA region's appearance based on principal components (PCs/eigen-modes) of greyscale appearance, generating 16 eigen-modes of appearance variation. Our pilot studies show that the most dominant LAA appearance (i.e. the one reconstructable using the fewest eigen-modes) resembles the Chicken-Wing class, which is known to have the lowest stroke risk per clinical mortality statistics. Our findings indicate the possibility that LAA geometries with high risk of stroke are higher-order statistical variants of underlying lower-risk shapes.
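A sketch of the eigen-mode pipeline, PCA of flattened co-registered volumes followed by nearest-centroid classification in mode-weight space; random arrays stand in for the 16 CT volumes:

```python
import numpy as np

# Random stand-ins for 16 co-registered, intensity-normalized volumes,
# flattened to rows; 4 cases per appearance class.
rng = np.random.default_rng(5)
volumes = rng.normal(size=(16, 32 * 32 * 16))     # (cases, voxels)
labels = np.repeat(np.arange(4), 4)

mean_shape = volumes.mean(axis=0)
X = volumes - mean_shape
U, s, Vt = np.linalg.svd(X, full_matrices=False)  # rows of Vt: eigen-modes
weights = X @ Vt.T                                # per-case mode weights

# Classify a new case by the nearest class centroid in eigen-mode space.
new_case = rng.normal(size=volumes.shape[1])
w = (new_case - mean_shape) @ Vt.T
centroids = np.array([weights[labels == c].mean(axis=0) for c in range(4)])
print(int(np.argmin(np.linalg.norm(centroids - w, axis=1))))
```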
NASA Astrophysics Data System (ADS)
Skitka, J.; Marston, B.; Fox-Kemper, B.
2016-02-01
Sub-grid turbulence models for planetary boundary layers are typically constructed additively, starting with local flow properties and including non-local (KPP) or higher-order (Mellor-Yamada) parameters until a desired level of predictive capacity is achieved or a manageable threshold of complexity is surpassed. Such approaches are necessarily limited in general circumstances, like global circulation models, by being optimized for particular flow phenomena. By building a model reductively, starting with the infinite hierarchy of turbulence statistics, truncating at a given order, and stripping degrees of freedom from the flow, we offer the prospect of a turbulence model and investigative tool that is equally applicable to all flow types and able to take full advantage of the wealth of nonlocal information in any flow. Direct statistical simulation (DSS) based upon expansion in equal-time cumulants can be used to compute flow statistics of arbitrary order. We investigate the feasibility of a second-order closure (CE2) by performing simulations of the ocean boundary layer in a quasi-linear approximation for which CE2 is exact. As oceanographic examples, wind-driven Langmuir turbulence and thermal convection are studied by comparison of the quasi-linear and fully nonlinear statistics. We also characterize the computational advantages and physical uncertainties of CE2 defined on a reduced basis determined via proper orthogonal decomposition (POD) of the flow fields.
NASA Astrophysics Data System (ADS)
Mattonen, Sarah A.; Palma, David A.; Haasbeek, Cornelis J. A.; Senan, Suresh; Ward, Aaron D.
2014-03-01
Benign radiation-induced lung injury is a common finding following stereotactic ablative radiotherapy (SABR) for lung cancer, and is often difficult to differentiate from a recurring tumour due to the ablative doses and highly conformal treatment with SABR. Current approaches to treatment response assessment have shown limited ability to predict recurrence within 6 months of treatment. The purpose of our study was to evaluate the accuracy of second-order texture statistics for prediction of eventual recurrence based on computed tomography (CT) images acquired within 6 months of treatment, and to compare with the performance of first-order appearance and lesion size measures. Consolidative and ground-glass opacity (GGO) regions were manually delineated on post-SABR CT images. Automatic consolidation expansion was also investigated to act as a surrogate for GGO position. The top features for prediction of recurrence were all texture features within the GGO and included energy, entropy, correlation, inertia, and first-order texture (standard deviation of density). These predicted recurrence with 2-fold cross-validation (CV) accuracies of 70-77% at 2-5 months post-SABR, with energy, entropy, and first-order texture having leave-one-out CV accuracies greater than 80%. Our results also suggest that automatic expansion of the consolidation region could eliminate the need for manual delineation, and produced reproducible results when compared to manually delineated GGO. If validated on a larger data set, this could lead to a clinically useful computer-aided diagnosis system for prediction of recurrence within 6 months of SABR and allow early salvage therapy for patients with recurrence.
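A sketch of the second-order (GLCM) texture features named above, using scikit-image; the patch is a random stand-in for a delineated GGO region, and entropy is computed by hand since graycoprops does not provide it:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Random stand-in for a delineated 2-D CT patch, quantized to 64 grey levels.
rng = np.random.default_rng(6)
patch = rng.integers(0, 64, size=(48, 48)).astype(np.uint8)

glcm = graycomatrix(patch, distances=[1], angles=[0],
                    levels=64, symmetric=True, normed=True)
energy = graycoprops(glcm, "energy")[0, 0]
correlation = graycoprops(glcm, "correlation")[0, 0]
inertia = graycoprops(glcm, "contrast")[0, 0]     # inertia == GLCM contrast
p = glcm[:, :, 0, 0]
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))   # computed by hand
first_order_sd = patch.std()                      # first-order texture
print(energy, entropy, correlation, inertia, first_order_sd)
```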
Ebqa'ai, Mohammad; Ibrahim, Bashar
2017-12-01
This study analyses heavy metal pollutants in Jeddah, the second largest city in the Gulf Cooperation Council, with a population exceeding 3.5 million and a large number of vehicles. Ninety-eight street dust samples were collected seasonally from six major roads as well as Jeddah Beach, and subsequently digested using a modified Leeds Public Analyst method. The heavy metals (Fe, Zn, Mn, Cu, Cd, and Pb) were extracted from the ash using methyl isobutyl ketone as the extraction solvent and eventually analysed by atomic absorption spectroscopy. Multivariate statistical techniques, principal component analysis (PCA), and hierarchical cluster analysis were applied to these data. Heavy metal concentrations were ranked in the following descending order: Fe > Zn > Mn > Cu > Pb > Cd. In order to study the pollution and health risk from these heavy metals, as well as to estimate their effect on the environment, pollution indices, the integrated pollution index, enrichment factors, the average daily dose, hazard quotients, and the hazard index were all analysed. The PCA showed high levels of Zn, Fe, and Cd on Al Kurnish road, while these elements were consistently detected on King Abdulaziz and Al Madina roads. The study indicates that high levels of Zn and Pb pollution were recorded for major roads in Jeddah; six out of seven roads had high pollution indices. This study is the first step towards further investigations into current health problems in Jeddah, such as anaemia and asthma.
HYPOTHESIS SETTING AND ORDER STATISTIC FOR ROBUST GENOMIC META-ANALYSIS.
Song, Chi; Tseng, George C
2014-01-01
Meta-analysis techniques have been widely developed and applied in genomic applications, especially for combining multiple transcriptomic studies. In this paper, we propose an order statistic of p-values (the rth ordered p-value, rOP) across combined studies as the test statistic. We illustrate different hypothesis settings that detect gene markers differentially expressed (DE) "in all studies", "in the majority of studies", or "in one or more studies", and specify rOP as a suitable method for detecting DE genes "in the majority of studies". We develop methods to estimate the parameter r in rOP for real applications. Statistical properties such as its asymptotic behavior and a one-sided testing correction for detecting markers of concordant expression changes are explored. Power calculations and simulation show better performance of rOP compared to the classical Fisher's method, Stouffer's method, minimum p-value method, and maximum p-value method under the focused hypothesis setting. Theoretically, rOP is found to be connected to the naïve vote counting method and can be viewed as a generalized form of vote counting with better statistical properties. The method is applied to three microarray meta-analysis examples including major depressive disorder, brain cancer, and diabetes. The results demonstrate that rOP is a more generalizable, robust, and sensitive statistical framework for detecting disease-related markers.
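A minimal rOP sketch: under the null, the rth of n i.i.d. uniform p-values follows Beta(r, n-r+1), which yields a combined p-value per gene; the data here are simulated:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_genes, n_studies, r = 1000, 8, 5          # r ~ a majority of 8 studies
pvals = rng.uniform(size=(n_genes, n_studies))
pvals[:50] = rng.uniform(0, 0.01, size=(50, n_studies))  # 50 true DE genes

rop = np.sort(pvals, axis=1)[:, r - 1]               # rth ordered p-value
meta_p = stats.beta.cdf(rop, r, n_studies - r + 1)   # combined p per gene
print((meta_p < 0.05 / n_genes).sum())               # Bonferroni-flagged genes
```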
Statistical Entropy of the G-H-S Black Hole to All Orders in Planck Length
NASA Astrophysics Data System (ADS)
Sun, Hangbin; He, Feng; Huang, Hai
2012-02-01
Considering corrections to all orders in the Planck length to the quantum state density from the generalized uncertainty principle, we calculate the statistical entropy of the scalar field near the horizon of the Garfinkle-Horowitz-Strominger (G-H-S) black hole without any artificial cutoff. It is shown that the entropy is proportional to the horizon area.
Antweiler, Ronald C.; Taylor, Howard E.
2008-01-01
The main classes of statistical treatment of below-detection limit (left-censored) environmental data for the determination of basic statistics that have been used in the literature are substitution methods, maximum likelihood, regression on order statistics (ROS), and nonparametric techniques. These treatments, along with using all instrument-generated data (even those below detection), were evaluated by examining data sets in which the true values of the censored data were known. It was found that for data sets with less than 70% censored data, the best technique overall for determination of summary statistics was the nonparametric Kaplan-Meier technique. ROS and the two substitution methods of assigning one-half the detection limit value to censored data or assigning a random number between zero and the detection limit to censored data were adequate alternatives. The use of these two substitution methods, however, requires a thorough understanding of how the laboratory censored the data. The technique of employing all instrument-generated data - including numbers below the detection limit - was found to be less adequate than the above techniques. At high degrees of censoring (greater than 70% censored data), no technique provided good estimates of summary statistics. Maximum likelihood techniques were found to be far inferior to all other treatments except substituting zero or the detection limit value to censored data.
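A sketch of the Kaplan-Meier treatment of left-censored concentrations via the standard flipping trick, where nondetects become right-censored survival times; the hand-rolled product-limit estimator and the data are illustrative only:

```python
import numpy as np

# Hypothetical data: False in `detected` marks a value reported as "<DL",
# in which case the entry holds the detection limit itself.
values = np.array([0.2, 0.2, 0.5, 0.7, 1.1, 1.8, 2.4, 3.0])
detected = np.array([False, False, False, True, True, True, True, True])

flip = values.max() + 1.0
t = flip - values                       # flipped "survival times"
order = np.argsort(t)
t, ev = t[order], detected[order]       # an event = a detected observation

at_risk = len(t)
surv, prev_s, km_mean = 1.0, 1.0, 0.0
for ti, is_event in zip(t, ev):
    if is_event:
        surv *= 1.0 - 1.0 / at_risk               # product-limit step
        km_mean += (prev_s - surv) * (flip - ti)  # prob. mass * concentration
        prev_s = surv
    at_risk -= 1
# Probability mass remaining after the last event lies below the lowest
# detected value; treating it as zero makes this mean estimate conservative.
print(f"Kaplan-Meier mean estimate: {km_mean:.3f}")
```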
Effect of inlet conditions on the turbulent statistics in a buoyant jet
NASA Astrophysics Data System (ADS)
Kumar, Rajesh; Dewan, Anupam
2015-11-01
Buoyant jets have been the subject of research due to their technological and environmental importance in many physical processes, such as the spread of smoke and toxic gases from fires and the release of gases from volcanic eruptions and industrial stacks. The flow near the source is initially laminar and quickly transitions to turbulence. We present a large eddy simulation of a buoyant jet. In the present study a careful investigation has been carried out of the influence of inlet conditions at the source on the turbulent statistics far from the source. It has been observed that the influence of the initial conditions on the second-order buoyancy terms extends further in the axial direction from the source than their influence on the time-averaged flow and second-order velocity statistics. We have studied the evolution of vortical structures in the buoyant jet. It has been shown that the generation of helical vortex rings in the vicinity of the source around a laminar core could be the reason for the larger influence of the inlet conditions on the second-order buoyancy terms as compared to the second-order velocity statistics.
Establishing Consensus Turbulence Statistics for Hot Subsonic Jets
NASA Technical Reports Server (NTRS)
Bridges, James; Werner, Mark P.
2010-01-01
Many tasks in fluids engineering require knowledge of the turbulence in jets. There is a strong, although fragmented, literature base for low-order statistics, such as jet spread and other mean-velocity field characteristics. Some sources, particularly for low-speed cold jets, also provide the turbulence intensities that are required for validating Reynolds-averaged Navier-Stokes (RANS) Computational Fluid Dynamics (CFD) codes. There are far fewer sources for jet spectra and for the space-time correlations of turbulent velocity required for aeroacoustics applications, although there have been many singular publications with various unique statistics, such as Proper Orthogonal Decomposition, designed to uncover an underlying low-order dynamical description of turbulent jet flow. As the complexity of the statistic increases, the number of flows for which the data has been categorized and assembled decreases, making it difficult to systematically validate prediction codes that require high-level statistics over a broad range of jet flow conditions. For several years, researchers at NASA have worked on developing and validating jet noise prediction codes. One such class of codes, loosely called CFD-based or statistical methods, uses RANS CFD to predict jet mean and turbulent intensities in velocity and temperature. These flow quantities serve as the input to the acoustic source models and flow-sound interaction calculations that yield predictions of far-field jet noise. To develop this capability, a catalog of turbulent jet flows has been created with statistics ranging from mean velocity to space-time correlations of Reynolds stresses. The present document aims to document this catalog and to assess the accuracy of the data, e.g. to establish uncertainties for the data. This paper covers the following five tasks: document the acquisition and processing procedures used to create the particle image velocimetry (PIV) datasets; compare PIV data with hotwire and laser Doppler velocimetry (LDV) data published in the open literature; compare different datasets acquired at roughly the same flow conditions to establish uncertainties; create a consensus dataset for a range of hot jet flows, including uncertainty bands; and analyze this consensus dataset for self-consistency and compare jet characteristics to those of the open literature. One final objective fulfilled by this work was the demonstration of a universal scaling for the jet flow fields, at least within the region of interest to aeroacoustics. The potential core length and the spread rate of the half-velocity radius were used to collapse the mean and turbulent velocity fields over the first 20 jet diameters in a highly satisfying manner.
Summary Statistics for Fun Dough Data Acquired at LLNL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kallman, J S; Morales, K E; Whipple, R E
Using x-ray computerized tomography (CT), we have characterized the x-ray linear attenuation coefficients (LAC) of a Play Dough(TM)-like product, Fun Dough(TM), designated as PD. Table 1 gives the first-order statistics for each of four CT measurements, estimated with a Gaussian kernel density estimator (KDE) analysis. The mean values of the LAC range from a high of about 2100 LMHU_D at 100 kVp to a low of about 1100 LMHU_D at 300 kVp. The standard deviation of each measurement is around 1% of the mean. The entropy covers the range from 3.9 to 4.6. Ordinarily, we would model the LAC of the material and compare the modeled values to the measured values. In this case, however, we did not have the composition of the material and therefore did not model the LAC. Using a method recently proposed by Lawrence Livermore National Laboratory (LLNL), we estimate the value of the effective atomic number, Z_eff, to be near 8.5. LLNL prepared about 50 mL of the Fun Dough(TM) in a polypropylene vial and firmly compressed it immediately prior to the x-ray measurements. Still, layers can plainly be seen in the reconstructed images, indicating that the bulk density of the material in the container is affected by voids and bubbles. We used the computer program IMGREC to reconstruct the CT images. The values of the key parameters used in the data capture and image reconstruction are given in this report. Additional details may be found in the experimental SOP and a separate document. To characterize the statistical distribution of LAC values in each CT image, we first isolated an 80% central-core segment of volume elements ('voxels') lying completely within the specimen, away from the walls of the polypropylene vial. All of the voxels within this central core, including those comprised of voids and inclusions, are included in the statistics. We then calculated the mean value, standard deviation and entropy for (a) the four image segments and for (b) their digital gradient images. (A digital gradient image of a given image was obtained by taking the absolute value of the difference between the initial image and that same image offset by one voxel horizontally, parallel to the rows of the x-ray detector array.) The statistics of the initial image of LAC values are called 'first-order statistics'; those of the gradient image, 'second-order statistics.'
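A sketch of the report's first- and second-order summary statistics, KDE-smoothed mean, standard deviation, and entropy of an image and of its one-voxel-offset gradient image; the toy slice mimics the reported ~1% spread:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
image = rng.normal(2100.0, 21.0, size=(64, 64))   # toy LAC slice, ~1% sd

def summary(vals):
    kde = stats.gaussian_kde(vals)
    grid = np.linspace(vals.min(), vals.max(), 512)
    p = kde(grid); p /= p.sum()
    # Discrete entropy of the KDE over the grid (bin-width dependent,
    # as with any histogram-style entropy).
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return vals.mean(), vals.std(ddof=1), entropy

print(summary(image.ravel()))                # first-order statistics
grad = np.abs(image[:, 1:] - image[:, :-1])  # one-voxel horizontal offset
print(summary(grad.ravel()))                 # second-order statistics
```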
ERIC Educational Resources Information Center
SAW, J.G.
This paper deals with some tests of hypotheses frequently encountered in the analysis of multivariate data. The type of hypothesis considered is that which the statistician can answer in the negative or affirmative. The Doolittle method makes it possible to evaluate the determinant of a matrix of high order, to solve a matrix equation, or to…
Reliability study of refractory gate gallium arsenide MESFETS
NASA Technical Reports Server (NTRS)
Yin, J. C. W.; Portnoy, W. M.
1981-01-01
Refractory gate MESFET's were fabricated as an alternative to aluminum gate devices, which have been found to be unreliable as RF power amplifiers. In order to determine the reliability of the new structures, statistics of failure and information about mechanisms of failure in refractory gate MESFET's are given. Test transistors were stressed under conditions of high temperature and forward gate current to enhance failure. Results of work at 150 C and 275 C are reported.
2007-11-28
order to optimize pilot performance in the JSF tactical maneuvering environment • Binaural Capture and Synthesis of Ambient Soundscapes - Create a... technique for capturing and replicating ambient soundscapes, and use the technique to statistically model ambient soundscapes for a wide range of... Actuator (HTCA) • Binaural Capture and Synthesis of Ambient Soundscapes • High Temperature PM Actuator Motor • Manufacturing of New Active Noise
Statistical physics of the symmetric group.
Williams, Mobolaji
2017-04-01
Ordered chains (such as chains of amino acids) are ubiquitous in biological cells, and these chains perform specific functions contingent on the sequence of their components. Using the existence and general properties of such sequences as a theoretical motivation, we study the statistical physics of systems whose state space is defined by the possible permutations of an ordered list, i.e., the symmetric group, and whose energy is a function of how certain permutations deviate from some chosen correct ordering. Such a nonfactorizable state space is quite different from the state spaces typically considered in statistical physics systems and consequently has novel behavior in systems with interacting and even noninteracting Hamiltonians. Various parameter choices of a mean-field model reveal the system to contain five different physical regimes defined by two transition temperatures, a triple point, and a quadruple point. Finally, we conclude by discussing how the general analysis can be extended to state spaces with more complex combinatorial properties and to other standard questions of statistical mechanics models.
Comparison of Adaline and Multiple Linear Regression Methods for Rainfall Forecasting
NASA Astrophysics Data System (ADS)
Sutawinaya, IP; Astawa, INGA; Hariyanti, NKD
2018-01-01
Heavy rainfall can cause disasters, so forecasts of rainfall intensity are needed. The main factor that causes flooding is high rainfall intensity, which pushes a river beyond its capacity and floods the surrounding area. Rainfall is a dynamic factor, which makes it an interesting subject of study. To support rainfall forecasting, methods ranging from Artificial Intelligence (AI) to statistics can be used. In this research, we used Adaline as the AI method and multiple linear regression as the statistical method. The method with the more accurate forecast results is the better choice for forecasting rainfall. Through these methods, we determine which is the best method for rainfall forecasting here.
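For reference, the Adaline mentioned above is a single linear neuron trained with the least-mean-squares (LMS) rule. The sketch below is a generic version under assumed names and hyperparameters, not the authors' implementation.

```python
import numpy as np

def adaline_fit(X, y, lr=1e-3, epochs=200):
    """Train an Adaline: linear activation, weights updated by the LMS rule."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, ti in zip(X, y):
            err = ti - (xi @ w + b)   # error of the linear output
            w += lr * err * xi
            b += lr * err
    return w, b

def adaline_predict(X, w, b):
    return X @ w + b                  # forecast, e.g. rainfall intensity
```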
NASA Astrophysics Data System (ADS)
Zabolotna, Natalia I.; Radchenko, Kostiantyn O.; Karas, Oleksandr V.
2018-01-01
A method for diagnosing breast fibroadenoma is proposed, based on statistical analysis (determination and analysis of statistical moments of the 1st-4th order) of polarization images of the imaginary Jones-matrix elements of optically thin (attenuation coefficient τ ≤ 0.1) blood plasma films, with subsequent intelligent differentiation based on the method of "fuzzy" logic and discriminant analysis. The accuracy of differentiating blood plasma samples into the "norm" and breast "fibroadenoma" classes was 82.7% with the linear discriminant analysis method and 95.3% with the "fuzzy" logic method. The obtained results confirm the potentially high reliability of the differentiation method based on "fuzzy" analysis.
Petsch, Harold E.
1979-01-01
Statistical summaries of daily streamflow data for 246 stations east of the Continental Divide in Colorado and adjacent States are presented in this report. Duration tables, high-flow sequence tables, and low-flow sequence tables provide information about daily mean discharge. The mean, variance, standard deviation, skewness, and coefficient of variation are provided for monthly and annual flows. Percentages of average flow are provided for monthly flows and first-order serial-correlation coefficients are provided for annual flows. The text explains the nature and derivation of the data and illustrates applications of the tabulated information by examples. The data may be used by agencies and individuals engaged in water studies. (USGS)
High-statistics measurement of the η → 3π0 decay at the Mainz Microtron
NASA Astrophysics Data System (ADS)
Prakhov, S.; Abt, S.; Achenbach, P.; Adlarson, P.; Afzal, F.; Aguar-Bartolomé, P.; Ahmed, Z.; Ahrens, J.; Annand, J. R. M.; Arends, H. J.; Bantawa, K.; Bashkanov, M.; Beck, R.; Biroth, M.; Borisov, N. S.; Braghieri, A.; Briscoe, W. J.; Cherepnya, S.; Cividini, F.; Collicott, C.; Costanza, S.; Denig, A.; Dieterle, M.; Downie, E. J.; Drexler, P.; Ferretti Bondy, M. I.; Fil'kov, L. V.; Fix, A.; Gardner, S.; Garni, S.; Glazier, D. I.; Gorodnov, I.; Gradl, W.; Gurevich, G. M.; Hamill, C. B.; Heijkenskjöld, L.; Hornidge, D.; Huber, G. M.; Käser, A.; Kashevarov, V. L.; Kay, S.; Keshelashvili, I.; Kondratiev, R.; Korolija, M.; Krusche, B.; Lazarev, A.; Lisin, V.; Livingston, K.; Lutterer, S.; MacGregor, I. J. D.; Manley, D. M.; Martel, P. P.; McGeorge, J. C.; Middleton, D. G.; Miskimen, R.; Mornacchi, E.; Mushkarenkov, A.; Neganov, A.; Neiser, A.; Oberle, M.; Ostrick, M.; Otte, P. B.; Paudyal, D.; Pedroni, P.; Polonski, A.; Ron, G.; Rostomyan, T.; Sarty, A.; Sfienti, C.; Sokhoyan, V.; Spieker, K.; Steffen, O.; Strakovsky, I. I.; Strandberg, B.; Strub, Th.; Supek, I.; Thiel, A.; Thiel, M.; Thomas, A.; Unverzagt, M.; Usov, Yu. A.; Wagner, S.; Walford, N. K.; Watts, D. P.; Werthmüller, D.; Wettig, J.; Witthauer, L.; Wolfes, M.; Zana, L. A.; A2 Collaboration at MAMI
2018-06-01
The largest statistics to date of 7 × 10^6 η → 3π0 decays, based on 6.2 × 10^7 η mesons produced in the γp → ηp reaction, has been accumulated by the A2 Collaboration at the Mainz Microtron, MAMI. It allowed a detailed study of the η → 3π0 dynamics beyond its conventional parametrization with just the quadratic slope parameter α and enabled, for the first time, a measurement of the second-order term and a better understanding of the cusp structure in the neutral decay. The present data are also compared to recent theoretical calculations that predict a nonlinear dependence on the quadratic distance from the Dalitz-plot center.
Garcia, Nathan S; Sexton, Julie; Riggins, Tracey; Brown, Jeff; Lomas, Michael W; Martiny, Adam C
2018-01-01
Current hypotheses suggest that cellular elemental stoichiometry of marine eukaryotic phytoplankton such as the ratios of cellular carbon:nitrogen:phosphorus (C:N:P) vary between phylogenetic groups. To investigate how phylogenetic structure, cell volume, growth rate, and temperature interact to affect the cellular elemental stoichiometry of marine eukaryotic phytoplankton, we examined the C:N:P composition in 30 isolates across 7 classes of marine phytoplankton that were grown with a sufficient supply of nutrients and nitrate as the nitrogen source. The isolates covered a wide range in cell volume (5 orders of magnitude), growth rate (<0.01-0.9 d^-1), and habitat temperature (2-24°C). Our analysis indicates that C:N:P is highly variable, with statistical model residuals accounting for over half of the total variance and no relationship between phylogeny and elemental stoichiometry. Furthermore, our data indicated that variability in C:P, N:P, and C:N within Bacillariophyceae (diatoms) was as high as that among all of the isolates that we examined. In addition, a linear statistical model identified a positive relationship between diatom cell volume and C:P and N:P. Among all of the isolates that we examined, the statistical model identified temperature as a significant factor, consistent with the temperature-dependent translation efficiency model, but temperature only explained 5% of the total statistical model variance. While some of our results support data from previous field studies, the high variability of elemental ratios within Bacillariophyceae contradicts previous work that suggests that this cosmopolitan group of microalgae has consistently low C:P and N:P ratios in comparison with other groups.
Counting statistics of chaotic resonances at optical frequencies: Theory and experiments
NASA Astrophysics Data System (ADS)
Lippolis, Domenico; Wang, Li; Xiao, Yun-Feng
2017-07-01
A deformed dielectric microcavity is used as an experimental platform for the analysis of the statistics of chaotic resonances, with the aim of testing fractal Weyl laws at optical frequencies. In order to surmount the difficulties that arise from reading strongly overlapping spectra, we exploit the mixed nature of the phase space at hand, and only count the high-Q whispering-gallery modes (WGMs) directly. That enables us to draw statistical information on the more lossy chaotic resonances, coupled to the high-Q regular modes via dynamical tunneling. Three different models [classical, Random-Matrix-Theory (RMT) based, semiclassical] to interpret the experimental data are discussed. On the basis of least-squares analysis, theoretical estimates of Ehrenfest time, and independent measurements, we find that a semiclassically modified RMT-based expression best describes the experiment in all its realizations, particularly when the resonator is coupled to visible light, while RMT alone still works quite well in the infrared. In this work we reexamine and substantially extend the results of a short paper published earlier [L. Wang et al., Phys. Rev. E 93, 040201(R) (2016), 10.1103/PhysRevE.93.040201].
Detecting cell death with optical coherence tomography and envelope statistics
NASA Astrophysics Data System (ADS)
Farhat, Golnaz; Yang, Victor X. D.; Czarnota, Gregory J.; Kolios, Michael C.
2011-02-01
Currently no standard clinical or preclinical noninvasive method exists to monitor cell death based on morphological changes at the cellular level. In our past work we have demonstrated that quantitative high frequency ultrasound imaging can detect cell death in vitro and in vivo. In this study we apply quantitative methods previously used with high frequency ultrasound to optical coherence tomography (OCT) to detect cell death. The ultimate goal of this work is to use these methods for optically-based clinical and preclinical cancer treatment monitoring. Optical coherence tomography data were acquired from acute myeloid leukemia cells undergoing three modes of cell death. Significant increases in integrated backscatter were observed for cells undergoing apoptosis and mitotic arrest, while necrotic cells induced a decrease. These changes appear to be linked to structural changes observed in histology obtained from the cell samples. Signal envelope statistics were analyzed from fittings of the generalized gamma distribution to histograms of envelope intensities. The parameters from this distribution demonstrated sensitivities to morphological changes in the cell samples. These results indicate that OCT integrated backscatter and first order envelope statistics can be used to detect and potentially differentiate between modes of cell death in vitro.
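The envelope-statistics step above amounts to fitting a generalized gamma distribution to envelope intensities. A minimal SciPy sketch follows; the synthetic gamma-distributed sample stands in for real OCT envelope data and is purely illustrative.

```python
import numpy as np
from scipy.stats import gengamma

# Synthetic stand-in for OCT envelope intensities (illustrative only)
envelope = np.random.default_rng(0).gamma(shape=2.0, scale=1.0, size=5000)

# Fit the generalized gamma distribution with the location fixed at zero;
# the fitted shape parameters are the ones tracked as markers of cell death
a, c, loc, scale = gengamma.fit(envelope, floc=0)
print(f"shape a = {a:.3f}, power c = {c:.3f}, scale = {scale:.3f}")
```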
Ordering statistics of four random walkers on a line
NASA Astrophysics Data System (ADS)
Helenbrook, Brian; ben-Avraham, Daniel
2018-05-01
We study the ordering statistics of four random walkers on the line, obtaining a much improved estimate for the long-time decay exponent of the probability that a particle leads to time t, P_lead(t) ~ t^-0.91287850, and that a particle lags to time t (never assumes the lead), P_lag(t) ~ t^-0.30763604. Exponents of several other ordering statistics for N = 4 walkers are obtained to eight-digit accuracy as well. The subtle correlations between n walkers that lag jointly, out of a field of N, are discussed: for N = 3 there are no correlations and P_lead(t) ~ P_lag(t)^2. In contrast, our results rule out the possibility that P_lead(t) ~ P_lag(t)^3 for N = 4, although the correlations in this borderline case are tiny.
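A rough Monte Carlo check of such lead statistics is straightforward, although the eight-digit exponents quoted above require far more refined methods. In this sketch (names and parameters are assumptions), ties count as leading, and the log-log slope of the returned curve estimates the decay exponent.

```python
import numpy as np

def lead_survival(n_walkers=4, n_steps=2000, n_trials=50000, seed=1):
    """P(walker 0 has led at all times up to t), estimated by simulation."""
    rng = np.random.default_rng(seed)
    alive = np.zeros(n_steps)
    for _ in range(n_trials):
        steps = rng.choice((-1, 1), size=(n_steps, n_walkers))
        pos = np.cumsum(steps, axis=0)
        leading = pos[:, 0] >= pos[:, 1:].max(axis=1)   # ties count as leading
        t_fail = n_steps if leading.all() else np.argmin(leading)
        alive[:t_fail] += 1
    return alive / n_trials
```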
Higher-Order Statistical Correlations and Mutual Information Among Particles in a Quantum Well
NASA Astrophysics Data System (ADS)
Yépez, V. S.; Sagar, R. P.; Laguna, H. G.
2017-12-01
The influence of wave function symmetry on statistical correlation is studied for the case of three non-interacting spin-free quantum particles in a unidimensional box, in position and in momentum space. Higher-order statistical correlations occurring among the three particles in this quantum system are quantified via higher-order mutual information and compared to the correlation between pairs of variables in this model, and to the correlation in the two-particle system. The results for the higher-order mutual information show that there are states where the symmetric wave functions are more correlated than the antisymmetric ones with the same quantum numbers. This holds in position as well as in momentum space. This behavior is opposite to that observed for the correlation between pairs of variables in this model, and in the two-particle system, where the antisymmetric wave functions are in general more correlated. These results are also consistent with those observed in a system of three uncoupled oscillators. The use of higher-order mutual information as a correlation measure is monitored and examined by considering a superposition of states or systems with two Slater determinants.
NASA Astrophysics Data System (ADS)
Havens, Timothy C.; Cummings, Ian; Botts, Jonathan; Summers, Jason E.
2017-05-01
The linear ordered statistic (LOS) is a parameterized ordered statistic (OS) that is a weighted average of a rank-ordered sample. LOS operators are useful generalizations of aggregation as they can represent any linear aggregation, from minimum to maximum, including conventional aggregations, such as mean and median. In the fuzzy logic field, these aggregations are called ordered weighted averages (OWAs). Here, we present a method for learning LOS operators from training data, viz., data for which you know the output of the desired LOS. We then extend the learning process with regularization, such that a lower complexity or sparse LOS can be learned. Hence, we discuss what 'lower complexity' means in this context and how to represent that in the optimization procedure. Finally, we apply our learning methods to the well-known constant-false-alarm-rate (CFAR) detection problem, specifically for the case of background levels modeled by long-tailed distributions, such as the K-distribution. These backgrounds arise in several pertinent imaging problems, including the modeling of clutter in synthetic aperture radar and sonar (SAR and SAS) and in wireless communications.
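In outline, learning an LOS/OWA reduces to regression on rank-ordered samples subject to nonnegative weights that sum to one. The sketch below is one plausible formulation with a ridge-style penalty standing in for the regularizers discussed above; the function name, the use of NNLS, and the penalty form are assumptions, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import nnls

def learn_los(X, y, lam=0.0):
    """Fit nonnegative LOS/OWA weights (normalized to sum to one) from
    training pairs: rows of X are samples, y the desired aggregated values."""
    Xs = -np.sort(-X, axis=1)                       # rank-order each row, descending
    if lam > 0:                                     # optional ridge-style penalty
        Xs = np.vstack([Xs, np.sqrt(lam) * np.eye(X.shape[1])])
        y = np.concatenate([y, np.zeros(X.shape[1])])
    w, _ = nnls(Xs, y)                              # nonnegativity via NNLS
    return w / w.sum()                              # e.g. ~[0,...,0,1] if y = min(X)
```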
NASA Astrophysics Data System (ADS)
Cyganek, Boguslaw; Smolka, Bogdan
2015-02-01
In this paper a system for real-time recognition of objects in multidimensional video signals is proposed. Object recognition is done by pattern projection into tensor subspaces obtained from the factorization of the signal tensors representing the input signal. However, instead of taking only the intensity signal, the novelty of this paper is to first build the Extended Structural Tensor representation from the intensity signal, which conveys information on signal intensities as well as on higher-order statistics of the input signals. This way the higher-order input pattern tensors are built from the training samples. Then, the tensor subspaces are built based on the Higher-Order Singular Value Decomposition of the prototype pattern tensors. Finally, recognition relies on measurements of the distance of a test pattern projected into the tensor subspaces obtained from the training tensors. Due to the high dimensionality of the input data, tensor-based methods require high memory and computational resources. However, recent achievements in the technology of multi-core microprocessors and graphics cards allow real-time operation of the multidimensional methods, as is shown and analyzed in this paper based on real examples of object detection in digital images.
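A skeletal version of the tensor-subspace machinery, assuming a plain truncated HOSVD computed mode by mode and a projection-residual distance for recognition (names and ranks are illustrative):

```python
import numpy as np

def hosvd_basis(T, ranks):
    """Leading mode-n singular vectors of a data tensor T (truncated HOSVD)."""
    U = []
    for n in range(T.ndim):
        unfolding = np.moveaxis(T, n, 0).reshape(T.shape[n], -1)
        u, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        U.append(u[:, :ranks[n]])
    return U

def subspace_residual(x, U0):
    """Distance of a vectorized test pattern from the span of the mode-0
    basis; recognition assigns the class with the smallest residual."""
    return np.linalg.norm(x - U0 @ (U0.T @ x))
```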
Nicolopoulou, E P; Ztoupis, I N; Karabetsos, E; Gonos, I F; Stathopulos, I A
2015-04-01
The second round of an interlaboratory comparison scheme on radio frequency electromagnetic field measurements has been conducted in order to evaluate the overall performance of laboratories that perform measurements in the vicinity of mobile phone base stations and broadcast antenna facilities. The participants recorded the electric field strength produced by two high frequency signal generators inside an anechoic chamber in three measurement scenarios, with the antennas transmitting each time different signals in the FM, VHF, UHF and GSM frequency bands. In each measurement scenario, the participants also used their measurements to calculate the relative exposure ratios. The results were evaluated at each test level by calculating performance statistics (z-scores and En numbers). Subsequently, possible sources of errors for each participating laboratory were discussed, and the overall evaluation of their performances was determined by using an aggregated performance statistic. A comparison between the two rounds proves the necessity of the scheme. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
UniEnt: uniform entropy model for the dynamics of a neuronal population
NASA Astrophysics Data System (ADS)
Hernandez Lahme, Damian; Nemenman, Ilya
Sensory information and motor responses are encoded in the brain in a collective spiking activity of a large number of neurons. Understanding the neural code requires inferring statistical properties of such collective dynamics from multicellular neurophysiological recordings. Questions of whether synchronous activity or silence of multiple neurons carries information about the stimuli or the motor responses are especially interesting. Unfortunately, detection of such high order statistical interactions from data is especially challenging due to the exponentially large dimensionality of the state space of neural collectives. Here we present UniEnt, a method for the inference of strengths of multivariate neural interaction patterns. The method is based on the Bayesian prior that makes no assumptions (uniform a priori expectations) about the value of the entropy of the observed multivariate neural activity, in contrast to popular approaches that maximize this entropy. We then study previously published multi-electrode recordings data from salamander retina, exposing the relevance of higher order neural interaction patterns for information encoding in this system. This work was supported in part by Grants JSMF/220020321 and NSF/IOS/1208126.
Shilling Attacks Detection in Recommender Systems Based on Target Item Analysis
Zhou, Wei; Wen, Junhao; Koh, Yun Sing; Xiong, Qingyu; Gao, Min; Dobbie, Gillian; Alam, Shafiq
2015-01-01
Recommender systems are highly vulnerable to shilling attacks, both by individuals and groups. Attackers who introduce biased ratings in order to affect recommendations have been shown to negatively affect collaborative filtering (CF) algorithms. Previous research focuses only on the differences between genuine profiles and attack profiles, ignoring the group characteristics of attack profiles. In this paper, we study the use of statistical metrics to detect rating patterns of attackers and group characteristics in attack profiles. A further issue is that most existing detection methods are model specific. Two metrics, Rating Deviation from Mean Agreement (RDMA) and Degree of Similarity with Top Neighbors (DegSim), are used for analyzing rating patterns between malicious profiles and genuine profiles in attack models. Building upon this, we also propose and evaluate a detection structure called RD-TIA for detecting shilling attacks in recommender systems using a statistical approach. In order to detect more complicated attack models, we propose a novel metric called DegSim’ based on DegSim. The experimental results show that our detection model based on target item analysis is an effective approach for detecting shilling attacks. PMID:26222882
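For concreteness, a common formulation of the RDMA metric on a user-item rating matrix is sketched below (DegSim, the average Pearson similarity with the top-k neighbors, follows the same pattern); the array layout and names are assumptions, and every item is assumed to have at least one rating.

```python
import numpy as np

def rdma(ratings):
    """Rating Deviation from Mean Agreement, one value per user.
    `ratings`: users x items array with np.nan marking unrated items."""
    item_mean = np.nanmean(ratings, axis=0)           # average rating per item
    item_count = np.sum(~np.isnan(ratings), axis=0)   # number of ratings per item
    dev = np.abs(ratings - item_mean) / item_count    # deviation, popularity-weighted
    return np.nanmean(dev, axis=1)                    # average over each user's items
```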
Linear and Order Statistics Combiners for Pattern Classification
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Ghosh, Joydeep; Lau, Sonie (Technical Monitor)
2001-01-01
Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This chapter provides an analytical framework to quantify the improvements in classification results due to combining. The results apply to both linear combiners and order statistics combiners. We first show that to a first order approximation, the error rate obtained over and above the Bayes error rate is directly proportional to the variance of the actual decision boundaries around the Bayes optimum boundary. Combining classifiers in output space reduces this variance, and hence reduces the 'added' error. If N unbiased classifiers are combined by simple averaging, the added error rate can be reduced by a factor of N if the individual errors in approximating the decision boundaries are uncorrelated. Expressions are then derived for linear combiners which are biased or correlated, and the effect of output correlations on ensemble performance is quantified. For order statistics based non-linear combiners, we derive expressions that indicate how much the median, the maximum and in general the i-th order statistic can improve classifier performance. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space. Experimental results on several public domain data sets are provided to illustrate the benefits of combining and to support the analytical results.
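A toy illustration of linear versus order statistics combiners over per-class scores (the scores and names below are ours, for illustration only):

```python
import numpy as np

def combine(outputs, rule="median"):
    """Combine an N x n_classes array of classifier scores into one row."""
    rules = {"mean": np.mean, "median": np.median, "max": np.max, "min": np.min}
    return rules[rule](outputs, axis=0)

scores = np.array([[0.7, 0.3],    # classifier 1
                   [0.6, 0.4],    # classifier 2
                   [0.2, 0.8]])   # classifier 3 disagrees
print(combine(scores, "mean"))    # linear combiner
print(combine(scores, "median"))  # order statistics combiner, resists the outlier
```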
Statistical scaling of pore-scale Lagrangian velocities in natural porous media.
Siena, M; Guadagnini, A; Riva, M; Bijeljic, B; Pereira Nunes, J P; Blunt, M J
2014-08-01
We investigate the scaling behavior of sample statistics of pore-scale Lagrangian velocities in two different rock samples, Bentheimer sandstone and Estaillades limestone. The samples are imaged using x-ray computer tomography with micron-scale resolution. The scaling analysis relies on the study of the way qth-order sample structure functions (statistical moments of order q of absolute increments) of Lagrangian velocities depend on separation distances, or lags, traveled along the mean flow direction. In the sandstone block, sample structure functions of all orders exhibit a power-law scaling within a clearly identifiable intermediate range of lags. Sample structure functions associated with the limestone block display two diverse power-law regimes, which we infer to be related to two overlapping spatially correlated structures. In both rocks and for all orders q, we observe linear relationships between logarithmic structure functions of successive orders at all lags (a phenomenon that is typically known as extended power scaling, or extended self-similarity). The scaling behavior of Lagrangian velocities is compared with the one exhibited by porosity and specific surface area, which constitute two key pore-scale geometric observables. The statistical scaling of the local velocity field reflects the behavior of these geometric observables, with the occurrence of power-law-scaling regimes within the same range of lags for sample structure functions of Lagrangian velocity, porosity, and specific surface area.
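The core quantity above, the qth-order sample structure function along the mean flow, can be sketched as follows (names are assumptions; the tomography-derived velocity fields themselves are not reproduced here):

```python
import numpy as np

def structure_functions(v, lags, qs=(1, 2, 3, 4)):
    """S_q(l) = mean(|v(x + l) - v(x)|**q) for each order q and lag l.
    Power-law scaling appears as a straight line of log S_q versus log l;
    extended self-similarity, as a line of log S_q versus log S_q'."""
    return {q: np.array([np.mean(np.abs(v[l:] - v[:-l]) ** q) for l in lags])
            for q in qs}
```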
Optimal choice of word length when comparing two Markov sequences using a χ²-statistic.
Bai, Xin; Tang, Kujin; Ren, Jie; Waterman, Michael; Sun, Fengzhu
2017-10-03
Alignment-free sequence comparison using counts of word patterns (grams, k-tuples) has become an active research topic due to the large amount of sequence data from the new sequencing technologies. Genome sequences are frequently modelled by Markov chains, and the likelihood ratio test or the corresponding approximate χ²-statistic has been suggested to compare two sequences. However, it is not known how to best choose the word length k in such studies. We develop an optimal strategy to choose k by maximizing the statistical power of detecting differences between two sequences. Let the orders of the Markov chains for the two sequences be r1 and r2, respectively. We show through both simulations and theoretical studies that the optimal k = max(r1, r2) + 1 for both long sequences and next generation sequencing (NGS) read data. The orders of the Markov chains may be unknown and several methods have been developed to estimate the orders of Markov chains based on both long sequences and NGS reads. We study the power loss of the statistics when the estimated orders are used. It is shown that the power loss is minimal for some of the estimators of the orders of Markov chains. Our studies provide guidelines on choosing the optimal word length for the comparison of Markov sequences.
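Schematically, the comparison counts k-word occurrences in both sequences and forms a chi-square over the pooled counts, with k = max(r1, r2) + 1 chosen as above. The sketch uses the plain Pearson homogeneity statistic; the paper's Markov-adjusted statistic differs in detail, and the names are assumptions.

```python
from collections import Counter

def kmer_counts(seq, k):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def chi2_word_statistic(seq1, seq2, k):
    """Pearson chi-square comparing the k-word composition of two sequences."""
    c1, c2 = kmer_counts(seq1, k), kmer_counts(seq2, k)
    n1, n2 = sum(c1.values()), sum(c2.values())
    stat = 0.0
    for w in set(c1) | set(c2):
        pooled = (c1[w] + c2[w]) / (n1 + n2)      # pooled frequency of word w
        for count, n in ((c1[w], n1), (c2[w], n2)):
            expected = n * pooled
            stat += (count - expected) ** 2 / expected
    return stat

print(chi2_word_statistic("ACGTACGTAC", "ACGGACGGAC", k=2))
```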
NASA Astrophysics Data System (ADS)
Behrendt, A.; Wulfmeyer, V.; Hammann, E.; Muppa, S. K.; Pal, S.
2014-11-01
The rotational Raman lidar of the University of Hohenheim (UHOH) measures atmospheric temperature profiles during daytime with high resolution (10 s, 109 m). The data contain low noise errors even in daytime due to the use of strong UV laser light (355 nm, 10 W, 50 Hz) and a very efficient interference-filter-based polychromator. In this paper, we present the first profiling of the second- to fourth-order moments of turbulent temperature fluctuations as well as of skewness and kurtosis in the convective boundary layer (CBL) including the interfacial layer (IL). The results demonstrate that the UHOH RRL resolves the vertical structure of these moments. The data set used for this case study was collected in western Germany (50°53'50.56'' N, 6°27'50.39'' E, 110 m a.s.l.) within one hour around local noon on 24 April 2013 during the Intensive Observations Period (IOP) 6 of the HD(CP)2 Observational Prototype Experiment (HOPE), which is embedded in the German project HD(CP)2 (High-Definition Clouds and Precipitation for advancing Climate Prediction). First, we investigated profiles of the noise variance and compared it with estimates of the statistical temperature measurement uncertainty ΔT based on Poisson statistics. The agreement confirms that photon count numbers obtained from extrapolated analog signal intensities provide a lower estimate of the statistical errors. The total statistical uncertainty of a 20 min temperature measurement is lower than 0.1 K up to 1050 m a.g.l. at noontime; even for single 10 s temperature profiles, it is smaller than 1 K up to 1000 m a.g.l. Then we confirmed by autocovariance and spectral analyses of the atmospheric temperature fluctuations that a temporal resolution of 10 s was sufficient to resolve the turbulence down to the inertial subrange. This is also indicated by the profile of the integral scale of the temperature fluctuations, which was in the range of 40 to 120 s in the CBL. Analyzing then the profiles of the second-, third-, and fourth-order moments, we found the largest values of all moments in the IL around the mean top of the CBL, which was located at 1230 m a.g.l. The maximum of the variance profile in the IL was 0.40 K², with 0.06 and 0.08 K² for the sampling error and noise error, respectively. The third-order moment was not significantly different from zero inside the CBL but showed a negative peak in the IL with a minimum of -0.72 K³ and values of 0.06 and 0.14 K³ for the sampling and noise errors, respectively. The fourth-order moment and kurtosis values throughout the CBL were quasi-normal.
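At each height, the moments analyzed above reduce to central moments of the temperature time series; the sampling and noise error corrections applied in the paper are omitted in this minimal sketch (names are assumptions).

```python
import numpy as np

def turbulence_moments(T):
    """Second- to fourth-order central moments, skewness, and kurtosis of
    temperature fluctuations at one height (T: 1-D array of samples)."""
    f = T - T.mean()                # fluctuations about the temporal mean
    m2 = np.mean(f ** 2)            # variance, K^2
    m3 = np.mean(f ** 3)            # third-order moment, K^3
    m4 = np.mean(f ** 4)            # fourth-order moment, K^4
    return m2, m3, m4, m3 / m2 ** 1.5, m4 / m2 ** 2   # kurtosis = 3 if quasi-normal
```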
DOT National Transportation Integrated Search
1997-10-01
In order to provide waterborne commerce information as soon as possible, the Waterborne Commerce Statistics Center (WCSC) has prepared this summary document of estimated waterborne commerce statistics for calendar year 1996. The foreign import and ex...
DOT National Transportation Integrated Search
1999-07-30
In order to provide waterborne commerce information as soon as possible, the Waterborne Commerce Statistics Center(WCSC) has prepared this summary document of estimated waterborne commerce statistics for calendar year 1998. The foreign import and exp...
Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu
2015-01-01
Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
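To make the ℓ1 step concrete, the sketch below recovers sparse Hermite PC coefficients from Monte Carlo samples by iterative soft thresholding. This is a generic stand-in under the natural Gaussian sampling, not the paper's coherence-optimal MCMC scheme; names, normalization, and parameters are assumptions.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def ista(Phi, u, lam=1e-3, n_iter=2000):
    """Iterative soft thresholding for l1-regularized least squares."""
    L = np.linalg.norm(Phi, 2) ** 2            # Lipschitz constant of the gradient
    c = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        g = c - Phi.T @ (Phi @ c - u) / L      # gradient step on the data misfit
        c = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return c

rng = np.random.default_rng(0)
x = rng.standard_normal(60)                    # samples of the Gaussian input
Phi = hermevander(x, 8)                        # Hermite basis up to order 8
Phi /= np.linalg.norm(Phi, axis=0)             # column normalization for conditioning
u = 2.0 * Phi[:, 1] - 0.5 * Phi[:, 6]          # a 2-sparse model evaluated at x
print(np.round(ista(Phi, u), 2))               # weight concentrates on modes 1 and 6
```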
Martinez, G T; Rosenauer, A; De Backer, A; Verbeeck, J; Van Aert, S
2014-02-01
High angle annular dark field scanning transmission electron microscopy (HAADF STEM) images provide sample information which is sensitive to the chemical composition. The image intensities indeed scale with the mean atomic number Z. To some extent, chemically different atomic column types can therefore be visually distinguished. However, in order to quantify the atomic column composition with high accuracy and precision, model-based methods are necessary. Therefore, an empirical incoherent parametric imaging model can be used of which the unknown parameters are determined using statistical parameter estimation theory (Van Aert et al., 2009, [1]). In this paper, it will be shown how this method can be combined with frozen lattice multislice simulations in order to evolve from a relative toward an absolute quantification of the composition of single atomic columns with mixed atom types. Furthermore, the validity of the model assumptions are explored and discussed. © 2013 Published by Elsevier B.V. All rights reserved.
Effect of simulated lunar impact on the survival of bacterial spores.
NASA Technical Reports Server (NTRS)
Whitfield, O.; Merek, E. L.; Oyama, V. I.
1973-01-01
In order to test the effect of impact on organisms, the survival of bacterial spores after being propelled at high velocity in Pyrex and plastic beads into crushed basalt was measured. The beads were fired into sterilized canisters by both a conventional powder and a light gas gun. Results indicate that at the minimum (2.4 km/sec) lunar capture velocity, the number of colony forming units (CFUs) decreased by five orders of magnitude, and at 5.5 km/sec, statistically a more probable capture velocity, no CFUs were found. The decrease in CFUs observed with increasing velocity indicates that the spores were most probably killed by the impact.
Nanophase and Composite Optical Materials
NASA Technical Reports Server (NTRS)
2003-01-01
This talk will focus on accomplishments, current developments, and future directions of our work on composite optical materials for microgravity science and space exploration. This research spans the order parameter from quasi-fractal structures such as sol-gels and other aggregated or porous media, to statistically random cluster media such as metal colloids, to highly ordered materials such as layered media and photonic bandgap materials. The common focus is on flexible materials that can be used to produce composite or artificial materials with superior optical properties that could not be achieved with homogeneous materials. Applications of this work to NASA exploration goals such as terraforming, biosensors, solar sails, solar cells, and vehicle health monitoring, will be discussed.
Random walker in temporally deforming higher-order potential forces observed in a financial crisis.
Watanabe, Kota; Takayasu, Hideki; Takayasu, Misako
2009-11-01
Basic peculiarities of market price fluctuations are known to be well described by a recently developed random-walk model in a temporally deforming quadratic potential force whose center is given by a moving average of past price traces [M. Takayasu, T. Mizuno, and H. Takayasu, Physica A 370, 91 (2006)]. By analyzing high-frequency financial time series of exceptional events, such as bubbles and crashes, we confirm the appearance of higher-order potential force in the markets. We show statistical significance of its existence by applying the information criterion. This time series analysis is expected to be applied widely for detecting a nonstationary symptom in random phenomena.
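Schematically, this class of models can be simulated as a walker pulled toward a moving average of its own past by a quadratic force plus a higher-order term. The coefficients, window length, and cubic form below are illustrative assumptions, not the parameters fitted to market data.

```python
import numpy as np

def potential_walk(n=10000, b2=0.3, b3=0.05, window=10, sigma=1.0, seed=0):
    """Random walk in a deforming potential centered on the moving average of
    the last `window` prices; b2 sets the quadratic potential force, b3 an
    illustrative higher-order (cubic-force) correction."""
    rng = np.random.default_rng(seed)
    p = np.zeros(n)
    for t in range(window, n - 1):
        m = p[t - window:t].mean()                     # moving-average center
        force = -b2 * (p[t] - m) - b3 * (p[t] - m) ** 3
        p[t + 1] = p[t] + force + sigma * rng.standard_normal()
    return p
```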
NASA Astrophysics Data System (ADS)
Xu, Kaixuan; Wang, Jun
2017-02-01
In this paper, the recently introduced permutation entropy and sample entropy are further developed to the fractional cases, weighted fractional permutation entropy (WFPE) and fractional sample entropy (FSE). The fractional-order generalization of information entropy is utilized in the above two complexity approaches to detect the statistical characteristics of fractional-order information in complex systems. The effectiveness analysis of the proposed methods on synthetic data and real-world data reveals that tuning the fractional order allows a high sensitivity and more accurate characterization of the signal evolution, which is useful in describing the dynamics of complex systems. Moreover, the nonlinear complexity behaviors of the return series of the Potts financial model and of actual stock markets are compared numerically. The empirical results confirm the feasibility of the proposed model.
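As a baseline for the fractional variants, ordinary permutation entropy can be computed in a few lines; WFPE and FSE then replace the Shannon functional with its fractional-order generalization (and, for WFPE, add amplitude weighting), which is not shown here. Names and defaults are assumptions.

```python
import numpy as np
from math import factorial, log

def permutation_entropy(x, order=3, delay=1):
    """Shannon permutation entropy of a 1-D series, normalized to [0, 1]."""
    counts = {}
    for i in range(len(x) - (order - 1) * delay):
        pattern = tuple(np.argsort(x[i:i + order * delay:delay]))  # ordinal pattern
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log(p)) / log(factorial(order))
```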
Pump RIN-induced impairments in unrepeatered transmission systems using distributed Raman amplifier.
Cheng, Jingchi; Tang, Ming; Lau, Alan Pak Tao; Lu, Chao; Wang, Liang; Dong, Zhenhua; Bilal, Syed Muhammad; Fu, Songnian; Shum, Perry Ping; Liu, Deming
2015-05-04
High spectral efficiency modulation format based unrepeatered transmission systems using distributed Raman amplifier (DRA) have attracted much attention recently. To enhance the reach and optimize system performance, careful design of the DRA is required, based on the analysis of the various types of impairments and their balance. In this paper, we study various pump-RIN-induced distortions on high spectral efficiency modulation formats. The vector theory of both 1st- and higher-order stimulated Raman scattering (SRS) using the Jones-matrix formalism is presented. The pump RIN will induce three types of distortion on high spectral efficiency signals: intensity noise stemming from SRS, phase noise stemming from cross phase modulation (XPM), and polarization crosstalk stemming from cross polarization modulation (XPolM). An analytical model for the statistical properties of relative phase noise (RPN) in higher-order DRA is derived without dealing with the complex vector theory. The impact of pump-RIN-induced impairments is analyzed in polarization-multiplexed (PM)-QPSK and PM-16QAM-based unrepeatered system simulations using 1st-, 2nd- and 3rd-order forward-pumped Raman amplifiers. It is shown that at realistic RIN levels, negligible impairments will be induced on PM-QPSK signals in 1st- and 2nd-order DRA, while non-negligible impairments will occur in the 3rd-order case. PM-16QAM signals suffer more penalties compared to PM-QPSK with the same on-off gain, where both 2nd- and 3rd-order DRA will cause non-negligible performance degradations. We also investigate the performance of digital signal processing (DSP) algorithms to mitigate such impairments.
Statistical uncertainties of a chiral interaction at next-to-next-to leading order
Ekström, A.; Carlsson, B. D.; Wendt, K. A.; ...
2015-02-05
In this paper, we have quantified the statistical uncertainties of the low-energy coupling constants (LECs) of an optimized nucleon–nucleon interaction from chiral effective field theory at next-to-next-to-leading order. In addition, we have propagated the impact of the uncertainties of the LECs to two-nucleon scattering phase shifts, effective range parameters, and deuteron observables.
NASA Astrophysics Data System (ADS)
Zaccaria, A.; Cristelli, M.; Alfi, V.; Ciulla, F.; Pietronero, L.
2010-06-01
We show that the statistics of spreads in real order books is characterized by an intrinsic asymmetry due to discreteness effects for even or odd values of the spread. An analysis of data from the New York Stock Exchange (NYSE) order book points out that traders’ strategies contribute to this asymmetry. We also investigate this phenomenon in the framework of a microscopic model and, by introducing a nonuniform deposition mechanism for limit orders, we are able to quantitatively reproduce the asymmetry found in the experimental data. Simulations of our model also show a realistic dynamics with a sort of intermittent behavior characterized by long periods in which the order book is compact and liquid interrupted by volatile configurations. The order placement strategies produce a nontrivial behavior of the spread relaxation dynamics which is similar to the one observed in real markets.
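The even-odd discreteness effect can be quantified directly from a series of spreads measured in ticks, for example as the difference between the empirical frequencies of even and odd values (a sketch with assumed names):

```python
import numpy as np

def spread_parity_asymmetry(spreads):
    """Difference between the fractions of even and odd spreads (in ticks);
    zero means no even/odd asymmetry."""
    s = np.asarray(spreads, dtype=int)
    even = np.mean(s % 2 == 0)
    return even - (1.0 - even)
```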
Suzuki, Takakuni; Griffin, Sarah A; Samuel, Douglas B
2017-04-01
Several studies have shown structural and statistical similarities between the Diagnostic and Statistical Manual of Mental Disorders (5th ed.; DSM-5) alternative personality disorder model and the Five-Factor Model (FFM). However, no study to date has evaluated the nomological network similarities between the two models. The relations of the Revised NEO Personality Inventory (NEO PI-R) and the Personality Inventory for DSM-5 (PID-5) with relevant criterion variables were examined in a sample of 336 undergraduate students (M age = 19.4; 59.8% female). The resulting profiles for each instrument were statistically compared for similarity. Four of the five domains of the two models have highly similar nomological networks, with the exception being FFM Openness to Experience and PID-5 Psychoticism. Further probing of that pair suggested that the NEO PI-R domain scores obscured meaningful similarity between PID-5 Psychoticism and specific aspects and lower-order facets of Openness. The results support the notion that the DSM-5 alternative personality disorder model trait domains represent variants of the FFM domains. Similarities of Openness and Psychoticism domains were supported when the lower-order aspects and facets of Openness domain were considered. The findings support the view that the DSM-5 trait model represents an instantiation of the FFM. © 2015 Wiley Periodicals, Inc.
Daikoku, Tatsuya; Yatomi, Yutaka; Yumoto, Masato
2017-01-27
Previous neural studies have supported the hypothesis that statistical learning mechanisms are used broadly across different domains such as language and music. However, these studies have only investigated a single aspect of statistical learning at a time, such as recognizing word boundaries or learning word order patterns. In this study, we neurally investigated how the two levels of statistical learning for recognizing word boundaries and word ordering could be reflected in neuromagnetic responses and how acquired statistical knowledge is reorganised when the syntactic rules are revised. Neuromagnetic responses to the Japanese-vowel sequence (a, e, i, o, and u), presented every 0.45 s, were recorded from 14 right-handed Japanese participants. The vowel order was constrained by a Markov stochastic model such that five nonsense words (aue, eao, iea, oiu, and uoi) were chained with an either-or rule: the probability of the forthcoming word was statistically defined (80% for one word; 20% for the other word) by the most recent two words. All of the word transition probabilities (80% and 20%) were switched in the middle of the sequence. In the first and second quarters of the sequence, the neuromagnetic responses to the words that appeared with higher transitional probability were significantly reduced compared with those that appeared with a lower transitional probability. After switching the word transition probabilities, the response reduction was replicated in the last quarter of the sequence. The responses to the final vowels in the words were significantly reduced compared with those to the initial vowels in the last quarter of the sequence. The results suggest that both within-word and between-word statistical learning are reflected in neural responses. The present study supports the hypothesis that listeners learn larger structures such as phrases first, and they subsequently extract smaller structures, such as words, from the learned phrases. The present study provides the first neurophysiological evidence that the correction of statistical knowledge requires more time than the acquisition of new statistical knowledge. Copyright © 2016 Elsevier Ltd. All rights reserved.
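To make the stimulus construction concrete, the sketch below generates a vowel stream from an either-or Markov rule over the five nonsense words. The experiment's actual rule table is not given in the abstract, so the mapping built here is hypothetical.

```python
import random

WORDS = ["aue", "eao", "iea", "oiu", "uoi"]

# Hypothetical either-or rule table: each ordered pair of recent words maps
# to a (high-probability, low-probability) candidate pair for the next word.
RULES = {}
for i, w1 in enumerate(WORDS):
    for j, w2 in enumerate(WORDS):
        RULES[(w1, w2)] = (WORDS[(i + j) % 5], WORDS[(i + j + 1) % 5])

def generate(n_words, p_major=0.8, seed=0):
    """Concatenated vowel stream whose word transitions follow the rules."""
    rng = random.Random(seed)
    seq = [WORDS[0], WORDS[1]]
    while len(seq) < n_words:
        major, minor = RULES[tuple(seq[-2:])]
        seq.append(major if rng.random() < p_major else minor)
    return "".join(seq)

print(generate(10))
```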
NASA Astrophysics Data System (ADS)
Wang, Juven C.; Wen, Xiao-Gang
2015-01-01
String and particle braiding statistics are examined in a class of topological orders described by discrete gauge theories with a gauge group G and a 4-cocycle twist ω_4 of G's cohomology group H^4(G, R/Z) in three-dimensional space and one-dimensional time (3+1D). We establish the topological spin and the spin-statistics relation for the closed strings and their multistring braiding statistics. The 3+1D twisted gauge theory can be characterized by a representation of a modular transformation group, SL(3, Z). We express the SL(3, Z) generators S^xyz and T^xy in terms of the gauge group G and the 4-cocycle ω_4. As we compactify one of the spatial directions z into a compact circle with a gauge flux b inserted, we can use the generators S^xy and T^xy of an SL(2, Z) subgroup to study the dimensional reduction of the 3D topological order C^3D to a direct sum of degenerate states of 2D topological orders C_b^2D in different flux sectors b: C^3D = ⊕_b C_b^2D. The 2D topological orders C_b^2D are described by 2D gauge theories of the group G twisted by the 3-cocycle ω_3(b), dimensionally reduced from the 4-cocycle ω_4. We show that the SL(2, Z) generators, S^xy and T^xy, fully encode a particular type of three-string braiding statistics with a pattern that is the connected sum of two Hopf links. With certain 4-cocycle twists, we discover that, by threading a third string through a two-string unlink into a three-string Hopf-link configuration, Abelian two-string braiding statistics is promoted to non-Abelian three-string braiding statistics.
Huang, Chun-Ta; Chuang, Yu-Chung; Tsai, Yi-Ju; Ko, Wen-Je; Yu, Chong-Jen
2016-01-01
Severe sepsis is a potentially deadly illness and always requires intensive care. Do-not-resuscitate (DNR) orders remain a debated issue in critical care and limited data exist about its impact on care of septic patients, particularly in East Asia. We sought to assess outcome of severe sepsis patients with regard to DNR status in Taiwan. A retrospective cohort study was conducted in intensive care units (ICUs) between 2008 and 2010. All severe sepsis patients were included for analysis. Primary outcome was association between DNR orders and ICU mortality. Volume of interventions was used as proxy indicator to indicate aggressiveness of care. Sixty-seven (9.4%) of 712 patients had DNR orders on ICU admission, and these patients were older and had higher disease severity compared with patients without DNR orders. Notably, DNR patients experienced high ICU mortality (90%). Multivariate analysis revealed that the presence of DNR orders was independently associated with ICU mortality (odds ratio: 6.13; 95% confidence interval: 2.66-14.10). In propensity score-matched cohort, ICU mortality rate (91%) in the DNR group was statistically higher than that (62%) in the non-DNR group (p <0.001). Regarding ICU interventions, arterial and central venous catheterization were more commonly used in DNR patients than in non-DNR patients. From the Asian perspective, septic patients placed on DNR orders on ICU admission had exceptionally high mortality. In contrast to Western reports, DNR patients received more ICU interventions, reflecting more aggressive approach to dealing with this patient population. The findings in some ways reflect differences between East and West cultures and suggest that DNR status is an important confounder in ICU studies involving severely septic patients.
Cavallari, Stefano; Panzeri, Stefano; Mazzoni, Alberto
2014-01-01
Models of networks of Leaky Integrate-and-Fire (LIF) neurons are a widely used tool for theoretical investigations of brain function. These models have been used both with current- and conductance-based synapses. However, the differences in the dynamics expressed by these two approaches have been so far mainly studied at the single neuron level. To investigate how these synaptic models affect network activity, we compared the single neuron and neural population dynamics of conductance-based networks (COBNs) and current-based networks (CUBNs) of LIF neurons. These networks were endowed with sparse excitatory and inhibitory recurrent connections, and were tested in conditions including both low- and high-conductance states. We developed a novel procedure to obtain comparable networks by properly tuning the synaptic parameters not shared by the models. The so defined comparable networks displayed an excellent and robust match of first order statistics (average single neuron firing rates and average frequency spectrum of network activity). However, these comparable networks showed profound differences in the second order statistics of neural population interactions and in the modulation of these properties by external inputs. The correlation between inhibitory and excitatory synaptic currents and the cross-neuron correlation between synaptic inputs, membrane potentials and spike trains were stronger and more stimulus-modulated in the COBN. Because of these properties, the spike train correlation carried more information about the strength of the input in the COBN, although the firing rates were equally informative in both network models. Moreover, the network activity of COBN showed stronger synchronization in the gamma band, and spectral information about the input higher and spread over a broader range of frequencies. These results suggest that the second order statistics of network dynamics depend strongly on the choice of synaptic model. PMID:24634645
Nonequilibrium quantum field dynamics from the two-particle-irreducible effective action
NASA Astrophysics Data System (ADS)
Laurie, Nathan S.
The two-particle-irreducible effective action offers a powerful approach to the study of quantum field dynamics far from equilibrium. Recent and upcoming heavy ion collision experiments motivate the study of such nonequilibrium dynamics in an expanding space-time background. For the O(N) model I derive exact, causal evolution equations for the statistical and spectral functions in a longitudinally expanding system. It is followed by an investigation into how the expansion affects the prospect of the system reaching equilibrium. Results are obtained in 1+1 dimensions at next-to-leading order in loop- and 1/N-expansions of the 2PI effective action. I focus on the evolution of the statistical function from highly nonequilibrium initial conditions, presenting a detailed analysis of early, intermediate and late-time dynamics. It is found that dynamics at very early times is attracted by a nonthermal fixed point of the mean field equations, after which interactions attempt to drive the system to equilibrium. The competition between the interactions and the expansion is eventually won by the expansion, with so-called freeze-out emerging naturally in this description. In order to investigate the convergence of the 2PI-1/N expansion in the O(N) model, I compare results obtained numerically in 1+1 dimensions at leading, next-to-leading and next-to-next-to-leading order in 1/N. Convergence with increasing N, and also with decreasing coupling, are discussed. A comparison is also made in the classical statistical field theory limit, where exact numerical results are available. I focus on early-time dynamics and quasi-particle properties far from equilibrium and observe rapid effective convergence already for moderate values of 1/N or the coupling strength.
May, Michael R; Moore, Brian R
2016-11-01
Evolutionary biologists have long been fascinated by the extreme differences in species numbers across branches of the Tree of Life. This has motivated the development of statistical methods for detecting shifts in the rate of lineage diversification across the branches of phylogenic trees. One of the most frequently used methods, MEDUSA, explores a set of diversification-rate models, where each model assigns branches of the phylogeny to a set of diversification-rate categories. Each model is first fit to the data, and the Akaike information criterion (AIC) is then used to identify the optimal diversification model. Surprisingly, the statistical behavior of this popular method is uncharacterized, which is a concern in light of: (1) the poor performance of the AIC as a means of choosing among models in other phylogenetic contexts; (2) the ad hoc algorithm used to visit diversification models, and; (3) errors that we reveal in the likelihood function used to fit diversification models to the phylogenetic data. Here, we perform an extensive simulation study demonstrating that MEDUSA (1) has a high false-discovery rate (on average, spurious diversification-rate shifts are identified ≈30% of the time), and (2) provides biased estimates of diversification-rate parameters. Understanding the statistical behavior of MEDUSA is critical both to empirical researchers-in order to clarify whether these methods can make reliable inferences from empirical datasets-and to theoretical biologists-in order to clarify the specific problems that need to be solved in order to develop more reliable approaches for detecting shifts in the rate of lineage diversification. [Akaike information criterion; extinction; lineage-specific diversification rates; phylogenetic model selection; speciation.]. © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
Aslanides, Ioannis M.; Padroni, Sara; Arba-Mosquera, Samuel
2012-01-01
Purpose To evaluate mid-term refractive outcomes and higher order aberrations of aspheric PRK for low, moderate and high myopia and myopic astigmatism with the AMARIS excimer laser system (SCHWIND eye-tech-solutions GmbH, Kleinostheim, Germany). Methods This prospective longitudinal study evaluated 80 eyes of 40 subjects who underwent aspheric PRK. Manifest refractive spherical equivalent (MRSE) of up to −10.00 diopters (D) at the spectacle plane with cylinder up to 3.25 D was treated. Refractive outcomes and corneal wavefront data (6 mm pupil to the 7th Zernike order) were evaluated out to 2 years postoperatively. Statistical significance was indicated by P < 0.05. Results The mean manifest spherical equivalent refraction (MRSE) was −4.77 ± 2.45 D (range, −10.00 D to −0.75 D) preoperatively and −0.12 ± 0.35 D (range, −1.87 D to +0.75 D) postoperatively (P < 0.0001). Postoperatively, 91% (73/80) of eyes had an MRSE within ±0.50 D of the attempted. No eyes lost one or more lines of corrected distance visual acuity (CDVA) and CDVA increased by one or more lines in 26% (21/80) of eyes. Corneal trefoil and corneal higher order aberration root mean square did not statistically change postoperatively compared to preoperatively (P > 0.05, both cases). There was a statistical increase in postoperative coma (+0.12 μm) and spherical aberration (+0.14 μm) compared to preoperatively (P < 0.001, both cases). Conclusion Aspheric PRK provides excellent visual and refractive outcomes with induction of individual corneal aberrations but not overall corneal aberrations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pingenot, J; Rieben, R; White, D
2005-10-31
We present a computational study of signal propagation and attenuation of a 200 MHz planar loop antenna in a cave environment. The cave is modeled as a straight, lossy tunnel with randomly rough walls. To simulate a broad frequency band, the full wave Maxwell equations are solved directly in the time domain via a high order vector finite element discretization using the massively parallel CEM code EMSolve. The numerical technique is first verified against theoretical results for a planar loop antenna in a smooth lossy cave. The simulation is then performed for a series of random rough surface meshes in order to generate statistical data for the propagation and attenuation properties of the antenna in a cave environment. Results for the mean and variance of the power spectral density of the electric field are presented and discussed.
Accuracy assessment for a multi-parameter optical calliper in on-line automotive applications
NASA Astrophysics Data System (ADS)
D'Emilia, G.; Di Gasbarro, D.; Gaspari, A.; Natale, E.
2017-08-01
In this work, a methodological approach based on the evaluation of the measurement uncertainty is applied to an experimental test case from the automotive sector. The uncertainty model for different measurement procedures of a high-accuracy optical gauge is discussed in order to identify the best measuring performance of the system for on-line applications, as measurement requirements become more stringent. In particular, with reference to the industrial production and control strategies of high-performing turbochargers, two uncertainty models for the optical calliper are proposed, discussed and compared. The models are based on an integrated approach between measurement methods and production best practices to emphasize their mutual coherence. The paper shows the advantages that measurement uncertainty modelling provides in keeping the uncertainty propagation under control for all the indirect measurements used in statistical production control, on which further improvements can be based.
NASA Astrophysics Data System (ADS)
van Poppel, Bret; Owkes, Mark; Nelson, Thomas; Lee, Zachary; Sowell, Tyler; Benson, Michael; Vasquez Guzman, Pablo; Fahrig, Rebecca; Eaton, John; Kurman, Matthew; Kweon, Chol-Bum; Bravo, Luis
2014-11-01
In this work, we present high-fidelity Computational Fluid Dynamics (CFD) results of liquid fuel injection from a pressure-swirl atomizer and compare the simulations to experimental results obtained using both shadowgraphy and phase-averaged X-ray computed tomography (CT) scans. The CFD and experimental results focus on the dense near-nozzle region to identify the dominant mechanisms of breakup during primary atomization. Simulations are performed using the NGA code of Desjardins et al (JCP 227 (2008)) and employ the volume of fluid (VOF) method proposed by Owkes and Desjardins (JCP 270 (2013)), a second order accurate, un-split, conservative, three-dimensional VOF scheme providing second order density fluxes and capable of robust and accurate high density ratio simulations. Qualitative features and quantitative statistics are assessed and compared for the simulation and experimental results, including the onset of atomization, spray cone angle, and drop size and distribution.
NASA Astrophysics Data System (ADS)
Ng, C. S.; Bhattacharjee, A.
1996-08-01
A sufficient condition is obtained for the development of a finite-time singularity in a highly symmetric Euler flow, first proposed by Kida [J. Phys. Soc. Jpn. 54, 2132 (1985)] and recently simulated by Boratav and Pelz [Phys. Fluids 6, 2757 (1994)]. It is shown that if the second-order spatial derivative of the pressure (pxx) is positive following a Lagrangian element (on the x axis), then a finite-time singularity must occur. Under some assumptions, this Lagrangian sufficient condition can be reduced to an Eulerian sufficient condition which requires that the fourth-order spatial derivative of the pressure (pxxxx) at the origin be positive for all times leading up to the singularity. Analytical as well as direct numerical evaluation over a large ensemble of initial conditions demonstrate that for fixed total energy, pxxxx is predominantly positive with the average value growing with the number of modes.
Counting statistics for genetic switches based on effective interaction approximation
NASA Astrophysics Data System (ADS)
Ohkubo, Jun
2012-09-01
Applicability of counting statistics to a system with an infinite number of states is investigated. Counting statistics has been studied extensively for systems with a finite number of states. While the scheme can in principle be used to count specific transitions in a system with an infinite number of states, it generally yields non-closed equations. A simple genetic switch can be described by a master equation with an infinite number of states, and we use counting statistics to count the number of transitions from inactive to active states in the gene. To avoid the non-closed equations, an effective interaction approximation is employed. As a result, it is shown that the switching problem can be treated approximately as a simple two-state model, which immediately indicates that the switching obeys non-Poisson statistics.
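As a concrete toy version of the two-state reduction described above, the following Gillespie-style simulation counts inactive-to-active transitions and checks the Fano factor against the Poisson value of 1; the rates and observation window are arbitrary assumptions for illustration, not parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def count_activations(k_on, k_off, t_max):
    """Gillespie simulation of a two-state (inactive/active) switch;
    returns the number of inactive -> active transitions up to t_max."""
    t, state, n_on = 0.0, 0, 0
    while True:
        rate = k_on if state == 0 else k_off
        t += rng.exponential(1.0 / rate)
        if t > t_max:
            return n_on
        if state == 0:
            n_on += 1
        state = 1 - state

counts = np.array([count_activations(k_on=0.5, k_off=2.0, t_max=200.0)
                   for _ in range(2000)])
fano = counts.var() / counts.mean()   # equals 1 for a Poisson process
print(counts.mean(), fano)            # sub-Poissonian here (fano < 1)
```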
NASA Technical Reports Server (NTRS)
Hendricks, R. C.; Steinetz, B. M.; Zaretsky, E. V.; Athavale, M. M.; Przekwas, A. J.
2004-01-01
The issues and components supporting the engine power stream are reviewed. It is essential that companies pay close attention to engine sealing issues, particularly on the high-pressure spool or high-pressure pumps. Small changes in these systems are reflected throughout the entire engine. Although cavity, platform, and tip sealing are complex and have a significant effect on component and engine performance, computational tools (e.g., NASA-developed INDSEAL, SCISEAL, and ADPAC) are available to help guide the designer and the experimenter. Gas turbine engine and rocket engine externals must all function efficiently with a high degree of reliability in order for the engine to run but often receive little attention until they malfunction. Within the open literature statistically significant data for critical engine components are virtually nonexistent; the classic approach is deterministic. Studies show that variations with loading can have a significant effect on component performance and life. Without validation data they are just studies. These variations and deficits in statistical databases require immediate attention.
Non-parametric early seizure detection in an animal model of temporal lobe epilepsy
NASA Astrophysics Data System (ADS)
Talathi, Sachin S.; Hwang, Dong-Uk; Spano, Mark L.; Simonotto, Jennifer; Furman, Michael D.; Myers, Stephen M.; Winters, Jason T.; Ditto, William L.; Carney, Paul R.
2008-03-01
The performance of five non-parametric, univariate seizure detection schemes (embedding delay, Hurst scale, wavelet scale, nonlinear autocorrelation and variance energy) was evaluated as a function of the sampling rate of EEG recordings, the electrode types used for EEG acquisition, and the spatial location of the EEG electrodes, in order to determine the applicability of the measures in real-time closed-loop seizure intervention. The criteria chosen for evaluating the performance were high statistical robustness (as determined through the sensitivity and the specificity of a given measure in detecting a seizure) and the lag in seizure detection with respect to the seizure onset time (as determined by visual inspection of the EEG signal by a trained epileptologist). An optimality index was designed to evaluate the overall performance of each measure. For the EEG data recorded with a microwire electrode array at a sampling rate of 12 kHz, the wavelet scale measure exhibited the best overall performance, detecting seizures with a high optimality index value and high sensitivity and specificity.
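The paper defines its own optimality index; as a hedged illustration only, the sketch below computes the sensitivity and specificity ingredients from labeled epochs and combines them with a detection-lag penalty in one plausible, entirely hypothetical way:

```python
import numpy as np

def detection_stats(predicted, actual):
    """Sensitivity and specificity of a binary seizure detector over epochs."""
    p, a = np.asarray(predicted, bool), np.asarray(actual, bool)
    sens = (p & a).sum() / max(a.sum(), 1)
    spec = (~p & ~a).sum() / max((~a).sum(), 1)
    return sens, spec

# Hypothetical epoch labels (1 = seizure) and a hypothetical detection lag
actual    = np.array([0, 0, 1, 1, 0, 1, 0, 0, 1, 0])
predicted = np.array([0, 0, 1, 1, 0, 0, 0, 1, 1, 0])
sens, spec = detection_stats(predicted, actual)
lag_s = 4.0                                      # seconds after onset
optimality = 0.5 * (sens + spec) - 0.01 * lag_s  # illustrative combination only
print(sens, spec, optimality)
```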
Dynamic Encoding of Speech Sequence Probability in Human Temporal Cortex
Leonard, Matthew K.; Bouchard, Kristofer E.; Tang, Claire
2015-01-01
Sensory processing involves identification of stimulus features, but also integration with the surrounding sensory and cognitive context. Previous work in animals and humans has shown fine-scale sensitivity to context in the form of learned knowledge about the statistics of the sensory environment, including relative probabilities of discrete units in a stream of sequential auditory input. These statistics are a defining characteristic of one of the most important sequential signals humans encounter: speech. For speech, extensive exposure to a language tunes listeners to the statistics of sound sequences. To address how speech sequence statistics are neurally encoded, we used high-resolution direct cortical recordings from human lateral superior temporal cortex as subjects listened to words and nonwords with varying transition probabilities between sound segments. In addition to their sensitivity to acoustic features (including contextual features, such as coarticulation), we found that neural responses dynamically encoded the language-level probability of both preceding and upcoming speech sounds. Transition probability first negatively modulated neural responses, followed by positive modulation of neural responses, consistent with coordinated predictive and retrospective recognition processes, respectively. Furthermore, transition probability encoding was different for real English words compared with nonwords, providing evidence for online interactions with high-order linguistic knowledge. These results demonstrate that sensory processing of deeply learned stimuli involves integrating physical stimulus features with their contextual sequential structure. Despite not being consciously aware of phoneme sequence statistics, listeners use this information to process spoken input and to link low-level acoustic representations with linguistic information about word identity and meaning. PMID:25948269
Johnson, Gregory R.; Kangas, Joshua D.; Dovzhenko, Alexander; Trojok, Rüdiger; Voigt, Karsten; Majarian, Timothy D.; Palme, Klaus; Murphy, Robert F.
2017-01-01
Quantitative image analysis procedures are necessary for the automated discovery of effects of drug treatment in large collections of fluorescent micrographs. When compared to their mammalian counterparts, the effects of drug conditions on protein localization in plant species are poorly understood and underexplored. To investigate this relationship, we generated a large collection of images of single plant cells after various drug treatments. For this, protoplasts were isolated from six transgenic lines of A. thaliana expressing fluorescently tagged proteins. Nine drugs at three concentrations were applied to protoplast cultures followed by automated image acquisition. For image analysis, we developed a cell segmentation protocol for detecting drug effects using a Hough-transform based region of interest detector and a novel cross-channel texture feature descriptor. In order to determine treatment effects, we summarized differences between treated and untreated experiments with an L1 Cramér-von Mises statistic. The distribution of these statistics across all pairs of treated and untreated replicates was compared to the variation within control replicates to determine the statistical significance of observed effects. Using this pipeline, we report the dose dependent drug effects in the first high-content Arabidopsis thaliana drug screen of its kind. These results can function as a baseline for comparison to other protein organization modeling approaches in plant cells. PMID:28245335
Resolving phase stability in the Ti-O binary with first-principles statistical mechanics methods
NASA Astrophysics Data System (ADS)
Gunda, N. S. Harsha; Puchala, Brian; Van der Ven, Anton
2018-03-01
The Ti-O system consists of a multitude of stable and metastable oxides that are used in wide ranging applications. In this work we investigate phase stability in the Ti-O binary from first principles. We perform a systematic search for ground state structures as a function of oxygen concentration by considering oxygen-vacancy and/or titanium-vacancy orderings over four parent crystal structures: (i) hcp Ti, (ii) ω -Ti, (iii) rocksalt, and (iv) hcp oxygen containing interstitial titanium. We explore phase stability at finite temperature using cluster expansion Hamiltonians and Monte Carlo simulations. The calculations predict a high oxygen solubility in hcp Ti and the stability of suboxide phases that undergo order-disorder transitions upon heating. Vacancy ordered rocksalt phases are also predicted at low temperature that disorder to form an extended solid solution at high temperatures. Predicted stable and metastable phase diagrams are qualitatively consistent with experimental observations, however, important discrepancies are revealed between first-principles density functional theory predictions of phase stability and the current understanding of phase stability in this system.
Isotropic–Nematic Phase Transitions in Gravitational Systems. II. Higher Order Multipoles
NASA Astrophysics Data System (ADS)
Takács, Ádám; Kocsis, Bence
2018-04-01
The gravitational interaction among bodies orbiting in a spherical potential leads to the rapid relaxation of the orbital planes’ distribution, a process called vector resonant relaxation. We examine the statistical equilibrium of this process for a system of bodies with similar semimajor axes and eccentricities. We extend the previous model of Roupas et al. by accounting for the multipole moments beyond the quadrupole, which dominate the interaction for radially overlapping orbits. Nevertheless, we find no qualitative differences between the behavior of the system with respect to the model restricted to the quadrupole interaction. The equilibrium distribution resembles a counterrotating disk at low temperature and a spherical structure at high temperature. The system exhibits a first-order phase transition between the disk and the spherical phase in the canonical ensemble if the total angular momentum is below a critical value. We find that the phase transition erases the high-order multipoles, i.e., small-scale structure in angular momentum space, most efficiently. The system admits a maximum entropy and a maximum energy, which lead to the existence of negative temperature equilibria.
A two-parameter design storm for Mediterranean convective rainfall
NASA Astrophysics Data System (ADS)
García-Bartual, Rafael; Andrés-Doménech, Ignacio
2017-05-01
This research explores the feasibility of building effective design storms for extreme hydrological regimes, such as the rainfall regime of the east and south-east of the Iberian Peninsula, without employing intensity-duration-frequency (IDF) curves as a starting point. Nowadays, after decades of operation of automatic hydrological networks, there is an abundance of high-resolution rainfall data with reasonable statistical representation, which enables direct investigation of the temporal patterns and inner structures of rainfall events at a given geographic location, with the aim of establishing a statistical synthesis directly based on those observed patterns. The authors propose a temporal design storm defined in analytical terms through a two-parameter gamma-type function. The two parameters are estimated directly from 73 independent storms identified in high-temporal-resolution rainfall records from Valencia (Spain). All the relevant analytical properties derived from that function are developed in order to use this storm in real applications. In particular, in order to assign a probability (return period) to the design storm, an auxiliary variable combining maximum intensity and total cumulated rainfall is introduced. As a result, for a given return period, a set of three storms with different duration, depth and peak intensity is defined. The consistency of the results is verified by comparison with the classic alternating-blocks method based on an IDF curve for the study case mentioned above.
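A minimal sketch of a gamma-type storm profile of the general kind described above; the paper's exact analytic form and parameter values are not given here, so the shape and numbers below are assumptions for illustration:

```python
import numpy as np

def gamma_hyetograph(t, i_peak, t_peak):
    """Gamma-shaped rainfall pulse i(t) = i_peak * (t/t_peak) * exp(1 - t/t_peak),
    peaking at t = t_peak with intensity i_peak (illustrative form only)."""
    x = t / t_peak
    return i_peak * x * np.exp(1.0 - x)

t = np.linspace(0.0, 6.0, 121)                      # hours (hypothetical duration)
i = gamma_hyetograph(t, i_peak=60.0, t_peak=1.0)    # mm/h (hypothetical values)
# total cumulated rainfall depth (mm) by trapezoidal integration
depth = float(np.sum(0.5 * (i[1:] + i[:-1]) * np.diff(t)))
print(depth)
```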
Origin of weak lensing convergence peaks
NASA Astrophysics Data System (ADS)
Liu, Jia; Haiman, Zoltán
2016-08-01
Weak lensing convergence peaks are a promising tool to probe nonlinear structure evolution at late times, providing additional cosmological information beyond second-order statistics. Previous theoretical and observational studies have shown that the cosmological constraints on Ωm and σ8 are improved by a factor of up to ≈2 when peak counts and second-order statistics are combined, compared to using the latter alone. We study the origin of lensing peaks using observational data from the 154 deg² Canada-France-Hawaii Telescope Lensing Survey. We found that while high peaks (with height κ > 3.5σκ, where σκ is the rms of the convergence κ) are typically due to one single massive halo of ≈10¹⁵ M⊙, low peaks (κ ≲ σκ) are associated with constellations of 2-8 smaller halos (≲10¹³ M⊙). In addition, halos responsible for forming low peaks are found to be significantly offset from the line of sight towards the peak center (impact parameter ≳ their virial radii), compared with ≈0.25 virial radii for halos linked with high peaks, hinting that low peaks are more immune to baryonic processes whose impact is confined to the inner regions of the dark matter halos. Our findings are in good agreement with results from the simulation work by Yang et al. [Phys. Rev. D 84, 043529 (2011)].
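As a sketch of how convergence peaks of the kind analyzed above are typically identified (a generic local-maximum finder, not the survey pipeline; the smoothing scale and threshold are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def find_peaks(kappa, nsigma=3.5, smooth_px=2.0):
    """Return pixel coordinates of convergence peaks: local maxima of the
    smoothed kappa map exceeding nsigma times the map rms."""
    sm = gaussian_filter(kappa, smooth_px)
    local_max = sm == maximum_filter(sm, size=3)   # 3x3 neighborhood maxima
    high = sm > nsigma * sm.std()
    return np.argwhere(local_max & high)

rng = np.random.default_rng(1)
kappa = rng.normal(0.0, 0.02, (512, 512))   # toy noise-only map
print(len(find_peaks(kappa)))               # rare noise peaks above 3.5 sigma
```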
On Some Assumptions of the Null Hypothesis Statistical Testing
ERIC Educational Resources Information Center
Patriota, Alexandre Galvão
2017-01-01
Bayesian and classical statistical approaches are based on different types of logical principles. In order to avoid mistaken inferences and misguided interpretations, the practitioner must respect the inference rules embedded into each statistical method. Ignoring these principles leads to the paradoxical conclusions that the hypothesis…
APPLICATION OF STATISTICAL ENERGY ANALYSIS TO VIBRATIONS OF MULTI-PANEL STRUCTURES.
...cylindrical shell are compared with predictions obtained from statistical energy analysis. Generally good agreement is observed. The flow of mechanical... the coefficients of proportionality between power flow and average modal energy difference, which one must know in order to apply statistical energy analysis. No...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai, T; UT Southwestern Medical Center, Dallas, TX; Yan, H
2014-06-15
Purpose: To develop a 3D dictionary learning based statistical reconstruction algorithm on graphic processing units (GPU), to improve the quality of low-dose cone beam CT (CBCT) imaging with high efficiency. Methods: A 3D dictionary containing 256 small volumes (atoms) of 3x3x3 voxels was trained from a high quality volume image. During reconstruction, we utilized a Cholesky decomposition based orthogonal matching pursuit algorithm to find a sparse representation of each patch in the reconstructed image on this dictionary basis, in order to regularize the image quality. To accelerate the time-consuming sparse coding in the 3D case, we implemented our algorithm in a parallel fashion by taking advantage of the tremendous computational power of GPU. Evaluations are performed based on a head-neck patient case. FDK reconstruction with the full dataset of 364 projections is used as the reference. We compared the proposed 3D dictionary learning based method with a tight frame (TF) based one using a subset of 121 projections. The image qualities under different resolutions in the z-direction, with or without statistical weighting, are also studied. Results: Compared to the TF-based CBCT reconstruction, our experiments indicated that 3D dictionary learning based CBCT reconstruction is able to recover finer structures, to remove more streaking artifacts, and is less susceptible to blocky artifacts. It is also observed that the statistical reconstruction approach is sensitive to inconsistency between the forward and backward projection operations in parallel computing. Using a high spatial resolution along the z direction helps improve the algorithm's robustness. Conclusion: The 3D dictionary learning based CBCT reconstruction algorithm is able to sense the structural information while suppressing noise, and hence to achieve high quality reconstruction. The GPU realization of the whole algorithm offers a significant efficiency enhancement, making this algorithm more feasible for potential clinical application. A high z-resolution is preferred to stabilize statistical iterative reconstruction. This work was supported in part by NIH (1R01CA154747-01), NSFC (No. 61172163), Research Fund for the Doctoral Program of Higher Education of China (No. 20110201110011), and the China Scholarship Council.
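For reference, a plain NumPy version of orthogonal matching pursuit for one patch; the paper's implementation is Cholesky-based and GPU-parallel, so this least-squares variant is only a readable sketch, with hypothetical sizes (27-dimensional atoms for 3x3x3 patches, 256 atoms):

```python
import numpy as np

def omp(D, x, n_nonzero):
    """Orthogonal matching pursuit: sparse code of patch x on dictionary D
    (columns = unit-norm atoms). Greedily picks the atom most correlated
    with the residual, then re-fits all selected atoms by least squares."""
    residual, support = x.copy(), []
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    out = np.zeros(D.shape[1])
    out[support] = coef
    return out

rng = np.random.default_rng(0)
D = rng.normal(size=(27, 256))
D /= np.linalg.norm(D, axis=0)          # normalize atoms
x = rng.normal(size=27)                 # toy vectorized 3x3x3 patch
code = omp(D, x, n_nonzero=4)
print(np.count_nonzero(code))
```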
NASA Astrophysics Data System (ADS)
William, Peter
In this dissertation several two dimensional statistical systems exhibiting discrete Z(n) symmetries are studied. For this purpose a newly developed algorithm to compute the partition function of these models exactly is utilized. The zeros of the partition function are examined in order to obtain information about the observable quantities at the critical point, in the form of critical exponents of the order parameters which characterize phenomena at the critical point. The correlation length exponent is found to agree very well with values computed from strong coupling expansions for the mass gap and with Monte Carlo results. In Feynman's path integral formalism the partition function of a statistical system can be related to the vacuum expectation value of the time ordered product of the observable quantities of the corresponding field theoretic model. Hence a generalization of ordinary scale invariance, in the form of conformal invariance, is focussed upon. This principle is particularly applicable to two dimensional statistical models undergoing second order phase transitions at criticality. The conformal anomaly specifies the universality class to which these models belong. From an evaluation of the partition function, the free energy at criticality is computed in order to determine the conformal anomaly of these models. The conformal anomalies of all the models considered here are in good agreement with the predicted values.
Brillouin scattering-induced rogue waves in self-pulsing fiber lasers
Hanzard, Pierre-Henry; Talbi, Mohamed; Mallek, Djouher; Kellou, Abdelhamid; Leblond, Hervé; Sanchez, François; Godin, Thomas; Hideur, Ammar
2017-01-01
We report the experimental observation of extreme instabilities in a self-pulsing fiber laser under the influence of stimulated Brillouin scattering (SBS). Specifically, we observe temporally localized structures with high intensities that can be referred to as rogue events through their statistical behaviour with highly-skewed intensity distributions. The emergence of these SBS-induced rogue waves is attributed to the interplay between laser operation and resonant Stokes orders. As this behaviour is not accounted for by existing models, we also present numerical simulations showing that such instabilities can be observed in chaotic laser operation. This study opens up new possibilities towards harnessing extreme events in highly-dissipative systems through adapted laser cavity configurations. PMID:28374840
Detecting higher spin fields through statistical anisotropy in the CMB and galaxy power spectra
NASA Astrophysics Data System (ADS)
Bartolo, Nicola; Kehagias, Alex; Liguori, Michele; Riotto, Antonio; Shiraishi, Maresuke; Tansella, Vittorio
2018-01-01
Primordial inflation may represent the most powerful collider to test high-energy physics models. In this paper we study the impact on the inflationary power spectrum of the comoving curvature perturbation in the specific model where massive higher spin fields are rendered effectively massless during a de Sitter epoch through suitable couplings to the inflaton field. In particular, we show that such fields with spin s induce a distinctive statistically anisotropic signal on the power spectrum, in such a way that not only the usual g_{2M} statistical anisotropy coefficients, but also higher-order ones (i.e., g_{4M}, g_{6M}, …, g_{(2s-2)M} and g_{(2s)M}) are nonvanishing. We examine their imprints in the cosmic microwave background and galaxy power spectra. Our Fisher matrix forecasts indicate that the detectability of g_{LM} depends very weakly on L: all coefficients could be detected in the near future if their magnitudes are bigger than about 10⁻³.
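The g_{LM} coefficients above follow the standard statistical-anisotropy parametrization of the power spectrum, expanded in spherical harmonics of the wavevector direction (stated here as general background, not as this paper's specific result):

```latex
P(\mathbf{k}) \;=\; P_0(k)\left[\, 1 \;+\; \sum_{L>0}\sum_{M=-L}^{L} g_{LM}\, Y_{LM}(\hat{\mathbf{k}}) \,\right]
```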
Moorman, J. Randall; Delos, John B.; Flower, Abigail A.; Cao, Hanqing; Kovatchev, Boris P.; Richman, Joshua S.; Lake, Douglas E.
2014-01-01
We have applied principles of statistical signal processing and non-linear dynamics to analyze heart rate time series from premature newborn infants in order to assist in the early diagnosis of sepsis, a common and potentially deadly bacterial infection of the bloodstream. We began with the observation of reduced variability and transient decelerations in heart rate interval time series for hours up to days prior to clinical signs of illness. We find that measurements of standard deviation, sample asymmetry and sample entropy are highly related to imminent clinical illness. We developed multivariable statistical predictive models, and an interface to display the real-time results to clinicians. Using this approach, we have observed numerous cases in which incipient neonatal sepsis was diagnosed and treated without any clinical illness at all. This review focuses on the mathematical and statistical time series approaches used to detect these abnormal heart rate characteristics and present predictive monitoring information to the clinician. PMID:22026974
NASA Astrophysics Data System (ADS)
Liu, Deyang; An, Ping; Ma, Ran; Yang, Chao; Shen, Liquan; Li, Kai
2016-07-01
Three-dimensional (3-D) holoscopic imaging, also known as integral imaging, light field imaging, or plenoptic imaging, can provide natural and fatigue-free 3-D visualization. However, a large amount of data is required to represent the 3-D holoscopic content. Therefore, efficient coding schemes for this particular type of image are needed. A 3-D holoscopic image coding scheme with kernel-based minimum mean square error (MMSE) estimation is proposed. In the proposed scheme, the coding block is predicted by an MMSE estimator under statistical modeling. In order to obtain the signal statistical behavior, kernel density estimation (KDE) is utilized to estimate the probability density function of the statistical modeling. As bandwidth estimation (BE) is a key issue in the KDE problem, we also propose a BE method based on kernel trick. The experimental results demonstrate that the proposed scheme can achieve a better rate-distortion performance and a better visual rendering quality.
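A minimal sketch of the KDE ingredient described above, with Silverman's rule of thumb standing in for the paper's kernel-trick bandwidth estimator (which the abstract does not specify):

```python
import numpy as np

def gaussian_kde_pdf(samples, x, bandwidth):
    """1-D Gaussian kernel density estimate evaluated at the points x."""
    u = (x[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(samples) * bandwidth * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, 500)               # toy signal samples
h = 1.06 * samples.std() * len(samples) ** (-1 / 5)  # Silverman's rule of thumb
x = np.linspace(-4, 4, 81)
pdf = gaussian_kde_pdf(samples, x, h)             # estimated density on the grid
```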
Evolution of massive stars in very young clusters and associations
NASA Technical Reports Server (NTRS)
Stothers, R. B.
1985-01-01
Statistics concerning the stellar content of young galactic clusters and associations which show well defined main-sequence turnups have been analyzed in order to derive information about the evolution of high-mass stars. The analytical approach is semiempirical and uses natural spectroscopic groups of stars on the H-R diagram together with the stars' apparent magnitudes. The new approach does not depend on absolute luminosities and requires only the most basic elements of stellar evolution theory. The following conclusions are offered on the basis of the statistical analysis: (1) O-type main-sequence stars evolve to a spectral type of B1 during core hydrogen burning; (2) most O-type blue stragglers are newly formed massive stars burning core hydrogen; (3) supergiants lying redward of the main-sequence turnup are burning core helium; and (4) most Wolf-Rayet stars are burning core helium and originally had masses greater than 30-40 solar masses. The statistics of the natural spectroscopic groups of stars in young galactic clusters and associations are given in a table.
[Comment on] Statistical discrimination
NASA Astrophysics Data System (ADS)
Chinn, Douglas
In the December 8, 1981, issue of Eos, a news item reported the conclusion of a National Research Council study that sexual discrimination against women with Ph.D.'s exists in the field of geophysics. Basically, the item reported that even when allowances are made for motherhood the percentage of female Ph.D.'s holding high university and corporate positions is significantly lower than the percentage of male Ph.D.'s holding the same types of positions. The sexual discrimination conclusion, based only on these statistics, assumes that there are no basic psychological differences between men and women that might cause different populations in the employment group studied. Therefore, the reasoning goes, after taking into account possible effects from differences related to anatomy, such as women stopping their careers in order to bear and raise children, the statistical distributions of positions held by male and female Ph.D.'s ought to be very similar to one another. Any significant differences between the distributions must be caused primarily by sexual discrimination.
Statistical process control using optimized neural networks: a case study.
Addeh, Jalil; Ebrahimzadeh, Ata; Azarbad, Milad; Ranaee, Vahid
2014-09-01
The most common statistical process control (SPC) tools employed for monitoring process changes are control charts. A control chart demonstrates that the process has altered by generating an out-of-control signal. This study investigates the design of an accurate system for control chart pattern (CCP) recognition in two aspects. First, an efficient system is introduced that includes two main modules: a feature extraction module and a classifier module. In the feature extraction module, a proper set of shape features and statistical features is proposed as efficient characteristics of the patterns. In the classifier module, several neural networks, such as the multilayer perceptron, probabilistic neural network and radial basis function network, are investigated. Based on an experimental study, the best classifier is chosen in order to recognize the CCPs. Second, a hybrid heuristic recognition system based on the cuckoo optimization algorithm (COA) is introduced to improve the generalization performance of the classifier. The simulation results show that the proposed algorithm has high recognition accuracy. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
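To make the feature extraction module concrete, here is a small illustrative subset of shape/statistical features one could compute from a control-chart window; it does not reproduce the paper's exact feature set:

```python
import numpy as np

def ccp_features(window):
    """A few simple features of a control-chart window of the kind used to
    feed a CCP classifier: mean, spread, trend slope, and mean crossings."""
    w = np.asarray(window, float)
    x = np.arange(len(w))
    slope = np.polyfit(x, w, 1)[0]                     # linear trend indicator
    mean, std = w.mean(), w.std()
    n_cross = int(np.sum(np.diff(np.sign(w - mean)) != 0))  # mean crossings
    return np.array([mean, std, slope, n_cross])

window = [10.1, 10.3, 9.8, 10.6, 10.9, 11.2, 11.0, 11.5]   # hypothetical samples
print(ccp_features(window))
```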
A statistical approach to the brittle fracture of a multi-phase solid
NASA Technical Reports Server (NTRS)
Liu, W. K.; Lua, Y. I.; Belytschko, T.
1991-01-01
A stochastic damage model is proposed to quantify the inherent statistical distribution of the fracture toughness of a brittle, multi-phase solid. The model, based on the macrocrack-microcrack interaction, incorporates uncertainties in locations and orientations of microcracks. Due to the high concentration of microcracks near the macro-tip, a higher order analysis based on traction boundary integral equations is formulated first for an arbitrary array of cracks. The effects of uncertainties in locations and orientations of microcracks at a macro-tip are analyzed quantitatively by using the boundary integral equation method in conjunction with computer simulation of the random microcrack array. The short range interactions resulting from surrounding microcracks closest to the main crack tip are investigated. The effects of the microcrack density parameter are also explored in the present study. The validity of the present model is demonstrated by comparing its statistical output with the Neville distribution function, which gives correct fits to sets of experimental data from multi-phase solids.
Lod scores for gene mapping in the presence of marker map uncertainty.
Stringham, H M; Boehnke, M
2001-07-01
Multipoint lod scores are typically calculated for a grid of locus positions, moving the putative disease locus across a fixed map of genetic markers. Changing the order of a set of markers and/or the distances between the markers can make a substantial difference in the resulting lod score curve and the location and height of its maximum. The typical approach of using the best maximum likelihood marker map is not easily justified if other marker orders are nearly as likely and give substantially different lod score curves. To deal with this problem, we propose three weighted multipoint lod score statistics that make use of information from all plausible marker orders. In each of these statistics, the information conditional on a particular marker order is included in a weighted sum, with weight equal to the posterior probability of that order. We evaluate the type 1 error rate and power of these three statistics on the basis of results from simulated data, and compare these results to those obtained using the best maximum likelihood map and the map with the true marker order. We find that the lod score based on a weighted sum of maximum likelihoods improves on using only the best maximum likelihood map, having a type 1 error rate and power closest to that of using the true marker order in the simulation scenarios we considered. Copyright 2001 Wiley-Liss, Inc.
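The first of the three proposed statistics is described as a weighted sum of maximum likelihoods over plausible marker orders; a minimal sketch under that reading, with hypothetical lod scores and posterior order probabilities:

```python
import numpy as np

def weighted_lod(lods, posterior):
    """Weighted multipoint lod: log10 of the posterior-weighted sum of
    per-order maximum likelihood ratios (each lod is a log10 ratio)."""
    posterior = np.asarray(posterior, float)
    posterior = posterior / posterior.sum()   # normalize weights
    return float(np.log10(np.sum(posterior * 10.0 ** np.asarray(lods, float))))

# Hypothetical: three plausible marker orders with posterior probabilities
print(weighted_lod(lods=[3.1, 1.4, 0.2], posterior=[0.6, 0.3, 0.1]))  # ~2.88
```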
Education Statistics Quarterly. Volume 6, Issue 3, 2004. NCES 2005-612
ERIC Educational Resources Information Center
National Center for Education Statistics, 2005
2005-01-01
The National Center for Education Statistics (NCES) fulfills a congressional mandate to collect and report "statistics and information showing the condition and progress of education in the United States and other nations in order to promote and accelerate the improvement of American education." The "Quarterly" offers a…
The Research of Multiple Attenuation Based on Feedback Iteration and Independent Component Analysis
NASA Astrophysics Data System (ADS)
Xu, X.; Tong, S.; Wang, L.
2017-12-01
Multiple suppression is a difficult problem in seismic data processing. The traditional technology for multiple attenuation is based on the principle of minimum output energy of the seismic signal; this criterion relies on second-order statistics and cannot achieve multiple attenuation when the primaries and multiples are non-orthogonal. In order to solve this problem, we combine a feedback iteration method based on the wave equation with an improved independent component analysis (ICA) based on high-order statistics to suppress the multiples. We first use the iterative feedback method to predict the free-surface multiples of each order. Then, in order to match the predicted multiples to the true multiples in amplitude and phase, we design an expanded pseudo-multichannel matching filter to obtain a more accurate matching result. Finally, we apply an improved FastICA algorithm, based on the maximum non-Gaussianity criterion for the output signal, to the matched multiples and obtain a better separation of the primaries and the multiples. The advantage of our method is that no prior information is needed for the multiple prediction, and a better separation result is achieved. The method has been applied to several synthetic datasets generated by the finite-difference modeling technique and to the Sigsbee2B model multiple data; the primaries and multiples are non-orthogonal in these models. The experiments show that after three to four iterations we obtain good multiple predictions. Using our matching method and FastICA adaptive multiple subtraction, we can not only effectively preserve the primary energy in the seismic records, but also effectively suppress the free-surface multiples, especially the multiples related to the middle and deep areas.
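A toy demonstration of the ICA step using scikit-learn's standard FastICA; the paper's improved variant and matching filter are not public, so the mixed "traces" below are synthetic assumptions, and ICA recovers the sources only up to sign, scale, and order:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2000)
primary = np.sign(np.sin(7 * np.pi * t)) * np.exp(-3 * t)  # toy non-Gaussian source
multiple = np.sin(40 * np.pi * t) ** 3                     # toy "multiple" source

# Two observed traces, each a different mixture of primary and multiple
X = np.c_[primary + 0.6 * multiple, 0.4 * primary + multiple]
sources = FastICA(n_components=2, random_state=0).fit_transform(X)
print(sources.shape)   # (2000, 2): separated components
```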
Cyclostratigraphy, sequence stratigraphy and organic matter accumulation mechanism
NASA Astrophysics Data System (ADS)
Cong, F.; Li, J.
2016-12-01
The first member of the Maokou Formation of the Sichuan basin is composed of well preserved carbonate ramp couplets of limestone and marlstone/shale. It is one of the potential shale gas source rocks and is suitable for time-series analysis. We conducted time-series analysis to identify high-frequency sequences, reconstruct the high-resolution sedimentation rate, estimate detailed primary productivity for the first time in the study intervals, and discuss the organic matter accumulation mechanism of the source rock under a sequence stratigraphic framework. Using the theory of cyclostratigraphy and sequence stratigraphy, the high-frequency sequences of one outcrop profile and one drilling well are identified. Two third-order sequences and eight fourth-order sequences are distinguished on the outcrop profile based on cycle stacking patterns. For the drilling well, the sequence boundaries and four system tracts are distinguished by "integrated prediction error filter analysis" (INPEFA) of gamma-ray logging data, and eight fourth-order sequences are identified by the 405-ka long eccentricity curve in the depth domain, which is quantified and filtered by integrated analysis of MTM spectral analysis, evolutive harmonic analysis (EHA), evolutive average spectral misfit (eASM) and band-pass filtering. This suggests that high-frequency sequences correlate well with Milankovitch orbital signals recorded in sediments, and that cyclostratigraphic theory is applicable to dividing high-frequency (fourth- to sixth-order) sequence stratigraphy. A high-resolution sedimentation rate is reconstructed through the study interval by tracking the highly statistically significant short eccentricity component (123 ka) revealed by EHA. Based on the sedimentation rate and measured TOC and density data, the burial flux, delivery flux and primary productivity of organic carbon were estimated. By integrating redox proxies, we can discuss the controls of primary production and preservation on organic matter accumulation under the high-resolution sequence stratigraphic framework. Results show that the high average organic carbon contents in the study interval are mainly attributed to high primary production. The results also show a good correlation between high organic carbon accumulation and intervals of transgression.
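A minimal zero-phase FFT band-pass of the sort used to isolate the long-eccentricity component; the passband edges and the synthetic series below are assumptions for illustration, not the paper's processing parameters:

```python
import numpy as np

def bandpass(signal, dt, f_lo, f_hi):
    """Zero-phase FFT band-pass: keep components with f_lo <= f <= f_hi.
    dt is the sample spacing (kyr per sample here)."""
    f = np.fft.rfftfreq(len(signal), dt)
    spec = np.fft.rfft(signal - np.mean(signal))
    spec[(f < f_lo) | (f > f_hi)] = 0.0
    return np.fft.irfft(spec, n=len(signal))

# Toy series: 405 kyr and 123 kyr eccentricity terms plus noise
t = np.arange(0.0, 8000.0, 5.0)   # kyr
x = (np.sin(2 * np.pi * t / 405) + 0.5 * np.sin(2 * np.pi * t / 123)
     + np.random.default_rng(0).normal(0, 0.3, t.size))
ecc405 = bandpass(x, dt=5.0, f_lo=1 / 500, f_hi=1 / 330)  # isolate ~405 kyr band
```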
Emergent order in the kagome Ising magnet Dy3Mg2Sb3O14
Paddison, Joseph A. M.; Ong, Harapan S.; Hamp, James O.; Mukherjee, Paromita; Bai, Xiaojian; Tucker, Matthew G.; Butch, Nicholas P.; Castelnovo, Claudio; Mourigal, Martin; Dutton, S. E.
2016-01-01
The Ising model—in which degrees of freedom (spins) are binary valued (up/down)—is a cornerstone of statistical physics that shows rich behaviour when spins occupy a highly frustrated lattice such as kagome. Here we show that the layered Ising magnet Dy3Mg2Sb3O14 hosts an emergent order predicted theoretically for individual kagome layers of in-plane Ising spins. Neutron-scattering and bulk thermomagnetic measurements reveal a phase transition at ∼0.3 K from a disordered spin-ice-like regime to an emergent charge ordered state, in which emergent magnetic charge degrees of freedom exhibit three-dimensional order while spins remain partially disordered. Monte Carlo simulations show that an interplay of inter-layer interactions, spin canting and chemical disorder stabilizes this state. Our results establish Dy3Mg2Sb3O14 as a tuneable system to study interacting emergent charges arising from kagome Ising frustration. PMID:27996012
Formation kinetics of gemfibrozil chlorination reaction products: analysis and application.
Krkosek, Wendy H; Peldszus, Sigrid; Huck, Peter M; Gagnon, Graham A
2014-07-01
Aqueous chlorination kinetics of the lipid regulator gemfibrozil and the formation of reaction products were investigated in deionized water over the pH range 3 to 9, and in two wastewater matrices. Chlorine oxidation of gemfibrozil was found to be highly dependent on pH. No statistically significant degradation of gemfibrozil was observed at pH values greater than 7. Gemfibrozil oxidation between pH 4 and 7 was best represented by first order kinetics. At pH 3, formation of three reaction products was observed. 4′-ClGem was the only reaction product formed from pH 4-7 and was modeled with zero order kinetics. Chlorine oxidation of gemfibrozil in two wastewater matrices followed second order kinetics. 4′-ClGem was only formed in wastewater with pH below 7. Deionized water rate kinetic models were applied to two wastewater effluents with gemfibrozil concentrations reported in the literature in order to calculate potential mass loading rates of 4′-ClGem to the receiving water.
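The rate laws named above are standard; as a small worked sketch (the rate constants and time scale are hypothetical, not fitted values from the study):

```python
import numpy as np

def first_order_decay(c0, k, t):
    """First-order kinetics: C(t) = C0 * exp(-k t)."""
    return c0 * np.exp(-k * t)

def zero_order_product(k0, t):
    """Zero-order formation: P(t) = k0 * t (while the parent remains)."""
    return k0 * t

t = np.linspace(0.0, 60.0, 13)            # min (hypothetical time scale)
gem = first_order_decay(1.0, 0.05, t)     # normalized parent concentration
prod = zero_order_product(0.002, t)       # normalized product concentration
half_life = np.log(2) / 0.05              # ~13.9 min for the assumed k
print(gem[-1], prod[-1], half_life)
```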
Rectifying full-counting statistics in a spin Seebeck engine
NASA Astrophysics Data System (ADS)
Tang, Gaomin; Chen, Xiaobin; Ren, Jie; Wang, Jian
2018-02-01
In terms of the nonequilibrium Green's function framework, we formulate the full-counting statistics of conjugate thermal spin transport in a spin Seebeck engine, which is made by a metal-ferromagnet insulator interface driven by a temperature bias. We obtain general expressions for the scaled cumulant generating functions of both heat and spin currents that satisfy special fluctuation symmetry relations, and demonstrate intriguing properties, such as rectification and negative differential effects of high-order fluctuations of the thermally excited spin current, maximum output spin power, and efficiency. The transport and noise depend on the strongly fluctuating electron density of states at the interface. The results are relevant for designing an efficient spin Seebeck engine and can broaden our view of nonequilibrium thermodynamics and nonlinear phenomena in quantum transport systems.
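For background, the scaled cumulant generating function referred to above is conventionally defined as follows, with the cumulants of a transferred quantity Q (heat or spin) obtained by differentiation at zero counting field (a general definition, not the paper's specific result):

```latex
S(\chi) \;=\; \lim_{t\to\infty} \frac{1}{t}\,\ln\!\left\langle e^{\,i\chi Q(t)} \right\rangle,
\qquad
\langle\!\langle Q^{n} \rangle\!\rangle \;=\; \left.\frac{\partial^{n} S(\chi)}{\partial (i\chi)^{n}}\right|_{\chi=0}
```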
High-statistics measurement of the η → 3π⁰ decay at the Mainz Microtron
Prakhov, S.; Abt, S.; Achenbach, P.; ...
2018-06-07
Here, the largest statistics to date, 7 × 10⁶ η → 3π⁰ decays, based on 6.2 × 10⁷ η mesons produced in the γp → ηp reaction, has been accumulated by the A2 Collaboration at the Mainz Microtron, MAMI. It allowed a detailed study of the η → 3π⁰ dynamics beyond its conventional parametrization with just the quadratic slope parameter α and enabled, for the first time, a measurement of the second-order term and a better understanding of the cusp structure in the neutral decay. The present data are also compared to recent theoretical calculations that predict a nonlinear dependence along the quadratic distance from the Dalitz-plot center.
NASA Astrophysics Data System (ADS)
Novikov, E. A.
1990-05-01
The influence of intermittency on turbulent diffusion is expressed in terms of the statistics of the dissipation field. The high-order moments of relative diffusion are obtained by using the concept of scale similarity of the breakdown coefficients (bdc). The method of bdc is useful for obtaining new models and general results, which can then be expressed in terms of multifractals. In particular, the concavity and other properties of the spectral codimension are proved. Special attention is paid to the logarithmically periodic modulations. A parametrization of small-scale intermittent turbulence, which can be used for large-eddy simulation, is presented. The effect of molecular viscosity is taken into account in the spirit of the renormalization group, but without spectral series, ε expansion, or fictitious random forces.
A Correlation between the Higgs Mass and Dark Matter
Hertzberg, Mark P.
2017-07-27
Depending on the value of the Higgs mass, the Standard Model acquires an unstable region at large Higgs field values due to RG running of couplings, which we evaluate at 2-loop order. For currently favored values of the Higgs mass, this renders the electroweak vacuum only metastable with a long lifetime. We argue on statistical grounds that the Higgs field would be highly unlikely to begin in the small field metastable region in the early universe, and thus some new physics should enter in the energy range of order of, or lower than, the instability scale to remove the large field unstable region. We assume that Peccei-Quinn (PQ) dynamics enters to solve the strong CP problem and, for a PQ-scale in this energy range, may also remove the unstable region. We allow the PQ-scale to scan and argue, again on statistical grounds, that its value in our universe should be of order of the instability scale, rather than (significantly) lower. Since the Higgs mass determines the instability scale, which is argued to set the PQ-scale, and since the PQ-scale determines the axion properties, including its dark matter abundance, we are led to a correlation between the Higgs mass and the abundance of dark matter. We thus find the correlation to be in good agreement with current data.
Syed, Shahbaz; Wang, Dongmei; Goulard, Debbie; Rich, Tom; Innes, Grant; Lang, Eddy
2013-07-05
Computerized physician order entry (CPOE) systems are designed to increase safety and improve quality of care; however, their impact on efficiency in the ED has not yet been validated. This study examined the impact of CPOE on process times for medication delivery, laboratory utilization and diagnostic imaging in the early, late and control phases of a regional ED-CPOE implementation. The setting was three tertiary care hospitals serving a population in excess of 1 million inhabitants that initiated the same CPOE system during the same 3-week time window. Patients were stratified into three groups: Control, Early CPOE and Late CPOE (n = 200 patients per group per hospital site). Eligible patients consisted of a stratified (40% CTAS 2 and 60% CTAS 3) random sample of all patients seen 30 days preceding CPOE implementation (Control), 30 days immediately after CPOE implementation (Early CPOE) and 5-6 months after CPOE implementation (Late CPOE). Primary outcomes were times (TT) from physician assignment (MD-sign) to MD-order completion. ANOVA and t-tests were employed for statistical analysis. In comparison with control, TT 1st MD-ordered medication decreased in both the Early and Late CPOE groups (102.6 min Control, 62.8 Early and 65.7 Late, p < 0.001). TT 1st MD-ordered laboratory results increased in both the Early and Late CPOE groups compared to Control (76.4, 85.3 and 73.8 min, respectively, p < 0.001). TT 1st X-ray also significantly increased in both the Early and Late CPOE groups (80.4 and 84.8 min, respectively, compared to 68.1, p < 0.001). Given that CT and ultrasound imaging inherently take more time, these imaging studies were not included, and only X-ray was examined. No statistical difference was found for TT discharge or consult request. Regional implementation of CPOE afforded important efficiencies in time to medication delivery for high-acuity ED patients. Increased times observed for laboratory and radiology results may reflect system issues outside of the emergency department and, as a result of potential confounding, may not be a reflection of CPOE impact.
Probing the neutrino mass ordering with KM3NeT-ORCA: analysis and perspectives
NASA Astrophysics Data System (ADS)
Capozzi, Francesco; Lisi, Eligio; Marrone, Antonio
2018-02-01
The discrimination of the two possible options for the neutrino mass ordering (normal or inverted) is a major goal for current and future neutrino oscillation experiments. Such a goal might be reached by observing high-statistics energy-angle spectra of events induced by atmospheric neutrinos and antineutrinos propagating in the Earth matter. Large volume water-Cherenkov detectors envisaged to this purpose include the so-called KM3NeT-ORCA project (in seawater) and the IceCube-PINGU project (in ice). Building upon a previous work focused on PINGU, we study in detail the effects of various systematic uncertainties on the ORCA sensitivity to the mass ordering, for the reference configuration with 9 m vertical spacing. We point out the need to control spectral shape uncertainties at the percent level, the effects of better priors on the θ23 mixing parameter, and the benefits of an improved flavor identification in reconstructed ORCA events.
An Automated Energy Detection Algorithm Based on Consecutive Mean Excision
2018-01-01
...present in the RF spectrum. Subject terms: RF spectrum, detection threshold algorithm, consecutive mean excision, rank order filter, statistical... Cited reference: an automated energy detection algorithm based on morphological filter processing with a semi-disk structure. Adelphi (MD): Army Research Laboratory (US); 2018 Jan.
ARK: Aggregation of Reads by K-Means for Estimation of Bacterial Community Composition.
Koslicki, David; Chatterjee, Saikat; Shahrivar, Damon; Walker, Alan W; Francis, Suzanna C; Fraser, Louise J; Vehkaperä, Mikko; Lan, Yueheng; Corander, Jukka
2015-01-01
Estimation of bacterial community composition from high-throughput sequenced 16S rRNA gene amplicons is a key task in microbial ecology. Since the sequence data from each sample typically consist of a large number of reads and are adversely impacted by different levels of biological and technical noise, accurate analysis of such large datasets is challenging. There has been a recent surge of interest in using compressed sensing inspired and convex-optimization based methods to solve the estimation problem for bacterial community composition. These methods typically rely on summarizing the sequence data by frequencies of low-order k-mers and matching this information statistically with a taxonomically structured database. Here we show that the accuracy of the resulting community composition estimates can be substantially improved by aggregating the reads from a sample with an unsupervised machine learning approach prior to the estimation phase. The aggregation of reads is a pre-processing approach where we use a standard K-means clustering algorithm that partitions a large set of reads into subsets with reasonable computational cost, to provide several vectors of first order statistics instead of only a single statistical summarization in terms of k-mer frequencies. The output of the clustering is then processed further to obtain the final estimate for each sample. The resulting method is called Aggregation of Reads by K-means (ARK), and it is based on a statistical argument via mixture density formulation. ARK is found to improve the fidelity and robustness of several recently introduced methods, with only a modest increase in computational complexity. An open source, platform-independent implementation of the method in the Julia programming language is freely available at https://github.com/dkoslicki/ARK. A Matlab implementation is available at http://www.ee.kth.se/ctsoftware.
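A compact sketch of the pre-processing step ARK describes: K-means over per-read k-mer frequency vectors, returning one summary vector per cluster plus cluster weights (the cluster count and toy data are assumptions; per the paper, the downstream composition estimator is then run per cluster and the results combined by weight):

```python
import numpy as np
from sklearn.cluster import KMeans

def ark_aggregate(kmer_freqs, n_clusters=8):
    """Partition reads (rows = per-read k-mer frequency vectors) with K-means;
    return one mean k-mer frequency vector per cluster and the cluster weights."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(kmer_freqs)
    centers = km.cluster_centers_
    weights = np.bincount(km.labels_, minlength=n_clusters) / len(km.labels_)
    return centers, weights

rng = np.random.default_rng(0)
reads = rng.dirichlet(np.ones(64), size=1000)   # toy 3-mer frequency vectors
centers, weights = ark_aggregate(reads)
print(centers.shape, weights.sum())             # (8, 64), 1.0
```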
Padmanabhan, Prema; Mrochen, Michael; Basuthkar, Subam; Viswanathan, Deepa; Joseph, Roy
2008-03-01
To compare the outcomes of wavefront-guided and wavefront-optimized treatment in fellow eyes of patients having laser in situ keratomileusis (LASIK) for myopia. Medical and Vision Research Foundation, Tamil Nadu, India. This prospective comparative study comprised 27 patients who had wavefront-guided LASIK in 1 eye and wavefront-optimized LASIK in the fellow eye. The Hansatome (Bausch & Lomb) was used to create a superior-hinged flap, and the Allegretto laser (WaveLight Laser Technologie AG) was used for photoablation. The Allegretto wave analyzer was used to measure ocular wavefront aberrations and the Functional Acuity Contrast Test chart, to measure contrast sensitivity before and 1 month after LASIK. The refractive and visual outcomes and the changes in aberrations and contrast sensitivity were compared between the 2 treatment modalities. One month postoperatively, 92% of eyes in the wavefront-guided group and 85% in the wavefront-optimized group had uncorrected visual acuity of 20/20 or better; 93% and 89%, respectively, had a postoperative spherical equivalent refraction of +/-0.50 diopter. The differences between groups were not statistically significant. Wavefront-guided LASIK induced less change in 18 of 22 higher-order Zernike terms than wavefront-optimized LASIK, with the change in positive spherical aberration the only statistically significant one (P = .01). Contrast sensitivity improved at the low and middle spatial frequencies (not statistically significant) and worsened significantly at high spatial frequencies after wavefront-guided LASIK; there was a statistically significant worsening at all spatial frequencies after wavefront-optimized LASIK. Although both wavefront-guided and wavefront-optimized LASIK gave excellent refractive correction results, the former induced fewer higher-order aberrations and was associated with better contrast sensitivity.
Microcomputer-Based Acquisitions.
ERIC Educational Resources Information Center
Desmarais, Norman
1986-01-01
This discussion of three automated acquisitions systems--Bib-Base/Acq, The Book Trak Ordering System, and Card Datalog Acquisitions Module--covers searching and updating, editing, acquisitions functions and statistics, purchase orders and order file, budgeting and accounts maintenance, defining parameters, documentation, security, printing, and…
Rosenbaum, Paul R
2016-03-01
A common practice with ordered doses of treatment and ordered responses, perhaps recorded in a contingency table with ordered rows and columns, is to cut or remove a cross from the table, leaving the outer corners--that is, the high-versus-low dose, high-versus-low response corners--and from these corners to compute a risk or odds ratio. This little-remarked but common practice seems to be motivated by the oldest and most familiar method of sensitivity analysis in observational studies, proposed by Cornfield et al. (1959), which says that to explain a population risk ratio purely as bias from an unobserved binary covariate, the prevalence ratio of the covariate must exceed the risk ratio. Quite often, the largest risk ratio, hence the one least sensitive to bias by this standard, is derived from the corners of the ordered table with the central cross removed. Obviously, the corners use only a portion of the data, so a focus on the corners has consequences for the standard error as well as for bias, but sampling variability was not a consideration in this early and familiar form of sensitivity analysis, where point estimates replaced population parameters. Here, this cross-cut analysis is examined with the aid of design sensitivity and the power of a sensitivity analysis. © 2015, The International Biometric Society.
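To make the practice concrete, here is the corner calculation on a hypothetical ordered 4x4 table (the numbers are invented, not from the paper): the central cross is cut, and an odds ratio is formed from the four outer corners.

```python
import numpy as np

# Hypothetical table: ordered dose rows (low -> high), ordered response
# columns (low -> high).
table = np.array([[40, 25, 15,  5],
                  [30, 28, 20, 10],
                  [20, 25, 28, 22],
                  [ 8, 15, 27, 45]])

# "Cutting the cross": drop the middle rows/columns, keep the outer corners.
a = table[0, 0]     # low dose, low response
b = table[0, -1]    # low dose, high response
c = table[-1, 0]    # high dose, low response
d = table[-1, -1]   # high dose, high response

odds_ratio = (a * d) / (b * c)   # high-vs-low dose, high-vs-low response
print(f"corner odds ratio: {odds_ratio:.2f}")
```

The corners use only 4 of the 16 cells, which is exactly the variance-versus-bias trade-off the abstract examines.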
Effect of revised high-heeled shoes on foot pressure and static balance during standing.
Bae, Young-Hyeon; Ko, Mansoo; Park, Young-Soul; Lee, Suk-Min
2015-04-01
[Purpose] The purpose of this study was to investigate the effects of revised high-heeled shoes on the foot pressure ratio and static balance during standing. [Subjects and Methods] A single-subject design was used, with 15 healthy women wearing revised high-heeled shoes and general high-heeled shoes in random order. The foot pressure ratio and static balance scores during standing were measured using a SpaceBalance 3D system. [Results] Forefoot and rearfoot pressures were significantly different between the 2 types of high-heeled shoes. Under the 3 conditions tested, the static balance score was higher for the revised high-heeled shoes than for the general high-heeled shoes, but this difference was not statistically significant. [Conclusion] Revised high-heeled shoes are preferable to general high-heeled shoes, as they result in normalization of foot pressure and a positive effect on static balance.
Higher-order cumulants and spectral kurtosis for early detection of subterranean termites
NASA Astrophysics Data System (ADS)
de la Rosa, Juan José González; Moreno Muñoz, Antonio
2008-02-01
This paper deals with termite detection in non-favorable SNR scenarios via signal processing using higher-order statistics. The results could be extrapolated to all impulse-like insect emissions; the setting is non-destructive termite detection. Fourth-order cumulants in the time and frequency domains enhance the detection and complete the characterization of termite emissions, which are non-Gaussian in essence. Sliding higher-order cumulants offer distinctive time instances, as a complement to the sliding variance, which only reveals power excesses in the signal, even for low-amplitude impulses. The spectral kurtosis reveals non-Gaussian characteristics (the peakedness of the probability density function) associated with these non-stationary measurements, especially in the near-ultrasound frequency band. Well-proven estimators have been used to compute the higher-order statistics. The novel findings are shown via graphical examples.
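A sliding fourth-order cumulant of the kind described can be sketched in a few lines; the window length and the synthetic impulse below are illustrative, not taken from the paper.

```python
import numpy as np

def sliding_kurtosis(x, win=256):
    """Sliding excess kurtosis (sample 4th-order cumulant normalized by
    variance^2). Gaussian noise gives ~0; impulse-like bursts spike."""
    out = np.full(len(x), np.nan)
    for i in range(len(x) - win + 1):
        w = x[i:i + win]
        m = w.mean()
        s2 = ((w - m) ** 2).mean()
        c4 = ((w - m) ** 4).mean() - 3 * s2 ** 2   # 4th-order cumulant
        out[i + win // 2] = c4 / s2 ** 2
    return out

# Low-SNR test signal: Gaussian background plus a weak impulse-like emission.
rng = np.random.default_rng(1)
x = rng.normal(0, 1.0, 20000)
x[10000:10040] += 2.5 * np.exp(-np.arange(40) / 8.0) * rng.normal(0, 1, 40)
k = sliding_kurtosis(x)
print("peak excess kurtosis near impulse:", np.nanmax(k))
```

Unlike the sliding variance, the normalized fourth-order cumulant stays near zero over the Gaussian background and rises at the burst even when the added power is small.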
Liu, Hongcheng; Yao, Tao; Li, Runze; Ye, Yinyu
2017-11-01
This paper concerns the folded concave penalized sparse linear regression (FCPSLR), a class of popular sparse recovery methods. Although FCPSLR yields desirable recovery performance when solved globally, computing a global solution is NP-complete. Despite some existing statistical performance analyses on local minimizers or on specific FCPSLR-based learning algorithms, it remains an open question whether local solutions that are known to admit fully polynomial-time approximation schemes (FPTAS) may already be sufficient to ensure the statistical performance, and whether that statistical performance can be non-contingent on the specific design of the computing procedure. To address these questions, this paper presents the following threefold results: (i) Any local solution (stationary point) is a sparse estimator, under some conditions on the parameters of the folded concave penalties. (ii) Perhaps more importantly, any local solution satisfying a significant subspace second-order necessary condition (S3ONC), which is weaker than the second-order KKT condition, yields a bounded error in approximating the true parameter with high probability. In addition, if the minimal signal strength is sufficient, the S3ONC solution likely recovers the oracle solution. This result also explicates that the goal of improving the statistical performance is consistent with the optimization criterion of minimizing the suboptimality gap in solving the non-convex programming formulation of FCPSLR. (iii) We apply (ii) to the special case of FCPSLR with minimax concave penalty (MCP) and show that under the restricted eigenvalue condition, any S3ONC solution with a better objective value than the Lasso solution entails the strong oracle property. In addition, such a solution generates a model error (ME) comparable to the optimal but exponential-time sparse estimator given a sufficient sample size, while the worst-case ME is comparable to the Lasso in general. Furthermore, the S3ONC is guaranteed to admit an FPTAS.
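For reference, the MCP named in (iii) and its one-dimensional proximal (firm-thresholding) operator take a compact closed form; this is the generic textbook definition, not code from the paper.

```python
import numpy as np

def mcp(theta, lam, gamma):
    """Minimax concave penalty: quadratic taper up to gamma*lam,
    constant beyond it."""
    t = np.abs(theta)
    return np.where(t <= gamma * lam,
                    lam * t - t ** 2 / (2 * gamma),
                    0.5 * gamma * lam ** 2)

def mcp_prox(z, lam, gamma):
    """Proximal operator of MCP (requires gamma > 1): firm thresholding.
    Soft-thresholds and rescales inside the taper, leaves large z intact."""
    t = np.abs(z)
    soft = np.sign(z) * np.maximum(t - lam, 0.0)
    inner = soft / (1.0 - 1.0 / gamma)
    return np.where(t <= gamma * lam, inner, z)
```

The unbiasedness of MCP for large coefficients is visible directly: beyond gamma*lam the proximal map is the identity, unlike the Lasso's uniform shrinkage.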
Opportunistic Beamforming with Wireless Powered 1-bit Feedback Through Rectenna Array
NASA Astrophysics Data System (ADS)
Krikidis, Ioannis
2015-11-01
This letter deals with the opportunistic beamforming (OBF) scheme for multi-antenna downlink with spatial randomness. In contrast to conventional OBF, the terminals return only 1-bit feedback, which is powered by wireless power transfer through a rectenna array. We study two fundamental topologies for the combination of the rectenna elements: the direct-current combiner and the radio-frequency combiner. The beam outage probability is derived in closed form for both combination schemes by using high-order statistics and stochastic geometry.
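The flavor of the outage analysis can be checked numerically. A hedged Monte Carlo sketch, with Rayleigh-faded beam gains, a 1-bit feedback threshold, and scheduling of a random terminal among those reporting 1 (all parameters invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, thr_feedback, thr_outage, trials = 20, 1.0, 0.5, 100_000

outages = 0
for _ in range(trials):
    snr = rng.exponential(1.0, n_users)        # Rayleigh fading -> exp. SNR
    ones = np.flatnonzero(snr > thr_feedback)  # users whose 1-bit report is 1
    # Schedule a random user among those reporting 1, else any random user.
    user = rng.choice(ones) if ones.size else rng.integers(n_users)
    outages += snr[user] < thr_outage
print("estimated beam outage probability:", outages / trials)
```

The closed-form result in the letter replaces this simulation with order statistics of the reporting set plus the spatial point-process model.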
A Subband Coding Method for HDTV
NASA Technical Reports Server (NTRS)
Chung, Wilson; Kossentini, Faouzi; Smith, Mark J. T.
1995-01-01
This paper introduces a new HDTV coder based on motion compensation, subband coding, and high order conditional entropy coding. The proposed coder exploits the temporal and spatial statistical dependencies inherent in the HDTV signal by using intra- and inter-subband conditioning for coding both the motion coordinates and the residual signal. The new framework provides an easy way to control the system complexity and performance, and inherently supports multiresolution transmission. Experimental results show that the coder outperforms MPEG-2, while still maintaining relatively low complexity.
Statistical Mechanics and Dynamics of the Outer Solar System.I. The Jupiter/Saturn Zone
NASA Technical Reports Server (NTRS)
Grazier, K. R.; Newman, W. I.; Kaula, W. M.; Hyman, J. M.
1996-01-01
We report on numerical simulations designed to understand how the solar system evolved through a winnowing of planetesimals accreted from the early solar nebula. This sorting process is driven by energy and angular momentum and continues to the present day. We reconsider the existence and importance of stable niches in the Jupiter/Saturn Zone using greatly improved numerical techniques based on high-order optimized multi-step integration schemes coupled to roundoff error minimizing methods.
Statistical Research of Investment Development of Russian Regions
ERIC Educational Resources Information Center
Burtseva, Tatiana A.; Aleshnikova, Vera I.; Dubovik, Mayya V.; Naidenkova, Ksenya V.; Kovalchuk, Nadezda B.; Repetskaya, Natalia V.; Kuzmina, Oksana G.; Surkov, Anton A.; Bershadskaya, Olga I.; Smirennikova, Anna V.
2016-01-01
This article is concerned with substantiating procedures that ensure the implementation of statistical research and monitoring of the investment development of Russian regions, pertinent to the modern development of state statistics. The aim of the study is to develop the methodological framework in order to estimate…
American Automobile and Light Truck Statistics Update
ERIC Educational Resources Information Center
Feldman, Bernard J.
2014-01-01
Given that transportation is an essential topic in any Physics and Society or Energy course, it is necessary to have useful statistics on transportation in order to have a reasoned discussion on this topic. And a major component of the transportation picture is the automobile. This paper presents updated transportation statistics for American…
Education Statistics Quarterly. Volume 5, Issue 3, 2003. NCES 2005-609
ERIC Educational Resources Information Center
National Center for Education Statistics, 2004
2004-01-01
The National Center for Education Statistics (NCES) fulfills a congressional mandate to collect and report "statistics and information showing the condition and progress of education in the United States and other nations in order to promote and accelerate the improvement of American education." The "Quarterly" offers an accessible, convenient…
Flow Chamber System for the Statistical Evaluation of Bacterial Colonization on Materials
Menzel, Friederike; Conradi, Bianca; Rodenacker, Karsten; Gorbushina, Anna A.; Schwibbert, Karin
2016-01-01
Biofilm formation on materials leads to high costs in industrial processes, as well as in medical applications. This fact has stimulated interest in the development of new materials with improved surfaces to reduce bacterial colonization. Standardized tests relying on statistical evidence are indispensable to evaluate the quality and safety of these new materials. We describe here a flow chamber system for biofilm cultivation under controlled conditions with a total capacity for testing up to 32 samples in parallel. In order to quantify the surface colonization, bacterial cells were DAPI (4′,6-diamidino-2-phenylindole)-stained and examined with epifluorescence microscopy. More than 100 images of each sample were automatically taken and the surface coverage was estimated using the free open source software G'MIC, followed by a precise statistical evaluation. Overview images of all gathered pictures were generated to dissect the colonization characteristics of the selected model organism Escherichia coli W3310 on different materials (glass and implant steel). With our approach, differences in bacterial colonization on different materials can be quantified in a statistically validated manner. This reliable test procedure will support the design of improved materials for medical, industrial, and environmental (subaquatic or subaerial) applications. PMID:28773891
A Guideline to Univariate Statistical Analysis for LC/MS-Based Untargeted Metabolomics-Derived Data
Vinaixa, Maria; Samino, Sara; Saez, Isabel; Duran, Jordi; Guinovart, Joan J.; Yanes, Oscar
2012-01-01
Several metabolomic software programs provide methods for peak picking, retention time alignment and quantification of metabolite features in LC/MS-based metabolomics. Statistical analysis, however, is needed in order to discover those features significantly altered between samples. By comparing the retention time and MS/MS data of a model compound to that from the altered feature of interest in the research sample, metabolites can then be unequivocally identified. This paper reports on a comprehensive overview of a workflow for statistical analysis to rank relevant metabolite features that will be selected for further MS/MS experiments. We focus on univariate data analysis applied in parallel on all detected features. Characteristics and challenges of this analysis are discussed and illustrated using four different real LC/MS untargeted metabolomic datasets. We demonstrate the influence of considering or violating mathematical assumptions on which univariate statistical tests rely, using high-dimensional LC/MS datasets. Issues in data analysis such as determination of sample size, analytical variation, assumption of normality and homoscedasticity, or correction for multiple testing are discussed and illustrated in the context of our four untargeted LC/MS working examples. PMID:24957762
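A minimal version of the parallel univariate workflow might look as follows; the feature matrix is synthetic, and Welch's t-test with Benjamini-Hochberg correction stands in for whichever test the distributional assumptions justify.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_features = 500
case = rng.lognormal(0.0, 0.5, size=(20, n_features))   # 20 case samples
ctrl = rng.lognormal(0.0, 0.5, size=(20, n_features))   # 20 control samples
case[:, :25] *= 1.8                                     # 25 truly altered

# One univariate test per feature, applied in parallel across all features.
t, p = stats.ttest_ind(case, ctrl, axis=0, equal_var=False)

# Benjamini-Hochberg correction for multiple testing.
order = np.argsort(p)
ranked = p[order] * n_features / (np.arange(n_features) + 1)
q = np.minimum.accumulate(ranked[::-1])[::-1]   # enforce monotone q-values
significant = order[q < 0.05]
print(f"{significant.size} features ranked for MS/MS follow-up")
```

Ranking by q-value then gives the shortlist of features to confirm by targeted MS/MS, which is the workflow's end point.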
Anomaly-specified virtual dimensionality
NASA Astrophysics Data System (ADS)
Chen, Shih-Yu; Paylor, Drew; Chang, Chein-I.
2013-09-01
Virtual dimensionality (VD) has received considerable interest, where VD is used to estimate the number of spectrally distinct signatures, denoted by p. Unfortunately, no specific definition is provided by VD for what a spectrally distinct signature is. As a result, various types of spectrally distinct signatures determine different values of VD; there is no one-size-fits-all value for VD. In order to address this issue, this paper presents a new concept, referred to as anomaly-specified VD (AS-VD), which determines the number of anomalies of interest present in the data. Specifically, two types of anomaly detection algorithms are of particular interest: the sample covariance matrix K-based anomaly detector developed by Reed and Yu, referred to as K-RXD, and the sample correlation matrix R-based RXD, referred to as R-RXD. Since K-RXD is determined only by second-order statistics, whereas R-RXD is specified by statistics of the first two orders (including the sample mean as the first-order statistic), the values determined by K-RXD and R-RXD will be different. Experiments are conducted in comparison with widely used eigen-based approaches.
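The two detectors differ only in whether the data are centered before forming the second-order matrix; a compact generic sketch (hypothetical data shapes, not the paper's experiments):

```python
import numpy as np

def rxd(pixels, center=True):
    """RX anomaly scores for an (n_pixels, n_bands) matrix.
    center=True  -> K-RXD (sample covariance, 2nd-order statistics only);
    center=False -> R-RXD (sample correlation, 1st + 2nd order)."""
    X = pixels - pixels.mean(axis=0) if center else pixels
    M = (X.T @ X) / X.shape[0]                    # K or R matrix
    Minv = np.linalg.pinv(M)
    return np.einsum('ij,jk,ik->i', X, Minv, X)   # quadratic-form distances

# Hypothetical cube: Gaussian background plus a few spectral anomalies.
rng = np.random.default_rng(0)
cube = rng.normal(0, 1, (10_000, 50))
cube[:5] += 4.0                                    # implanted anomalies
print("K-RXD flags:", np.argsort(rxd(cube))[-5:])
print("R-RXD flags:", np.argsort(rxd(cube, center=False))[-5:])
```

Counting how many pixels exceed a score threshold under each detector is then one way to realize an anomaly-specified estimate of dimensionality.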
Low energy peripheral scaling in nucleon-nucleon scattering and uncertainty quantification
NASA Astrophysics Data System (ADS)
Ruiz Simo, I.; Amaro, J. E.; Ruiz Arriola, E.; Navarro Pérez, R.
2018-03-01
We analyze the peripheral structure of the nucleon-nucleon interaction for LAB energies below 350 MeV. To this end we transform the scattering matrix into the impact parameter representation by analyzing the scaled phase shifts (L + 1/2)δ_JLS(p) and the scaled mixing parameters (L + 1/2)ε_JLS(p) in terms of the impact parameter b = (L + 1/2)/p. According to the eikonal approximation, at large angular momentum L these functions should become a universal function of b, independent of L. This allows us to discuss in a rather transparent way the role of statistical and systematic uncertainties in the different long-range components of the two-body potential. Implications for peripheral waves obtained in chiral perturbation theory interactions to fifth order (N5LO) or from the large body of NN data considered in the SAID partial wave analysis are also drawn by comparing them with other phenomenological high-quality interactions, constructed to fit scattering data as well. We find that both N5LO and SAID peripheral waves disagree by more than 5σ with the Granada-2013 statistical analysis, by more than 2σ with the 6 statistically equivalent potentials fitting the Granada-2013 database, and by about 1σ with the historical set of 13 high-quality potentials developed since the 1993 Nijmegen analysis.
A new JPEG-based steganographic algorithm for mobile devices
NASA Astrophysics Data System (ADS)
Agaian, Sos S.; Cherukuri, Ravindranath C.; Schneider, Erik C.; White, Gregory B.
2006-05-01
Currently, cellular phones constitute a significant portion of the global telecommunications market. Modern cellular phones offer sophisticated features such as Internet access, on-board cameras, and expandable memory, which provide these devices with excellent multimedia capabilities. Because of the high volume of cellular traffic, as well as the ability of these devices to transmit nearly all forms of data, the need for an increased level of security in wireless communications is becoming a growing concern. Steganography could provide a solution to this important problem. In this article, we present a new algorithm for JPEG-compressed images which is applicable to mobile platforms. This algorithm embeds sensitive information into quantized discrete cosine transform coefficients obtained from the cover JPEG. These coefficients are rearranged based on certain statistical properties and the inherent processing and memory constraints of mobile devices. Based on the energy variation and block characteristics of the cover image, the sensitive data is hidden by using a switching embedding technique proposed in this article. The proposed system offers high capacity while simultaneously withstanding visual and statistical attacks. Based on simulation results, the proposed method demonstrates an improved retention of first-order statistics when compared to existing JPEG-based steganographic algorithms, while maintaining a capacity which is comparable to F5 for certain cover images.
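The common substrate of such schemes, embedding message bits in the least significant bits of nonzero quantized AC coefficients, can be sketched generically; this is not the paper's switching technique, and the flat quantization step is a stand-in for a real JPEG table.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(b):  return dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')
def idct2(b): return idct(idct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

def embed_block(block, bits, q=16):
    """Embed bits in the LSBs of nonzero quantized AC coefficients of one
    8x8 block. Real schemes must also handle the 1 -> 0 shrinkage case."""
    coeffs = np.round(dct2(block) / q).astype(int).ravel()
    j = 0
    for i in range(1, coeffs.size):                  # index 0 is the DC term
        if j < len(bits) and coeffs[i] != 0:
            coeffs[i] = (coeffs[i] & ~1) | bits[j]   # set LSB to message bit
            j += 1
    return idct2(coeffs.reshape(8, 8) * float(q)), j

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (8, 8)).astype(float)
stego, n = embed_block(cover, [1, 0, 1, 1])
print(f"embedded {n} bits; max pixel change {np.abs(stego - cover).max():.1f}")
```

The switching and coefficient-rearrangement steps described in the abstract sit on top of this base operation to preserve first-order statistics.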
Statistical anisotropy in free turbulence for mixing layers at high Reynolds numbers
NASA Astrophysics Data System (ADS)
Gardner, Patrick J.; Roggemann, Michael C.; Welsh, Byron M.; Bowersox, Rodney D.; Luke, Theodore E.
1996-08-01
A lateral shearing interferometer was used to measure the slope of perturbed wave fronts after propagating through free turbulent mixing layers. Shearing interferometers provide a two-dimensional flow visualization that is nonintrusive. Slope measurements were used to reconstruct the phase of the turbulence-corrupted wave front. The random phase fluctuations induced by the mixing layer were captured in a large ensemble of wave-front measurements. Experiments were performed on an unbounded, plane shear mixing layer of helium and nitrogen gas at fixed velocities and high Reynolds numbers for six locations in the flow development. Statistical autocorrelation functions and structure functions were computed on the reconstructed phase maps. The autocorrelation function results indicated that the turbulence-induced phase fluctuations were not wide-sense stationary. The structure functions exhibited statistical homogeneity, indicating that the phase fluctuations were stationary in first increments. However, the turbulence-corrupted phase was not isotropic. A five-thirds power law is shown to fit orthogonal slices of the structure function, analogous to the Kolmogorov model for isotropic turbulence. Strehl ratios were computed from the phase structure functions and compared with classical estimates that assume isotropy. The isotropic models are shown to overestimate the optical degradation by nearly 3 orders of magnitude compared with the structure function calculations.
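The second-order statistics used in this analysis are straightforward to estimate from reconstructed phase maps; a sketch of the along-row structure function with a 5/3-law check on synthetic Kolmogorov-like screens (placeholders for the measured wave fronts):

```python
import numpy as np

def structure_function_1d(phase_maps, max_sep=32):
    """D(r) = <[phi(x+r) - phi(x)]^2> along rows, averaged over an
    ensemble of phase maps with shape (n_maps, ny, nx)."""
    return np.array([((phase_maps[:, :, r:] - phase_maps[:, :, :-r]) ** 2).mean()
                     for r in range(1, max_sep)])

# Synthetic Kolmogorov-like phase screens (power spectrum ~ f^{-11/3}).
rng = np.random.default_rng(0)
n = 128
f = np.hypot(np.fft.fftfreq(n)[None, :], np.fft.fftfreq(n)[:, None])
amp = np.where(f > 0, f, 1.0) ** (-11.0 / 6.0)
amp[0, 0] = 0.0
screens = np.array([np.fft.ifft2(amp * np.exp(2j * np.pi *
                                 rng.random((n, n)))).real * n
                    for _ in range(100)])

D = structure_function_1d(screens)
r = np.arange(1, 32)
slope = np.polyfit(np.log(r[2:20]), np.log(D[2:20]), 1)[0]
print("log-log slope of D(r):", round(slope, 2), "(Kolmogorov predicts 5/3)")
```

Taking such slices in orthogonal directions, as the experiment does, is what exposes the anisotropy that the isotropic Kolmogorov model misses.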
NASA Astrophysics Data System (ADS)
di Luca, Alejandro; de Elía, Ramón; Laprise, René
2012-03-01
Regional Climate Models (RCMs) constitute the most often used method to perform affordable high-resolution regional climate simulations. The key issue in the evaluation of nested regional models is to determine whether RCM simulations improve the representation of climatic statistics compared to the driving data, that is, whether RCMs add value. In this study we examine a necessary condition that some climate statistics derived from the precipitation field must satisfy in order that the RCM technique can generate some added value: we focus on whether the climate statistics of interest contain some fine spatial-scale variability that would be absent on a coarser grid. The presence and magnitude of fine-scale precipitation variance required to adequately describe a given climate statistic will then be used to quantify the potential added value (PAV) of RCMs. Our results show that the PAV of RCMs is much higher for short temporal scales (e.g., 3-hourly data) than for long temporal scales (16-day average data) due to the filtering resulting from the time-averaging process. PAV is higher in the warm season compared to the cold season due to the higher proportion of precipitation falling from small-scale weather systems in the warm season. In regions of complex topography, the orographic forcing induces an extra component of PAV, no matter the season or the temporal scale considered. The PAV is also estimated using high-resolution datasets based on observations, allowing evaluation of the sensitivity to changing resolution in the real climate system. The results show that RCMs tend to reproduce the PAV relatively well compared to observations, although they overestimate it in the warm season and in mountainous regions.
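The necessary condition can be tested directly: coarse-grain a high-resolution precipitation statistic and measure how much variance lives below the coarse grid scale. A schematic numpy version, with a synthetic field and an illustrative block size:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fine_scale_variance_ratio(field, block=4):
    """Fraction of spatial variance absent on a grid 'block' times coarser,
    i.e. a proxy for the potential added value of the fine grid."""
    ny, nx = (s - s % block for s in field.shape)
    f = field[:ny, :nx]
    coarse = f.reshape(ny // block, block, nx // block, block).mean(axis=(1, 3))
    coarse_up = np.repeat(np.repeat(coarse, block, axis=0), block, axis=1)
    return (f - coarse_up).var() / f.var()      # fine-scale variance share

rng = np.random.default_rng(0)
rough = rng.gamma(0.3, 2.0, (128, 128))         # short-accumulation analogue
smooth = gaussian_filter(rough, sigma=4)        # long time-average analogue
print("PAV proxy, 3-hourly-like:", round(fine_scale_variance_ratio(rough), 2))
print("PAV proxy, 16-day-like:  ", round(fine_scale_variance_ratio(smooth), 2))
```

The drop in the ratio for the smoothed field mirrors the paper's finding that time averaging filters away the fine-scale variance the RCM could add.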
Pingault, Jean Baptiste; Côté, Sylvana M; Petitclerc, Amélie; Vitaro, Frank; Tremblay, Richard E
2015-01-01
Parental educational expectations have been associated with children's educational attainment in a number of long-term longitudinal studies, but whether this relationship is causal has long been debated. The aims of this prospective study were twofold: 1) test whether low maternal educational expectations contributed to failure to graduate from high school; and 2) compare the results obtained using different strategies for accounting for confounding variables (i.e. multivariate regression and propensity score matching). The study sample included 1,279 participants from the Quebec Longitudinal Study of Kindergarten Children. Maternal educational expectations were assessed when the participants were aged 12 years. High school graduation—measuring educational attainment—was determined through the Quebec Ministry of Education when the participants were aged 22-23 years. Findings show that when using the most common statistical approach (i.e. multivariate regressions to adjust for a restricted set of potential confounders) the contribution of low maternal educational expectations to failure to graduate from high school was statistically significant. However, when using propensity score matching, the contribution of maternal expectations was reduced and remained statistically significant only for males. The results of this study are consistent with the possibility that the contribution of parental expectations to educational attainment is overestimated in the available literature. This may be explained by the use of a restricted range of potential confounding variables as well as the dearth of studies using appropriate statistical techniques and study designs in order to minimize confounding. Each of these techniques and designs, including propensity score matching, has its strengths and limitations: A more comprehensive understanding of the causal role of parental expectations will stem from a convergence of findings from studies using different techniques and designs.
NASA Astrophysics Data System (ADS)
Most, S.; Nowak, W.; Bijeljic, B.
2014-12-01
Transport processes in porous media are frequently simulated as particle movement. This process can be formulated as a stochastic process of particle position increments. At the pore scale, the geometry and micro-heterogeneities prohibit the commonly made assumption of independent and normally distributed increments to represent dispersion. Many recent particle methods seek to loosen this assumption. Recent experimental data suggest that we have not yet reached the end of the need to generalize, because particle increments show statistical dependency beyond linear correlation and over many time steps. The goal of this work is to better understand the validity regions of commonly made assumptions. We investigate after what transport distances we can observe: (i) a statistical dependence between increments that can be modelled as an order-k Markov process reducing to order 1, which would be the Markovian distance for the process, where the validity of yet-unexplored non-Gaussian-but-Markovian random walks would start; (ii) a bivariate statistical dependence that simplifies to a multi-Gaussian dependence based on simple linear correlation (validity of correlated PTRW); (iii) complete absence of statistical dependence (validity of classical PTRW/CTRW). The approach is to derive a statistical model for pore-scale transport from a powerful experimental data set via copula analysis. The model is formulated as a non-Gaussian, mutually dependent Markov process of higher order, which allows us to investigate the validity ranges of simpler models.
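One way to probe question (i) is to discretize the increments and ask whether a second lag adds predictive information beyond the first, e.g. via the conditional mutual information I(X_t; X_{t-2} | X_{t-1}). The sketch below is a rough stand-in for the copula analysis, run on synthetic increment series:

```python
import numpy as np

def excess_order2_information(increments, n_bins=5):
    """Conditional mutual information I(X_t ; X_{t-2} | X_{t-1}) of
    discretized increments, in nats. ~0 means order-1 Markov is adequate."""
    q = np.quantile(increments, np.linspace(0, 1, n_bins + 1)[1:-1])
    s = np.digitize(increments, q)                 # symbol sequence
    trip = np.zeros((n_bins,) * 3)
    for a, b, c in zip(s[:-2], s[1:-1], s[2:]):
        trip[a, b, c] += 1
    p_abc = trip / trip.sum()
    p_ab, p_bc, p_b = p_abc.sum(2), p_abc.sum(0), p_abc.sum((0, 2))
    with np.errstate(divide='ignore', invalid='ignore'):
        ratio = p_abc * p_b[None, :, None] / (p_ab[:, :, None] * p_bc[None, :, :])
        terms = np.where(p_abc > 0, p_abc * np.log(ratio), 0.0)
    return terms.sum()

rng = np.random.default_rng(0)
iid = rng.normal(size=100_000)              # classical PTRW-like increments
ar2 = np.zeros(100_000)
for t in range(2, ar2.size):                # order-2 dependent increments
    ar2[t] = 0.3 * ar2[t-1] + 0.4 * ar2[t-2] + rng.normal()
print("iid :", excess_order2_information(iid))
print("AR-2:", excess_order2_information(ar2))
```

The transport distance at which this quantity decays toward its finite-sample floor would be the Markovian distance the abstract asks about.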
NASA Astrophysics Data System (ADS)
Rypdal, Martin; Sirnes, Espen; Løvsletten, Ola; Rypdal, Kristoffer
2013-08-01
Maximum likelihood estimation techniques for multifractal processes are applied to high-frequency data in order to quantify intermittency in the fluctuations of asset prices. From time records as short as one month these methods permit extraction of a meaningful intermittency parameter λ characterising the degree of volatility clustering. We can therefore study the time evolution of volatility clustering and test the statistical significance of this variability. By analysing data from the Oslo Stock Exchange, and comparing the results with the investment grade spread, we find that the estimates of λ are lower at times of high market uncertainty.
Status of the Whipple Observatory Cerenkov air shower imaging telescope array
NASA Technical Reports Server (NTRS)
Akerlof, C. W.; Cawley, M. F.; Fegan, D. J.; Fennell, S.; Freeman, S.; Frishman, D.; Harris, K.; Hillas, A. M.; Jennings, D.; Lamb, R. C.
1992-01-01
Recently, the power of the Cerenkov imaging technique in Very High Energy gamma-ray astronomy was demonstrated by the detection of the Crab nebula at high statistical significance. In order to further develop this technique to allow the detection of weaker or more distant sources, a second 10 m class reflector was constructed about 120 m from the original instrument. The addition of the second reflector will allow both a reduction in the energy threshold and an improvement in the rejection of the hadronic background. The design and construction of the second reflector, the Gamma Ray Astrophysics New Imaging TElescope (GRANITE), is described.
Ordered mapping of 3 alphoid DNA subsets on human chromosome 22
DOE Office of Scientific and Technical Information (OSTI.GOV)
Antonacci, R.; Baldini, A.; Archidiacono, N.
1994-09-01
Alpha satellite DNA consists of tandemly repeated monomers of 171 bp clustered in the centromeric region of primate chromosomes. Sequence divergence between subsets located in different human chromosomes is usually high enough to ensure chromosome-specific hybridization. Alphoid probes specific for almost every human chromosome have been reported. A single chromosome can carry different subsets of alphoid DNA and some alphoid subsets can be shared by different chromosomes. We report the physical order of three alphoid DNA subsets on human chromosome 22 determined by a combination of low and high resolution cytological mapping methods. Results visually demonstrate the presence of three distinct alphoid DNA domains at the centromeric region of chromosome 22. We have measured the interphase distances between the three probes in three-color FISH experiments. Statistical analysis of the results indicated the order of the subsets. Two-color experiments on prometaphase chromosomes established the order of the three domains relative to the arms of chromosome 22 and confirmed the results obtained using interphase mapping. This demonstrates the applicability of interphase mapping for alpha satellite DNA ordering. However, in our experiments, interphase mapping did not provide any information about the relationship between extremities of the repeat arrays. This information was gained from extended chromatin hybridization. The extremities of two of the repeat arrays were seen to be almost overlapping whereas the third repeat array was clearly separated from the other two. Our data show the value of extended chromatin hybridization as a complement to other cytological techniques for high resolution mapping of repetitive DNA sequences.
NASA Astrophysics Data System (ADS)
Gorkunov, M. V.; Osipov, M. A.; Kapernaum, N.; Nonnenmacher, D.; Giesselmann, F.
2011-11-01
A molecular statistical theory of the smectic A phase is developed taking into account specific interactions between different molecular fragments, which enables one to describe different microscopic scenarios of the transition into the smectic phase. The effects of nanoscale segregation are described using molecular models with different combinations of attractive and repulsive sites. These models have been used to calculate numerically the coefficients in the mean field potential as functions of the molecular model parameters and the period of the smectic structure. The same coefficients are calculated also for a conventional smectic with a standard Gay-Berne interaction potential which does not promote segregation. The free energy is minimized numerically to calculate the order parameters of the smectic A phases and to study the nature of the smectic transition in both systems. It has been found that in conventional materials the smectic order can be stabilized only when the orientational order is sufficiently high. In contrast, in materials with nanosegregation the smectic order develops mainly in the form of the orientational-translational wave while the nematic order parameter remains relatively small. Microscopic mechanisms of smectic ordering in both systems are discussed in detail, and the results for smectic order parameters are compared with experimental data for materials of various molecular structure.
Incremental online learning in high dimensions.
Vijayakumar, Sethu; D'Souza, Aaron; Schaal, Stefan
2005-12-01
Locally weighted projection regression (LWPR) is a new algorithm for incremental nonlinear function approximation in high-dimensional spaces with redundant and irrelevant input dimensions. At its core, it employs nonparametric regression with locally linear models. In order to stay computationally efficient and numerically robust, each local model performs the regression analysis with a small number of univariate regressions in selected directions in input space in the spirit of partial least squares regression. We discuss when and how local learning techniques can successfully work in high-dimensional spaces and review the various techniques for local dimensionality reduction before finally deriving the LWPR algorithm. The properties of LWPR are that it (1) learns rapidly with second-order learning methods based on incremental training, (2) uses statistically sound stochastic leave-one-out cross validation for learning without the need to memorize training data, (3) adjusts its weighting kernels based on only local information in order to minimize the danger of negative interference of incremental learning, (4) has a computational complexity that is linear in the number of inputs, and (5) can deal with a large number of (possibly redundant) inputs, as shown in various empirical evaluations with up to 90-dimensional data sets. For a probabilistic interpretation, predictive variance and confidence intervals are derived. To our knowledge, LWPR is the first truly incremental spatially localized learning method that can successfully and efficiently operate in very high-dimensional spaces.
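LWPR combines several ingredients (incremental partial least squares, receptive field adaptation); the core idea of blending locally weighted linear fits can be sketched in batch form, which is a simplification of the incremental algorithm described:

```python
import numpy as np

def lwr_predict(X, y, x_query, centers, width=0.3, ridge=1e-6):
    """Locally weighted regression: one ridge-regularized linear fit per
    Gaussian receptive field, predictions blended by field activations."""
    Xb = np.hstack([X, np.ones((len(X), 1))])        # add bias term
    xq = np.append(x_query, 1.0)
    preds, acts = [], []
    for c in centers:
        w = np.exp(-0.5 * np.sum((X - c) ** 2, axis=1) / width ** 2)
        A = Xb.T @ (w[:, None] * Xb) + ridge * np.eye(Xb.shape[1])
        beta = np.linalg.solve(A, Xb.T @ (w * y))
        preds.append(xq @ beta)
        acts.append(np.exp(-0.5 * np.sum((x_query - c) ** 2) / width ** 2))
    preds, acts = np.array(preds), np.array(acts)
    return (acts * preds).sum() / acts.sum()

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (500, 2))
y = np.sin(3 * X[:, 0]) * X[:, 1] + 0.05 * rng.normal(size=500)
centers = rng.uniform(-1, 1, (25, 2))
print("prediction at query:", lwr_predict(X, y, np.array([0.0, 0.3]), centers))
```

LWPR additionally fits each local model in a handful of PLS directions rather than the full input space, which is what makes it viable with redundant, high-dimensional inputs.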
Koltun, G.F.
2013-01-01
This report presents the results of a study to assess potential water availability from the Atwood, Leesville, and Tappan Lakes, located within the Muskingum River Watershed, Ohio. The assessment was based on the criterion that water withdrawals should not appreciably affect maintenance of recreation-season pool levels in current use. To facilitate and simplify the assessment, it was assumed that historical lake operations were successful in maintaining seasonal pool levels, and that any discharges from lakes constituted either water that was discharged to prevent exceeding seasonal pool levels or discharges intended to meet minimum in-stream flow targets downstream from the lakes. It further was assumed that the volume of water discharged in excess of the minimum in-stream flow target is available for use without negatively impacting seasonal pool levels or downstream water uses and that all or part of it is subject to withdrawal. Historical daily outflow data for the lakes were used to determine the quantity of water that potentially could be withdrawn and the resulting quantity of water that would flow downstream (referred to as “flow-by”) on a daily basis as a function of all combinations of three hypothetical target minimum flow-by amounts (1, 2, and 3 times current minimum in-stream flow targets) and three pumping capacities (1, 2, and 3 million gallons per day). Using both U.S. Geological Survey streamgage data and lake-outflow data provided by the U.S. Army Corps of Engineers resulted in analytical periods ranging from 51 calendar years for the Atwood Lake to 73 calendar years for the Leesville and Tappan Lakes. The observed outflow time series and the computed time series of daily flow-by amounts and potential withdrawals were analyzed to compute and report order statistics (95th, 75th, 50th, 25th, 10th, and 5th percentiles) and means for the analytical period, in aggregate, and broken down by calendar month. In addition, surplus-water mass curve data were tabulated for each of the lakes. Monthly order statistics of computed withdrawals indicated that, for the three pumping capacities considered, increasing the target minimum flow-by amount tended to reduce the amount of water that can be withdrawn. The reduction was greatest in the lower percentiles of withdrawal; however, increasing the flow-by amount had no impact on potential withdrawals during high flow. In addition, for a given target minimum flow-by amount, increasing the pumping rate increased the total amount of water that could be withdrawn; however, that increase was less than a direct multiple of the increase in pumping rate for most flow statistics. Potential monthly withdrawals were observed to be more variable and more limited in some calendar months than others. Monthly order statistics and means of computed daily mean flow-by amounts indicated that flow-by amounts generally tended to be lowest during June–October and February. Increasing the target minimum flow-by amount for a given pumping rate resulted in some small increases in the magnitudes of the mean and 50th percentile and lower order statistics of computed mean flow-by, but had no effect on the magnitudes of the higher percentile statistics. Increasing the pumping rate for a given target minimum flow-by amount resulted in decreases in magnitudes of higher-percentile flow-by statistics by an amount equal to the flow equivalent of the increase in pumping rate; however, some lower percentile statistics remained unchanged.
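The reported order statistics are straightforward to reproduce for any daily series; a pandas sketch with a synthetic outflow record and hypothetical target and capacity values:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
days = pd.date_range("1960-01-01", "2010-12-31", freq="D")
flow = pd.Series(rng.lognormal(3.0, 1.0, len(days)), index=days,
                 name="outflow_cfs")

target_flow_by = 20.0      # hypothetical minimum in-stream flow target
pump_capacity = 15.0       # hypothetical pumping capacity, same units

# Water available for withdrawal: discharge above the flow-by target,
# capped by pumping capacity; the remainder continues downstream.
withdrawal = (flow - target_flow_by).clip(lower=0).clip(upper=pump_capacity)
flow_by = flow - withdrawal

percentiles = [95, 75, 50, 25, 10, 5]
stats_table = pd.DataFrame({
    "withdrawal": np.percentile(withdrawal, percentiles),
    "flow_by": np.percentile(flow_by, percentiles)}, index=percentiles)
print(stats_table)
print(withdrawal.groupby(withdrawal.index.month).mean())  # monthly breakdown
```

Sweeping the two parameters over a grid reproduces the report's finding that raising the flow-by target mainly squeezes the lower withdrawal percentiles while leaving high-flow withdrawals untouched.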
Liquid crystals of carbon nanotubes and graphene.
Zakri, Cécile; Blanc, Christophe; Grelet, Eric; Zamora-Ledezma, Camilo; Puech, Nicolas; Anglaret, Eric; Poulin, Philippe
2013-04-13
Liquid crystal ordering is an opportunity to develop novel materials and applications with spontaneously aligned nanotubes or graphene particles. Nevertheless, achieving high orientational order parameter and large monodomains remains a challenge. In addition, our restricted knowledge of the structure of the currently available materials is a limitation for fundamental studies and future applications. This paper presents recent methodologies that have been developed to achieve large monodomains of nematic liquid crystals. These allow quantification and increase of their order parameters. Nematic ordering provides an efficient way to prepare conductive films that exhibit anisotropic properties. In particular, it is shown how the electrical conductivity anisotropy increases with the order parameter of the nematic liquid crystal. The order parameter can be tuned by controlling the length and entanglement of the nanotubes. In the second part of the paper, recent results on graphene liquid crystals are reported. The possibility to obtain water-based liquid crystals stabilized by surfactant molecules is demonstrated. Structural and thermodynamic characterizations provide indirect but statistical information on the dimensions of the graphene flakes. From a general point of view, this work presents experimental approaches to optimize the use of nanocarbons as liquid crystals and provides new methodologies for the still challenging characterization of such materials.
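For uniaxial particles such as nanotubes, the orientational order parameter discussed here is the standard nematic S, computable from a set of particle axes; a schematic calculation on synthetic orientations:

```python
import numpy as np

def nematic_order(axes):
    """Uniaxial nematic order parameter S: largest eigenvalue of the
    traceless Q-tensor built from unit orientation vectors (n, 3)."""
    u = axes / np.linalg.norm(axes, axis=1, keepdims=True)
    Q = 1.5 * np.einsum('ni,nj->ij', u, u) / len(u) - 0.5 * np.eye(3)
    return np.linalg.eigvalsh(Q)[-1]   # S = 1 aligned, S = 0 isotropic

rng = np.random.default_rng(0)
aligned = np.column_stack([0.1 * rng.normal(size=1000),
                           0.1 * rng.normal(size=1000),
                           np.ones(1000)])        # nanotubes near the z axis
isotropic = rng.normal(size=(1000, 3))
print("S aligned  :", round(nematic_order(aligned), 2))
print("S isotropic:", round(nematic_order(isotropic), 2))
```

The paper's conductivity-anisotropy result is the experimental counterpart: as S rises toward 1, transport along the director increasingly outpaces transport across it.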
Regression Models of Quarterly Overhead Costs for Six Government Aerospace Contractors.
1986-03-01
Testing for Serial Correlation After Least Squares Regression, Econometrica, Vol. 36, No. 1, pp. 133-150, January 1968. Intriligator, M.D., Econometric ... to be superior. These two estimators are both two-stage estimators that are calculated utilizing Wallis's test statistic for fourth-order autocorrelation.
On the Power Functions of Test Statistics in Order Restricted Inference.
1984-10-01
SUMMARY: We study the power functions of both the likelihood ratio and contrast statistics for detecting a totally ordered trend in a collection of ... samples from normal populations. Bartholomew (1959a,b; 1961) studied the likelihood ratio tests (LRTs) for H0 versus H1-H0, assuming in one case that
Modelling 1-minute directional observations of the global irradiance.
NASA Astrophysics Data System (ADS)
Thejll, Peter; Pagh Nielsen, Kristian; Andersen, Elsa; Furbo, Simon
2016-04-01
Direct and diffuse irradiances from the sky have been collected at 1-minute intervals for about a year from the experimental station at the Technical University of Denmark for the IEA project "Solar Resource Assessment and Forecasting". These data were gathered by pyrheliometers tracking the Sun, as well as with apertured pyranometers gathering 1/8th and 1/16th of the light from the sky in 45 degree azimuthal ranges pointed around the compass. The data are gathered in order to develop detailed models of the potentially available solar energy and its variations at high temporal resolution, and thereby to gain a more detailed understanding of the solar resource. This is important for a better understanding of the sub-grid scale cloud variation that cannot be resolved with climate and weather models. It is also important for optimizing the operation of active solar energy systems such as photovoltaic plants and thermal solar collector arrays, and for passive solar energy and lighting of buildings. We present regression-based modelling of the observed data, and focus here on the statistical properties of the model fits. Using models based on the one hand on what is found in the literature and on physical expectations, and on the other hand on purely statistical models, we find solutions that can explain up to 90% of the variance in global radiation. The models leaning on physical insights include terms for the direct solar radiation, a term for the circumsolar radiation, a diffuse term and a term for the horizon brightening/darkening. The purely statistical model is found using data- and formula-validation approaches, picking model expressions from a general catalogue of possible formulae. The method allows nesting of expressions, and the results found are dependent on and heavily constrained by the cross-validation carried out on statistically independent testing and training data-sets. Slightly better fits -- in terms of variance explained -- are found using the purely statistical fitting/searching approach. We describe the methods applied and the results found, and discuss the different potentials of the physics-based and statistics-only model searches.
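A regression model of the physically motivated kind described, with direct, circumsolar, diffuse and horizon terms, can be set up in a few lines of least squares; the regressors below are synthetic placeholders for the measured ones.

```python
import numpy as np

# Hypothetical regressors per 1-minute sample: direct-beam projection,
# circumsolar proxy, isotropic diffuse, and a horizon-band term.
rng = np.random.default_rng(0)
n = 5000
cos_zenith = rng.uniform(0.1, 1.0, n)
dni = rng.uniform(0, 900, n)              # direct normal irradiance proxy
circumsolar = dni * rng.uniform(0, 0.1, n)
diffuse = rng.uniform(20, 120, n)
horizon = rng.uniform(0, 30, n)
G = (0.95 * dni * cos_zenith + 0.8 * circumsolar + diffuse
     + 0.5 * horizon + rng.normal(0, 10, n))   # synthetic "observed" global

A = np.column_stack([dni * cos_zenith, circumsolar, diffuse, horizon])
coef, *_ = np.linalg.lstsq(A, G, rcond=None)
resid = G - A @ coef
print("fitted coefficients:", np.round(coef, 2))
print("variance explained: %.1f%%" % (100 * (1 - resid.var() / G.var())))
```

The paper's formula-search variant replaces the fixed design matrix with candidate expressions drawn from a catalogue, scored on held-out data.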
Vinciotti, Veronica; Liu, Xiaohui; Turk, Rolf; de Meijer, Emile J; 't Hoen, Peter A C
2006-04-03
The identification of biologically interesting genes in a temporal expression profiling dataset is challenging and complicated by high levels of experimental noise. Most statistical methods used in the literature do not fully exploit the temporal ordering in the dataset and are not suited to the case where temporal profiles are measured for a number of different biological conditions. We present a statistical test that makes explicit use of the temporal order in the data by fitting polynomial functions to the temporal profile of each gene and for each biological condition. A Hotelling T2-statistic is derived to detect the genes for which the parameters of these polynomials are significantly different from each other. We validate the temporal Hotelling T2-test on muscular gene expression data from four mouse strains which were profiled at different ages: dystrophin-, beta-sarcoglycan and gamma-sarcoglycan deficient mice, and wild-type mice. The first three are animal models for different muscular dystrophies. Extensive biological validation shows that the method is capable of finding genes with temporal profiles significantly different across the four strains, as well as identifying potential biomarkers for each form of the disease. The added value of the temporal test compared to an identical test which does not make use of temporal ordering is demonstrated via a simulation study, and through confirmation of the expression profiles from selected genes by quantitative PCR experiments. The proposed method maximises the detection of the biologically interesting genes, whilst minimising false detections. The temporal Hotelling T2-test is capable of finding relatively small and robust sets of genes that display different temporal profiles between the conditions of interest. The test is simple, it can be used on gene expression data generated from any experimental design and for any number of conditions, and it allows fast interpretation of the temporal behaviour of genes. The R code is available from V.V. The microarray data have been submitted to GEO under series GSE1574 and GSE3523.
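The statistic itself is compact: fit a polynomial to each replicate's temporal profile and compare the coefficient vectors of two conditions with a two-sample Hotelling T2. The sketch below assumes replicate profiles per condition and invented numbers; it is not the authors' R code.

```python
import numpy as np

def hotelling_T2(A, B):
    """Two-sample Hotelling T^2 between rows of A and B
    (replicates x parameters)."""
    n1, n2 = len(A), len(B)
    d = A.mean(0) - B.mean(0)
    S = (((A - A.mean(0)).T @ (A - A.mean(0))) +
         ((B - B.mean(0)).T @ (B - B.mean(0)))) / (n1 + n2 - 2)
    return n1 * n2 / (n1 + n2) * d @ np.linalg.solve(S, d)

def temporal_T2(profiles_a, profiles_b, ages, degree=2):
    """Fit a degree-d polynomial to each replicate's temporal profile and
    compare the coefficient vectors across the two conditions."""
    coef_a = np.array([np.polyfit(ages, p, degree) for p in profiles_a])
    coef_b = np.array([np.polyfit(ages, p, degree) for p in profiles_b])
    return hotelling_T2(coef_a, coef_b)

ages = np.array([4., 8., 12., 16., 20.])     # sampling ages, illustrative
rng = np.random.default_rng(0)
flat = rng.normal(5, 0.2, (6, 5))            # wild-type-like replicates
rising = flat + 0.15 * ages                  # dystrophic-like trend
print("T2 =", round(temporal_T2(flat, rising, ages), 1))
```

Because the comparison acts on a handful of polynomial coefficients rather than every time point, the test exploits the temporal ordering that pointwise tests ignore.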
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliver, J; Budzevich, M; Moros, E
Purpose: To investigate the relationship between quantitative image features (i.e. radiomics) and statistical fluctuations (i.e. electronic noise) in clinical Computed Tomography (CT) using the standardized American College of Radiology (ACR) CT accreditation phantom and patient images. Methods: Three levels of uncorrelated Gaussian noise were added to CT images of the phantom and patients (20) acquired in static mode and respiratory tracking mode. We calculated the noise-power spectrum (NPS) of the original CT images of the phantom, and of the phantom images with added Gaussian noise with means of 50, 80, and 120 HU. Concurrently, on patient images (original and noise-added images), image features were calculated: 14 shape, 19 intensity (1st order statistics from intensity volume histograms), 18 GLCM features (2nd order statistics from grey level co-occurrence matrices) and 11 RLM features (2nd order statistics from run-length matrices). These features provide the underlying structural information of the images. GLCM (size 128x128) was calculated with a step size of 1 voxel in 13 directions and averaged. RLM feature calculation was performed in 13 directions with grey levels binned into 128 levels. Results: Adding the electronic noise to the images modified the quality of the NPS, shifting the noise from mostly correlated to mostly uncorrelated voxels. The dramatic increase in noise texture did not affect image structure/contours significantly for patient images. However, it did affect the image features and textures significantly as demonstrated by GLCM differences. Conclusion: Image features are sensitive to acquisition factors (simulated by adding uncorrelated Gaussian noise). We speculate that image features will be more difficult to detect in the presence of electronic noise (an uncorrelated noise contributor) or, for that matter, any other highly correlated image noise. This work focuses on the effect of electronic, uncorrelated noise, and future work shall examine the influence of changes in quantum noise on the features. J. Oliver was supported by NSF FGLSAMP BD award HRD #1139850 and the McKnight Doctoral Fellowship.
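The sensitivity of second-order texture features to added noise is easy to demonstrate with scikit-image (graycomatrix/graycoprops naming as of skimage 0.19+); the phantom-like region of interest below is synthetic.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
base = rng.integers(40, 80, (128, 128)).astype(float)  # synthetic CT-like ROI

for sigma in (0, 50, 80, 120):                         # added HU-scale noise
    noisy = base + rng.normal(0, sigma, base.shape)
    img = np.clip((noisy - noisy.min()) / np.ptp(noisy) * 127,
                  0, 127).astype(np.uint8)
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=128, symmetric=True, normed=True)
    print(f"sigma={sigma:3d}"
          f"  contrast={graycoprops(glcm, 'contrast').mean():8.1f}"
          f"  homogeneity={graycoprops(glcm, 'homogeneity').mean():.3f}")
```

GLCM contrast grows and homogeneity collapses as uncorrelated noise is added, even though the underlying structure is unchanged, which is the study's central observation in miniature.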
Complex Sequencing Rules of Birdsong Can be Explained by Simple Hidden Markov Processes
Katahira, Kentaro; Suzuki, Kenta; Okanoya, Kazuo; Okada, Masato
2011-01-01
Complex sequencing rules observed in birdsongs provide an opportunity to investigate the neural mechanism for generating complex sequential behaviors. To relate the findings from studying birdsongs to other sequential behaviors such as human speech and musical performance, it is crucial to characterize the statistical properties of the sequencing rules in birdsongs. However, the properties of the sequencing rules in birdsongs have not yet been fully addressed. In this study, we investigate the statistical properties of the complex birdsong of the Bengalese finch (Lonchura striata var. domestica). Based on manually annotated syllable labels, we first show that there are significant higher-order context dependencies in Bengalese finch songs, that is, which syllable appears next depends on more than one previous syllable. We then analyze acoustic features of the song and show that higher-order context dependencies can be explained using first-order hidden state transition dynamics with redundant hidden states. This model corresponds to hidden Markov models (HMMs), well-known statistical models with a large range of applications for time series modeling. The song annotation with these models with first-order hidden state dynamics agreed well with manual annotation; the score was comparable to that of a second-order HMM, and surpassed the zeroth-order model (the Gaussian mixture model; GMM), which does not use context information. Our results imply that the hierarchical representation with hidden state dynamics may underlie the neural implementation for generating complex behavioral sequences with higher-order dependencies. PMID:21915345
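The empirical core of the higher-order dependency test is to compare how well the next syllable is predicted from one versus two preceding syllables. A toy version on a synthetic syllable string (not Bengalese finch data):

```python
from collections import Counter
import math

def conditional_entropy(seq, order):
    """H(next symbol | previous `order` symbols), in bits."""
    ctx = Counter(tuple(seq[i:i + order]) for i in range(len(seq) - order))
    joint = Counter(tuple(seq[i:i + order + 1])
                    for i in range(len(seq) - order))
    n = sum(joint.values())
    return -sum(c / n * math.log2(c / ctx[k[:-1]]) for k, c in joint.items())

# Toy song where what follows "b" depends on the syllable before "b".
song = list("abccbd" * 500)
print("H(next | 1 syllable): ", round(conditional_entropy(song, 1), 3))
print("H(next | 2 syllables):", round(conditional_entropy(song, 2), 3))
# A drop from order 1 to order 2 indicates higher-order context dependency.
```

An HMM with redundant hidden states resolves exactly this kind of ambiguity: two hidden states can both emit "b" while carrying different histories, so first-order hidden dynamics reproduce the apparent higher-order surface statistics.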
Gao, Bin; Li, Xiaoqing; Woo, Wai Lok; Tian, Gui Yun
2018-05-01
Thermographic inspection has been widely applied to non-destructive testing and evaluation, with the capabilities of rapid, contactless, and large surface area detection. Image segmentation is considered essential for identifying and sizing defects. To attain high-level performance, specific physics-based models that describe defect generation and enable the precise extraction of the target region are of crucial importance. In this paper, an effective genetic first-order statistical image segmentation algorithm is proposed for quantitative crack detection. The proposed method automatically extracts valuable spatial-temporal patterns with an unsupervised feature extraction algorithm and avoids a range of issues associated with human intervention in the laborious manual selection of specific thermal video frames for processing. An internal genetic functionality is built into the proposed algorithm to automatically control the segmentation threshold to render enhanced accuracy in sizing the cracks. Eddy current pulsed thermography is implemented as a platform to demonstrate surface crack detection. Experimental tests and comparisons have been conducted to verify the efficacy of the proposed method. In addition, a global quantitative assessment index, the F-score, has been adopted to objectively evaluate the performance of different segmentation algorithms.
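A toy rendition of genetic control of a segmentation threshold, here a simple evolutionary search maximizing between-class variance of first-order statistics, illustrates the idea; it is not the paper's algorithm.

```python
import numpy as np

def between_class_variance(img, t):
    """Fitness of threshold t: separation of foreground/background means."""
    fg, bg = img[img > t], img[img <= t]
    if fg.size == 0 or bg.size == 0:
        return 0.0
    w = fg.size / img.size
    return w * (1 - w) * (fg.mean() - bg.mean()) ** 2

def genetic_threshold(img, pop=20, gens=30, sigma=5.0, seed=0):
    """Evolve a population of candidate thresholds: select the fitter half,
    mutate it, repeat."""
    rng = np.random.default_rng(seed)
    thresholds = rng.uniform(img.min(), img.max(), pop)
    for _ in range(gens):
        fitness = np.array([between_class_variance(img, t) for t in thresholds])
        parents = thresholds[np.argsort(fitness)[-pop // 2:]]    # selection
        children = parents + rng.normal(0, sigma, parents.size)  # mutation
        thresholds = np.concatenate([parents, children])
    return max(thresholds, key=lambda t: between_class_variance(img, t))

rng = np.random.default_rng(1)
frame = np.concatenate([rng.normal(60, 10, 9000),    # background pixels
                        rng.normal(120, 12, 1000)])  # crack-like pixels
print("evolved threshold:", round(genetic_threshold(frame), 1))
```

In the paper, the fitness is tied to crack-sizing accuracy on spatial-temporal thermography features rather than to a plain intensity histogram.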
Complete PMD compensation in 40-Gbit/s optical transmission system
NASA Astrophysics Data System (ADS)
Luo, Rui; Li, Tangjun; Wang, Muguang; Cui, Jie; Jian, Shuisheng
2004-04-01
In this paper, we successfully demonstrated automatic PMD compensation in 40 Gbit/s NRZ transmission for the first time. Using a PMD monitor of the 20 GHz intensity extracted from the received 40 Gbit/s NRZ baseband signal, we accomplished feedback control of an optical PMD compensator consisting of a polarization controller and a polarization-maintaining fiber. We also report the statistical assessment of an adaptive optical PMD compensator at 40 Gbit/s. The mitigator is experimentally tested in many PMD conditions (not limited to first order) covering Maxwellian-like PMD statistics. Experimental results, including bit error rate measurements, are successfully compared with theory, hereby demonstrating the compensator efficiency at 40 Gbit/s. Furthermore, this letter introduces a two-stage PMD compensator. Our experimental results show that the two-stage compensator can be used for PMD compensation in a 40 Gbit/s OTDM system with 60 km of high-PMD fiber. The first-order PMD was at most 274 ps before PMD compensation and smaller than 7 ps after PMD compensation. At the same time, the tunable FBG also provides dispersion compensation.
Tuğcu-Demiröz, Fatmanur; Gonzalez-Alvarez, Isabel; Gonzalez-Alvarez, Marta; Bermejo, Marival
2014-10-01
The aim of the present study was to develop a method for measuring water flux reabsorption in Doluisio's Perfusion Technique based on the use of phenol red as a non-absorbable marker, and to validate it by comparison with the gravimetric procedure. The compounds selected for the study were metoprolol, atenolol, cimetidine and cefadroxil, in order to include low, intermediate and high permeability drugs absorbed by passive diffusion and by carrier-mediated mechanisms. The intestinal permeabilities (Peff) of the drugs were obtained in male and female Wistar rats and calculated using both methods of water flux correction. The absorption rate coefficients of all the assayed compounds did not show statistically significant differences between male and female rats; consequently, all the individual values were combined to compare between reabsorption methods. The absorption rate coefficients and permeability values did not show statistically significant differences between the two strategies of concentration correction. The apparent zero-order water absorption coefficients were also similar in both correction procedures. In conclusion, the gravimetric and phenol red methods for water reabsorption correction are accurate and interchangeable for permeability estimation in the closed loop perfusion method. Copyright © 2014 Elsevier B.V. All rights reserved.
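The marker-based correction is simple arithmetic: phenol red is not absorbed, so a rise in its concentration measures water loss from the intestinal loop, and drug concentrations can be rescaled to the initial volume before fitting the first-order absorption rate. A sketch with invented numbers:

```python
import numpy as np

# Sampling times (min), phenol red and drug concentrations in the perfusate.
t = np.array([0., 5., 10., 15., 20., 25., 30.])
pr = np.array([50.0, 51.5, 53.1, 54.6, 56.0, 57.8, 59.3])     # rises as water leaves
drug = np.array([100.0, 88.0, 78.5, 70.0, 63.0, 56.5, 50.8])  # falls by absorption

# Volume correction: V_t / V_0 = PR_0 / PR_t (phenol red is non-absorbable),
# so the concentration referenced to the initial volume is C_t * PR_0 / PR_t.
drug_corrected = drug * pr[0] / pr

# First-order absorption rate constant from the slope of ln C versus t.
ka = -np.polyfit(t, np.log(drug_corrected), 1)[0]
print(f"ka = {ka:.4f} 1/min")
```

The gravimetric alternative weighs the loop contents to get V_t directly; the study's point is that both routes yield interchangeable rate coefficients.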
NASA Astrophysics Data System (ADS)
Böhm, Fabian; Grosse, Nicolai B.; Kolarczik, Mirco; Herzog, Bastian; Achtstein, Alexander; Owschimikow, Nina; Woggon, Ulrike
2017-09-01
Quantum state tomography and the reconstruction of the photon number distribution are techniques to extract the properties of a light field from measurements of its mean and fluctuations. These techniques are particularly useful when dealing with macroscopic or mesoscopic systems, where a description limited to the second-order autocorrelation soon becomes inadequate. In particular, the emission of nonclassical light is expected from mesoscopic quantum dot systems strongly coupled to a cavity or in systems with large optical nonlinearities. We analyze the emission of a quantum dot-semiconductor optical amplifier system by quantifying the modifications of a femtosecond laser pulse propagating through the device. Using a balanced detection scheme in a self-heterodyning setup, we achieve precise measurements of the quadrature components and their fluctuations at the quantum noise limit. We resolve the photon number distribution and the thermal-to-coherent evolution in the photon statistics of the emission. The interferometric detection achieves a high sensitivity in the few-photon limit. From our data, we can also reconstruct the second-order autocorrelation function with higher precision and time resolution compared with classical Hanbury Brown-Twiss experiments.
Offset Stream Technology Test-Summary of Results
NASA Technical Reports Server (NTRS)
Brown, Clifford A.; Bridges, James E.; Henderson, Brenda
2007-01-01
Effect of Temperature on Jet Velocity Spectra
NASA Technical Reports Server (NTRS)
Bridges, James E.; Wernet, Mark P.
2007-01-01
Statistical jet noise prediction codes that accurately predict spectral directivity for both cold and hot jets are highly sought in both industry and academia. Their formulation, whether based upon manipulations of the Navier-Stokes equations or upon heuristic arguments, requires substantial experimental observation of jet turbulence statistics. Unfortunately, the statistics of most interest involve the space-time correlation of flow quantities, especially velocity. Until the last 10 years, all turbulence statistics were made with single-point probes, such as hotwires or laser Doppler anemometry. Particle image velocimetry (PIV) brought many new insights with its ability to measure velocity fields over large regions of jets simultaneously; however, it could not measure velocity at rates higher than a few fields per second, making it unsuitable for obtaining temporal spectra and correlations. The development of time-resolved PIV, herein called TR-PIV, has removed this limitation, enabling measurement of velocity fields at high resolution in both space and time. In this paper, ground-breaking results from the application of TR-PIV to single-flow hot jets are used to explore the impact of heat on turbulent statistics of interest to jet noise models. First, a brief summary of validation studies is reported, undertaken to show that the new technique produces the same trusted results as hotwires in cold, low-speed jets. Second, velocity spectra from cold and hot jets are compared to see the effect of heat on the spectra. It is seen that heated jets possess 10 percent more turbulence intensity than unheated jets at the same velocity. The spectral shapes, when normalized using Strouhal scaling, are insensitive to temperature if the stream-wise location is normalized relative to the potential core length. Similarly, second-order velocity correlations, of interest in modeling of jet noise sources, are also insensitive to temperature.
NASA Technical Reports Server (NTRS)
Aires, Filipe; Rossow, William B.; Chedin, Alain; Hansen, James E. (Technical Monitor)
2000-01-01
The use of the Principal Component Analysis technique for the analysis of geophysical time series has been questioned, in particular for its tendency to extract components that mix several physical phenomena even when the signal is just their linear sum. We demonstrate with a data simulation experiment that Independent Component Analysis, a recently developed technique, is able to solve this problem. This new technique requires the statistical independence of the components, a stronger constraint that uses higher-order statistics, instead of the classical decorrelation, a weaker constraint that uses only second-order statistics. Furthermore, ICA does not require additional a priori information such as the localization constraint used in Rotational Techniques.
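For readers unfamiliar with the contrast between the two techniques, the following minimal sketch (illustrative only, using scikit-learn's FastICA and PCA rather than the authors' implementation) shows PCA failing to unmix a linear sum of two independent components while ICA recovers them.

```python
import numpy as np
from sklearn.decomposition import FastICA, PCA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)

# Two statistically independent source components
s1 = np.sign(np.sin(3 * t))            # square wave
s2 = rng.laplace(size=t.size)          # heavy-tailed noise
S = np.c_[s1, s2]

# Observations are a linear sum (mixing) of the components
A = np.array([[1.0, 0.5], [0.7, 1.0]])
X = S @ A.T

# PCA only decorrelates (second-order statistics) -> components stay mixed
pca_est = PCA(n_components=2).fit_transform(X)
# ICA enforces statistical independence (higher-order statistics)
ica_est = FastICA(n_components=2, random_state=0).fit_transform(X)

# Cross-correlation with the true sources: ICA rows are ~one-to-one,
# PCA rows remain mixtures.
for name, est in [("PCA", pca_est), ("ICA", ica_est)]:
    c = np.corrcoef(S.T, est.T)[:2, 2:]
    print(name, np.round(np.abs(c), 2))
```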
NASA Astrophysics Data System (ADS)
Wang, Dong
2016-03-01
Gears are the most commonly used components in mechanical transmission systems. Their failures may cause transmission system breakdown and result in economic loss. Identification of different gear crack levels is important to prevent any unexpected gear failure, because gear cracks lead to gear tooth breakage. Signal processing based methods mainly require expertise to explain gear fault signatures, which ordinary users usually cannot provide. In order to automatically identify different gear crack levels, intelligent gear crack identification methods should be developed. Previous case studies experimentally proved that K-nearest neighbors based methods exhibit high prediction accuracies for identification of 3 different gear crack levels under different motor speeds and loads. In this short communication, to further enhance the prediction accuracies of existing K-nearest neighbors based methods and to extend identification from 3 to 5 different gear crack levels, redundant statistical features are constructed by using the Daubechies 44 (db44) binary wavelet packet transform at different wavelet decomposition levels, prior to the use of a K-nearest neighbors method. The dimensionality of the redundant statistical features is 620, which provides richer gear fault signatures. Since many of these statistical features are redundant and highly correlated with each other, dimensionality reduction is conducted to obtain new significant statistical features. Finally, the K-nearest neighbors method is used to identify 5 different gear crack levels under different motor speeds and loads. A case study including 3 experiments is investigated to demonstrate that the developed method provides higher prediction accuracies than the existing K-nearest neighbors based methods for recognizing different gear crack levels under different motor speeds and loads. Based on the new significant statistical features, some other popular statistical models, including linear discriminant analysis, quadratic discriminant analysis, classification and regression trees and the naive Bayes classifier, are compared with the developed method. The results show that the developed method has the highest prediction accuracies among these statistical models. Additionally, selection of the number of new significant features and parameter selection for K-nearest neighbors are thoroughly investigated.
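A minimal sketch of the feature-construction-plus-classifier pipeline is given below. It is illustrative only: common PyWavelets builds ship Daubechies filters only up to db38, so db4 stands in for the paper's db44; the synthetic signals, feature list, and PCA step are generic placeholders for the paper's data and dimensionality reduction.

```python
import numpy as np
import pywt
from scipy.stats import kurtosis, skew
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def wp_statistical_features(signal, wavelet="db4", maxlevel=3):
    """Statistical features from every wavelet packet node up to maxlevel."""
    wp = pywt.WaveletPacket(signal, wavelet=wavelet, maxlevel=maxlevel)
    feats = []
    for level in range(1, maxlevel + 1):
        for node in wp.get_level(level, order="natural"):
            c = node.data
            feats += [c.mean(), c.std(), skew(c), kurtosis(c),
                      np.sqrt(np.mean(c**2)), np.abs(c).max()]
    return np.array(feats)

# Placeholder vibration segments and crack-level labels (5 levels)
rng = np.random.default_rng(1)
X_raw = rng.standard_normal((60, 1024))
y = np.repeat(np.arange(5), 12)

X = np.vstack([wp_statistical_features(s) for s in X_raw])
clf = make_pipeline(StandardScaler(), PCA(n_components=10),
                    KNeighborsClassifier(n_neighbors=5))
clf.fit(X, y)
print(X.shape, clf.score(X, y))  # feature matrix size, training accuracy
```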
Scarpazza, Cristina; Nichols, Thomas E; Seramondi, Donato; Maumet, Camille; Sartori, Giuseppe; Mechelli, Andrea
2016-01-01
In recent years, an increasing number of studies have used Voxel Based Morphometry (VBM) to compare a single patient with a psychiatric or neurological condition of interest against a group of healthy controls. However, the validity of this approach critically relies on the assumption that the single patient is drawn from a hypothetical population with a normal distribution and variance equal to that of the control group. In a previous investigation, we demonstrated that family-wise false positive error rate (i.e., the proportion of statistical comparisons yielding at least one false positive) in single case VBM are much higher than expected (Scarpazza et al., 2013). Here, we examine whether the use of non-parametric statistics, which does not rely on the assumptions of normal distribution and equal variance, would enable the investigation of single subjects with good control of false positive risk. We empirically estimated false positive rates (FPRs) in single case non-parametric VBM, by performing 400 statistical comparisons between a single disease-free individual and a group of 100 disease-free controls. The impact of smoothing (4, 8, and 12 mm) and type of pre-processing (Modulated, Unmodulated) was also examined, as these factors have been found to influence FPRs in previous investigations using parametric statistics. The 400 statistical comparisons were repeated using two independent, freely available data sets in order to maximize the generalizability of the results. We found that the family-wise error rate was 5% for increases and 3.6% for decreases in one data set; and 5.6% for increases and 6.3% for decreases in the other data set (5% nominal). Further, these results were not dependent on the level of smoothing and modulation. Therefore, the present study provides empirical evidence that single case VBM studies with non-parametric statistics are not susceptible to high false positive rates. The critical implication of this finding is that VBM can be used to characterize neuroanatomical alterations in individual subjects as long as non-parametric statistics are employed.
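A minimal sketch of the underlying idea, a non-parametric (permutation) comparison of one case against a control group, is shown below; the test statistic and the empirical false-positive-rate check are illustrative simplifications of voxel-wise VBM.

```python
import numpy as np

def single_case_permutation_p(patient, controls, n_perm=10000, seed=0):
    """Permutation test of whether a single case differs from controls,
    without assuming normality or equal variance."""
    rng = np.random.default_rng(seed)
    pooled = np.append(controls, patient)
    observed = abs(patient - controls.mean())
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        # relabel: last element plays the role of the "patient"
        count += abs(perm[-1] - perm[:-1].mean()) >= observed
    return (count + 1) / (n_perm + 1)

# Empirical FPR check: draw the "patient" from the same population as the
# 100 controls, so rejections at alpha=0.05 should occur ~5% of the time.
rng = np.random.default_rng(42)
fpr = np.mean([single_case_permutation_p(rng.standard_normal(),
                                         rng.standard_normal(100),
                                         n_perm=500) < 0.05
               for _ in range(200)])
print(f"empirical FPR ~ {fpr:.3f}")
```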
Supervised Classification Techniques for Hyperspectral Data
NASA Technical Reports Server (NTRS)
Jimenez, Luis O.
1997-01-01
The recent development of more sophisticated remote sensing systems enables the measurement of radiation in many more spectral intervals than previously possible. An example of this technology is the AVIRIS system, which collects image data in 220 bands. The increased dimensionality of such hyperspectral data provides a challenge to current techniques for analyzing such data. Human experience in three-dimensional space tends to mislead one's intuition of geometrical and statistical properties in high-dimensional space, properties which must guide our choices in the data analysis process. In this paper, high-dimensional space properties are discussed along with their implications for high-dimensional data analysis, in order to illuminate the next steps that need to be taken for the next generation of hyperspectral data classifiers.
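One of the high-dimensional properties alluded to, the concentration of pairwise distances, can be demonstrated in a few lines (illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
for d in (3, 20, 220):  # 220 ~ AVIRIS band count
    X = rng.uniform(size=(500, d))
    dists = np.linalg.norm(X - X[0], axis=1)[1:]
    contrast = (dists.max() - dists.min()) / dists.min()
    print(f"d={d:4d}  relative distance contrast = {contrast:.2f}")
# The contrast shrinks with dimension: nearest and farthest points become
# nearly equidistant, undermining intuition built in 3-D space.
```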
25 CFR 12.41 - Who keeps statistics for Indian country law enforcement activities?
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 1 2012-04-01 2011-04-01 true Who keeps statistics for Indian country law enforcement activities? 12.41 Section 12.41 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER INDIAN COUNTRY LAW ENFORCEMENT Records and Information § 12.41 Who keeps statistics for Indian country...
25 CFR 12.41 - Who keeps statistics for Indian country law enforcement activities?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 1 2013-04-01 2013-04-01 false Who keeps statistics for Indian country law enforcement activities? 12.41 Section 12.41 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER INDIAN COUNTRY LAW ENFORCEMENT Records and Information § 12.41 Who keeps statistics for Indian country...
25 CFR 12.41 - Who keeps statistics for Indian country law enforcement activities?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 1 2014-04-01 2014-04-01 false Who keeps statistics for Indian country law enforcement activities? 12.41 Section 12.41 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER INDIAN COUNTRY LAW ENFORCEMENT Records and Information § 12.41 Who keeps statistics for Indian country...
ERIC Educational Resources Information Center
Cassel, Russell N.
This paper relates educational and psychological statistics to certain "Research Statistical Tools" (RSTs) necessary to accomplish and understand general research in the behavioral sciences. Emphasis is placed on acquiring an effective understanding of the RSTs, and to this end they are ordered on a continuum scale in terms of individual…
Linking Science and Statistics: Curriculum Expectations in Three Countries
ERIC Educational Resources Information Center
Watson, Jane M.
2017-01-01
This paper focuses on the curriculum links between statistics and science that teachers need to understand and apply in order to be effective teachers of the two fields of study. Meaningful statistics does not exist without context and science is the context for this paper. Although curriculum documents differ from country to country, this paper…
A First Assignment to Create Student Buy-In in an Introductory Business Statistics Course
ERIC Educational Resources Information Center
Newfeld, Daria
2016-01-01
This paper presents a sample assignment to be administered after the first two weeks of an introductory business focused statistics course in order to promote student buy-in. This assignment integrates graphical displays of data, descriptive statistics and cross-tabulation analysis through the lens of a marketing analysis study. A marketing sample…
Computational Complexity of Bosons in Linear Networks
2017-03-01
…photon statistics while strongly reducing emission probabilities: thus leading experimental teams pursuing large-scale BOSONSAMPLING have faced a hard… Potentially, this could motivate new validation protocols exploiting statistics that include this temporal degree of freedom. The impact of… photon statistics polluted by higher-order terms, which can be mistakenly interpreted as decreased photon indistinguishability. In fact, in many cases…
Williams, Mobolaji
2018-01-01
The field of disordered systems in statistical physics provides many simple models in which the competing influences of thermal and nonthermal disorder lead to new phases and nontrivial thermal behavior of order parameters. In this paper, we add a model to the subject by considering a disordered system where the state space consists of various orderings of a list. As in spin glasses, the disorder of such "permutation glasses" arises from a parameter in the Hamiltonian being drawn from a distribution of possible values, thus allowing nominally "incorrect orderings" to have lower energies than "correct orderings" in the space of permutations. We analyze a Gaussian, uniform, and symmetric Bernoulli distribution of energy costs, and, by employing Jensen's inequality, derive a simple condition requiring the permutation glass to always transition to the correctly ordered state at a temperature lower than that of the nondisordered system, provided that this correctly ordered state is accessible. We in turn find that in order for the correctly ordered state to be accessible, the probability that an incorrectly ordered component is energetically favored must be less than the inverse of the number of components in the system. We show that all of these results are consistent with a replica symmetric ansatz of the system. We conclude by arguing that there is no distinct permutation glass phase for the simplest model considered here and by discussing how to extend the analysis to more complex Hamiltonians capable of novel phase behavior and replica symmetry breaking. Finally, we outline an apparent correspondence between the presented system and a discrete-energy-level fermion gas. In all, the investigation introduces a class of exactly soluble models into statistical mechanics and provides a fertile ground to investigate statistical models of disorder.
NASA Astrophysics Data System (ADS)
Yuan, Wuhan; Mohabir, Amar; Tutuncuoglu, Gozde; Filler, Michael; Feldman, Leonard; Shan, Jerry
2017-11-01
Solution-based, contactless methods for determining the electrical conductivity of nanowires and nanotubes have unique advantages over conventional techniques in terms of high throughput and compatibility with further solution-based processing and assembly methods. Here, we describe the solution-based electro-orientation spectroscopy (EOS) method, in which nanowire conductivity is measured from the AC-electric-field-induced alignment rate of the nanowire in a suspending fluid. The particle conductivity is determined from the measured crossover frequency between conductivity-dominated, low-frequency alignment and the permittivity-dominated, high-frequency regime. We discuss the extension of the EOS measurement range by an order of magnitude, taking advantage of the high dielectric constant of deionized water. With water and other fluids, we demonstrate that EOS can quantitatively characterize the electrical conductivities of nanowires over a 7-order-of-magnitude range, 10^-5 to 10^2 S/m. We highlight the efficiency and utility of EOS for nanomaterial characterization by statistically characterizing the variability of semiconductor nanowires of the same nominal composition, and studying the connection between synthesis parameters and properties. NSF CBET-1604931.
Understanding the latent structure of the emotional disorders in children and adolescents.
Trosper, Sarah E; Whitton, Sarah W; Brown, Timothy A; Pincus, Donna B
2012-05-01
Investigators are persistently aiming to clarify structural relationships among the emotional disorders in efforts to improve diagnostic classification. The high co-occurrence of anxiety and mood disorders, however, has led investigators to portray the current structure of anxiety and depression in the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV, APA 2000) as more descriptive than empirical. This study assesses various structural models in a clinical sample of youths with emotional disorders. Three a priori factor models were tested, and the model that provided the best fit to the data showed the dimensions of anxiety and mood disorders to be hierarchically organized within a single, higher-order factor. This supports the prevailing view that the co-occurrence of anxiety and mood disorders in children is in part due to a common vulnerability (e.g., negative affectivity). Depression and generalized anxiety loaded more highly onto the higher-order factor than the other disorders, a possible explanation for the particularly high rates of comorbidity between the two. Implications for the taxonomy and treatment of mood and anxiety disorders for children and adolescents are discussed.
Tomographic iterative reconstruction of a passive scalar in a 3D turbulent flow
NASA Astrophysics Data System (ADS)
Pisso, Ignacio; Kylling, Arve; Cassiani, Massimo; Solveig Dinger, Anne; Stebel, Kerstin; Schmidbauer, Norbert; Stohl, Andreas
2017-04-01
Turbulence in stable planetary boundary layers, often encountered at high latitudes, influences the exchange fluxes of heat, momentum, water vapor and greenhouse gases between the Earth's surface and the atmosphere. In climate and meteorological models, such effects of turbulence need to be parameterized, ultimately based on experimental data. A novel experimental approach is being developed within the COMTESSA project in order to study turbulence statistics at high resolution. Using controlled tracer releases, high-resolution camera images and estimates of the background radiation, different tomographic algorithms can be applied in order to obtain time series of 3D representations of the scalar dispersion. In this preliminary work, using synthetic data, we investigate different reconstruction algorithms with emphasis on algebraic methods. We study the dependence of the reconstruction quality on the discretization resolution and the geometry of the experimental device in both 2-D and 3-D cases. We assess the computational aspects of the iterative algorithms, focusing on the phenomenon of semi-convergence and applying a variety of stopping rules. We discuss different strategies for error reduction and regularization of the ill-posed problem.
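A minimal sketch of one such algebraic method, Kaczmarz/ART with a discrepancy-principle stopping rule to exploit semi-convergence, is given below; the toy ray geometry and noise level are made up for illustration.

```python
import numpy as np

def kaczmarz_art(A, b, n_sweeps=50, relax=1.0, noise_level=None):
    """Algebraic (Kaczmarz/ART) reconstruction with a Morozov-style
    stopping rule to exploit semi-convergence on noisy data."""
    x = np.zeros(A.shape[1])
    row_norm2 = (A**2).sum(axis=1)
    for sweep in range(n_sweeps):
        for i in range(A.shape[0]):
            x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
        residual = np.linalg.norm(A @ x - b)
        if noise_level is not None and residual <= noise_level:
            break  # stop before the iteration starts fitting the noise
    return x, sweep + 1

# Toy 1-D "tracer column density" problem with noisy ray sums
rng = np.random.default_rng(3)
x_true = np.zeros(100); x_true[40:60] = 1.0      # a plume slab
A = rng.uniform(size=(80, 100))                  # random ray geometry
noise = 0.05 * rng.standard_normal(80)
b = A @ x_true + noise
x_rec, sweeps_used = kaczmarz_art(A, b, noise_level=np.linalg.norm(noise))
print(sweeps_used, np.linalg.norm(x_rec - x_true))
```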
Real-time movement detection and analysis for video surveillance applications
NASA Astrophysics Data System (ADS)
Hueber, Nicolas; Hennequin, Christophe; Raymond, Pierre; Moeglin, Jean-Pierre
2014-06-01
Pedestrian movement along critical infrastructure such as pipes, railways or highways is of major interest in surveillance applications, as is pedestrian behavior in urban environments. The goal is to anticipate illicit or dangerous human activities. For this purpose, we propose an all-in-one small autonomous system which delivers high-level statistics and reports alerts in specific cases. This situational awareness project leads us to manage the scene efficiently by performing movement analysis. A dynamic background extraction algorithm is developed to reach the required degree of robustness against natural and urban environment perturbations and also to match the embedded implementation constraints. When changes are detected in the scene, specific patterns are applied to detect and highlight relevant movements. Depending on the application, specific descriptors can be extracted and fused in order to reach a high level of interpretation. In this paper, our approach is applied to two operational use cases: pedestrian urban statistics and railway surveillance. In the first case, a grid of prototypes is deployed over a city centre to collect pedestrian movement statistics up to a macroscopic level of analysis. The results demonstrate the relevance of the delivered information; in particular, the flow density map highlights pedestrian preferential paths along the streets. In the second case, one prototype is set next to high-speed train tracks to secure the area. The results exhibit a low false alarm rate and support our approach of a large sensor network delivering a precise operational picture without overwhelming a supervisor.
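A minimal sketch of a dynamic background extraction step (an exponentially weighted running average with an adaptive threshold, a generic stand-in for the authors' embedded algorithm) might look like this:

```python
import numpy as np

def update_background(frame, background, alpha=0.02, k=4.0):
    """One step of a dynamic (exponentially weighted) background model.
    Returns the updated background and a boolean motion mask."""
    diff = np.abs(frame.astype(float) - background)
    thresh = k * max(diff.std(), 1.0)     # adaptive threshold
    mask = diff > thresh
    # Adapt slowly, and only where no motion is detected, so lighting
    # drift is absorbed without "learning" the moving object.
    background = np.where(mask, background,
                          (1 - alpha) * background + alpha * frame)
    return background, mask

# Usage with a stream of grayscale frames (H, W):
# bg = first_frame.astype(float)
# for frame in frames:
#     bg, motion = update_background(frame, bg)
```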
NASA Astrophysics Data System (ADS)
Mosher, J.; Kaplan, L. A.; Kan, J.; Findlay, R. H.; Podgorski, D. C.; McKenna, A. M.; Branan, T. L.; Griffith, C.
2013-12-01
The River Continuum Concept (RCC), an early meta-ecosystem idea, was developed without the benefit of new frontiers in molecular microbial ecology and ultra-high resolution mass spectrometry. We have applied technical advances in these areas to address a hypothesis implicit in the RCC: that the upstream legacy of DOM processing contributes to the structure and function of downstream bacterial communities. DOM molecular structure and microbial community structure were measured across river networks within three distinct forested catchments. High-throughput pyrosequencing of bacterial 16S rRNA amplicons and phospholipid fatty acid analysis were used to characterize bacterial communities, and ultra-high resolution Fourier transform ion cyclotron resonance mass spectrometry characterized the molecular composition of stream water DOM. Total microbial biomass varied among river networks but showed a trend of decreasing biomass in sediment with increasing stream order. There were distinct shifts in bacterial community structure, and a trend of decreasing richness was observed traveling downstream in both sediment and epilithic habitats. The bacterial richness in the first order stream sediment habitats was 7728 genera, which decreased to 6597 genera in the second order sites and 4867 genera in the third order streams. The richness in the epilithic biofilm habitats was 2830 genera in the first order, 2322 genera in the second order and 1629 genera in the third order sites. Over 45% of the sediment biofilm genera and 37% of the epilithic genera were found in all three orders. In addition to shifts in bacterial richness, we observed a longitudinal shift in bacterial functional types. In the sediment biofilms, Rhodoplanes spp. (containing rhodopsin pigment) and Bradyrhizobium spp. (nitrogen fixing bacteria) were predominantly found in the heavily forested first order streams, while the cyanobacterium Limnothrix spp. was dominant in the second order streams. The third order streams had higher abundances of Sphingomonadaceae spp. and Nordella spp. (both Alphaproteobacteria). The cyanobacterium Chamaesiphon spp. was observed in highest abundance in the first and second order streams of the rock biofilm samples, and the cyanobacterium Oscillatoria spp. was in highest abundance in the third order streams. Stream water samples from all orders had high lignin/tannin content and were enriched with carboxylic-rich alicyclic molecules (CRAM). There was an observable shift in the molecular weight and relative abundance of the CRAM molecules, with CRAM becoming less abundant and of lower molecular weight along the downstream gradient. Multivariate statistical analyses correlated the longitudinal patterns of change in bacterial community structure to the DOM molecular structure and geochemical parameters across the river continuum.
Parallel auto-correlative statistics with VTK.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pebay, Philippe Pierre; Bennett, Janine Camille
2013-08-01
This report summarizes existing statistical engines in VTK and presents both the serial and parallel auto-correlative statistics engines. It is a sequel to [PT08, BPRT09b, PT09, BPT09, PT10], which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, and order statistics engines. The ease of use of the new parallel auto-correlative statistics engine is illustrated by means of C++ code snippets, and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the auto-correlative statistics engine.
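The quantity such an engine computes can be sketched in a few lines; the Python below (illustrative only, not the VTK C++ API) computes lag autocorrelation coefficients and checks them against an AR(1) series whose theoretical lag-l autocorrelation is 0.8^l.

```python
import numpy as np

def autocorrelative_statistics(x, max_lag):
    """Lag-l autocorrelation coefficients, the quantity a (serial or
    parallel) auto-correlative engine aggregates from data slices."""
    x = np.asarray(x, dtype=float)
    mu, var = x.mean(), x.var()
    return np.array([np.mean((x[:len(x) - l] - mu) * (x[l:] - mu)) / var
                     for l in range(1, max_lag + 1)])

rng = np.random.default_rng(0)
ar1 = np.zeros(10000)
for t in range(1, ar1.size):            # AR(1) with coefficient 0.8
    ar1[t] = 0.8 * ar1[t - 1] + rng.standard_normal()
print(np.round(autocorrelative_statistics(ar1, 5), 2))  # ~0.8, 0.64, ...
```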
NASA Technical Reports Server (NTRS)
Smith, Eric A.; Mugnai, Alberto; Cooper, Harry J.; Tripoli, Gregory J.; Xiang, Xuwu
1992-01-01
The relationship between emerging microwave brightness temperatures (T(B)s) and vertically distributed mixtures of liquid and frozen hydrometeors was investigated, using a cloud-radiation model, in order to establish the framework for a hybrid statistical-physical rainfall retrieval algorithm. Although strong relationships were found between the T(B) values and various rain parameters, these correlations are misleading in that the T(B)s are largely controlled by fluctuations in the ice-particle mixing ratios, which in turn are highly correlated to fluctuations in liquid-particle mixing ratios. However, the empirically based T(B)-rain-rate (T(B)-RR) algorithms can still be used as tools for estimating precipitation if the hydrometeor profiles used for T(B)-RR algorithms are not specified in an ad hoc fashion.
Using the MCNP Taylor series perturbation feature (efficiently) for shielding problems
NASA Astrophysics Data System (ADS)
Favorite, Jeffrey
2017-09-01
The Taylor series or differential operator perturbation method, implemented in MCNP and invoked using the PERT card, can be used for efficient parameter studies in shielding problems. This paper shows how only two PERT cards are needed to generate an entire parameter study, including statistical uncertainty estimates (an additional three PERT cards can be used to give exact statistical uncertainties). One realistic example problem involves a detailed helium-3 neutron detector model and its efficiency as a function of the density of its high-density polyethylene moderator. The MCNP differential operator perturbation capability is extremely accurate for this problem. A second problem involves the density of the polyethylene reflector of the BeRP ball and is an example of first-order sensitivity analysis using the PERT capability. A third problem is an analytic verification of the PERT capability.
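The essence of the Taylor series (differential operator) approach is that one unperturbed tally plus first- and second-order coefficients suffice for a whole parameter study. A hedged sketch follows, with made-up coefficient values standing in for PERT-card output (the exact card setup is described in the paper):

```python
import numpy as np

def perturbed_response(r0, c1, c2, drho):
    """Second-order Taylor (differential-operator) estimate of a tally as
    the fractional density perturbation drho is varied."""
    return r0 * (1.0 + c1 * drho + c2 * drho**2)

# r0: unperturbed tally; c1, c2: first- and second-order coefficients
# (obtainable from PERT-card results); the values below are illustrative.
r0, c1, c2 = 3.2e-4, 0.85, -0.12
for drho in np.linspace(-0.2, 0.2, 9):
    print(f"drho={drho:+.2f}  tally={perturbed_response(r0, c1, c2, drho):.4e}")
```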
Stimulated Electronic X-Ray Raman Scattering
NASA Astrophysics Data System (ADS)
Weninger, Clemens; Purvis, Michael; Ryan, Duncan; London, Richard A.; Bozek, John D.; Bostedt, Christoph; Graf, Alexander; Brown, Gregory; Rocca, Jorge J.; Rohringer, Nina
2013-12-01
We demonstrate strong stimulated inelastic x-ray scattering by resonantly exciting a dense gas target of neon with femtosecond, high-intensity x-ray pulses from an x-ray free-electron laser (XFEL). A small number of lower energy XFEL seed photons drive an avalanche of stimulated resonant inelastic x-ray scattering processes that amplify the Raman scattering signal by several orders of magnitude until it reaches saturation. Despite the large overall spectral width, the internal spiky structure of the XFEL spectrum determines the energy resolution of the scattering process in a statistical sense. This is demonstrated by observing a stochastic line shift of the inelastically scattered x-ray radiation. In conjunction with statistical methods, XFELs can be used for stimulated resonant inelastic x-ray scattering, with spectral resolution smaller than the natural width of the core-excited, intermediate state.
Statistical Entropy of Vaidya-de Sitter Black Hole to All Orders in Planck Length
NASA Astrophysics Data System (ADS)
Sun, HangBin; He, Feng; Huang, Hai
2012-06-01
Considering corrections to all orders in the Planck length on the quantum state density from the generalized uncertainty principle, we calculate the statistical entropy of a scalar field near the event horizon and the cosmological horizon of the Vaidya-de Sitter black hole without any artificial cutoff. It is shown that the entropy is a linear sum of the event horizon area and the cosmological horizon area, with similar proportionality parameters related to the changing rate of the horizon position. This is different from the static and stationary cases.
Textural content in 3T MR: an image-based marker for Alzheimer's disease
NASA Astrophysics Data System (ADS)
Bharath Kumar, S. V.; Mullick, Rakesh; Patil, Uday
2005-04-01
In this paper, we propose a study that investigates the first-order and second-order distributions of T2 images from magnetic resonance (MR) scans for an age-matched data set of 24 Alzheimer's disease and 17 normal patients. The study is motivated by the desire to analyze brain iron uptake in the hippocampus of Alzheimer's patients, which is captured by low T2 values. Since excess iron deposition occurs locally in certain regions of the brain, we are motivated to investigate the spatial distribution of T2, which is captured by higher-order statistics. Based on the first-order and second-order distributions (involving the gray level co-occurrence matrix) of T2, we show that the second-order statistics provide features with sensitivity >90% (at 80% specificity), which in turn capture the textural content in T2 data. Hence, we argue that the different texture characteristics of T2 in the hippocampus of Alzheimer's and normal patients could be used as an early indicator of Alzheimer's disease.
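A minimal sketch of extracting such second-order (GLCM) features, using scikit-image's graycomatrix/graycoprops on a synthetic T2 patch (illustrative only; the ROI definition, gray-level count, and feature choice are placeholders):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # skimage >= 0.19

def second_order_t2_features(t2_roi, levels=32):
    """Second-order (GLCM) statistics of a T2 map over an ROI."""
    # Quantize the T2 values into `levels` gray levels
    edges = np.linspace(t2_roi.min(), t2_roi.max(), levels)
    img = (np.digitize(t2_roi, edges) - 1).astype(np.uint8)
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return {p: graycoprops(glcm, p).mean()
            for p in ("contrast", "homogeneity", "energy", "correlation")}

roi = np.random.default_rng(0).normal(90, 10, size=(32, 32))  # fake T2 (ms)
print(second_order_t2_features(roi))
```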
NASA Astrophysics Data System (ADS)
Keyser, A.; Westerling, A. L.; Jones, G.; Peery, M. Z.
2017-12-01
Sierra Nevada forests have experienced an increase in very large fires with significant areas of high burn severity, such as the Rim (2013) and King (2014) fires, which have impacted habitat of endangered species such as the California spotted owl. In order to support land managers' forest management planning and risk assessment activities, we used historical wildfire records from the Monitoring Trends in Burn Severity project together with gridded hydroclimate and land surface characteristics data to develop statistical models that simulate the frequency, location and extent of high-severity burned area in Sierra Nevada forest wildfires as functions of climate and land surface characteristics. We define high severity here as BA90 area: the area comprising patches with ninety percent or more basal area killed within a larger fire. We developed a system of statistical models to characterize the probability of large fire occurrence, the probability of significant BA90 area given a large fire, and the total extent of BA90 area in a fire, on a 1/16 degree lat/lon grid over the Sierra Nevada. Repeated draws from binomial and generalized Pareto distributions using these probabilities generated a library of simulated histories of high-severity fire for a range of near-term (50 yr) future climate and fuels management scenarios. Fuels management scenarios were provided by USFS Region 5. Simulated BA90 area was then downscaled to 30 m resolution using a statistical model, developed with Random Forest techniques, that estimates the probability of adjacent 30 m pixels burning with ninety percent basal kill as a function of fire size and vegetation and topographic features. The result is a library of simulated high-resolution maps of BA90 burned areas for a range of climate and fuels management scenarios, with which we estimated conditional probabilities of owl nesting sites being impacted by high-severity wildfire.
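A single-grid-cell sketch of the simulation step (illustrative parameter values only, using scipy's generalized Pareto distribution) might look like this:

```python
import numpy as np
from scipy.stats import genpareto

def simulate_ba90(p_fire, p_ba90, gp_shape, gp_scale, n_years, rng):
    """One grid cell: annual large-fire occurrence (binomial), presence of
    significant BA90 area given a fire (binomial), and BA90 extent drawn
    from a generalized Pareto distribution. Parameters are illustrative."""
    fires = rng.binomial(1, p_fire, n_years)
    has_ba90 = fires * rng.binomial(1, p_ba90, n_years)
    extent = has_ba90 * genpareto.rvs(gp_shape, scale=gp_scale,
                                      size=n_years, random_state=rng)
    return extent  # BA90 area per year (e.g., ha)

rng = np.random.default_rng(7)
history = simulate_ba90(0.08, 0.6, 0.3, 150.0, 50, rng)
print(history.max(), history[history > 0])
```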
Statistical Techniques to Analyze Pesticide Data Program Food Residue Observations.
Szarka, Arpad Z; Hayworth, Carol G; Ramanarayanan, Tharacad S; Joseph, Robert S I
2018-06-26
The U.S. EPA conducts dietary-risk assessments to ensure that levels of pesticides on food in the U.S. food supply are safe. Often these assessments utilize conservative residue estimates, maximum residue levels (MRLs), and a high-end estimate derived from registrant-generated field-trial data sets. A more realistic estimate of consumers' pesticide exposure from food may be obtained by utilizing residues from food-monitoring programs, such as the Pesticide Data Program (PDP) of the U.S. Department of Agriculture. A substantial portion of food-residue concentrations in PDP monitoring programs are below the limits of detection (left-censored), which makes the comparison of regulatory-field-trial and PDP residue levels difficult. In this paper, we present a novel adaptation of established statistical techniques, the Kaplan-Meier estimator (K-M), robust regression on order statistics (ROS), and the maximum-likelihood estimator (MLE), to quantify pesticide-residue concentrations in the presence of heavily censored data sets. The examined statistical approaches include the most commonly used parametric and nonparametric methods for handling left-censored data in the fields of medical and environmental sciences. This work presents a case study in which data on thiamethoxam residue on bell pepper generated from registrant field trials were compared with PDP-monitoring residue values. The results from the statistical techniques were evaluated and compared with commonly used simple substitution methods for the determination of summary statistics. It was found that the MLE is the most appropriate statistical method to analyze this residue data set. Using the MLE technique, the data analyses showed that the median and mean PDP bell pepper residue levels were approximately 19 and 7 times lower, respectively, than the corresponding statistics of the field-trial residues.
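A minimal sketch of the MLE approach for a left-censored lognormal residue distribution (a single common limit of detection and synthetic data; illustrative only) is given below.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def censored_lognormal_mle(detects, n_nondetect, lod):
    """MLE of a lognormal fit when non-detects are left-censored at one
    limit of detection (lod). Returns (mu, sigma) on the log scale."""
    logs = np.log(detects)

    def nll(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)
        ll_det = norm.logpdf(logs, mu, sigma).sum()                 # detects
        ll_cen = n_nondetect * norm.logcdf(np.log(lod), mu, sigma)  # censored
        return -(ll_det + ll_cen)

    res = minimize(nll, x0=[logs.mean(), np.log(logs.std() + 1e-6)])
    return res.x[0], np.exp(res.x[1])

# Illustrative data: ~half the residues fall below the detection limit
rng = np.random.default_rng(0)
sample = rng.lognormal(mean=-3.0, sigma=1.0, size=100)
lod = 0.05
detects = sample[sample >= lod]
mu, sigma = censored_lognormal_mle(detects, (sample < lod).sum(), lod)
print(mu, sigma, "median:", np.exp(mu))
```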
Groen, Iris I. A.; Ghebreab, Sennay; Lamme, Victor A. F.; Scholte, H. Steven
2012-01-01
The visual world is complex and continuously changing. Yet, our brain transforms patterns of light falling on our retina into a coherent percept within a few hundred milliseconds. Possibly, low-level neural responses already carry substantial information to facilitate rapid characterization of the visual input. Here, we computationally estimated low-level contrast responses to computer-generated naturalistic images, and tested whether spatial pooling of these responses could predict image similarity at the neural and behavioral level. Using EEG, we show that statistics derived from pooled responses explain a large amount of variance between single-image evoked potentials (ERPs) in individual subjects. Dissimilarity analysis on multi-electrode ERPs demonstrated that large differences between images in pooled response statistics are predictive of more dissimilar patterns of evoked activity, whereas images with little difference in statistics give rise to highly similar evoked activity patterns. In a separate behavioral experiment, images with large differences in statistics were judged as different categories, whereas images with little differences were confused. These findings suggest that statistics derived from low-level contrast responses can be extracted in early visual processing and can be relevant for rapid judgment of visual similarity. We compared our results with two other well-known contrast statistics: Fourier power spectra and higher-order properties of contrast distributions (skewness and kurtosis). Interestingly, whereas these statistics allow for accurate image categorization, they do not predict ERP response patterns or behavioral categorization confusions. These converging computational, neural and behavioral results suggest that statistics of pooled contrast responses contain information that corresponds with perceived visual similarity in a rapid, low-level categorization task. PMID:23093921
Advances in high-resolution mass spectrometry based on metabolomics studies for food--a review.
Rubert, Josep; Zachariasova, Milena; Hajslova, Jana
2015-01-01
Food authenticity has become a necessity for global food policies, since food placed on the market must be authentic. Verifying it has always been a challenge, since in the past minor components, also called markers, were mainly monitored by chromatographic methods in order to authenticate food. Nowadays, however, advanced analytical methods have made food fingerprints achievable. At the same time, they have been combined with chemometrics, which uses statistical methods to verify food and to extract maximum information from chemical data. These sophisticated methods, based on different separation techniques or used stand-alone, have recently been coupled to high-resolution mass spectrometry (HRMS) in order to verify the authenticity of food. The new generation of HRMS detectors has seen significant advances in resolving power, sensitivity, robustness, extended dynamic range, easier mass calibration and tandem mass capabilities, making HRMS more attractive and useful to the food metabolomics community and therefore a reliable tool for food authenticity. The purpose of this review is to summarise and describe the most recent metabolomics approaches in the area of food metabolomics, and to discuss the strengths and drawbacks of HRMS analytical platforms combined with chemometrics.
Pharmacokinetics of doxylamine, a component of Bendectin, in the rhesus monkey.
Slikker, W; Holder, C L; Lipe, G W; Bailey, J R; Young, J F
1989-01-01
The elimination of doxylamine and metabolites was determined after iv administration of [14C]doxylamine succinate at 0.7 and 13.3 mg/kg to the adult female rhesus monkey. Although the total recovery of radioactivity was the same for the low- and high-dose studies (90.2%), the rate of plasma elimination of doxylamine and its demethylated metabolite (desmethyldoxylamine) was slower for the high dose group. The 24 hr urinary excretion of doxylamine metabolites, desmethyl- and didesmethyldoxylamine, was significantly increased and the polar doxylamine metabolites were significantly decreased as the iv doxylamine succinate dose was increased. The plasma elimination of gas chromatograph (GC)-detected doxylamine was determined after oral administration of Bendectin (doxylamine succinate and pyridoxine hydrochloride) at 7, 13.3, and 27 mg/kg to adult female rhesus monkeys. As the dose increased, the clearance of doxylamine decreased. A statistically evaluated fit of the oral data to a single-compartment, parallel first-order elimination model and a single-compartment, parallel first- and second-order (Michaelis-Menten) elimination model indicated that the more complex model containing the second-order process was most consistent with the observed elimination data.
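A minimal sketch of the more complex model, one-compartment elimination with parallel first-order and Michaelis-Menten pathways (parameter values are made up, not the fitted monkey data):

```python
import numpy as np
from scipy.integrate import solve_ivp

def parallel_elimination(t, c, k1, vmax, km):
    """One-compartment model with parallel first-order (k1*C) and
    Michaelis-Menten (Vmax*C/(Km+C)) elimination."""
    return [-k1 * c[0] - vmax * c[0] / (km + c[0])]

# Illustrative parameters only (h^-1, ug/ml/h, ug/ml), not fitted values
k1, vmax, km, c0 = 0.15, 0.8, 2.0, 10.0
sol = solve_ivp(parallel_elimination, (0, 24), [c0], args=(k1, vmax, km),
                dense_output=True)
t = np.linspace(0, 24, 7)
print(np.round(sol.sol(t)[0], 3))  # plasma concentration vs time
```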
NASA Technical Reports Server (NTRS)
Pan, Jianqiang
1992-01-01
This work addresses several important problems in the fields of signal processing and model identification, such as system structure identification, frequency response determination, high-order model reduction, high-resolution frequency analysis, and deconvolution filtering. Each of these topics involves a wide range of applications and has received considerable attention. Using Fourier-based sinusoidal modulating signals, it is shown that a discrete autoregressive model can be constructed for the least squares identification of continuous systems. Identification algorithms are presented for frequency response determination of both SISO and MIMO systems using only transient data. Several new schemes for model reduction were also developed. Based upon complex sinusoidal modulating signals, a parametric least squares algorithm for high-resolution frequency estimation is proposed. Numerical examples show that the proposed algorithm gives better performance than the usual methods. Also studied is the problem of deconvolution and parameter identification of a general noncausal, nonminimum-phase ARMA system driven by non-Gaussian stationary random processes. Algorithms are introduced for inverse cumulant estimation, both in the frequency domain via FFT algorithms and in the time domain via the least squares algorithm.
The 1993 Mississippi river flood: A one hundred or a one thousand year event?
Malamud, B.D.; Turcotte, D.L.; Barton, C.C.
1996-01-01
Power-law (fractal) extreme-value statistics are applicable to many natural phenomena under a wide variety of circumstances. Data from a hydrologic station in Keokuk, Iowa, show that the great flood of the Mississippi River in 1993 has a recurrence interval on the order of 100 years using power-law statistics applied to the partial-duration flood series, and on the order of 1,000 years using a log-Pearson type 3 (LP3) distribution applied to the annual series. The LP3 analysis is the federally adopted probability distribution for flood-frequency estimation of extreme events. We suggest that power-law statistics are preferable to LP3 analysis. As a further test of the power-law approach, we consider paleoflood data from the Colorado River. We compare power-law and LP3 extrapolations of historical data with these paleofloods. The results are remarkably similar to those obtained for the Mississippi River: recurrence intervals from power-law statistics applied to Lees Ferry discharge data are generally consistent with inferred 100- and 1,000-year paleofloods, whereas LP3 analysis gives recurrence intervals that are orders of magnitude longer. For both the Keokuk and Lees Ferry gauges, the use of an annual series introduces an artificial curvature in log-log space that leads to an underestimate of severe floods. Power-law statistics predict much shorter recurrence intervals than the federally adopted LP3 statistics. We suggest that if power-law behavior is applicable, then the likelihood of severe floods is much higher. More conservative dam designs and land-use restrictions may be required.
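A minimal sketch of the power-law approach applied to a partial-duration series (synthetic peaks, not the Keokuk or Lees Ferry data): the fit is a straight line in log-log space, and extreme discharges are extrapolated along it.

```python
import numpy as np

def powerlaw_recurrence(discharges, years_of_record):
    """Fit log10(Q) vs log10(T) for a partial-duration series, where the
    empirical recurrence interval of the rank-r peak is T = years/r."""
    q = np.sort(np.asarray(discharges, dtype=float))[::-1]  # largest first
    T = years_of_record / np.arange(1, q.size + 1)
    slope, intercept = np.polyfit(np.log10(T), np.log10(q), 1)

    def predict(T_target):
        return 10.0**(intercept + slope * np.log10(T_target))

    return slope, predict

# Illustrative heavy-tailed peak-flow series
rng = np.random.default_rng(1)
peaks = 1e4 * (1 + rng.pareto(2.5, size=120))
slope, predict = powerlaw_recurrence(peaks, years_of_record=60)
print(f"exponent={slope:.2f}, Q100={predict(100):.0f}, Q1000={predict(1000):.0f}")
```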
Statistical properties of superimposed stationary spike trains.
Deger, Moritz; Helias, Moritz; Boucsein, Clemens; Rotter, Stefan
2012-06-01
The Poisson process is an often employed model for the activity of neuronal populations. It is known, though, that superpositions of realistic, non-Poisson spike trains are not in general Poisson processes, not even for large numbers of superimposed processes. Here we construct superimposed spike trains from intracellular in vivo recordings from rat neocortex neurons and compare their statistics to specific point process models. The constructed superimposed spike trains reveal strong deviations from the Poisson model. We find that superpositions of model spike trains that take the effective refractoriness of the neurons into account yield a much better description. A minimal model of this kind is the Poisson process with dead-time (PPD). For this process, and for superpositions thereof, we obtain analytical expressions for some second-order statistical quantities, such as the count variability, inter-spike interval (ISI) variability and ISI correlations, and demonstrate the match with the in vivo data. We conclude that effective refractoriness is the key property that shapes the statistical properties of the superposition spike trains. We present new, efficient algorithms to generate superpositions of PPDs and of gamma processes that can be used to provide more realistic background input in simulations of networks of spiking neurons. Using these generators, we show in simulations that neurons which receive superimposed spike trains as input are highly sensitive to the statistical effects induced by neuronal refractoriness.
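A minimal sketch of such a generator, superimposing Poisson-with-dead-time (PPD) trains (illustrative rate and dead-time values), is given below; a count Fano factor below 1 reflects the non-Poisson character of the pooled train.

```python
import numpy as np

def ppd_spike_train(rate, dead_time, t_max, rng):
    """Poisson process with dead-time (PPD): each ISI is the dead-time
    plus an exponential whose rate is compensated to keep the target rate."""
    lam = rate / (1.0 - rate * dead_time)
    n = int(2 * rate * t_max) + 10
    spikes = np.cumsum(dead_time + rng.exponential(1.0 / lam, size=n))
    return spikes[spikes < t_max]

def superposition(n_trains, rate, dead_time, t_max, rng):
    """Pooled spike train of n independent PPD processes."""
    return np.sort(np.concatenate([
        ppd_spike_train(rate, dead_time, t_max, rng)
        for _ in range(n_trains)]))

rng = np.random.default_rng(0)
pooled = superposition(100, rate=50.0, dead_time=0.005, t_max=50.0, rng=rng)
counts, _ = np.histogram(pooled, bins=np.arange(0.0, 50.0, 0.1))
print(f"count Fano factor: {counts.var() / counts.mean():.3f}")  # < 1
```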
Application of microarray analysis on computer cluster and cloud platforms.
Bernau, C; Boulesteix, A-L; Knaus, J
2013-01-01
Analysis of recent high-dimensional biological data tends to be computationally intensive as many common approaches such as resampling or permutation tests require the basic statistical analysis to be repeated many times. A crucial advantage of these methods is that they can be easily parallelized due to the computational independence of the resampling or permutation iterations, which has induced many statistics departments to establish their own computer clusters. An alternative is to rent computing resources in the cloud, e.g. at Amazon Web Services. In this article we analyze whether a selection of statistical projects, recently implemented at our department, can be efficiently realized on these cloud resources. Moreover, we illustrate an opportunity to combine computer cluster and cloud resources. In order to compare the efficiency of computer cluster and cloud implementations and their respective parallelizations we use microarray analysis procedures and compare their runtimes on the different platforms. Amazon Web Services provide various instance types which meet the particular needs of the different statistical projects we analyzed in this paper. Moreover, the network capacity is sufficient and the parallelization is comparable in efficiency to standard computer cluster implementations. Our results suggest that many statistical projects can be efficiently realized on cloud resources. It is important to mention, however, that workflows can change substantially as a result of a shift from computer cluster to cloud computing.
Multilinear Graph Embedding: Representation and Regularization for Images.
Chen, Yi-Lei; Hsu, Chiou-Ting
2014-02-01
Given a set of images, finding a compact and discriminative representation is still a big challenge especially when multiple latent factors are hidden in the way of data generation. To represent multifactor images, although multilinear models are widely used to parameterize the data, most methods are based on high-order singular value decomposition (HOSVD), which preserves global statistics but interprets local variations inadequately. To this end, we propose a novel method, called multilinear graph embedding (MGE), as well as its kernelization MKGE to leverage the manifold learning techniques into multilinear models. Our method theoretically links the linear, nonlinear, and multilinear dimensionality reduction. We also show that the supervised MGE encodes informative image priors for image regularization, provided that an image is represented as a high-order tensor. From our experiments on face and gait recognition, the superior performance demonstrates that MGE better represents multifactor images than classic methods, including HOSVD and its variants. In addition, the significant improvement in image (or tensor) completion validates the potential of MGE for image regularization.
Endahl, Lars A; Utzon, Jan
2002-09-16
It is well known that publication of hospital quality indicators may lead to improved treatments. But publication can also have negative side effects. Focus may shift to the evaluated areas at the expense of non-evaluated areas. The most ill patients may be sorted out, and high-risk patients may be transferred to other hospitals or discharged, in order to avoid their dying during hospitalisation and thus improve the statistics. Patient risk may be overestimated in order to improve the relative treatment outcome. An increasing flow of patients to hospitals with high scores on quality indicators may cause imbalance between activities and budgets, and hence longer waiting times and reduced quality of treatment. Negative publicity due to low scores on quality indicators may lead to under-utilisation of hospital capacity, patient and staff insecurity, and staff wastage. Thus, publication of quality indicators may improve quality within the health sector, but it is very important to recognise the potential pitfalls and negative side effects.
Spatially distributed fiber sensor with dual processed outputs
NASA Astrophysics Data System (ADS)
Xu, X.; Spillman, William B., Jr.; Claus, Richard O.; Meissner, K. E.; Chen, K.
2005-05-01
Given the rapid aging of the world's population, improvements in technology for automation of patient care and documentation are badly needed. We have previously demonstrated a 'smart bed' that can non-intrusively monitor a patient in bed and determine the patient's respiration, heart rate and movement without intrusive or restrictive medical measurements. This is an application of spatially distributed integrating fiber optic sensors. The basic concept is that any patient movement that also moves an optical fiber within a specified area will produce a change in the optical signal. Two modal modulation approaches were considered: a statistical mode (STM) sensor and a high order mode excitation (HOME) sensor. The present design combines an STM sensor with a HOME sensor, using both modal modulation approaches. A special lens system allows only the high order modes of the optical fiber to be excited and coupled into the sensor. For handling output from the dual STM-HOME sensor, computer processing methods are discussed that offer comprehensive perturbation analysis for more reliable patient monitoring.
Statistical analysis and isotherm study of uranium biosorption by Padina sp. algae biomass.
Khani, Mohammad Hassan
2011-06-01
The application of response surface methodology is presented for optimizing the removal of U ions from aqueous solutions using Padina sp., a brown marine algal biomass. A Box-Wilson central composite design was employed to assess the individual and interactive effects of the four main parameters (pH, initial uranium concentration in solution, contact time and temperature) on uranium uptake. Response surface analysis showed that the data were adequately fitted by a second-order polynomial model. Analysis of variance showed a high coefficient of determination (R^2 = 0.9746), and a satisfactory second-order regression model was derived. The optimum pH, initial uranium concentration, contact time and temperature were found to be 4.07, 778.48 mg/l, 74.31 min, and 37.47°C, respectively. The maximized uranium uptake was predicted and experimentally validated. The equilibrium data for biosorption of U onto Padina sp. were well represented by the Langmuir isotherm, giving a maximum monolayer adsorption capacity as high as 376.73 mg/g.
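A minimal sketch of fitting the Langmuir isotherm by nonlinear least squares (synthetic equilibrium data, not the paper's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_eq, q_max, k_l):
    """Langmuir isotherm: uptake q vs equilibrium concentration c_eq."""
    return q_max * k_l * c_eq / (1.0 + k_l * c_eq)

# Illustrative equilibrium data (mg/l, mg/g)
c_eq = np.array([20, 50, 100, 200, 400, 700], dtype=float)
q = np.array([60, 130, 210, 290, 345, 370], dtype=float)
(q_max, k_l), _ = curve_fit(langmuir, c_eq, q, p0=(380.0, 0.01))
print(f"q_max={q_max:.1f} mg/g, K_L={k_l:.4f} l/mg")
```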
NASA Astrophysics Data System (ADS)
Palenta, Theresia; Fuhrmann, Sindy; Greaves, G. Neville; Schwieger, Wilhelm; Wondraczek, Lothar
2015-02-01
We examine the route of structural collapse and re-crystallization of faujasite-type (Na,K)-LSX zeolite. As the first step, a rather stable amorphous high-density phase, HDAcollapse, is generated through an order-disorder transition from the original zeolite via a low-density phase, LDAcollapse, at around 790 °C. We find that the overall amorphization is driven by an increase in the bond angle distribution within T-O-T and a change in ring statistics toward 6-membered TO4 (T = Si4+, Al3+) rings at the expense of 4-membered rings. The HDAcollapse then transforms into crystalline nepheline through an intermediate metastable carnegieite phase. In comparison, the melt-derived glass of similar composition, HDAMQ, crystallizes directly into the nepheline phase without the occurrence of intermediate carnegieite. This is attributed to the higher structural order of the faujasite-derived HDAcollapse, which favors re-crystallization into the highly symmetric carnegieite phase before transformation into nepheline with lower symmetry.
An Overview of Interrater Agreement on Likert Scales for Researchers and Practitioners
O'Neill, Thomas A.
2017-01-01
Applications of interrater agreement (IRA) statistics for Likert scales are plentiful in research and practice. IRA may be implicated in job analysis, performance appraisal, panel interviews, and any other approach to gathering systematic observations. Any rating system involving subject-matter experts can also benefit from IRA as a measure of consensus. Further, IRA is fundamental to aggregation in multilevel research, which is becoming increasingly common in order to address nesting. Although several technical descriptions of a few specific IRA statistics exist, this paper aims to provide a tractable orientation to common IRA indices to support application. The introductory overview is written with the intent of facilitating contrasts among IRA statistics by critically reviewing equations, interpretations, strengths, and weaknesses. Statistics considered include rwg, rwg*, r′wg, rwg(p), average deviation (AD), awg, standard deviation (Swg), and the coefficient of variation (CVwg). Equations support quick calculation and contrasting of different agreement indices. The article also includes a "quick reference" table and three figures in order to help readers identify how IRA statistics differ and how interpretations of IRA will depend strongly on the statistic employed. A brief consideration of recommended practices involving statistical and practical cutoff standards is presented, and conclusions are offered in light of the current literature. PMID:28553257
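As a minimal illustration (not from the article), two of the reviewed indices, rwg with a uniform null distribution and the average deviation AD, can be computed as follows; the six ratings are made up.

```python
import numpy as np

def rwg(ratings, n_options):
    """r_wg for a single item: 1 minus the observed variance over the
    variance of a uniform null distribution, sigma_EU^2 = (A^2 - 1)/12."""
    s2 = np.var(ratings, ddof=1)
    sigma_eu2 = (n_options**2 - 1) / 12.0
    return 1.0 - s2 / sigma_eu2

def average_deviation(ratings):
    """AD index: mean absolute deviation of ratings from their median
    (the mean is also used in some formulations)."""
    return np.mean(np.abs(ratings - np.median(ratings)))

raters = np.array([4, 4, 5, 3, 4, 5])   # six raters, 5-point Likert item
print(f"rwg = {rwg(raters, 5):.2f}, AD = {average_deviation(raters):.2f}")
```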
NASA Astrophysics Data System (ADS)
Mulligan, T.; Blake, J.; Spence, H. E.; Jordan, A. P.; Shaul, D.; Quenby, J.
2007-12-01
On August 20, 2006 a Forbush decrease observed at Polar in the Earth's magnetosphere was also seen at the INTEGRAL spacecraft outside the magnetosphere during a very active time in the solar wind. Data from Polar HIST and from INTEGRAL's Ge detector saturation rate (GEDSAT), which measures the GCR background with a threshold of ~200 MeV, show similar, short-period GCR variations in and around the Forbush decrease. The solar wind magnetic field and plasma conditions during this time reveal three interplanetary shocks in the days leading up to and including the Forbush decrease. The first two shocks are driven by interplanetary coronal mass ejections (ICMEs) and the last one by a high-speed stream. However, the solar wind following these shocks and during the Forbush decrease is not particularly geoeffective. The Forbush decrease, which begins at ~1200 UT on August 20, 2006, is the largest intensity change during this active time, but there are many others on a variety of timescales. More than 14 consecutive hours of INTEGRAL and Polar data on August 21, 2006 show great similarities in the time history of the measurements made aboard the two satellites, coupled with differences that must be due to GCR variability on a scale size of the order of, or less than, their separation distance. Despite the spacecraft separation of over 25 Re, many of the larger intensity fluctuations remain identical at both satellites. Autocorrelation and power spectral analyses have shown that these are not autoregressive (AR) processes and that these fluctuations are statistically significant. Such analyses can be done with high confidence because both detectors aboard Polar and INTEGRAL have large geometric factors that generate high count rates, on the order of 1000 particles per spin, ensuring rigorous, statistically significant samples.
A STATISTICAL THERMODYNAMIC MODEL OF THE ORGANIZATIONAL ORDER OF VEGETATION. (R827676)
The complex pattern of vegetation is the macroscopic manifestation of biological diversity and the ecological order in space and time. How is this overwhelmingly diverse, yet wonderfully ordered spatial pattern formed, and how does it evolve? To answer these questions, most tr...
Statistical Entropy of Dirac Field Outside RN Black Hole and Modified Density Equation
NASA Astrophysics Data System (ADS)
Cao, Fei; He, Feng
2012-02-01
The statistical entropy of the Dirac field in the Reissner-Nordstrom black hole space-time is computed using a state density equation corrected to all orders in the Planck length by the generalized uncertainty principle, within the WKB approximation. The result shows that the statistical entropy is proportional to the horizon area, but the present result is convergent without any artificial cutoff.
The Shock and Vibration Digest. Volume 16, Number 1
1984-01-01
Includes an investigation of the measurement of frequency-band-average loss factors of structural components for use in the statistical energy analysis method. Key words: finite element technique, statistical energy analysis, experimental techniques, framed structures, matrix methods, computer programs. In order to further understand the practical application of statistical energy analysis, a two-section plate-like frame structure is studied.
Out-of-time-order fluctuation-dissipation theorem
NASA Astrophysics Data System (ADS)
Tsuji, Naoto; Shitara, Tomohiro; Ueda, Masahito
2018-01-01
We prove a generalized fluctuation-dissipation theorem for a certain class of out-of-time-ordered correlators (OTOCs) with a modified statistical average, which we call bipartite OTOCs, for general quantum systems in thermal equilibrium. The difference between the bipartite and physical OTOCs defined by the usual statistical average is quantified by a measure of quantum fluctuations known as the Wigner-Yanase skew information. Within this difference, the theorem describes a universal relation between chaotic behavior in quantum systems and a nonlinear-response function that involves a time-reversed process. We show that the theorem can be generalized to higher-order n-partite OTOCs as well as in the form of generalized covariance.
Matoz-Fernandez, D A; Linares, D H; Ramirez-Pastor, A J
2012-09-04
The statistical thermodynamics of straight rigid rods of length k on triangular lattices was developed based on a generalization in the spirit of the lattice-gas model and the classical Guggenheim-DiMarzio approximation. In this scheme, the Helmholtz free energy and its derivatives were written in terms of the order parameter, δ, which characterizes the nematic phase occurring in the system at intermediate densities. Then, using the principle of minimum free energy with δ as a parameter, the main adsorption properties were calculated. Comparisons with Monte Carlo simulations and experimental data were performed in order to evaluate the outcome and limitations of the theoretical model.
Compressible Boundary Layer Predictions at High Reynolds Number using Hybrid LES/RANS Methods
NASA Technical Reports Server (NTRS)
Choi, Jung-Il; Edwards, Jack R.; Baurle, Robert A.
2008-01-01
Simulations of compressible boundary layer flow at three different Reynolds numbers (Re_delta = 5.59×10^4, 1.78×10^5, and 1.58×10^6) are performed using a hybrid large-eddy/Reynolds-averaged Navier-Stokes method. Variations in the recycling/rescaling method, the higher-order extension, the choice of primitive variables, the RANS/LES transition parameters, and the mesh resolution are considered in order to assess the model. The results indicate that the present model can provide good predictions of the mean flow properties and second-moment statistics of the boundary layers considered. Normalized Reynolds stresses in the outer layer are found to be independent of Reynolds number, similar to incompressible turbulent boundary layers.
Independent component analysis for automatic note extraction from musical trills
NASA Astrophysics Data System (ADS)
Brown, Judith C.; Smaragdis, Paris
2004-05-01
The method of principal component analysis, which is based on second-order statistics (or linear independence), has long been used for redundancy reduction of audio data. The more recent technique of independent component analysis, enforcing much stricter statistical criteria based on higher-order statistical independence, is introduced and shown to be far superior in separating independent musical sources. This theory has been applied to piano trills and a database of trill rates was assembled from experiments with a computer-driven piano, recordings of a professional pianist, and commercially available compact disks. The method of independent component analysis has thus been shown to be an outstanding, effective means of automatically extracting interesting musical information from a sea of redundant data.
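To illustrate the contrast the abstract draws between second-order and higher-order criteria, here is a hedged sketch using scikit-learn's FastICA, one common ICA algorithm (the abstract does not say which implementation the authors used); the sources, mixing matrix and all names are invented for the demonstration:

    import numpy as np
    from sklearn.decomposition import PCA, FastICA

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 8.0, 2000)
    s1 = np.sin(2 * np.pi * 3 * t)                  # tonal source
    s2 = np.sign(np.sin(2 * np.pi * 5 * t))         # square-wave source
    S = np.c_[s1, s2] + 0.02 * rng.standard_normal((2000, 2))

    A = np.array([[1.0, 0.6], [0.5, 1.0]])          # mixing matrix
    X = S @ A.T                                     # two observed mixtures

    ica = FastICA(n_components=2, random_state=0).fit_transform(X)
    pca = PCA(n_components=2).fit_transform(X)
    # The ICA outputs correlate strongly with the original sources, while the
    # PCA outputs, constrained only to be uncorrelated (a second-order
    # criterion), remain mixtures of the two.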
Simultaneous calibration of ensemble river flow predictions over an entire range of lead times
NASA Astrophysics Data System (ADS)
Hemri, S.; Fundel, F.; Zappa, M.
2013-10-01
Probabilistic estimates of future water levels and river discharge are usually simulated with hydrologic models using ensemble weather forecasts as main inputs. As hydrologic models are imperfect and the meteorological ensembles tend to be biased and underdispersed, the ensemble forecasts for river runoff typically are biased and underdispersed, too. Thus, in order to achieve both reliable and sharp predictions, statistical postprocessing is required. In this work Bayesian model averaging (BMA) is applied to statistically postprocess ensemble runoff raw forecasts for a catchment in Switzerland, at lead times ranging from 1 to 240 h. The raw forecasts have been obtained using deterministic and ensemble forcing meteorological models with different forecast lead time ranges. First, BMA is applied based on mixtures of univariate normal distributions, subject to the assumption of independence between distinct lead times. Then, the independence assumption is relaxed in order to estimate multivariate runoff forecasts over the entire range of lead times simultaneously, based on a BMA version that uses multivariate normal distributions. Since river runoff is a highly skewed variable, Box-Cox transformations are applied in order to achieve approximate normality. Both univariate and multivariate BMA approaches are able to generate well calibrated probabilistic forecasts that are considerably sharper than climatological forecasts. Additionally, multivariate BMA provides a promising approach for incorporating temporal dependencies into the postprocessed forecasts. Its major advantage over univariate BMA is an increase in reliability when the forecast system is changing due to model availability.
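A minimal sketch of the univariate Gaussian BMA estimation step, assuming the member forecasts have already been bias-corrected and the runoff already Box-Cox transformed to approximate normality (the paper's lead-time handling and multivariate extension are not reproduced; all names and the synthetic data are illustrative):

    import numpy as np

    def bma_em(F, y, n_iter=200):
        """Fit BMA weights and a common Gaussian spread by EM.
        F: (T, K) bias-corrected ensemble member forecasts; y: (T,) observations."""
        T, K = F.shape
        w = np.full(K, 1.0 / K)
        sigma = np.std(y - F.mean(axis=1))
        for _ in range(n_iter):
            # E-step: responsibility of member k for observation t
            dens = np.exp(-0.5 * ((y[:, None] - F) / sigma) ** 2) / sigma
            z = w * dens
            z /= z.sum(axis=1, keepdims=True)
            # M-step: re-estimate weights and spread
            w = z.mean(axis=0)
            sigma = np.sqrt(np.sum(z * (y[:, None] - F) ** 2) / T)
        return w, sigma

    rng = np.random.default_rng(0)
    y = rng.normal(0.0, 1.0, 500)                # "observed" (transformed) runoff
    F = np.column_stack([y + rng.normal(0, s, 500) for s in (0.3, 0.6, 1.2)])
    print(bma_em(F, y))   # the sharpest member should receive the largest weight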
Spin Dynamics in the electron-doped high-Tc superconductors Pr0.88LaCe0.12CuO4-δ
NASA Astrophysics Data System (ADS)
Dai, Pengcheng
2007-03-01
We briefly review results of recent neutron scattering experiments designed to probe the evolution of antiferromagnetic (AF) order and spin dynamics in the electron-doped Pr0.88LaCe0.12CuO4-δ (PLCCO) as the system is tuned from its as-grown non-superconducting AF state into an optimally doped superconductor (Tc = 27.5 K) without static AF order [1-3]. For underdoped materials, a quasi-two-dimensional spin-density wave was found to coexist with three-dimensional AF order and superconductivity. In addition, the low-energy spin excitations follow Bose statistics. In the case of the optimally doped material, we have discovered a magnetic resonance intimately related to superconductivity, analogous to the resonance in hole-doped materials. On the other hand, the low energy spin excitations have very weak temperature dependence and do not follow Bose statistics, in sharp contrast to the as-grown nonsuperconducting materials. 1. Stephen D. Wilson, Pengcheng Dai, Shiliang Li, Songxue Chi, H. J. Kang, and J. W. Lynn, Nature (London) 442, 59 (2006). 2. Stephen D. Wilson, Shiliang Li, Hyungje Woo, Pengcheng Dai, H. A. Mook, C. D. Frost, S. Komiya, and Y. Ando, Phys. Rev. Lett. 96, 157001 (2006). 3. Stephen D. Wilson, Shiliang Li, Pengcheng Dai, Wei Bao, J. H. Chung, H. J. Kang, S.-H. Lee, S. Komiya, and Y. Ando, Phys. Rev. B 74, 144514 (2006).
NASA Astrophysics Data System (ADS)
Fabian, Karl; Thomas, Christopher I.; McEnroe, Suzanne A.; Robinson, Peter; Mukai, Hiroki
2013-04-01
The ilmenite-hematite solid solution series xFeTiO3-(1 - x)Fe2O3 can generate extremely unusual magnetic properties in natural rocks and has been investigated for more than fifty years. Both ilmenite (FeTiO3) and hematite (Fe2O3) are antiferromagnetic, but intermediate compositions are either antiferromagnetic or ferrimagnetic, depending on their chemical order. Within a single sample, nano-scale variations in local composition x and ordering state Q depend on minute details of the cooling and annealing history, and have large effects on the magnetic properties, which include self-reversal of thermoremanent magnetization and large exchange bias. We present a systematic study of magnetic properties of samples in the composition range 0.6 ≲ x ≲ 0.7 with differing nanostructure and consequently differing magnetic properties. Using high-field measurements up to 7 T, together with TEM images and theoretical models, we classify nanostructure formation in terms of x, Q, and characteristic size d. These characteristics are then linked to the magnetic properties. The sample characterization relies on average mean-field models of Ms(T). To implement the varying Fe and Ti densities, and the distribution of Fe ions in the variably ordered solid solutions, the models either use statistical interactions between sites, whereby they effectively average over all possible configurations, or they describe specific random configurations. Statistical mean-field models are successful in predicting the Curie temperatures TC and Ms(T) curves of the Ilmx solid solutions. The results depend on the interaction coefficients, which have been determined by neutron diffraction measurements (Samuelson and Shirane, 1979), by Monte Carlo model fits (Harrison, 2006), or by density-functional calculations (Nabi et al. 2010). Hysteresis branches have been measured for a wide variety of samples at temperatures of 40 K, 100 K and 300 K. None of them saturate at 7 T, the strongest field available to us so far. Some of the samples show the beginnings of a pseudo-metamagnetic transition at the upper limits of the measurements. In previous models this is explained by anti-phase boundaries and exchange coupling between ordered and disordered regions with differing sizes and hence differing responses to an external field. These effects will be studied further up to 60 T at a European high-field laboratory within the EuroMagNET II/EMFL scheme.
SEPEM: A tool for statistical modeling the solar energetic particle environment
NASA Astrophysics Data System (ADS)
Crosby, Norma; Heynderickx, Daniel; Jiggens, Piers; Aran, Angels; Sanahuja, Blai; Truscott, Pete; Lei, Fan; Jacobs, Carla; Poedts, Stefaan; Gabriel, Stephen; Sandberg, Ingmar; Glover, Alexi; Hilgers, Alain
2015-07-01
Solar energetic particle (SEP) events are a serious radiation hazard for spacecraft as well as a severe health risk to humans traveling in space. Indeed, accurate modeling of the SEP environment constitutes a priority requirement for astrophysics and solar system missions and for human exploration in space. The European Space Agency's Solar Energetic Particle Environment Modelling (SEPEM) application server is a World Wide Web interface to a complete set of cross-calibrated data ranging from 1973 to 2013 as well as new SEP engineering models and tools. Both statistical and physical modeling techniques have been included, in order to cover the environment not only at 1 AU but also in the inner heliosphere ranging from 0.2 AU to 1.6 AU using a newly developed physics-based shock-and-particle model to simulate particle flux profiles of gradual SEP events. With SEPEM, SEP peak flux and integrated fluence statistics can be studied, as well as durations of high SEP flux periods. Furthermore, effects tools are also included to allow calculation of single event upset rate and radiation doses for a variety of engineering scenarios.
Bootstrapping under constraint for the assessment of group behavior in human contact networks
NASA Astrophysics Data System (ADS)
Tremblay, Nicolas; Barrat, Alain; Forest, Cary; Nornberg, Mark; Pinton, Jean-François; Borgnat, Pierre
2013-11-01
The increasing availability of time- and space-resolved data describing human activities and interactions gives insights into both static and dynamic properties of human behavior. In practice, nevertheless, real-world data sets can often be considered as only one realization of a particular event. This highlights a key issue in social network analysis: the statistical significance of estimated properties. In this context, we focus here on the assessment of quantitative features of specific subsets of nodes in empirical networks. We present a method of statistical resampling based on bootstrapping groups of nodes under constraints within the empirical network. The method enables us to define acceptance intervals for various null hypotheses concerning relevant properties of the subset of nodes under consideration, in order to characterize its behavior by a statistical test as “normal” or not. We apply this method to a high-resolution data set describing the face-to-face proximity of individuals during two colocated scientific conferences. As a case study, we show how to probe whether colocating the two conferences succeeded in bringing together the two corresponding groups of scientists.
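A hedged sketch of the resampling idea on a toy node property (the only constraint here is equal group size; the constraints used on empirical contact networks in the paper are richer):

    import numpy as np

    rng = np.random.default_rng(1)
    strength = rng.lognormal(0.0, 1.0, 400)   # stand-in node property
    group_idx = np.arange(30)                 # the subset under scrutiny
    observed = strength[group_idx].mean()

    # Bootstrap under the constraint of equal group size: draw many random
    # groups of the same size and record the same statistic for each.
    boot = np.array([strength[rng.choice(400, 30, replace=False)].mean()
                     for _ in range(10000)])
    lo, hi = np.percentile(boot, [2.5, 97.5])  # 95% acceptance interval
    print("atypical" if not lo <= observed <= hi else "normal")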
Effect of shock waves on the statistics and scaling in compressible isotropic turbulence
NASA Astrophysics Data System (ADS)
Wang, Jianchun; Wan, Minping; Chen, Song; Xie, Chenyue; Chen, Shiyi
2018-04-01
The statistics and scaling of compressible isotropic turbulence in the presence of large-scale shock waves are investigated by using numerical simulations at turbulent Mach number Mt ranging from 0.30 to 0.65. The spectra of the compressible velocity component, density, pressure, and temperature exhibit a k^-2 scaling at different turbulent Mach numbers. The scaling exponents for structure functions of the compressible velocity component and thermodynamic variables are close to 1 at high orders n ≥ 3. The probability density functions of increments of the compressible velocity component and thermodynamic variables exhibit a power-law region with the exponent -2. Models for the conditional average of increments of the compressible velocity component and thermodynamic variables are developed based on the ideal shock relations and are verified by numerical simulations. The overall statistics of the compressible velocity component and thermodynamic variables are similar to one another at different turbulent Mach numbers. It is shown that the effect of shock waves on the compressible velocity spectrum and kinetic energy transfer is different from that of acoustic waves.
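Structure-function scaling exponents of the kind quoted above can be estimated from a 1D signal as follows; the sketch uses a synthetic Gaussian signal with a k^-2 spectrum, for which the exponents come out near n/2, unlike the shock-dominated saturation near 1 reported in the abstract:

    import numpy as np

    rng = np.random.default_rng(0)
    u = np.cumsum(rng.standard_normal(2**16))   # Brownian signal, k^-2 spectrum

    def zeta(u, n, seps):
        """Log-log slope of the n-th order structure function S_n(r)."""
        Sn = [np.mean(np.abs(u[r:] - u[:-r]) ** n) for r in seps]
        return np.polyfit(np.log(seps), np.log(Sn), 1)[0]

    seps = np.unique(np.logspace(0, 3, 12).astype(int))
    for n in (1, 2, 3, 4):
        print(n, zeta(u, n, seps))   # close to n/2 for this Gaussian signal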
NASA Astrophysics Data System (ADS)
Müller, Thomas; Schütze, Manfred; Bárdossy, András
2017-09-01
A property of natural processes is temporal irreversibility. However, this property cannot be reflected by most statistics used to describe precipitation time series and, consequently, is not considered in most precipitation models. In this paper, a new statistic, the asymmetry measure, is introduced and applied to precipitation, enabling the detection and quantification of irreversibility. It is used to analyze two different data sets from Singapore and Germany. The data from both locations show a significant asymmetry at high temporal resolutions. The asymmetry is more pronounced for Singapore, where the climate is dominated by convective precipitation events. The impact of irreversibility on applications is analyzed on two different hydrological sewer system models. The results show that the effect of the irreversibility can lead to biases in combined sewer overflow statistics. This bias is of the same order as the effect that can be achieved by real-time control of sewer systems. Consequently, wrong conclusions can be drawn if synthetic time series are used for sewer systems when asymmetry is present but not considered in precipitation modeling.
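The abstract does not reproduce the definition of the asymmetry measure, so as an illustration here is a classical directionality statistic, the skewness of lagged increments, which vanishes in expectation for a time-reversible series:

    import numpy as np

    def increment_asymmetry(x, lag=1):
        """Skewness of lagged increments; zero (in expectation) for a
        time-reversible series, nonzero when rises and falls differ in shape."""
        d = x[lag:] - x[:-lag]
        return np.mean(d**3) / np.mean(d**2) ** 1.5

    rng = np.random.default_rng(0)
    brownian = np.cumsum(rng.standard_normal(100000))   # reversible increments
    # sawtooth: slow rise, sharp fall, hence strongly irreversible
    sawtooth = np.tile(np.r_[np.linspace(0, 1, 9), 0.0], 10000)
    print(increment_asymmetry(brownian))   # near 0
    print(increment_asymmetry(sawtooth))   # clearly negative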
Zhou, Mu; Tian, Zengshan; Xu, Kunjie; Yu, Xiang; Wu, Haibo
2014-01-01
This paper studies the statistical errors of fingerprint-based RADAR neighbor matching localization with linearly calibrated reference points (RPs) in a logarithmic received signal strength (RSS) varying Wi-Fi environment. To the best of our knowledge, little comprehensive analysis has appeared on the error performance of neighbor matching localization with respect to the deployment of RPs. However, in order to achieve efficient and reliable location-based services (LBSs) as well as ubiquitous context-awareness in Wi-Fi environments, much attention has to be paid to highly accurate and cost-efficient localization systems. To this end, the statistical errors of the widely used neighbor matching localization are discussed in detail in this paper to examine the inherent mathematical relations between the localization errors and the locations of RPs, using a basic linear logarithmic strength varying model. Furthermore, based on the mathematical demonstrations and some testing results, the closed-form solutions to the statistical errors of RADAR neighbor matching localization can be an effective tool for exploring alternative deployments of fingerprint-based neighbor matching localization systems in the future.
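A hedged sketch of the neighbor matching idea under a linear logarithmic strength model (the grid layout, path-loss constants and noise level are invented, and the paper's closed-form error analysis is not reproduced):

    import numpy as np

    rng = np.random.default_rng(0)
    rp = np.stack(np.meshgrid(np.arange(0, 20, 2.0),
                              np.arange(0, 20, 2.0)), -1).reshape(-1, 2)
    ap = np.array([[0, 0], [20, 0], [0, 20], [20, 20]])  # access points

    def rss(p, noise=0.0):
        """Log-distance path-loss model: RSS from each AP at position p."""
        d = np.linalg.norm(p - ap, axis=-1) + 1e-9
        return -40.0 - 10.0 * 3.0 * np.log10(d) + noise

    fingerprints = np.array([rss(p) for p in rp])     # offline radio map
    true = np.array([7.3, 11.1])
    meas = rss(true, noise=rng.normal(0, 2, 4))       # noisy online measurement

    k = 3                                             # K nearest RPs in RSS space
    nearest = np.argsort(np.linalg.norm(fingerprints - meas, axis=1))[:k]
    estimate = rp[nearest].mean(axis=0)               # neighbor matching estimate
    print(estimate, np.linalg.norm(estimate - true))  # localization error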
On a simple molecular–statistical model of a liquid-crystal suspension of anisometric particles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zakhlevnykh, A. N., E-mail: anz@psu.ru; Lubnin, M. S.; Petrov, D. A.
2016-11-15
A molecular-statistical mean-field theory is constructed for suspensions of anisometric particles in nematic liquid crystals (NLCs). The spherical approximation, well known in the physics of ferromagnetic materials, is considered; it allows one to obtain an analytic expression for the free energy and simple equations for the orientational state of a suspension that describe the temperature dependence of the order parameters of the suspension components. The transition temperature from the ordered to the isotropic state and the jumps in the order parameters at the phase-transition point are studied as functions of the anchoring energy of the dispersed particles to the matrix, the concentration of the impurity phase, and the size of the particles. The proposed approach allows the model to be generalized to the case of biaxial ordering.
A statistical analysis of cervical auscultation signals from adults with unsafe airway protection.
Dudik, Joshua M; Kurosu, Atsuko; Coyle, James L; Sejdić, Ervin
2016-01-22
Aspiration, where food or liquid is allowed to enter the larynx during a swallow, is recognized as the most clinically salient feature of oropharyngeal dysphagia. This event can lead to short-term harm via airway obstruction or more long-term effects such as pneumonia. In order to non-invasively identify this event using high resolution cervical auscultation, there is a need to characterize cervical auscultation signals from subjects with dysphagia who aspirate. In this study, we collected swallowing sound and vibration data from 76 adults (50 men, 26 women, mean age 62) who underwent a routine videofluoroscopy swallowing examination. The analysis was limited to swallows of liquid with either thin (<5 cps) or viscous (≈300 cps) consistency and was divided into those with deep laryngeal penetration or aspiration (unsafe airway protection) and those with either shallow or no laryngeal penetration (safe airway protection), using a standardized scale. After calculating a selection of time, frequency, and time-frequency features for each swallow, the safe and unsafe categories were compared using Wilcoxon rank-sum statistical tests. Our analysis found that few of our chosen features varied in magnitude between safe and unsafe swallows, with thin swallows demonstrating no statistical variation. We also supported our past findings with regard to the effects of sex and the presence or absence of stroke on cervical auscultation signals, but noticed certain discrepancies with regard to bolus viscosity. Overall, our results support the necessity of using multiple statistical features concurrently to identify laryngeal penetration of swallowed boluses in future work with high resolution cervical auscultation.
Wireless Majorana fermions: from magnetic tunability to braiding (Conference Presentation)
NASA Astrophysics Data System (ADS)
Fatin, Geoffrey L.; Matos-Abiague, Alex; Scharf, Benedikt; Zutic, Igor
2016-10-01
In condensed-matter systems, Majorana bound states (MBSs) are emergent quasiparticles with non-Abelian statistics and particle-antiparticle symmetry. While realizing non-Abelian braiding statistics under exchange would provide both an ultimate proof of MBS existence and the key element for fault-tolerant topological quantum computing, even theoretical schemes imply significant complexity to implement such braiding. The frequently examined 1D superconductor/semiconductor wires provide a prototypical example of how to produce MBSs; however, braiding statistics are ill-defined in 1D, and complex wire networks must be used. By placing an array of magnetic tunnel junctions (MTJs) above a 2D electron gas formed in a semiconductor quantum well grown on the surface of an s-wave superconductor, we have predicted the existence of highly tunable zero-energy MBSs and have proposed a novel scheme by which MBSs could be exchanged [1]. This scheme may then be used to demonstrate the states' non-Abelian statistics through braiding. The underlying magnetic texture produced by the MTJ array is pseudo-helical, which allows for highly controllable topological phase transitions. By defining a local condition for topological nontriviality that takes into account the local rotation of the magnetic texture, effective wire geometries support MBS formation and permit their controlled movement in 2D by altering the shape and orientation of such wires. This scheme thus overcomes the requirement for a network of physical wires in order to exchange MBSs, allowing easier manipulation of such states. [1] G. L. Fatin, A. Matos-Abiague, B. Scharf, and I. Zutic, arXiv:1510.08182, preprint.
NASA Astrophysics Data System (ADS)
Sun, Taohua; Zhang, Xinhui; Miao, Ying; Zhou, Yang; Shi, Jie; Yan, Meixing; Chen, Anjin
2018-06-01
The antiviral activity in vitro and in vivo of two fucoidan fractions with low molecular weight and different sulfate content from Laminaria japonica (LMW fucoidans), and their effect on the immune system, were investigated in order to examine the possible mechanism. In vitro, I-type influenza virus, adenovirus and parainfluenza virus I were used to infect Hep-2, Hela and MDCK cells, respectively, and the 50% tissue culture infective dose was calculated to assess the antiviral activity of the two LMW fucoidans. The results indicated that, compared with the control group, both LMW fucoidans had remarkable antiviral activity in vitro at middle and high doses, while at low doses their antiviral activity was not statistically different from that of the blank control group. There was also no statistically significant difference between the two LMW fucoidans in antiviral activity. In vivo, LMW fucoidans prolonged the survival time of virus-infected mice and significantly improved their lung index, with statistically significant differences from the control group (p < 0.01). However, the difference in survival time between the two LMW fucoidan groups was not statistically significant (p > 0.05). In this study, both LMW fucoidans (LF1, LF2) increased the thymus index, spleen index, phagocytic index, phagocytosis coefficient and half hemolysin value at middle and high doses, which suggests that LMW fucoidans exert an antiviral role by improving the quality of immune organs, immune cell phagocytosis and humoral immunity.
Hybrid perturbation methods based on statistical time series models
NASA Astrophysics Data System (ADS)
San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario
2016-04-01
In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, not to mention the fact that mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the missing dynamics in the previously integrated approximation. This combination results in the precision improvement of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators formed by the combination of three different orders of approximation of an analytical theory and a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, a first-order and a second-order analytical theory, whereas the prediction technique is the same in the three cases, namely an additive Holt-Winters method.
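The prediction component named above, the additive Holt-Winters method, can be sketched as follows (a minimal textbook implementation, not the authors' code; the smoothing constants and synthetic series are illustrative):

    import numpy as np

    def holt_winters_additive(y, m, alpha=0.3, beta=0.05, gamma=0.2, h=1):
        """Additive Holt-Winters; m = season length, h = forecast horizon."""
        level = y[:m].mean()
        trend = (y[m:2*m].mean() - y[:m].mean()) / m
        season = list(y[:m] - level)
        for t in range(m, len(y)):
            s_old = season[t - m]
            level_new = alpha * (y[t] - s_old) + (1 - alpha) * (level + trend)
            trend = beta * (level_new - level) + (1 - beta) * trend
            season.append(gamma * (y[t] - level_new) + (1 - gamma) * s_old)
            level = level_new
        # h-step-ahead forecast reuses the seasonal estimate from one season back
        return level + h * trend + season[len(y) - m + (h - 1) % m]

    t = np.arange(400)
    y = 0.01 * t + np.sin(2 * np.pi * t / 20) \
        + 0.1 * np.random.default_rng(0).standard_normal(400)
    print(holt_winters_additive(y, m=20))   # one-step-ahead prediction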
Plis, Sergey M; Sui, Jing; Lane, Terran; Roy, Sushmita; Clark, Vincent P; Potluru, Vamsi K; Huster, Rene J; Michael, Andrew; Sponheim, Scott R; Weisend, Michael P; Calhoun, Vince D
2013-01-01
Identifying the complex activity relationships present in rich, modern neuroimaging data sets remains a key challenge for neuroscience. The problem is hard because (a) the underlying spatial and temporal networks may be nonlinear and multivariate and (b) the observed data may be driven by numerous latent factors. Further, modern experiments often produce data sets containing multiple stimulus contexts or tasks processed by the same subjects. Fusing such multi-session data sets may reveal additional structure, but raises further statistical challenges. We present a novel analysis method for extracting complex activity networks from such multifaceted imaging data sets. Compared to previous methods, we choose a new point in the trade-off space, sacrificing detailed generative probability models and explicit latent variable inference in order to achieve robust estimation of multivariate, nonlinear group factors (“network clusters”). We apply our method to identify relationships of task-specific intrinsic networks in schizophrenia patients and control subjects from a large fMRI study. After identifying network-clusters characterized by within- and between-task interactions, we find significant differences between patient and control groups in interaction strength among networks. Our results are consistent with known findings of brain regions exhibiting deviations in schizophrenic patients. However, we also find high-order, nonlinear interactions that discriminate groups but that are not detected by linear, pair-wise methods. We additionally identify high-order relationships that provide new insights into schizophrenia but that have not been found by traditional univariate or second-order methods. Overall, our approach can identify key relationships that are missed by existing analysis methods, without losing the ability to find relationships that are known to be important. PMID:23876245
Predicting Fog in the Nocturnal Boundary Layer
NASA Astrophysics Data System (ADS)
Izett, Jonathan; van de Wiel, Bas; Baas, Peter; van der Linden, Steven; van Hooft, Antoon; Bosveld, Fred
2017-04-01
Fog is a global phenomenon that presents a hazard to navigation and human safety, resulting in significant economic impacts for air and shipping industries as well as causing numerous road traffic accidents. Accurate prediction of fog events, however, remains elusive both in terms of timing and occurrence itself. Statistical methods based on set threshold criteria for key variables such as wind speed have been developed, but high rates of correct prediction of fog events still lead to similarly high "false alarms" when the conditions appear favourable, but no fog forms. Using data from the CESAR meteorological observatory in the Netherlands, we analyze specific cases and perform statistical analyses of event climatology, in order to identify the necessary conditions for correct prediction of fog. We also identify potential "missing ingredients" in current analysis that could help to reduce the number of false alarms. New variables considered include the indicators of boundary layer stability, as well as the presence of aerosols conducive to droplet formation. The poster presents initial findings of new research as well as plans for continued research.
NASA Astrophysics Data System (ADS)
Castro Rojas, María Dolores; Zuñiga, Ana Lourdes Acuña; Ugalde, Emmanuel Fonseca
2015-12-01
GLOBE is a global educational program for elementary and high school levels, and its main purpose in Costa Rica is to develop scientific thinking and interest in science among high school students through hydrology research projects that allow them to relate science to environmental issues in their communities. Youth between 12 and 17 years old from public schools participate in science clubs outside of their regular school schedule. A comparison study was performed between different groups in order to assess GLOBE's applicability as a science learning atmosphere and the motivation and interest it generates in students toward science. Internationally applied scales, adapted to the Costa Rican context, were used as tools for measuring such indicators. The results provide statistically significant evidence that the students perceive the GLOBE atmosphere as an enriched environment for science learning in comparison with the traditional science class. Moreover, students feel more confident, motivated and interested in science than their peers who do not participate in the project. However, the results were not statistically significant in this last respect.
Predictor of increase in caregiver burden for disabled elderly at home.
Okamoto, Kazushi; Harasawa, Yuko
2009-01-01
In order to identify early those caregivers at high risk of an increase in burden, linear discriminant analysis was performed to obtain an effective discriminant model for differentiating between the presence and absence of an increase in caregiver burden. The data were obtained by self-administered questionnaire from 193 caregivers of frail elderly from January to February of 2005. The discriminant analysis yielded a statistically significant function explaining 35.0% of the variance (Rc=0.59; d.f.=6; p=0.0001). The configuration indicated that perceived stress (1.47), high caregiver burden at baseline (1.28), emotional control (0.75), effort to achieve (-0.28), symptomatic depression (0.20) and "ikigai" (purpose in life) (0.18) made statistically significant contributions to the differentiation between no increase and increase in caregiver burden. The discriminant function showed a sensitivity of 86% and specificity of 81%, and successfully classified 83% of the caregivers. The function at baseline is a simple and useful method for screening for an increase in burden among caregivers for the frail elderly at home.
NASA Astrophysics Data System (ADS)
Vouterakos, P. A.; Moustris, K. P.; Bartzokas, A.; Ziomas, I. C.; Nastos, P. T.; Paliatsos, A. G.
2012-12-01
In this work, artificial neural networks (ANNs) were developed and applied in order to forecast the discomfort levels due to the combination of high temperature and air humidity during the hot season of the year, in eight different regions within the Greater Athens area (GAA), Greece. For the selection of the best type and architecture of ANN forecasting model, the multiple criteria analysis (MCA) technique was applied. Three different types of ANNs were developed and tested with the MCA method: the multilayer perceptron, generalized feed-forward networks (GFFN), and time-lag recurrent networks. Results showed that the best performance among the ANN types was achieved by the GFFN model for the prediction of discomfort levels due to high temperature and air humidity within the GAA. For the evaluation of the constructed ANNs, appropriate statistical indices were used. The analysis proved that the forecasting ability of the developed ANN models is very satisfactory at a significant statistical level of p < 0.01.
NASA Astrophysics Data System (ADS)
Saez, Núria; Ruiz, Xavier; Pallarés, Jordi; Shevtsova, Valentina
2013-04-01
An accelerometric record from the IVIDIL experiment (ESA Columbus module) has been exhaustively studied. The analysis involved the determination of basic statistical properties such as the auto-correlation and the power spectrum (second-order statistical analyses). Also, taking into account the shape of the associated histograms, we address another important question, the non-Gaussian nature of the time series, using the bispectrum and the bicoherence of the signals. Extrapolating the above-mentioned results, a computational model of a high-temperature shear cell has been developed. A scalar indicator has been used to quantify the accuracy of diffusion coefficient measurements in the case of binary mixtures involving photovoltaic silicon or liquid Al-Cu binary alloys. Three different initial arrangements have been considered: the so-called interdiffusion, centred thick layer and lateral thick layer configurations. The results allow us to conclude that, under the conditions of the present work, the diffusion coefficient is insensitive to the environmental conditions, that is to say, to the accelerometric disturbances and the initial shear cell arrangement.
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2016-07-01
The study of socioeconomic inequality is of substantial importance, scientific and general alike. The graphic visualization of inequality is commonly conveyed by Lorenz curves. While Lorenz curves are a highly effective statistical tool for quantifying the distribution of wealth in human societies, they are less effective a tool for the visual depiction of socioeconomic inequality. This paper introduces an alternative to Lorenz curves: the hill curves. On the one hand, the hill curves are a potent scientific tool: they provide detailed scans of the rich-poor gaps in human societies under consideration, and are capable of accommodating infinitely many degrees of freedom. On the other hand, the hill curves are a powerful infographic tool: they visualize inequality in a most vivid and tangible way, with no quantitative skills that are required in order to grasp the visualization. The application of hill curves extends far beyond socioeconomic inequality. Indeed, the hill curves are highly effective 'hyperspectral' measures of statistical variability that are applicable in the context of size distributions at large. This paper establishes the notion of hill curves, analyzes them, and describes their application in the context of general size distributions.
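For reference, the Lorenz-curve baseline that hill curves are positioned against can be computed in a few lines (a standard construction, not taken from the paper; the lognormal sample is illustrative):

    import numpy as np

    def lorenz(wealth):
        """Lorenz curve points: cumulative population share vs cumulative
        wealth share, for a nonnegative sample."""
        w = np.sort(np.asarray(wealth, dtype=float))
        cum = np.cumsum(w) / w.sum()
        return (np.r_[0.0, np.arange(1, w.size + 1) / w.size],
                np.r_[0.0, cum])

    def gini(wealth):
        """Gini coefficient: one minus twice the area under the Lorenz curve."""
        p, L = lorenz(wealth)
        area = np.sum((L[1:] + L[:-1]) / 2 * np.diff(p))  # trapezoidal area
        return 1.0 - 2.0 * area

    rng = np.random.default_rng(0)
    print(gini(rng.lognormal(0.0, 1.0, 100000)))  # ~0.52 for lognormal, sigma=1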
EEG artifact removal-state-of-the-art and guidelines.
Urigüen, Jose Antonio; Garcia-Zapirain, Begoña
2015-06-01
This paper presents an extensive review on the artifact removal algorithms used to remove the main sources of interference encountered in the electroencephalogram (EEG), specifically ocular, muscular and cardiac artifacts. We first introduce background knowledge on the characteristics of EEG activity, of the artifacts and of the EEG measurement model. Then, we present algorithms commonly employed in the literature and describe their key features. Lastly, principally on the basis of the results provided by various researchers, but also supported by our own experience, we compare the state-of-the-art methods in terms of reported performance, and provide guidelines on how to choose a suitable artifact removal algorithm for a given scenario. With this review we have concluded that, without prior knowledge of the recorded EEG signal or the contaminants, the safest approach is to correct the measured EEG using independent component analysis; to be precise, an algorithm based on second-order statistics such as second-order blind identification (SOBI). Other effective alternatives include extended information maximization (InfoMax) and an adaptive mixture of independent component analyzers (AMICA), based on higher order statistics. All of these algorithms have proved particularly effective with simulations and, more importantly, with data collected in controlled recording conditions. Moreover, whenever prior knowledge is available, then a constrained form of the chosen method should be used in order to incorporate such additional information. Finally, since which algorithm is the best performing is highly dependent on the type of the EEG signal, the artifacts and the signal to contaminant ratio, we believe that the optimal method for removing artifacts from the EEG consists in combining more than one algorithm to correct the signal using multiple processing stages, even though this is an option largely unexplored by researchers in the area.
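A hedged sketch of the second-order idea behind SOBI, in its simplest one-lag form (known as AMUSE); full SOBI jointly diagonalizes lagged covariances at many lags, and the synthetic "mixtures" below merely stand in for multichannel EEG:

    import numpy as np

    def amuse(X, lag=1):
        """One-lag second-order blind source separation (AMUSE), a simplified
        relative of SOBI. X: (channels, samples)."""
        X = X - X.mean(axis=1, keepdims=True)
        # 1) whiten via eigendecomposition of the zero-lag covariance
        d, E = np.linalg.eigh(np.cov(X))
        W = E @ np.diag(d ** -0.5) @ E.T
        Z = W @ X
        # 2) diagonalize the symmetrized lagged covariance
        C = Z[:, :-lag] @ Z[:, lag:].T / (Z.shape[1] - lag)
        _, V = np.linalg.eigh((C + C.T) / 2)
        return V.T @ Z          # recovered sources (up to order and scale)

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 10.0, 5000)
    S = np.vstack([np.sin(2*np.pi*2*t), np.sign(np.sin(2*np.pi*3*t))])
    X = np.array([[1.0, 0.4], [0.3, 1.0]]) @ S   # synthetic two-channel record
    recovered = amuse(X)   # sources with distinct spectra are separated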
Ocular higher-order aberrations in a school children population.
Papamastorakis, George; Panagopoulou, Sophia; Tsilimbaris, Militadis K; Pallikaris, Ioannis G; Plainis, Sotiris
2015-01-01
The primary objective of the study was to explore the statistics of ocular higher-order aberrations in a population of primary and secondary school children. A sample of 557 children aged 10-15 years were selected from two primary and two secondary schools in Heraklion, Greece. Children were classified by age in three subgroups: group I (10.7 ± 0.5 years), group II (12.4 ± 0.5 years) and group III (14.5 ± 0.5 years). Ocular aberrations were measured using a wavefront aberrometer (COAS, AMO Wavefront Sciences, USA) at mesopic light levels (illuminance at cornea was 4 lux). Wavefront analysis was achieved for a 5 mm pupil. Statistical analysis was carried out for the right eye only. The average coefficient of most high-order aberrations did not differ from zero with the exception of vertical (0.076 μm) and horizontal (0.018 μm) coma, oblique trefoil (-0.055 μm) and spherical aberration (0.018 μm). The most prominent change between the three groups was observed for the spherical aberration, which increased from 0.007 μm (SE 0.005) in group I to 0.011 μm (SE 0.004) in group II and 0.030 μm (SE 0.004) in group III. Significant differences were also found for the oblique astigmatism and the third-order coma aberrations. Differences in the low levels of ocular spherical aberration in young children possibly reflect differences in lenticular spherical aberration and relate to the gradient refractive index of the lens. The evaluation of spherical aberration at certain stages of eye growth may help to better understand the underlying mechanisms of myopia development. Copyright © 2014 Spanish General Council of Optometry. Published by Elsevier España. All rights reserved.
Quantification of uncertainties in the tsunami hazard for Cascadia using statistical emulation
NASA Astrophysics Data System (ADS)
Guillas, S.; Day, S. J.; Joakim, B.
2016-12-01
We present new high resolution tsunami wave propagation and coastal inundation simulations for the Cascadia region in the Pacific Northwest. The coseismic representation in this analysis is novel, and more realistic than in previous studies, as we jointly parametrize multiple aspects of the seabed deformation. Due to the large computational cost of such simulators, statistical emulation is required in order to carry out uncertainty quantification tasks, as emulators efficiently approximate simulators. The emulator replaces the tsunami model VOLNA by a fast surrogate, so we are able to efficiently propagate uncertainties from the source characteristics to wave heights, in order to probabilistically assess tsunami hazard for Cascadia. We employ a new method for the design of the computer experiments in order to reduce the number of runs while maintaining good approximation properties of the emulator. Of the initial nine parameters, mostly describing the geometry and time variation of the seabed deformation, we drop two since they turn out not to influence the resulting tsunami waves at the coast. We model the impact of another parameter linearly, as its influence on the wave heights is identified as linear. We combine this screening approach with the sequential design algorithm MICE (Mutual Information for Computer Experiments), which adaptively selects the input values at which to run the computer simulator in order to maximize the expected information gain (mutual information) over the input space. As a result, the emulation is made possible and accurate. Starting from distributions of the source parameters that encapsulate geophysical knowledge of the possible source characteristics, we derive distributions of the tsunami wave heights along the coastline.
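A minimal sketch of the emulation step with a generic Gaussian-process regressor standing in for the VOLNA surrogate (the MICE sequential design and the real source parametrization are not reproduced; the toy simulator and all values are invented):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def toy_simulator(X):
        """Cheap stand-in for an expensive tsunami code such as VOLNA."""
        return np.sin(3.0 * X[:, 0]) * np.exp(-X[:, 1])

    rng = np.random.default_rng(0)
    X_train = rng.uniform(0.0, 1.0, (30, 2))   # a 30-run design (random, not MICE)
    y_train = toy_simulator(X_train)

    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]),
                                  normalize_y=True).fit(X_train, y_train)
    X_new = rng.uniform(0.0, 1.0, (5, 2))
    mean, std = gp.predict(X_new, return_std=True)  # fast surrogate + uncertainty
    print(mean, std)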
Natural image statistics and low-complexity feature selection.
Vasconcelos, Manuela; Vasconcelos, Nuno
2009-02-01
Low-complexity feature selection is analyzed in the context of visual recognition. It is hypothesized that high-order dependences of bandpass features contain little information for discrimination of natural images. This hypothesis is characterized formally by the introduction of the concepts of conjunctive interference and decomposability order of a feature set. Necessary and sufficient conditions for the feasibility of low-complexity feature selection are then derived in terms of these concepts. It is shown that the intrinsic complexity of feature selection is determined by the decomposability order of the feature set and not its dimension. Feature selection algorithms are then derived for all levels of complexity and are shown to be approximated by existing information-theoretic methods, which they consistently outperform. The new algorithms are also used to objectively test the hypothesis of low decomposability order through comparison of classification performance. It is shown that, for image classification, the gain of modeling feature dependencies has strongly diminishing returns: best results are obtained under the assumption of decomposability order 1. This suggests a generic law for bandpass features extracted from natural images: that the effect, on the dependence of any two features, of observing any other feature is constant across image classes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lei, Huan; Baker, Nathan A.; Li, Xiantao
We present a data-driven approach to determine the memory kernel and random noise of the generalized Langevin equation. To facilitate practical implementations, we parameterize the kernel function in the Laplace domain by a rational function, with coefficients directly linked to the equilibrium statistics of the coarse-grain variables. Further, we show that such an approximation can be constructed to arbitrarily high order. Within these approximations, the generalized Langevin dynamics can be embedded in an extended stochastic model without memory. We demonstrate how to introduce the stochastic noise so that the fluctuation-dissipation theorem is exactly satisfied.
Adaptive multiple super fast simulated annealing for stochastic microstructure reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryu, Seun; Lin, Guang; Sun, Xin
2013-01-01
Fast image reconstruction from statistical information is critical in image fusion from multimodality chemical imaging instrumentation, in order to create high-resolution images over large domains. Stochastic methods have been widely used to reconstruct images from the two-point correlation function. The main challenge is to increase the efficiency of reconstruction. A novel simulated annealing method is proposed for fast image reconstruction. Combining the advantages of very fast cooling schedules, dynamic adaptation and parallelization, the new simulated annealing algorithm increases efficiency by several orders of magnitude, making large-domain image fusion feasible.
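A hedged 1D illustration of the underlying reconstruction loop: anneal pixel swaps of a two-phase microstructure until its two-point correlation matches a target (the paper's adaptive, parallel, very-fast-cooling scheme is not reproduced; the sizes and cooling rate are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)

    def s2(img):
        """Two-point correlation of a periodic 1D binary microstructure."""
        f = np.fft.rfft(img)
        return np.fft.irfft(f * np.conj(f), n=img.size) / img.size

    target_img = (rng.random(256) < 0.3).astype(float)
    target = s2(target_img)          # pretend only the statistics are known

    img = rng.permutation(target_img)   # same volume fraction, random layout
    T = 1e-3
    cost = np.sum((s2(img) - target) ** 2)
    for step in range(20000):
        i, j = rng.integers(256, size=2)
        if img[i] == img[j]:
            continue
        img[i], img[j] = img[j], img[i]      # trial swap preserves phase fractions
        new = np.sum((s2(img) - target) ** 2)
        if new < cost or rng.random() < np.exp((cost - new) / T):
            cost = new                       # accept the swap
        else:
            img[i], img[j] = img[j], img[i]  # reject: undo the swap
        T *= 0.9995                          # fast cooling schedule
    print(cost)   # close to zero once the statistics are matched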
Predicting long-term catchment nutrient export: the use of nonlinear time series models
NASA Astrophysics Data System (ADS)
Valent, Peter; Howden, Nicholas J. K.; Szolgay, Jan; Komornikova, Magda
2010-05-01
After the Second World War the nitrate concentrations in European water bodies changed significantly as a result of increased nitrogen fertilizer use and changes in land use. In recent decades, however, as a consequence of the implementation of nitrate-reducing measures in Europe, nitrate concentrations in water bodies have slowly decreased. As a result, the mean and variance of the observed time series also change with time (nonstationarity and heteroscedasticity). In order to detect changes and properly describe the behaviour of such time series by time series analysis, linear models (such as autoregressive (AR), moving average (MA) and autoregressive moving average (ARMA) models) are no longer suitable. Time series with sudden changes in statistical characteristics can cause various problems in the calibration of traditional water quality models and thus give biased predictions. Proper statistical analysis of these non-stationary and heteroscedastic time series, with the aim of detecting and subsequently explaining the variations in their statistical characteristics, requires the use of nonlinear time series models. This information can then be used to improve the model building and calibration of conceptual water quality models, or to select the right calibration periods in order to produce reliable predictions. The objective of this contribution is to analyze two long time series of nitrate concentrations of the rivers Ouse and Stour with advanced nonlinear statistical modelling techniques and compare their performance with traditional linear models of the ARMA class, in order to identify changes in the time series characteristics. The time series were analysed with multiple-regime nonlinear models, represented by self-exciting threshold autoregressive (SETAR) and Markov-switching (MSW) models. The analysis showed that, based on the residual sum of squares (RSS), SETAR and MSW models described both datasets better than models of the ARMA class. In most cases the relative improvement of SETAR models over first-order AR models was low, ranging between 1% and 4%, with the exception of the three-regime model for the River Stour time series, where the improvement was 48.9%. In comparison, the relative improvement of MSW models was between 44.6% and 52.5% for two-regime models and from 60.4% to 75% for three-regime models. However, visual assessment of the models plotted against the original datasets showed that, despite a high value of RSS, some ARMA models could describe the analyzed time series better than AR, MA and SETAR models with lower values of RSS. In both datasets MSW models provided a very good visual fit, describing most of the extreme values.
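A minimal sketch of how a two-regime SETAR(1) model can be fitted by grid search over the threshold, with ordinary least squares within each regime (regime-number selection and the Markov-switching estimation are beyond this sketch; all values are illustrative):

    import numpy as np

    def fit_setar(y, n_grid=50):
        """Two-regime SETAR(1) with delay 1: grid-search the threshold r,
        fitting an intercept and AR(1) coefficient per regime by OLS."""
        x, t = y[:-1], y[1:]       # lag-1 regressor (also the threshold variable)
        best_rss, best_r = np.inf, None
        for r in np.quantile(x, np.linspace(0.15, 0.85, n_grid)):
            rss = 0.0
            for mask in (x <= r, x > r):
                X = np.c_[np.ones(mask.sum()), x[mask]]
                beta, *_ = np.linalg.lstsq(X, t[mask], rcond=None)
                rss += np.sum((t[mask] - X @ beta) ** 2)
            if rss < best_rss:
                best_rss, best_r = rss, r
        return best_rss, best_r    # residual sum of squares, threshold estimate

    rng = np.random.default_rng(1)
    y = np.zeros(2000)
    for i in range(1, 2000):       # simulate a SETAR process with threshold 0
        y[i] = (0.5 + 0.7 * y[i-1] if y[i-1] <= 0.0 else -0.4 * y[i-1]) \
               + rng.normal(0, 0.5)
    print(fit_setar(y))            # recovered threshold should lie near 0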
Brenner, Luis F
2015-12-01
To evaluate the changes in corneal higher-order aberrations (HOAs) and their impact on the corneal higher-order Strehl ratio after an aberration-free ablation profile. Verter Institute, H. Olhos, São Paulo, Brazil. Prospective interventional study. Eyes that had aberration-free myopic ablation were divided into 3 groups based on the spherical equivalent (SE). The corneal HOAs and higher-order Strehl ratios were calculated before surgery and 3 months after surgery. The postoperative uncorrected distance visual acuity, corrected distance visual acuity, and SE did not present statistical differences among groups (88 eyes, P > .05). For a 6 mm pupil, the corneal HOA showed a mean increase of 0.17 μm (from 0.39 to 0.56 μm) (P < .001) and the corneal higher-order Strehl ratio presented a reduction of 0.03 (from 0.25 to 0.22) (P = .001). The following consistent linear predictive model was obtained: corneal HOA induction = 1.474 - 0.032 × SE - 0.225 × OZ, where OZ is the optical zone (R² = 0.49, adjusted R² = 0.48, P < .001). The corneal HOAs and the higher-order Strehl ratios deteriorated after moderate and high myopic ablations. The worsening in corneal aberrations and optical quality was related to the magnitude of the intended correction and did not affect high-contrast visual performance. The OZ was the only modifiable parameter capable of restraining the optical quality loss. The author has no financial or proprietary interest in any material or method mentioned. Copyright © 2015 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
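The reported predictive model can be evaluated directly; the example inputs below are illustrative, not taken from the study:

    def hoa_induction(se_diopters, optical_zone_mm):
        """Corneal HOA induction (micrometers) from the abstract's linear model:
        1.474 - 0.032 * SE - 0.225 * OZ."""
        return 1.474 - 0.032 * se_diopters - 0.225 * optical_zone_mm

    # A hypothetical -4.00 D eye treated with a 6.5 mm optical zone:
    print(hoa_induction(se_diopters=-4.0, optical_zone_mm=6.5))  # about 0.14 um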
NASA Astrophysics Data System (ADS)
Ji, Zhong-Ye; Zhang, Xiao-Fang
2018-01-01
The mathematical relation between the beam quality β factor of a high-energy laser and the wavefront aberration of the laser beam is important in beam quality control theory for high-energy laser weapon systems. In order to obtain this relation, numerical simulation was used in the research. Firstly, Zernike representations of typical distorted atmospheric wavefront aberrations caused by Kolmogoroff turbulence are generated. Then, the corresponding beam quality β factors of the different distorted wavefronts are calculated numerically through the fast Fourier transform. Thus, the statistical relationship between the beam quality β factors of the high-energy laser and the wavefront aberrations of the beam can be established from the calculated results. Finally, curve fitting is used to establish the mathematical relationship between these two parameters. The result of the curve fitting shows that there is a quadratic relation between the beam quality β factor of the high-energy laser and the wavefront aberration of the laser beam. In this paper, three fitting curves, in which the wavefront aberrations are composed of Zernike polynomials of orders 20, 36 and 60, respectively, are established to express the relationship between the beam quality β factor and atmospheric wavefront aberrations of different spatial frequency content.
Dowall, Stuart D; Graham, Victoria A; Tipton, Thomas R W; Hewson, Roger
2009-08-31
Work with highly pathogenic material mandates the use of biological containment facilities, involving microbiological safety cabinets and specialist laboratory engineering structures typified by containment level 3 (CL3) and CL4 laboratories. A consequence of working in high containment is the practical difficulty of accommodating the specialist assays and equipment often essential for experimental analyses. In an era of increased interest in biodefence pathogens and emerging diseases, immunological analysis has developed rapidly alongside traditional techniques in virology and molecular biology. For example, in order to maximise the use of small sample volumes, multiplexing has become a more popular and widespread approach to quantify multiple analytes, such as cytokines and chemokines, simultaneously. The Luminex microsphere system allows for the detection of many cytokines and chemokines in a single sample, but its detection method, based on aligned lasers and fluidics, means that samples often have to be analysed in low containment facilities. In order to perform cytokine analysis on materials from high containment (CL3 and CL4 laboratories), we have developed an appropriate inactivation methodology applied after the staining steps which, although it results in a reduction of median fluorescence intensity, produces statistically comparable outcomes when judged against non-inactivated samples. This methodology thus extends the use of Luminex technology to material that contains highly pathogenic biological agents.
Limit order book and its modeling in terms of Gibbs Grand-Canonical Ensemble
NASA Astrophysics Data System (ADS)
Bicci, Alberto
2016-12-01
In the domain of so-called econophysics, some attempts have already been made to apply the theory of thermodynamics and statistical mechanics to economics and financial markets. In this paper a similar approach is taken from a different perspective, modeling the limit order book and price formation process of a given stock by the Gibbs grand-canonical ensemble for the bid and ask orders. Applying Bose-Einstein statistics to this ensemble then allows one to derive the distribution of the sell and buy orders as a function of price. As a consequence, we can define in a meaningful way expressions for the temperatures of the ensembles of bid orders and of ask orders, which are a function of the minimum bid, maximum ask and closure prices of the stock, as well as of the exchanged volume of shares. It is demonstrated that the difference between the ask and bid order temperatures can be related to the VAO (Volume Accumulation Oscillator), an indicator empirically defined in technical analysis of stock markets. Furthermore, the derived distributions for aggregate bid and ask orders can be subjected to well defined validations against real data, giving a falsifiable character to the model.
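The abstract does not give the explicit expressions, so the following is only an assumed illustrative form: a Bose-Einstein-type occupancy of price levels on each side of a reference price, with side-specific temperature-like parameters:

    import numpy as np

    def be_occupancy(distance, T):
        """Bose-Einstein-type occupancy of a price level as a function of its
        distance (> 0) from the reference price; T is the temperature-like
        parameter of that side of the book. Assumed illustrative form, not
        the paper's exact expression."""
        return 1.0 / (np.exp(distance / T) - 1.0)

    mu = 100.0                                 # reference (e.g. closure) price
    ask_prices = np.linspace(100.1, 105.0, 50)
    bid_prices = np.linspace(95.0, 99.9, 50)
    ask_depth = be_occupancy(ask_prices - mu, T=1.2)   # ask-book depth profile
    bid_depth = be_occupancy(mu - bid_prices, T=0.8)   # bid-book depth profile
    # A larger T spreads orders further from the reference price; in the
    # paper's framework the bid/ask temperature difference is what relates
    # to the VAO indicator.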
A very low noise, high accuracy, programmable voltage source for low frequency noise measurements.
Scandurra, Graziella; Giusi, Gino; Ciofi, Carmine
2014-04-01
In this paper an approach for designing a programmable, very low noise, high accuracy voltage source for biasing devices under test in low frequency noise measurements is proposed. The core of the system is a supercapacitor-based two-pole low-pass filter used for filtering out the noise produced by a standard DA converter down to 100 mHz, with an attenuation in excess of 40 dB. The high leakage current of the supercapacitors, however, introduces large DC errors that need to be compensated in order to obtain high accuracy as well as very low output noise. To this end, a proper circuit topology has been developed that considerably reduces the effect of the supercapacitor leakage current on the DC response of the system while maintaining a very low level of output noise. With a proper design, an output noise as low as the equivalent input voltage noise of the OP27 operational amplifier, used as the output buffer of the system, can be obtained with DC accuracies better than 0.05% up to the maximum output of 8 V. The expected performance of the proposed voltage source has been confirmed both by means of SPICE simulations and by means of measurements on actual prototypes. Turn-on and stabilization times for the system are of the order of a few hundred seconds. These times are fully compatible with noise measurements down to 100 mHz, since measurement times of the order of several tens of minutes are required in any case in order to reduce the statistical error in the measured spectra down to an acceptable level.
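A back-of-envelope check of the two-pole low-pass idea (the component values are assumptions, not from the paper, and two buffered, non-interacting identical RC sections are assumed):

    import numpy as np

    R, C = 100.0, 10.0                  # assumed: 100 ohm, 10 F supercapacitor
    fc = 1.0 / (2.0 * np.pi * R * C)    # pole frequency of each RC section

    def attenuation_db(f):
        """Attenuation of two buffered identical first-order RC poles."""
        h = 1.0 / (1.0 + (f / fc) ** 2)   # |H(f)| of the two-pole cascade
        return -20.0 * np.log10(h)

    print(fc, attenuation_db(0.1))      # sub-mHz pole; far beyond 40 dB at 100 mHz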
Changes in Keratometric Values and Corneal High Order Aberrations After Hydrogel Inlay Implantation.
Whang, Woong-Joo; Yoo, Young-Sik; Joo, Choun-Ki; Yoon, Geunyoung
2017-01-01
We sought to analyze the surgically induced refractive change (SIRC) and the change in high-order aberrations after Raindrop corneal inlay insertion (ReVision Optics, Lake Forest, CA), and to assess the extent to which Raindrop corneal inlay insertion could correct presbyopia. Interventional case series. Seventeen patients were included if they had a corneal thickness ≥500 μm and a stable manifest spherical equivalent refraction between 0.50 and +1.00 diopters (D). The Raindrop corneal inlay was implanted on the stromal bed of a flap created with femtosecond laser assistance in the nondominant eye. Manifest refraction, corneal powers, and corneal high-order aberrations were measured preoperatively and at 3 and 12 months postoperatively. The SIRC by manifest refraction was 0.99 ± 0.26 D. The changes derived from simulated keratometry (K), true net power, and the equivalent K reading (EKR) at 1.0-4.0 mm were greater than the SIRC (all P < .01), while the change in EKR at 6.0 mm was less than the SIRC (P < .01). The changes in EKR 5.0 mm, automated K, and EKR 4.5 mm did not differ significantly from the SIRC (P = .81, .29, and .09, respectively), and the difference was least for EKR 5.0 mm. In the analysis of high-order aberrations, only spherical aberration showed a statistically significant difference between preoperative and postoperative values for both the anterior cornea and the total cornea (all P < .01). The Raindrop corneal inlay corrects presbyopia by increasing negative spherical aberration. The equivalent K reading at 5.0 mm accurately reflected the SIRC and would be applicable for intraocular lens power prediction before cataract surgery. Copyright © 2016 Elsevier Inc. All rights reserved.