Higher Order Cumulant Studies of Ocean Surface Random Fields from Satellite Altimeter Data
NASA Technical Reports Server (NTRS)
Cheng, B.
1996-01-01
Statistics, especially second-order statistics, have been used to study ocean processes for many years and occupy an appreciable part of the research literature on physical oceanography. They in turn form part of a much larger field of study in statistical fluid mechanics.
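As a concrete illustration of the quantities involved, the first few cumulants of a sampled record can be estimated directly from central moments. The sketch below (plain numpy on synthetic data, not altimeter data) shows why third- and fourth-order cumulants probe non-Gaussianity: they vanish for a Gaussian record.

```python
import numpy as np

def cumulants(x):
    """Sample cumulants of a 1-D record up to fourth order.

    c1 = mean, c2 = variance, c3 = E[(x-mu)^3],
    c4 = E[(x-mu)^4] - 3*c2**2 (fourth cumulant, zero for Gaussian data).
    """
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    d = x - mu
    c2 = (d**2).mean()
    c3 = (d**3).mean()
    c4 = (d**4).mean() - 3.0 * c2**2
    return mu, c2, c3, c4

# For a Gaussian record the third- and fourth-order cumulants vanish,
# which is why they are sensitive indicators of non-Gaussian processes.
rng = np.random.default_rng(0)
gauss = rng.standard_normal(200_000)
_, c2, c3, c4 = cumulants(gauss)
```

For a genuinely non-Gaussian field (e.g. skewed wave heights), `c3` and `c4` would differ measurably from zero.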
The use of higher-order statistics in rapid object categorization in natural scenes.
Banno, Hayaki; Saiki, Jun
2015-02-04
We can rapidly and efficiently recognize many types of objects embedded in complex scenes. What information supports this object recognition is a fundamental question for understanding our visual processing. We investigated the eccentricity-dependent role of shape and statistical information for ultrarapid object categorization, using the higher-order statistics proposed by Portilla and Simoncelli (2000). Synthesized textures computed by their algorithms have the same higher-order statistics as the originals, while their global shapes are destroyed. We used the synthesized textures to manipulate the availability of shape information separately from the statistics. We hypothesized that shape makes a greater contribution to central vision than to peripheral vision and that statistics show the opposite pattern. Results did not show contributions clearly biased by eccentricity. Statistical information demonstrated a robust contribution not only in peripheral but also in central vision. For shape, the results supported a contribution in both central and peripheral vision. Further experiments revealed some interesting properties of the statistics. They are available for a limited time, can signal the presence or absence of animals even without shape information, and predict how easily humans detect animals in original images. Our data suggest that, under the time constraints of categorical processing, higher-order statistics underlie our performance in rapid categorization, irrespective of eccentricity. © 2015 ARVO.
Optical diagnosis of cervical cancer by higher order spectra and boosting
NASA Astrophysics Data System (ADS)
Pratiher, Sawon; Mukhopadhyay, Sabyasachi; Barman, Ritwik; Pratiher, Souvik; Pradhan, Asima; Ghosh, Nirmalya; Panigrahi, Prasanta K.
2017-03-01
In this contribution, we report the application of higher-order statistical moments, using decision-tree and ensemble-based learning methodology, to the development of diagnostic algorithms for optical diagnosis of cancer. The classification results were compared to those obtained with independent feature extractors such as linear discriminant analysis (LDA). The boosting-based methodology using higher-order statistics achieves higher specificity and sensitivity while being much faster than other time-frequency domain based methods.
Is Statistical Learning Constrained by Lower Level Perceptual Organization?
Emberson, Lauren L.; Liu, Ran; Zevin, Jason D.
2013-01-01
In order for statistical information to aid in complex developmental processes such as language acquisition, learning from higher-order statistics (e.g. across successive syllables in a speech stream to support segmentation) must be possible while perceptual abilities (e.g. speech categorization) are still developing. The current study examines how perceptual organization interacts with statistical learning. Adult participants were presented with multiple exemplars from novel, complex sound categories designed to reflect some of the spectral complexity and variability of speech. These categories were organized into sequential pairs and presented such that higher-order statistics, defined based on sound categories, could support stream segmentation. Perceptual similarity judgments and multi-dimensional scaling revealed that participants only perceived three perceptual clusters of sounds and thus did not distinguish the four experimenter-defined categories, creating a tension between lower level perceptual organization and higher-order statistical information. We examined whether the resulting pattern of learning is more consistent with statistical learning being “bottom-up,” constrained by the lower levels of organization, or “top-down,” such that higher-order statistical information of the stimulus stream takes priority over the perceptual organization, and perhaps influences perceptual organization. We consistently find evidence that learning is constrained by perceptual organization. Moreover, participants generalize their learning to novel sounds that occupy a similar perceptual space, suggesting that statistical learning occurs based on regions of or clusters in perceptual space. Overall, these results reveal a constraint on learning of sound sequences, such that statistical information is determined based on lower level organization. These findings have important implications for the role of statistical learning in language acquisition. PMID:23618755
I. Arismendi; S. L. Johnson; J. B. Dunham
2015-01-01
Statistics of central tendency and dispersion may not capture relevant or desired characteristics of the distribution of continuous phenomena and, thus, they may not adequately describe temporal patterns of change. Here, we present two methodological approaches that can help to identify temporal changes in environmental regimes. First, we use higher-order statistical...
Higher-Order Statistical Correlations and Mutual Information Among Particles in a Quantum Well
NASA Astrophysics Data System (ADS)
Yépez, V. S.; Sagar, R. P.; Laguna, H. G.
2017-12-01
The influence of wave function symmetry on statistical correlation is studied for the case of three non-interacting spin-free quantum particles in a unidimensional box, in position and in momentum space. The higher-order statistical correlations occurring among the three particles in this quantum system are quantified via higher-order mutual information and compared to the correlation between pairs of variables in this model, and to the correlation in the two-particle system. The results for the higher-order mutual information show that there are states where the symmetric wave functions are more correlated than the antisymmetric ones with the same quantum numbers. This holds in position as well as in momentum space. This behavior is opposite to that observed for the correlation between pairs of variables in this model, and in the two-particle system, where the antisymmetric wave functions are in general more correlated. These results are also consistent with those observed in a system of three uncoupled oscillators. The use of higher-order mutual information as a correlation measure is monitored and examined by considering a superposition of states or systems with two Slater determinants.
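The distinction between pairwise and genuinely higher-order dependence that this abstract quantifies can be illustrated with simple discrete variables. The sketch below (plain numpy, not the paper's quantum system) computes pairwise mutual information and the three-variable multi-information; for an XOR triple, every pair is independent while the triple carries one full bit of higher-order correlation.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a probability array of any shape."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def total_correlation(pxyz):
    """Multi-information C = H(X) + H(Y) + H(Z) - H(X,Y,Z)."""
    px = pxyz.sum(axis=(1, 2))
    py = pxyz.sum(axis=(0, 2))
    pz = pxyz.sum(axis=(0, 1))
    return entropy(px) + entropy(py) + entropy(pz) - entropy(pxyz)

def pairwise_mi(pxyz, a, b):
    """Mutual information between axes a < b of a 3-D joint distribution."""
    drop = tuple(i for i in range(3) if i not in (a, b))
    pab = pxyz.sum(axis=drop)
    pa = pab.sum(axis=1)
    pb = pab.sum(axis=0)
    return entropy(pa) + entropy(pb) - entropy(pab)

# XOR triple: Z = X xor Y with X, Y fair coins. Every pair is independent
# (zero pairwise MI), yet the triple carries one bit of higher-order correlation.
p = np.zeros((2, 2, 2))
for x in (0, 1):
    for y in (0, 1):
        p[x, y, x ^ y] = 0.25
```

This is the discrete analogue of the situation the paper studies: correlation that is invisible at the pairwise level but captured by a higher-order information measure.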
Wigner surmises and the two-dimensional homogeneous Poisson point process.
Sakhr, Jamal; Nieminen, John M
2006-04-01
We derive a set of identities that relate the higher-order interpoint spacing statistics of the two-dimensional homogeneous Poisson point process to the Wigner surmises for the higher-order spacing distributions of eigenvalues from the three classical random matrix ensembles. We also report a remarkable identity that equates the second-nearest-neighbor spacing statistics of the points of the Poisson process and the nearest-neighbor spacing statistics of complex eigenvalues from Ginibre's ensemble of 2 x 2 complex non-Hermitian random matrices.
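The correspondence reported above can be checked numerically. The sketch below (assuming scipy is available) samples a homogeneous planar Poisson process, rescales nearest-neighbour distances to unit mean, and compares their variance with that of the GOE Wigner surmise P(s) = (pi/2) s exp(-pi s^2 / 4), whose variance is 4/pi - 1.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
n = 40_000                      # points in the unit square (intensity n)
pts = rng.random((n, 2))

# Nearest-neighbour distance of every point (k=2: the first hit is the point itself).
tree = cKDTree(pts)
d, _ = tree.query(pts, k=2)
s = d[:, 1]
s /= s.mean()                   # rescale to unit mean spacing

# GOE Wigner surmise: P(s) = (pi/2) s exp(-pi s^2 / 4), variance 4/pi - 1.
var_theory = 4.0 / np.pi - 1.0
var_sample = s.var()
```

Edge effects slightly inflate the distances of boundary points; with this intensity the bias is well below the Monte Carlo tolerance used here.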
Liu, Ruiming; Liu, Erqi; Yang, Jie; Zeng, Yong; Wang, Fanglin; Cao, Yuan
2007-11-01
The Fukunaga-Koontz transform (FKT), stemming from principal component analysis (PCA), is used in many pattern recognition and image-processing fields. It cannot capture the higher-order statistical properties of natural images, so its detection performance is unsatisfying. PCA has been extended to kernel PCA in order to capture higher-order statistics; however, kernel FKT (KFKT) had not previously been explicitly proposed, nor had its detection performance been studied. To accurately detect potential small targets in infrared images, we first extend FKT to KFKT so as to capture the higher-order statistical properties of images. We then develop a framework based on Kalman prediction and KFKT that can automatically detect and track small targets. Experimental results show that KFKT outperforms FKT and that the proposed framework is competent to automatically detect and track infrared point targets.
Higher-order cumulants and spectral kurtosis for early detection of subterranean termites
NASA Astrophysics Data System (ADS)
de la Rosa, Juan José González; Moreno Muñoz, Antonio
2008-02-01
This paper deals with termite detection in unfavorable SNR scenarios via signal processing using higher-order statistics. The results could be extrapolated to all impulse-like insect emissions; the target application is non-destructive termite detection. Fourth-order cumulants in the time and frequency domains enhance detection and complete the characterization of termite emissions, which are non-Gaussian in essence. Sliding higher-order cumulants pick out distinctive time instants, complementing the sliding variance, which only reveals power excesses in the signal, even for low-amplitude impulses. The spectral kurtosis reveals non-Gaussian characteristics (the peakedness of the probability density function) associated with these non-stationary measurements, especially in the near-ultrasound frequency band. Well-proven estimators have been used to compute the higher-order statistics, and the novel findings are illustrated via graphical examples.
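The sliding fourth-order statistic described above can be sketched in a few lines. The signal here is synthetic (Gaussian noise with one injected impulse), not real termite data: the sliding variance barely reacts to a weak impulse, while the windowed excess kurtosis spikes at its location.

```python
import numpy as np

def sliding_kurtosis(x, w):
    """Excess kurtosis in a sliding window of length w (simple loop)."""
    x = np.asarray(x, dtype=float)
    out = np.empty(x.size - w + 1)
    for i in range(out.size):
        seg = x[i:i + w]
        d = seg - seg.mean()
        m2 = (d**2).mean()
        out[i] = (d**4).mean() / m2**2 - 3.0
    return out

# A weak impulse buried in Gaussian noise: the fourth-order statistic
# flags the impulse location even though the power excess is tiny.
rng = np.random.default_rng(2)
x = rng.standard_normal(4000)
x[2500] += 6.0                  # impulse-like "emission"
w = 200
k = sliding_kurtosis(x, w)
peak = int(np.argmax(k))        # index of the most kurtotic window
```

Windows containing sample 2500 span start indices 2301 through 2500, so the detector localizes the impulse to within one window length.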
On use of the multistage dose-response model for assessing laboratory animal carcinogenicity
Nitcheva, Daniella; Piegorsch, Walter W.; West, R. Webster
2007-01-01
We explore how well a statistical multistage model describes dose-response patterns in laboratory animal carcinogenicity experiments from a large database of quantal response data. The data are collected from the U.S. EPA’s publicly available IRIS data warehouse and examined statistically to determine how often higher-order values in the multistage predictor yield significant improvements in explanatory power over lower-order values. Our results suggest that the addition of a second-order parameter to the model only improves the fit about 20% of the time, while adding even higher-order terms apparently does not contribute to the fit at all, at least with the study designs we captured in the IRIS database. Also included is an examination of statistical tests for assessing significance of higher-order terms in a multistage dose-response model. It is noted that bootstrap testing methodology appears to offer greater stability for performing the hypothesis tests than a more common, but possibly unstable, “Wald” test. PMID:17490794
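A minimal sketch of the multistage model and the likelihood-ratio comparison of first- versus second-order fits, using made-up quantal data and scipy's optimizer. The naive chi-square reference is used here only for brevity; as the abstract notes, a bootstrap reference is safer.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def neg_loglik(q, dose, n, y):
    """Binomial negative log-likelihood for the multistage model
    P(d) = 1 - exp(-(q0 + q1*d + q2*d^2 + ...)), with q_i >= 0."""
    q = np.maximum(q, 0.0)
    lam = sum(qi * dose**i for i, qi in enumerate(q))
    p = np.clip(1.0 - np.exp(-lam), 1e-12, 1 - 1e-12)
    return -np.sum(y * np.log(p) + (n - y) * np.log1p(-p))

# Hypothetical quantal bioassay: dose groups, animals per group, tumour counts.
dose = np.array([0.0, 0.25, 0.5, 1.0])
n = np.array([50, 50, 50, 50])
y = np.array([2, 5, 9, 18])

fit1 = minimize(neg_loglik, x0=[0.05, 0.3], args=(dose, n, y),
                method="Nelder-Mead")
# Nested second-order fit, started from the first-order solution with q2 = 0.
fit2 = minimize(neg_loglik, x0=[*np.maximum(fit1.x, 0.0), 0.0],
                args=(dose, n, y), method="Nelder-Mead")
lr = 2.0 * (fit1.fun - fit2.fun)    # likelihood-ratio statistic for the q2 term
pval = chi2.sf(max(lr, 0.0), df=1)  # naive chi-square reference
```

Because the models are nested and the larger fit starts from the smaller fit's optimum, the second-order log-likelihood can never be worse.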
On the Stability of Jump-Linear Systems Driven by Finite-State Machines with Markovian Inputs
NASA Technical Reports Server (NTRS)
Patilkulkarni, Sudarshan; Herencia-Zapana, Heber; Gray, W. Steven; Gonzalez, Oscar R.
2004-01-01
This paper presents two mean-square stability tests for a jump-linear system driven by a finite-state machine with a first-order Markovian input process. The first test is based on conventional Markov jump-linear theory and avoids the use of any higher-order statistics. The second test is developed directly using the higher-order statistics of the machine's output process. The two approaches are illustrated with a simple model for a recoverable computer control system.
High order statistical signatures from source-driven measurements of subcritical fissile systems
NASA Astrophysics Data System (ADS)
Mattingly, John Kelly
1998-11-01
This research focuses on the development and application of high-order statistical analyses to measurements performed with subcritical fissile systems driven by an introduced neutron source. The signatures presented are derived from the counting statistics of the introduced source and of radiation detectors that observe the response of the fissile system. It is demonstrated that successively higher-order counting statistics possess progressively higher sensitivity to reactivity. Consequently, these signatures are more sensitive to changes in the composition, fissile mass, and configuration of the fissile assembly. Furthermore, it is shown that these techniques are capable of distinguishing the response of the fissile system to the introduced source from its response to any internal or inherent sources. This ability, combined with the enhanced sensitivity of higher-order signatures, indicates that these techniques will be of significant utility in a variety of applications. Potential applications include enhanced radiation-signature identification of weapons components for nuclear disarmament and safeguards, and augmented nondestructive analysis of spent nuclear fuel. In general, these techniques expand present capabilities in the analysis of subcritical measurements.
Order statistics applied to the most massive and most distant galaxy clusters
NASA Astrophysics Data System (ADS)
Waizmann, J.-C.; Ettori, S.; Bartelmann, M.
2013-06-01
In this work, we present an analytic framework for calculating the individual and joint distributions of the nth most massive or nth highest redshift galaxy cluster for a given survey characteristic, allowing us to formulate Λ cold dark matter (ΛCDM) exclusion criteria. We show that the cumulative distribution functions steepen with increasing order, giving them a higher constraining power with respect to the extreme value statistics. Additionally, we find that the order statistics in mass (being dominated by clusters at lower redshifts) are sensitive to the matter density and the normalization of the matter fluctuations, whereas the order statistics in redshift are particularly sensitive to the geometric evolution of the Universe. For a fixed cosmology, both order statistics are efficient probes of the functional shape of the mass function at the high-mass end. To allow a quick assessment of both order statistics, we provide fits as a function of the survey area that allow percentile estimation with an accuracy better than 2 per cent. Furthermore, we discuss the joint distributions in the two-dimensional case and find that, for the combination of the largest and second largest observations, it is most likely to find them realized with similar values in a broadly peaked distribution. When combining the largest observation with higher orders, a larger gap between the observations becomes more likely, and when combining higher orders in general, the joint probability density function peaks more strongly. Having introduced the theory, we apply the order-statistical analysis to the South Pole Telescope (SPT) massive cluster sample and the Metacatalogue of X-ray detected Clusters of galaxies (MCXC) and find that the 10 most massive clusters in the sample are consistent with ΛCDM and the Tinker mass function.
For the order statistics in redshift, we find a discrepancy between the data and the theoretical distributions, which could in principle indicate a deviation from the standard cosmology. However, we attribute this deviation to the uncertainty in the modelling of the SPT survey selection function. In turn, by assuming the ΛCDM reference cosmology, order statistics can also be utilized for consistency checks of the completeness of the observed sample and of the modelling of the survey selection function.
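The steepening of the cumulative distributions with increasing order follows from simple binomial counting and is easy to reproduce. A short sketch, assuming scipy is available: the CDF of the kth largest of N independent draws is the probability that at most k-1 draws exceed the threshold.

```python
import numpy as np
from scipy.stats import binom

def cdf_kth_largest(F, k, N):
    """CDF of the k-th largest of N i.i.d. draws with base CDF values F.

    P(M_k <= x) = P(at most k-1 draws exceed x) = Binom(N, 1-F).cdf(k-1).
    """
    F = np.asarray(F, dtype=float)
    return binom.cdf(k - 1, N, 1.0 - F)

F = np.linspace(0.0, 1.0, 101)   # base CDF evaluated on a grid
N = 100
cdf1 = cdf_kth_largest(F, 1, N)   # the maximum: reduces to F**N
cdf10 = cdf_kth_largest(F, 10, N) # the 10th largest: a visibly steeper CDF
```

For k = 1 the formula collapses to the familiar extreme-value result F(x)^N; higher k concentrates the distribution, which is the source of the added constraining power noted above.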
Representation of Probability Density Functions from Orbit Determination using the Particle Filter
NASA Technical Reports Server (NTRS)
Mashiku, Alinda K.; Garrison, James; Carpenter, J. Russell
2012-01-01
Statistical orbit determination enables us to obtain estimates of the state and the statistical information of its region of uncertainty. In order to obtain an accurate representation of the probability density function (PDF) that incorporates higher-order statistical information, we propose the use of nonlinear estimation methods such as the Particle Filter. The Particle Filter (PF) is capable of providing a PDF representation of the state estimates whose accuracy is dependent on the number of particles or samples used. For this method to be applicable to real case scenarios, we need a way of accurately representing the PDF in a compressed manner with little information loss. Hence we propose using Independent Component Analysis (ICA) as a non-Gaussian dimensional reduction method that is capable of maintaining the higher-order statistical information obtained using the PF. Methods such as Principal Component Analysis (PCA) are based on utilizing up to second-order statistics, and hence will not suffice in maintaining maximum information content. Both the PCA and the ICA are applied to two scenarios, a highly eccentric orbit with a lower a priori uncertainty covariance and a less eccentric orbit with a higher a priori uncertainty covariance, to illustrate the capability of the ICA in relation to the PCA.
Wolff, Hans-Georg; Preising, Katja
2005-02-01
To ease the interpretation of higher order factor analysis, the direct relationships between variables and higher order factors may be calculated by the Schmid-Leiman solution (SLS; Schmid & Leiman, 1957). This simple transformation of higher order factor analysis orthogonalizes first-order and higher order factors and thereby allows the interpretation of the relative impact of factor levels on variables. The Schmid-Leiman solution may also be used to facilitate theorizing and scale development. The rationale for the procedure is presented, supplemented by syntax codes for SPSS and SAS, since the transformation is not part of most statistical programs. Syntax codes may also be downloaded from www.psychonomic.org/archive/.
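Since the transformation itself is a simple matrix product, a numpy sketch may be useful alongside the SPSS/SAS syntax. The loadings below are made up for illustration: the Schmid-Leiman pattern juxtaposes the direct general-factor loadings with group-factor loadings rescaled by the second-order uniquenesses.

```python
import numpy as np

# Hypothetical first-order pattern: 6 variables loading on 2 first-order factors.
L1 = np.array([[0.7, 0.0], [0.6, 0.0], [0.8, 0.0],
               [0.0, 0.7], [0.0, 0.6], [0.0, 0.8]])
# Second-order loadings of the 2 first-order factors on 1 general factor.
L2 = np.array([[0.8], [0.6]])

# Schmid-Leiman: direct loadings on the general factor, plus residualized
# group-factor loadings scaled by the square roots of the second-order
# uniquenesses.
u2 = 1.0 - (L2**2).sum(axis=1)        # second-order uniquenesses
general = L1 @ L2                      # p x 1 general-factor loadings
groups = L1 @ np.diag(np.sqrt(u2))     # p x m orthogonalized group loadings
sls = np.hstack([general, groups])

# The orthogonalized pattern reproduces the hierarchical solution:
# sls @ sls.T equals L1 @ Phi @ L1.T with Phi = L2 @ L2.T + diag(u2).
Phi = L2 @ L2.T + np.diag(u2)
```

The check in the last comment is the defining property of the solution: orthogonalization changes the interpretation of the factors without changing the implied common variance.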
Adaptive interference cancel filter for evoked potential using high-order cumulants.
Lin, Bor-Shyh; Lin, Bor-Shing; Chong, Fok-Ching; Lai, Feipei
2004-01-01
This paper presents evoked potential (EP) processing using an adaptive interference cancellation (AIC) filter with second- and higher-order cumulants. In the conventional ensemble-averaging method, experiments must be repeated many times to record the required data. The use of an AIC structure with second-order statistics for EP processing has proved more efficient than the traditional averaging method, but it is sensitive both to the reference signal statistics and to the choice of step size. We therefore propose a higher-order-statistics-based AIC method to overcome these disadvantages. The method was tested on somatosensory EPs corrupted by EEG, using a gradient-type algorithm in the AIC filter. Comparisons of AIC filters based on second-, third-, and fourth-order statistics are also presented. We observed that the AIC filter with third-order statistics has better convergence performance for EP processing and is not sensitive to the selection of step size or reference input.
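For orientation, the second-order baseline that the paper improves on is the standard LMS interference canceller. A minimal sketch on synthetic data follows; the cumulant-based update that is the paper's contribution is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 20_000
ep = np.sin(2 * np.pi * np.arange(T) / 400)   # stand-in "evoked potential"
noise = rng.standard_normal(T)                # interference (e.g. background EEG)
primary = ep + 0.8 * noise                    # recorded channel
reference = noise                             # reference correlated with interference

# One-tap LMS canceller: the weight w tracks the 0.8 coupling, and the
# error output approximates the clean EP.
w, mu = 0.0, 0.002
out = np.empty(T)
for t in range(T):
    e = primary[t] - w * reference[t]
    w += 2 * mu * e * reference[t]
    out[t] = e

mse_raw = np.mean((primary - ep)**2)
mse_lms = np.mean((out[5000:] - ep[5000:])**2)  # measured after convergence
```

The second-order update above uses only the cross-correlation e*reference; the paper's variant replaces this with third- or fourth-order cumulant terms to reduce sensitivity to the step size and the reference statistics.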
Complex Sequencing Rules of Birdsong Can be Explained by Simple Hidden Markov Processes
Katahira, Kentaro; Suzuki, Kenta; Okanoya, Kazuo; Okada, Masato
2011-01-01
Complex sequencing rules observed in birdsongs provide an opportunity to investigate the neural mechanism for generating complex sequential behaviors. To relate the findings from studying birdsongs to other sequential behaviors such as human speech and musical performance, it is crucial to characterize the statistical properties of the sequencing rules in birdsongs. However, the properties of the sequencing rules in birdsongs have not yet been fully addressed. In this study, we investigate the statistical properties of the complex birdsong of the Bengalese finch (Lonchura striata var. domestica). Based on manually annotated syllable labels, we first show that there are significant higher-order context dependencies in Bengalese finch songs, that is, which syllable appears next depends on more than one previous syllable. We then analyze acoustic features of the song and show that higher-order context dependencies can be explained using first-order hidden state transition dynamics with redundant hidden states. This model corresponds to hidden Markov models (HMMs), well-known statistical models with a wide range of applications in time-series modeling. The song annotation produced by these models with first-order hidden state dynamics agreed well with manual annotation; the score was comparable to that of a second-order HMM and surpassed that of the zeroth-order model (the Gaussian mixture model, GMM), which does not use context information. Our results imply that hierarchical representation with hidden state dynamics may underlie the neural implementation for generating complex behavioral sequences with higher-order dependencies. PMID:21915345
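The first-order forward recursion underlying such HMMs is compact. A sketch with a made-up two-state, two-symbol model follows; the sanity check at the end confirms that the likelihoods over all sequences of a fixed length sum to one.

```python
import numpy as np
from itertools import product

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under a first-order HMM.

    pi: initial state probabilities (S,), A: state transitions (S, S),
    B: emission probabilities (S, V). Plain forward recursion.
    """
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return float(np.log(alpha.sum()))

# Tiny 2-state, 2-symbol model. Redundant hidden states are what let a
# first-order chain mimic higher-order dependencies in the visible symbols.
pi = np.array([0.6, 0.4])
A = np.array([[0.9, 0.1],
              [0.3, 0.7]])
B = np.array([[0.8, 0.2],
              [0.1, 0.9]])

# Sanity check: likelihoods over all length-3 observation sequences sum to one.
total = sum(np.exp(forward_loglik(seq, pi, A, B))
            for seq in product((0, 1), repeat=3))
```

Duplicating a state (same emissions, different transitions) is the mechanism by which such a first-order model captures, e.g., "syllable c after ab but not after bb" without an explicit second-order dependency.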
Estimating procedure times for surgeries by determining location parameters for the lognormal model.
Spangler, William E; Strum, David P; Vargas, Luis G; May, Jerrold H
2004-05-01
We present an empirical study of methods for estimating the location parameter of the lognormal distribution. Our results identify the best order statistic to use, and indicate that using the best order statistic instead of the median may lead to less frequent incorrect rejection of the lognormal model, more accurate critical value estimates, and higher goodness-of-fit. Using simulation data, we constructed and compared two models for identifying the best order statistic, one based on conventional nonlinear regression and the other using a data mining/machine learning technique. Better surgical procedure time estimates may lead to improved surgical operations.
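One classical closed-form location estimator built from order statistics uses the smallest, largest, and median observations. The sketch below is illustrative only; the study searches for the single best order statistic rather than this particular combination.

```python
import numpy as np

def shift_estimate(x):
    """Order-statistic estimator of the three-parameter lognormal shift.

    Uses the smallest, largest and median observations:
    g = (x_min * x_max - med^2) / (x_min + x_max - 2 * med).
    Illustrative classical form, not necessarily the paper's chosen statistic.
    """
    x = np.sort(np.asarray(x, dtype=float))
    lo, hi, med = x[0], x[-1], np.median(x)
    return (lo * hi - med**2) / (lo + hi - 2 * med)

# Synthetic "procedure times": lognormal durations shifted by a fixed setup time.
rng = np.random.default_rng(4)
true_shift = 10.0
sample = true_shift + rng.lognormal(mean=0.0, sigma=1.0, size=20_000)
g = shift_estimate(sample)
```

Once the shift is estimated, log(x - g) can be checked for normality, which is the basis of the goodness-of-fit comparisons the abstract describes.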
Adaptation to changes in higher-order stimulus statistics in the salamander retina.
Tkačik, Gašper; Ghosh, Anandamohan; Schneidman, Elad; Segev, Ronen
2014-01-01
Adaptation in the retina is thought to optimize the encoding of natural light signals into sequences of spikes sent to the brain. While adaptive changes in retinal processing to variations of the mean luminance level and second-order stimulus statistics have been documented before, no such measurements have been performed when higher-order moments of the light distribution change. We therefore measured the ganglion cell responses in the tiger salamander retina to controlled changes in the second (contrast), third (skew) and fourth (kurtosis) moments of the light intensity distribution of spatially uniform, temporally independent stimuli. The skew and kurtosis of the stimuli were chosen to cover the range observed in natural scenes. We quantified adaptation in ganglion cells by studying linear-nonlinear models that capture the retinal encoding properties well across all stimuli. We found that the encoding properties of retinal ganglion cells change only marginally when higher-order statistics change, compared to the changes observed in response to the variation in contrast. By analyzing optimal coding in LN-type models, we showed that neurons can maintain a high information rate without large dynamic adaptation to changes in skew or kurtosis. This is because, for uncorrelated stimuli, spatio-temporal summation within the receptive field averages away non-Gaussian aspects of the light intensity distribution.
NASA Astrophysics Data System (ADS)
Qi, D.; Majda, A.
2017-12-01
A low-dimensional reduced-order statistical closure model is developed for quantifying the uncertainty in statistical sensitivity and intermittency in principal model directions with largest variability in high-dimensional turbulent systems and turbulent transport models. Imperfect model sensitivity is improved through a recent mathematical strategy for calibrating model errors in a training phase, where information theory and linear statistical response theory are combined in a systematic fashion to achieve optimal model performance. The idea behind the reduced-order method comes from a self-consistent mathematical framework for general systems with quadratic nonlinearity, where crucial high-order statistics are approximated by a systematic model calibration procedure. Model efficiency is improved through additional damping and noise corrections that replace the expensive energy-conserving nonlinear interactions. Model errors due to the imperfect nonlinear approximation are corrected by tuning the model parameters using linear response theory with an information metric in a training phase before prediction. A statistical energy principle is adopted to introduce a global scaling factor that characterizes the higher-order moments in a consistent way and improves model sensitivity. Stringent test models of barotropic and baroclinic turbulence are used to demonstrate the feasibility of the reduced-order methods. Principal statistical responses in mean and variance can be captured by the reduced-order models with accuracy and efficiency. In addition, the reduced-order models are used to capture the crucial passive tracer field that is advected by the baroclinic turbulent flow.
It is demonstrated that crucial principal statistical quantities like the tracer spectrum and fat-tails in the tracer probability density functions in the most important large scales can be captured efficiently with accuracy using the reduced-order tracer model in various dynamical regimes of the flow field with distinct statistical structures.
NASA Astrophysics Data System (ADS)
Salvato, Steven Walter
The purpose of this study was to analyze questions within the chapters of a nontraditional general chemistry textbook and the four general chemistry textbooks most widely used by Texas community colleges in order to determine if the questions require higher- or lower-order thinking according to Bloom's taxonomy. The study employed quantitative methods. Bloom's taxonomy (Bloom, Engelhart, Furst, Hill, & Krathwohl, 1956) was utilized as the main instrument in the study. Additional tools were used to help classify the questions into the proper category of the taxonomy (McBeath, 1992; Metfessel, Michael, & Kirsner, 1969). The top four general chemistry textbooks used in Texas community colleges and Chemistry: A Project of the American Chemical Society (Bell et al., 2005) were analyzed during the fall semester of 2010 in order to categorize the questions within the chapters into one of the six levels of Bloom's taxonomy. Two coders were used to assess reliability. The data were analyzed using descriptive and inferential methods. The descriptive method involved calculation of the frequencies and percentages of coded questions from the books as belonging to the six categories of the taxonomy. Questions were dichotomized into higher- and lower-order thinking questions. The inferential methods involved chi-square tests of association to determine if there were statistically significant differences among the four traditional college general chemistry textbooks in the proportions of higher- and lower-order questions and if there were statistically significant differences between the nontraditional chemistry textbook and the four traditional general chemistry textbooks. Findings indicated statistically significant differences among the four textbooks frequently used in Texas community colleges in the number of higher- and lower-level questions. Statistically significant differences were also found among the four textbooks and the nontraditional textbook. 
After the analysis of the data, conclusions were drawn, implications for practice were delineated, and recommendations for future research were given.
Kurtosis Approach Nonlinear Blind Source Separation
NASA Technical Reports Server (NTRS)
Duong, Vu A.; Stubbemd, Allen R.
2005-01-01
In this paper, we introduce a new algorithm for blind source signal separation for post-nonlinear mixtures. The mixtures are assumed to be linearly mixed from unknown sources first and then distorted by memoryless nonlinear functions. The nonlinear functions are assumed to be smooth and can be approximated by polynomials. Both the coefficients of the unknown mixing matrix and the coefficients of the approximating polynomials are estimated by the gradient descent method subject to higher-order statistical requirements. The results of simulation experiments presented in this paper demonstrate the validity and usefulness of our approach for nonlinear blind source signal separation. Keywords: Independent Component Analysis, Kurtosis, Higher-order statistics.
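A kurtosis-driven fixed-point iteration for the linear part of such a separation can be sketched as follows (plain numpy, synthetic uniform sources; the post-nonlinear polynomial estimation of the paper is omitted). After whitening, the update seeks an extremum of the fourth-order statistic of the projected signal.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 50_000
# Two sub-Gaussian (uniform) sources, linearly mixed.
S = rng.uniform(-1, 1, size=(2, T))
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
X = A @ S

# Whiten the mixtures: zero mean, identity covariance.
X = X - X.mean(axis=1, keepdims=True)
cov = X @ X.T / T
d, E = np.linalg.eigh(cov)
Z = (E / np.sqrt(d)) @ E.T @ X

# One-unit fixed-point iteration driven by fourth-order statistics
# (kurtosis extremum): w <- E[z (w.z)^3] - 3 w, then renormalize.
w = rng.standard_normal(2)
w /= np.linalg.norm(w)
for _ in range(50):
    y = w @ Z
    w = (Z * y**3).mean(axis=1) - 3.0 * w
    w /= np.linalg.norm(w)

recovered = w @ Z
# Up to sign and scale, the estimate should align with one true source.
corr = max(abs(np.corrcoef(recovered, S[0])[0, 1]),
           abs(np.corrcoef(recovered, S[1])[0, 1]))
```

Because the sources here are sub-Gaussian, the iteration converges to a kurtosis minimum rather than a maximum; either extremum identifies an independent direction.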
Reed, Donovan S; Apsey, Douglas; Steigleman, Walter; Townley, James; Caldwell, Matthew
2017-11-01
In an attempt to maximize treatment outcomes, refractive surgery techniques are being directed toward customized ablations to correct not only lower-order aberrations but also higher-order aberrations specific to the individual eye. Measurement of the entirety of ocular aberrations is the most definitive means to establish the true effect of refractive surgery on image quality and visual performance. Whether or not there is a statistically significant difference in induced higher-order corneal aberrations between the VISX Star S4 (Abbott Medical Optics, Santa Ana, California) and the WaveLight EX500 (Alcon, Fort Worth, Texas) lasers was examined. A retrospective analysis was performed to investigate the difference in root-mean-square (RMS) value of the higher-order corneal aberrations postoperatively between two currently available laser platforms, the VISX Star S4 and the WaveLight EX500 lasers. The RMS is a compilation of higher-order corneal aberrations. Data from 240 total eyes of active duty military or Department of Defense beneficiaries who completed photorefractive keratectomy (PRK) or laser in situ keratomileusis (LASIK) refractive surgery at the Wilford Hall Ambulatory Surgical Center Joint Warfighter Refractive Surgery Center were examined. Using SPSS statistics software (IBM Corp., Armonk, New York), the mean changes in RMS values between the two lasers and refractive surgery procedures were determined. A Student t test was performed to compare the RMS of the higher-order aberrations of the subjects' corneas from the lasers being studied. A regression analysis was performed to adjust for preoperative spherical equivalent. The study and a waiver of informed consent have been approved by the Clinical Research Division of the 59th Medical Wing Institutional Review Board (Protocol Number: 20150093H). The mean change in RMS value for PRK using the VISX laser was 0.00122, with a standard deviation of 0.02583. 
The mean change in RMS value for PRK using the WaveLight EX500 laser was 0.004323, with a standard deviation of 0.02916. The mean change in RMS value for LASIK using the VISX laser was 0.00841, with a standard deviation of 0.03011. The mean change in RMS value for LASIK using the WaveLight EX500 laser was 0.0174, with a standard deviation of 0.02417. When comparing the two lasers for PRK and LASIK procedures, the p values were 0.431 and 0.295, respectively. The results of this study suggest no statistically significant difference concerning induced higher-order aberrations between the two laser platforms for either LASIK or PRK. Overall, the VISX laser did have consistently lower induced higher-order aberrations postoperatively, but this did not reach statistical significance. It is likely the statistical power of this study was limited by the relatively small sample size. Additional limitations of the study include its retrospective design and its generalizability, as the Department of Defense population may differ significantly from the typical refractive surgery population in terms of overall health and preoperative refractive error. Visual outcomes between the two laser platforms should be investigated further before determining superiority in terms of visual image quality postoperatively. Reprint & Copyright © 2017 Association of Military Surgeons of the U.S.
NASA Astrophysics Data System (ADS)
Wan, Xiaoqing; Zhao, Chunhui; Wang, Yanchun; Liu, Wu
2017-11-01
This paper proposes a novel classification paradigm for hyperspectral images (HSI) using feature-level fusion and deep learning-based methodologies. Operation is carried out in three main steps. First, during a pre-processing stage, wave atoms are introduced into the bilateral filter to smooth the HSI; this strategy can effectively attenuate noise and restore texture information. Meanwhile, high quality spectral-spatial features can be extracted from the HSI by taking geometric closeness and photometric similarity among pixels into consideration simultaneously. Second, higher order statistics techniques are introduced into hyperspectral data classification for the first time to characterize the phase correlations of spectral curves. Third, multifractal spectrum features are extracted to characterize the singularities and self-similarities of spectral shapes. A feature-level fusion is then applied to the extracted spectral-spatial features along with the higher order statistics and multifractal spectrum features. Finally, a stacked sparse autoencoder is utilized to learn more abstract and invariant high-level features from the multiple feature sets, and a random forest classifier is employed to perform supervised fine-tuning and classification. Experimental results on two real hyperspectral data sets demonstrate that the proposed method outperforms some traditional alternatives.
Computational Complexity of Bosons in Linear Networks
2017-03-01
photon statistics while strongly reducing emission probabilities: thus leading experimental teams pursuing large-scale BOSONSAMPLING have faced a hard...Potentially, this could motivate new validation protocols exploiting statistics that include this temporal degree of freedom. The impact of...photon statistics polluted by higher-order terms, which can be mistakenly interpreted as decreased photon-indistinguishability. In fact, in many cases
McGinnigle, Samantha; Eperjesi, Frank; Naroo, Shehzad A
2014-04-01
To study the effects of ocular lubricants on higher order aberrations in normal and self-diagnosed dry eyes. Unpreserved hypromellose drops, Tears Again™ liposome spray and a combination of both were administered to the right eye of 24 normal and 24 dry eye subjects following classification according to a 5 point questionnaire. Total ocular higher order aberrations, coma, spherical aberration and Strehl ratios for higher order aberrations were measured using the Nidek OPD-Scan III (Nidek Technologies, Gamagori, Japan) at baseline, immediately after application and after 60 min. The aberration data were analyzed over a 5 mm natural pupil using Zernike polynomials. Each intervention was assessed on a separate day and comfort levels were recorded before and after application. Corneal staining was assessed and product preference recorded after the final measurement for each intervention. Hypromellose drops caused an increase in total higher order aberrations (p < 0.01 in normal and dry eyes) and a reduction in Strehl ratio (normal eyes: p < 0.01, dry eyes: p = 0.01) immediately after instillation. There were no significant differences between normal and self-diagnosed dry eyes for response to intervention and no improvement in visual quality or reduction in higher order aberrations after 60 min. Differences in comfort levels failed to reach statistical significance. Combining treatments does not offer any benefit over individual treatments in self-diagnosed dry eyes and no individual intervention reached statistical significance. Symptomatic subjects with dry eye and no corneal staining reported an improvement in comfort after using lubricants. Copyright © 2013 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.
Detecting higher-order interactions among the spiking events in a group of neurons.
Martignon, L; Von Hasseln, H; Grün, S; Aertsen, A; Palm, G
1995-06-01
We propose a formal framework for the description of interactions among groups of neurons. This framework is not restricted to the common case of pair interactions, but also incorporates higher-order interactions, which cannot be reduced to lower-order ones. We derive quantitative measures to detect the presence of such interactions in experimental data, by statistical analysis of the frequency distribution of higher-order correlations in multiple neuron spike train data. Our first step is to represent a frequency distribution as a Markov field on the minimal graph it induces. We then show the invariance of this graph with regard to changes of state. Clearly, only linear Markov fields can be adequately represented by graphs. Higher-order interdependencies, which are reflected by the energy expansion of the distribution, require more complex graphical schemes, like constellations or assembly diagrams, which we introduce and discuss. The coefficients of the energy expansion not only point to the interactions among neurons but are also a measure of their strength. We investigate the statistical meaning of detected interactions in an information theoretic sense and propose minimum relative entropy approximations as null hypotheses for significance tests. We demonstrate the various steps of our method in the situation of an empirical frequency distribution on six neurons, extracted from data on simultaneous multineuron recordings from the frontal cortex of a behaving monkey and close with a brief outlook on future work.
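The log-linear (energy) expansion described above can be made concrete for three binary neurons: the third-order coefficient is an alternating sum of log joint probabilities, and it vanishes exactly when the units are independent. The sketch below is a generic illustration of that idea, not the authors' full framework; the firing rates are made up.

```python
import numpy as np

def third_order_coefficient(p):
    """Third-order coefficient of the log-linear (energy) expansion of a
    joint distribution p over three binary units, indexed p[x1, x2, x3]:
    an alternating sum of log-probabilities, signed by the parity of zeros."""
    theta = 0.0
    for x1 in (0, 1):
        for x2 in (0, 1):
            for x3 in (0, 1):
                sign = (-1) ** ((1 - x1) + (1 - x2) + (1 - x3))
                theta += sign * np.log(p[x1, x2, x3])
    return theta

# Independent neurons with hypothetical firing rates: theta vanishes
rates = np.array([0.2, 0.5, 0.7])
p_ind = np.zeros((2, 2, 2))
for x1 in (0, 1):
    for x2 in (0, 1):
        for x3 in (0, 1):
            p_ind[x1, x2, x3] = (rates[0] if x1 else 1 - rates[0]) * \
                                (rates[1] if x2 else 1 - rates[1]) * \
                                (rates[2] if x3 else 1 - rates[2])
print(abs(third_order_coefficient(p_ind)))  # ~0 for independent units
```

A nonzero value on empirical frequencies would indicate a genuine triplet interaction not reducible to pairwise terms.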
Higher-order nonclassicalities of finite dimensional coherent states: A comparative study
NASA Astrophysics Data System (ADS)
Alam, Nasir; Verma, Amit; Pathak, Anirban
2018-07-01
Conventional coherent states (CSs) are defined in various ways. For example, a CS is defined as an infinite Poissonian expansion in Fock states, as a displaced vacuum state, or as an eigenket of the annihilation operator. In the infinite dimensional Hilbert space, these definitions are equivalent. However, they are not equivalent for finite dimensional systems. In this work, we present a comparative description of the lower- and higher-order nonclassical properties of the finite dimensional CSs, which are also referred to as qudit CSs (QCSs). For the comparison, nonclassical properties of two types of QCSs are used: (i) the nonlinear QCS produced by applying a truncated displacement operator on the vacuum and (ii) the linear QCS produced by the Poissonian expansion in Fock states of the CS truncated at the (d - 1)-photon Fock state. The comparison is performed using a set of nonclassicality witnesses (e.g., higher order antibunching, higher order sub-Poissonian statistics, higher order squeezing, the Agarwal-Tara parameter, Klyshko's criterion) and a set of quantitative measures of nonclassicality (e.g., negativity potential, concurrence potential and anticlassicality). The higher order nonclassicality witnesses are found to reveal the existence of higher order nonclassical properties of QCSs for the first time.
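One of the listed witnesses, sub-Poissonian statistics, is easy to illustrate numerically via the Mandel Q parameter of the "linear" QCS (the Poissonian expansion truncated at the (d - 1)-photon Fock state). This is a minimal sketch under that definition only, not the paper's full analysis:

```python
import numpy as np
from math import factorial

def mandel_q_qudit_cs(alpha, d):
    """Mandel Q for the linear qudit CS: Poissonian photon-number weights
    |alpha|^(2n)/n! truncated at n = d-1 and renormalised.
    Q < 0 signals sub-Poissonian (nonclassical) photon statistics."""
    n = np.arange(d)
    probs = np.array([alpha ** (2 * k) / factorial(k) for k in n])
    probs /= probs.sum()
    mean_n = (n * probs).sum()
    var_n = ((n - mean_n) ** 2 * probs).sum()
    return (var_n - mean_n) / mean_n

print(mandel_q_qudit_cs(1.0, 2))  # -0.5: the qubit CS is strictly sub-Poissonian
print(mandel_q_qudit_cs(1.0, 5))
```

For d = 2 the result follows by hand: the photon number is Bernoulli, so its variance is always below its mean.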
Transformation of general binary MRF minimization to the first-order case.
Ishikawa, Hiroshi
2011-06-01
We introduce a transformation of a general higher-order Markov random field with binary labels into a first-order one that has the same minima as the original. Moreover, we formalize a framework for approximately minimizing higher-order multi-label MRF energies that combines the new reduction with the fusion-move and QPBO algorithms. While many computer vision problems today are formulated as energy minimization problems, they have mostly been limited to using first-order energies, which consist of unary and pairwise clique potentials, with a few exceptions that consider triples. This is because of the lack of efficient algorithms to optimize energies with higher-order interactions. Our algorithm challenges this restriction, which limits the representational power of the models, so that higher-order energies can be used to capture the rich statistics of natural scenes. We also show that some minimization methods can be considered special cases of the present framework, and we compare the new method experimentally with other such techniques.
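The core trick of such reductions can be sketched for a single negative cubic term: one auxiliary binary variable w turns -a·x1·x2·x3 into a pairwise-only energy with the same minima. This is a minimal illustration of the standard negative-monomial reduction, not the paper's general transformation:

```python
from itertools import product

def reduce_negative_cubic(a):
    """Return a pairwise-only energy g(x1, x2, x3, w) whose minimum over the
    auxiliary binary variable w equals -a*x1*x2*x3 (a > 0). Each term of g
    involves at most w and one original variable, i.e. it is first-order."""
    def g(x1, x2, x3, w):
        return a * w * (2 - x1 - x2 - x3)
    return g

a = 3.0
g = reduce_negative_cubic(a)
# Verify the reduction on all 8 binary assignments
for x1, x2, x3 in product((0, 1), repeat=3):
    reduced = min(g(x1, x2, x3, 0), g(x1, x2, x3, 1))
    assert reduced == -a * x1 * x2 * x3
print("reduction matches on all 8 assignments")
```

Positive monomials and terms of higher degree need extra steps, which is what the general transformation in the paper systematizes.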
Moment-Based Physical Models of Broadband Clutter due to Aggregations of Fish
2013-09-30
statistical models for signal-processing algorithm development. These in turn will help to develop a capability to statistically forecast the impact of...aggregations of fish based on higher-order statistical measures describable in terms of physical and system parameters. Environmentally, these models...processing. In this experiment, we had good ground truth on (1) and (2), and had control over (3) and (4) except for environmentally-imposed restrictions
Statistics based sampling for controller and estimator design
NASA Astrophysics Data System (ADS)
Tenne, Dirk
The purpose of this research is the development of statistical design tools for robust feed-forward/feedback controllers and nonlinear estimators. This dissertation addresses three topics: nonlinear estimation, target tracking and robust control. To develop statistically robust controllers and nonlinear estimation algorithms, research has been performed to extend existing techniques, which propagate the statistics of the state, to achieve higher order accuracy. The so-called unscented transformation has been extended to capture higher order moments. Furthermore, higher order moment update algorithms based on a truncated power series have been developed. The proposed techniques are tested on various benchmark examples. Furthermore, the unscented transformation has been utilized to develop a three dimensional geometrically constrained target tracker. The proposed planar circular prediction algorithm has been developed in a local coordinate framework, which is amenable to extension of the tracking algorithm to three dimensional space. This tracker combines the predictions of a circular prediction algorithm and a constant velocity filter by utilizing the Covariance Intersection. This combined prediction can be updated with the subsequent measurement using a linear estimator. The proposed technique is illustrated on a 3D benchmark trajectory, which includes coordinated turns and straight line maneuvers. The third part of this dissertation addresses the design of controllers that incorporate knowledge of parametric uncertainties and their distributions. The parameter distributions are approximated by a finite set of points which are calculated by the unscented transformation. This set of points is used to design robust controllers which minimize a statistical performance measure of the plant, consisting of a combination of the mean and variance, over the domain of uncertainty. The proposed technique is illustrated on three benchmark problems.
The first relates to the design of prefilters for a linear and nonlinear spring-mass-dashpot system and the second applies a feedback controller to a hovering helicopter. Lastly, the statistically robust controller design is applied to a concurrent feed-forward/feedback controller structure for a high-speed, low-tension tape drive.
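The scalar unscented transformation mentioned above can be sketched in a few lines. With the conventional choice κ = 2 it also matches the Gaussian fourth moment, so propagating N(0, 1) through x² is exact for both mean and variance. This is the standard transform, not the dissertation's higher-order extension:

```python
import numpy as np

def unscented_moments(mu, var, f, kappa=2.0):
    """Scalar unscented transform: propagate the mean/variance of N(mu, var)
    through a nonlinearity f using 3 deterministically chosen sigma points.
    kappa=2 matches the Gaussian fourth moment in the scalar case."""
    n = 1
    spread = np.sqrt((n + kappa) * var)
    points = np.array([mu, mu + spread, mu - spread])
    weights = np.array([kappa / (n + kappa),
                        0.5 / (n + kappa),
                        0.5 / (n + kappa)])
    y = f(points)
    mean_y = np.dot(weights, y)
    var_y = np.dot(weights, (y - mean_y) ** 2)
    return mean_y, var_y

# E[x^2] = 1 and Var[x^2] = 2 for x ~ N(0,1); the UT recovers both here,
# because x^2 is a polynomial of degree <= 4 and the first four moments match.
m, v = unscented_moments(0.0, 1.0, lambda x: x ** 2)
print(m, v)  # ~1.0 and ~2.0 (exact up to rounding)
```

For stronger nonlinearities the transform is only approximate, which is what motivates capturing higher moments with more sigma points.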
Symmetries, invariants and generating functions: higher-order statistics of biased tracers
NASA Astrophysics Data System (ADS)
Munshi, Dipak
2018-01-01
Gravitationally collapsed objects are known to be biased tracers of an underlying density contrast. Using symmetry arguments, generalised biasing schemes have recently been developed to relate the halo density contrast δh with the underlying density contrast δ, the divergence of velocity θ and their higher-order derivatives. This is done by constructing invariants such as s, t, ψ, η. We show how the generating function formalism in Eulerian standard perturbation theory (SPT) can be used to show that many of the additional terms based on extended Galilean and Lifshitz symmetry actually do not make any contribution to the higher-order statistics of biased tracers. Other terms can also be drastically simplified, allowing us to write the vertices associated with δh in terms of the vertices of δ and θ, the higher-order derivatives and the bias coefficients. We also compute the cumulant correlators (CCs) for two different tracer populations. These perturbative results are valid for tree-level contributions but at an arbitrary order. We also take into account the stochastic nature of bias in our analysis. Extending previous results of a local polynomial model of bias, we express the one-point cumulants S_N and their two-point counterparts, the CCs, i.e. C_pq, of biased tracers in terms of those of their underlying density contrast counterparts. As a by-product of our calculation we also discuss the results using approximations based on Lagrangian perturbation theory (LPT).
NASA Technical Reports Server (NTRS)
Aires, Filipe; Rossow, William B.; Chedin, Alain; Hansen, James E. (Technical Monitor)
2000-01-01
The use of the Principal Component Analysis technique for the analysis of geophysical time series has been questioned, in particular for its tendency to extract components that mix several physical phenomena even when the signal is just their linear sum. We demonstrate with a data simulation experiment that Independent Component Analysis (ICA), a recently developed technique, is able to solve this problem. This new technique requires the statistical independence of components (a stronger constraint that uses higher-order statistics) instead of the classical decorrelation (a weaker constraint that uses only second-order statistics). Furthermore, ICA does not require additional a priori information such as the localization constraint used in Rotational Techniques.
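The contrast between second-order decorrelation and higher-order independence can be shown with a toy two-source example: whitening (the PCA step) leaves an unresolved rotation, which a fourth-order statistic (kurtosis) then fixes. This is a minimal hand-rolled sketch of the idea, not the ICA algorithm used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent, non-Gaussian (uniform) sources, linearly mixed
s = rng.uniform(-1, 1, size=(2, 20000))
A = np.array([[1.0, 0.6], [0.4, 1.0]])
x = A @ s

# Second-order step (PCA/whitening): decorrelates, but any rotation of the
# whitened data is equally decorrelated, so the sources stay unidentified.
x = x - x.mean(axis=1, keepdims=True)
vals, vecs = np.linalg.eigh(np.cov(x))
z = np.diag(vals ** -0.5) @ vecs.T @ x   # whitened data

# Higher-order step: pick the rotation extremizing kurtosis, a fourth-order
# statistic. Uniform sources have negative excess kurtosis, so we minimize.
def excess_kurtosis(u):
    u = (u - u.mean()) / u.std()
    return (u ** 4).mean() - 3.0

def rotated(theta):
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R @ z

angles = np.linspace(0, np.pi / 2, 500)
best = min(angles, key=lambda t: excess_kurtosis(rotated(t)[0]))
y = rotated(best)

# Each recovered component lines up with one true source
corr = np.abs(np.corrcoef(np.vstack([y, s]))[:2, 2:])
print(corr.max(axis=1))  # both close to 1
```

Whitening alone gives corr values noticeably below 1 for generic mixing; the kurtosis search is what resolves the rotational ambiguity.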
NASA Astrophysics Data System (ADS)
Zavaletta, Vanessa A.; Bartholmai, Brian J.; Robb, Richard A.
2007-03-01
Diffuse lung diseases, such as idiopathic pulmonary fibrosis (IPF), can be characterized and quantified by analysis of volumetric high resolution CT scans of the lungs. These data sets typically have dimensions of 512 x 512 x 400. It is too subjective and labor intensive for a radiologist to analyze each slice and quantify regional abnormalities manually. Thus, computer aided techniques are necessary, particularly texture analysis techniques which classify various lung tissue types. Second and higher order statistics, which capture the spatial variation of the intensity values, are good discriminatory features for various textures. The intensity values in lung CT scans range over [-1024, 1024]. Calculation of second order statistics on this full range is too computationally intensive, so the data are typically binned into 16 or 32 gray levels. There are more effective ways of binning the gray level range to improve classification. An optimal and very efficient way to nonlinearly bin the histogram is to use a dynamic programming algorithm. The objective of this paper is to show that nonlinear binning using dynamic programming is computationally efficient and improves the discriminatory power of the second and higher order statistics for more accurate quantification of diffuse lung disease.
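The paper's specific binning algorithm is not reproduced here, but the general idea, an optimal contiguous partition of a 1-D range that minimizes within-bin variance, can be sketched as a classic O(k·n²) dynamic program (the data below are illustrative, not CT intensities):

```python
import numpy as np

def optimal_bins(values, k):
    """Dynamic-programming partition of sorted 1-D data into k contiguous
    bins minimizing the total within-bin sum of squared errors."""
    x = np.sort(np.asarray(values, dtype=float))
    n = len(x)
    p1 = np.concatenate([[0.0], np.cumsum(x)])        # prefix sums
    p2 = np.concatenate([[0.0], np.cumsum(x ** 2)])   # prefix sums of squares

    def sse(i, j):  # cost of one bin covering x[i:j]
        s = p1[j] - p1[i]
        return (p2[j] - p2[i]) - s * s / (j - i)

    INF = float("inf")
    cost = np.full((k + 1, n + 1), INF)
    cut = np.zeros((k + 1, n + 1), dtype=int)
    cost[0, 0] = 0.0
    for b in range(1, k + 1):
        for j in range(b, n + 1):
            for i in range(b - 1, j):
                c = cost[b - 1, i] + sse(i, j)
                if c < cost[b, j]:
                    cost[b, j], cut[b, j] = c, i
    # Recover bin boundaries by walking the cut table backwards
    bounds, j = [], n
    for b in range(k, 0, -1):
        i = cut[b, j]
        bounds.append((float(x[i]), float(x[j - 1])))
        j = i
    return float(cost[k, n]), bounds[::-1]

# Three well-separated clusters: the DP finds the natural 3-bin split
data = [0, 1, 2, 10, 11, 12, 30, 31, 32]
total, bins = optimal_bins(data, 3)
print(bins)  # [(0.0, 2.0), (10.0, 12.0), (30.0, 32.0)]
```

Unlike uniform binning, the bin edges adapt to where the histogram mass actually sits, which is the property exploited for gray-level quantization.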
Comparing Data Sets: Implicit Summaries of the Statistical Properties of Number Sets
ERIC Educational Resources Information Center
Morris, Bradley J.; Masnick, Amy M.
2015-01-01
Comparing datasets, that is, sets of numbers in context, is a critical skill in higher order cognition. Although much is known about how people compare single numbers, little is known about how number sets are represented and compared. We investigated how subjects compared datasets that varied in their statistical properties, including ratio of…
ERIC Educational Resources Information Center
Mocko, Megan; Lesser, Lawrence M.; Wagler, Amy E.; Francis, Wendy S.
2017-01-01
Mnemonics (memory aids) are often viewed as useful in helping students recall information, and thereby possibly reducing stress and freeing up more cognitive resources for higher-order thinking. However, there has been little research on statistics mnemonics, especially for large classes. This article reports on the results of a study conducted…
Vehicle track segmentation using higher order random fields
Quach, Tu -Thach
2017-01-09
Here, we present an approach to segment vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times. The approach uses multiscale higher order random field models to capture track statistics, such as curvatures and their parallel nature, that are not currently utilized in existing methods. These statistics are encoded as 3-by-3 patterns at different scales. The model can complete disconnected tracks often caused by sensor noise and various environmental effects. Coupling the model with a simple classifier, our approach is effective at segmenting salient tracks. We improve the F-measure on a standard vehicle track data set to 0.963, up from 0.897 obtained by the current state-of-the-art method.
ppcor: An R Package for a Fast Calculation to Semi-partial Correlation Coefficients.
Kim, Seongho
2015-11-01
Lack of a general matrix formula hampers implementation of the semi-partial correlation, also known as part correlation, for higher-order coefficients. This is because the higher-order semi-partial correlation calculated with a recursive formula requires an enormous number of recursive calculations to obtain the correlation coefficients. To resolve this difficulty, we derive a general matrix formula of the semi-partial correlation for fast computation. The semi-partial correlations are then implemented in the R package ppcor along with the partial correlation. Owing to the general matrix formulas, users can readily calculate the coefficients of both partial and semi-partial correlations without computational burden. The package ppcor further provides users with the statistical significance level along with its test statistic.
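The matrix shortcut for the closely related partial correlation is short enough to sketch: every pairwise coefficient comes from the inverse correlation matrix at once, with no recursion, and can be checked against the residual-based definition. (The paper's semi-partial formula follows the same pattern; only the partial case is shown here, on synthetic data.)

```python
import numpy as np

def partial_correlations(data):
    """All pairwise partial correlations at once from the inverse of the
    correlation matrix: rho_ij.rest = -P_ij / sqrt(P_ii * P_jj).
    data: (n_samples, n_variables)."""
    P = np.linalg.inv(np.corrcoef(data, rowvar=False))
    d = np.sqrt(np.diag(P))
    pc = -P / np.outer(d, d)
    np.fill_diagonal(pc, 1.0)
    return pc

# Three variables sharing a common driver z (synthetic example)
rng = np.random.default_rng(1)
z = rng.normal(size=(5000, 1))
data = np.hstack([z + rng.normal(size=(5000, 1)),
                  z + rng.normal(size=(5000, 1)),
                  z])
pc = partial_correlations(data)

# Cross-check one pair against the residual-based definition
def residual(y, X):
    D = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(D, y, rcond=None)
    return y - D @ beta

r0 = residual(data[:, 0], data[:, 2:3])
r1 = residual(data[:, 1], data[:, 2:3])
print(pc[0, 1], np.corrcoef(r0, r1)[0, 1])  # the two agree
```

Controlling for z removes most of the apparent correlation between the first two variables, which is exactly what the matrix formula reports without ever fitting a regression.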
Arismendi, Ivan; Johnson, Sherri L.; Dunham, Jason B.
2015-01-01
Statistics of central tendency and dispersion may not capture relevant or desired characteristics of the distribution of continuous phenomena and, thus, they may not adequately describe temporal patterns of change. Here, we present two methodological approaches that can help to identify temporal changes in environmental regimes. First, we use higher-order statistical moments (skewness and kurtosis) to examine potential changes of empirical distributions at decadal extents. Second, we adapt a statistical procedure combining a non-metric multidimensional scaling technique and higher density region plots to detect potentially anomalous years. We illustrate the use of these approaches by examining long-term stream temperature data from minimally and highly human-influenced streams. In particular, we contrast predictions about thermal regime responses to changing climates and human-related water uses. Using these methods, we effectively diagnose years with unusual thermal variability and patterns in variability through time, as well as spatial variability linked to regional and local factors that influence stream temperature. Our findings highlight the complexity of responses of thermal regimes of streams and reveal their differential vulnerability to climate warming and human-related water uses. The two approaches presented here can be applied with a variety of other continuous phenomena to address historical changes, extreme events, and their associated ecological responses.
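A minimal sketch of the first approach, higher-order moments as regime descriptors, on made-up "decades" of daily temperatures: the two samples are similar in location and spread, but the second decade's occasional heat spikes show up clearly in skewness (the data and thresholds here are invented for illustration):

```python
import numpy as np

def higher_moments(x):
    """Sample skewness and excess kurtosis: the standardized third and
    fourth central moments (excess kurtosis is zero for a Gaussian)."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return (z ** 3).mean(), (z ** 4).mean() - 3.0

rng = np.random.default_rng(42)
# Two synthetic decades of daily temperatures; the second mixes in
# occasional heat spikes, producing a heavier right tail
decade1 = rng.normal(12.0, 2.0, 3650)
decade2 = np.concatenate([rng.normal(11.8, 1.8, 3450),
                          rng.normal(18.0, 1.0, 200)])

for d in (decade1, decade2):
    skew, kurt = higher_moments(d)
    print(f"mean={d.mean():.1f}  skew={skew:.2f}  kurtosis={kurt:.2f}")
```

Mean and standard deviation alone would rate the two decades as nearly interchangeable; the third moment is what flags the regime change.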
Local image statistics: maximum-entropy constructions and perceptual salience
Victor, Jonathan D.; Conte, Mary M.
2012-01-01
The space of visual signals is high-dimensional and natural visual images have a highly complex statistical structure. While many studies suggest that only a limited number of image statistics are used for perceptual judgments, a full understanding of visual function requires analysis not only of the impact of individual image statistics, but also, how they interact. In natural images, these statistical elements (luminance distributions, correlations of low and high order, edges, occlusions, etc.) are intermixed, and their effects are difficult to disentangle. Thus, there is a need for construction of stimuli in which one or more statistical elements are introduced in a controlled fashion, so that their individual and joint contributions can be analyzed. With this as motivation, we present algorithms to construct synthetic images in which local image statistics—including luminance distributions, pair-wise correlations, and higher-order correlations—are explicitly specified and all other statistics are determined implicitly by maximum-entropy. We then apply this approach to measure the sensitivity of the human visual system to local image statistics and to sample their interactions. PMID:22751397
Out-of-time-order fluctuation-dissipation theorem
NASA Astrophysics Data System (ADS)
Tsuji, Naoto; Shitara, Tomohiro; Ueda, Masahito
2018-01-01
We prove a generalized fluctuation-dissipation theorem for a certain class of out-of-time-ordered correlators (OTOCs) with a modified statistical average, which we call bipartite OTOCs, for general quantum systems in thermal equilibrium. The difference between the bipartite and physical OTOCs defined by the usual statistical average is quantified by a measure of quantum fluctuations known as the Wigner-Yanase skew information. Within this difference, the theorem describes a universal relation between chaotic behavior in quantum systems and a nonlinear-response function that involves a time-reversed process. We show that the theorem can be generalized to higher-order n -partite OTOCs as well as in the form of generalized covariance.
Enseignement Supérieur Et Origine Sociale En France: Étude Statistique Des Inégalités Depuis 1965
NASA Astrophysics Data System (ADS)
Jaoul, Magali
2004-11-01
HIGHER EDUCATION AND SOCIAL ORIGIN IN FRANCE: A STATISTICAL STUDY OF INEQUALITIES SINCE 1965 - Mass education has the goal of guaranteeing the same education to all in order to moderate differences between individuals and promote a kind of 'equality of opportunity'. Nonetheless, it seems clear that lower-class youths do not benefit as much from their degree or university experience as do those who come from more privileged backgrounds. The present study statistically analyses the evolution of higher education since 1965 with respect to social origin in order to determine whether the massification of education has really been accompanied by democratization. Its conclusion is twofold: This evolution has indeed allowed for a certain democratization of higher education by offering new perspectives for the middle and lower classes; but nevertheless it has not always granted them access to prestigious courses of study, so that one still finds two systems of higher education which are relatively separate and whose separation remains a function of social origin.
Independent component analysis for automatic note extraction from musical trills
NASA Astrophysics Data System (ADS)
Brown, Judith C.; Smaragdis, Paris
2004-05-01
The method of principal component analysis, which is based on second-order statistics (or linear independence), has long been used for redundancy reduction of audio data. The more recent technique of independent component analysis, enforcing much stricter statistical criteria based on higher-order statistical independence, is introduced and shown to be far superior in separating independent musical sources. This theory has been applied to piano trills and a database of trill rates was assembled from experiments with a computer-driven piano, recordings of a professional pianist, and commercially available compact disks. The method of independent component analysis has thus been shown to be an outstanding, effective means of automatically extracting interesting musical information from a sea of redundant data.
A general solution strategy of modified power method for higher mode solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Peng; Lee, Hyunsuk; Lee, Deokjung, E-mail: deokjung@unist.ac.kr
2016-01-15
A general solution strategy of the modified power iteration method for calculating higher eigenmodes has been developed and applied in continuous energy Monte Carlo simulation. The new approach adopts four features: 1) eigen decomposition of the transfer matrix, 2) weight cancellation for higher modes, 3) population control with higher mode weights, and 4) stabilization of statistical fluctuations using multi-cycle accumulations. The numerical tests of neutron transport eigenvalue problems successfully demonstrate that the new strategy can significantly accelerate the fission source convergence with stable convergence behavior while obtaining multiple higher eigenmodes at the same time. The advantages of the new strategy can be summarized as 1) the replacement of the cumbersome solution step of high order polynomial equations required by Booth's original method with a simple matrix eigen decomposition, 2) faster fission source convergence in inactive cycles, 3) more stable behavior in both inactive and active cycles, and 4) smaller variances in active cycles. Advantages 3 and 4 can be attributed to the lower sensitivity of the new strategy to statistical fluctuations due to the multi-cycle accumulations. The application of the modified power method to continuous energy Monte Carlo simulation and the higher eigenmodes up to 4th order are reported for the first time in this paper. Highlights: •Modified power method is applied to continuous energy Monte Carlo simulation. •Transfer matrix is introduced to generalize the modified power method. •All mode based population control is applied to get the higher eigenmodes. •Statistical fluctuations can be greatly reduced using accumulated tally results. •Fission source convergence is accelerated with higher mode solutions.
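A deterministic-matrix analogue of extracting higher modes can be sketched with power iteration plus deflation (projecting out converged lower modes on every sweep). This illustrates the basic idea only; the paper's Monte Carlo setting additionally requires weight cancellation and population control, which have no counterpart here:

```python
import numpy as np

def power_mode(A, deflate=(), iters=500, seed=0):
    """Power iteration on a symmetric matrix A, deflated against the
    already-converged eigenvectors in `deflate`, so the iteration
    converges to the next-highest eigenmode."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=A.shape[0])
    for _ in range(iters):
        for u in deflate:            # project out converged lower modes
            v -= (u @ v) * u
        v = A @ v
        v /= np.linalg.norm(v)
    return v @ A @ v, v              # Rayleigh quotient, eigenvector

# A small symmetric test matrix (made up for illustration)
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
k1, v1 = power_mode(A)
k2, v2 = power_mode(A, deflate=[v1])
print(k1, k2)  # dominant and second eigenvalues
```

Without the deflation step the second call would simply reconverge to the dominant mode, which is the same failure mode the weight-cancellation machinery prevents in the Monte Carlo version.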
Collective flow measurements with HADES in Au+Au collisions at 1.23A GeV
NASA Astrophysics Data System (ADS)
Kardan, Behruz; Hades Collaboration
2017-11-01
HADES has a large acceptance combined with a good mass resolution and therefore allows the study of dielectron and hadron production in heavy-ion collisions with unprecedented precision. With the statistics of seven billion Au+Au collisions at 1.23A GeV recorded in 2012, the investigation of higher-order flow harmonics is possible. At the BEVALAC and SIS18, directed and elliptic flow has been measured for pions, charged kaons, protons, neutrons and fragments, but higher-order harmonics have not yet been studied. They provide additional important information on the properties of the dense hadronic medium produced in heavy-ion collisions. We present here a high-statistics, multidifferential measurement of v1 and v2 for protons in Au+Au collisions at 1.23A GeV.
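With a known reaction plane, the flow coefficients are just azimuthal Fourier moments, v_n = <cos n(phi - Psi_RP)>. The toy Monte Carlo below assumes Psi_RP = 0 and invented v1, v2 values; a real analysis must estimate the event plane from the data and correct for its resolution:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy azimuthal distribution dN/dphi ~ 1 + 2*v1*cos(phi) + 2*v2*cos(2*phi)
v1_true, v2_true = 0.10, 0.05

def sample_phi(n):
    """Rejection-sample particle azimuths from the flow-modulated density."""
    out = []
    while len(out) < n:
        phi = rng.uniform(-np.pi, np.pi, n)
        w = 1 + 2 * v1_true * np.cos(phi) + 2 * v2_true * np.cos(2 * phi)
        keep = rng.uniform(0, 1 + 2 * v1_true + 2 * v2_true, n) < w
        out.extend(phi[keep])
    return np.array(out[:n])

phi = sample_phi(500000)
# With the reaction plane fixed at Psi = 0: v_n = <cos(n * phi)>
v1, v2 = np.cos(phi).mean(), np.cos(2 * phi).mean()
print(v1, v2)  # close to the input 0.10 and 0.05
```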
Naik, Ganesh R; Kumar, Dinesh K
2011-01-01
The electromyography (EMG) signal provides information about the performance of muscles and nerves. The shape of the muscle signal and motor unit action potential (MUAP) varies due to movement of the electrode position or due to changes in contraction level. This research deals with evaluating the non-Gaussianity of the surface electromyogram (sEMG) signal using higher order statistics (HOS) parameters. To achieve this, experiments were conducted for four different finger and wrist actions at different levels of maximum voluntary contraction (MVC). Our experimental analysis shows that at constant force and for non-fatiguing contractions, the probability density functions (PDF) of sEMG signals were non-Gaussian. For lower MVCs (below 30% of MVC), the PDF measures tend toward a Gaussian process. These measures were verified by computing the kurtosis values for different MVCs.
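Excess kurtosis, the fourth-order HOS parameter used for this kind of test, is zero for a Gaussian and positive for peaked, heavy-tailed signals. In the sketch below a Laplace sample stands in for a non-Gaussian sEMG-like signal (no real EMG data is used):

```python
import numpy as np

def excess_kurtosis(x):
    """Fourth-order HOS parameter: zero for a Gaussian, positive for
    peaked, heavy-tailed distributions."""
    z = (x - x.mean()) / x.std()
    return (z ** 4).mean() - 3.0

rng = np.random.default_rng(7)
gauss = rng.normal(size=100000)      # Gaussian reference
laplace = rng.laplace(size=100000)   # heavy-tailed stand-in for sEMG
print(excess_kurtosis(gauss), excess_kurtosis(laplace))  # ~0 vs ~3
```

A kurtosis estimate significantly above zero is then evidence that the signal's PDF is non-Gaussian, the criterion applied above across MVC levels.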
Spatio-Chromatic Adaptation via Higher-Order Canonical Correlation Analysis of Natural Images
Gutmann, Michael U.; Laparra, Valero; Hyvärinen, Aapo; Malo, Jesús
2014-01-01
Independent component and canonical correlation analysis are two general-purpose statistical methods with wide applicability. In neuroscience, independent component analysis of chromatic natural images explains the spatio-chromatic structure of primary cortical receptive fields in terms of properties of the visual environment. Canonical correlation analysis explains similarly chromatic adaptation to different illuminations. But, as we show in this paper, neither of the two methods generalizes well to explain both spatio-chromatic processing and adaptation at the same time. We propose a statistical method which combines the desirable properties of independent component and canonical correlation analysis: It finds independent components in each data set which, across the two data sets, are related to each other via linear or higher-order correlations. The new method is as widely applicable as canonical correlation analysis, and also to more than two data sets. We call it higher-order canonical correlation analysis. When applied to chromatic natural images, we found that it provides a single (unified) statistical framework which accounts for both spatio-chromatic processing and adaptation. Filters with spatio-chromatic tuning properties as in the primary visual cortex emerged and corresponding-colors psychophysics was reproduced reasonably well. We used the new method to make a theory-driven testable prediction on how the neural response to colored patterns should change when the illumination changes. We predict shifts in the responses which are comparable to the shifts reported for chromatic contrast habituation. PMID:24533049
Non-Gaussian statistics of pencil beam surveys
NASA Technical Reports Server (NTRS)
Amendola, Luca
1994-01-01
We study the effect of the non-Gaussian clustering of galaxies on the statistics of pencil beam surveys. We derive the probability of power-spectrum peaks by means of an Edgeworth expansion and find that the higher-order moments of the galaxy distribution play a dominant role. The probability of obtaining the 128 Mpc/h periodicity found in pencil beam surveys is raised by more than one order of magnitude, up to 1%. Further data are needed to decide whether a non-Gaussian distribution alone is sufficient to explain the 128 Mpc/h periodicity, or whether extra large-scale power is necessary.
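The Edgeworth expansion used above corrects a Gaussian probability density with higher-order moment terms. As a hedged one-dimensional illustration (not the paper's actual calculation for power-spectrum peaks), the first-order correction involving the skewness can be sketched as:

```python
import math

def hermite3(x):
    # Probabilists' Hermite polynomial He3(x) = x^3 - 3x
    return x**3 - 3*x

def edgeworth_pdf(x, skew):
    """First-order Edgeworth correction to the standard normal density."""
    phi = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
    return phi * (1 + skew / 6 * hermite3(x))
```

With zero skewness the correction vanishes and the standard Gaussian is recovered; positive skewness moves probability into the right tail, which is how non-Gaussian moments can raise the probability of rare peaks.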
Multiple Choice Questions Can Be Designed or Revised to Challenge Learners' Critical Thinking
ERIC Educational Resources Information Center
Tractenberg, Rochelle E.; Gushta, Matthew M.; Mulroney, Susan E.; Weissinger, Peggy A.
2013-01-01
Multiple choice (MC) questions from a graduate physiology course were evaluated by cognitive-psychology (but not physiology) experts, and analyzed statistically, in order to test the independence of content expertise and cognitive complexity ratings of MC items. Integration of higher order thinking into MC exams is important, but widely known to…
Blind channel estimation and deconvolution in colored noise using higher-order cumulants
NASA Astrophysics Data System (ADS)
Tugnait, Jitendra K.; Gummadavelli, Uma
1994-10-01
Existing approaches to blind channel estimation and deconvolution (equalization) focus exclusively on channel or inverse-channel impulse response estimation. It is well known that the quality of the deconvolved output also depends crucially upon the noise statistics. Typically it is assumed that the noise is white and the signal-to-noise ratio is known. In this paper we remove these restrictions. Both the channel impulse response and the noise model are estimated from the higher-order (e.g., fourth-order) cumulant function and the (second-order) correlation function of the received data via a least-squares cumulant/correlation matching criterion. It is assumed that the noise higher-order cumulant function vanishes (e.g., Gaussian noise, as is typically the case in digital communications). Consistency of the proposed approach is established under certain mild sufficient conditions. The approach is illustrated via simulation examples involving blind equalization of digital communications signals.
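The property exploited above is that fourth-order cumulants of Gaussian noise vanish while its second-order correlation does not. A minimal sketch of the zero-lag case (the paper matches full cumulant and correlation functions, which is not reproduced here):

```python
import random

def fourth_cumulant(xs):
    """Zero-lag fourth-order cumulant c4 = E[x^4] - 3*E[x^2]^2 of a zero-mean sequence."""
    n = len(xs)
    m2 = sum(x * x for x in xs) / n
    m4 = sum(x ** 4 for x in xs) / n
    return m4 - 3 * m2 * m2

random.seed(0)
gauss = [random.gauss(0, 1) for _ in range(200000)]   # Gaussian "noise"
unif = [random.uniform(-1, 1) for _ in range(200000)]  # non-Gaussian "signal"
# For the Gaussian sequence c4 -> 0, so Gaussian noise drops out of
# fourth-order matching; for the uniform sequence the theoretical value is
# c4 = 1/5 - 3*(1/3)**2 = -2/15.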
Higher order statistics of planetary gravities and topographies
NASA Technical Reports Server (NTRS)
Kaula, William M.
1993-01-01
The statistical properties of Earth, Venus, Mars, Moon, and a 3-D mantle convection model are compared. The higher order properties are expressed by third and fourth moments: i.e., as mean products over equilateral triangles (defined as coskewance) and equilateral quadrangles (defined as coexance). For point values, all the fields of real planets have positive skewness, ranging from slightly above zero for lunar gravity to 2.6σ³ for Martian gravity (σ is the rms magnitude). Six of the eight excesses are greater than Gaussian (3σ⁴), ranging from 2.0σ⁴ for Earth topography to 18.6σ⁴ for Martian topography. The coskewances and coexances drop off to zero within 20 deg of arc in most cases. The mantle convection model has zero skewness and excess slightly less than Gaussian, probably arising from viscosity variations being only radial.
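The point-value statistics quoted above are the third and fourth moments in units of σ³ and σ⁴, so that a Gaussian field has skewness 0 and excess 3σ⁴. A minimal sketch of these point statistics (the coskewance/coexance averages over triangles and quadrangles are not reproduced here):

```python
def point_moments(field):
    """Skewness (in units of sigma^3) and excess (in units of sigma^4; Gaussian = 3)."""
    n = len(field)
    mean = sum(field) / n
    dev = [x - mean for x in field]
    sigma = (sum(d * d for d in dev) / n) ** 0.5
    skewness = sum(d ** 3 for d in dev) / n / sigma ** 3
    excess = sum(d ** 4 for d in dev) / n / sigma ** 4
    return skewness, excess

# a symmetric sample has zero skewness; its excess here is 1.5
skew, exc = point_moments([-1.0, 0.0, 1.0])
```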
NASA Astrophysics Data System (ADS)
Skitka, J.; Marston, B.; Fox-Kemper, B.
2016-02-01
Sub-grid turbulence models for planetary boundary layers are typically constructed additively, starting with local flow properties and including non-local (KPP) or higher order (Mellor-Yamada) parameters until a desired level of predictive capacity is achieved or a manageable threshold of complexity is surpassed. Such approaches are necessarily limited in general circumstances, like global circulation models, by their being optimized for particular flow phenomena. By building a model reductively, starting with the infinite hierarchy of turbulence statistics, truncating at a given order, and stripping degrees of freedom from the flow, we offer the prospect of a turbulence model and investigative tool that is equally applicable to all flow types and able to take full advantage of the wealth of nonlocal information in any flow. Direct statistical simulation (DSS) based upon an expansion in equal-time cumulants can be used to compute flow statistics of arbitrary order. We investigate the feasibility of a second-order closure (CE2) by performing simulations of the ocean boundary layer in a quasi-linear approximation for which CE2 is exact. As oceanographic examples, wind-driven Langmuir turbulence and thermal convection are studied by comparison of the quasi-linear and fully nonlinear statistics. We also characterize the computational advantages and physical uncertainties of CE2 defined on a reduced basis determined via proper orthogonal decomposition (POD) of the flow fields.
Textural content in 3T MR: an image-based marker for Alzheimer's disease
NASA Astrophysics Data System (ADS)
Bharath Kumar, S. V.; Mullick, Rakesh; Patil, Uday
2005-04-01
In this paper, we propose a study that investigates the first-order and second-order distributions of T2 images from magnetic resonance (MR) scans for an age-matched data set of 24 Alzheimer's disease patients and 17 normal controls. The study is motivated by the desire to analyze brain iron uptake in the hippocampus of Alzheimer's patients, which is captured by low T2 values. Since excess iron deposition occurs locally in certain regions of the brain, we are motivated to investigate the spatial distribution of T2, which is captured by higher-order statistics. Based on the first-order and second-order distributions (involving the gray-level co-occurrence matrix) of T2, we show that the second-order statistics provide features with sensitivity >90% (at 80% specificity), which in turn capture the textural content in T2 data. Hence, we argue that the different texture characteristics of T2 in the hippocampus of Alzheimer's and normal patients could be used as an early indicator of Alzheimer's disease.
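The second-order statistics here come from the gray-level co-occurrence matrix (GLCM), which counts how often pairs of gray levels co-occur at a fixed pixel offset; texture features are then computed from the normalized matrix. A small sketch on a toy image (the study's actual T2 feature set and offsets are not stated here):

```python
def glcm(image, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset (dx, dy)."""
    h, w = len(image), len(image[0])
    counts = [[0] * levels for _ in range(levels)]
    for y in range(h - dy):
        for x in range(w - dx):
            counts[image[y][x]][image[y + dy][x + dx]] += 1
    total = sum(map(sum, counts))
    return [[c / total for c in row] for row in counts]

def contrast(p):
    """Haralick contrast: large when co-occurring gray levels differ strongly."""
    return sum(p[i][j] * (i - j) ** 2
               for i in range(len(p)) for j in range(len(p)))

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
p = glcm(img, levels=4)
```

Smooth textures concentrate mass on the GLCM diagonal (low contrast), while rapidly varying textures spread it off-diagonal; this is the kind of difference such features can quantify between tissue regions.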
ERIC Educational Resources Information Center
Bernard, Robert M.; Borokhovski, Eugene; Schmid, Richard F.; Tamim, Rana M.
2014-01-01
This article contains a second-order meta-analysis and an exploration of bias in the technology integration literature in higher education. Thirteen meta-analyses, dated from 2000 to 2014 were selected to be included based on the questions asked and the presence of adequate statistical information to conduct a quantitative synthesis. The weighted…
2009-01-01
In high-dimensional studies such as genome-wide association studies, the correction for multiple testing needed to control total type I error results in decreased power to detect modest effects. We present a new analytical approach based on the higher criticism statistic that allows identification of the presence of modest effects. We apply our method to the genome-wide study of rheumatoid arthritis provided in the Genetic Analysis Workshop 16 Problem 1 data set. There is evidence for unknown bias in this study that could be explained by the presence of undetected modest effects. We compared the asymptotic and empirical thresholds for the higher criticism statistic. Using the asymptotic threshold we detected the presence of modest effects genome-wide. We also detected modest effects using the 90th percentile of the empirical null distribution as a threshold; however, there is no such evidence when the 95th and 99th percentiles were used. While the higher criticism method suggests that there is some evidence for modest effects, interpreting individual single-nucleotide polymorphisms with significant higher criticism statistics is of undetermined value. The goal of higher criticism is to alert the researcher that genetic effects remain to be discovered and to promote the use of more targeted and powerful studies to detect the remaining effects. PMID:20018032
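The higher criticism statistic compares the ordered p-values with their expected uniform quantiles; a large value signals that many modest effects are present even when no single test survives multiple-testing correction. A minimal sketch of the Donoho-Jin form (the workshop's exact thresholding procedure is not reproduced):

```python
import math

def higher_criticism(pvals):
    """Donoho-Jin higher criticism over the smaller half of the ordered p-values."""
    n = len(pvals)
    hc = float('-inf')
    for i, p in enumerate(sorted(pvals), start=1):
        if i > n // 2 or not 0.0 < p < 1.0:
            continue
        hc = max(hc, math.sqrt(n) * (i / n - p) / math.sqrt(p * (1 - p)))
    return hc

# uniform p-values (global null) vs. a batch of modest effects
null_p = [(i + 0.5) / 100 for i in range(100)]
modest_p = [1e-4] * 10 + null_p[10:]
```

`higher_criticism(modest_p)` is far larger than `higher_criticism(null_p)`, even though each of the ten small p-values on its own might not survive a Bonferroni correction in a genome-scale study.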
ANCA: Anharmonic Conformational Analysis of Biomolecular Simulations.
Parvatikar, Akash; Vacaliuc, Gabriel S; Ramanathan, Arvind; Chennubhotla, S Chakra
2018-05-08
Anharmonicity in time-dependent conformational fluctuations is noted to be a key feature of functional dynamics of biomolecules. Although anharmonic events are rare, long-timescale (μs-ms and beyond) simulations facilitate probing of such events. We have previously developed quasi-anharmonic analysis to resolve higher-order spatial correlations and characterize anharmonicity in biomolecular simulations. In this article, we have extended this toolbox to resolve higher-order temporal correlations and built a scalable Python package called anharmonic conformational analysis (ANCA). ANCA has modules to: 1) measure anharmonicity in the form of higher-order statistics and its variation as a function of time, 2) output a storyboard representation of the simulations to identify key anharmonic conformational events, and 3) identify putative anharmonic conformational substates and visualization of transitions between these substates. Copyright © 2018 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Theory and Experimental and Chemical Instabilities
1989-01-31
Thresholds, Hysteresis, and Neuromodulation of Signal-to-Noise; and Statistical-Mechanical Theory of Many-body Effects in Reaction Rates. ...submitted to the Journal of Physical Chemistry. 6. Noise in Neural Networks: Thresholds, Hysteresis, and Neuromodulation of Signal-to-Noise. We study a neural-network model including Gaussian noise, higher-order neuronal interactions, and neuromodulation. For a first-order network, there is a
Testing higher-order Lagrangian perturbation theory against numerical simulation. 1: Pancake models
NASA Technical Reports Server (NTRS)
Buchert, T.; Melott, A. L.; Weiss, A. G.
1993-01-01
We present results showing an improvement of the accuracy of perturbation theory as applied to cosmological structure formation for a useful range of quasi-linear scales. The Lagrangian theory of gravitational instability of an Einstein-de Sitter dust cosmogony, investigated and solved up to the third order, is compared with numerical simulations. In this paper we study the dynamics of pancake models as a first step. In previous work the accuracy of several analytical approximations for the modeling of large-scale structure in the mildly non-linear regime was analyzed in the same way, allowing for direct comparison of the accuracy of various approximations. In particular, the Zel'dovich approximation (hereafter ZA), as a subclass of the first-order Lagrangian perturbation solutions, was found to provide an excellent approximation to the density field in the mildly non-linear regime (i.e., up to a linear r.m.s. density contrast of sigma approximately 2). The performance of ZA in hierarchical clustering models can be greatly improved by truncating the initial power spectrum (smoothing the initial data). We here explore whether this approximation can be further improved with higher-order corrections in the displacement mapping from homogeneity. We study a single pancake model (truncated power spectrum with power index n = -1) using cross-correlation statistics employed in previous work. We found that for all statistical methods used, the higher-order corrections improve the results obtained for the first-order solution up to the stage when sigma (linear theory) is approximately 1. While this improvement can be seen for all spatial scales, later stages retain this feature only above a certain scale which increases with time. However, third order is not much of an improvement over second order at any stage.
The total breakdown of the perturbation approach is observed at the stage where sigma (linear theory) is approximately 2, which corresponds to the onset of hierarchical clustering. This success is found at considerably higher non-linearity than is usual for perturbation theory. Whether a truncation of the initial power spectrum in hierarchical models retains this improvement will be analyzed in forthcoming work.
Texture analysis with statistical methods for wheat ear extraction
NASA Astrophysics Data System (ADS)
Bakhouche, M.; Cointault, F.; Gouton, P.
2007-01-01
In the agronomic domain, the simplification of crop counting, necessary for yield prediction and agronomic studies, is an important project for technical institutes such as Arvalis. Although the main objective of our global project is to design a mobile robot for natural image acquisition directly in the field, Arvalis first proposed that we detect, by image processing, the number of wheat ears in images before counting them, which yields the first component of the yield. In this paper we compare different texture image segmentation techniques based on feature extraction by first- and higher-order statistical methods applied to our images. The extracted features are used for unsupervised pixel classification to obtain the different classes in the image. The K-means algorithm is applied before choosing a threshold to highlight the ears. Three methods have been tested in this feasibility study, with an average error of 6%. Although the quality of detection is currently evaluated visually, automatic evaluation algorithms are being implemented. Moreover, other higher-order statistical methods will be implemented in the future, jointly with methods based on spatio-frequential transforms and specific filtering.
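The unsupervised classification step can be illustrated with plain Lloyd's K-means on scalar texture features followed by a threshold between the cluster centers. The feature values below are made-up stand-ins for per-pixel texture energies, not the study's data:

```python
def kmeans_1d(values, k=2, iters=50):
    """Lloyd's algorithm on scalar features (e.g., per-pixel texture energy)."""
    vs = sorted(values)
    # initialize centers at evenly spaced quantiles
    centers = [vs[(2 * j + 1) * len(vs) // (2 * k)] for j in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda j: abs(v - centers[j]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return sorted(centers)

# two texture populations: background-like vs. ear-like pixels (illustrative)
centers = kmeans_1d([0.10, 0.20, 0.15, 0.90, 0.85, 0.95])
# a threshold halfway between the two centers separates the classes
threshold = sum(centers) / 2
```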
Detecting higher spin fields through statistical anisotropy in the CMB and galaxy power spectra
NASA Astrophysics Data System (ADS)
Bartolo, Nicola; Kehagias, Alex; Liguori, Michele; Riotto, Antonio; Shiraishi, Maresuke; Tansella, Vittorio
2018-01-01
Primordial inflation may represent the most powerful collider to test high-energy physics models. In this paper we study the impact on the inflationary power spectrum of the comoving curvature perturbation in the specific model where massive higher spin fields are rendered effectively massless during a de Sitter epoch through suitable couplings to the inflaton field. In particular, we show that such fields with spin s induce a distinctive statistical anisotropic signal on the power spectrum, in such a way that not only the usual g_{2M} statistical anisotropy coefficients, but also higher-order ones (i.e., g_{4M}, g_{6M}, ..., g_{(2s-2)M} and g_{(2s)M}) are nonvanishing. We examine their imprints in the cosmic microwave background and galaxy power spectra. Our Fisher matrix forecasts indicate that the detectability of g_{LM} depends very weakly on L: all coefficients could be detected in the near future if their magnitudes are bigger than about 10^-3.
Practical steganalysis of digital images: state of the art
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav
2002-04-01
Steganography is the art of hiding the very presence of communication by embedding secret messages into innocuous looking cover documents, such as digital images. Detection of steganography, estimation of message length, and its extraction belong to the field of steganalysis. Steganalysis has recently received a great deal of attention both from law enforcement and the media. In our paper, we classify and review current stego-detection algorithms that can be used to trace popular steganographic products. We recognize several qualitatively different approaches to practical steganalysis - visual detection, detection based on first order statistics (histogram analysis), dual statistics methods that use spatial correlations in images and higher-order statistics (RS steganalysis), universal blind detection schemes, and special cases, such as JPEG compatibility steganalysis. We also present some new results regarding our previously proposed detection of LSB embedding using sensitive dual statistics. The recent steganalytic methods indicate that the most common paradigm in image steganography - the bit-replacement or bit substitution - is inherently insecure with safe capacities far smaller than previously thought.
NASA Technical Reports Server (NTRS)
Aires, Filipe; Rossow, William B.; Chedin, Alain; Hansen, James E. (Technical Monitor)
2001-01-01
The Independent Component Analysis is a recently developed technique for component extraction. This new method requires the statistical independence of the extracted components, a stronger constraint that uses higher-order statistics, instead of the classical decorrelation, a weaker constraint that uses only second-order statistics. This technique has been used recently for the analysis of geophysical time series with the goal of investigating the causes of variability in observed data (i.e. exploratory approach). We demonstrate with a data simulation experiment that, if initialized with a Principal Component Analysis, the Independent Component Analysis performs a rotation of the classical PCA (or EOF) solution. This rotation uses no localization criterion like other Rotation Techniques (RT), only the global generalization of decorrelation by statistical independence is used. This rotation of the PCA solution seems to be able to solve the tendency of PCA to mix several physical phenomena, even when the signal is just their linear sum.
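The "ICA as a rotation of the PCA solution" observation follows because PCA whitening makes the covariance the identity, and any subsequent rotation preserves it; statistical independence then only has to pick out one particular rotation. A sketch on synthetic data (numpy; the mixing matrix is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 5000))                       # two source signals
mix = np.array([[1.0, 0.6], [0.2, 1.0]]) @ x         # correlated observations

# PCA whitening: decorrelate and normalize the variances
cov = np.cov(mix)
vals, vecs = np.linalg.eigh(cov)
white = np.diag(vals ** -0.5) @ vecs.T @ mix

# any rotation of the whitened data is still white (identity covariance),
# so decorrelation alone cannot fix the rotation; ICA chooses the one that
# maximizes statistical independence (a higher-order criterion)
theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
rotated = rot @ white
```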
Palmer, Edward J; Devitt, Peter G
2007-01-01
Background Reliable and valid written tests of higher cognitive function are difficult to produce, particularly for the assessment of clinical problem solving. Modified Essay Questions (MEQs) are often used to assess these higher order abilities in preference to other forms of assessment, including multiple-choice questions (MCQs). MEQs often form a vital component of end-of-course assessments in higher education. It is not clear how effectively these questions assess higher order cognitive skills. This study was designed to assess the effectiveness of the MEQ in measuring higher-order cognitive skills in an undergraduate institution. Methods An analysis of multiple-choice questions and modified essay questions (MEQs) used for summative assessment in a clinical undergraduate curriculum was undertaken. A total of 50 MCQs and 139 stages of MEQs were examined, which came from three exams run over two years. The effectiveness of the questions was determined by two assessors and was defined by the questions' ability to measure higher cognitive skills, as determined by a modification of Bloom's taxonomy, and their quality as determined by the presence of item-writing flaws. Results Over 50% of all of the MEQs tested factual recall. This was similar to the percentage of MCQs testing factual recall. The modified essay question failed in its role of consistently assessing higher cognitive skills, whereas the MCQ frequently tested more than mere recall of knowledge. Conclusion Construction of MEQs that assess higher-order cognitive skills cannot be assumed to be a simple task. Well-constructed MCQs should be considered a satisfactory replacement for MEQs if the MEQs cannot be designed to adequately test higher order skills. Such MCQs are capable of withstanding the intellectual and statistical scrutiny imposed by a high-stakes exit examination. PMID:18045500
Removal of EMG and ECG artifacts from EEG based on wavelet transform and ICA.
Zhou, Weidong; Gotman, Jean
2004-01-01
In this study, the methods of wavelet threshold de-noising and independent component analysis (ICA) are introduced. ICA is a novel signal processing technique based on higher-order statistics, and is used to separate independent components from measurements. The extended ICA algorithm does not need to calculate the higher-order statistics explicitly, converges fast, and can be used to separate sub-Gaussian and super-Gaussian sources. A pre-whitening procedure is performed to de-correlate the mixed signals before extracting sources. The experimental results indicate that electromyogram (EMG) and electrocardiogram (ECG) artifacts in the electroencephalogram (EEG) can be removed by a combination of wavelet threshold de-noising and ICA.
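Wavelet threshold de-noising keeps the approximation coefficients and shrinks small detail coefficients toward zero. A minimal one-level Haar sketch (the study's wavelet choice and threshold rule are not stated here, so both are assumptions):

```python
import math

def haar_step(x):
    """One level of the Haar wavelet transform: (approximation, detail)."""
    s = math.sqrt(2)
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

def haar_inverse(approx, detail):
    s = math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s, (a - d) / s])
    return out

def soft(c, t):
    # soft thresholding: shrink toward zero, zeroing small (noisy) coefficients
    return math.copysign(max(abs(c) - t, 0.0), c)

signal = [1.0, 1.02, 0.98, 1.0, 5.0, 5.01, 4.99, 5.0]
a, d = haar_step(signal)
denoised = haar_inverse(a, [soft(c, 0.05) for c in d])
```

With threshold 0 the transform reconstructs the signal exactly; with a small threshold the low-amplitude detail wiggles are removed while the step between the two levels survives.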
Exact lower and upper bounds on stationary moments in stochastic biochemical systems
NASA Astrophysics Data System (ADS)
Ghusinga, Khem Raj; Vargas-Garcia, Cesar A.; Lamperski, Andrew; Singh, Abhyudai
2017-08-01
In the stochastic description of biochemical reaction systems, the time evolution of statistical moments for species population counts is described by a linear dynamical system. However, except for some ideal cases (such as zero- and first-order reaction kinetics), the moment dynamics is underdetermined as lower-order moments depend upon higher-order moments. Here, we propose a novel method to find exact lower and upper bounds on stationary moments for a given arbitrary system of biochemical reactions. The method exploits the fact that statistical moments of any positive-valued random variable must satisfy some constraints that are compactly represented through the positive semidefiniteness of moment matrices. Our analysis shows that solving moment equations at steady state in conjunction with constraints on moment matrices provides exact lower and upper bounds on the moments. These results are illustrated by three different examples—the commonly used logistic growth model, stochastic gene expression with auto-regulation and an activator-repressor gene network motif. Interestingly, in all cases the accuracy of the bounds is shown to improve as moment equations are expanded to include higher-order moments. Our results provide avenues for development of approximation methods that provide explicit bounds on moments for nonlinear stochastic systems that are otherwise analytically intractable.
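The positive semidefiniteness constraint can already be seen at second order: the Hankel matrix of raw moments of any real random variable must be positive semidefinite, which for the 2x2 case is exactly m2 >= m1^2 (nonnegative variance). A minimal check, as a hedged sketch (the paper's semidefinite programs over higher-order moment matrices are not reproduced):

```python
def moment_matrix(moments):
    """Hankel matrix M[i][j] = E[x^(i+j)] built from raw moments m0..m2k."""
    k = (len(moments) - 1) // 2
    return [[moments[i + j] for j in range(k + 1)] for i in range(k + 1)]

def is_psd_2x2(m):
    # a symmetric 2x2 matrix is PSD iff its diagonal and determinant are nonnegative
    return m[0][0] >= 0 and m[1][1] >= 0 and m[0][0] * m[1][1] - m[0][1] * m[1][0] >= 0

ok = is_psd_2x2(moment_matrix([1, 0.5, 0.5]))   # fair coin on {0,1}: m2 >= m1**2
bad = is_psd_2x2(moment_matrix([1, 0.9, 0.5]))  # m2 < m1**2: no distribution exists
```

Enforcing such constraints on larger moment matrices, together with the steady-state moment equations, is what yields the lower and upper bounds described above.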
Study of photon correlation techniques for processing of laser velocimeter signals
NASA Technical Reports Server (NTRS)
Mayo, W. T., Jr.
1977-01-01
The objective was to provide the theory and a system design for a new type of photon counting processor for low-level dual-scatter laser velocimeter (LV) signals, capable of both first-order measurements of mean flow and turbulence intensity and second-order time statistics: cross-correlation, auto-correlation, and related spectra. A general Poisson process model for low-level LV signals and noise, valid from the photon-resolved regime all the way to the limiting case of nonstationary Gaussian noise, was used. Computer simulation algorithms and higher-order statistical moment analysis of Poisson processes were derived and applied to the analysis of photon correlation techniques. A system design using a unique dual correlate-and-subtract frequency discriminator technique is postulated and analyzed. Expectation analysis indicates that the objective measurements are feasible.
Statistical mechanics of self-driven Carnot cycles.
Smith, E
1999-10-01
The spontaneous generation and finite-amplitude saturation of sound, in a traveling-wave thermoacoustic engine, are derived as properties of a second-order phase transition. It has previously been argued that this dynamical phase transition, called "onset," has an equivalent equilibrium representation, but the saturation mechanism and scaling were not computed. In this work, the sound modes implementing the engine cycle are coarse-grained and statistically averaged, in a partition function derived from microscopic dynamics on criteria of scale invariance. Self-amplification performed by the engine cycle is introduced through higher-order modal interactions. Stationary points and fluctuations of the resulting phenomenological Lagrangian are analyzed and related to background dynamical currents. The scaling of the stable sound amplitude near the critical point is derived and shown to arise universally from the interaction of finite-temperature disorder, with the order induced by self-amplification.
NASA Astrophysics Data System (ADS)
Kosciesza, M.; Blecki, J. S.; Parrot, M.
2014-12-01
We report a structure function analysis of changes found in the electric field of ELF-range plasma turbulence registered in the ionosphere over the epicenter regions of major earthquakes with depths less than 40 km that took place during the 6.5 years of the scientific mission of the DEMETER satellite. We compare the data for the earthquakes for which we found turbulence with events without any turbulent changes. The structure functions were also calculated for the polar cusp region and the equatorial spread-F region. Basic studies of the turbulent processes were conducted with the use of higher-order spectra and higher-order statistics. The structure function analysis was performed to locate and check whether there are intermittent behaviors in the ionospheric plasma over the epicenter regions of the earthquakes. These registrations are correlated with the plasma parameters measured onboard the DEMETER satellite and with geomagnetic indices.
Planck 2015 results. XXII. A map of the thermal Sunyaev-Zeldovich effect
NASA Astrophysics Data System (ADS)
Planck Collaboration; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Bartolo, N.; Battaner, E.; Battye, R.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chiang, H. C.; Christensen, P. R.; Churazov, E.; Clements, D. L.; Colombo, L. P. L.; Combet, C.; Comis, B.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Désert, F.-X.; Dickinson, C.; Diego, J. M.; Dolag, K.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Génova-Santos, R. T.; Giard, M.; González-Nuevo, J.; Górski, K. M.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Harrison, D. L.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Holmes, W. A.; Hornstrup, A.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lacasa, F.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Macías-Pérez, J. F.; Maffei, B.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; Melchiorri, A.; Melin, J.-B.; Migliaccio, M.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Munshi, D.; Murphy, J. 
A.; Naselsky, P.; Nati, F.; Natoli, P.; Noviello, F.; Novikov, D.; Novikov, I.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Perdereau, O.; Perotto, L.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Pratt, G. W.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Sauvé, A.; Savelainen, M.; Savini, G.; Scott, D.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Sunyaev, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tramonte, D.; Tristram, M.; Tucci, M.; Tuovinen, J.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zonca, A.
2016-09-01
We have constructed all-sky Compton parameters maps, y-maps, of the thermal Sunyaev-Zeldovich (tSZ) effect by applying specifically tailored component separation algorithms to the 30 to 857 GHz frequency channel maps from the Planck satellite. These reconstructed y-maps are delivered as part of the Planck 2015 release. The y-maps are characterized in terms of noise properties and residual foreground contamination, mainly thermal dust emission at large angular scales, and cosmic infrared background and extragalactic point sources at small angular scales. Specific masks are defined to minimize foreground residuals and systematics. Using these masks, we compute the y-map angular power spectrum and higher order statistics. From these we conclude that the y-map is dominated by tSZ signal in the multipole range, 20 <ℓ< 600. We compare the measured tSZ power spectrum and higher order statistics to various physically motivated models and discuss the implications of our results in terms of cluster physics and cosmology.
Quantifying memory in complex physiological time-series.
Shirazi, Amir H; Raoufy, Mohammad R; Ebadi, Haleh; De Rui, Michele; Schiff, Sami; Mazloom, Roham; Hajizadeh, Sohrab; Gharibzadeh, Shahriar; Dehpour, Ahmad R; Amodio, Piero; Jafari, G Reza; Montagnese, Sara; Mani, Ali R
2013-01-01
In a time-series, memory is a statistical feature that lasts for a period of time and distinguishes the time-series from a random, or memory-less, process. In the present study, the concept of "memory length" was used to define the time period, or scale over which rare events within a physiological time-series do not appear randomly. The method is based on inverse statistical analysis and provides empiric evidence that rare fluctuations in cardio-respiratory time-series are 'forgotten' quickly in healthy subjects while the memory for such events is significantly prolonged in pathological conditions such as asthma (respiratory time-series) and liver cirrhosis (heart-beat time-series). The memory length was significantly higher in patients with uncontrolled asthma compared to healthy volunteers. Likewise, it was significantly higher in patients with decompensated cirrhosis compared to those with compensated cirrhosis and healthy volunteers. We also observed that the cardio-respiratory system has simple low order dynamics and short memory around its average, and high order dynamics around rare fluctuations.
Mahapatra, Dwarikanath; Schueffler, Peter; Tielbeek, Jeroen A W; Buhmann, Joachim M; Vos, Franciscus M
2013-10-01
Increasing incidence of Crohn's disease (CD) in the Western world has made its accurate diagnosis an important medical challenge. The current reference standard for diagnosis, colonoscopy, is time-consuming and invasive while magnetic resonance imaging (MRI) has emerged as the preferred noninvasive procedure over colonoscopy. Current MRI approaches assess rate of contrast enhancement and bowel wall thickness, and rely on extensive manual segmentation for accurate analysis. We propose a supervised learning method for the identification and localization of regions in abdominal magnetic resonance images that have been affected by CD. Low-level features like intensity and texture are used with shape asymmetry information to distinguish between diseased and normal regions. Particular emphasis is laid on a novel entropy-based shape asymmetry method and higher-order statistics like skewness and kurtosis. Multi-scale feature extraction renders the method robust. Experiments on real patient data show that our features achieve a high level of accuracy and perform better than two competing methods.
Higher order statistical moment application for solar PV potential analysis
NASA Astrophysics Data System (ADS)
Basri, Mohd Juhari Mat; Abdullah, Samizee; Azrulhisham, Engku Ahmad; Harun, Khairulezuan
2016-10-01
Solar photovoltaic energy is a potential alternative to fossil fuels, which are being depleted and contribute to global warming. However, this renewable resource is too variable and intermittent to be relied on directly. Knowledge of the energy potential of a site is therefore essential before building a solar photovoltaic power generation system there. Here, the application of a higher order statistical moment model is analyzed using data collected from a 5 MW grid-connected photovoltaic system. Because the skewness and kurtosis of the AC power and solar irradiance distributions of the solar farm change dynamically, the Pearson system, in which the probability distribution is selected by matching its theoretical moments to the empirical moments of the data, is suitable for this purpose. Taking advantage of the Pearson system implementation in MATLAB, software has been developed to support data processing, distribution fitting and potential analysis for future projection of the amount of AC power and solar irradiance available.
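The moment-matching idea behind the Pearson system can be sketched for its type III member, whose SciPy implementation is parameterized directly by skewness. Synthetic AC-power samples stand in for the solar-farm data, and only one Pearson family is matched rather than the full MATLAB system:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
power = rng.gamma(shape=4.0, scale=250.0, size=5000)  # synthetic AC-power samples

m = power.mean()
s = power.std(ddof=0)
g = stats.skew(power)

# Pearson type III member matched to the empirical mean, variance and skewness
fitted = stats.pearson3(skew=g, loc=m, scale=s)
fm, fv, fg = fitted.stats(moments='mvs')   # theoretical moments of the fit
```

By construction the fitted distribution reproduces the first three empirical moments; the full Pearson system additionally uses kurtosis to choose among families.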
NASA Astrophysics Data System (ADS)
Most, S.; Nowak, W.; Bijeljic, B.
2014-12-01
Transport processes in porous media are frequently simulated as particle movement. This process can be formulated as a stochastic process of particle position increments. At the pore scale, the geometry and micro-heterogeneities prohibit the commonly made assumption of independent and normally distributed increments to represent dispersion. Many recent particle methods seek to loosen this assumption. Recent experimental data suggest that we have not yet reached the end of the need to generalize, because particle increments show statistical dependency beyond linear correlation and over many time steps. The goal of this work is to better understand the validity regions of commonly made assumptions. We investigate after what transport distances we can observe: (1) a statistical dependence between increments, modelled as an order-k Markov process, that reduces to order 1; this would be the Markovian distance for the process, where the validity of yet-unexplored non-Gaussian-but-Markovian random walks would start; (2) a bivariate statistical dependence that simplifies to a multi-Gaussian dependence based on simple linear correlation (validity of correlated PTRW); (3) complete absence of statistical dependence (validity of classical PTRW/CTRW). The approach is to derive a statistical model for pore-scale transport from a powerful experimental data set via copula analysis. The model is formulated as a non-Gaussian, mutually dependent Markov process of higher order, which allows us to investigate the validity ranges of simpler models.
Fan, Rong; He, Tao; Qiu, Yan; Di, Yu-Lan; Xu, Su-yun; Li, Yao-yu
2012-01-01
To evaluate the differences in wavefront aberrations under cycloplegic, scotopic and photopic conditions. A total of 174 eyes of 105 patients were measured using the wavefront sensor (WaveScan® 3.62) under different pupil conditions: cycloplegic 8.58 ± 0.54 mm (6.4 mm - 9.5 mm), scotopic 7.53 ± 0.69 mm (5.7 mm - 9.1 mm) and photopic 6.08 ± 1.14 mm (4.1 mm - 8.8 mm). The pupil diameter, standard Zernike coefficients, root mean square of higher-order aberrations and dominant aberrations were compared between cycloplegic and scotopic conditions, and between scotopic and photopic conditions. The pupil diameter was 7.53 ± 0.69 mm under the scotopic condition, which met the requirement of an approximately 6.5 mm optical zone design in wavefront-guided surgery and prevented measurement error due to the pupil centroid shift caused by mydriatics. Pharmacological pupil dilation induced increases of the standard Zernike coefficients Z(3)(-3), Z(4)(0) and Z(5)(-5). The higher-order aberrations, third-order aberration, fourth-order aberration, fifth-order aberration, sixth-order aberration, and spherical aberration increased statistically significantly compared to the scotopic condition (P<0.010). When the scotopic condition shifted to the photopic condition, the standard Zernike coefficients Z(4)(0), Z(4)(2), Z(6)(-4), Z(6)(-2), Z(6)(2) decreased and all the higher-order aberrations decreased statistically significantly (P<0.010), demonstrating that accommodative miosis can significantly improve vision under the photopic condition. Under the three conditions, the vertical coma aberration appeared most frequently among the dominant aberrations, without being significantly affected by pupil size variation, and the proportion of spherical aberrations decreased as the pupil size decreased. The wavefront aberrations are significantly different under cycloplegic, scotopic and photopic conditions.
Using the wavefront sensor (VISX WaveScan) to measure scotopic wavefront aberrations is feasible for the wavefront-guided refractive surgery.
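With standard (orthonormal) Zernike coefficients, the root mean square of the higher-order aberrations is the root-sum-square of the order >= 3 coefficients. A minimal sketch with illustrative values, not data from this study:

```python
import numpy as np

def higher_order_rms(coeffs_by_order):
    """RMS wavefront error over Zernike radial orders >= 3 (micrometres).

    Assumes standard orthonormal Zernike coefficients, for which the
    total RMS is the root-sum-square of the individual coefficients.
    """
    hoa = [c for n, cs in coeffs_by_order.items() if n >= 3 for c in cs]
    return float(np.sqrt(np.sum(np.square(hoa))))

# illustrative coefficients (um), NOT values from the study:
example = {2: [0.50, -0.30],        # second order excluded from higher-order RMS
           3: [0.12, -0.05, 0.08],  # coma / trefoil terms
           4: [0.10]}               # spherical aberration
rms_ho = higher_order_rms(example)
```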
Brenner, Luis F
2015-12-01
To evaluate the changes in corneal higher-order aberrations (HOAs) and their impact on the corneal higher-order Strehl ratio after an aberration-free ablation profile. Verter Institute, H. Olhos, São Paulo, Brazil. Prospective interventional study. Eyes that had aberration-free myopic ablation were divided into 3 groups, based on the spherical equivalent (SE). The corneal HOAs and higher-order Strehl ratios were calculated before surgery and 3 months after surgery. The postoperative uncorrected-distance visual acuity, corrected-distance visual acuity, and SE did not differ statistically among groups (88 eyes, P > .05). For a 6 mm pupil, the corneal HOA showed a mean increase of 0.17 μm (from 0.39 to 0.56 μm) (P < .001) and the corneal higher-order Strehl ratio presented a reduction of 0.03 (from 0.25 to 0.22) (P = .001). The following consistent linear predictive model was obtained: corneal HOA induction = 1.474 - 0.032 × SE - 0.225 × OZ, where OZ is the optical zone (R(2) = 0.49, adjusted R(2) = 0.48, P < .001). The corneal HOAs and the higher-order Strehl ratios deteriorated after moderate and high myopic ablations. The worsening in corneal aberrations and optical quality was related to the magnitude of the intended correction and did not affect high-contrast visual performance. The OZ was the only modifiable parameter capable of restraining the optical quality loss. The author has no financial or proprietary interest in any material or method mentioned. Copyright © 2015 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
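The reported linear model can be evaluated directly. The sign convention below (myopic SE entered as a negative number of diopters) is an assumption of this sketch:

```python
def predicted_hoa_induction(se_diopters, optical_zone_mm):
    """Corneal HOA induction (um) from the paper's linear model:
    induction = 1.474 - 0.032*SE - 0.225*OZ   (R^2 = 0.49)."""
    return 1.474 - 0.032 * se_diopters - 0.225 * optical_zone_mm

# e.g. a -6.00 D myopic correction with a 6.5 mm optical zone
# (assumed sign convention: myopic SE is negative)
induction = predicted_hoa_induction(-6.00, 6.5)
```

Consistent with the paper's conclusion, a larger optical zone lowers the predicted induction.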
NASA Astrophysics Data System (ADS)
Fernández, Leandro; Monbaliu, Jaak; Onorato, Miguel; Toffoli, Alessandro
2014-05-01
This research is focused on the study of the nonlinear evolution of irregular wave fields in water of arbitrary depth by comparing field measurements and numerical simulations. It is now well accepted that modulational instability, known as one of the main mechanisms for the formation of rogue waves, induces strong departures from Gaussian statistics. However, whereas non-Gaussian properties are remarkable when wave fields follow one direction of propagation over infinite water depth, wave statistics only weakly deviate from Gaussianity when waves spread over a range of different directions. Over finite water depth, furthermore, wave instability attenuates overall and eventually vanishes for relative water depths as low as kh = 1.36 (where k is the wavenumber of the dominant waves and h the water depth). Recent experimental results, nonetheless, seem to indicate that oblique perturbations are capable of triggering and sustaining modulational instability even if kh < 1.36. In this regard, the aim of this research is to understand whether the combined effect of directionality and finite water depth has a significant effect on wave statistics and particularly on the occurrence of extremes. For this purpose, numerical experiments have been performed solving the Euler equations of motion with the Higher Order Spectral Method (HOSM) and compared with data of short-crested wave fields for different sea states observed at Lake George (Australia). A comparative analysis of the statistical properties (i.e., the density function of the surface elevation and its statistical moments, skewness and kurtosis) between simulations and in-situ data provides a comparison between the numerical developments and real observations in field conditions.
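The deviation from Gaussian statistics can be illustrated with the skewness and kurtosis of a toy surface-elevation record. These are synthetic Gaussian and weakly nonlinear records, not HOSM output or Lake George data:

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)
eta_lin = rng.standard_normal(100_000)        # linear (Gaussian) sea, sigma = 1
eta_nl = eta_lin + 0.1 * (eta_lin**2 - 1.0)   # toy second-order correction

skew_lin = skew(eta_lin)
kurt_lin = kurtosis(eta_lin, fisher=False)    # ~3 for a Gaussian surface
skew_nl = skew(eta_nl)                        # positive: sharper crests, flatter troughs
```

For the quadratic correction with coefficient a, the theoretical skewness is (6a + 8a^3)/(1 + 2a^2)^1.5, about 0.59 for a = 0.1.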
Detector noise statistics in the non-linear regime
NASA Technical Reports Server (NTRS)
Shopbell, P. L.; Bland-Hawthorn, J.
1992-01-01
The statistical behavior of an idealized linear detector in the presence of threshold and saturation levels is examined. It is assumed that the noise is governed by the statistical fluctuations in the number of photons emitted by the source during an exposure. Since physical detectors cannot have infinite dynamic range, our model illustrates that all devices have non-linear regimes, particularly at high count rates. The primary effect is a decrease in the statistical variance about the mean signal due to a portion of the expected noise distribution being removed via clipping. Higher order statistical moments are also examined, in particular, skewness and kurtosis. In principle, the expected distortion in the detector noise characteristics can be calibrated using flatfield observations with count rates matched to the observations. For this purpose, some basic statistical methods that utilize Fourier analysis techniques are described.
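The clipping effect on the noise distribution can be sketched with Poisson photon counts pushed against a saturation level. The threshold values are arbitrary, chosen only to place the mean near saturation:

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(3)
counts = rng.poisson(lam=1000, size=200_000)   # ideal linear detector counts
saturation = 1030                              # full well close to the mean
clipped = np.minimum(counts, saturation)       # saturation removes the upper tail

var_ratio = clipped.var() / counts.var()       # < 1: clipping removes expected noise
skew_shift = skew(clipped) - skew(counts)      # clipping skews the distribution
```

A matching threshold (lower clip) would distort the opposite tail; flat-field calibration at matched count rates would reveal both effects.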
Nonlinear dynamic analysis of voices before and after surgical excision of vocal polyps
NASA Astrophysics Data System (ADS)
Zhang, Yu; McGilligan, Clancy; Zhou, Liang; Vig, Mark; Jiang, Jack J.
2004-05-01
Phase space reconstruction, correlation dimension, and second-order entropy, methods from nonlinear dynamics, are used to analyze sustained vowels generated by patients before and after surgical excision of vocal polyps. Two conventional acoustic perturbation parameters, jitter and shimmer, are also employed to analyze voices before and after surgery. Presurgical and postsurgical analyses of jitter, shimmer, correlation dimension, and second-order entropy are statistically compared. Correlation dimension and second-order entropy show a statistically significant decrease after surgery, indicating reduced complexity and higher predictability of postsurgical voice dynamics. There is not a significant postsurgical difference in shimmer, although jitter shows a significant postsurgical decrease. The results suggest that jitter and shimmer should be applied to analyze disordered voices with caution; however, nonlinear dynamic methods may be useful for analyzing abnormal vocal function and quantitatively evaluating the effects of surgical excision of vocal polyps.
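The correlation dimension can be sketched via the Grassberger-Procaccia correlation sum on a delay-embedded signal. A clean limit cycle (dimension near 1) stands in for voice data, and the delay, embedding dimension, and radius range are illustrative choices:

```python
import numpy as np

def correlation_sum(x, dim, delay, r_values):
    """Fraction of embedded-point pairs closer than r (Grassberger-Procaccia)."""
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    upper = dists[np.triu_indices(n, k=1)]
    return np.array([(upper < r).mean() for r in r_values])

t = np.arange(1000) * 0.01
x = np.sin(2 * np.pi * 0.95 * t)                # a limit cycle: dimension ~ 1

r = np.logspace(-1.2, -0.4, 8)
C = correlation_sum(x, dim=3, delay=10, r_values=r)
slope = np.polyfit(np.log(r), np.log(C), 1)[0]  # correlation-dimension estimate
```

For real voice signals the scaling region must be identified carefully and the delay chosen from, e.g., the first minimum of mutual information.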
NASA Astrophysics Data System (ADS)
Berg, Jacob; Patton, Edward G.; Sullivan, Peter S.
2017-11-01
The effect of mesh resolution and size on shear-driven atmospheric boundary layers in a stably stratified environment is investigated with the NCAR pseudo-spectral LES model (J. Atmos. Sci. v68, p2395, 2011 and J. Atmos. Sci. v73, p1815, 2016). The model applies FFT in the two horizontal directions and finite differencing in the vertical direction. With vanishing heat flux at the surface and a capping inversion entraining potential temperature into the boundary layer, the situation is often called the conditionally neutral atmospheric boundary layer (ABL). Due to its relevance in high-wind applications such as wind power meteorology, we emphasize second-order statistics important for wind turbines, including spectral information. The simulations range in mesh size from 64³ to 1024³ grid points. Due to the non-stationarity of the problem, different simulations are compared at equal eddy-turnover times. Whereas grid convergence is mostly achieved in the middle portion of the ABL, close to the surface, where the presence of the ground limits the growth of the energy-containing eddies, second-order statistics are not converged on the studied meshes. Higher-order structure functions also reveal non-Gaussian statistics highly dependent on the resolution.
Exact goodness-of-fit tests for Markov chains.
Besag, J; Mondal, D
2013-06-01
Goodness-of-fit tests are useful in assessing whether a statistical model is consistent with available data. However, the usual χ² asymptotics often fail, either because of the paucity of the data or because a nonstandard test statistic is of interest. In this article, we describe exact goodness-of-fit tests for first- and higher order Markov chains, with particular attention given to time-reversible ones. The tests are obtained by conditioning on the sufficient statistics for the transition probabilities and are implemented by simple Monte Carlo sampling or by Markov chain Monte Carlo. They apply both to single and to multiple sequences and allow a free choice of test statistic. Three examples are given. The first concerns multiple sequences of dry and wet January days for the years 1948-1983 at Snoqualmie Falls, Washington State, and suggests that standard analysis may be misleading. The second one is for a four-state DNA sequence and lends support to the original conclusion that a second-order Markov chain provides an adequate fit to the data. The last one is six-state atomistic data arising in molecular conformational dynamics simulation of solvated alanine dipeptide and points to strong evidence against a first-order reversible Markov chain at 6 picosecond time steps. © 2013, The International Biometric Society.
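The exact tests of the paper condition on the sufficient statistics for the transition probabilities; as a simpler stand-in, the sketch below uses a parametric Monte Carlo null (simulating from the fitted first-order chain) for a likelihood-ratio statistic comparing order-2 against order-1 fits of a binary chain:

```python
import numpy as np

def transition_counts(seq, order):
    """Counts of (length-`order` history, next symbol) in a 0/1 sequence."""
    counts = {}
    for i in range(order, len(seq)):
        key = (tuple(seq[i - order:i]), seq[i])
        counts[key] = counts.get(key, 0) + 1
    return counts

def mle_transitions(counts):
    """Maximum-likelihood transition probabilities from a count table."""
    totals = {}
    for (h, _), n in counts.items():
        totals[h] = totals.get(h, 0) + n
    return {(h, s): n / totals[h] for (h, s), n in counts.items()}

def g2_order2_vs_order1(seq):
    """Likelihood-ratio (G^2) statistic: order-2 counts vs the order-1 fit."""
    p1 = mle_transitions(transition_counts(seq, 1))
    c2 = transition_counts(seq, 2)
    totals2 = {}
    for (h, _), n in c2.items():
        totals2[h] = totals2.get(h, 0) + n
    g2 = 0.0
    for (h, s), n in c2.items():
        e = totals2[h] * p1[(h[1:], s)]   # order-1 prediction for this cell
        g2 += 2.0 * n * np.log(n / e)
    return g2

def simulate_chain(p1, start, length, rng):
    seq = [start]
    for _ in range(length - 1):
        p_one = p1.get(((seq[-1],), 1), 0.0)
        seq.append(1 if rng.random() < p_one else 0)
    return seq

rng = np.random.default_rng(4)
data = [0]                                 # a genuinely first-order 0/1 chain
for _ in range(999):
    data.append(1 if rng.random() < (0.7 if data[-1] else 0.3) else 0)

obs = g2_order2_vs_order1(data)
p1_hat = mle_transitions(transition_counts(data, 1))
null = [g2_order2_vs_order1(simulate_chain(p1_hat, data[0], len(data), rng))
        for _ in range(200)]
p_value = (1 + sum(g >= obs for g in null)) / (1 + len(null))
```

The exact conditional version would instead resample sequences holding the transition counts themselves fixed, which avoids relying on the fitted parameters.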
Towards bridging the gap between climate change projections and maize producers in South Africa
NASA Astrophysics Data System (ADS)
Landman, Willem A.; Engelbrecht, Francois; Hewitson, Bruce; Malherbe, Johan; van der Merwe, Jacobus
2018-05-01
Multi-decadal regional projections of future climate change are introduced into a linear statistical model in order to produce an ensemble of austral mid-summer maximum temperature simulations for southern Africa. The statistical model uses atmospheric thickness fields from a high-resolution (0.5° × 0.5°) reanalysis-forced simulation as predictors in order to develop a linear recalibration model which represents the relationship between atmospheric thickness fields and gridded maximum temperatures across the region. The regional climate model, the conformal-cubic atmospheric model (CCAM), projects maximum temperature increases over southern Africa on the order of 4 °C, or even higher, under low mitigation towards the end of the century. The statistical recalibration model is able to replicate these increasing temperatures, and the atmospheric thickness-maximum temperature relationship is shown to be stable under future climate conditions. Since dry-land crop yields are not explicitly simulated by climate models but are sensitive to maximum temperature extremes, the effect of projected maximum temperature change on dry-land crops of the Witbank maize production district of South Africa, assuming other factors remain unchanged, is then assessed by employing a statistical approach similar to the one used for maximum temperature projections.
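The recalibration step (fit a linear map from a thickness predictor to maximum temperature, then apply it to a warmer projected state) can be sketched with synthetic numbers; the coefficients and the +80 m thickness increase below are illustrative assumptions, not CCAM output:

```python
import numpy as np

rng = np.random.default_rng(12)
thickness = rng.normal(5600.0, 40.0, 300)               # thickness predictor, m
tmax = 0.05 * (thickness - 5600.0) + 30.0 + rng.normal(0.0, 0.5, 300)  # deg C

# linear recalibration model fitted on the baseline period
slope, intercept = np.polyfit(thickness, tmax, 1)

# apply the recalibration to a warmer projected climate (+80 m thickness)
tmax_future = intercept + slope * (thickness.mean() + 80.0)
```

Stability of the thickness-temperature relationship under future conditions is what justifies this out-of-sample application.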
NASA Astrophysics Data System (ADS)
Cyganek, Boguslaw; Smolka, Bogdan
2015-02-01
In this paper a system for real-time recognition of objects in multidimensional video signals is proposed. Object recognition is done by pattern projection into tensor subspaces obtained from the factorization of the signal tensors representing the input signal. However, instead of taking only the intensity signal, the novelty of this paper is to first build the Extended Structural Tensor representation from the intensity signal, which conveys information on signal intensities as well as on higher-order statistics of the input signals. In this way the higher-order input pattern tensors are built from the training samples. Then, the tensor subspaces are built based on the Higher-Order Singular Value Decomposition of the prototype pattern tensors. Finally, recognition relies on measurements of the distance of a test pattern projected into the tensor subspaces obtained from the training tensors. Due to the high dimensionality of the input data, tensor-based methods require high memory and computational resources. However, recent achievements in the technology of multi-core microprocessors and graphics cards allow real-time operation of the multidimensional methods, as is shown and analyzed in this paper based on real examples of object detection in digital images.
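The tensor-subspace construction rests on the Higher-Order Singular Value Decomposition, which can be sketched with plain numpy (a random stand-in tensor, not the Extended Structural Tensor representation):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    """Multiply tensor T by matrix M along `mode` (M @ unfolding, refolded)."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T):
    """Higher-Order SVD: one orthonormal factor per mode plus a core tensor."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0]
               for m in range(T.ndim)]
    core = T
    for m, U in enumerate(factors):
        core = mode_multiply(core, U.T, m)
    return core, factors

rng = np.random.default_rng(5)
T = rng.normal(size=(4, 5, 6))          # stand-in for a prototype pattern tensor
core, factors = hosvd(T)

# the untruncated factors reconstruct the tensor exactly
R = core
for m, U in enumerate(factors):
    R = mode_multiply(R, U, m)
```

Recognition then projects a test pattern onto the leading columns of the factor matrices and measures its distance to each class subspace.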
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baers, L.B.; Gutierrez, T.R.; Mendoza, R.A.
1993-08-01
The second (conventional variance or Campbell signal) ⟨x²⟩, the third ⟨x³⟩, and the modified fourth-order ⟨x⁴⟩ − 3⟨x²⟩², etc., central signal moments associated with the amplified (K) and filtered currents [i₁, i₂, x = K·(i − ⟨i⟩)] from two electrodes of an ex-core neutron-sensitive fission detector have been measured versus the reactor power of the 1 MW TRIGA reactor in Mexico City. Two channels of a high-speed (400 kHz) multiplexing data sampler and A/D converter with 12-bit resolution and one megaword of buffer memory were used. The data were further retrieved into a PC, and estimates for auto- and cross-correlation moments up to the fifth order, coherence (⟨x₁x₂⟩/√(⟨x₁²⟩⟨x₂²⟩)), skewness (⟨x³⟩/(√⟨x²⟩)³), excess (⟨x⁴⟩/⟨x²⟩² − 3), and related quantities were calculated off-line. A five-mode operation of the detector was achieved, including the conventional counting rates and currents, in agreement with the theory and the authors' previous results with analogue techniques. The signals were proportional to the neutron flux and reactor power in some flux ranges. The suppression of background noise is improved and the lower limit of the measurement range is extended as the order of the moment is increased, in agreement with the theory. On the other hand, the statistical uncertainty is increased. At increasing flux levels it was statistically more difficult to obtain flux estimates based on the higher order (≥3) moments.
A Virtual Study of Grid Resolution on Experiments of a Highly-Resolved Turbulent Plume
NASA Astrophysics Data System (ADS)
Maisto, Pietro M. F.; Marshall, Andre W.; Gollner, Michael J.; Fire Protection Engineering Department Collaboration
2017-11-01
An accurate representation of sub-grid scale turbulent mixing is critical for modeling fire plumes and smoke transport. In this study, PLIF and PIV diagnostics are used with the saltwater modeling technique to provide highly-resolved instantaneous field measurements in unconfined turbulent plumes useful for statistical analysis, physical insight, and model validation. The effect of resolution was investigated employing a virtual interrogation window (of varying size) applied to the high-resolution field measurements. Motivated by LES low-pass filtering concepts, the high-resolution experimental data in this study can be analyzed within the interrogation windows (i.e. statistics at the sub-grid scale) and on interrogation windows (i.e. statistics at the resolved scale). A dimensionless resolution threshold (L/D*) criterion was determined to achieve converged statistics on the filtered measurements. Such a criterion was then used to establish the relative importance between large and small-scale turbulence phenomena while investigating specific scales for the turbulent flow. First order data sets start to collapse at a resolution of 0.3D*, while for second and higher order statistical moments the interrogation window size drops down to 0.2D*.
Fernandez, Elena V; McDaniel, Jennifer A; Carroll, Norman V
2016-11-01
Higher medication adherence is associated with positive health outcomes, including reduction in hospitalizations and costs, and many interventions have been implemented to increase patient adherence. To determine whether patients experience higher medication adherence by using mail-order or retail pharmacies. Articles pertaining to retail and mail-order pharmacies and medication adherence were collected from 3 literature databases: MEDLINE, Cumulative Index to Nursing and Allied Health Literature (CINAHL), and International Pharmaceutical Abstracts (IPA). Searches were created for each database and articles were compiled. Articles were screened for exclusion factors, and final articles (n=15) comparing medication adherence in patients utilizing mail and retail pharmacies were analyzed. For each study, various factors were identified, including days supply, patients' out-of-pocket costs, prior adherence behavior, therapeutic class, measure of adherence, limitations, and results. Studies were then categorized by disease state, and relevant information from each study was compared and contrasted. The majority of studies (14 of the 15 reviewed) supported higher adherence through the mail-order dispensing channel rather than through retail pharmacies. There are a number of reasons for the differences in adherence between the channels. Study patients who used mail-order pharmacies were more likely to have substantially higher prior adherence behavior, socioeconomic status, and days supply of medicines received and were likely to be offered financial incentives to use mail-order. The few studies that attempted to statistically control for these factors also found that patients using mail-order services were more adherent, but the size of the differences was smaller.
The extent to which these results indicate an inherent adherence advantage of mail-order pharmacy (as distinct from adherence benefits due to greater days supply, lower copays, or more adherent patients selecting mail-order pharmacies) depends on how well the statistical controls adjusted for the substantial differences between the mail and retail samples. While the research strongly indicates that consumers who use mail-order pharmacies are more likely to be adherent, more research is needed before it can be conclusively determined that use of mail-order pharmacies causes higher adherence. No outside funding supported this study. Fernandez was partially funded by a Virginia Commonwealth University School of Pharmacy PharmD/PhD Summer Fellowship for work on this project. The authors declare no other potential conflicts of interest. Study concept and design were contributed by Carroll and Fernandez. Fernandez took the lead in data collection, along with Carroll and McDaniel, and data interpretation was performed by Carroll and Fernandez. The manuscript was written and revised by Carroll and Fernandez, with assistance from McDaniel.
Statistically generated weighted curve fit of residual functions for modal analysis of structures
NASA Technical Reports Server (NTRS)
Bookout, P. S.
1995-01-01
A statistically generated weighting function for a second-order polynomial curve fit of residual functions has been developed. The residual flexibility test method, from which a residual function is generated, is a procedure for modal testing large structures in an external constraint-free environment to measure the effects of higher order modes and interface stiffness. This test method is applicable to structures with distinct degree-of-freedom interfaces to other system components. A theoretical residual function in the displacement/force domain has the characteristics of a relatively flat line in the lower frequencies and a slight upward curvature in the higher frequency range. In the test residual function, the above-mentioned characteristics can be seen in the data, but due to the present limitations in the modal parameter evaluation (natural frequencies and mode shapes) of test data, the residual function has regions of ragged data. A second order polynomial curve fit is required to obtain the residual flexibility term. A weighting function of the data is generated by examining the variances between neighboring data points. From a weighted second-order polynomial curve fit, an accurate residual flexibility value can be obtained. The residual flexibility value and free-free modes from testing are used to improve a mathematical model of the structure. The residual flexibility modal test method is applied to a straight beam with a trunnion appendage and a space shuttle payload pallet simulator.
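The weighted fit can be sketched with numpy's weighted polynomial least squares. The weighting function below (inverse of smoothed neighbour-to-neighbour scatter) is a simple stand-in for the paper's variance-based scheme, and the synthetic "residual function" only mimics the flat-plus-curvature shape described above:

```python
import numpy as np

def local_weights(y, eps=1e-12):
    """Weights from neighbour-to-neighbour scatter: ragged regions count less."""
    d = np.abs(np.diff(y, prepend=y[0]))
    spread = np.convolve(d, np.ones(3) / 3.0, mode='same')   # smooth the scatter
    return 1.0 / (spread + spread.mean() + eps)

rng = np.random.default_rng(6)
f = np.linspace(1.0, 50.0, 200)            # frequency axis (arbitrary units)
true = 2e-4 + 1e-7 * f**2                  # flat line plus slight upward curvature
noise = rng.normal(0.0, 1e-5, f.size)
noise[80:100] *= 20.0                      # a ragged region in the test data
y = true + noise

w = local_weights(y)
coeffs = np.polyfit(f, y, 2, w=w)          # weighted second-order polynomial fit
fit_at_zero = np.polyval(coeffs, 0.0)      # low-frequency asymptote of the fit
```

Down-weighting the ragged region keeps the fitted low-frequency value close to the underlying flat level, which is the quantity of interest for the residual flexibility term.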
Estimation of the geochemical threshold and its statistical significance
Miesch, A.T.
1981-01-01
A statistic is proposed for estimating the geochemical threshold and its statistical significance, or it may be used to identify a group of extreme values that can be tested for significance by other means. The statistic is the maximum gap between adjacent values in an ordered array after each gap has been adjusted for the expected frequency. The values in the ordered array are geochemical values transformed by a shifted logarithm and then standardized so that the mean is zero and the variance is unity. The expected frequency is taken from a fitted normal curve with unit area. The midpoint of an adjusted gap that exceeds the corresponding critical value may be taken as an estimate of the geochemical threshold, and the associated probability indicates the likelihood that the threshold separates two geochemical populations. The adjusted gap test may fail to identify threshold values if the variation tends to be continuous from background values to the higher values that reflect mineralized ground. However, the test will serve to identify other anomalies that may be too subtle to have been noted by other means. © 1981.
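One reading of the adjusted-gap procedure is to weight each gap between ordered standardized values by the fitted normal density at its midpoint, so that tail gaps are judged against their expected width. A hedged sketch with synthetic background and mineralized populations, not Miesch's exact adjustment or critical values:

```python
import numpy as np
from scipy.stats import norm

def adjusted_gap_threshold(values):
    """Midpoint of the largest density-adjusted gap in standardized values."""
    z = np.sort((values - values.mean()) / values.std(ddof=0))
    gaps = np.diff(z)
    mids = 0.5 * (z[:-1] + z[1:])
    adjusted = gaps * norm.pdf(mids)   # weight each gap by the fitted normal density
    k = int(np.argmax(adjusted))
    return mids[k], adjusted[k]

rng = np.random.default_rng(7)
background = rng.lognormal(mean=0.0, sigma=0.5, size=300)   # background population
mineralized = rng.lognormal(mean=6.0, sigma=0.2, size=15)   # anomalous high values
logged = np.log(np.concatenate([background, mineralized]))

threshold_z, max_adjusted_gap = adjusted_gap_threshold(logged)
```

Significance would then be judged by comparing the maximum adjusted gap against critical values for a normal sample of the same size.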
Higher-order scene statistics of breast images
NASA Astrophysics Data System (ADS)
Abbey, Craig K.; Sohl-Dickstein, Jascha N.; Olshausen, Bruno A.; Eckstein, Miguel P.; Boone, John M.
2009-02-01
Researchers studying human and computer vision have found description and construction of these systems greatly aided by analysis of the statistical properties of naturally occurring scenes. More specifically, it has been found that receptive fields with directional selectivity and bandwidth properties similar to mammalian visual systems are more closely matched to the statistics of natural scenes. It is argued that this allows for sparse representation of the independent components of natural images [Olshausen and Field, Nature, 1996]. These theories have important implications for medical image perception. For example, will a system that is designed to represent the independent components of natural scenes, where objects occlude one another and illumination is typically reflected, be appropriate for X-ray imaging, where features superimpose on one another and illumination is transmissive? In this research we begin to examine these issues by evaluating higher-order statistical properties of breast images from X-ray projection mammography (PM) and dedicated breast computed tomography (bCT). We evaluate kurtosis in responses of octave bandwidth Gabor filters applied to PM and to coronal slices of bCT scans. We find that kurtosis in PM rises and quickly saturates for filter center frequencies with an average value above 0.95. By contrast, kurtosis in bCT peaks near 0.20 cyc/mm with kurtosis of approximately 2. Our findings suggest that the human visual system may be tuned to represent breast tissue more effectively in bCT over a specific range of spatial frequencies.
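The filter-response statistic can be sketched by convolving images with a Gabor kernel and measuring the kurtosis of the responses: sparse-edge images give heavy-tailed (high-kurtosis) responses, while filtered Gaussian noise stays near kurtosis 3. Synthetic images and arbitrary filter parameters stand in for the PM/bCT analysis:

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.stats import kurtosis

def gabor_kernel(freq, theta, sigma, size=31):
    """Oriented sinusoid under a Gaussian envelope, with zero DC response."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) * np.cos(2 * np.pi * freq * xr)
    return g - g.mean()          # zero mean: flat regions give no response

rng = np.random.default_rng(8)
kern = gabor_kernel(freq=0.15, theta=0.0, sigma=4.0)

img = np.zeros((128, 128))
for amp in (1.0, -1.5):          # two rectangles: sparse vertical edges
    r, c = rng.integers(0, 100, size=2)
    img[r:r + 24, c:c + 24] += amp

k_structured = kurtosis(fftconvolve(img, kern, mode='valid').ravel(), fisher=False)
k_gaussian = kurtosis(fftconvolve(rng.normal(size=(128, 128)), kern,
                                  mode='valid').ravel(), fisher=False)
```

The PM vs bCT comparison in the study repeats this kind of measurement across an octave bank of center frequencies.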
Siddiqi, Ariba; Arjunan, Sridhar P; Kumar, Dinesh K
2016-08-01
Age-associated changes in the surface electromyogram (sEMG) of the Tibialis Anterior (TA) muscle can be attributed to neuromuscular alterations that precede strength loss. We have used our sEMG model of the Tibialis Anterior to interpret the age-related changes and compared the results with experimental sEMG. Eighteen young (20-30 years) and 18 older (60-85 years) participants performed isometric dorsiflexion at 6 different percentage levels of maximum voluntary contraction (MVC), and their sEMG from the TA muscle was recorded. Six different age-related changes in the neuromuscular system were simulated using the sEMG model at the same MVCs as the experiment. The maximal power of the spectrum, and the Gaussianity and Linearity Test Statistics, were computed from the simulated and experimental sEMG. A correlation analysis at α=0.05 was performed between the simulated and experimental age-related changes in the sEMG features. The results show that the loss of motor units was distinguished by the Gaussianity and Linearity test statistics, while the maximal power of the PSD distinguished between the muscular factors. The simulated condition of 40% loss of motor units with half the number of fast fibers best correlated with the age-related change observed in the experimental sEMG higher-order statistical features. The simulated aging condition found by this study corresponds with the moderate motor unit remodelling and negligible strength loss reported in the literature for cohorts aged 60-70 years.
Daikoku, Tatsuya; Yatomi, Yutaka; Yumoto, Masato
2017-01-27
Previous neural studies have supported the hypothesis that statistical learning mechanisms are used broadly across different domains such as language and music. However, these studies have only investigated a single aspect of statistical learning at a time, such as recognizing word boundaries or learning word order patterns. In this study, we neurally investigated how the two levels of statistical learning, recognizing word boundaries and word ordering, could be reflected in neuromagnetic responses and how acquired statistical knowledge is reorganised when the syntactic rules are revised. Neuromagnetic responses to a Japanese-vowel sequence (a, e, i, o, and u), presented every 0.45 s, were recorded from 14 right-handed Japanese participants. The vowel order was constrained by a Markov stochastic model such that five nonsense words (aue, eao, iea, oiu, and uoi) were chained with an either-or rule: the probability of the forthcoming word was statistically defined (80% for one word; 20% for the other word) by the most recent two words. All of the word transition probabilities (80% and 20%) were switched in the middle of the sequence. In the first and second quarters of the sequence, the neuromagnetic responses to the words that appeared with higher transitional probability were significantly reduced compared with those that appeared with lower transitional probability. After switching the word transition probabilities, the response reduction was replicated in the last quarter of the sequence. The responses to the final vowels in the words were significantly reduced compared with those to the initial vowels in the last quarter of the sequence. The results suggest that both within-word and between-word statistical learning are reflected in neural responses. The present study supports the hypothesis that listeners learn larger structures such as phrases first, and they subsequently extract smaller structures, such as words, from the learned phrases.
The present study provides the first neurophysiological evidence that the correction of statistical knowledge requires more time than the acquisition of new statistical knowledge. Copyright © 2016 Elsevier Ltd. All rights reserved.
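The stimulus statistics can be sketched as a Markov word chain with an 80%/20% either-or rule. The successor table below is a hypothetical simplification conditioned on the most recent word only, whereas the study conditions on the most recent two words:

```python
import numpy as np
from collections import Counter

words = ['aue', 'eao', 'iea', 'oiu', 'uoi']      # the five nonsense words
rng = np.random.default_rng(9)

# hypothetical either-or rule (simplified to depend on the last word only):
# from word j, go to (j + 1) % 5 with p = 0.8, otherwise to (j + 2) % 5
seq = [0]
for _ in range(5000):
    j = seq[-1]
    seq.append((j + 1) % 5 if rng.random() < 0.8 else (j + 2) % 5)

pairs = Counter(zip(seq[:-1], seq[1:]))
visits_to_0 = sum(n for (a, _), n in pairs.items() if a == 0)
p_high = pairs[(0, 1)] / visits_to_0             # empirical estimate of the 0.8 rule
```

Switching the 80%/20% assignments mid-sequence, as in the experiment, would flip which successor carries the high empirical probability.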
NASA Technical Reports Server (NTRS)
Melott, A. L.; Buchert, T.; Weiß, A. G.
1995-01-01
We present results showing an improvement of the accuracy of perturbation theory as applied to cosmological structure formation for a useful range of scales. The Lagrangian theory of gravitational instability of Friedmann-Lemaitre cosmogonies is compared with numerical simulations. We study the dynamics of hierarchical models as a second step. In the first step we analyzed the performance of the Lagrangian schemes for pancake models, the difference being that in the latter models the initial power spectrum is truncated. This work probed the quasi-linear and weakly non-linear regimes. We here explore whether the results found for pancake models carry over to hierarchical models which are evolved deeply into the non-linear regime. We smooth the initial data by using a variety of filter types and filter scales in order to determine the optimal performance of the analytical models, as has been done for the 'Zel'dovich approximation' (hereafter TZA) in previous work. We find that for spectra with negative power-index the second-order scheme performs considerably better than TZA in terms of statistics which probe the dynamics, and slightly better in terms of low-order statistics like the power-spectrum. However, in contrast to the results found for pancake models, where the higher-order schemes get worse than TZA at late non-linear stages and on small scales, we here find that the second-order model is as robust as TZA, retaining the improvement at later stages and on smaller scales. In view of these results we expect that the second-order truncated Lagrangian model is especially useful for the modelling of standard dark matter models such as Hot-, Cold-, and Mixed-Dark-Matter.
Birth Order and health: major issues.
Elliott, B A
1992-08-01
Birth order has been described as a variable with a complex relationship to child and adult outcomes. A review of the medical literature over the past 5 years identified 20 studies that investigated the relationship between birth order and a health outcome. Only one of the studies established such a relationship: third- and fourth-born children have a higher incidence of accidents that result in hospitalization. The other reported relationships are each explained by intervening variables or methodological limitations. Although birth order is not a strong independent explanatory factor in understanding health outcomes, it is an important marker variable. Statistically significant relationships between birth order and health outcomes yield insights into the ways a family influences an individual's health.
The 1993 Mississippi river flood: A one hundred or a one thousand year event?
Malamud, B.D.; Turcotte, D.L.; Barton, C.C.
1996-01-01
Power-law (fractal) extreme-value statistics are applicable to many natural phenomena under a wide variety of circumstances. Data from a hydrologic station in Keokuk, Iowa, show that the great flood of the Mississippi River in 1993 has a recurrence interval on the order of 100 years using power-law statistics applied to a partial-duration flood series, and on the order of 1,000 years using a log-Pearson type 3 (LP3) distribution applied to an annual series. The LP3 distribution is the federally adopted probability distribution for flood-frequency estimation of extreme events. We suggest that power-law statistics are preferable to LP3 analysis. As a further test of the power-law approach, we consider paleoflood data from the Colorado River and compare power-law and LP3 extrapolations of historical data with these paleofloods. The results are remarkably similar to those obtained for the Mississippi River: recurrence intervals from power-law statistics applied to Lees Ferry discharge data are generally consistent with inferred 100- and 1,000-year paleofloods, whereas LP3 analysis gives recurrence intervals that are orders of magnitude longer. For both the Keokuk and Lees Ferry gauges, the use of an annual series introduces an artificial curvature in log-log space that leads to an underestimate of severe floods. Power-law statistics predict much shorter recurrence intervals than the federally adopted LP3 statistics. We suggest that if power-law behavior is applicable, then the likelihood of severe floods is much higher, and more conservative dam designs and land-use restrictions may be required.
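The power-law approach to a partial-duration series can be sketched as a log-log fit of discharge against empirical recurrence interval. This is a minimal illustration of the idea, not the authors' exact procedure; the synthetic 50-year record is constructed so that the recurrence interval scales as the cube of discharge.

```python
import numpy as np

def powerlaw_recurrence(discharges, years_of_record):
    """Fit a power law T = c * Q**b to a partial-duration flood series.
    The m-th largest peak in the record is assigned the empirical
    recurrence interval T = years_of_record / m, and (b, c) come from a
    least-squares fit in log-log space."""
    q = np.sort(np.asarray(discharges, dtype=float))[::-1]
    T = years_of_record / np.arange(1, len(q) + 1)
    b, log_c = np.polyfit(np.log10(q), np.log10(T), 1)
    return b, 10.0 ** log_c

def predict_interval(q, b, c):
    """Extrapolated recurrence interval for a discharge q."""
    return c * q ** b

# Synthetic 50-year record built so that T scales as Q**3
years = 50
q_obs = 100.0 * (years / np.arange(1, 51)) ** (1.0 / 3.0)
b, c = powerlaw_recurrence(q_obs, years)
```

Because the fit is a straight line in log-log space, extrapolating to rarer events avoids the curvature that the abstract attributes to annual-series LP3 analysis.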
Random walker in temporally deforming higher-order potential forces observed in a financial crisis.
Watanabe, Kota; Takayasu, Hideki; Takayasu, Misako
2009-11-01
Basic peculiarities of market price fluctuations are known to be well described by a recently developed random-walk model in a temporally deforming quadratic potential force whose center is given by a moving average of past price traces [M. Takayasu, T. Mizuno, and H. Takayasu, Physica A 370, 91 (2006)]. By analyzing high-frequency financial time series of exceptional events, such as bubbles and crashes, we confirm the appearance of a higher-order potential force in the markets and establish the statistical significance of its existence by applying an information criterion. This time-series analysis is expected to be widely applicable for detecting nonstationary symptoms in random phenomena.
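The model class can be sketched as a random walker pulled toward the moving average of its own past by a potential with both quadratic and quartic terms; the nonzero quartic coefficient is the "higher-order potential force". All parameter values below are invented for illustration, not fitted to market data.

```python
import numpy as np

def simulate_walk(n_steps=10000, b2=0.1, b4=0.001, ma_window=10,
                  noise=1.0, seed=1):
    """Random walk in a potential centred on the moving average of its
    own past, U(x) = b2/2 * x**2 + b4/4 * x**4 with x = p - <p>_MA.
    The force applied at each step is -dU/dx plus Gaussian noise."""
    rng = np.random.default_rng(seed)
    p = np.zeros(n_steps)
    for t in range(ma_window, n_steps - 1):
        center = p[t - ma_window:t].mean()   # moving average of past prices
        x = p[t] - center
        force = -b2 * x - b4 * x ** 3        # quadratic + higher-order term
        p[t + 1] = p[t] + force + noise * rng.standard_normal()
    return p

prices = simulate_walk()
```

Setting b4 = 0 recovers the quadratic-potential model of the cited reference; comparing fits with and without the cubic force term is the kind of model selection an information criterion can arbitrate.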
A global goodness-of-fit statistic for Cox regression models.
Parzen, M; Lipsitz, S R
1999-06-01
In this paper, a global goodness-of-fit test statistic for a Cox regression model, which has an approximate chi-squared distribution when the model has been correctly specified, is proposed. Our goodness-of-fit statistic is global and has power to detect whether interactions or higher-order powers of covariates in the model are needed. The proposed statistic is similar to the Hosmer and Lemeshow (1980, Communications in Statistics A10, 1043-1069) goodness-of-fit statistic for binary data as well as Schoenfeld's (1980, Biometrika 67, 145-153) statistic for the Cox model. The methods are illustrated using data from a Mayo Clinic trial in primary biliary cirrhosis of the liver (Fleming and Harrington, 1991, Counting Processes and Survival Analysis), in which the outcome is the time until liver transplantation or death. There are 17 possible covariates. Two Cox proportional hazards models are fit to the data, and the proposed goodness-of-fit statistic is applied to the fitted models.
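A Hosmer-Lemeshow-style global statistic can be sketched as a comparison of observed and model-expected event counts across risk-score groups. This is a generic sketch, not the statistic derived in the paper; the group counts and the G - 2 degrees-of-freedom convention are illustrative assumptions.

```python
import numpy as np

def grouped_gof_stat(observed, expected):
    """Global goodness-of-fit in the Hosmer-Lemeshow spirit: observed
    vs. model-expected event counts across G risk-score groups, summed
    into a chi-squared-type statistic with G - 2 degrees of freedom
    (a common convention; the paper derives the appropriate reference
    distribution for the Cox model)."""
    o = np.asarray(observed, dtype=float)
    e = np.asarray(expected, dtype=float)
    stat = np.sum((o - e) ** 2 / e)
    return stat, len(o) - 2

# Hypothetical decile groups: observed and Cox-model-expected events
obs = [5, 7, 9, 12, 14, 18, 22, 25, 30, 38]
exp = [6, 8, 10, 11, 15, 17, 21, 26, 29, 37]
stat, df = grouped_gof_stat(obs, exp)
# A statistic small relative to a chi-squared df table value suggests
# no evidence of lack of fit.
```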
Higher Order Aberration and Astigmatism in Children with Hyperopic Amblyopia
Choi, Seung Kwon
2016-01-01
Purpose: To investigate the changes in corneal higher-order aberration (HOA) during amblyopia treatment and the correlation between HOA and astigmatism in hyperopic amblyopia children. Methods: In this retrospective study, a total of 72 eyes from 72 patients ranging in age from 38 to 161 months were included. Patients were divided into two groups based on the degree of astigmatism. Corneal HOA was measured using a KR-1W aberrometer at the initial visit and at 3-, 6-, and 12-month follow-ups. Correlation analysis was performed to assess the association between HOA and astigmatism. Results: A total of 72 patients were enrolled in this study, 37 of which were classified as belonging to the higher astigmatism group, while 35 were assigned to the lower astigmatism group. There was a statistically significant difference in success rate between the higher and lower astigmatism groups. In both groups, all corneal HOAs were significantly reduced during amblyopia treatment. When comparing the two groups, a significant difference in coma HOA at the 12-month follow-up was detected (p = 0.043). In the Pearson correlation test, coma HOA at the 12-month follow-up demonstrated a statistically significant correlation with astigmatism and a stronger correlation with astigmatism in the higher astigmatism group than in the lower astigmatism group (coefficient values, 0.383 and 0.284 as well as p = 0.021 and p = 0.038, respectively). Conclusions: HOA, particularly coma HOA, correlated with astigmatism and could exert effects in cases involving hyperopic amblyopia. PMID:26865804
Higher Order Aberration and Astigmatism in Children with Hyperopic Amblyopia.
Choi, Seung Kwon; Chang, Ji Woong
2016-02-01
To investigate the changes in corneal higher-order aberration (HOA) during amblyopia treatment and the correlation between HOA and astigmatism in hyperopic amblyopia children. In this retrospective study, a total of 72 eyes from 72 patients ranging in age from 38 to 161 months were included. Patients were divided into two groups based on the degree of astigmatism. Corneal HOA was measured using a KR-1W aberrometer at the initial visit and at 3-, 6-, and 12-month follow-ups. Correlation analysis was performed to assess the association between HOA and astigmatism. A total of 72 patients were enrolled in this study, 37 of which were classified as belonging to the higher astigmatism group, while 35 were assigned to the lower astigmatism group. There was a statistically significant difference in success rate between the higher and lower astigmatism groups. In both groups, all corneal HOAs were significantly reduced during amblyopia treatment. When comparing the two groups, a significant difference in coma HOA at the 12-month follow-up was detected (p = 0.043). In the Pearson correlation test, coma HOA at the 12-month follow-up demonstrated a statistically significant correlation with astigmatism and a stronger correlation with astigmatism in the higher astigmatism group than in the lower astigmatism group (coefficient values, 0.383 and 0.284 as well as p = 0.021 and p = 0.038, respectively). HOA, particularly coma HOA, correlated with astigmatism and could exert effects in cases involving hyperopic amblyopia.
Increasing the lensing figure of merit through higher order convergence moments
NASA Astrophysics Data System (ADS)
Vicinanza, Martina; Cardone, Vincenzo F.; Maoli, Roberto; Scaramella, Roberto; Er, Xinzhong
2018-01-01
The unprecedented quality, enlarged data sets, and wide area of ongoing and near-future weak lensing surveys allow one to move beyond the standard two-point statistics, making it worthwhile to investigate higher-order probes. As an interesting step in this direction, we explore the use of higher-order moments (HOM) of the convergence field as a way to increase the lensing figure of merit (FoM). To this end, we rely on simulated convergence maps to first show that HOM can be measured and calibrated, so that it is indeed possible to predict them for a given cosmological model, provided suitable nuisance parameters are introduced and then marginalized over. We then forecast the accuracy on cosmological parameters from the use of HOM alone and in combination with standard shear power spectra tomography. It turns out that HOM allow one to break some common degeneracies, thus significantly boosting the overall FoM. We also qualitatively discuss possible systematics and how they can be dealt with.
NASA Astrophysics Data System (ADS)
Leung, Juliana Y.; Srinivasan, Sanjay
2016-09-01
Modeling transport processes at large scales requires proper scale-up of subsurface heterogeneity and an understanding of its interaction with the underlying transport mechanisms. A technique based on volume averaging is applied to quantitatively assess the scaling characteristics of the effective mass transfer coefficient in heterogeneous reservoir models. The effective mass transfer coefficient represents the combined contribution from diffusion and dispersion to the transport of non-reactive solute particles within a fluid phase. Although treatment of transport problems with the volume averaging technique has been published in the past, application to geological systems exhibiting realistic spatial variability remains a challenge. Previously, the authors developed a new procedure where results from a fine-scale numerical flow simulation reflecting the full physics of the transport process, albeit over a sub-volume of the reservoir, are integrated with the volume averaging technique to provide an effective description of transport properties. The procedure is extended such that spatial averaging is performed at the local-heterogeneity scale. In this paper, the transport of a passive (non-reactive) solute is simulated on multiple reservoir models exhibiting different patterns of heterogeneities, and the scaling behavior of the effective mass transfer coefficient (Keff) is examined and compared. One such set of models exhibits power-law (fractal) characteristics, and the variability of dispersion and Keff with scale is in good agreement with analytical expressions described in the literature. This work offers insight into the impacts of heterogeneity on the scaling of effective transport parameters. A key finding is that spatial heterogeneity models with similar univariate and bivariate statistics may exhibit different scaling characteristics because of the influence of higher-order statistics. More mixing is observed in the channelized models with higher-order continuity. This reinforces the notion that the flow response is influenced by the higher-order statistical description of heterogeneity. An important implication is that when scaling up transport response from lab-scale results to the field scale, it is necessary to account for the scale-up of heterogeneity. Since the characteristics of higher-order multivariate distributions and large-scale heterogeneity are typically not captured in small-scale experiments, a reservoir modeling framework that captures the uncertainty in heterogeneity description should be adopted.
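The simplest form of spatial averaging underlying this kind of scale-up study can be sketched in one dimension: block-averaging a heterogeneous property field and observing how its variability changes with the averaging scale. This is only an illustrative toy; the paper's volume averaging acts on 3-D transport equations, and the log-normal "permeability" field below is an invented stand-in.

```python
import numpy as np

def block_average(field, block):
    """Average a 1-D property field over non-overlapping blocks of a
    given size: the simplest form of spatial (volume) averaging used to
    study how effective parameters change with the averaging scale."""
    n = (len(field) // block) * block        # drop the ragged tail
    return np.asarray(field[:n], dtype=float).reshape(-1, block).mean(axis=1)

# Upscaling damps small-scale variability: the block-averaged field has
# lower variance than the original heterogeneous field.
rng = np.random.default_rng(0)
perm = np.exp(rng.standard_normal(10000))    # synthetic log-normal field
coarse = block_average(perm, 100)
```

Two fields with identical histograms and variograms can still coarsen differently when their higher-order (multi-point) statistics differ, which is the effect the abstract highlights for channelized models.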
Ramanathan, Arvind; Savol, Andrej J.; Agarwal, Pratul K.; Chennubhotla, Chakra S.
2012-01-01
Biomolecular simulations at millisecond and longer timescales can provide vital insights into functional mechanisms. Since post-simulation analyses of such large trajectory datasets can be a limiting factor in obtaining biological insights, there is an emerging need to identify key dynamical events and relate these events to the biological function online, that is, as simulations are progressing. Recently, we introduced a novel computational technique, quasi-anharmonic analysis (QAA) (PLoS One 6(1): e15827), for partitioning the conformational landscape into a hierarchy of functionally relevant sub-states. The unique capabilities of QAA are enabled by exploiting anharmonicity in the form of fourth-order statistics for characterizing atomic fluctuations. In this paper, we extend QAA for analyzing long-timescale simulations online. In particular, we present HOST4MD - a higher-order statistical toolbox for molecular dynamics simulations, which (1) identifies key dynamical events as simulations are in progress, (2) explores potential sub-states and (3) identifies conformational transitions that enable the protein to access those sub-states. We demonstrate HOST4MD on microsecond-timescale simulations of the enzyme adenylate kinase in its apo state. HOST4MD identifies several conformational events in these simulations, revealing how the intrinsic coupling between the three sub-domains (LID, CORE and NMP) changes during the simulations. Further, it also identifies an inherent asymmetry in the opening/closing of the two binding sites. We anticipate HOST4MD will provide a powerful and extensible framework for detecting biophysically relevant conformational coordinates from long-timescale simulations. PMID:22733562
NASA Astrophysics Data System (ADS)
Law, Yan Nei; Lieng, Monica Keiko; Li, Jingmei; Khoo, David Aik-Aun
2014-03-01
Breast cancer is the most common cancer and second leading cause of cancer death among women in the US. The relative survival rate is lower among women with a more advanced stage at diagnosis. Early detection through screening is vital. Mammography is the most widely used and only proven screening method for reliably and effectively detecting abnormal breast tissues. In particular, mammographic density is one of the strongest breast cancer risk factors, after age and gender, and can be used to assess the future risk of disease before individuals become symptomatic. A reliable method for automatic density assessment would be beneficial and could assist radiologists in the evaluation of mammograms. To address this problem, we propose a density classification method which uses statistical features from different parts of the breast. Our method is composed of three parts: breast region identification, feature extraction and building ensemble classifiers for density assessment. It explores the potential of the features extracted from second and higher order statistical information for mammographic density classification. We further investigate the registration of bilateral pairs and time-series of mammograms. The experimental results on 322 mammograms demonstrate that (1) a classifier using features from dense regions has higher discriminative power than a classifier using only features from the whole breast region; (2) these high-order features can be effectively combined to boost the classification accuracy; (3) a classifier using these statistical features from dense regions achieves 75% accuracy, which is a significant improvement from 70% accuracy obtained by the existing approaches.
Computer-Based Instruction and Health Professions Education: A Meta-Analysis of Outcomes.
ERIC Educational Resources Information Center
Cohen, Peter A.; Dacanay, Lakshmi S.
1992-01-01
The meta-analytic techniques of G. V. Glass were used to statistically integrate findings from 47 comparative studies on computer-based instruction (CBI) in health professions education. A clear majority of the studies favored CBI over conventional methods of instruction. Results show higher-order applications of computers to be especially…
Statistical learning of music- and language-like sequences and tolerance for spectral shifts.
Daikoku, Tatsuya; Yatomi, Yutaka; Yumoto, Masato
2015-02-01
In our previous study (Daikoku, Yatomi, & Yumoto, 2014), we demonstrated that the N1m response could be a marker for the statistical learning process of pitch sequences, in which each tone was ordered by a Markov stochastic model. The aim of the present study was to investigate how the statistical learning of music- and language-like auditory sequences is reflected in the N1m responses, based on the assumption that language and music share domain generality. By using vowel sounds generated by a formant synthesizer, we devised music- and language-like auditory sequences in which higher-order transitional rules were embedded according to a Markov stochastic model by controlling fundamental (F0) and/or formant frequencies (F1-F2). In each sequence, F0 and/or F1-F2 were spectrally shifted in the last one-third of the tone sequence. Neuromagnetic responses to the tone sequences were recorded from 14 right-handed normal volunteers. In the music- and language-like sequences with pitch change, the N1m responses to the tones that appeared with higher transitional probability were significantly decreased compared with the responses to the tones that appeared with lower transitional probability within the first two-thirds of each sequence. Moreover, the amplitude difference was retained within the last one-third of the sequence, after the spectral shifts. However, in the language-like sequence without pitch change, no significant difference could be detected. The pitch change may facilitate statistical learning in language and music. Statistically acquired knowledge may be appropriated to process altered auditory sequences with spectral shifts. The relative processing of spectral sequences may be a domain-general auditory mechanism that is innate to humans. Copyright © 2014 Elsevier Inc. All rights reserved.
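The Markov construction behind such stimuli can be sketched by sampling a tone sequence from a transition matrix and recovering the transitional probabilities empirically. The 3-state matrix below is an invented illustration, not the stimulus design of the study.

```python
import numpy as np

def generate_markov_sequence(P, n, seed=0):
    """Sample a tone sequence from a first-order Markov chain with
    transition matrix P (rows sum to 1)."""
    rng = np.random.default_rng(seed)
    states = [0]
    for _ in range(n - 1):
        states.append(rng.choice(len(P), p=P[states[-1]]))
    return np.array(states)

def transitional_probabilities(seq, n_states):
    """Empirical P(next tone | current tone) estimated from a sequence:
    the quantity listeners are assumed to track during statistical
    learning."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
seq = generate_markov_sequence(P, 5000)
P_hat = transitional_probabilities(seq, 3)
```

Tones on the matrix diagonal occur with high transitional probability, which is the condition associated with decreased N1m responses in the abstract.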
Phase dependence of the unnormalized second-order photon correlation function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciornea, V.; Bardetski, P.; Macovei, M. A., E-mail: macovei@phys.asm.md
2016-10-15
We investigate the resonant quantum dynamics of a multi-qubit ensemble in a microcavity. Both the quantum-dot subsystem and the microcavity mode are pumped coherently. We find that the microcavity photon statistics depends on the phase difference of the driving lasers, which is not the case for the photon intensity at resonant driving. This way, one can manipulate the two-photon correlations. In particular, higher degrees of photon correlations and, eventually, stronger intensities are obtained. Furthermore, the microcavity photon statistics exhibits steady-state oscillatory behaviors as well as asymmetries.
Statistical characterization of short wind waves from stereo images of the sea surface
NASA Astrophysics Data System (ADS)
Mironov, Alexey; Yurovskaya, Maria; Dulov, Vladimir; Hauser, Danièle; Guérin, Charles-Antoine
2013-04-01
We propose a methodology to extract short-scale statistical characteristics of the sea surface topography by means of stereo image reconstruction. The possibilities and limitations of the technique are discussed and tested on a data set acquired from an oceanographic platform on the Black Sea. The analysis shows that reconstruction of the topography by the stereo method is an efficient way to derive non-trivial statistical properties of short and intermediate surface waves (from roughly 1 centimeter to 1 meter). Most technical issues pertaining to this type of dataset (limited range of scales, lacunarity of data, or irregular sampling) can be partially overcome by appropriate processing of the available points. The proposed technique also allows one to avoid linear interpolation, which dramatically corrupts the properties of retrieved surfaces. The processing technique requires that the field of elevations be polynomially detrended, which has the effect of filtering out the large scales. Hence the statistical analysis can only address the small-scale components of the sea surface. The precise cut-off wavelength, which is approximately half the patch size, can be obtained by applying a high-pass frequency filter to the reference gauge time records. The results obtained for the one- and two-point statistics of small-scale elevations are shown to be consistent, at least in order of magnitude, with the corresponding gauge measurements as well as other experimental measurements available in the literature. The calculation of the structure functions provides a powerful tool to investigate spectral and statistical properties of the field of elevations. An experimental parametrization of the third-order structure function, the so-called skewness function, is one of the most important and original outcomes of this study. This function is of primary importance in analytical models of scattering from the sea surface and was until now unavailable in field conditions.
Due to the lack of precise reference measurements for the small-scale wave field, we could not quantify exactly the accuracy of the retrieval technique. However, it appeared clearly that the obtained accuracy is good enough for the estimation of second-order statistical quantities (such as the correlation function), acceptable for third-order quantities (such as the skewness function), and insufficient for fourth-order quantities (such as kurtosis). Therefore, the stereo technique at the present stage should not be thought of as a self-contained universal tool to characterize the surface statistics. Instead, it should be used in conjunction with other well-calibrated but sparse reference measurements (such as wave gauges) for cross-validation and calibration. It then completes the statistical analysis inasmuch as it provides a snapshot of the three-dimensional field and allows for the evaluation of higher-order spatial statistics.
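The structure functions used here can be sketched for a 1-D elevation profile as lagged moments of elevation differences. The sketch below uses periodic differencing on a synthetic sinusoid for a clean sanity check; field records would use truncated, non-periodic differences.

```python
import numpy as np

def structure_function(eta, order, max_lag):
    """n-th order structure function S_n(r) = <(eta(x + r) - eta(x))**n>
    of a uniformly sampled elevation profile (periodic differencing for
    simplicity). The order-3 case is the skewness function discussed in
    the abstract."""
    eta = np.asarray(eta, dtype=float)
    return np.array([np.mean((np.roll(eta, -r) - eta) ** order)
                     for r in range(1, max_lag + 1)])

# Sanity check on a unit-amplitude sinusoidal "wave": analytically,
# S2(r) = 2*sin(pi*r/N)**2, and the skewness function vanishes for a
# symmetric profile.
N = 1000
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
eta = np.sin(x)
S2 = structure_function(eta, 2, 100)
S3 = structure_function(eta, 3, 100)
```

A non-vanishing S3 on real sea-surface data is precisely the asymmetry signature that the skewness-function parametrization captures.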
Seeking Temporal Predictability in Speech: Comparing Statistical Approaches on 18 World Languages.
Jadoul, Yannick; Ravignani, Andrea; Thompson, Bill; Filippi, Piera; de Boer, Bart
2016-01-01
Temporal regularities in speech, such as interdependencies in the timing of speech events, are thought to scaffold early acquisition of the building blocks in speech. By providing on-line clues to the location and duration of upcoming syllables, temporal structure may aid segmentation and clustering of continuous speech into separable units. This hypothesis tacitly assumes that learners exploit predictability in the temporal structure of speech. Existing measures of speech timing tend to focus on first-order regularities among adjacent units, and are overly sensitive to idiosyncrasies in the data they describe. Here, we compare several statistical methods on a sample of 18 languages, testing whether syllable occurrence is predictable over time. Rather than looking for differences between languages, we aim to find across languages (using clearly defined acoustic, rather than orthographic, measures), temporal predictability in the speech signal which could be exploited by a language learner. First, we analyse distributional regularities using two novel techniques: a Bayesian ideal learner analysis, and a simple distributional measure. Second, we model higher-order temporal structure-regularities arising in an ordered series of syllable timings-testing the hypothesis that non-adjacent temporal structures may explain the gap between subjectively-perceived temporal regularities, and the absence of universally-accepted lower-order objective measures. Together, our analyses provide limited evidence for predictability at different time scales, though higher-order predictability is difficult to reliably infer. We conclude that temporal predictability in speech may well arise from a combination of individually weak perceptual cues at multiple structural levels, but is challenging to pinpoint.
NASA Astrophysics Data System (ADS)
Botter Martins, Samuel; Vallin Spina, Thiago; Yasuda, Clarissa; Falcão, Alexandre X.
2017-02-01
Statistical atlases have played an important role in automated medical image segmentation. However, a challenge has been to make the atlas more adaptable to possible errors in the deformable registration of anomalous images, given that the body structures of interest for segmentation might present significant differences in shape and texture. Recently, deformable registration errors have been accounted for by a method that locally translates the statistical atlas over the test image after registration and evaluates candidate objects from a delineation algorithm in order to choose the best one as the final segmentation. In this paper, we improve its delineation algorithm and extend the model to a multi-object statistical atlas, built from control images and adaptable to anomalous images, by incorporating a texture classifier. In order to provide a first proof of concept, we instantiate the new method for segmenting, object-by-object and all objects simultaneously, the left and right brain hemispheres and the cerebellum, without the brainstem, and evaluate it on MR T1-images of epilepsy patients before and after brain surgery, which removed portions of the temporal lobe. The results show an efficiency gain and statistically significant higher accuracy, measured by the mean Average Symmetric Surface Distance, with respect to the original approach.
OPTICS OF CONDUCTIVE KERATOPLASTY: IMPLICATIONS FOR PRESBYOPIA MANAGEMENT
Hersh, Peter S
2005-01-01
Purpose: To define the corneal optics of conductive keratoplasty (CK) and assess the clinical implications for hyperopia and presbyopia management. Methods: Four analyses were done. (1) Multifocal effects: In a prospective study of CK, uncorrected visual acuity (UCVA) for a given refractive error in 72 postoperative eyes was compared to control eyes. (2) Surgically induced astigmatism (SIA): 203 eyes were analyzed for magnitude and axis of SIA. (3) Higher-order optical aberrations: Corneal higher-order optical aberrations were assessed for 36 eyes after CK and a similar patient population after hyperopic laser in situ keratomileusis (LASIK). (4) Presbyopia clinical trial: Visual acuity, refractive result, and patient questionnaires were analyzed for 150 subjects in a prospective, multicenter clinical trial of presbyopia management with CK. Results: (1) 63% and 82% of eyes after CK had better UCVA at distance and near, respectively, than controls. (2) The mean SIA was 0.23 diopter (D) steepening at 175° (P < .001); mean magnitude was 0.66 D (SD, 0.43 D). (3) After CK, composite fourth- and sixth-order spherical aberration increased; change in (Z12) spherical aberration alone was not statistically significant. When compared to hyperopic LASIK, there was a statistically significant increase in composite fourth- and sixth-order spherical aberration (P < .01) and spherical aberration (Z12) alone (P < .02); spherical aberration change was more prolate after CK. (4) After the CK monovision procedure, 80% of patients had J3 or better binocular UCVA at near; 84% of patients were satisfied. Satisfaction was associated with near UCVA of J3 or better in the monovision eye (P = .001) and subjectively good postoperative depth perception (P = .038). Conclusions: CK seems to produce functional corneal multifocality with definable introduction of SIA and higher-order optical aberrations, and development of a more prolate corneal contour. 
These optical factors may contribute to improved near-vision function. PMID:17057812
Simulation of Ametropic Human Eyes
NASA Astrophysics Data System (ADS)
Tan, Bo; Chen, Ying-Ling; Lewis, James W. L.
2004-11-01
The computational simulation of the performance of human eyes is complex because the optical parameters of the eye depend on many factors, including age, gender, race, refractive status (accommodation and near- or far-sightedness). This task is made more difficult by the inadequacy of the population statistical characteristics of these parameters. Previously we simulated ametropic (near- or far-sighted) eyes using three independent variables: the axial length of the eye, the corneal surface curvature, and the intraocular refractive index gradient. The prescription for the correction of an ametropic eye is determined by its second-order coefficients of the wavefront aberrations. These corrections are typically achieved using contact lens, spectacle lens, or laser surgery (LASIK). However, the higher order aberrations, which are not corrected and are likely complicated or enhanced by the lower-order correction, could be important for visual performance in a darkened environment. In this paper, we investigate the higher order wavefront aberrations of synthetic ametropic eyes and compare results with measured data published in the past decade. The behavior of three types of ametropes is discussed.
Higher-order correlations for fluctuations in the presence of fields.
Boer, A; Dumitru, S
2002-10-01
The higher-order moments of the fluctuations for thermodynamic systems in the presence of fields are investigated in the framework of a theoretical method. The method uses a generalized statistical ensemble consistent with an adequate expression for the internal energy. The applications refer to the case of a system in a magnetoquasistatic field. In the case of linear magnetic media, one finds that, for the description of the magnetic induction fluctuations, the Gaussian approximation is satisfactory. For nonlinear media, the corresponding fluctuations are non-Gaussian, having a non-null asymmetry. Furthermore, the respective fluctuations have characteristics of leptokurtic, mesokurtic and platykurtic type, depending on the value of the magnetic field strength as compared with a scaling factor of the magnetization curve.
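The moment-based classification the abstract uses (leptokurtic, mesokurtic, platykurtic) can be sketched numerically. The Laplace and uniform samples below are generic stand-ins with known heavy and light tails, not the magnetic-media fluctuation model of the paper.

```python
import numpy as np

def moments_report(x):
    """Skewness (asymmetry) and excess kurtosis of a sample, with the
    leptokurtic / mesokurtic / platykurtic labels used in the abstract.
    A finite sample essentially never gives exactly zero excess, so the
    'mesokurtic' label is a knife-edge case in practice."""
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std()
    skew = np.mean((x - m) ** 3) / s ** 3
    excess = np.mean((x - m) ** 4) / s ** 4 - 3.0
    if excess > 0:
        label = "leptokurtic"
    elif excess < 0:
        label = "platykurtic"
    else:
        label = "mesokurtic"
    return skew, excess, label

rng = np.random.default_rng(0)
laplace = rng.laplace(size=100_000)        # heavier tails than Gaussian
uniform = rng.uniform(-1.0, 1.0, 100_000)  # lighter tails than Gaussian
```

A Laplace-distributed fluctuation comes out leptokurtic (positive excess) and a uniform one platykurtic (negative excess), mirroring the field-strength-dependent regimes described for nonlinear media.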
NASA Technical Reports Server (NTRS)
Kerr, R. A.
1983-01-01
In a three-dimensional simulation, higher-order derivative correlations, including skewness and flatness factors, are calculated for velocity and passive scalar fields and are compared with structures in the flow. The equations are forced to maintain steady-state turbulence while statistics are collected. It is found that the scalar-derivative flatness increases much faster with Reynolds number than the velocity-derivative flatness, and the velocity and mixed derivative skewness do not increase with Reynolds number. Separate exponents are found for the various fourth-order velocity-derivative correlations, with the vorticity-flatness exponent the largest. Three-dimensional graphics show strong alignment between the vorticity, rate-of-strain, and scalar-gradient fields. The vorticity is concentrated in tubes, with the scalar gradient and the largest principal rate of strain aligned perpendicular to the tubes. Velocity spectra, in Kolmogorov variables, collapse to a single curve, and a short -5/3 spectral regime is observed.
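A one-dimensional sketch of the derivative statistics computed in such simulations: skewness and flatness of a finite-difference derivative. A Gaussian test signal stands in for the velocity field here; real turbulent velocity derivatives show negative skewness and flatness well above 3 (intermittency).

```python
import numpy as np

def derivative_stats(u, dx):
    """Skewness and flatness factors of du/dx from a uniformly sampled
    signal (central differences), the 1-D analogue of the
    velocity-derivative correlations examined in the simulation."""
    du = np.gradient(u, dx)
    du = du - du.mean()
    var = np.mean(du ** 2)
    skewness = np.mean(du ** 3) / var ** 1.5
    flatness = np.mean(du ** 4) / var ** 2
    return skewness, flatness

# For a Gaussian signal the derivative stays Gaussian, so the flatness
# factor is close to 3 and the skewness close to 0.
rng = np.random.default_rng(42)
u = rng.standard_normal(200_000)
skewness, flatness = derivative_stats(u, 1.0)
```

Departures of these factors from their Gaussian values (0 and 3) are exactly the Reynolds-number-dependent quantities the abstract tracks.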
ERIC Educational Resources Information Center
Barnes, Jessica J.; Woolrich, Mark W.; Baker, Kate; Colclough, Giles L.; Astle, Duncan E.
2016-01-01
Functional connectivity is the statistical association of neuronal activity time courses across distinct brain regions, supporting specific cognitive processes. This coordination of activity is likely to be highly important for complex aspects of cognition, such as the communication of fluctuating task goals from higher-order control regions to…
ERIC Educational Resources Information Center
Du, Wenchong; Kelly, Steve W.
2013-01-01
The present study examines implicit sequence learning in adult dyslexics with a focus on comparing sequence transitions with different statistical complexities. Learning of a 12-item deterministic sequence was assessed in 12 dyslexic and 12 non-dyslexic university students. Both groups showed equivalent standard reaction time increments when the…
ERIC Educational Resources Information Center
Kuntze, Sebastian; Aizikovitsh-Udi, Einav; Clarke, David
2017-01-01
Stimulating thinking related to mathematical content is the focus of many tasks in the mathematics classroom. Beyond such content-related thinking, promoting forms of higher order thinking is among the goals of mathematics instruction as well. So-called hybrid tasks focus on combining both goals: they aim at fostering mathematical thinking and…
Daikoku, Tatsuya
2018-06-19
Statistical learning (SL) is a method of learning based on the transitional probabilities embedded in sequential phenomena such as music and language. It has been considered an implicit and domain-general mechanism that is innate in the human brain and that functions independently of the intention to learn and of awareness of what has been learned. SL is an interdisciplinary notion that incorporates information technology, artificial intelligence, musicology, and linguistics, as well as psychology and neuroscience. A body of recent studies has suggested that SL can be reflected in neurophysiological responses based on the framework of information theory. This paper reviews a range of work on SL in adults and children that suggests overlapping and independent neural correlates in music and language, and that indicates disabilities of SL. Furthermore, this article discusses the relationships between the order of transitional probabilities (TPs) (i.e., the hierarchy of local statistics) and entropy (i.e., global statistics) with regard to SL strategies in human brains; argues for the importance of information-theoretical approaches to understanding domain-general, higher-order, and global SL covering both real-world music and language; and proposes promising approaches for the application of therapy and pedagogy from various perspectives of psychology, neuroscience, computational studies, musicology, and linguistics.
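The review's two core quantities, transitional probabilities (local statistics) and the entropy of the next-element distribution (a global statistic), can be illustrated with a minimal sketch. The toy sequence and function names below are illustrative, not taken from the paper.

```python
from collections import Counter, defaultdict
import math

def transition_probabilities(seq):
    """First-order transitional probabilities P(next | current)."""
    pairs = Counter(zip(seq, seq[1:]))
    totals = Counter(seq[:-1])  # counts of each element as "current"
    return {(a, b): c / totals[a] for (a, b), c in pairs.items()}

def mean_transition_entropy(seq):
    """Average Shannon entropy (bits) of the next-element distributions."""
    tp = transition_probabilities(seq)
    by_ctx = defaultdict(list)
    for (a, _), p in tp.items():
        by_ctx[a].append(p)
    ents = [-sum(p * math.log2(p) for p in ps) for ps in by_ctx.values()]
    return sum(ents) / len(ents)

melody = list("ABABABAC")  # toy sequence standing in for a musical phrase
print(transition_probabilities(melody)[("A", "B")])  # 0.75
print(mean_transition_entropy(melody))
```

Lower mean entropy corresponds to a more predictable sequence, the regime in which SL effects are typically strongest.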
Entropy Based Genetic Association Tests and Gene-Gene Interaction Tests
de Andrade, Mariza; Wang, Xin
2011-01-01
In the past few years, several entropy-based tests have been proposed for testing either single-SNP association or gene-gene interaction. These tests are mainly based on Shannon entropy and have higher statistical power when compared to standard χ2 tests. In this paper, we extend some of these tests using a more generalized entropy definition, Rényi entropy, of which Shannon entropy is the special case of order 1. The order λ (>0) of Rényi entropy weights the events (genotype/haplotype) according to their probabilities (frequencies). Higher λ places more emphasis on higher-probability events, while smaller λ (close to 0) tends to assign weights more equally. Thus, by properly choosing λ, one can potentially increase the power of the tests or the significance level of the p-values. We conducted simulation as well as real data analyses to assess the impact of the order λ and the performance of these generalized tests. The results showed that for the dominant model the order-2 test was more powerful, and for the multiplicative model the order-1 and order-2 tests had similar power. The analyses indicate that the choice of λ depends on the underlying genetic model and that Shannon entropy is not necessarily the most powerful entropy measure for constructing genetic association or interaction tests. PMID:23089811
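The order-λ weighting described above follows directly from the definition of Rényi entropy. A minimal sketch; the genotype frequencies below are made up for illustration.

```python
import math

def renyi_entropy(probs, lam):
    """Rényi entropy of order lam (nats); lam = 1 is the Shannon limit."""
    probs = [p for p in probs if p > 0]
    if abs(lam - 1.0) < 1e-12:
        return -sum(p * math.log(p) for p in probs)  # Shannon entropy
    return math.log(sum(p ** lam for p in probs)) / (1.0 - lam)

geno = [0.64, 0.32, 0.04]  # hypothetical genotype frequencies (sum to 1)
# lam > 1 emphasizes common genotypes; lam -> 0 weights genotypes equally,
# so Rényi entropy decreases monotonically with lam for a fixed distribution.
print(renyi_entropy(geno, 0.5), renyi_entropy(geno, 1.0), renyi_entropy(geno, 2.0))
```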
Multiple choice questions can be designed or revised to challenge learners' critical thinking.
Tractenberg, Rochelle E; Gushta, Matthew M; Mulroney, Susan E; Weissinger, Peggy A
2013-12-01
Multiple choice (MC) questions from a graduate physiology course were evaluated by cognitive-psychology (but not physiology) experts, and analyzed statistically, in order to test the independence of content expertise and cognitive complexity ratings of MC items. Integration of higher order thinking into MC exams is important, but widely known to be challenging, perhaps especially when content experts must think like novices. Expertise in the domain (content) may actually impede the creation of higher-complexity items. Three cognitive psychology experts independently rated cognitive complexity for 252 multiple-choice physiology items using a six-level cognitive complexity matrix that was synthesized from the literature. Rasch modeling estimated item difficulties. The complexity ratings and difficulty estimates were then analyzed together to determine the relative contributions (and independence) of complexity and difficulty to the likelihood of correct answers on each item. Cognitive complexity was found to be statistically independent of difficulty estimates for 88% of items. Using the complexity matrix, modifications were identified to increase some item complexities by one level without affecting the item's difficulty. Cognitive complexity can effectively be rated by non-content experts. The six-level complexity matrix, if applied by faculty peer groups trained in cognitive complexity and without domain-specific expertise, could lead to improvements in the complexity targeted with item writing and revision. Targeting higher order thinking with MC questions can be achieved without changing item difficulties or other test characteristics, but this may be less likely if the content expert is left to assess items within their domain of expertise.
Hoyle, R H
1991-02-01
Indirect measures of psychological constructs are vital to clinical research. On occasion, however, the meaning of indirect measures of psychological constructs is obfuscated by statistical procedures that do not account for the complex relations between items and latent variables and among latent variables. Covariance structure analysis (CSA) is a statistical procedure for testing hypotheses about the relations among items that indirectly measure a psychological construct and relations among psychological constructs. This article introduces clinical researchers to the strengths and limitations of CSA as a statistical procedure for conceiving and testing structural hypotheses that are not tested adequately with other statistical procedures. The article is organized around two empirical examples that illustrate the use of CSA for evaluating measurement models with correlated error terms, higher-order factors, and measured and latent variables.
Ramanathan, Arvind; Savol, Andrej J; Agarwal, Pratul K; Chennubhotla, Chakra S
2012-11-01
Biomolecular simulations at millisecond and longer time-scales can provide vital insights into functional mechanisms. Because post-simulation analyses of such large trajectory datasets can be a limiting factor in obtaining biological insights, there is an emerging need to identify key dynamical events and to relate these events to the biological function online, that is, as simulations are progressing. Recently, we introduced a novel computational technique, quasi-anharmonic analysis (QAA) (Ramanathan et al., PLoS One 2011;6:e15827), for partitioning the conformational landscape into a hierarchy of functionally relevant sub-states. The unique capabilities of QAA are enabled by exploiting anharmonicity in the form of fourth-order statistics for characterizing atomic fluctuations. In this article, we extend QAA for analyzing long time-scale simulations online. In particular, we present HOST4MD, a higher-order statistical toolbox for molecular dynamics simulations, which (1) identifies key dynamical events as simulations are in progress, (2) explores potential sub-states, and (3) identifies conformational transitions that enable the protein to access those sub-states. We demonstrate HOST4MD on microsecond timescale simulations of the enzyme adenylate kinase in its apo state. HOST4MD identifies several conformational events in these simulations, revealing how the intrinsic coupling between the three subdomains (LID, CORE, and NMP) changes during the simulations. Further, it also identifies an inherent asymmetry in the opening/closing of the two binding sites. We anticipate that HOST4MD will provide a powerful and extensible framework for detecting biophysically relevant conformational coordinates from long time-scale simulations. Copyright © 2012 Wiley Periodicals, Inc.
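QAA's use of fourth-order statistics rests on the fact that excess kurtosis vanishes for Gaussian (harmonic) fluctuations and departs from zero for anharmonic ones, such as hopping between conformational sub-states. A minimal sketch with synthetic data standing in for atomic fluctuations; none of this is HOST4MD code.

```python
import random

def excess_kurtosis(xs):
    """Fourth standardized moment minus 3; approximately 0 for Gaussian data."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / var ** 2 - 3.0

random.seed(0)
# Harmonic-like fluctuation: a single Gaussian well
gauss = [random.gauss(0, 1) for _ in range(20000)]
# Anharmonic surrogate: hopping between two sub-states at +/-2
bimodal = [random.gauss(random.choice((-2, 2)), 0.5) for _ in range(20000)]
print(excess_kurtosis(gauss))    # near 0
print(excess_kurtosis(bimodal))  # clearly negative (flat-topped, two wells)
```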
Aslanides, Ioannis M.; Padroni, Sara; Arba-Mosquera, Samuel
2012-01-01
Purpose: To evaluate mid-term refractive outcomes and higher order aberrations of aspheric PRK for low, moderate and high myopia and myopic astigmatism with the AMARIS excimer laser system (SCHWIND eye-tech-solutions GmbH, Kleinostheim, Germany). Methods: This prospective longitudinal study evaluated 80 eyes of 40 subjects who underwent aspheric PRK. Manifest refractive spherical equivalent (MRSE) of up to −10.00 diopters (D) at the spectacle plane with cylinder up to 3.25 D was treated. Refractive outcomes and corneal wavefront data (6 mm pupil, to the 7th Zernike order) were evaluated out to 2 years postoperatively. Statistical significance was indicated by P < 0.05. Results: The mean MRSE was −4.77 ± 2.45 D (range, −10.00 D to −0.75 D) preoperatively and −0.12 ± 0.35 D (range, −1.87 D to +0.75 D) postoperatively (P < 0.0001). Postoperatively, 91% (73/80) of eyes had an MRSE within ±0.50 D of the attempted correction. No eyes lost one or more lines of corrected distance visual acuity (CDVA), and CDVA increased by one or more lines in 26% (21/80) of eyes. Corneal trefoil and corneal higher order aberration root mean square did not change statistically postoperatively compared to preoperatively (P > 0.05, both cases). There was a statistical increase in postoperative coma (+0.12 μm) and spherical aberration (+0.14 μm) compared to preoperatively (P < 0.001, both cases). Conclusion: Aspheric PRK provides excellent visual and refractive outcomes, with induction of individual corneal aberrations but not of overall corneal aberrations.
Feature extraction and classification algorithms for high dimensional data
NASA Technical Reports Server (NTRS)
Lee, Chulhee; Landgrebe, David
1993-01-01
Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. A characteristic of the proposed method arises by noting that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used both for parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of the second order statistics in analyzing high dimensional data is recognized. 
By investigating the characteristics of high dimensional data, it is suggested why the second order statistics must be taken into account in high dimensional data. Recognizing the importance of the second order statistics raises the need to represent them, and a method to visualize statistics using a color code is proposed. By representing statistics with color coding, one can easily extract and compare the first- and second-order statistics.
Design of order statistics filters using feedforward neural networks
NASA Astrophysics Data System (ADS)
Maslennikova, Yu. S.; Bochkarev, V. V.
2016-08-01
In recent years, significant progress has been made in the development of nonlinear data processing techniques. Such techniques are widely used in digital data filtering and image enhancement. Many of the most effective nonlinear filters are based on order statistics; the widely used median filter is the best known order statistic filter. A generalized form of these filters can be constructed based on Lloyd's statistics. Filters based on order statistics have excellent robustness properties in the presence of impulsive noise. In this paper, we present a special approach for the synthesis of order statistics filters using artificial neural networks. Optimal Lloyd's statistics are used to select the initial weights for the neural network. The adaptive properties of neural networks provide opportunities to optimize order statistics filters for data with asymmetric distribution functions. Different examples demonstrate the properties and performance of the presented approach.
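The order-statistic (L-)filter family described above applies fixed weights to the sorted samples of a sliding window, with the median filter as the special case of a single unit weight in the middle. A minimal sketch; the neural-network weight optimization from the paper is omitted, and the toy signal is illustrative.

```python
def l_filter(signal, weights):
    """Sliding-window L-filter: weighted sum of the sorted window samples.
    weights[k] multiplies the k-th smallest sample in each window."""
    k = len(weights)
    half = k // 2
    # Edge handling: replicate the boundary samples
    padded = [signal[0]] * half + list(signal) + [signal[-1]] * half
    out = []
    for i in range(len(signal)):
        window = sorted(padded[i:i + k])
        out.append(sum(w * x for w, x in zip(weights, window)))
    return out

noisy = [1, 1, 9, 1, 1, 1, -7, 1, 1]   # impulsive outliers at 9 and -7
median = l_filter(noisy, [0, 1, 0])    # 3-point median filter
print(median)  # outliers suppressed
```

Asymmetric weight vectors (e.g. favoring upper order statistics) are what the adaptive training targets for skewed noise distributions.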
Δim-lacunary statistical convergence of order α
NASA Astrophysics Data System (ADS)
Altınok, Hıfsı; Et, Mikail; Işık, Mahmut
2018-01-01
The purpose of this work is to introduce the concepts of Δim-lacunary statistical convergence of order α and lacunary strongly (Δim, p)-convergence of order α. We establish some connections between lacunary strongly (Δim, p)-convergence of order α and Δim-lacunary statistical convergence of order α. It is shown that if a sequence is lacunary strongly (Δim, p)-summable of order α, then it is Δim-lacunary statistically convergent of order α.
Li, Yaohang; Liu, Hui; Rata, Ionel; Jakobsson, Eric
2013-02-25
The rapidly increasing number of protein crystal structures available in the Protein Data Bank (PDB) has naturally made statistical analyses feasible in studying complex high-order inter-residue correlations. In this paper, we report a context-based secondary structure potential (CSSP) for assessing the quality of predicted protein secondary structures generated by various prediction servers. CSSP is a sequence-position-specific knowledge-based potential generated based on the potentials of mean force approach, where high-order inter-residue interactions are taken into consideration. The CSSP potential is effective in identifying secondary structure predictions with good quality. In 56% of the targets in the CB513 benchmark, the optimal CSSP potential is able to recognize the native secondary structure or a prediction with Q3 accuracy higher than 90% as best scored in the predicted secondary structures generated by 10 popularly used secondary structure prediction servers. In more than 80% of the CB513 targets, the predicted secondary structures with the lowest CSSP potential values yield higher than 80% Q3 accuracy. Similar performance of CSSP is found on the CASP9 targets as well. Moreover, our computational results also show that the CSSP potential using triplets outperforms the CSSP potential using doublets and is currently better than the CSSP potential using quartets.
Rough-pipe flows and the existence of fully developed turbulence
NASA Astrophysics Data System (ADS)
Gioia, G.; Chakraborty, Pinaki; Bombardelli, Fabián A.
2006-03-01
It is widely believed that at high Reynolds number (Re) all turbulent flows approach a limiting state of "fully developed turbulence" in which the statistics of the velocity fluctuations are independent of Re. Nevertheless, direct measurements of the velocity fluctuations have failed to yield firm empirical evidence that even the second-order structure function becomes independent of Re at high Re, let alone structure functions of higher order. Here we relate the friction coefficient (f) of rough-pipe flows to the second-order structure function. Then we show that in light of experimental measurements of f our results yield unequivocal evidence that the second-order structure function becomes independent of Re at high Re, compatible with the existence of fully developed turbulence.
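The second-order structure function at the heart of this argument is straightforward to estimate from a velocity record. A sketch on a synthetic surrogate signal (a smoothed random walk, not pipe-flow data); the function name and parameters are illustrative.

```python
import random

def structure_function_2(u, r):
    """S2(r) = <(u(x + r) - u(x))^2>, averaged along a 1-D record."""
    diffs = [(u[i + r] - u[i]) ** 2 for i in range(len(u) - r)]
    return sum(diffs) / len(diffs)

random.seed(1)
# Surrogate velocity record: an AR(1)-smoothed random walk standing in for u(x)
u, step = [0.0], 0.0
for _ in range(5000):
    step = 0.9 * step + random.gauss(0, 1)
    u.append(u[-1] + 0.01 * step)

for r in (1, 4, 16):
    print(r, structure_function_2(u, r))  # S2 increases with separation r
```

In the paper's argument, it is the Re-independence of such S2 estimates (inferred via the friction coefficient) that signals fully developed turbulence.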
Reconstructing Information in Large-Scale Structure via Logarithmic Mapping
NASA Astrophysics Data System (ADS)
Szapudi, Istvan
We propose to develop a new method to extract information from large-scale structure data combining two-point statistics and non-linear transformations; before, this information was available only with substantially more complex higher-order statistical methods. Initially, most of the cosmological information in large-scale structure lies in two-point statistics. With non-linear evolution, some of that useful information leaks into higher-order statistics. The PI and group have shown in a series of theoretical investigations how that leakage occurs, and explained the Fisher information plateau at smaller scales. This plateau means that even as more modes are added to the measurement of the power spectrum, the total cumulative information (loosely speaking, the inverse errorbar) is not increasing. Recently we have shown in Neyrinck et al. (2009, 2010) that a logarithmic (and a related Gaussianization or Box-Cox) transformation on the non-linear Dark Matter or galaxy field reconstructs a surprisingly large fraction of this missing Fisher information of the initial conditions. This was predicted by the earlier wave mechanical formulation of gravitational dynamics by Szapudi & Kaiser (2003). The present proposal is focused on working out the theoretical underpinning of the method to a point that it can be used in practice to analyze data. In particular, one needs to deal with the usual real-life issues of galaxy surveys, such as complex geometry, discrete sampling (Poisson or sub-Poisson noise), bias (linear or non-linear, deterministic or stochastic), redshift distortions, projection effects for 2D samples, and the effects of photometric redshift errors. We will develop methods for weak lensing and Sunyaev-Zeldovich power spectra as well, the latter specifically targeting Planck. In addition, we plan to investigate the question of residual higher-order information after the non-linear mapping, and possible applications for cosmology.
Our aim will be to work out practical methods, with the ultimate goal of cosmological parameter estimation. We will quantify with standard MCMC and Fisher methods (including the DETF Figure of Merit when applicable) the efficiency of our estimators, comparing with the conventional method that uses the untransformed field. Preliminary results indicate that the increase for NASA's WFIRST in the DETF Figure of Merit would be 1.5-4.2, using a range of pessimistic to optimistic assumptions, respectively.
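The effect of the proposed logarithmic mapping can be illustrated on a toy lognormal field, which mimics the skewed one-point distribution of the non-linearly evolved density; the parameters below are arbitrary stand-ins, not survey values.

```python
import math, random

def skewness(xs):
    """Third standardized moment; 0 for a Gaussian distribution."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    return m3 / var ** 1.5

random.seed(2)
# Toy lognormal field standing in for the non-linear density 1 + delta
density = [math.exp(random.gauss(0, 0.7)) for _ in range(20000)]
log_density = [math.log(d) for d in density]  # the proposed log-mapping

print(skewness(density))      # strongly positive: non-Gaussian tail
print(skewness(log_density))  # near zero: Gaussianity restored
```

For an exactly lognormal field the mapping is perfect; the proposal's point is that it also recovers much of the two-point Fisher information for realistic fields.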
ERIC Educational Resources Information Center
Burch, Gerald F.; Burch, Jana J.; Heller, Nathan A.; Batchelor, John H.
2015-01-01
Continuing pressures are being placed on undergraduate business education to alter curriculum content and delivery. The anticipated product of these changes is a graduate that is capable of performing the higher order thinking skills needed to navigate a constantly changing, global business environment. This article describes the implementation of…
Nonlinear identification of the total baroreflex arc: higher-order nonlinearity
Moslehpour, Mohsen; Kawada, Toru; Sunagawa, Kenji; Sugimachi, Masaru
2016-01-01
The total baroreflex arc is the open-loop system relating carotid sinus pressure (CSP) to arterial pressure (AP). The nonlinear dynamics of this system were recently characterized. First, Gaussian white noise CSP stimulation was employed in open-loop conditions in normotensive and hypertensive rats with sectioned vagal and aortic depressor nerves. Nonparametric system identification was then applied to measured CSP and AP to establish a second-order nonlinear Uryson model. The aim in this study was to assess the importance of higher-order nonlinear dynamics via development and evaluation of a third-order nonlinear model of the total arc using the same experimental data. Third-order Volterra and Uryson models were developed by employing nonparametric and parametric identification methods. The R2 values between the AP predicted by the best third-order Volterra model and measured AP in response to Gaussian white noise CSP not utilized in developing the model were 0.69 ± 0.03 and 0.70 ± 0.03 for normotensive and hypertensive rats, respectively. The analogous R2 values for the best third-order Uryson model were 0.71 ± 0.03 and 0.73 ± 0.03. These R2 values were not statistically different from the corresponding values for the previously established second-order Uryson model, which were both 0.71 ± 0.03 (P > 0.1). Furthermore, none of the third-order models predicted well-known nonlinear behaviors including thresholding and saturation better than the second-order Uryson model. Additional experiments suggested that the unexplained AP variance was partly due to higher brain center activity. In conclusion, the second-order Uryson model sufficed to represent the sympathetically mediated total arc under the employed experimental conditions. PMID:27629885
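The discrete second-order Volterra structure underlying these models can be sketched as follows; the kernels and input below are hypothetical stand-ins for illustration, not the identified baroreflex kernels.

```python
def volterra2(x, k0, k1, k2):
    """Second-order discrete Volterra model:
    y[n] = k0 + sum_i k1[i]*x[n-i] + sum_{i,j} k2[i][j]*x[n-i]*x[n-j]."""
    M = len(k1)  # kernel memory length
    y = []
    for n in range(len(x)):
        past = [x[n - i] if n - i >= 0 else 0.0 for i in range(M)]
        lin = sum(k1[i] * past[i] for i in range(M))
        quad = sum(k2[i][j] * past[i] * past[j]
                   for i in range(M) for j in range(M))
        y.append(k0 + lin + quad)
    return y

# Hypothetical kernels; a CSP-like input of length 3
y = volterra2([1.0, 0.0, -1.0], k0=0.5, k1=[0.8, -0.2],
              k2=[[0.1, 0.0], [0.0, 0.05]])
print(y)
```

A third-order model adds a triple sum over k3[i][j][k]; the study's finding is that this extra term did not improve predictions of AP over the second-order model.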
Wavelet Transform Based Higher Order Statistical Analysis of Wind and Wave Time Histories
NASA Astrophysics Data System (ADS)
Habib Huseni, Gulamhusenwala; Balaji, Ramakrishnan
2017-10-01
Wind, blowing on the surface of the ocean, imparts the energy to generate waves. Understanding wind-wave interactions is essential for an oceanographer. This study involves higher order spectral analyses of wind speed and significant wave height time histories, extracted from the European Centre for Medium-Range Weather Forecasts database at an offshore location off the Mumbai coast, through continuous wavelet transform. The time histories were divided by season (pre-monsoon, monsoon, post-monsoon and winter), and the analyses were carried out on the individual data sets to assess the effect of the various seasons on the wind-wave interactions. The analysis revealed the frequency coupling between wind speeds and wave heights across the various seasons. The details of the data, the analysing technique and the results are presented in this paper.
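A continuous wavelet transform of the kind used here can be sketched by direct convolution with a real Morlet wavelet; the scales, center frequency w0 = 6 and test signal below are illustrative choices, not the study's settings.

```python
import math

def morlet(t, scale, w0=6.0):
    """Real part of a Morlet wavelet at the given scale."""
    u = t / scale
    return math.exp(-0.5 * u * u) * math.cos(w0 * u) / math.sqrt(scale)

def cwt(signal, scales, support=4.0):
    """Continuous wavelet transform by direct convolution (real Morlet)."""
    out = []
    for s in scales:
        half = int(support * s)
        kern = [morlet(t, s) for t in range(-half, half + 1)]
        row = []
        for n in range(len(signal)):
            acc = 0.0
            for k, w in enumerate(kern):
                m = n + k - half
                if 0 <= m < len(signal):
                    acc += signal[m] * w
            row.append(acc)
        out.append(row)
    return out

# Sine with a 16-sample period; scale ~ w0*T/(2*pi) ~ 15 should respond most
sig = [math.sin(2 * math.pi * n / 16) for n in range(128)]
coeffs = cwt(sig, scales=[4, 16, 64])
power = [sum(c * c for c in row) for row in coeffs]
print(power)  # energy concentrates at the matching scale
```

Cross-wavelet products of two such coefficient fields (wind and wave) are what reveal the frequency coupling discussed in the abstract.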
Statistics of spatial derivatives of velocity and pressure in turbulent channel flow
NASA Astrophysics Data System (ADS)
Vreman, A. W.; Kuerten, J. G. M.
2014-08-01
Statistical profiles of the first- and second-order spatial derivatives of velocity and pressure are reported for turbulent channel flow at Reτ = 590. The statistics were extracted from a high-resolution direct numerical simulation. To quantify the anisotropic behavior of fine-scale structures, the variances of the derivatives are compared with the theoretical values for isotropic turbulence. It is shown that appropriate combinations of first- and second-order velocity derivatives lead to (directional) viscous length scales without explicit occurrence of the viscosity in the definitions. To quantify the non-Gaussian and intermittent behavior of fine-scale structures, higher-order moments and probability density functions of spatial derivatives are reported. Absolute skewnesses and flatnesses of several spatial derivatives display high peaks in the near wall region. In the logarithmic and central regions of the channel flow, all first-order derivatives appear to be significantly more intermittent than in isotropic turbulence at the same Taylor Reynolds number. Since the nine variances of first-order velocity derivatives are the distinct elements of the turbulence dissipation, the budgets of these nine variances are shown, together with the budget of the turbulence dissipation. The comparison of the budgets in the near-wall region indicates that the normal derivative of the fluctuating streamwise velocity (∂u'/∂y) plays a more important role than other components of the fluctuating velocity gradient. The small-scale generation term formed by triple correlations of fluctuations of first-order velocity derivatives is analyzed. A typical mechanism of small-scale generation near the wall (around y+ = 1), the intensification of positive ∂u'/∂y by local strain fluctuation (compression in normal and stretching in spanwise direction), is illustrated and discussed.
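The flatness (fourth standardized moment) used throughout this analysis equals 3 for a Gaussian signal and rises sharply for intermittent ones. A minimal sketch on synthetic data; the burst model below is an illustrative stand-in for near-wall intermittency, not channel-flow data.

```python
import random

def flatness(xs):
    """Fourth moment normalized by the squared variance (Gaussian: 3)."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / var ** 2

random.seed(3)
u = [random.gauss(0, 1) for _ in range(20000)]
# Intermittent surrogate: rare large bursts (1% of samples, amplified 10x)
burst = [x * (10.0 if random.random() < 0.01 else 1.0) for x in u]
dudx = [b - a for a, b in zip(u, u[1:])]  # first difference, like du/dx

print(flatness(dudx))   # close to 3 for a Gaussian-derivative signal
print(flatness(burst))  # much larger than 3: intermittency raises flatness
```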
Padmanabhan, Prema; Mrochen, Michael; Basuthkar, Subam; Viswanathan, Deepa; Joseph, Roy
2008-03-01
To compare the outcomes of wavefront-guided and wavefront-optimized treatment in fellow eyes of patients having laser in situ keratomileusis (LASIK) for myopia. Medical and Vision Research Foundation, Tamil Nadu, India. This prospective comparative study comprised 27 patients who had wavefront-guided LASIK in 1 eye and wavefront-optimized LASIK in the fellow eye. The Hansatome (Bausch & Lomb) was used to create a superior-hinged flap, and the Allegretto laser (WaveLight Laser Technologie AG) was used for photoablation. The Allegretto wave analyzer was used to measure ocular wavefront aberrations, and the Functional Acuity Contrast Test chart to measure contrast sensitivity, before and 1 month after LASIK. The refractive and visual outcomes and the changes in aberrations and contrast sensitivity were compared between the 2 treatment modalities. One month postoperatively, 92% of eyes in the wavefront-guided group and 85% in the wavefront-optimized group had uncorrected visual acuity of 20/20 or better; 93% and 89%, respectively, had a postoperative spherical equivalent refraction of +/-0.50 diopter. The differences between groups were not statistically significant. Wavefront-guided LASIK induced less change in 18 of 22 higher-order Zernike terms than wavefront-optimized LASIK, with the change in positive spherical aberration the only statistically significant one (P = .01). Contrast sensitivity improved at the low and middle spatial frequencies (not statistically significant) and worsened significantly at high spatial frequencies after wavefront-guided LASIK; there was a statistically significant worsening at all spatial frequencies after wavefront-optimized LASIK. Although both wavefront-guided and wavefront-optimized LASIK gave excellent refractive correction results, the former induced fewer higher-order aberrations and was associated with better contrast sensitivity.
Barbosa, Daniel C; Roupar, Dalila B; Ramos, Jaime C; Tavares, Adriano C; Lima, Carlos S
2012-01-11
Wireless capsule endoscopy (CE) has been introduced as an innovative, non-invasive diagnostic technique for evaluation of the gastrointestinal tract, reaching places where conventional endoscopy cannot. However, the output of this technique is an 8-hour video, whose analysis by the expert physician is very time consuming. Thus, a computer-assisted diagnosis tool to help physicians evaluate CE exams faster and more accurately is an important technical challenge and an excellent economical opportunity. The set of features proposed in this paper to code textural information is based on statistical modeling of second order textural measures extracted from co-occurrence matrices. To cope with both joint and marginal non-Gaussianity of second order textural measures, higher order moments are used. These statistical moments are taken from the two-dimensional color-scale feature space, where two different scales are considered. Second and higher order moments of textural measures are computed from the co-occurrence matrices of images synthesized by the inverse wavelet transform of the wavelet transform containing only the selected scales for the three color channels. The dimensionality of the data is reduced by using Principal Component Analysis. The proposed textural features are then used as the input of a classifier based on artificial neural networks. Classification performances of 93.1% specificity and 93.9% sensitivity are achieved on real data. These promising results open the path towards a deeper study regarding the applicability of this algorithm in computer-aided diagnosis systems to assist physicians in their clinical practice.
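The second-order textural measures referenced above derive from grey-level co-occurrence matrices. A minimal sketch of one such measure (contrast) on two toy 4-level images; the wavelet-domain processing and higher-moment statistics of the paper are omitted.

```python
def cooccurrence(img, dx, dy, levels):
    """Normalized grey-level co-occurrence matrix for displacement (dx, dy)."""
    H, W = len(img), len(img[0])
    C = [[0.0] * levels for _ in range(levels)]
    total = 0
    for y in range(H):
        for x in range(W):
            x2, y2 = x + dx, y + dy
            if 0 <= x2 < W and 0 <= y2 < H:
                C[img[y][x]][img[y2][x2]] += 1
                total += 1
    return [[c / total for c in row] for row in C]

def contrast(C):
    """Second-order textural measure: sum_ij (i - j)^2 * p(i, j)."""
    return sum((i - j) ** 2 * p
               for i, row in enumerate(C) for j, p in enumerate(row))

smooth = [[0, 0, 1, 1], [0, 0, 1, 1], [2, 2, 3, 3], [2, 2, 3, 3]]
stripes = [[0, 3, 0, 3], [0, 3, 0, 3], [0, 3, 0, 3], [0, 3, 0, 3]]
c1 = contrast(cooccurrence(smooth, 1, 0, 4))   # horizontal displacement
c2 = contrast(cooccurrence(stripes, 1, 0, 4))
print(c1, c2)  # the striped texture has much higher horizontal contrast
```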
Science Journals in the Garden: Developing the Skill of Observation in Elementary Age Students
NASA Astrophysics Data System (ADS)
Kelly, Karinsa Michelle
The ability to make and record scientific observations is critical in order for students to engage in successful inquiry, and provides a sturdy foundation for children to develop higher order cognitive processes. Nevertheless, observation is taken for granted in the elementary classroom. This study explores how linking school garden experience with the use of science journals can support this skill. Students participated in a month-long unit in which they practiced their observation skills in the garden and recorded those observations in a science journal. Students' observational skills were assessed using pre- and post-assessments, student journals, and student interviews using three criteria: Accuracy, Detail, and Quantitative Data. Statistically significant improvements were found in the categories of Detail and Quantitative Data. Scores did improve in the category of Accuracy, but it was not found to be a statistically significant improvement.
First principles statistical mechanics of alloys and magnetism
NASA Astrophysics Data System (ADS)
Eisenbach, Markus; Khan, Suffian N.; Li, Ying Wai
Modern high performance computing resources are enabling the exploration of the statistical physics of phase spaces of increasing size and with higher fidelity of the Hamiltonian of the systems. For selected systems, this now allows the combination of Density Functional based first principles calculations with classical Monte Carlo methods for parameter-free, predictive thermodynamics of materials. We combine our locally self-consistent real space multiple scattering method for solving the Kohn-Sham equation with Wang-Landau Monte Carlo calculations (WL-LSMS). In the past we have applied this method to the calculation of Curie temperatures in magnetic materials. Here we will present direct calculations of chemical order-disorder transitions in alloys. We present our calculated transition temperature for the chemical ordering in CuZn and the temperature dependence of the short-range order parameter and specific heat. Finally we will present the extension of the WL-LSMS method to magnetic alloys, thus allowing the investigation of the interplay of magnetism, structure and chemical order in ferrous alloys. This research was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Science and Engineering Division and it used Oak Ridge Leadership Computing Facility resources at Oak Ridge National Laboratory.
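Wang-Landau sampling, the Monte Carlo side of WL-LSMS, estimates the density of states g(E) by biasing moves toward rarely visited energies until the energy histogram is flat. A minimal sketch on a 1-D Ising ring, a cheap stand-in for the first-principles alloy Hamiltonian; all parameters below are illustrative.

```python
import math, random

def wang_landau_ising1d(N=8, lnf_final=1e-3, flat=0.6, sweep=20000):
    """Wang-Landau estimate of ln g(E) for a 1-D Ising ring of N spins."""
    random.seed(4)
    spins = [random.choice((-1, 1)) for _ in range(N)]
    energy = lambda s: -sum(s[i] * s[(i + 1) % N] for i in range(N))
    E = energy(spins)
    lng, hist, lnf = {}, {}, 1.0
    while lnf > lnf_final:
        for _ in range(sweep):
            i = random.randrange(N)
            spins[i] *= -1
            E_new = energy(spins)
            # Accept with min(1, g(E)/g(E_new)): rare energies are favored
            delta = lng.get(E, 0.0) - lng.get(E_new, 0.0)
            if delta >= 0 or random.random() < math.exp(delta):
                E = E_new
            else:
                spins[i] *= -1  # reject: undo the flip
            lng[E] = lng.get(E, 0.0) + lnf
            hist[E] = hist.get(E, 0) + 1
        # Histogram flat enough -> halve the modification factor
        if min(hist.values()) > flat * sum(hist.values()) / len(hist):
            lnf, hist = lnf / 2.0, {}
    return lng

lng = wang_landau_ising1d()
# Relative degeneracy ln g(0) - ln g(-8) should approach ln(140/2) ~ 4.25
print(sorted(lng), lng[0] - lng[-8])
```

Once ln g(E) is known, the partition function and specific heat follow at any temperature by reweighting, which is what enables the order-disorder transition temperatures quoted above.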
NASA Astrophysics Data System (ADS)
Golmohammadi, A.; Jafarpour, B.; M Khaninezhad, M. R.
2017-12-01
Calibration of heterogeneous subsurface flow models leads to ill-posed nonlinear inverse problems, where too many unknown parameters are estimated from limited response measurements. When the underlying parameters form complex (non-Gaussian) structured spatial connectivity patterns, classical variogram-based geostatistical techniques cannot describe the underlying connectivity patterns. Modern pattern-based geostatistical methods that incorporate higher-order spatial statistics are more suitable for describing such complex spatial patterns. Moreover, when the underlying unknown parameters are discrete (geologic facies distribution), conventional model calibration techniques that are designed for continuous parameters cannot be applied directly. In this paper, we introduce a novel pattern-based model calibration method to reconstruct discrete and spatially complex facies distributions from dynamic flow response data. To reproduce complex connectivity patterns during model calibration, we impose a feasibility constraint to ensure that the solution follows the expected higher-order spatial statistics. For model calibration, we adopt a regularized least-squares formulation, involving data mismatch, pattern connectivity, and feasibility constraint terms. Using an alternating directions optimization algorithm, the regularized objective function is divided into a continuous model calibration problem, followed by mapping the solution onto the feasible set. The feasibility constraint to honor the expected spatial statistics is implemented using a supervised machine learning algorithm. The two steps of the model calibration formulation are repeated until the convergence criterion is met. Several numerical examples are used to evaluate the performance of the developed method.
Image statistics underlying natural texture selectivity of neurons in macaque V4
Okazawa, Gouki; Tajima, Satohiro; Komatsu, Hidehiko
2015-01-01
Our daily visual experiences are inevitably linked to recognizing the rich variety of textures. However, how the brain encodes and differentiates a plethora of natural textures remains poorly understood. Here, we show that many neurons in macaque V4 selectively encode sparse combinations of higher-order image statistics to represent natural textures. We systematically explored neural selectivity in a high-dimensional texture space by combining texture synthesis and efficient-sampling techniques. This yielded parameterized models for individual texture-selective neurons. The models provided parsimonious but powerful predictors for each neuron’s preferred textures using a sparse combination of image statistics. As a whole population, the neuronal tuning was distributed in a way suitable for categorizing textures and quantitatively predicts human ability to discriminate textures. Taken together, these results suggest that the collective representation of visual image statistics in V4 plays a key role in organizing natural texture perception. PMID:25535362
Realistic finite temperature simulations of magnetic systems using quantum statistics
NASA Astrophysics Data System (ADS)
Bergqvist, Lars; Bergman, Anders
2018-01-01
We have performed realistic atomistic simulations at finite temperatures using Monte Carlo and atomistic spin dynamics simulations incorporating quantum (Bose-Einstein) statistics. The description is much improved at low temperatures compared to the classical (Boltzmann) statistics normally used in these kinds of simulations, while at higher temperatures the classical statistics are recovered. This corrected low-temperature description is reflected in both the magnetization and the magnetic specific heat, the latter allowing for improved modeling of the magnetic contribution to free energies. A central property in the method is the magnon density of states at finite temperatures, and we have compared several different implementations for obtaining it. The method has no restrictions regarding chemical and magnetic order of the considered materials. This is demonstrated by applying the method to elemental ferromagnetic systems, including Fe and Ni, as well as Fe-Co random alloys and the ferrimagnetic system GdFe3.
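The quantum-versus-classical occupation statistics at the heart of this correction can be illustrated with a short sketch. This is not the authors' code; the function name and the unit convention kB = 1 are illustrative assumptions:

```python
import numpy as np

def magnon_occupation(energy, temperature, kB=1.0):
    """Bose-Einstein occupation n(E, T) = 1 / (exp(E / kB T) - 1).

    Classical spin simulations implicitly use the classical limit
    n_cl = kB T / E (Rayleigh-Jeans); the two agree when kB T >> E,
    which is why classical statistics are recovered at high temperature.
    """
    x = energy / (kB * temperature)
    return 1.0 / np.expm1(x)  # expm1 is accurate for small x
```

At kB T = 100 E the Bose-Einstein occupation is about 99.5, close to the classical value of 100, while at kB T = 0.1 E it is exponentially suppressed (about 4.5e-5) where the classical estimate would still give 0.1 — the low-temperature regime the abstract says is improved.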
Comparison of the visual results after SMILE and femtosecond laser-assisted LASIK for myopia.
Lin, Fangyu; Xu, Yesheng; Yang, Yabo
2014-04-01
To perform a comparative clinical analysis of the safety, efficacy, and predictability of two surgical procedures (ie, small incision lenticule extraction [SMILE] and femtosecond laser-assisted LASIK [FS-LASIK]) to correct myopia. Sixty eyes of 31 patients with a mean spherical equivalent of -5.13 ± 1.75 diopters underwent myopia correction with the SMILE procedure. Fifty-one eyes of 27 patients with a mean spherical equivalent of -5.58 ± 2.41 diopters were treated with the FS-LASIK procedure. Postoperative uncorrected and corrected distance visual acuity, manifest refraction, and higher-order aberrations were analyzed statistically at 1 and 3 months postoperatively. No statistically significant differences were found at 1 and 3 months in parameters that included the percentage of eyes with an uncorrected distance visual acuity of 20/20 or better (P = .556, .920) and mean spherical equivalent refraction (P = .055, .335). At 1 month, 4 SMILE-treated eyes and 1 FS-LASIK-treated eye lost one or more lines of visual acuity (P = .214, chi-square test). At 3 months, 2 SMILE-treated eyes lost one or more lines of visual acuity, whereas all FS-LASIK-treated eyes had an unchanged or improved corrected distance visual acuity. Higher-order aberrations and spherical aberration were significantly lower in the SMILE group than the FS-LASIK group at 1 month (P = .007 and P < .001, respectively) and 3 months (P = .006 and P < .001) of follow-up. SMILE and FS-LASIK are safe, effective, and predictable surgical procedures to treat myopia. SMILE has a lower induction rate of higher-order aberrations and spherical aberration than the FS-LASIK procedure. Copyright 2014, SLACK Incorporated.
From Biophysics to Evolutionary Genetics: Statistical Aspects of Gene Regulation
NASA Astrophysics Data System (ADS)
Lässig, Michael
Genomic functions often cannot be understood at the level of single genes but require the study of gene networks. This systems biology credo is nearly commonplace by now. Evidence comes from the comparative analysis of entire genomes: current estimates put, for example, the number of human genes at around 22,000, hardly more than the 14,000 of the fruit fly, and not even an order of magnitude higher than the 6,000 of baker's yeast. The complexity and diversity of higher animals, therefore, cannot be explained in terms of their gene numbers. If, however, a biological function requires the concerted action of several genes, and conversely, a gene takes part in several functional contexts, an organism may be defined less by its individual genes but by their interactions. The emerging picture of the genome as a strongly interacting system with many degrees of freedom brings new challenges for experiment and theory, many of which are of a statistical nature. And indeed, this picture continues to make the subject attractive to a growing number of statistical physicists.
Restoration of MRI Data for Field Nonuniformities using High Order Neighborhood Statistics
Hadjidemetriou, Stathis; Studholme, Colin; Mueller, Susanne; Weiner, Michael; Schuff, Norbert
2007-01-01
MRI at high magnetic fields (>3.0 T) is complicated by strong inhomogeneous radio-frequency fields, sometimes termed the “bias field”. These lead to nonuniformity of image intensity, greatly complicating further analysis such as registration and segmentation. Existing methods for bias field correction are effective for 1.5 T or 3.0 T MRI, but are not completely satisfactory for higher field data. This paper develops an effective bias field correction for high field MRI based on the assumption that the nonuniformity is smoothly varying in space. The nonuniformity is quantified and unmixed using high order neighborhood statistics of intensity cooccurrences, computed within spherical windows of limited size over the entire image. The restoration is iterative and uses a novel, stable stopping criterion based on the scaled entropy of the cooccurrence statistics, that is, the Shannon entropy normalized to the effective dynamic range of the image; this quantity is a nonmonotonic function of the iterations. The algorithm restores whole head data, is robust to intense nonuniformities present in high field acquisitions, and is robust to variations in anatomy. It significantly improves bias field correction in comparison to N3 on phantom 1.5 T head data and high field 4 T human head data. PMID:18193095
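The entropy-of-cooccurrences idea can be sketched in a few lines. This is a generic illustration, not the paper's exact criterion: the bin count, the horizontal-neighbor cooccurrence, and the normalization constant are assumptions standing in for the spherical-window statistics and the "effective dynamic range" used by the authors:

```python
import numpy as np

def cooccurrence_entropy(image, levels=32):
    """Scaled Shannon entropy of the intensity cooccurrence histogram
    of horizontally adjacent pixels, normalized to [0, 1]."""
    img = np.asarray(image, dtype=float)
    edges = np.linspace(img.min(), img.max(), levels + 1)
    q = np.clip(np.digitize(img, edges) - 1, 0, levels - 1)
    # Joint histogram of (left pixel, right pixel) quantized intensities
    pairs = q[:, :-1] * levels + q[:, 1:]
    hist = np.bincount(pairs.ravel(), minlength=levels * levels).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -(p * np.log2(p)).sum()
    return entropy / np.log2(levels * levels)  # scale to [0, 1]
```

A bias-corrupted image spreads intensity mass over more cooccurrence bins, raising this entropy; an iterative correction can track the statistic over iterations and stop near its minimum, in the spirit of the stopping criterion described above.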
Student Solution Manual for Essential Mathematical Methods for the Physical Sciences
NASA Astrophysics Data System (ADS)
Riley, K. F.; Hobson, M. P.
2011-02-01
1. Matrices and vector spaces; 2. Vector calculus; 3. Line, surface and volume integrals; 4. Fourier series; 5. Integral transforms; 6. Higher-order ODEs; 7. Series solutions of ODEs; 8. Eigenfunction methods; 9. Special functions; 10. Partial differential equations; 11. Solution methods for PDEs; 12. Calculus of variations; 13. Integral equations; 14. Complex variables; 15. Applications of complex variables; 16. Probability; 17. Statistics.
Essential Mathematical Methods for the Physical Sciences
NASA Astrophysics Data System (ADS)
Riley, K. F.; Hobson, M. P.
2011-02-01
1. Matrices and vector spaces; 2. Vector calculus; 3. Line, surface and volume integrals; 4. Fourier series; 5. Integral transforms; 6. Higher-order ODEs; 7. Series solutions of ODEs; 8. Eigenfunction methods; 9. Special functions; 10. Partial differential equations; 11. Solution methods for PDEs; 12. Calculus of variations; 13. Integral equations; 14. Complex variables; 15. Applications of complex variables; 16. Probability; 17. Statistics; Appendices; Index.
Statistical analysis of excitation energies in actinide and rare-earth nuclei
NASA Astrophysics Data System (ADS)
Levon, A. I.; Magner, A. G.; Radionov, S. V.
2018-04-01
Statistical analysis of the distributions of collective states in actinide and rare-earth nuclei is performed in terms of the nearest-neighbor spacing distribution (NNSD). Several approximations to the NNSDs, such as the linear approach to the level-repulsion density and the Brody distribution, were applied in the analysis. We found an intermediate character of the experimental spectra between order and chaos for a number of rare-earth and actinide nuclei. The spectra are closer to the Wigner distribution for energies below 3 MeV, and to the Poisson distribution for data including higher excitation energies and higher spins. The latter result is in agreement with theoretical calculations. These features are confirmed by the cumulative distributions, where the Wigner contribution dominates at smaller spacings while the Poisson one is more important at larger spacings, and our linear approach improves the comparison with experimental data at all spacings.
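The two limiting spacing distributions invoked here have standard closed forms, and the cumulative comparison is easy to reproduce. A minimal sketch (the spectrum-unfolding step, which real analyses require before computing spacings, is omitted):

```python
import numpy as np

def wigner_nnsd(s):
    """Wigner surmise (GOE): P(s) = (pi/2) s exp(-pi s^2 / 4), chaotic limit."""
    return (np.pi / 2.0) * s * np.exp(-np.pi * s**2 / 4.0)

def poisson_nnsd(s):
    """Poisson: P(s) = exp(-s), limit of an uncorrelated (regular) spectrum."""
    return np.exp(-s)

def cumulative_nnsd(spacings, grid):
    """Empirical cumulative spacing distribution on a grid, for comparison
    with 1 - exp(-pi s^2 / 4) (Wigner) and 1 - exp(-s) (Poisson)."""
    s = np.sort(np.asarray(spacings, dtype=float))
    return np.searchsorted(s, grid, side="right") / len(s)
```

Note the qualitative difference the abstract exploits: the Wigner density vanishes at s = 0 (level repulsion), while the Poisson density is maximal there, so the empirical cumulative curve at small spacings discriminates the two regimes.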
Statistical performance evaluation of ECG transmission using wireless networks.
Shakhatreh, Walid; Gharaibeh, Khaled; Al-Zaben, Awad
2013-07-01
This paper presents a simulation of the transmission of biomedical signals (using the ECG signal as an example) over wireless networks. An investigation of the effects of channel impairments, including SNR, path-loss exponent, and path delay, and of network impairments such as packet loss probability, on the diagnosability of the received ECG signal is presented. The ECG signal is transmitted through a wireless network system composed of two communication protocols: an 802.15.4 ZigBee protocol and an 802.11b protocol. The performance of the transmission is evaluated using higher order statistics parameters such as kurtosis and negentropy, in addition to common techniques such as the PRD, RMS, and cross-correlation.
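The higher-order statistics named here have standard sample estimators. A sketch, with the caveat that the moment-based negentropy approximation below is Hyvärinen's generic formula, used as a stand-in for whatever exact estimator the study applied:

```python
import numpy as np

def kurtosis(x):
    """Excess kurtosis: fourth standardized moment minus 3 (0 for a Gaussian)."""
    z = (np.asarray(x, dtype=float) - np.mean(x)) / np.std(x)
    return np.mean(z**4) - 3.0

def negentropy_approx(x):
    """Moment-based negentropy approximation (Hyvarinen):
    J(x) ~ E[z^3]^2 / 12 + kurt(z)^2 / 48 for standardized z.
    Zero for a Gaussian; grows as the signal becomes more non-Gaussian."""
    z = (np.asarray(x, dtype=float) - np.mean(x)) / np.std(x)
    return np.mean(z**3)**2 / 12.0 + kurtosis(x)**2 / 48.0
```

Channel and network impairments tend to Gaussianize or distort the received waveform, so comparing these statistics before and after transmission gives a distortion-sensitive quality measure alongside PRD and cross-correlation.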
A second order thermodynamic perturbation theory for hydrogen bond cooperativity in water
NASA Astrophysics Data System (ADS)
Marshall, Bennett D.
2017-05-01
It has been extensively demonstrated through first principles quantum mechanics calculations that water exhibits strong hydrogen bond cooperativity. Equations of state developed from statistical mechanics typically assume pairwise additivity, meaning they cannot account for these 3-body and higher cooperative effects. In this paper, we extend a second order thermodynamic perturbation theory to correct for hydrogen bond cooperativity in four-site water. We demonstrate that the theory predicts hydrogen bonding structure consistent with spectroscopy, neutron diffraction, and molecular simulation data. Finally, we implement the approach into a general equation of state for water.
Algorithm for computing descriptive statistics for very large data sets and the exa-scale era
NASA Astrophysics Data System (ADS)
Beekman, Izaak
2017-11-01
An algorithm for Single-point, Parallel, Online, Converging Statistics (SPOCS) is presented. It is suited for in situ analysis that traditionally would be relegated to post-processing, and can be used to monitor statistical convergence and estimate the error/residual in the computed quantity, which is also useful for uncertainty quantification. Today, data may be generated at an overwhelming rate by numerical simulations and proliferating sensing apparatuses in experiments and engineering applications. Monitoring descriptive statistics in real time lets costly computations and experiments be gracefully aborted if an error has occurred, and monitoring the level of statistical convergence allows them to be run for the shortest amount of time required to obtain good results. This algorithm extends work by Pébay (Sandia Report SAND2008-6212). Pébay's algorithms are recast into a converging delta formulation with provably favorable properties. The mean, variance, covariances, and arbitrary higher order statistical moments are computed in one pass. The algorithm is tested using Sillero, Jiménez, & Moser's (2013, 2014) publicly available UPM high Reynolds number turbulent boundary layer data set, demonstrating numerical robustness, efficiency, and other favorable properties.
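The single-pass moment updates that SPOCS builds on look like the following sketch of Pébay-style update formulas. This is not the SPOCS code itself; the converging-delta reformulation and the parallel (pairwise-merge) update are omitted:

```python
class OnlineMoments:
    """One-pass running estimates of mean, variance, skewness, and kurtosis
    using incremental central-moment updates (after Pebay, SAND2008-6212)."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.M2 = 0.0  # sum of squared deviations
        self.M3 = 0.0  # sum of cubed deviations
        self.M4 = 0.0  # sum of fourth-power deviations

    def push(self, x):
        n1 = self.n
        self.n += 1
        delta = x - self.mean
        d_n = delta / self.n
        d_n2 = d_n * d_n
        term1 = delta * d_n * n1
        self.mean += d_n
        # Update higher moments before M2, since they use the old M2/M3
        self.M4 += (term1 * d_n2 * (self.n**2 - 3 * self.n + 3)
                    + 6.0 * d_n2 * self.M2 - 4.0 * d_n * self.M3)
        self.M3 += term1 * d_n * (self.n - 2) - 3.0 * d_n * self.M2
        self.M2 += term1

    @property
    def variance(self):  # population variance
        return self.M2 / self.n if self.n else 0.0

    @property
    def skewness(self):
        return (self.n ** 0.5) * self.M3 / self.M2 ** 1.5

    @property
    def kurtosis(self):  # excess kurtosis
        return self.n * self.M4 / (self.M2 * self.M2) - 3.0
```

Because each `push` touches only O(1) state, the statistics can be monitored in situ at every sample, which is exactly what makes graceful early abort and convergence monitoring cheap.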
High-Level Prediction Signals in a Low-Level Area of the Macaque Face-Processing Hierarchy.
Schwiedrzik, Caspar M; Freiwald, Winrich A
2017-09-27
Theories like predictive coding propose that lower-order brain areas compare their inputs to predictions derived from higher-order representations and signal their deviation as a prediction error. Here, we investigate whether the macaque face-processing system, a three-level hierarchy in the ventral stream, employs such a coding strategy. We show that after statistical learning of specific face sequences, the lower-level face area ML computes the deviation of actual from predicted stimuli. But these signals do not reflect the tuning characteristic of ML. Rather, they exhibit identity specificity and view invariance, the tuning properties of higher-level face areas AL and AM. Thus, learning appears to endow lower-level areas with the capability to test predictions at a higher level of abstraction than what is afforded by the feedforward sweep. These results provide evidence for computational architectures like predictive coding and suggest a new quality of functional organization of information-processing hierarchies beyond pure feedforward schemes. Copyright © 2017 Elsevier Inc. All rights reserved.
Higher order aberrations and relative risk of symptoms after LASIK.
Sharma, Munish; Wachler, Brian S Boxer; Chan, Colin C K
2007-03-01
To understand what level of higher order aberrations increases the relative risk of visual symptoms in patients after myopic LASIK. This study was a retrospective comparative analysis of 103 eyes of 62 patients divided in two groups, matched for age, gender, pupil size, and spherical equivalent refraction. The symptomatic group comprised 36 eyes of 24 patients after conventional LASIK with different laser systems evaluated in our referral clinic and the asymptomatic control group consisted of 67 eyes of 38 patients following LADARVision CustomCornea wavefront LASIK. Comparative analysis was performed for uncorrected visual acuity (UCVA), best spectacle-corrected visual acuity (BSCVA), contrast sensitivity, refractive cylinder, and higher order aberrations. Wavefront analysis was performed with the LADARWave aberrometer at 6.5-mm analysis for all eyes. Blurring of vision was the most common symptom (41.6%) followed by double image (19.4%), halo (16.7%), and fluctuation in vision (13.9%) in symptomatic patients. A statistically significant difference was noted in UCVA (P = .001), BSCVA (P = .001), contrast sensitivity (P < .001), and manifest cylinder (P = .001) in the two groups. The percentage difference between the symptomatic and control group mean root-mean-square (RMS) values ranged from 157% to 206%, i.e., 1.57 to 2.06 times greater. Patients with visual symptoms after LASIK have significantly lower visual acuity and contrast sensitivity and higher mean RMS values for higher order aberrations than patients without symptoms. Root-mean-square values greater than two times those of the normal after-LASIK population for any given laser platform may increase the relative risk of symptoms.
Parametric Study of Decay of Homogeneous Isotropic Turbulence Using Large Eddy Simulation
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Rumsey, Christopher L.; Rubinstein, Robert; Balakumar, Ponnampalam; Zang, Thomas A.
2012-01-01
Numerical simulations of decaying homogeneous isotropic turbulence are performed with both low-order and high-order spatial discretization schemes. The turbulent Mach and Reynolds numbers for the simulations are 0.2 and 250, respectively. For the low-order schemes we use either second-order central or third-order upwind biased differencing. For higher order approximations we apply weighted essentially non-oscillatory (WENO) schemes, both with linear and nonlinear weights. There are two objectives in this preliminary effort to investigate possible schemes for large eddy simulation (LES). One is to explore the capability of a widely used low-order computational fluid dynamics (CFD) code to perform LES computations. The other is to determine the effect of higher order accuracy (fifth, seventh, and ninth order) achieved with high-order upwind biased WENO-based schemes. Turbulence statistics, such as kinetic energy, dissipation, and skewness, along with the energy spectra from simulations of the decaying turbulence problem are used to assess and compare the various numerical schemes. In addition, results from the best performing schemes are compared with those from a spectral scheme. The effects of grid density, ranging from 32 cubed to 192 cubed, on the computations are also examined. The fifth-order WENO-based scheme is found to be too dissipative, especially on the coarser grids. However, with the seventh-order and ninth-order WENO-based schemes we observe a significant improvement in accuracy relative to the lower order LES schemes, as revealed by the computed peak in the energy dissipation and by the energy spectrum.
Is the 'superhot' hard X-ray component in solar flares consistent with a thermal source?
NASA Technical Reports Server (NTRS)
Emslie, A. Gordon; Coffey, Victoria Newman; Schwartz, Richard A.
1989-01-01
It has been shown by Brown and Emslie (1988) that any optically thin thermal bremsstrahlung source must emit an energy spectrum L(epsilon)(keV/s per keV) which has the property that higher derivatives alternate in sign. In this short note, this test is applied to the 'superhot' component discussed by Lin et al. (1981) in order to determine whether a strictly thermal interpretation of this component is valid. All statistically significant higher derivatives do indeed have the correct sign; this strengthens the identification of this component as due to a thermal source.
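The sign-alternation test described can be sketched as a finite-difference check. This is an illustrative implementation under stated assumptions (uniform energy grid, noise-free spectrum), not the authors' procedure, which must account for statistical significance of each derivative:

```python
import numpy as np

def derivatives_alternate(spectrum, max_order=3):
    """Check the Brown & Emslie (1988) property for an optically thin thermal
    bremsstrahlung spectrum L(eps): L > 0, L' < 0, L'' > 0, L''' < 0, ...
    Uses repeated finite differences on a uniformly spaced energy grid."""
    d = np.asarray(spectrum, dtype=float)
    if not np.all(d > 0):
        return False
    for order in range(1, max_order + 1):
        d = np.diff(d)  # one more finite difference
        sign = -1.0 if order % 2 else 1.0
        if not np.all(sign * d > 0):
            return False
    return True
```

An exponential spectrum (thermal-like) passes the test, whereas a spectrum with a bump (e.g., a Gaussian feature) fails at the first derivative, mirroring how the test discriminates thermal from nonthermal interpretations.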
Sinharay, Sandip; Jensen, Jens Ledet
2018-06-27
In educational and psychological measurement, researchers and/or practitioners are often interested in examining whether the ability of an examinee is the same over two sets of items. Such problems can arise in measurement of change, detection of cheating on unproctored tests, erasure analysis, detection of item preknowledge, etc. Traditional frequentist approaches that are used in such problems include the Wald test, the likelihood ratio test, and the score test (e.g., Fischer, Appl Psychol Meas 27:3-26, 2003; Finkelman, Weiss, & Kim-Kang, Appl Psychol Meas 34:238-254, 2010; Glas & Dagohoy, Psychometrika 72:159-180, 2007; Guo & Drasgow, Int J Sel Assess 18:351-364, 2010; Klauer & Rettig, Br J Math Stat Psychol 43:193-206, 1990; Sinharay, J Educ Behav Stat 42:46-68, 2017). This paper shows that approaches based on higher-order asymptotics (e.g., Barndorff-Nielsen & Cox, Inference and asymptotics. Springer, London, 1994; Ghosh, Higher order asymptotics. Institute of Mathematical Statistics, Hayward, 1994) can also be used to test for the equality of the examinee ability over two sets of items. The modified signed likelihood ratio test (e.g., Barndorff-Nielsen, Biometrika 73:307-322, 1986) and the Lugannani-Rice approximation (Lugannani & Rice, Adv Appl Prob 12:475-490, 1980), both of which are based on higher-order asymptotics, are shown to provide some improvement over the traditional frequentist approaches in three simulations. Two real data examples are also provided.
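The modified signed likelihood ratio statistic referenced above has a compact generic form. A sketch only: the adjustment term u is model-specific and must be derived separately for each IRT model, which is the substantive work this snippet omits:

```python
import math

def rstar_pvalue(r, u):
    """Barndorff-Nielsen's modified signed likelihood ratio root,
    r* = r + log(u / r) / r, referred to the standard normal.
    r is the signed root of the likelihood ratio statistic and u is the
    model-specific higher-order adjustment; returns a one-sided p-value."""
    r_star = r + math.log(u / r) / r
    # Standard normal survival function via the complementary error function
    return 0.5 * math.erfc(r_star / math.sqrt(2.0))
```

When u = r the correction vanishes and the p-value reduces to that of the ordinary signed likelihood ratio test; the higher-order accuracy comes precisely from u departing from r in small samples.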
Corneal Aberrations in Former Preterm Infants: Results From The Wiesbaden Prematurity Study.
Fieß, Achim; Schuster, Alexander K; Kölb-Keerl, Ruth; Knuf, Markus; Kirchhof, Bernd; Muether, Philipp S; Bauer, Jacqueline
2017-12-01
To compare corneal aberrations in former preterm infants with those of full-term infants. A prospective cross-sectional study was carried out measuring corneal shape with Scheimpflug imaging in former preterm infants of gestational age (GA) ≤32 weeks and full-term infants with GA ≥37 weeks, now aged 4 to 10 years. The main outcome measures were corneal aberrations including astigmatism (Zernike: Z2-2; Z22), coma (Z3-1; Z31), trefoil (Z3-3; Z33), spherical aberration (Z40), and root-mean-square of higher-order aberrations (RMS HOA). Multivariable analysis was performed to assess independent associations of gestational age group and of retinopathy of prematurity (ROP) occurrence with corneal aberrations, adjusting for sex and age at examination. A total of 259 former full-term and 226 preterm infants with a mean age of 7.2 ± 2.0 years were included in this study. Statistical analysis revealed an association of extreme prematurity (GA ≤28 weeks) with higher-order and lower-order aberrations of the total cornea. Vertical coma was higher in extreme prematurity (P < 0.001), due to the shape of the anterior corneal surface, while there was no association with trefoil and spherical aberration. ROP was not associated with higher-order aberrations when adjusted for gestational age group. This study demonstrated that specific corneal aberrations were associated with extreme prematurity rather than with ROP occurrence.
Ocular higher-order aberrations in a school children population.
Papamastorakis, George; Panagopoulou, Sophia; Tsilimbaris, Militadis K; Pallikaris, Ioannis G; Plainis, Sotiris
2015-01-01
The primary objective of the study was to explore the statistics of ocular higher-order aberrations in a population of primary and secondary school children. A sample of 557 children aged 10-15 years were selected from two primary and two secondary schools in Heraklion, Greece. Children were classified by age in three subgroups: group I (10.7 ± 0.5 years), group II (12.4 ± 0.5 years) and group III (14.5 ± 0.5 years). Ocular aberrations were measured using a wavefront aberrometer (COAS, AMO Wavefront Sciences, USA) at mesopic light levels (illuminance at cornea was 4 lux). Wavefront analysis was achieved for a 5 mm pupil. Statistical analysis was carried out for the right eye only. The average coefficient of most high-order aberrations did not differ from zero with the exception of vertical (0.076 μm) and horizontal (0.018 μm) coma, oblique trefoil (-0.055 μm) and spherical aberration (0.018 μm). The most prominent change between the three groups was observed for the spherical aberration, which increased from 0.007 μm (SE 0.005) in group I to 0.011 μm (SE 0.004) in group II and 0.030 μm (SE 0.004) in group III. Significant differences were also found for the oblique astigmatism and the third-order coma aberrations. Differences in the low levels of ocular spherical aberration in young children possibly reflect differences in lenticular spherical aberration and relate to the gradient refractive index of the lens. The evaluation of spherical aberration at certain stages of eye growth may help to better understand the underlying mechanisms of myopia development. Copyright © 2014 Spanish General Council of Optometry. Published by Elsevier Espana. All rights reserved.
Sample Reuse in Statistical Remodeling.
1987-08-01
as the jackknife and bootstrap, is an expansion of the functional, T(Fn), or of its distribution function or both. Frangos and Schucany (1987a) used...accelerated bootstrap. In the same report Frangos and Schucany demonstrated the small sample superiority of that approach over the proposals that take...higher order terms of an Edgeworth expansion into account. In a second report Frangos and Schucany (1987b) examined the small sample performance of
Prototyping with Data Dictionaries for Requirements Analysis.
1985-03-01
statistical packages and software for screen layout. These items work at a higher level than another category of prototyping tool, program generators... Program generators are software packages which, when given specifications, produce source listings, usually in a high order language such as COBOL...with users and this will not happen if he must stop to develop a detailed program. [Ref. 241] Hardware as well as software should be considered in
Hierarchy of N-point functions in the ΛCDM and ReBEL cosmologies
NASA Astrophysics Data System (ADS)
Hellwing, Wojciech A.; Juszkiewicz, Roman; van de Weygaert, Rien
2010-11-01
In this work we investigate higher-order statistics for the ΛCDM and ReBEL scalar-interacting dark matter models by analyzing 180 h⁻¹ Mpc dark matter N-body simulation ensembles. The N-point correlation functions and the related hierarchical amplitudes, such as skewness and kurtosis, are computed using the counts-in-cells method. Our studies demonstrate that the hierarchical amplitudes Sn of the scalar-interacting dark matter model significantly deviate from the values in the ΛCDM cosmology on scales comparable to and smaller than the screening length rs of a given scalar-interacting model. The corresponding additional forces that enhance the total attractive force exerted on dark matter particles at galaxy scales lower the values of the hierarchical amplitudes Sn. We conclude that hypothetical additional exotic interactions in the dark matter sector should leave detectable markers in the higher-order correlation statistics of the density field. We focused in detail on the redshift evolution of the dark matter field’s skewness and kurtosis. From this investigation we find that the deviations from the canonical ΛCDM model introduced by the presence of the “fifth” force attain a maximum value at redshifts 0.5
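The counts-in-cells estimators behind the hierarchical amplitudes are straightforward. A sketch under simplifying assumptions: shot-noise (discreteness) corrections and finite-volume effects, which a real analysis of N-body counts requires, are omitted:

```python
import numpy as np

def hierarchical_amplitudes(counts):
    """Counts-in-cells estimates of the hierarchical amplitudes
    S3 = <d^3> / <d^2>^2 and S4 = (<d^4> - 3 <d^2>^2) / <d^2>^3,
    where d is the density contrast in each cell."""
    counts = np.asarray(counts, dtype=float)
    delta = counts / counts.mean() - 1.0  # density contrast per cell
    m2 = np.mean(delta**2)
    m3 = np.mean(delta**3)
    m4 = np.mean(delta**4)          # m4 - 3 m2^2 is the connected moment
    S3 = m3 / m2**2
    S4 = (m4 - 3.0 * m2**2) / m2**3
    return S3, S4
```

Comparing S3 and S4 measured in ΛCDM and ReBEL ensembles cell size by cell size is how scale-dependent deviations like those described above would show up.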
Advanced statistical energy analysis
NASA Astrophysics Data System (ADS)
Heron, K. H.
1994-09-01
A high-frequency theory (advanced statistical energy analysis (ASEA)) is developed which takes account of the mechanism of tunnelling and uses a ray theory approach to track the power flowing around a plate or a beam network, and then uses statistical energy analysis (SEA) to take care of any residual power. ASEA divides the energy of each sub-system into energy that is freely available for transfer to other sub-systems and energy that is fixed within the sub-system. ASEA can be interpreted as a series of mathematical models, the first of which is identical to standard SEA; subsequent higher order models converge on an accurate prediction. Using a structural assembly of six rods as an example, ASEA is shown to converge onto the exact results while SEA is shown to overpredict by up to 60 dB.
Frame synchronization methods based on channel symbol measurements
NASA Technical Reports Server (NTRS)
Dolinar, S.; Cheung, K.-M.
1989-01-01
The current DSN frame synchronization procedure is based on monitoring the decoded bit stream for the appearance of a sync marker sequence that is transmitted once every data frame. The possibility of obtaining frame synchronization by processing the raw received channel symbols rather than the decoded bits is explored. Performance results are derived for three channel symbol sync methods, and these are compared with results for decoded bit sync methods reported elsewhere. It is shown that each class of methods has advantages or disadvantages under different assumptions on the frame length, the global acquisition strategy, and the desired measure of acquisition timeliness. It is shown that the sync statistics based on decoded bits are superior to the statistics based on channel symbols, if the desired operating region utilizes a probability of miss many orders of magnitude higher than the probability of false alarm. This operating point is applicable for very large frame lengths and minimal frame-to-frame verification strategy. On the other hand, the statistics based on channel symbols are superior if the desired operating point has a miss probability only a few orders of magnitude greater than the false alarm probability. This happens for small frames or when frame-to-frame verifications are required.
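Both classes of sync statistics reduce, at their core, to a sliding correlation against the known marker. A minimal soft-symbol sketch; the actual DSN acquisition statistics, thresholds, and frame-to-frame verification logic are considerably more elaborate:

```python
import numpy as np

def marker_correlation(received, marker):
    """Slide a +/-1 sync marker over the received sequence and return the
    correlation at each offset. With soft channel symbols the statistic is
    real-valued; with hard decoded bits it reduces to an agreement count."""
    marker = np.asarray(marker, dtype=float)
    received = np.asarray(received, dtype=float)
    n, m = len(received), len(marker)
    return np.array([received[i:i + m] @ marker for i in range(n - m + 1)])
```

Declaring sync at the offset that maximizes this statistic, and comparing the peak against a threshold, is what sets the trade-off between miss probability and false-alarm probability discussed above.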
Simultaneous ocular and muscle artifact removal from EEG data by exploiting diverse statistics.
Chen, Xun; Liu, Aiping; Chen, Qiang; Liu, Yu; Zou, Liang; McKeown, Martin J
2017-09-01
Electroencephalography (EEG) recordings are frequently contaminated by both ocular and muscle artifacts. These are normally dealt with separately, by employing blind source separation (BSS) techniques relying on either second-order or higher-order statistics (SOS and HOS, respectively). When HOS-based methods are used, it is usually under the assumption that artifacts are statistically independent of the EEG. When SOS-based methods are used, it is assumed that artifacts have autocorrelation characteristics distinct from the EEG. In reality, ocular and muscle artifacts neither follow the assumption of strict temporal independence from the EEG nor have completely unique autocorrelation characteristics, suggesting that exploiting HOS or SOS alone may be insufficient to remove these artifacts. Here we employ a novel BSS technique, independent vector analysis (IVA), to exploit HOS and SOS simultaneously in removing ocular and muscle artifacts. Numerical simulations and application to real EEG recordings were used to explore the utility of the IVA approach. IVA was superior in isolating both ocular and muscle artifacts, especially for raw EEG data with a low signal-to-noise ratio, and also integrated the usually separate SOS and HOS steps into a single unified step. Copyright © 2017 Elsevier Ltd. All rights reserved.
Analysis of wheezes using wavelet higher order spectral features.
Taplidou, Styliani A; Hadjileontiadis, Leontios J
2010-07-01
Wheezes are musical breath sounds, which usually imply an existing pulmonary obstruction, such as asthma and chronic obstructive pulmonary disease (COPD). Although many studies have addressed the problem of wheeze detection, few scientific works have focused on the analysis of wheeze characteristics, and in particular, their time-varying nonlinear characteristics. In this study, an effort is made to reveal and statistically analyze the nonlinear characteristics of wheezes and their evolution over time, as they are reflected in the quadratic phase coupling of their harmonics. To this end, the continuous wavelet transform (CWT) is used in combination with third-order spectra to define the analysis domain, where the nonlinear interactions of the harmonics of wheezes and their time variations are revealed by incorporating the instantaneous wavelet bispectrum and bicoherence, which provide the instantaneous biamplitude and biphase curves. Based on this nonlinear information pool, a set of 23 features is proposed for the nonlinear analysis of wheezes. Two complementary perspectives, general and detailed, related to average performance and to localities, respectively, were used in constructing the feature set, in order to embed both the trends and the local behaviors seen in the nonlinear interaction of the harmonic elements of wheezes over time. The proposed feature set was evaluated on a dataset of wheezes, acquired from adult patients with diagnosed asthma and COPD from a lung sound database. The statistical evaluation of the feature set revealed discrimination ability between the two pathologies for all data subgroupings.
In particular, when the total breathing cycle was examined, all but one of the 23 features showed statistically significant differences between the COPD and asthma pathologies, whereas for the subgroupings of inspiratory and expiratory phases, 18 out of 23 and 22 out of 23 features, respectively, exhibited discrimination power. This paves the way for the use of wavelet higher order spectral features as an input vector to an efficient classifier, which would integrate the intrinsic characteristics of wheezes within computerized diagnostic tools toward their more efficient evaluation.
Morphological representation of order-statistics filters.
Charif-Chefchaouni, M; Schonfeld, D
1995-01-01
We propose a comprehensive theory for the morphological bounds on order-statistics filters (and their repeated iterations). Conditions are derived for morphological openings and closings to serve as bounds (lower and upper, respectively) on order-statistics filters (and their repeated iterations). Under various assumptions, morphological open-closings and close-openings are also shown to serve as (tighter) bounds (lower and upper, respectively) on iterations of order-statistics filters. Simulations of the application of the results presented to image restoration are finally provided.
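The always-valid end of this bound hierarchy is easy to demonstrate numerically: a median (order-statistics) filter is pointwise bounded below by the flat erosion (windowed minimum) and above by the flat dilation (windowed maximum) over the same window. The sketch below, assuming `scipy.ndimage` is available, checks that weaker pair; the paper's tighter opening/closing bounds hold only under its stated conditions and are not tested here.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
image = rng.random((32, 32))

# Order-statistics filter (the median) and flat morphological operators
# over the same 3x3 window, with matching boundary handling (reflect).
med = ndimage.median_filter(image, size=3)
ero = ndimage.grey_erosion(image, size=(3, 3))   # windowed minimum
dil = ndimage.grey_dilation(image, size=(3, 3))  # windowed maximum

# Every order statistic of a window lies between that window's min and max.
bounded = bool(np.all(ero <= med) and np.all(med <= dil))
```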
Liu, Fang-fang; Zhai, Zhen-guo; Yang, Yuan-hua; Wang, Jun; Wang, Chen
2013-06-25
To evaluate the dynamic changes of inflammation-related indices in blood during the development of venous thromboembolism (VTE) and the association between these indices and VTE. A total of 95 hospitalized VTE patients (41 males, 54 females) were recruited from the Department of Respiratory and Critical Care Medicine, Beijing Chaoyang Hospital from January 2010 to December 2010. Inflammation-related indices, including white blood cell (WBC), neutrophil (NE), fibrinogen (FBG), C-reactive protein (CRP) and erythrocyte sedimentation rate (ESR), were compared between VTE patients and normal ranges, and the dynamic changes of these indices during the development of VTE were evaluated. Patients were then divided into subgroups according to disease stage, gender, age, VTE type, body mass index, smoking status and clinical manifestations, and statistical analyses were performed to elucidate the associations between these indices and VTE. The levels of NE and CRP in VTE patients (0.72, 15.0 mg/L) and ESR in male VTE patients (20.0 mm/1 h) were elevated compared with normal ranges, whereas WBC (male 7.27×10(9)/L, female 8.67×10(9)/L), FBG (male 3621 mg/L, female 3201 mg/L) and female ESR (19.5 mm/1 h) were within the normal ranges. The level of CRP was higher in acute (mean rank order value: 49.72) and sub-acute (mean rank order value: 44.80) VTE patients than in chronic VTE patients (mean rank order value: 30.25).
The levels of FBG, CRP and ESR in patients ≥ 50 years old were increased versus those <50 years old (mean rank order values 48.83 vs 34.53, 44.32 vs 28.90 and 45.95 vs 27.84, respectively); patients with body mass index (BMI) <25 kg/m(2) had a higher WBC level than those with BMI ≥ 25 kg/m(2) (mean rank order values 52.96 vs 36.46); smoking VTE patients had higher FBG and CRP levels than non-smoking VTE patients (mean rank order values 57.75 vs 42.69 and 53.92 vs 37.75, respectively); and compared with those without clinical manifestations of peripheral pulmonary artery involvement, patients with such manifestations had higher levels of FBG, CRP and ESR (mean rank order values 59.24 vs 37.39, 52.68 vs 33.19 and 50.08 vs 36.55, respectively). All of the above differences were statistically significant (all P < 0.05). Some inflammation-related indices frequently used in clinical settings become elevated in VTE patients. Several of these indices show higher levels in the acute and sub-acute stages of VTE, and in older, non-obese, smoking and peripheral-pulmonary-artery-involved VTE patients.
Statistics of voids in hierarchical universes
NASA Technical Reports Server (NTRS)
Fry, J. N.
1986-01-01
As one alternative to the N-point galaxy correlation function statistics, the distribution of holes, or the probability that a volume of given size and shape is empty of galaxies, can be considered. The probability of voids resulting from a variety of hierarchical patterns of clustering is considered, and these are compared with the results of numerical simulations and with observations. A scaling relation required by the hierarchical pattern of higher order correlation functions is seen to be obeyed in the simulations, and the numerical results show a clear difference between neutrino models and cold-particle models; voids are more likely in neutrino universes. Observational data cannot yet distinguish between the models, but are close to being able to do so.
Wave propagation in a random medium
NASA Technical Reports Server (NTRS)
Lee, R. W.; Harp, J. C.
1969-01-01
A simple technique is used to derive statistical characterizations of the perturbations imposed upon a wave (plane, spherical or beamed) propagating through a random medium. The method is essentially physical rather than mathematical, and is probably equivalent to the Rytov method. The limitations of the method are discussed in some detail; in general they are restrictive only for optical paths longer than a few hundred meters, and for paths at the lower microwave frequencies. Situations treated include arbitrary path geometries, finite transmitting and receiving apertures, and anisotropic media. Results include, in addition to the usual statistical quantities, time-lagged functions, mixed functions involving amplitude and phase fluctuations, angle-of-arrival covariances, frequency covariances, and other higher-order quantities.
Persistence and breakdown of strand symmetry in the human genome.
Zhang, Shang-Hong
2015-04-07
Afreixo et al. (2013; "The breakdown of the word symmetry in the human genome", J. Theor. Biol. 335, 153-159) analyzed word symmetry (strand symmetry, or the second parity rule) in the human genome. They concluded that strand symmetry holds for oligonucleotides up to 6 nt and is no longer statistically significant for oligonucleotides of higher orders. However, although they provided some new results on the issue, their interpretation may not be fully justified, and their conclusion needs to be further evaluated. Further analysis of their results, especially those of the equivalence tests and the word symmetry distance, shows that strand symmetry would persist for higher-order oligonucleotides up to 9 nt in the human genome, at least for its overall frequency framework (oligonucleotide frequency pattern). Copyright © 2015 Elsevier Ltd. All rights reserved.
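The quantity this debate turns on can be sketched directly: compare each k-mer's frequency on one strand with the frequency of its reverse complement on the same strand. The distance below is a generic illustration of the idea, not the authors' exact measure.

```python
from collections import Counter
from itertools import product

def revcomp(s):
    """Reverse complement of a DNA word."""
    comp = {"A": "T", "T": "A", "C": "G", "G": "C"}
    return "".join(comp[b] for b in reversed(s))

def symmetry_distance(seq, k):
    """Root-sum-square gap between each k-mer's frequency and that of its
    reverse complement on the same strand (0 = perfect strand symmetry)."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    d = 0.0
    for w in map("".join, product("ACGT", repeat=k)):
        d += (counts[w] / total - counts[revcomp(w)] / total) ** 2
    return d ** 0.5
```

A strand-symmetric sequence such as `"ATAT..."` gives a distance of zero at k = 1, while a homopolymer run gives the maximal gap between A and T frequencies.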
Statistical Analysis of Factors Affecting Child Mortality in Pakistan.
Ahmed, Zoya; Kamal, Asifa; Kamal, Asma
2016-06-01
Child mortality is a composite indicator reflecting the economic, social, environmental, and healthcare-service delivery situation in a country. Globally, Pakistan has the third highest burden of fetal, maternal, and child mortality. Factors affecting child mortality in Pakistan are investigated using binary logistic regression analysis. Region, mother's education, birth order, preceding birth interval (the period between the previous child's birth and the index child's birth), size of child at birth, breastfeeding, and family size were found to be significantly associated with child mortality in Pakistan. Child mortality decreased as the mother's education level, preceding birth interval, size of child at birth, and family size increased. Child mortality was significantly higher in Balochistan than in other regions, and was low for low birth orders. Child survival was significantly higher for children who were breastfed than for those who were not.
Amezcua, Carlos A; Szabo, Christina M
2013-06-01
In this work, we applied nuclear magnetic resonance (NMR) spectroscopy to rapidly assess higher order structure (HOS) comparability in protein samples. Using a variation of the NMR fingerprinting approach described by Panjwani et al. [2010. J Pharm Sci 99(8):3334-3342], three nonglycosylated proteins spanning a molecular weight range of 6.5-67 kDa were analyzed. A simple statistical method termed easy comparability of HOS by NMR (ECHOS-NMR) was developed. In this method, HOS similarity between two samples is measured via the correlation coefficient derived from linear regression analysis of binned NMR spectra. Applications of this method include HOS comparability assessment during new product development, manufacturing process changes, supplier changes, next-generation products, and the development of biosimilars to name just a few. We foresee ECHOS-NMR becoming a routine technique applied to comparability exercises used to complement data from other analytical techniques. Copyright © 2013 Wiley Periodicals, Inc.
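The core of ECHOS-NMR, as the abstract describes it, is a correlation coefficient computed over binned spectra. A minimal sketch on synthetic 1-D "spectra", assuming equal-width bins that divide the record evenly (real binning would use the ppm axis):

```python
import numpy as np

def echos_similarity(spec_a, spec_b, n_bins=64):
    """Integrate two 1-D spectra over equal-width bins and return the
    linear correlation coefficient of the binned intensities."""
    a = spec_a.reshape(n_bins, -1).sum(axis=1)
    b = spec_b.reshape(n_bins, -1).sum(axis=1)
    return float(np.corrcoef(a, b)[0, 1])

rng = np.random.default_rng(1)
ref = np.abs(rng.standard_normal(1024))        # stand-in reference spectrum
same = ref + 0.05 * rng.standard_normal(1024)  # comparable sample
diff = np.abs(rng.standard_normal(1024))       # unrelated sample
```

A comparable sample scores near 1, an unrelated one much lower; the acceptance threshold on the coefficient is a product-specific choice, not something this sketch fixes.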
NASA Astrophysics Data System (ADS)
Donges, J. F.; Schleussner, C.-F.; Siegmund, J. F.; Donner, R. V.
2016-05-01
Studying event time series is a powerful approach for analyzing the dynamics of complex dynamical systems in many fields of science. In this paper, we describe the method of event coincidence analysis as a framework for quantifying the strength, directionality and time lag of statistical interrelationships between event series. Event coincidence analysis allows one to formulate and test null hypotheses on the origin of the observed interrelationships, including tests based on Poisson processes or, more generally, stochastic point processes with a prescribed inter-event time distribution and other higher-order properties. Applying the framework to country-level observational data yields evidence that flood events have acted as triggers of epidemic outbreaks globally since the 1950s. Facing projected future changes in the statistics of climatic extreme events, statistical techniques such as event coincidence analysis will be relevant for investigating the impacts of anthropogenic climate change on human societies and ecosystems worldwide.
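One of the basic ECA quantities can be written in a few lines: the fraction of events in one series followed by at least one event in the other within a tolerance window ΔT. Conventions for direction and window placement vary across the literature, so treat this as an illustrative variant rather than the paper's exact definition.

```python
import numpy as np

def coincidence_rate(a_times, b_times, delta_t):
    """Fraction of events in series A followed by at least one event in
    series B within the window (t, t + delta_t]."""
    b = np.asarray(b_times, dtype=float)
    return float(np.mean([np.any((b > t) & (b <= t + delta_t))
                          for t in a_times]))

# Illustrative event times: only the A-event at t=1 has a B-event within 2.
rate = coincidence_rate([1.0, 5.0, 9.0], [2.0, 20.0], delta_t=2.0)
```

Under a Poisson null with rate λ for series B, the expected rate is 1 - exp(-λΔT), which is what significance tests compare against.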
NASA Astrophysics Data System (ADS)
Huang, C.; Launianen, S.; Gronholm, T.; Katul, G. G.
2013-12-01
Biological aerosol particles are now receiving significant attention given their role in air quality, climate change, and the spreading of allergens and other communicable diseases. A major uncertainty in their quantification is associated with the complex transport processes governing their generation and removal inside canopies. It has been known for some time that the commonly used first-order closure linking mean concentration gradients with turbulent fluxes is problematic; the presence of mean counter-gradient momentum transport in an open trunk space exemplifies this failure. Here, instead of employing K-theory, a size-resolved second-order multilayer model for dry particle deposition is proposed. The starting point of the proposed model is a particle flux budget in which the production, transport, and dissipation terms are modeled. Because these terms require higher-order velocity statistics, this flux budget is coupled with a conventional second-order closure scheme for the flow field within the canopy sub-layer. The failure of conventional K-theory for particle fluxes is explicitly linked to the onset of a mean counter-gradient or zero-gradient flow attributed to a significant particle flux transport term. The relative importance of these terms in the particle flux budget and their effects on the foliage particle collection terms are also discussed for each particle size. The proposed model is evaluated against published multi-level measurements of size-resolved particle fluxes and mean concentration profiles collected within and above a tall Scots pine forest in Hyytiala, Southern Finland.
The main findings are that (1) first-order closure schemes may still be plausible for modeling particle deposition velocity, especially for particle sizes smaller than 1 μm, when the turbulent particle diffusivity is estimated from higher-order flow statistics; (2) the mechanisms leading to the increasing trend of particle deposition velocity with increasing friction velocity differ for different particle sizes and different levels (i.e. above and below the canopy); (3) the partitioning of particle deposition onto foliage and forest floor appears insensitive to friction velocity for particles smaller than 100 nm, but decreases with increasing friction velocity for particles larger than 100 nm.
A perceptual space of local image statistics.
Victor, Jonathan D; Thengone, Daniel J; Rizvi, Syed M; Conte, Mary M
2015-12-01
Local image statistics are important for visual analysis of textures, surfaces, and form. There are many kinds of local statistics, including those that capture luminance distributions, spatial contrast, oriented segments, and corners. While sensitivity to each of these kinds of statistics has been well studied, much less is known about visual processing when multiple kinds of statistics are relevant, in large part because the dimensionality of the problem is high and different kinds of statistics interact. To approach this problem, we focused on binary images on a square lattice - a reduced set of stimuli that nevertheless taps many kinds of local statistics. In this 10-parameter space, we determined psychophysical thresholds to each kind of statistic (16 observers) and all of their pairwise combinations (4 observers). Sensitivities and isodiscrimination contours were consistent across observers. Isodiscrimination contours were elliptical, implying a quadratic interaction rule, which in turn determined ellipsoidal isodiscrimination surfaces in the full 10-dimensional space, and made predictions for sensitivities to complex combinations of statistics. These predictions, including the prediction of a combination of statistics that was metameric to random, were verified experimentally. Finally, check size had only a mild effect on sensitivities over the range from 2.8 to 14 min, but sensitivities to second- and higher-order statistics were substantially lower at 1.4 min. In sum, local image statistics form a perceptual space that is highly stereotyped across observers, in which different kinds of statistics interact according to simple rules. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ali, Naseem; Aseyev, A.; McCraney, J.; Vuppuluri, V.; Abbass, O.; Al Jubaree, T.; Melius, M.; Cal, R. B.
2014-11-01
Hot-wire measurements obtained in a 3 × 3 wind turbine array boundary layer are used to analyze higher order statistics, including skewness and kurtosis as well as ratios of structure functions and spectra. The ratios consist of wall-normal to streamwise components for both quantities. The aim is to understand the degree of anisotropy in the flow for the near- and far-wake regions of the flow field, with profiles considered at one diameter and five diameters downstream, respectively. The skewness at the top tip is negative for both wakes, while below the turbine canopy these terms are positive. The kurtosis shows Gaussian behavior in the near-wake immediately at hub height. In addition, the effect of the passage of the rotor, in tandem with the shear layer at the top tip, produces relatively large differences in the fourth order moment. The second order structure function and spectral ratios are found to exhibit anisotropic behavior at the top and bottom tips for the large scales. Mixed structure functions and co-spectra are also considered in the context of isotropy.
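The moments in question are straightforward to compute from a velocity record. The sketch below evaluates skewness, kurtosis, and a second-order structure function on synthetic Gaussian data (for which skewness ≈ 0 and kurtosis ≈ 3), not on the hot-wire measurements themselves.

```python
import numpy as np

def higher_order_stats(u):
    """Skewness and kurtosis of a velocity record (Gaussian: 0 and 3)."""
    c = u - u.mean()
    s = c.std()
    return (c**3).mean() / s**3, (c**4).mean() / s**4

def structure_function(u, r, order=2):
    """Structure function S_n(r) = <(u(x + r) - u(x))^n> at separation r
    (in samples, assuming a uniformly sampled record)."""
    du = u[r:] - u[:-r]
    return float(np.mean(du**order))

rng = np.random.default_rng(2)
u = rng.standard_normal(100_000)  # synthetic stand-in velocity record
skew, kurt = higher_order_stats(u)
```

For white noise with unit variance, S_2(r) ≈ 2 at any separation; anisotropy studies like the one above compare such ratios between wall-normal and streamwise components.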
Berman, Jesse D; Peters, Thomas M; Koehler, Kirsten A
2018-05-28
To design a method that uses preliminary hazard mapping data to optimize the number and location of sensors within a network for a long-term assessment of occupational concentrations, while preserving temporal variability, accuracy, and precision of predicted hazards. Particle number concentrations (PNCs) and respirable mass concentrations (RMCs) were measured with direct-reading instruments in a large heavy-vehicle manufacturing facility at 80-82 locations during 7 mapping events, stratified by day and season. Using kriged hazard mapping, a statistical approach identified optimal orders for removing locations so as to capture the temporal variability and high prediction precision of PNC and RMC concentrations. We compared optimal-removal, random-removal, and least-optimal-removal orders to bound prediction performance. The temporal variability of PNC was found to be higher than that of RMC, with low correlation between the two particulate metrics (ρ = 0.30). Optimal-removal orders resulted in more accurate PNC kriged estimates (root mean square error [RMSE] = 49.2) at sample locations compared with the random-removal order (RMSE = 55.7). For estimates at locations having concentrations in the upper 10th percentile, the optimal-removal order preserved average estimated concentrations better than the random- or least-optimal-removal orders (P < 0.01). However, estimated average concentrations using the optimal-removal order were not statistically different from those using the random-removal order when averaged over the entire facility. No statistical difference was observed between the optimal- and random-removal methods for RMCs, which were less variable in time and space than PNCs. Optimized removal performed better than random removal in preserving the high temporal variability and accuracy of the hazard map for PNC, but not for the more spatially homogeneous RMC.
These results can be used to reduce the number of locations used in a network of static sensors for long-term monitoring of hazards in the workplace, without sacrificing prediction performance.
ERIC Educational Resources Information Center
Jaoul, Magali
2004-01-01
Mass education has the goal of guaranteeing the same education to all in order to moderate differences between individuals and promote a kind of "equality of opportunity." Nonetheless, it seems clear that lower-class youths do not benefit as much from their degree or university experience as do those who come from more privileged…
Source Camera Identification and Blind Tamper Detections for Images
2007-04-24
measures and image quality measures in the camera identification problem were studied in conjunction with a KNN classifier to identify the feature sets...shots varying from nature scenes to close-ups of people. We experimented with the KNN classifier (K=5) as well as the SVM algorithm of...on Acoustic, Speech and Signal Processing (ICASSP), France, May 2006, vol. 5, pp. 401-404. [9] H. Farid and S. Lyu, "Higher-order wavelet statistics
λ (Δim) -statistical convergence of order α
NASA Astrophysics Data System (ADS)
Colak, Rifat; Et, Mikail; Altin, Yavuz
2017-09-01
In this study, using the generalized difference operator Δim and a non-decreasing sequence λ = (λn) of positive numbers tending to ∞ such that λn+1 ≤ λn + 1, λ1 = 1, we introduce the concepts of λ (Δim) -statistical convergence of order α (α ∈ (0, 1]) and strong λ (Δim) -Cesàro summability of order α (α > 0). We establish some connections between λ (Δim) -statistical convergence of order α and strong λ (Δim) -Cesàro summability of order α. It is shown that if a sequence is strongly λ (Δim) -Cesàro summable of order α, then it is λ (Δim) -statistically convergent of order β whenever 0 < α ≤ β ≤ 1.
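For concreteness, the two notions can be written out. This is a sketch in the standard notation of this literature (writing the abstract's Δim as Δ_m^i and taking I_n = [n - λ_n + 1, n]); the typesetting may differ cosmetically from the paper's own.

```latex
% A sequence x = (x_k) is \lambda(\Delta_m^i)-statistically convergent
% of order \alpha to L if, for every \varepsilon > 0,
\lim_{n\to\infty} \frac{1}{\lambda_n^{\alpha}}
  \left|\left\{ k \in I_n : |\Delta_m^i x_k - L| \ge \varepsilon \right\}\right| = 0,
\qquad I_n = [\,n - \lambda_n + 1,\; n\,].

% It is strongly \lambda(\Delta_m^i)-Ces\`aro summable of order \alpha to L if
\lim_{n\to\infty} \frac{1}{\lambda_n^{\alpha}}
  \sum_{k \in I_n} |\Delta_m^i x_k - L| = 0.
```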
Robust Combining of Disparate Classifiers Through Order Statistics
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Ghosh, Joydeep
2001-01-01
Integrating the outputs of multiple classifiers via combiners or meta-learners has led to substantial improvements in several difficult pattern recognition problems. In this article we investigate a family of combiners based on order statistics, for robust handling of situations where there are large discrepancies in performance of individual classifiers. Based on a mathematical modeling of how the decision boundaries are affected by order statistic combiners, we derive expressions for the reductions in error expected when simple output combination methods based on the median, the maximum and, in general, the ith order statistic are used. Furthermore, we analyze the trim and spread combiners, both based on linear combinations of the ordered classifier outputs, and show that in the presence of uneven classifier performance, they often provide substantial gains over both linear and simple order statistics combiners. Experimental results on both real world data and standard public domain data sets corroborate these findings.
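A minimal sketch of the ith-order-statistic and trim combiners on hypothetical classifier posteriors; the numbers are invented to show how the median resists one badly miscalibrated ensemble member.

```python
import numpy as np

def order_statistic_combiner(outputs, i):
    """Combine classifier posteriors by taking the i-th order statistic
    per class (i=0 -> min, middle index -> median, last -> max)."""
    return np.sort(outputs, axis=0)[i]

def trimmed_combiner(outputs, trim=1):
    """Trim combiner: average the ordered outputs per class after dropping
    the `trim` smallest and `trim` largest values."""
    s = np.sort(outputs, axis=0)
    return s[trim:outputs.shape[0] - trim].mean(axis=0)

# Posteriors from 5 hypothetical classifiers over 3 classes;
# the last classifier is badly miscalibrated.
outputs = np.array([
    [0.70, 0.20, 0.10],
    [0.60, 0.30, 0.10],
    [0.80, 0.10, 0.10],
    [0.65, 0.25, 0.10],
    [0.10, 0.10, 0.80],  # outlier classifier
])
median_vote = order_statistic_combiner(outputs, 2)  # per-class median
```

The median per class is (0.65, 0.20, 0.10), so the outlier's vote for class 2 is ignored; a max combiner would have been pulled toward it.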
Full counting statistics of conductance for disordered systems
NASA Astrophysics Data System (ADS)
Fu, Bin; Zhang, Lei; Wei, Yadong; Wang, Jian
2017-09-01
Quantum transport is a stochastic process in nature. As a result, the conductance is fully characterized by its average value and fluctuations, i.e., characterized by full counting statistics (FCS). Since disorders are inevitable in nanoelectronic devices, it is important to understand how FCS behaves in disordered systems. The traditional approach to fluctuations or cumulants of conductance uses a diagrammatic perturbation expansion of the Green's function within the coherent potential approximation (CPA), which is extremely complicated, especially for high order cumulants. In this paper, we develop a theoretical formalism based on the nonequilibrium Green's function by directly taking the disorder average of the generating function of the FCS of conductance within CPA. This is done by mapping the problem into higher dimensions so that the functional dependence of the generating function on the Green's function becomes linear and the diagrammatic perturbation expansion is no longer needed. Our theory is very simple and allows us to calculate cumulants of conductance at any desired order efficiently. As an application of our theory, we calculate the cumulants of conductance up to fifth order for disordered systems in the presence of Anderson and binary disorders. Our numerical results for the cumulants of conductance show remarkable agreement with those obtained by brute force calculation.
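The elementary moment-to-cumulant relations underlying any FCS calculation are easy to check numerically. This sketch estimates the first four cumulants directly from samples (not via the paper's CPA Green's-function formalism), using a Poisson variable, whose cumulants of every order equal its rate.

```python
import numpy as np

def cumulants(x):
    """First four cumulants from central moments:
    k1 = mean, k2 = m2, k3 = m3, k4 = m4 - 3*m2^2."""
    m = x.mean()
    c = x - m
    m2, m3, m4 = (c**2).mean(), (c**3).mean(), (c**4).mean()
    return np.array([m, m2, m3, m4 - 3.0 * m2**2])

# Poisson(2) has k1 = k2 = k3 = k4 = 2, a convenient numerical check.
rng = np.random.default_rng(3)
g = rng.poisson(lam=2.0, size=1_000_000)
k = cumulants(g)
```

Higher-order sample cumulants converge slowly (the variance of the k4 estimate involves the eighth moment), which is one reason analytic generating-function methods like the paper's are preferable for high orders.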
Koltun, G.F.
2013-01-01
This report presents the results of a study to assess potential water availability from the Atwood, Leesville, and Tappan Lakes, located within the Muskingum River Watershed, Ohio. The assessment was based on the criterion that water withdrawals should not appreciably affect maintenance of recreation-season pool levels in current use. To facilitate and simplify the assessment, it was assumed that historical lake operations were successful in maintaining seasonal pool levels, and that any discharges from lakes constituted either water that was discharged to prevent exceeding seasonal pool levels or discharges intended to meet minimum in-stream flow targets downstream from the lakes. It further was assumed that the volume of water discharged in excess of the minimum in-stream flow target is available for use without negatively impacting seasonal pool levels or downstream water uses and that all or part of it is subject to withdrawal. Historical daily outflow data for the lakes were used to determine the quantity of water that potentially could be withdrawn and the resulting quantity of water that would flow downstream (referred to as “flow-by”) on a daily basis as a function of all combinations of three hypothetical target minimum flow-by amounts (1, 2, and 3 times current minimum in-stream flow targets) and three pumping capacities (1, 2, and 3 million gallons per day). Using both U.S. Geological Survey streamgage data and lake-outflow data provided by the U.S. Army Corps of Engineers resulted in analytical periods ranging from 51 calendar years for the Atwood Lake to 73 calendar years for the Leesville and Tappan Lakes. The observed outflow time series and the computed time series of daily flow-by amounts and potential withdrawals were analyzed to compute and report order statistics (95th, 75th, 50th, 25th, 10th, and 5th percentiles) and means for the analytical period, in aggregate, and broken down by calendar month. 
In addition, surplus-water mass curve data were tabulated for each of the lakes. Monthly order statistics of computed withdrawals indicated that, for the three pumping capacities considered, increasing the target minimum flow-by amount tended to reduce the amount of water that can be withdrawn. The reduction was greatest in the lower percentiles of withdrawal; however, increasing the flow-by amount had no impact on potential withdrawals during high flow. In addition, for a given target minimum flow-by amount, increasing the pumping rate increased the total amount of water that could be withdrawn; however, that increase was less than a direct multiple of the increase in pumping rate for most flow statistics. Potential monthly withdrawals were observed to be more variable and more limited in some calendar months than others. Monthly order statistics and means of computed daily mean flow-by amounts indicated that flow-by amounts generally tended to be lowest during June–October and February. Increasing the target minimum flow-by amount for a given pumping rate resulted in some small increases in the magnitudes of the mean and 50th percentile and lower order statistics of computed mean flow-by, but had no effect on the magnitudes of the higher percentile statistics. Increasing the pumping rate for a given target minimum flow-by amount resulted in decreases in magnitudes of higher-percentile flow-by statistics by an amount equal to the flow equivalent of the increase in pumping rate; however, some lower percentile statistics remained unchanged.
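The withdrawal rule described above reduces to a simple per-day computation: only water in excess of the target minimum flow-by is available, capped by pump capacity. A sketch with invented daily outflows (the report's actual series come from streamgage and Corps of Engineers records):

```python
import numpy as np

def potential_withdrawal(outflow_mgd, target_flowby_mgd, pump_capacity_mgd):
    """Daily withdrawal and resulting flow-by under the report's rule:
    water above the target minimum flow-by may be withdrawn, up to the
    pump capacity; everything else passes downstream."""
    surplus = np.maximum(outflow_mgd - target_flowby_mgd, 0.0)
    withdrawal = np.minimum(surplus, pump_capacity_mgd)
    flowby = outflow_mgd - withdrawal
    return withdrawal, flowby

# Illustrative daily outflows in million gallons per day (Mgal/d).
outflow = np.array([0.5, 2.0, 10.0])
w, f = potential_withdrawal(outflow, target_flowby_mgd=1.0,
                            pump_capacity_mgd=3.0)
```

Order statistics (percentiles) of the resulting withdrawal and flow-by series, computed per calendar month, are exactly the summaries the report tabulates for each target/capacity combination.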
Modeling of the reactant conversion rate in a turbulent shear flow
NASA Technical Reports Server (NTRS)
Frankel, S. H.; Madnia, C. K.; Givi, P.
1992-01-01
Results are presented of direct numerical simulations (DNS) of spatially developing shear flows under the influence of infinitely fast chemical reactions of the type A + B yields Products. The simulation results are used to construct the compositional structure of the scalar field in a statistical manner. The results of this statistical analysis indicate that the use of a Beta density for the probability density function (PDF) of an appropriate Shvab-Zeldovich mixture fraction provides a very good estimate of the limiting bounds of the reactant conversion rate within the shear layer. This provides a strong justification for the implementation of this density in practical modeling of non-homogeneous turbulent reacting flows. However, the validity of the model cannot be generalized for predictions of higher order statistical quantities. A closed form analytical expression is presented for predicting the maximum rate of reactant conversion in non-homogeneous reacting turbulence.
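The Beta-density modeling step rests on matching the first two moments of the mixture fraction. A sketch of the method-of-moments parameterization, assuming `scipy` is available (the values are illustrative, not from the simulations):

```python
from scipy import stats

def beta_from_moments(mean, var):
    """Method-of-moments Beta(a, b) parameters for a mixture fraction on
    [0, 1]; requires var < mean * (1 - mean)."""
    nu = mean * (1.0 - mean) / var - 1.0
    return mean * nu, (1.0 - mean) * nu

# Illustrative mixture-fraction mean and variance within the shear layer.
a, b = beta_from_moments(0.4, 0.05)
```

By construction, the fitted Beta density reproduces the prescribed mean and variance, which is all the presumed-PDF closure requires; it is the higher-order statistics, as the abstract notes, for which the model's validity cannot be generalized.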
Kankofer, M
2001-05-01
Glutathione peroxidase (GSH-Px), glutathione transferase (GSH-Tr), catalase (CAT) and superoxide dismutase (SOD), members of the enzymatic antioxidative defence mechanisms against reactive oxygen species, may play an important role in the proper or improper release of bovine fetal membranes. The aim of this study was to determine GSH-Px, GSH-Tr, CAT and SOD activity in order to define the antioxidative status of bovine placenta during retention of fetal membranes (RFM) in cows. Placental samples were collected immediately after spontaneous parturition or during caesarean section before term and at term, and divided into six groups as follows: A: caesarean section before term without RFM; B: caesarean section before term with RFM; C: caesarean section at term without RFM; D: caesarean section at term with RFM; E: spontaneous delivery at term without RFM; F: spontaneous delivery at term with RFM. The enzyme activities in placental homogenates were measured spectrophotometrically. GSH-Px activity was statistically significantly higher in fetal than in maternal placenta in all examined groups, increased towards parturition, and was higher in the caesarean section groups than in the spontaneous delivery groups. Statistically significantly higher activities were noticed in retained than in not-retained placentae. GSH-Tr activity was significantly lower in fetal than in maternal placenta. In the preterm groups, its activity was statistically significantly higher in retained than in not-retained placenta; in the term groups, the opposite relationship was observed, with higher values in the caesarean section groups than in the spontaneous delivery groups. CAT activity was statistically significantly higher in the fetal than in the maternal part of the placenta in all groups examined, with the highest values in groups C and D and differences between retained and not-retained placenta. SOD exhibited the highest values in preterm placenta and alterations between retained and not-retained fetal membranes.
In conclusion, the activities of GSH-Px, GSH-Tr, CAT and SOD are altered in cases of retained fetal membranes, which may suggest the activation of antioxidative mechanisms caused by an imbalance between the production and neutralization of reactive oxygen species. Copyright 2001 Harcourt Publishers Ltd.
Zhou, Shuxia; Evans, Brad; Schöneich, Christian; Singh, Satish K
2012-03-01
Trace amounts of metals, arising from various sources, are inevitably present in biotherapeutic products. The impact of common formulation factors such as protein concentration, antioxidant, metal chelator concentration and type, surfactant, pH, and contact time with stainless steel on metal leachables was investigated using a design-of-experiments approach. Three major metal leachables (iron, chromium, and nickel) were monitored by inductively coupled plasma-mass spectrometry. Among all the tested factors, contact time, metal chelator concentration, and protein concentration were statistically significant, with higher temperature resulting in higher levels of leached metals. Within a pH range of 5.5-6.5, solution pH played a minor role for chromium leaching at 25°C. No statistically significant difference was observed due to type of chelator, presence of antioxidant, or surfactant. In order to optimize a biotherapeutic formulation to achieve a target drug product shelf life with acceptable quality, each formulation component must be evaluated for its impact.
NASA Astrophysics Data System (ADS)
Lange, J.; O'Shaughnessy, R.; Boyle, M.; Calderón Bustillo, J.; Campanelli, M.; Chu, T.; Clark, J. A.; Demos, N.; Fong, H.; Healy, J.; Hemberger, D. A.; Hinder, I.; Jani, K.; Khamesra, B.; Kidder, L. E.; Kumar, P.; Laguna, P.; Lousto, C. O.; Lovelace, G.; Ossokine, S.; Pfeiffer, H.; Scheel, M. A.; Shoemaker, D. M.; Szilagyi, B.; Teukolsky, S.; Zlochower, Y.
2017-11-01
We present and assess a Bayesian method to interpret gravitational wave signals from binary black holes. Our method directly compares gravitational wave data to numerical relativity (NR) simulations. In this study, we present a detailed investigation of the systematic and statistical parameter estimation errors of this method. This procedure bypasses approximations used in semianalytical models for compact binary coalescence. In this work, we use the full posterior parameter distribution for only generic nonprecessing binaries, drawing inferences away from the set of NR simulations used, via interpolation of a single scalar quantity (the marginalized log likelihood, ln L ) evaluated by comparing data to nonprecessing binary black hole simulations. We also compare the data to generic simulations, and discuss the effectiveness of this procedure for generic sources. We specifically assess the impact of higher order modes, repeating our interpretation with both l ≤2 as well as l ≤3 harmonic modes. Using the l ≤3 higher modes, we gain more information from the signal and can better constrain the parameters of the gravitational wave signal. We assess and quantify several sources of systematic error that our procedure could introduce, including simulation resolution and duration; most are negligible. We show through examples that our method can recover the parameters for equal mass, zero spin, GW150914-like, and unequal mass, precessing spin sources. Our study of this new parameter estimation method demonstrates that we can quantify and understand the systematic and statistical error. This method allows us to use higher order modes from numerical relativity simulations to better constrain the black hole binary parameters.
Sajjadi, Valleh; Ghoreishi, Mohammad; Jafarzadehpour, Ebrahim
2015-01-01
This study was performed to compare the refractive and visual outcomes and higher-order aberrations in patients with low to moderate myopia who underwent customized photorefractive keratectomy (PRK) or femtosecond laser in situ keratomileusis (Femto-LASIK). The study includes data from 120 consecutive eyes of 60 patients with myopia between -3.00 D and -7.00 D, with or without astigmatism, in two surgery groups: PRK and Femto-LASIK. Refractive, visual, and aberration outcomes of the two methods were compared after 6 months of follow-up. At that point, sphere and cylinder had decreased significantly, with no statistically significant difference between the two groups. The mean uncorrected distance visual acuity (logMAR) for the PRK and Femto-LASIK groups was -0.03±0.07 and -0.01±0.08, respectively, which was not significantly different between the two groups. Higher-order and spherical aberrations increased significantly in both groups, while total aberrations decreased in both groups. After surgery, no differences were observed between the two groups in the amount of aberrations. In conclusion, both PRK and Femto-LASIK are effective and safe in correcting myopia. In this study, PRK induced more spherical and higher-order aberrations than Femto-LASIK. PMID:27800501
Statistics of Atmospheric Circulations from Cumulant Expansions
NASA Astrophysics Data System (ADS)
Marston, B.; Sabou, F.
2010-12-01
Large-scale atmospheric flows are not so nonlinear as to preclude their direct statistical simulation (DSS) by systematic expansions in equal-time cumulants. Such DSS offers a number of advantages: (i) Low-order statistics are smoother in space and stiffer in time than the underlying instantaneous flows, hence statistically stationary or slowly varying fixed points can be described with fewer degrees of freedom and can also be accessed rapidly. (ii) Convergence with increasing resolution can be demonstrated. (iii) Finally and most importantly, DSS leads more directly to understanding, by integrating out fast modes, leaving only the slow modes that contain the most interesting information. This makes the approach ideal for simulating and understanding modes of the climate system, including changes in these modes that are driven by climate change. The equations of motion for the cumulants form an infinite hierarchy. The simplest closure is to set the third and higher order cumulants to zero. We extend previous work (Marston, Conover, and Schneider 2008) along these lines to two-layer models of the general circulation, which has previously been argued to be only weakly nonlinear (O'Gorman and Schneider, 2006). Equal-time statistics so obtained agree reasonably well with those accumulated by direct numerical simulation (DNS), efficiently reproducing the midlatitude westerlies and storm tracks, tropical easterlies, and non-local teleconnection patterns (Marston 2010). Low-frequency modes of variability can also be captured. The primitive equation model of Held & Suarez, with and without latent heat release, is investigated, providing a test of whether DSS accurately reproduces the responses to simple climate forcings as found by DNS.
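The appeal of accessing statistics as fixed points, rather than by long time-averaging, can be illustrated on a toy problem. The sketch below uses an invented linear stochastic system (not the two-layer model discussed above): the steady second-order cumulant is obtained by solving a Lyapunov equation and checked against a brute-force ensemble simulation.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Toy "flow": linear stochastic dynamics dx = A x dt + dW, noise covariance Q = I.
A = np.array([[-1.0, 0.5],
              [-0.5, -2.0]])
Q = np.eye(2)

# DSS at second order: the statistically steady covariance C is a fixed point,
# solving the Lyapunov equation A C + C A^T + Q = 0 -- no time stepping needed.
C_dss = solve_continuous_lyapunov(A, -Q)

# Brute-force reference: Euler-Maruyama ensemble, covariance after the transient.
rng = np.random.default_rng(5)
dt, nsteps, nens = 2e-3, 2500, 5000
x = np.zeros((nens, 2))
for _ in range(nsteps):
    x = x + dt * x @ A.T + np.sqrt(dt) * rng.normal(size=(nens, 2))
C_dns = (x.T @ x) / nens
```

For linear dynamics the second-order closure is exact; in the atmospheric models above, setting third and higher cumulants to zero is an approximation whose quality must be checked against DNS, as the abstract describes.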
NASA Astrophysics Data System (ADS)
Borghesani, P.; Pennacchi, P.; Ricci, R.; Chatterton, S.
2013-10-01
Cyclostationary models for the diagnostic signals measured on faulty rotating machinery have proved successful in many laboratory tests and industrial applications. The squared envelope spectrum has been pointed out as the most efficient indicator for the assessment of second-order cyclostationary symptoms of damage, which are typical, for instance, of rolling element bearing faults. In an attempt to foster the spread of rotating machinery diagnostics, the current trend in the field is to reach higher levels of automation of condition monitoring systems. For this purpose, statistical tests for the presence of cyclostationarity have been proposed in recent years. The statistical thresholds proposed in the past for the identification of cyclostationary components were obtained under the hypothesis that the signal is white noise when the component is healthy. This assumption, coupled with the non-white nature of real signals, implies the necessity of pre-whitening the signal or filtering it in optimal narrow bands, increasing the complexity of the algorithm and the risk of losing diagnostic information or introducing biases in the result. In this paper, the authors introduce an original analytical derivation of the statistical tests for cyclostationarity in the squared envelope spectrum, dropping the hypothesis of white noise from the beginning. The effect of first-order and second-order cyclostationary components on the distribution of the squared envelope spectrum is quantified, and the effectiveness of the newly proposed threshold is verified, providing a sound theoretical basis and a practical starting point for efficient automated diagnostics of machine components such as rolling element bearings. The analytical results are verified by means of numerical simulations and by using experimental vibration data of rolling element bearings.
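For readers unfamiliar with the indicator, a minimal squared-envelope-spectrum computation can be sketched as follows. This is a generic illustration with an invented amplitude-modulated signal, not the authors' algorithm or thresholds.

```python
import numpy as np
from scipy.signal import hilbert

def squared_envelope_spectrum(x, fs):
    """Squared envelope spectrum of a real vibration signal x sampled at fs Hz."""
    analytic = hilbert(x)                  # analytic signal via Hilbert transform
    env_sq = np.abs(analytic) ** 2         # squared envelope
    env_sq -= env_sq.mean()                # remove the DC component
    ses = np.abs(np.fft.rfft(env_sq)) / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs, ses

# Toy fault signature: 2 kHz carrier amplitude-modulated at a 100 Hz "fault" rate
fs = 20_000
t = np.arange(0, 1.0, 1.0 / fs)
x = (1 + 0.5 * np.cos(2 * np.pi * 100 * t)) * np.cos(2 * np.pi * 2000 * t)
freqs, ses = squared_envelope_spectrum(x, fs)
peak = freqs[1:][np.argmax(ses[1:])]       # dominant line, skipping the DC bin
```

The dominant line of the squared envelope spectrum falls at the modulation (fault) frequency, here 100 Hz, even though that frequency is absent from the raw spectrum.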
Yu, Xiaojin; Liu, Pei; Min, Jie; Chen, Qiguang
2009-01-01
To explore the application of regression on order statistics (ROS) in estimating nondetects for food exposure assessment, ROS was applied to a cadmium residual data set from global food contaminant monitoring; the mean residual was estimated using a SAS program and compared with the results from substitution methods. The results show that the ROS method clearly outperforms substitution methods, being robust and convenient for subsequent analysis. Regression on order statistics is worth adopting, but more effort should be devoted to the details of its application.
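A simplified sketch of lognormal ROS under a single common detection limit follows; this is an illustration only, and the paper's SAS implementation and choice of plotting positions may differ.

```python
import numpy as np
from scipy import stats

def ros_mean(detects, n_censored):
    """Lognormal regression on order statistics (ROS), simplified sketch.
    All nondetects are assumed below one common detection limit."""
    n = len(detects) + n_censored
    pp = np.arange(1, n + 1) / (n + 1)        # Weibull plotting positions
    z = stats.norm.ppf(pp)                    # corresponding normal quantiles
    # Regress log(concentration) on z using only the detected (top-ranked) values
    slope, intercept, *_ = stats.linregress(z[n_censored:], np.log(np.sort(detects)))
    # Impute each nondetect from the fitted line at its own plotting position
    imputed = np.exp(intercept + slope * z[:n_censored])
    return float(np.mean(np.concatenate([imputed, detects])))

rng = np.random.default_rng(0)
sample = rng.lognormal(mean=0.0, sigma=1.0, size=200)      # synthetic residuals
dl = 0.5                                                   # detection limit
detects = sample[sample >= dl]
ros_est = ros_mean(detects, n_censored=int((sample < dl).sum()))
sub_est = float(np.mean(np.where(sample < dl, dl / 2, sample)))  # DL/2 substitution
```

Unlike fixed substitution (0, DL/2 or DL), ROS lets the detected tail of the distribution inform the imputed values, which is what makes it robust across censoring fractions.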
Cluster analysis as a prediction tool for pregnancy outcomes.
Banjari, Ines; Kenjerić, Daniela; Šolić, Krešimir; Mandić, Milena L
2015-03-01
Considering the specific physiological changes during gestation and thinking of pregnancy as a "critical window", classification of pregnant women in early pregnancy can be considered crucial. The paper demonstrates the use of a method based on an approach from intelligent data mining, cluster analysis. Cluster analysis is a statistical method which makes it possible to group individuals based on sets of identifying variables. The method was chosen in order to determine the possibility of classifying pregnant women in early pregnancy and to analyze unknown correlations between different variables so that certain outcomes could be predicted. 222 pregnant women from two general obstetric offices were recruited. The main focus was on characteristics of these pregnant women: their age, pre-pregnancy body mass index (BMI) and haemoglobin value. Cluster analysis achieved a 94.1% classification accuracy rate with three branches or groups of pregnant women showing statistically significant correlations with pregnancy outcomes. The results show that pregnant women of both older age and higher pre-pregnancy BMI have a significantly higher incidence of delivering a baby of higher birth weight but gain significantly less weight during pregnancy. Their babies are also longer, and these women have a significantly higher probability of complications during pregnancy (gestosis) and of induced or caesarean delivery. We can conclude that the cluster analysis method can appropriately classify pregnant women in early pregnancy to predict certain outcomes.
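As a generic illustration of grouping records on the three identifying variables, a k-means sketch on synthetic data is shown below; the paper's exact clustering algorithm and its data are not reproduced here, and all values are invented.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, whiten

rng = np.random.default_rng(1)
# Synthetic stand-ins for the study's identifying variables:
# age (years), pre-pregnancy BMI (kg/m^2), haemoglobin (g/L)
age = rng.normal(29, 5, 300)
bmi = rng.normal(24, 4, 300)
hb = rng.normal(125, 10, 300)
X = np.column_stack([age, bmi, hb])

Xw = whiten(X)                                  # scale each variable to unit variance
centroids, labels = kmeans2(Xw, 3, minit='++', seed=2)

# Group profiles: mean of each original variable per cluster
profiles = np.array([X[labels == g].mean(axis=0) for g in range(3)])
```

Whitening matters because the three variables live on very different scales; without it, haemoglobin would dominate the Euclidean distances. The resulting per-cluster profiles are what would then be cross-tabulated against pregnancy outcomes.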
Variant myopia: A new presentation?
Hussaindeen, Jameel Rizwana; Anand, Mithra; Sivaraman, Viswanathan; Ramani, Krishna Kumar; Allen, Peter M
2018-01-01
Purpose: Variant myopia (VM) presents as a discrepancy of >1 diopter (D) between subjective and objective refraction, without the presence of any accommodative dysfunction. The purpose of this study is to create a clinical profile of VM. Methods: Fourteen eyes of 12 VM patients who had a discrepancy of >1D between retinoscopy and subjective acceptance under both cycloplegic and noncycloplegic conditions were included in the study. Fourteen eyes of 14 age- and refractive error-matched participants served as controls. Potential participants underwent a comprehensive orthoptic examination followed by retinoscopy (Ret), closed-field autorefractor (CA), subjective acceptance (SA), choroidal and retinal thickness, ocular biometry, and higher order spherical aberrations measurements. Results: In the VM eyes, a statistically and clinically significant difference was noted between the Ret and CA and Ret and SA under both cycloplegic and noncycloplegic conditions (multivariate repeated measures analysis of variance, P < 0.0001). A statistically significant difference was observed between the VM eyes, non-VM eyes, and controls for choroidal thickness in all the quadrants (Univariate ANOVA P < 0.05). The VM eyes had thinner choroids (197.21 ± 13.04 μm) compared to the non-VM eyes (249.25 ± 53.70 μm) and refractive error-matched controls (264.62 ± 12.53 μm). No statistically significant differences between groups in root mean square of total higher order aberrations and spherical aberration were observed. Conclusion: Accommodative etiology does not play a role in the refractive discrepancy seen in individuals with the variant myopic presentation. These individuals have thinner choroids in the eye with variant myopic presentation compared to the fellow eyes and controls. Hypotheses and clinical implications of variant myopia are discussed. PMID:29785987
Topography and Higher Order Corneal Aberrations of the Fellow Eye in Unilateral Keratoconus.
Aksoy, Sibel; Akkaya, Sezen; Özkurt, Yelda; Kurna, Sevda; Açıkalın, Banu; Şengör, Tomris
2017-10-01
To compare topography and corneal higher order aberration (HOA) data of the normal fellow eyes of unilateral keratoconus patients with the keratoconus eyes and a control group, the records of 196 patients with keratoconus were reviewed. Twenty patients were identified as having unilateral keratoconus. The best corrected visual acuity (BCVA), topography and aberration data of the unilateral keratoconus patients' normal eyes were compared with their contralateral keratoconus eyes and with control group eyes. For statistical analysis, flat and steep keratometry values, average corneal power, cylindrical power, surface regularity index (SRI), surface asymmetry index (SAI), inferior-superior ratio (I-S), keratoconus prediction index, and elevation-depression power (EDP) and diameter (EDD) topography indices were selected. Mean age of the unilateral keratoconus patients was 26.05±4.73 years and that of the control group was 23.6±8.53 years (p>0.05). There was no statistical difference in BCVA between normal and control eyes (p=0.108), whereas BCVA values were significantly lower in eyes with keratoconus (p=0.001). Comparison of quantitative topographic indices between the groups showed that all indices except the I-S ratio were significantly higher in the normal group than in the control group (p<0.05). The most obvious differences were in the SRI, SAI, EDP, and EDD values. All topographic indices were higher in the keratoconus eyes compared to the normal fellow eyes. There was no difference between normal eyes and the control group in terms of spherical aberration, while coma, trefoil, irregular astigmatism, and total HOA values were higher in the normal eyes of unilateral keratoconus patients (p<0.05). All HOA values were higher in keratoconus eyes than in the control group. 
According to our study, SRI, SAI, EDP, EDD values, and HOA other than spherical aberration were higher in the clinically and topographically normal fellow eyes of unilateral keratoconus patients when compared to a control group. This finding may be due to the mild asymmetric and morphologic changes in the subclinical stage of keratoconus leading to deterioration in the indicators of corneal irregularity and elevation changes. Therefore, these eyes may be exhibiting the early form of the disease.
On Nonlinear Functionals of Random Spherical Eigenfunctions
NASA Astrophysics Data System (ADS)
Marinucci, Domenico; Wigman, Igor
2014-05-01
We prove central limit theorems and Stein-like bounds for the asymptotic behaviour of nonlinear functionals of spherical Gaussian eigenfunctions. Our investigation combines asymptotic analysis of higher order moments for Legendre polynomials and, in addition, recent results on Malliavin calculus and total variation bounds for Gaussian subordinated fields. We discuss applications to geometric functionals like the defect and invariant statistics, e.g., polyspectra of isotropic spherical random fields. Both of these have relevance for applications, especially in an astrophysical environment.
Monitoring the impact of Bt maize on butterflies in the field: estimation of required sample sizes.
Lang, Andreas
2004-01-01
The monitoring of genetically modified organisms (GMOs) after deliberate release is important in order to assess and evaluate possible environmental effects. Concerns have been raised that the transgenic crop, Bt maize, may affect butterflies occurring in field margins. Therefore, a monitoring of butterflies accompanying the commercial cultivation of Bt maize was suggested. In this study, baseline data on the butterfly species and their abundance in maize field margins are presented together with implications for butterfly monitoring. The study was conducted in Bavaria, South Germany, between 2000 and 2002. A total of 33 butterfly species was recorded in field margins. A small number of species dominated the community, and the butterflies observed were mostly common species. Observation duration was the most important factor influencing the monitoring results. Field margin size affected butterfly abundance, and habitat diversity had a tendency to influence species richness. Sample size and statistical power analyses indicated that a sample size in the range of 75 to 150 field margins each for treatment (transgenic maize) and control (conventional maize) would detect (with a power of 80%) effects larger than 15% in species richness and in butterfly abundance pooled across species. However, a much higher number of field margins must be sampled in order to achieve a higher statistical power, to detect smaller effects, and to monitor single butterfly species.
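The kind of power calculation reported here can be approximated with a two-sample normal model. The mean abundance and standard deviation below are invented for illustration and are not the paper's values; the point is the relationship between effect size, sample size, and power.

```python
import numpy as np
from scipy import stats

def power_two_sample(effect, sd, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test for a mean difference."""
    se = sd * np.sqrt(2.0 / n_per_group)            # SE of the group difference
    z_crit = stats.norm.ppf(1 - alpha / 2)          # two-sided critical value
    return float(stats.norm.sf(z_crit - abs(effect) / se))

# Invented illustration: mean abundance 20 butterflies per margin, SD 7,
# so a 15% effect is a difference of 3 butterflies.
powers = {n: power_two_sample(effect=0.15 * 20, sd=7.0, n_per_group=n)
          for n in (75, 100, 150)}
```

Under these assumed values, 75 margins per group already give roughly 75% power for a 15% effect, consistent in spirit with the 75-150 range quoted above; detecting, say, a 5% effect would require an order of magnitude more margins.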
Application of higher-order cepstral techniques in problems of fetal heart signal extraction
NASA Astrophysics Data System (ADS)
Sabry-Rizk, Madiha; Zgallai, Walid; Hardiman, P.; O'Riordan, J.
1996-10-01
Recently, cepstral analysis based on second-order statistics and homomorphic filtering techniques has been used in the adaptive decomposition of overlapping (or otherwise) and noise-contaminated ECG complexes of mother and fetus, obtained via transabdominal surface electrodes connected to a monitoring instrument, an interface card, and a PC. Differential time delays of fetal heart beats, measured from a reference point located on the maternal complex after transformation to the cepstral domain, are first obtained; this is followed by fetal heart rate variability computations. Homomorphic filtering in the complex cepstral domain and the subsequent transformation back to the time domain result in recovery of the fetal complex. However, three problems have been identified with second-order cepstral techniques that needed rectification in this paper. These are: (1) errors resulting from the phase unwrapping algorithms, leading to fetal complex perturbation; (2) the unavoidable conversion of noise statistics from Gaussianity to non-Gaussianity due to the highly non-linear nature of the homomorphic transform, which warrants stringent noise cancellation routines; (3) owing to the problems in (1) and (2), it is difficult to adaptively optimize windows to include all individual fetal complexes in the time domain based on amplitude thresholding routines in the complex cepstral domain (i.e. the task of 'zooming' in on weak fetal complexes requires more processing time). The use of a third-order based high-resolution differential cepstrum technique results in recovery of delays of the order of 120 milliseconds.
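The basic cepstral mechanism exploited here — a delayed, attenuated copy of a waveform appears as a peak at the corresponding quefrency — can be sketched with a plain real cepstrum. This is an illustration with invented signals, not the paper's third-order differential cepstrum.

```python
import numpy as np

def cepstral_delay(x, fs, qmin):
    """Locate the dominant echo delay (in seconds) via the real cepstrum of x."""
    ceps = np.fft.irfft(np.log(np.abs(np.fft.rfft(x)) + 1e-12))
    lo = int(qmin * fs)                    # skip quefrencies inside the wavelet itself
    k = lo + np.argmax(ceps[lo:len(ceps) // 2])
    return k / fs

rng = np.random.default_rng(3)
fs, n = 1000, 2000
wavelet = rng.normal(size=50)              # short broadband stand-in for a "complex"
x = np.zeros(n)
x[200:250] += wavelet                      # reference ("maternal") complex
x[320:370] += 0.6 * wavelet                # attenuated copy 120 ms later ("fetal")
delay = cepstral_delay(x, fs, qmin=0.06)
```

The log-magnitude of the spectrum turns the multiplicative echo term into an additive ripple whose period encodes the delay; the inverse transform concentrates that ripple into a spike at 120 ms, the same order of delay quoted in the abstract.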
Equilibrium statistical-thermal models in high-energy physics
NASA Astrophysics Data System (ADS)
Tawfik, Abdel Nasser
2014-05-01
We review some recent highlights from the applications of statistical-thermal models to different experimental measurements and lattice QCD thermodynamics that have been made during the last decade. We start with a short review of the historical milestones on the path of constructing statistical-thermal models for heavy-ion physics. We discovered that Heinz Koppe had formulated, in 1948, an almost complete recipe for the statistical-thermal models. In 1950, Enrico Fermi generalized this statistical approach, in which he started with a general cross-section formula and inserted into it the simplifying assumptions about the matrix element of the interaction process that likely reflect many features of the high-energy reactions dominated by density in the phase space of final states. In 1964, Hagedorn systematically analyzed the high-energy phenomena using all tools of statistical physics and introduced the concept of limiting temperature based on the statistical bootstrap model. It turns out that, quite often, many-particle systems can be studied with the help of statistical-thermal methods. The analysis of yield multiplicities in high-energy collisions gives overwhelming evidence for chemical equilibrium in the final state. The strange particles might be an exception, as they are suppressed at lower beam energies; however, their relative yields fulfill statistical equilibrium as well. We review the equilibrium statistical-thermal models for particle production, fluctuations and collective flow in heavy-ion experiments. We also review their reproduction of the lattice QCD thermodynamics at vanishing and finite chemical potential. During the last decade, five conditions have been suggested to describe the universal behavior of the chemical freeze-out parameters. The higher order moments of multiplicity have been discussed; they offer deep insights into particle production and critical fluctuations. 
Therefore, we use them to describe the freeze-out parameters and suggest the location of the QCD critical endpoint. Various extensions have been proposed in order to take into consideration possible deviations from the ideal hadron gas. We highlight various types of interactions, dissipative properties and location dependences (spatial rapidity). Furthermore, we review three models combining hadronic with partonic phases: the quasi-particle model, the linear sigma model with Polyakov potentials, and the compressible bag model.
Wavefront-Guided Scleral Lens Correction in Keratoconus
Marsack, Jason D.; Ravikumar, Ayeswarya; Nguyen, Chi; Ticak, Anita; Koenig, Darren E.; Elswick, James D.; Applegate, Raymond A.
2014-01-01
Purpose: To examine the performance of state-of-the-art wavefront-guided scleral contact lenses (wfgSCLs) on a sample of keratoconic eyes, with emphasis on performance quantified with visual quality metrics; and to provide a detailed discussion of the process used to design, manufacture and evaluate wfgSCLs. Methods: Fourteen eyes of 7 subjects with keratoconus were enrolled and a wfgSCL was designed for each eye. High-contrast visual acuity and visual quality metrics were used to assess the on-eye performance of the lenses. Results: The wfgSCL provided statistically lower levels of both lower-order RMS (p < 0.001) and higher-order RMS (p < 0.02) than an intermediate spherical equivalent scleral contact lens. The wfgSCL provided lower levels of lower-order RMS than a normal group of well-corrected observers (p << 0.001). However, the wfgSCL does not provide less higher-order RMS than the normal group (p = 0.41). Of the 14 eyes studied, 10 successfully reached the exit criteria, achieving residual higher-order root mean square wavefront error (HORMS) less than or within 1 SD of the levels experienced by normal, age-matched subjects. In addition, measures of visual image quality (logVSX, logNS and logLIB) for the 10 eyes were well distributed within the range of values seen in normal eyes. However, visual performance as measured by high contrast acuity did not reach normal, age-matched levels, which is in agreement with prior results associated with the acute application of wavefront correction to KC eyes. Conclusions: Wavefront-guided scleral contact lenses are capable of optically compensating for the deleterious effects of higher-order aberration concomitant with the disease, and can provide visual image quality equivalent to that seen in normal eyes. Longer duration studies are needed to assess whether the visual system of the highly aberrated eye wearing a wfgSCL is capable of producing visual performance levels typical of the normal population. PMID:24830371
Hong, Peilong; Li, Liming; Liu, Jianji; Zhang, Guoquan
2016-03-29
Young's double-slit or two-beam interference is of fundamental importance for understanding various interference effects, in which the stationary phase difference between two beams plays the key role in the first-order coherence. Different from the case of first-order coherence, in high-order optical coherence the statistical behavior of the optical phase plays the key role. In this article, by employing a fundamental interfering configuration with two classical point sources, we show that the high-order optical coherence between two classical point sources can be actively designed by controlling the statistical behavior of the relative phase difference between the two point sources. Synchronous-position Nth-order subwavelength interference with an effective wavelength of λ/M was demonstrated, in which λ is the wavelength of the point sources and M is an integer not larger than N. Interestingly, we found that the synchronous-position Nth-order interference fringe fingerprints the statistical trace of the random phase fluctuations of the two classical point sources; therefore, it provides an effective way to characterize the statistical properties of phase fluctuations for incoherent light sources.
Groundwater nitrate contamination: Factors and indicators
Wick, Katharina; Heumesser, Christine; Schmid, Erwin
2012-01-01
Identifying significant determinants of groundwater nitrate contamination is critical in order to define sensible agri-environmental indicators that support the design, enforcement, and monitoring of regulatory policies. We use data from approximately 1200 Austrian municipalities to provide a detailed statistical analysis of (1) the factors influencing groundwater nitrate contamination and (2) the predictive capacity of the Gross Nitrogen Balance, one of the most commonly used agri-environmental indicators. We find that the percentage of cropland in a given region correlates positively with nitrate concentration in groundwater. Additionally, environmental characteristics such as temperature and precipitation are important co-factors. Higher average temperatures result in lower nitrate contamination of groundwater, possibly due to increased evapotranspiration. Higher average precipitation dilutes nitrates in the soil, further reducing groundwater nitrate concentration. Finally, we assess whether the Gross Nitrogen Balance is a valid predictor of groundwater nitrate contamination. Our regression analysis reveals that the Gross Nitrogen Balance is a statistically significant predictor for nitrate contamination. We also show that its predictive power can be improved if we account for average regional precipitation. The Gross Nitrogen Balance predicts nitrate contamination in groundwater more precisely in regions with higher average precipitation. PMID:22906701
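The kind of regression analysis described — testing whether the Gross Nitrogen Balance predicts nitrate concentration and whether precipitation modulates that relationship — can be sketched on synthetic data. All coefficients, units, and variable ranges below are invented for illustration; only the model structure mirrors the discussion.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1200                                  # roughly the number of municipalities
balance = rng.normal(40, 15, n)           # Gross Nitrogen Balance (invented scale)
precip = rng.normal(900, 200, n)          # mean annual precipitation (invented, mm)
# Synthetic response: the balance matters more where precipitation is higher
nitrate = (5 + 0.3 * balance
           + 0.0004 * balance * (precip - 900)
           + rng.normal(0, 5, n))

# OLS with an interaction between balance and (centred) precipitation
X = np.column_stack([np.ones(n), balance, precip, balance * (precip - 900)])
beta, *_ = np.linalg.lstsq(X, nitrate, rcond=None)
```

A positive interaction coefficient is exactly the pattern reported above: the nitrogen balance predicts groundwater nitrate more strongly in regions with higher average precipitation.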
Learning curve of thyroid fine-needle aspiration biopsy.
Penín, Manuel; Martín, M Ángeles; San Millán, Beatriz; García, Juana
2017-12-01
Fine-needle aspiration biopsy (FNAB) is the reference procedure for thyroid nodule evaluation. Its main limitation is inadequate samples, which should be less than 20%. To analyze the learning curve of the procedure, the results of a non-experienced endocrinologist (endocrinologist 2) were compared to those of an experienced one (endocrinologist 1). Sixty FNABs were analyzed from February to June 2016. Each endocrinologist made 2 punctures of every nodule in a random order. This order and the professional making every puncture were unknown to the pathologist who examined the samples. Endocrinologist 1 had a higher percentage of diagnoses than endocrinologist 2 (82% vs. 72%, P=.015). In the first 20 FNABs, the difference between both physicians was remarkable and statistically significant (80% vs. 50%, P=.047). In the following 20 FNABs, the difference narrowed and was not statistically significant (90% vs. 65%, P=.058). In the final 20 FNABs, the difference was minimal and not statistically significant (75% vs. 70%, P=.723). The learning curve of ultrasound-guided FNAB may be completed in a suitable environment by performing it at least 60 times. Although the guidelines recommend at least 3 punctures per nodule, 2 are enough to achieve an accurate percentage of diagnoses. Copyright © 2017 SEEN y SED. Published by Elsevier España, S.L.U. All rights reserved.
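The reported percentages for the first 20 FNABs (80% vs. 50% diagnostic) imply the 2×2 table below. Assuming a Pearson chi-square test without continuity correction was used (an assumption, since the abstract does not name the test), the computation reproduces a P value close to the reported .047.

```python
from scipy.stats import chi2_contingency

# 16/20 vs 10/20 diagnostic samples in the first 20 FNABs of each endocrinologist
table = [[16, 4],
         [10, 10]]
chi2, p, dof, expected = chi2_contingency(table, correction=False)
```

With the small counts involved, Yates' correction or Fisher's exact test would give a noticeably larger P value, which is worth keeping in mind when interpreting borderline results like these.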
NASA Astrophysics Data System (ADS)
di Luca, Alejandro; de Elía, Ramón; Laprise, René
2012-03-01
Regional Climate Models (RCMs) constitute the most commonly used method to perform affordable high-resolution regional climate simulations. The key issue in the evaluation of nested regional models is to determine whether RCM simulations improve the representation of climatic statistics compared to the driving data, that is, whether RCMs add value. In this study we examine a necessary condition that some climate statistics derived from the precipitation field must satisfy in order that the RCM technique can generate some added value: we focus on whether the climate statistics of interest contain some fine spatial-scale variability that would be absent on a coarser grid. The presence and magnitude of the fine-scale precipitation variance required to adequately describe a given climate statistic will then be used to quantify the potential added value (PAV) of RCMs. Our results show that the PAV of RCMs is much higher for short temporal scales (e.g., 3-hourly data) than for long temporal scales (16-day average data) due to the filtering resulting from the time-averaging process. PAV is higher in the warm season compared to the cold season due to the higher proportion of precipitation falling from small-scale weather systems in the warm season. In regions of complex topography, the orographic forcing induces an extra component of PAV, no matter the season or the temporal scale considered. The PAV is also estimated using high-resolution datasets based on observations, allowing the evaluation of the sensitivity of changing resolution in the real climate system. The results show that RCMs tend to reproduce relatively well the PAV compared to observations, although showing an overestimation of the PAV in the warm season and mountainous regions.
Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.
2002-01-01
An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi-3D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
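A one-variable analogue of the moment method illustrates the comparison with Monte Carlo. The function below is an invented stand-in for a CFD output, and its derivatives are approximated by finite differences rather than the paper's analytic sensitivity derivatives.

```python
import numpy as np

def f(x):
    """Invented stand-in for a scalar CFD output as a function of one input."""
    return np.exp(0.3 * x) * np.sin(x) + x ** 2

def dfdx(x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2 * h)            # central difference

def d2fdx2(x, h=1e-4):
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

mu, sigma = 1.0, 0.1                                  # normally distributed input

# Approximate statistical moments from sensitivity derivatives
mean_1st = f(mu)                                      # first-order mean
mean_2nd = f(mu) + 0.5 * d2fdx2(mu) * sigma ** 2      # second-order mean correction
var_1st = (dfdx(mu) * sigma) ** 2                     # first-order variance

# Monte Carlo reference
rng = np.random.default_rng(4)
samples = f(rng.normal(mu, sigma, 200_000))
mc_mean, mc_var = samples.mean(), samples.var()
```

The second-order mean correction captures the curvature-induced shift that the first-order estimate misses, which is the same validity check the abstract describes against Monte Carlo; the moment approximations degrade as the input standard deviation grows relative to the scale on which f is nearly linear.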
High-Order Local Pooling and Encoding Gaussians Over a Dictionary of Gaussians.
Li, Peihua; Zeng, Hui; Wang, Qilong; Shiu, Simon C K; Zhang, Lei
2017-07-01
Local pooling (LP) in configuration (feature) space proposed by Boureau et al. explicitly restricts similar features to be aggregated, which can preserve as much discriminative information as possible. At the time it appeared, this method combined with sparse coding achieved competitive classification results with only a small dictionary. However, its performance lags far behind the state-of-the-art results as only the zero-order information is exploited. Inspired by the success of high-order statistical information in existing advanced feature coding or pooling methods, we make an attempt to address the limitation of LP. To this end, we present a novel method called high-order LP (HO-LP) to leverage the information higher than the zero-order one. Our idea is intuitively simple: we compute the first- and second-order statistics per configuration bin and model them as a Gaussian. Accordingly, we employ a collection of Gaussians as visual words to represent the universal probability distribution of features from all classes. Our problem is naturally formulated as encoding Gaussians over a dictionary of Gaussians as visual words. This problem, however, is challenging since the space of Gaussians is not a Euclidean space but forms a Riemannian manifold. We address this challenge by mapping Gaussians into the Euclidean space, which enables us to perform coding with common Euclidean operations rather than complex and often expensive Riemannian operations. Our HO-LP preserves the advantages of the original LP: pooling only similar features and using a small dictionary. Meanwhile, it achieves very promising performance on standard benchmarks, with either conventional, hand-engineered features or deep learning-based features.
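The per-bin statistics at the core of HO-LP can be illustrated in one dimension: pool the features assigned to each configuration bin and summarize them by a mean (first order) and variance (second order), i.e., a Gaussian per bin. This sketch omits the multivariate case and the Riemannian-to-Euclidean mapping of the actual method; the feature values are invented:

```python
import statistics

def bin_gaussians(features, assignments, n_bins):
    """Summarize the features pooled into each configuration bin by a 1-D
    Gaussian: mean (first-order) and variance (second-order statistic)."""
    bins = {b: [] for b in range(n_bins)}
    for x, b in zip(features, assignments):
        bins[b].append(x)
    return {b: (statistics.fmean(v), statistics.pvariance(v))
            for b, v in bins.items() if v}

# Toy 1-D "features" falling into two configuration bins.
feats = [0.9, 1.1, 1.0, 5.0, 5.2, 4.8]
assign = [0, 0, 0, 1, 1, 1]
gaussians = bin_gaussians(feats, assign, n_bins=2)
assert abs(gaussians[0][0] - 1.0) < 1e-9   # bin-0 mean
assert abs(gaussians[1][0] - 5.0) < 1e-9   # bin-1 mean
```

In the full method these per-bin Gaussians are then encoded over a dictionary of Gaussian visual words after mapping to Euclidean space.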
Restoration of MRI data for intensity non-uniformities using local high order intensity statistics
Hadjidemetriou, Stathis; Studholme, Colin; Mueller, Susanne; Weiner, Michael; Schuff, Norbert
2008-01-01
MRI at high magnetic fields (>3.0 T) is complicated by strong inhomogeneous radio-frequency fields, sometimes termed the “bias field”. These lead to non-biological intensity non-uniformities across the image. They can complicate further image analysis such as registration and tissue segmentation. Existing methods for intensity uniformity restoration have been optimized for 1.5 T, but they are less effective for 3.0 T MRI, and not at all satisfactory for higher fields. Also, many of the existing restoration algorithms require a brain template or use a prior atlas, which can limit their practicality. In this study an effective intensity uniformity restoration algorithm has been developed based on non-parametric statistics of high order local intensity co-occurrences. These statistics are restored with a non-stationary Wiener filter. The algorithm also assumes a smooth non-uniformity and is stable. It does not require a prior atlas and is robust to variations in anatomy. In geriatric brain imaging it is robust to variations such as enlarged ventricles and low contrast to noise ratio. The co-occurrence statistics improve robustness to whole head images with pronounced non-uniformities present in high field acquisitions. Its significantly improved performance and lower time requirements have been demonstrated by comparing it to the very commonly used N3 algorithm on BrainWeb MR simulator images as well as on real 4 T human head images. PMID:18621568
The impact of diabetes mellitus on the course and outcome of pregnancy during a 5-year follow-up.
Mitrović, Milena; Stojić, Siniša; Tešić, Dragan S; Popović, Djordje; Rankov, Olivera; Naglić, Dragana Tomić; Paro, Jovanka Novaković; Pejin, Radoslav; Bulatović, Sanja; Veljić, Mašsa Todorović; Zavišić, Branka Kovarev
2014-10-01
Women with diabetes, especially diabetes type 1, have worse pregnancy outcomes, as well as increased incidence of spontaneous abortions, pre-eclampsia, fetal macrosomia, preterm delivery, congenital anomalies and perinatal mortality. The aim of this study was to analyze the course and outcome of pregnancy in patients with diabetes compared with a group of healthy women regarding preterm delivery, perinatal morbidity and mortality. A further aim was to compare pregnancy outcomes in patients with pre-existing type 1 diabetes and patients with gestational and type 2 diabetes. This retrospective study included 156 diabetic women treated at the Clinic of Endocrinology, Diabetes and Metabolic Diseases and Gynecology and Obstetrics Clinic of the Clinical Center of Vojvodina from 2006 to 2010. There were 94 patients with gestational diabetes, 48 with type 1 diabetes, and 14 patients with type 2 diabetes. The control group included 106 healthy women hospitalized at the Gynecology and Obstetrics Clinic. The women with type 1 diabetes presented with a statistically significantly higher incidence of cesarean section than those without diabetes, or with type 2 or gestational diabetes (p < 0.0001); the women with type 1 diabetes delivered at an earlier week of gestation (WG) compared with women without diabetes, or with type 2 or gestational diabetes (p = 0.0017 and p = 0.02, respectively). The incidence of perinatal morbidity: hypoglycemia (p < 0.001), pathological jaundice (p = 0.0021), and other neonatal pathologies at birth (p = 0.0031), was statistically significantly higher, and Apgar scores after 1 minute (p = 0.0142) and after 5 minutes (p = 0.0003) were statistically significantly lower, in the patients with diabetes compared to the healthy women. The women with type 2 and gestational diabetes were statistically significantly older than those with type 1 diabetes (p = 0.001).
A higher incidence of fetal macrosomia in the women with gestational and type 2 diabetes compared to those with type 1 diabetes was at the borderline of statistical significance (p = 0.07), whereas the incidence of neonatal hypoglycemia was statistically significantly higher in the patients with type 1 diabetes (p < 0.0001). Glycosylated hemoglobin (HbA1c) levels were statistically significantly higher in the diabetic women giving birth at or before the 36th week of gestation (p = 0.0087), but there were no differences in HbA1c levels with regard to fetal macrosomia (p = 0.45) or congenital abnormalities (p = 0.32). The results of our study show a higher incidence of perinatal fetal morbidity (hypoglycemia, jaundice, respiratory distress syndrome) in the patients with type 1, type 2 and gestational diabetes than in the healthy controls. We also found a higher incidence of cesarean section in the patients with type 1 diabetes than in those with type 2 or gestational diabetes and the healthy controls. Although delivery in the patients with type 1, type 2 and gestational diabetes occurred approximately one to two weeks earlier than in the healthy controls, there was no statistically significant difference in the incidence of preterm delivery (≤ 36th week of gestation) between the women with diabetes and the healthy controls. Preterm delivery was associated with poorer glycaemic control, reflected in higher HbA1c values in the third trimester. The risk of adverse pregnancy outcomes may be minimized by adequate preconception counseling of diabetic patients, early diagnosis of diabetes in pregnancy in order to achieve glycemic control during organogenesis and throughout pregnancy, and the teamwork of endocrinologists, gynecologists and pediatricians.
Exponential order statistic models of software reliability growth
NASA Technical Reports Server (NTRS)
Miller, D. R.
1985-01-01
Failure times of a software reliability growth process are modeled as order statistics of independent, nonidentically distributed exponential random variables. The Jelinski-Moranda, Goel-Okumoto, Littlewood, Musa-Okumoto Logarithmic, and Power Law models are all special cases of Exponential Order Statistic Models, but many further examples exist. Various characterizations, properties and examples of this class of models are developed and presented.
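A realization of an Exponential Order Statistic model is easy to simulate: draw independent exponentials with (generally unequal) rates and sort them. The rates below are arbitrary illustrative values:

```python
import random

def eos_failure_times(rates, seed=42):
    """One realization of an Exponential Order Statistic model: the observed
    failure times are the order statistics of independent exponential
    random variables with (generally unequal) rates."""
    rng = random.Random(seed)
    return sorted(rng.expovariate(r) for r in rates)

# Illustrative rates; Jelinski-Moranda corresponds to the special case in
# which every fault contributes the same per-fault rate.
times = eos_failure_times([0.5, 1.0, 2.0, 4.0])
assert times == sorted(times) and all(t > 0 for t in times)
```

The named models differ only in how the individual rates are parameterized.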
Invigorating self-regulated learning strategies of mathematics among higher education students
NASA Astrophysics Data System (ADS)
Chechi, Vijay Kumar; Bhalla, Jyoti
2017-07-01
The global market is transforming at an ever-increasing pace. Consequently, the work-skills challenges that current students will encounter throughout their lifetimes will differ drastically from those of the present and past, offering new opportunities and posing new challenges. To deal with tomorrow's opportunities and challenges, students need higher-order cognitive skills that are substantially different from those needed in the past. To accomplish this, students must be academically self-regulated, as academic self-regulation plays a vital role in academic success, particularly in higher education. Students must be prepared to take responsibility for their own learning. Self-regulation denotes the activities and thinking processes that learners can engage in during their learning. It encompasses a number of inter-dependent aspects, viz. affective beliefs, cognition and meta-cognitive skills [1], and helps learners make sagacious use of their intellect and expertise [2]. Statistics show that student achievement in mathematics has been persistently poor, even though mathematics is one of the most important subjects in architecture, agriculture, medicine, pharmacy and especially engineering. In spite of its importance, most students consider it a dull and dry subject, and their performance is remarkably low and alarming. Therefore, the present paper highlights various factors affecting the performance of higher education students in mathematics and suggests self-regulated learning strategies that can act as a boon for higher education students.
[Seroprevalence of Q fever among the adult population of Lanzarote (Canary Islands)].
Pascual Velasco, F; Rodríguez Pérez, J C; Otero Ferrio, I; Borobio Enciso, M V
1992-09-01
Q fever is an endemic zoonosis in the Canary Islands. In 1986, a pilot study detected residual antibodies to the infection in 3% of the population of Lanzarote. In 1989, we performed a new study to assess the seroprevalence of Q fever among the adult native population of the island. We studied 390 human sera obtained from a statistically representative sample; ages ranged from 30 to 64 years. Of the 390 sera, 196 (50.25%) were obtained from men and 194 (49.74%) from women. The serological technique used was complement fixation with phase II Coxiella burnetii antigens; titres equal to or higher than 1/8 were considered positive. No statistically significant differences in seroprevalence were observed with regard to sex, age, or residence in or outside the island's capital city. However, when the island's territory was divided into three areas (north, centre and south) and their respective seroprevalences assessed independently, we observed relatively higher seroprevalences in the outlying areas (13.3% in the north and 13.5% in the south) than in the central area (4.7%), although only the higher seroprevalence in the south reached statistical significance when compared with the mean prevalence. These observations probably indicate that, although Q fever occurs all over the island, infection is more frequent in the rural areas of Lanzarote, in the north and the south, than in the central area, where the main urban areas are located.
NASA Astrophysics Data System (ADS)
Oberlack, Martin; Rosteck, Andreas; Avsarkisov, Victor
2013-11-01
Text-book knowledge proclaims that Lie symmetries such as the Galilean transformation lie at the heart of fluid dynamics. These important properties also carry over to the statistical description of turbulence, i.e. to the Reynolds stress transport equations and their generalization, the multi-point correlation equations (MPCE). Interestingly, the MPCE admit a much larger, in fact infinite-dimensional, set of symmetries, subsequently named statistical symmetries. Most importantly, these new symmetries have important consequences for our understanding of turbulent scaling laws. The symmetries form the essential foundation for constructing exact solutions to the infinite set of MPCE, which in turn are identified as classical and new turbulent scaling laws. Examples of various classical and new shear-flow scaling laws, including higher order moments, will be presented. New scaling laws have even been forecast from these symmetries and in turn validated by DNS. Turbulence modellers have implicitly recognized at least one of the statistical symmetries, as this is the basis for the usual log-law, which has been employed for calibrating essentially all engineering turbulence models. An obvious conclusion is to make turbulence models generally consistent with the new statistical symmetries.
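The log-law mentioned above is the prototypical turbulent scaling law, u+ = (1/κ) ln y+ + B. A small sketch generates noiseless log-law data and recovers κ from the slope of u+ versus ln y+ (κ and B set to their conventional values purely for illustration):

```python
import math

# Conventional constants, for illustration only.
kappa, B = 0.41, 5.0

# Noiseless log-law data u+ = (1/kappa) ln y+ + B on a log-spaced grid.
yplus = [30.0 * 1.3 ** i for i in range(12)]
uplus = [math.log(y) / kappa + B for y in yplus]

# Least-squares slope of u+ versus ln(y+) recovers 1/kappa.
xs = [math.log(y) for y in yplus]
n = len(xs)
mx, mu = sum(xs) / n, sum(uplus) / n
slope = (sum((x - mx) * (u - mu) for x, u in zip(xs, uplus))
         / sum((x - mx) ** 2 for x in xs))
assert abs(1.0 / slope - kappa) < 1e-9
```

Fits of this form are how the log-law, and hence implicitly the underlying statistical symmetry, enters the calibration of engineering turbulence models.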
A scoring system for ascertainment of incident stroke; the Risk Index Score (RISc).
Kass-Hout, T A; Moyé, L A; Smith, M A; Morgenstern, L B
2006-01-01
The main objective of this study was to develop and validate a computer-based statistical algorithm that could be translated into a simple scoring system in order to ascertain incident stroke cases using hospital admission medical records data. The Risk Index Score (RISc) algorithm was developed using data collected prospectively by the Brain Attack Surveillance in Corpus Christi (BASIC) project, 2000. The validity of RISc was evaluated by estimating the concordance of scoring system stroke ascertainment to stroke ascertainment by physician and/or abstractor review of hospital admission records. RISc was developed on 1718 randomly selected patients (training set) and then statistically validated on an independent sample of 858 patients (validation set). A multivariable logistic model was used to develop RISc and subsequently evaluated by goodness-of-fit and receiver operating characteristic (ROC) analyses. The higher the value of RISc, the higher the patient's risk of potential stroke. The study showed RISc was well calibrated and discriminated those who had potential stroke from those that did not on initial screening. In this study we developed and validated a rapid, easy, efficient, and accurate method to ascertain incident stroke cases from routine hospital admission records for epidemiologic investigations. Validation of this scoring system was achieved statistically; however, clinical validation in a community hospital setting is warranted.
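A scoring system of the RISc type is typically derived by turning fitted logistic-regression coefficients into integer points that can be summed by hand. The sketch below uses hypothetical coefficients and indicators, not the published RISc weights:

```python
import math

def logistic_prob(features, coefs, intercept):
    """Predicted stroke probability from a fitted logistic model."""
    z = intercept + sum(c * x for c, x in zip(coefs, features))
    return 1.0 / (1.0 + math.exp(-z))

def risk_index(features, coefs, intercept):
    """Integer score: scale the linear predictor so chart reviewers can
    sum whole points by hand (hypothetical weights, not the RISc ones)."""
    z = intercept + sum(c * x for c, x in zip(coefs, features))
    return round(10 * z)

coefs, intercept = [1.2, 0.8, -0.5], -2.0   # invented coefficients
patient = [1, 1, 0]                         # binary admission-record indicators
p = logistic_prob(patient, coefs, intercept)
assert abs(p - 0.5) < 1e-12
assert risk_index(patient, coefs, intercept) == 0
```

A higher score corresponds to a higher predicted probability, matching the monotone relationship described in the abstract; calibration and ROC analysis would then be run on the score exactly as on the underlying model.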
The three dimensional distribution of chromium and nickel alloy welding fumes.
Mori, T; Matsuda, A; Akashi, S; Ogata, M; Takeoka, K; Yoshinaka, M
1991-08-01
In the present study, the fumes generated by manual metal arc (MMA) and submerged metal arc (SMA) welding of low-temperature service steel, and the chromium and nickel percentages in these fumes, were measured at various horizontal distances and vertical heights from the arc in order to obtain a three-dimensional distribution. The MMA welding fume concentrations were significantly higher than the SMA welding fume concentrations. Horizontally, the highest fume concentration was found directly above the arc. Vertically, the fume concentration was highest at 50 cm height and fell by half at 150 cm; the concentration at 250 cm scarcely differed from that at 150 cm. The vertical distribution of the chromium concentration was analogous to that of the fume concentration, and no statistically significant difference in the chromium percentages was found between heights. Differences in nickel concentration within each welding process were not statistically significant, but the nickel percentages in the SMA welding fumes were statistically higher than in the MMA welding fumes. Horizontally, the highest nickel concentration was found directly above the arc; vertically, it occurred in the fume samples collected at 50 cm height, but the nickel percentage in the fumes increased with height.
Silva, Bruna Mariáh da S E; Morales, Gundisalvo P; Gutjahr, Ana Lúcia N; Freitas Faial, Kelson do C; Carneiro, Bruno S
2018-03-14
In this study, trace element concentrations were measured in chelipod and gill samples of the crab U. cordatus by inductively coupled plasma optical emission spectrometry (ICP OES), and the average concentrations in the two structures were statistically compared. Gill concentrations of Cu and Zn were higher in female crabs, while chelipod Pb concentrations were higher in males. The Zn concentration in crabs from Curuçá City was higher than that recommended by health agencies, but Zn and Cu contributed only 10% and 23%, respectively, of the provisional tolerable daily intake (PTDI). The bioaccumulation factor was higher than 1 for Cu (gills and chelipods) and Zn (chelipods only), which suggests bioaccumulation of these elements. Further metallomic and oxidative-stress analyses are suggested in order to evaluate possible protein and/or enzymatic biomarkers of toxicity.
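The two screening quantities used above, the bioaccumulation factor and the PTDI contribution, are simple ratios; a sketch with invented concentrations (not the measured Curuçá values):

```python
def bioaccumulation_factor(c_organism, c_environment):
    """BAF = tissue concentration over environmental concentration;
    BAF > 1 suggests the tissue accumulates the element."""
    return c_organism / c_environment

def ptdi_contribution(intake_mg, ptdi_mg):
    """Percentage of the provisional tolerable daily intake used up."""
    return 100.0 * intake_mg / ptdi_mg

# Invented numbers for illustration only.
assert bioaccumulation_factor(12.0, 4.0) > 1      # would flag accumulation
assert round(ptdi_contribution(6.0, 60.0)) == 10  # 10% of the PTDI
```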
Devos, Olivier; Downey, Gerard; Duponchel, Ludovic
2014-04-01
Classification is an important task in chemometrics. For several years now, support vector machines (SVMs) have proven powerful for infrared spectral data classification. However, such methods require optimisation of parameters in order to control the risk of overfitting and the complexity of the boundary. Furthermore, it is established that the prediction ability of classification models can be improved by pre-processing that removes unwanted variance from the spectra. In this paper we propose a new methodology based on a genetic algorithm (GA) for the simultaneous optimisation of SVM parameters and pre-processing (GENOPT-SVM). The method has been tested for discriminating the geographical origin of Italian olive oil (Ligurian and non-Ligurian) on the basis of near-infrared (NIR) or mid-infrared (FTIR) spectra. Different classification models (PLS-DA, SVM with mean-centred data, GENOPT-SVM) were tested and statistically compared using McNemar's test. For both datasets, SVM with optimised pre-processing gave models with higher accuracy than those obtained with PLS-DA on pre-processed data. In the case of the NIR dataset, most of this accuracy improvement (86.3% compared with 82.8% for PLS-DA) was achieved using only a single pre-processing step. For the FTIR dataset, three optimised pre-processing steps were required to obtain an SVM model with a significant accuracy improvement (82.2%) over the PLS-DA model (78.6%). Furthermore, this study demonstrates that even SVM models have to be developed on the basis of well-corrected spectral data in order to obtain higher classification rates. Copyright © 2013 Elsevier Ltd. All rights reserved.
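The simultaneous GA optimisation of pre-processing and classifier parameters can be sketched with a deliberately tiny stand-in: a one-parameter threshold classifier instead of an SVM, and mean-centring as the single candidate pre-processing step. Everything here is illustrative, not the GENOPT-SVM implementation:

```python
import random

random.seed(7)

# Toy "spectra": an informative last channel plus a random baseline offset
# shared by all channels (the unwanted variance pre-processing removes).
def make_sample(cls):
    base = random.uniform(-5.0, 5.0)
    return [base + 1.0, base + 2.0, base + 6.0 * cls], cls

data = [make_sample(cls) for cls in (0, 1) for _ in range(30)]

def accuracy(genome):
    centre, thr = genome
    correct = 0
    for spec, cls in data:
        s = [v - sum(spec) / len(spec) for v in spec] if centre else spec
        correct += int((s[-1] > thr) == bool(cls))
    return correct / len(data)

def ga(pop_size=20, gens=30):
    """Genome = (apply mean-centring?, decision threshold)."""
    pop = [(random.random() < 0.5, random.uniform(-5, 5)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=accuracy, reverse=True)
        elite = pop[:pop_size // 2]
        children = []
        for c, t in random.choices(elite, k=pop_size - len(elite)):
            if random.random() < 0.1:
                c = not c                    # occasionally toggle pre-processing
            children.append((c, t + random.gauss(0, 0.5)))
        pop = elite + children
    return max(pop, key=accuracy)

best = ga()
assert accuracy(best) > 0.9  # centring plus a fitted threshold separates classes
```

The same loop carries over directly when the genome also encodes SVM hyperparameters and an ordered chain of pre-processing steps.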
NASA Astrophysics Data System (ADS)
Ladaniuk, Anatolii; Ivashchuk, Viacheslav; Kisała, Piotr; Askarova, Nursanat; Sagymbekova, Azhar
2015-12-01
Diversification of an enterprise's products entails changes at the higher levels of the management hierarchy, leading to the correction of tasks and changes in the operating schedules of production plans. The usual solution, a combination of enterprise resource planning and manufacturing execution systems, often has exclusively statistical content. The development of a decision support system that uses knowledge about the subject area to estimate capabilities and the order of operations of a production object is therefore currently relevant.
Statistics Report on TEQSA Registered Higher Education Providers, 2017
ERIC Educational Resources Information Center
Australian Government Tertiary Education Quality and Standards Agency, 2017
2017-01-01
The "Statistics Report on TEQSA Registered Higher Education Providers" ("the Statistics Report") is the fourth release of selected higher education sector data held by is the fourth release of selected higher education sector data held by the Australian Government Tertiary Education Quality and Standards Agency (TEQSA) for its…
Residual confounding explains the association between high parity and child mortality.
Kozuki, Naoko; Sonneveldt, Emily; Walker, Neff
2013-01-01
This study used data from recent Demographic and Health Surveys (DHS) to examine the impact of high parity on under-five and neonatal mortality. The analyses used various techniques to attempt to eliminate selection issues, including stratification of analyses by mothers' completed fertility. We analyzed DHS datasets from 47 low- and middle-income countries. We only used data from women who were age 35 or older at the time of survey to have a measure of their completed fertility. We ran log-binomial regression by country to calculate relative risk between parity and both under-five and neonatal mortality, controlled for wealth quintile, maternal education, urban versus rural residence, maternal age at first birth, calendar year (to control for possible time trends), and birth interval. We then controlled for maternal background characteristics even further by using mothers' completed fertility as a proxy measure. We found a statistically significant association between high parity and child mortality. However, this association is most likely not physiological, and can be largely attributed to the difference in background characteristics of mothers who complete reproduction with high fertility versus low fertility. Children of high completed fertility mothers have statistically significantly increased risk of death compared to children of low completed fertility mothers at every birth order, even after controlling for available confounders (i.e. among children of birth order 1, adjusted RR of under-five mortality 1.58, 95% CI: 1.42, 1.76). There appears to be residual confounders that put children of high completed fertility mothers at higher risk, regardless of birth order. When we examined the association between parity and under-five mortality among mothers with high completed fertility, it remained statistically significant, but negligible in magnitude (i.e. adjusted RR of under-five mortality 1.03, 95% CI: 1.02-1.05).
Our analyses strongly suggest that the observed increased risk of mortality associated with high parity births is not driven by a physiological link between parity and mortality. We found that at each birth order, children born to women who have high fertility at the end of their reproductive period are at significantly higher mortality risk than children of mothers who have low fertility, even after adjusting for available confounders. With each unit increase in birth order, a larger proportion of births at the population level belongs to mothers with these adverse characteristics correlated with high fertility. Hence it appears as if mortality rates go up with increasing parity, but not for physiological reasons.
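The relative risks quoted above come from comparing event proportions between exposed and unexposed groups; as a sketch, the 2×2 counts below are invented so that the ratio reproduces the reported RR of 1.58 for first-born children:

```python
def relative_risk(a, b, c, d):
    """RR from a 2x2 table: event proportion in the exposed group
    (a events, b non-events) over that in the unexposed group (c, d)."""
    return (a / (a + b)) / (c / (c + d))

# Invented counts: deaths/survivors among first-born children of
# high- versus low-completed-fertility mothers.
rr = relative_risk(158, 842, 100, 900)
assert abs(rr - 1.58) < 1e-9
```

The adjusted RRs in the study additionally condition on the listed confounders via regression, but the interpretation of the ratio is the same.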
Mastropasqua, L; Toto, L; Zuppardi, E; Nubile, M; Carpineto, P; Di Nicola, M; Ballone, E
2006-01-01
To evaluate the refractive and aberrometric outcome of wavefront-guided photorefractive keratectomy (PRK) compared to standard PRK in myopic patients. Fifty-six eyes of 56 patients were included in the study and randomly divided into two groups. The study group consisted of 28 eyes with a mean spherical equivalent (SE) of -2.25+/-0.76 diopters (D) (range: -1.5 to -3.5 D) treated with wavefront-guided PRK using the Zywave ablation profile and the Bausch & Lomb Technolas 217z excimer laser (Zyoptix system), and the control group included 28 eyes with a SE of -2.35+/-1.01 D (range: -1.5 to -3.5 D) treated with standard PRK (PlanoScan ablation) using the same laser. A Zywave aberrometer was used to analyze and calculate the root-mean-square (RMS) of total high order aberrations (HOA) and Zernike coefficients of the third and fourth order before and after surgery (over a 6-month follow-up period) in both groups. Preoperative and postoperative SE, uncorrected visual acuity (UCVA), and best-corrected visual acuity (BCVA) were evaluated in all cases. There was a high correlation between achieved and intended correction. The differences between the two treatment groups were not statistically significant for UCVA, BCVA, or SE cycloplegic refraction. Postoperatively, the RMS value of high order aberrations was raised in both groups. At the 6-month control, it had increased on average by a factor of 1.17 in the Zyoptix PRK group and 1.54 in the PlanoScan PRK group (p=0.22). In the Zyoptix group there was a decrease in coma aberration, while in the PlanoScan group this third order aberration increased. The difference between postoperative and preoperative values between the two groups was statistically significant for coma aberration (p=0.013). No statistically significant difference was observed for spherical-like aberration between the two groups.
In the study group, eyes with a low amount of preoperative aberrations (HOA RMS lower than the median value of 0.28 μm) showed an increase in HOA RMS, while eyes with RMS higher than 0.28 μm showed a decrease (p<0.05). Zyoptix wavefront-guided PRK is as safe and efficacious for the correction of myopia and myopic astigmatism as PlanoScan PRK. Moreover, this technique induces a smaller increase of third order coma aberration compared to standard PRK. The use of Zyoptix wavefront-guided PRK is particularly indicated in eyes with higher preoperative RMS values.
NASA Astrophysics Data System (ADS)
Yousefzadeh, Hoorvash Camilia; Lecomte, Roger; Fontaine, Réjean
2012-06-01
A fast Wiener filter-based crystal identification (WFCI) algorithm was recently developed to discriminate crystals with close scintillation decay times in phoswich detectors. Despite the promising performance of WFCI, the influence of various physical factors and electrical noise sources of the data acquisition chain (DAQ) on the crystal identification process was not fully investigated. This paper examines the effect of different noise sources, such as photon statistics, avalanche photodiode (APD) excess multiplication noise, and front-end electronic noise, as well as the influence of different shaping filters on the performance of the WFCI algorithm. To this end, a PET-like signal simulator based on a model of the LabPET DAQ, a small animal APD-based digital PET scanner, was developed. Simulated signals were generated under various noise conditions with CR-RC shapers of order 1, 3, and 5 having different time constants (τ). Applying the WFCI algorithm to these simulated signals showed that the non-stationary Poisson photon statistics is the main contributor to the identification error of WFCI algorithm. A shaping filter of order 1 with τ = 50 ns yielded the best WFCI performance (error 1%), while a longer shaping time of τ = 100 ns slightly degraded the WFCI performance (error 3%). Filters of higher orders with fast shaping time constants (10-33 ns) also produced good WFCI results (error 1.4% to 1.6%). This study shows the advantage of the pulse simulator in evaluating various DAQ conditions and confirms the influence of the detection chain on the WFCI performance.
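The shaping filters studied above can be explored with a simple numerical sketch: convolve an exponential scintillation pulse with the impulse response of an order-n CR-RC shaper, (t/τ)^n e^{-t/τ}, and compare the resulting peak positions for two decay constants. The decay times are loosely LYSO/LGSO-like but purely illustrative, and this is a plain convolution, not the LabPET DAQ model:

```python
import math

def crrc_response(n, tau, decay, dt=2.0, length=400):
    """Numerically convolve an exponential scintillation pulse (time constant
    `decay`) with the impulse response (t/tau)^n * exp(-t/tau) of an
    order-n CR-RC shaper; times in the same (arbitrary ns-like) units."""
    t = [i * dt for i in range(length)]
    pulse = [math.exp(-x / decay) for x in t]
    shaper = [(x / tau) ** n * math.exp(-x / tau) for x in t]
    out = [sum(pulse[j] * shaper[i - j] for j in range(i + 1)) * dt
           for i in range(length)]
    return out

# Two crystals with close decay times, as in a phoswich detector.
fast = crrc_response(n=1, tau=50.0, decay=40.0)
slow = crrc_response(n=1, tau=50.0, decay=65.0)
peak_fast = fast.index(max(fast))
peak_slow = slow.index(max(slow))
assert peak_slow >= peak_fast  # the slower scintillator peaks later
```

The closeness of the two shaped pulses is exactly what makes crystal identification hard once photon statistics and electronic noise are added on top.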
High-order statistics of weber local descriptors for image representation.
Han, Xian-Hua; Chen, Yen-Wei; Xu, Gang
2015-06-01
Highly discriminant visual features play a key role in different image classification applications. This study aims to realize a method for extracting highly discriminant features from images by exploring a robust local descriptor inspired by Weber's law. The investigated local descriptor is based on the fact that human perception of a pattern depends not only on the absolute intensity of the stimulus but also on the relative variance of the stimulus. Therefore, we first transform the original stimulus (the images in our study) into a differential excitation domain according to Weber's law, and then explore a local patch, called a micro-Texton, in the transformed domain as the Weber local descriptor (WLD). Furthermore, we propose to model the Weber local descriptors with a parametric probability process and to extract higher-order statistics of the model parameters for image representation. The proposed strategy can adaptively characterize the WLD space using a generative probability model and then learn the parameters to better fit the training space, which leads to a more discriminant representation of images. In order to validate the efficiency of the proposed strategy, we apply it to three different image classification applications: texture, food image and HEp-2 cell pattern recognition. The results validate that our strategy has advantages over the state-of-the-art approaches.
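The differential-excitation step of the WLD follows directly from Weber's law: the stimulus is the centre pixel intensity, the perceived change is the summed difference to its neighbours relative to the centre, compressed by an arctangent. A 3×3 sketch with invented grey levels:

```python
import math

def differential_excitation(patch):
    """Weber differential excitation of a 3x3 patch: arctan of the summed
    intensity differences to the centre pixel, relative to the centre."""
    c = patch[1][1]
    if c == 0:
        return 0.0
    diff_sum = sum(patch[i][j] - c for i in range(3) for j in range(3))
    return math.atan(diff_sum / c)

# Invented 3x3 grey-level patch.
patch = [[52, 50, 51],
         [49, 50, 50],
         [50, 51, 50]]
xi = differential_excitation(patch)
assert abs(xi - math.atan(3 / 50)) < 1e-12
```

Micro-Textons are then sampled from the image after this transform, and the parametric model is fit to the resulting descriptor population.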
An Intercomparison of the Dynamical Cores of Global Atmospheric Circulation Models for Mars
NASA Technical Reports Server (NTRS)
Hollingsworth, Jeffery L.; Bridger, Alison F. C.; Haberle, Robert M.
1998-01-01
This is a Final Report for a Joint Research Interchange (JRI) between NASA Ames Research Center and San Jose State University, Department of Meteorology. The focus of this JRI has been to evaluate the dynamical 'cores' of two global atmospheric circulation models for Mars that are in operation at the NASA Ames Research Center. The two global circulation models in use are fundamentally different: one uses spherical harmonics in its horizontal representation of field variables; the other uses finite differences on a uniform longitude-latitude grid. Several simulations have been conducted to assess how the dynamical processors of each of these circulation models perform using identical 'simple physics' parameterizations. A variety of climate statistics (e.g., time-mean flows and eddy fields) have been compared for realistic solstitial mean basic states. Results of this research have demonstrated that the two Mars circulation models with completely different spatial representations and discretizations produce rather similar circulation statistics for first-order meteorological fields, suggestive of a tendency for convergence of numerical solutions. Second and higher-order fields can, however, vary significantly between the two models.
UniEnt: uniform entropy model for the dynamics of a neuronal population
NASA Astrophysics Data System (ADS)
Hernandez Lahme, Damian; Nemenman, Ilya
Sensory information and motor responses are encoded in the brain in a collective spiking activity of a large number of neurons. Understanding the neural code requires inferring statistical properties of such collective dynamics from multicellular neurophysiological recordings. Questions of whether synchronous activity or silence of multiple neurons carries information about the stimuli or the motor responses are especially interesting. Unfortunately, detection of such high order statistical interactions from data is especially challenging due to the exponentially large dimensionality of the state space of neural collectives. Here we present UniEnt, a method for the inference of strengths of multivariate neural interaction patterns. The method is based on the Bayesian prior that makes no assumptions (uniform a priori expectations) about the value of the entropy of the observed multivariate neural activity, in contrast to popular approaches that maximize this entropy. We then study previously published multi-electrode recordings data from salamander retina, exposing the relevance of higher order neural interaction patterns for information encoding in this system. This work was supported in part by Grants JSMF/220020321 and NSF/IOS/1208126.
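The difficulty the abstract describes — higher-order interactions hiding in an exponentially large pattern space — can be illustrated with a small sketch (this is not the UniEnt algorithm, which is Bayesian; all names and parameters below are illustrative). A plug-in entropy estimate of binary "spike words" shows how a shared input to several neurons lowers the joint entropy of the population below that of statistically independent neurons with comparable firing rates:

```python
import numpy as np

def pattern_entropy(words):
    """Plug-in (maximum-likelihood) entropy, in bits, of binary spike words."""
    _, counts = np.unique(words, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(11)
T, N = 50000, 3  # time bins x neurons (toy sizes)

# independent neurons firing with probability 0.2 per bin
indep = (rng.random((T, N)) < 0.2).astype(int)

# neurons sharing a common drive: with probability 0.1 all fire together
drive = rng.random((T, 1)) < 0.1
corr = ((rng.random((T, N)) < 0.15) | drive).astype(int)

H_indep = pattern_entropy(indep)  # close to 3 * H(Bernoulli(0.2))
H_corr = pattern_entropy(corr)    # reduced by the shared-input correlations
```

The entropy gap between the two populations is exactly the kind of signal that maximum-entropy and UniEnt-style models try to attribute to specific interaction patterns.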
NASA Astrophysics Data System (ADS)
Romenskyy, Maksym; Lobaskin, Vladimir
2013-03-01
We study dynamic self-organisation and order-disorder transitions in a two-dimensional system of self-propelled particles. Our model is a variation of the Vicsek model, where particles align their motion with that of their neighbours but repel each other at short distances. We use computer simulations to measure the orientational order parameter for particle velocities as a function of the intensity of internal noise or particle density. We show that in addition to the transition to an ordered state on increasing the particle density, as reported previously, there exists a transition into a disordered phase at higher densities, which can be attributed to the destructive action of the repulsions. We demonstrate that the transition into the ordered phase is accompanied by the onset of algebraic behaviour of the two-point velocity correlation function and by a non-monotonic variation of the velocity relaxation time. The critical exponent for the decay of the velocity correlation function in the ordered phase depends on particle concentration at low densities but assumes a universal value in more dense systems.
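The orientational order parameter the authors measure can be reproduced with a minimal alignment-only Vicsek sketch (no short-range repulsion, so this does not show their high-density disordering; box size, noise amplitude, and speed below are arbitrary choices, not the paper's):

```python
import numpy as np

def vicsek_step(pos, theta, L, r=1.0, eta=0.3, v0=0.03, rng=None):
    """One update of a minimal Vicsek model: each particle adopts the mean
    heading of neighbours within radius r, plus uniform angular noise."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(theta)
    new_theta = np.empty(n)
    for i in range(n):
        d = pos - pos[i]
        d -= L * np.round(d / L)          # periodic minimum-image convention
        nb = (d ** 2).sum(axis=1) < r ** 2
        # circular mean of neighbour headings (self included)
        new_theta[i] = np.arctan2(np.sin(theta[nb]).mean(),
                                  np.cos(theta[nb]).mean())
    new_theta += eta * (rng.random(n) - 0.5) * 2 * np.pi
    vel = v0 * np.column_stack([np.cos(new_theta), np.sin(new_theta)])
    return (pos + vel) % L, new_theta

def polar_order(theta):
    """Orientational order parameter: 1 = aligned, ~0 = disordered."""
    return np.hypot(np.cos(theta).mean(), np.sin(theta).mean())

rng = np.random.default_rng(0)
L, n = 5.0, 100
pos = rng.random((n, 2)) * L
theta = rng.random(n) * 2 * np.pi
for _ in range(50):
    pos, theta = vicsek_step(pos, theta, L, eta=0.1, rng=rng)
order_low_noise = polar_order(theta)      # high order at low noise / high density
```

Sweeping `eta` or the density `n / L**2` and recording `polar_order` traces out the order-disorder transition curve.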
A Test of Macromolecular Crystallization in Microgravity: Large, Well-Ordered Insulin Crystals
NASA Technical Reports Server (NTRS)
Borgstahl, Gloria E. O.; Vahedi-Faridi, Ardeschir; Lovelace, Jeff; Bellamy, Henry D.; Snell, Edward H.; Whitaker, Ann F. (Technical Monitor)
2001-01-01
Crystals of insulin grown in microgravity on space shuttle mission STS-95 were extremely well-ordered and unusually large (many > 2 mm). The physical characteristics of six microgravity and six earth-grown crystals were examined by X-ray analysis employing superfine φ-slicing and unfocused synchrotron radiation. This experimental setup allowed hundreds of reflections to be precisely examined for each crystal in a short period of time. The microgravity crystals were on average 34 times larger, had 7 times lower mosaicity, had 54 times higher reflection peak heights and diffracted to significantly higher resolution than their earth-grown counterparts. A single mosaic domain model could account for reflections in microgravity crystals whereas reflections from earth crystals required a model with multiple mosaic domains. This statistically significant and unbiased characterization indicates that the microgravity environment was useful for the improvement of crystal growth and resultant diffraction quality in insulin crystals and may be similarly useful for macromolecular crystals in general.
Plasma turbulence and coherent structures in the polar cap observed by the ICI-2 sounding rocket
NASA Astrophysics Data System (ADS)
Spicher, A.; Miloch, W. J.; Clausen, L. B. N.; Moen, J. I.
2015-12-01
The electron density data from the ICI-2 sounding rocket experiment in the high-latitude F region ionosphere are analyzed using higher-order spectra and higher-order statistics. Two regions of enhanced fluctuations are chosen for detailed analysis: the trailing edge of a polar cap patch and an electron density enhancement associated with particle precipitation. While these two regions exhibit similar power spectra, our analysis reveals that their internal structures are significantly different. The structures on the edge of the polar cap patch are likely due to nonlinear wave interactions, since this region is characterized by intermittency and significant coherent mode coupling. The plasma enhancement subjected to precipitation, however, exhibits stronger random characteristics with uncorrelated phases of density fluctuations. These results suggest that particle precipitation plays a fundamental role in ionospheric plasma structuring, creating turbulence-like structures. We discuss the physical mechanisms that cause plasma structuring as well as the possible processes for the low-frequency part of the spectrum in terms of plasma instabilities.
Analysis and automatic identification of sleep stages using higher order spectra.
Acharya, U Rajendra; Chua, Eric Chern-Pin; Chua, Kuang Chua; Min, Lim Choo; Tamura, Toshiyo
2010-12-01
Electroencephalogram (EEG) signals are widely used to study the activity of the brain, such as to determine sleep stages. These EEG signals are nonlinear and non-stationary in nature. It is difficult to perform sleep staging by visual interpretation and linear techniques. Thus, we use a nonlinear technique, higher order spectra (HOS), to extract hidden information in the sleep EEG signal. In this study, unique bispectrum and bicoherence plots for various sleep stages were proposed. These can be used as visual aids for various diagnostic applications. A number of HOS based features were extracted from these plots during the various sleep stages (Wakefulness, Rapid Eye Movement (REM), Stage 1-4 Non-REM), and they were found to be statistically significant, with p-values lower than 0.001 in an ANOVA test. These features were fed to a Gaussian mixture model (GMM) classifier for automatic identification. Our results indicate that the proposed system is able to identify sleep stages with an accuracy of 88.7%.
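The bispectrum underlying such HOS features can be estimated directly from FFTs of signal segments. The sketch below (illustrative, not the authors' pipeline; segment length and frequencies are arbitrary) shows the characteristic bispectral peak at a frequency pair whose components are quadratically phase-coupled:

```python
import numpy as np

def bispectrum(x, nfft=64):
    """Direct (FFT-based) bispectrum estimate B(f1, f2) = E[X(f1) X(f2) X*(f1+f2)],
    averaged over non-overlapping segments of length nfft."""
    x = np.asarray(x, dtype=float)
    nseg = len(x) // nfft
    B = np.zeros((nfft // 2, nfft // 2), dtype=complex)
    for k in range(nseg):
        s = x[k * nfft:(k + 1) * nfft]
        X = np.fft.fft(s - s.mean())
        for f1 in range(nfft // 2):
            for f2 in range(nfft // 2):
                B[f1, f2] += X[f1] * X[f2] * np.conj(X[f1 + f2])
    return B / nseg

# quadratic phase coupling: components at 5, 9 and 5+9=14 Hz, phases locked
rng = np.random.default_rng(1)
n, fs = 4096, 64
t = np.arange(n) / fs
x = (np.cos(2 * np.pi * 5 * t) + np.cos(2 * np.pi * 9 * t)
     + np.cos(2 * np.pi * 14 * t))
B = np.abs(bispectrum(x + 0.1 * rng.standard_normal(n), nfft=64))
```

With 1 Hz frequency bins, `B` peaks at the bin pair (5, 9) (and its symmetric partner), which is how bispectral plots expose phase coupling that the power spectrum cannot see.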
Design of an image encryption scheme based on a multiple chaotic map
NASA Astrophysics Data System (ADS)
Tong, Xiao-Jun
2013-07-01
In order to solve the problems that chaos degenerates under limited computer precision and that the Cat map has a small key space, this paper presents a chaotic map based on topological conjugacy, whose chaotic characteristics are proved by the Devaney definition. In order to produce a large key space, a Cat map named the block Cat map is also designed for the permutation process based on multiple-dimensional chaotic maps. The image encryption algorithm is based on permutation-substitution, and each key is controlled by a different chaotic map. Entropy analysis, differential analysis, weak-key analysis, statistical analysis, cipher randomness analysis, and cipher sensitivity analysis with respect to key and plaintext are introduced to test the security of the new image encryption scheme. Through comparison of the proposed scheme with AES, DES and Logistic encryption methods, we conclude that the image encryption method solves the problem of the low precision of one-dimensional chaotic functions and has higher speed and higher security.
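The permutation-substitution structure can be sketched with a toy cipher driven by two logistic maps (the paper's actual scheme uses a topologically conjugated map and a block Cat map; the key values and map choices below are illustrative stand-ins only):

```python
import numpy as np

def logistic_sequence(x0, r, n, burn=100):
    """Iterate the logistic map x -> r x (1 - x), discarding a transient."""
    x = x0
    for _ in range(burn):
        x = r * x * (1 - x)
    seq = np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        seq[i] = x
    return seq

KEY = (0.3571, 3.99, 0.7213, 3.97)  # (x0, r) pairs; hypothetical key material

def encrypt(img, key=KEY):
    """Permutation-substitution toy: one chaotic sequence orders a pixel
    permutation, a second is quantised into a keystream XORed with pixels."""
    flat = img.ravel()
    perm = np.argsort(logistic_sequence(key[0], key[1], flat.size))
    stream = (logistic_sequence(key[2], key[3], flat.size) * 256).astype(np.uint8)
    return (flat[perm] ^ stream).reshape(img.shape)

def decrypt(cipher, key=KEY):
    flat = cipher.ravel()
    perm = np.argsort(logistic_sequence(key[0], key[1], flat.size))
    stream = (logistic_sequence(key[2], key[3], flat.size) * 256).astype(np.uint8)
    out = np.empty_like(flat)
    out[perm] = flat ^ stream           # undo substitution, then permutation
    return out.reshape(cipher.shape)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
cipher = encrypt(img)
restored = decrypt(cipher)
```

Note that this toy inherits exactly the weakness the paper targets: in floating-point arithmetic the logistic orbit eventually degenerates, which is why the authors construct a conjugated map with proved Devaney chaos instead.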
Volatilities, Traded Volumes, and Price Increments in Derivative Securities
NASA Astrophysics Data System (ADS)
Kim, Kyungsik; Lim, Gyuchang; Kim, Soo Yong; Scalas, Enrico
2007-03-01
We apply the detrended fluctuation analysis (DFA) to the statistics of the Korean treasury bond (KTB) futures, from which the logarithmic increments, volatilities, and traded volumes are estimated over a specific time lag. In our case, the logarithmic increment of futures prices has no long-memory property, while the volatility and the traded volume exhibit long memory. To determine whether the volatility clustering is due to an inherent higher-order correlation not detected by directly applying the DFA to the logarithmic increments of the KTB futures, it is important to shuffle the original tick data of futures prices and to generate a geometric Brownian random walk with the same mean and standard deviation. Comparison of the three tick data sets shows that the higher-order correlation inherent in the logarithmic increments produces the volatility clustering. In particular, the result of the DFA on volatilities and traded volumes may support the hypothesis of price changes.
Volatilities, traded volumes, and the hypothesis of price increments in derivative securities
NASA Astrophysics Data System (ADS)
Lim, Gyuchang; Kim, SooYong; Scalas, Enrico; Kim, Kyungsik
2007-08-01
A detrended fluctuation analysis (DFA) is applied to the statistics of Korean treasury bond (KTB) futures, from which the logarithmic increments, volatilities, and traded volumes are estimated over a specific time lag. In this study, the logarithmic increment of futures prices has no long-memory property, while the volatility and the traded volume exhibit long memory. To determine whether the volatility clustering is due to an inherent higher-order correlation not detected by the direct application of the DFA to logarithmic increments of KTB futures, it is important to shuffle the original tick data of futures prices and to generate a geometric Brownian random walk with the same mean and standard deviation. It was found from a comparison of the three tick data sets that the higher-order correlation inherent in the logarithmic increments leads to volatility clustering. In particular, the result of the DFA on volatilities and traded volumes can be supported by the hypothesis of price changes.
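The DFA procedure both abstracts rely on is short enough to sketch (window sizes and series length below are arbitrary; this is a first-order DFA, not necessarily the variant the authors used): integrate the series, fit and subtract a local linear trend in each window, and read the scaling exponent off the log-log slope of the fluctuation function.

```python
import numpy as np

def dfa(x, scales):
    """First-order detrended fluctuation analysis: F(s) per window size s.
    The slope of log F vs log s is ~0.5 for uncorrelated noise and > 0.5
    for long-memory (persistent) series."""
    y = np.cumsum(x - np.mean(x))               # integrated profile
    F = []
    for s in scales:
        nwin = len(y) // s
        t = np.arange(s)
        ms = []
        for k in range(nwin):
            seg = y[k * s:(k + 1) * s]
            coef = np.polyfit(t, seg, 1)        # local linear trend
            ms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(ms)))
    return np.array(F)

rng = np.random.default_rng(2)
white = rng.standard_normal(10000)              # memoryless stand-in series
scales = np.array([16, 32, 64, 128, 256])
F = dfa(white, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]  # scaling exponent
```

Applied to shuffled increments or a matched geometric Brownian walk, as in the papers, `alpha` for the volatility series should fall back toward 0.5 if the clustering comes from higher-order temporal correlations.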
Approach for Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III; Green, Lawrence L.
2001-01-01
This paper presents an implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 1-D Euler CFD (computational fluid dynamics) code. Given uncertainties in statistically independent, random, normally distributed input variables, a first- and second-order statistical moment matching procedure is performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, the moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
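The moment-matching idea can be illustrated on a scalar stand-in for a CFD output functional (the function and its derivatives below are invented for the example; the paper applies this to a quasi 1-D Euler code with adjoint-style sensitivity derivatives):

```python
import numpy as np

def moment_match(f, df, d2f, mu, sigma):
    """Approximate statistical moments of f(X) for X ~ N(mu, sigma^2),
    built from first- and second-order sensitivity derivatives at the mean."""
    mean1 = f(mu)                               # first-order mean
    mean2 = f(mu) + 0.5 * d2f(mu) * sigma ** 2  # second-order mean correction
    var1 = (df(mu) * sigma) ** 2                # first-order variance
    return mean1, mean2, var1

# hypothetical smooth output functional and its analytic derivatives
f = lambda x: x ** 2 + np.sin(x)
df = lambda x: 2 * x + np.cos(x)
d2f = lambda x: 2 - np.sin(x)

mu, sigma = 1.0, 0.05                           # input mean and uncertainty
m1, m2, v1 = moment_match(f, df, d2f, mu, sigma)

# Monte Carlo check of the approximation, as in the paper's validation step
rng = np.random.default_rng(3)
samples = f(rng.normal(mu, sigma, 200000))
```

For small input uncertainty the second-order mean `m2` and first-order variance `v1` track the Monte Carlo moments closely, which is the regime in which the paper's robust-optimization objective is built from these same derivative-based moments.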
Multi-pulse multi-delay (MPMD) multiple access modulation for UWB
Dowla, Farid U.; Nekoogar, Faranak
2007-03-20
A new modulation scheme in UWB communications is introduced. This modulation technique utilizes multiple orthogonal transmitted-reference pulses for UWB channelization. The proposed UWB receiver samples the second order statistical function at both zero and non-zero lags and matches the samples to stored second order statistical functions, thus sampling and matching the shape of second order statistical functions rather than just the shape of the received pulses.
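The idea of matching sampled second-order statistical functions rather than pulse shapes can be sketched as follows (the pulse shapes, lags, and matching rule are illustrative; the patent's receiver operates on transmitted-reference UWB waveforms):

```python
import numpy as np

def second_order_stat(x, lags):
    """Sampled autocorrelation at chosen lags (zero and non-zero), i.e. a
    second-order statistical function of the waveform."""
    x = x - x.mean()
    return np.array([np.dot(x[:len(x) - l], x[l:]) / len(x) for l in lags])

def detect(rx, templates, lags):
    """Match the received waveform's autocorrelation samples against stored
    per-symbol second-order templates (cosine-similarity matching)."""
    r = second_order_stat(rx, lags)
    scores = [np.dot(r, t) / (np.linalg.norm(r) * np.linalg.norm(t))
              for t in templates]
    return int(np.argmax(scores))

rng = np.random.default_rng(9)
lags = [0, 1, 2, 3, 4]
# two hypothetical pulse classes with distinct correlation signatures
p0 = np.convolve(rng.standard_normal(50), np.ones(3) / 3)  # smoother: wide acf
p1 = rng.standard_normal(52)                               # whiter: peaked acf
templates = [second_order_stat(p0, lags), second_order_stat(p1, lags)]
symbol = detect(p0 + 0.1 * rng.standard_normal(len(p0)), templates, lags)
```

Because the matching operates on correlation shapes rather than raw pulse shapes, it is insensitive to effects that preserve second-order structure, which is the motivation for this class of UWB receivers.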
ERIC Educational Resources Information Center
Green, Jeffrey J.; Stone, Courtenay C.; Zegeye, Abera; Charles, Thomas A.
2009-01-01
Because statistical analysis requires the ability to use mathematics, students typically are required to take one or more prerequisite math courses prior to enrolling in the business statistics course. Despite these math prerequisites, however, many students find it difficult to learn business statistics. In this study, we use an ordered probit…
Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time.
Dhar, Amrit; Minin, Vladimir N
2017-05-01
Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newsom, R. K.; Sivaraman, C.; Shippert, T. R.
Accurate height-resolved measurements of higher-order statistical moments of vertical velocity fluctuations are crucial for improved understanding of turbulent mixing and diffusion, convective initiation, and cloud life cycles. The Atmospheric Radiation Measurement (ARM) Climate Research Facility operates coherent Doppler lidar systems at several sites around the globe. These instruments provide measurements of clear-air vertical velocity profiles in the lower troposphere with a nominal temporal resolution of 1 sec and height resolution of 30 m. The purpose of the Doppler lidar vertical velocity statistics (DLWSTATS) value-added product (VAP) is to produce height- and time-resolved estimates of vertical velocity variance, skewness, and kurtosis from these raw measurements. The VAP also produces estimates of cloud properties, including cloud-base height (CBH), cloud frequency, cloud-base vertical velocity, and cloud-base updraft fraction.
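The core moment calculation in such a product reduces to per-height-bin statistics of the velocity fluctuations (the array shape and toy Gaussian "lidar" record below are illustrative only, not the DLWSTATS algorithm, which also handles noise and cloud screening):

```python
import numpy as np

def velocity_moments(w, axis=0):
    """Variance, skewness, and kurtosis of vertical-velocity fluctuations
    w'(t, z) about the time mean, computed per height bin."""
    wp = w - w.mean(axis=axis, keepdims=True)   # fluctuations about time mean
    var = (wp ** 2).mean(axis=axis)
    skew = (wp ** 3).mean(axis=axis) / var ** 1.5
    kurt = (wp ** 4).mean(axis=axis) / var ** 2  # Gaussian value is 3
    return var, skew, kurt

rng = np.random.default_rng(4)
# toy record: 3600 one-second samples x 20 range gates, sigma_w = 0.5 m/s
w = rng.standard_normal((3600, 20)) * 0.5
var, skew, kurt = velocity_moments(w)
```

Positive skewness profiles from such a calculation are a standard signature of narrow, strong updrafts in a convective boundary layer, which is one reason the higher moments matter beyond the variance.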
Considering Horn's Parallel Analysis from a Random Matrix Theory Point of View.
Saccenti, Edoardo; Timmerman, Marieke E
2017-03-01
Horn's parallel analysis is a widely used method for assessing the number of principal components and common factors. We discuss the theoretical foundations of parallel analysis for principal components based on a covariance matrix by making use of arguments from random matrix theory. In particular, we show that (i) for the first component, parallel analysis is an inferential method equivalent to the Tracy-Widom test, (ii) its use to test high-order eigenvalues is equivalent to the use of the joint distribution of the eigenvalues, and thus should be discouraged, and (iii) a formal test for higher-order components can be obtained based on a Tracy-Widom approximation. We illustrate the performance of the two testing procedures using simulated data generated under both a principal component model and a common factors model. For the principal component model, the Tracy-Widom test performs consistently in all conditions, while parallel analysis shows unpredictable behavior for higher-order components. For the common factor model, including major and minor factors, both procedures are heuristic approaches, with variable performance. We conclude that the Tracy-Widom procedure is preferred over parallel analysis for statistically testing the number of principal components based on a covariance matrix.
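Horn's procedure itself is simple to state in code (this sketch uses correlation matrices and a 95th-percentile null threshold; the simulation sizes and the two-factor test data are invented for the example):

```python
import numpy as np

def parallel_analysis(X, n_sim=200, quantile=95, rng=None):
    """Horn's parallel analysis: retain components whose sample eigenvalues
    exceed the chosen quantile of eigenvalues from random normal data of
    the same dimensions."""
    rng = np.random.default_rng() if rng is None else rng
    n, p = X.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    null = np.empty((n_sim, p))
    for s in range(n_sim):
        Z = rng.standard_normal((n, p))
        null[s] = np.sort(np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False)))[::-1]
    thresh = np.percentile(null, quantile, axis=0)
    keep = obs > thresh
    return int(np.argmin(keep)) if not keep.all() else p

rng = np.random.default_rng(5)
n, p = 500, 10
scores = rng.standard_normal((n, 2))                  # two latent components
loadings = rng.standard_normal((2, p)) * 2.0
X = scores @ loadings + rng.standard_normal((n, p))   # noisy observed data
k = parallel_analysis(X, rng=rng)
```

The authors' point is that this comparison is only statistically justified for the first eigenvalue (where it approximates a Tracy-Widom test); applying the same per-eigenvalue thresholds to higher-order components, as above, ignores the joint distribution of the eigenvalues.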
Protein domain organisation: adding order.
Kummerfeld, Sarah K; Teichmann, Sarah A
2009-01-29
Domains are the building blocks of proteins. During evolution, they have been duplicated, fused and recombined, to produce proteins with novel structures and functions. Structural and genome-scale studies have shown that pairs or groups of domains observed together in a protein are almost always found in only one N to C terminal order and are the result of a single recombination event that has been propagated by duplication of the multi-domain unit. Previous studies of domain organisation have used graph theory to represent the co-occurrence of domains within proteins. We build on this approach by adding directionality to the graphs and connecting nodes based on their relative order in the protein. Most of the time, the linear order of domains is conserved. However, using the directed graph representation we have identified non-linear features of domain organization that are over-represented in genomes. Recognising these patterns and unravelling how they have arisen may allow us to understand the functional relationships between domains and understand how the protein repertoire has evolved. We identify groups of domains that are not linearly conserved, but instead have been shuffled during evolution so that they occur in multiple different orders. We consider 192 genomes across all three kingdoms of life and use domain and protein annotation to understand their functional significance. To identify these features and assess their statistical significance, we represent the linear order of domains in proteins as a directed graph and apply graph theoretical methods. We describe two higher-order patterns of domain organisation: clusters and bi-directionally associated domain pairs and explore their functional importance and phylogenetic conservation. Taking into account the order of domains, we have derived a novel picture of global protein organization. 
We found that all genomes have a higher than expected degree of clustering and more domain pairs in forward and reverse orientation in different proteins relative to random graphs with identical degree distributions. While these features were statistically over-represented, they are still fairly rare. Looking in detail at the proteins involved, we found strong functional relationships within each cluster. In addition, the domains tended to be involved in protein-protein interaction and are able to function as independent structural units. A particularly striking example was the human Jak-STAT signalling pathway which makes use of a set of domains in a range of orders and orientations to provide nuanced signaling functionality. This illustrated the importance of functional and structural constraints (or lack thereof) on domain organisation.
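The directed-graph representation described above can be sketched directly (the domain architectures below are hypothetical toy inputs, and the SH2/kinase orders are purely illustrative, not claims about real proteins):

```python
from collections import defaultdict

def domain_graph(proteins):
    """Directed graph of domain adjacency: an edge A -> B means domain A
    appears immediately before B (N- to C-terminal) in some protein."""
    edges = defaultdict(int)
    for domains in proteins:
        for a, b in zip(domains, domains[1:]):
            edges[(a, b)] += 1
    return edges

def bidirectional_pairs(edges):
    """Domain pairs observed in both orientations (A -> B and B -> A),
    i.e. the non-linearly conserved pairs the study looks for."""
    return sorted({tuple(sorted(p)) for p in edges
                   if p[::-1] in edges and p[0] != p[1]})

# hypothetical multi-domain architectures (N- to C-terminal order)
proteins = [
    ["SH3", "SH2", "kinase"],
    ["SH2", "kinase"],
    ["kinase", "SH2"],        # the pair above in reversed orientation
    ["PH", "SH3", "SH2", "kinase"],
]
edges = domain_graph(proteins)
pairs = bidirectional_pairs(edges)
```

Assessing whether such bidirectional pairs are over-represented then requires comparing their counts against random directed graphs with the same degree distribution, as the authors do.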
Ventura, Bruna V; Wang, Li; Ali, Shazia F; Koch, Douglas D; Weikert, Mitchell P
2015-08-01
To evaluate and compare the performance of a point-source color light-emitting diode (LED)-based topographer (color-LED) in measuring anterior corneal power and aberrations with that of a Placido-disk topographer and a combined Placido and dual Scheimpflug device. Cullen Eye Institute, Baylor College of Medicine, Houston, Texas USA. Retrospective observational case series. Normal eyes and post-refractive-surgery eyes were consecutively measured using color-LED, Placido, and dual-Scheimpflug devices. The main outcome measures were anterior corneal power, astigmatism, and higher-order aberrations (HOAs) (6.0 mm pupil), which were compared using the t test. There were no statistically significant differences in corneal power measurements in normal and post-refractive surgery eyes and in astigmatism magnitude in post-refractive surgery eyes between the color-LED device and Placido or dual Scheimpflug devices (all P > .05). In normal eyes, there were no statistically significant differences in 3rd-order coma and 4th-order spherical aberration between the color-LED and Placido devices and in HOA root mean square, 3rd-order coma, 3rd-order trefoil, 4th-order spherical aberration, and 4th-order secondary astigmatism between the color-LED and dual Scheimpflug devices (all P > .05). In post-refractive surgery eyes, the color-LED device agreed with the Placido and dual-Scheimpflug devices regarding 3rd-order coma and 4th-order spherical aberration (all P > .05). In normal and post-refractive surgery eyes, all 3 devices were comparable with respect to corneal power. The agreement in corneal aberrations varied. Drs. Wang, Koch, and Weikert are consultants to Ziemer Ophthalmic Systems AG. Dr. Koch is a consultant to Abbott Medical Optics, Inc., Alcon Surgical, Inc., and i-Optics Corp. Copyright © 2015 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Eason, Grace Teresa
The purpose of this quasi-experimental study was to determine the effect a higher-order questioning strategy (Bloom, 1956) had on undergraduate non-science majors' attitudes toward the environment and their achievement in an introductory environmental science course, EDS 1032, "Survey of Science 2: Life Science," which was offered during the Spring 2000 term. Students from both treatment and control groups (N = 63), which were determined using intact classes, participated in eight cooperative group activities based on the Biological Sciences Curriculum Studies (BSCS) 5E model (Bybee, 1993). The treatment group received a higher-order questioning method combined with the BSCS 5E model. The control group received a lower-order questioning method, combined with the BSCS 5E model. Two instruments were used to measure students' attitude and achievement changes. The Ecology Issue Attitude (EIA) survey (Schindler, 1995) and a comprehensive environmental science final exam. Kolb's Learning Style Inventory (KLSI, 1985) was used to measure students' learning style type. After a 15-week treatment period, results were analyzed using MANCOVA. The overall MANCOVA model used to test the statistical difference between the collective influences of the independent variables on the three dependent variables simultaneously was found to be not significant at alpha = .05. This differs from findings of previous studies in which higher-order questioning techniques had a significant effect on student achievement (King 1989 & 1992; Blosser, 1991; Redfield and Rousseau, 1981; Gall 1970). At the risk of inflated Type I and Type II error rates, separate univariate analyses were performed. However, none of the research factors, when examined collectively or separately, made any significant contribution to explaining the variability in EIA attitude, EIA achievement, and comprehensive environmental science final examination scores. 
Nevertheless, anecdotal evidence from student's self-reported behavior changes indicated favorable responses to an increased awareness of and positive action toward the environment.
Reanalysis of F-statistic gravitational-wave searches with the higher criticism statistic
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennett, M. F.; Melatos, A.; Delaigle, A.
2013-04-01
We propose a new method of gravitational-wave detection using a modified form of higher criticism, a statistical technique introduced by Donoho and Jin. Higher criticism is designed to detect a group of sparse, weak sources, none of which are strong enough to be reliably estimated or detected individually. We apply higher criticism as a second-pass method to synthetic F-statistic and C-statistic data for a monochromatic periodic source in a binary system and quantify the improvement relative to the first-pass methods. We find that higher criticism on C-statistic data is more sensitive by ∼6% than the C-statistic alone under optimal conditions (i.e., binary orbit known exactly) and the relative advantage increases as the error in the orbital parameters increases. Higher criticism is robust even when the source is not monochromatic (e.g., phase-wandering in an accreting system). Applying higher criticism to a phase-wandering source over multiple time intervals gives a ≳30% increase in detectability with few assumptions about the frequency evolution. By contrast, in all-sky searches for unknown periodic sources, which are dominated by the brightest source, second-pass higher criticism does not provide any benefits over a first-pass search.
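The Donoho-Jin statistic itself compares sorted p-values against their uniform expectation. A basic (unmodified) version can be sketched as follows, with an invented sparse-signal example showing why it responds to many weak effects that no single test would flag:

```python
import numpy as np

def higher_criticism(pvals):
    """Donoho-Jin higher criticism: maximal standardised deviation of the
    sorted p-values below the uniform expectation i/n."""
    p = np.sort(np.asarray(pvals))
    n = len(p)
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p) + 1e-12)
    # conventionally maximised over the smaller order statistics
    return hc[: n // 2].max()

rng = np.random.default_rng(6)
n = 10000
null_score = higher_criticism(rng.random(n))      # all-noise baseline

# sparse weak effects: 1% of tests drawn from a small-p alternative
p_alt = rng.random(n)
idx = rng.choice(n, n // 100, replace=False)
p_alt[idx] = rng.beta(0.3, 4.0, size=len(idx))    # skewed toward small p
alt_score = higher_criticism(p_alt)
```

In the paper's setting the per-frequency-bin F- or C-statistic values are converted to p-values and fed into a statistic of this form as a second detection pass.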
Mathematical Methods for Physics and Engineering Third Edition Paperback Set
NASA Astrophysics Data System (ADS)
Riley, Ken F.; Hobson, Mike P.; Bence, Stephen J.
2006-06-01
Prefaces; 1. Preliminary algebra; 2. Preliminary calculus; 3. Complex numbers and hyperbolic functions; 4. Series and limits; 5. Partial differentiation; 6. Multiple integrals; 7. Vector algebra; 8. Matrices and vector spaces; 9. Normal modes; 10. Vector calculus; 11. Line, surface and volume integrals; 12. Fourier series; 13. Integral transforms; 14. First-order ordinary differential equations; 15. Higher-order ordinary differential equations; 16. Series solutions of ordinary differential equations; 17. Eigenfunction methods for differential equations; 18. Special functions; 19. Quantum operators; 20. Partial differential equations: general and particular; 21. Partial differential equations: separation of variables; 22. Calculus of variations; 23. Integral equations; 24. Complex variables; 25. Application of complex variables; 26. Tensors; 27. Numerical methods; 28. Group theory; 29. Representation theory; 30. Probability; 31. Statistics; Index.
Cosmic variance in inflation with two light scalars
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonga, Béatrice; Brahma, Suddhasattwa; Deutsch, Anne-Sylvie
We examine the squeezed limit of the bispectrum when a light scalar with arbitrary non-derivative self-interactions is coupled to the inflaton. We find that when the hidden sector scalar is sufficiently light ( m ∼< 0.1 H ), the coupling between long and short wavelength modes from the series of higher order correlation functions (from arbitrary order contact diagrams) causes the statistics of the fluctuations to vary in sub-volumes. This means that observations of primordial non-Gaussianity cannot be used to uniquely reconstruct the potential of the hidden field. However, the local bispectrum induced by mode-coupling from these diagrams always hasmore » the same squeezed limit, so the field's locally determined mass is not affected by this cosmic variance.« less
A Joint Optimization Criterion for Blind DS-CDMA Detection
NASA Astrophysics Data System (ADS)
Durán-Díaz, Iván; Cruces-Alvarez, Sergio A.
2006-12-01
This paper addresses the problem of the blind detection of a desired user in an asynchronous DS-CDMA communications system with multipath propagation channels. Starting from the inverse filter criterion introduced by Tugnait and Li in 2001, we propose to tackle the problem in the context of the blind signal extraction methods for ICA. In order to improve the performance of the detector, we present a criterion based on the joint optimization of several higher-order statistics of the outputs. An algorithm that optimizes the proposed criterion is described, and its improved performance and robustness with respect to the near-far problem are corroborated through simulations. Additionally, a simulation using measurements on a real software-radio platform at 5 GHz has also been performed.
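A one-unit blind extraction driven by a single higher-order statistic can be sketched as below (this is a fastICA-flavoured kurtosis update on a toy instantaneous mixture, not the Tugnait-Li inverse-filter criterion or the authors' joint criterion, and the mixing matrix is invented):

```python
import numpy as np

def extract_by_kurtosis(X, iters=200, rng=None):
    """One-unit blind source extraction: fixed-point iteration on the
    kurtosis of w^T z after whitening."""
    rng = np.random.default_rng() if rng is None else rng
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))
    Z = (E / np.sqrt(d)).T @ X                  # whitened observations
    w = rng.standard_normal(Z.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        y = w @ Z
        w_new = (Z * y ** 3).mean(axis=1) - 3 * w   # kurtosis fixed point
        w = w_new / np.linalg.norm(w_new)
    return w @ Z

rng = np.random.default_rng(8)
n = 20000
s1 = np.sign(rng.standard_normal(n))            # sub-Gaussian binary "user"
s2 = rng.standard_normal(n)                     # Gaussian interference
A = np.array([[1.0, 0.6], [0.4, 1.0]])          # hypothetical mixing
X = A @ np.vstack([s1, s2])
y = extract_by_kurtosis(X, rng=rng)
corr = abs(np.corrcoef(y, s1)[0, 1])            # recovery up to sign
```

The paper's contribution is precisely to go beyond a single such statistic: jointly optimizing several higher-order output statistics makes the extraction of the desired DS-CDMA user more robust, notably against the near-far problem.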
Estimating order statistics of network degrees
NASA Astrophysics Data System (ADS)
Chu, J.; Nadarajah, S.
2018-01-01
We model the order statistics of network degrees of big data sets by a range of generalised beta distributions. A three parameter beta distribution due to Libby and Novick (1982) is shown to give the best overall fit for at least four big data sets. The fit of this distribution is significantly better than the fit suggested by Olhede and Wolfe (2012) across the whole range of order statistics for all four data sets.
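A simple entry point to this kind of modelling is a method-of-moments fit of a standard two-parameter beta distribution to data rescaled into (0, 1) (the generalised and Libby-Novick three-parameter families used in the paper require numerical likelihood maximisation; the validation data below are synthetic):

```python
import numpy as np

def beta_fit_moments(x):
    """Method-of-moments estimates (alpha, beta) for a standard beta
    distribution fitted to samples in (0, 1)."""
    m, v = x.mean(), x.var()
    common = m * (1 - m) / v - 1
    return m * common, (1 - m) * common

rng = np.random.default_rng(7)
x = rng.beta(2.0, 5.0, 20000)      # synthetic check: known alpha=2, beta=5
a_hat, b_hat = beta_fit_moments(x)
```

For network degrees one would first map the sorted degrees into (0, 1) (e.g. dividing by a value above the maximum) and then compare candidate beta-family fits by likelihood or goodness-of-fit across the whole range of order statistics, as the authors do.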
A statistical approach to the brittle fracture of a multi-phase solid
NASA Technical Reports Server (NTRS)
Liu, W. K.; Lua, Y. I.; Belytschko, T.
1991-01-01
A stochastic damage model is proposed to quantify the inherent statistical distribution of the fracture toughness of a brittle, multi-phase solid. The model, based on the macrocrack-microcrack interaction, incorporates uncertainties in locations and orientations of microcracks. Due to the high concentration of microcracks near the macro-tip, a higher order analysis based on traction boundary integral equations is formulated first for an arbitrary array of cracks. The effects of uncertainties in locations and orientations of microcracks at a macro-tip are analyzed quantitatively by using the boundary integral equations method in conjunction with the computer simulation of the random microcrack array. The short range interactions resulting from the surrounding microcracks closest to the main crack tip are investigated. The effects of the microcrack density parameter are also explored in the present study. The validity of the present model is demonstrated by comparing its statistical output with the Neville distribution function, which gives correct fits to sets of experimental data from multi-phase solids.
Baskar, Gurunathan; Sathya, Shree Rajesh K Lakshmi Jai; Jinnah, Riswana Begum; Sahadevan, Renganathan
2011-01-01
Response surface methodology was employed to optimize the concentrations of four important cultivation media components (cottonseed oil cake, glucose, NH4Cl, and MgSO4) for maximum medicinal polysaccharide yield by the Lingzhi or Reishi medicinal mushroom, Ganoderma lucidum MTCC 1039, in submerged culture. The second-order polynomial model describing the relationship between media components and polysaccharide yield was fitted in coded units of the variables. The high value of the coefficient of determination (R2 = 0.953) indicated an excellent correlation between media components and polysaccharide yield, and the model fitted well with high statistical reliability and significance. The predicted optimum concentration of the media components was 3.0% cottonseed oil cake, 3.0% glucose, 0.15% NH4Cl, and 0.045% MgSO4, with the maximum predicted polysaccharide yield of 819.76 mg/L. The experimental polysaccharide yield at the predicted optimum media components was 854.29 mg/L, which was 4.22% higher than the predicted yield.
Ea, Vuthy; Sexton, Tom; Gostan, Thierry; Herviou, Laurie; Baudement, Marie-Odile; Zhang, Yunzhe; Berlivet, Soizik; Le Lay-Taha, Marie-Noëlle; Cathala, Guy; Lesne, Annick; Victor, Jean-Marc; Fan, Yuhong; Cavalli, Giacomo; Forné, Thierry
2015-08-15
In higher eukaryotes, the genome is partitioned into large "Topologically Associating Domains" (TADs) in which the chromatin displays favoured long-range contacts. While a crumpled/fractal globule organization has received experimental support at higher-order levels, the organization principles that govern chromatin dynamics within these TADs remain unclear. Using simple polymer models, we previously showed that, in mouse liver cells, gene-rich domains tend to adopt a statistical helix shape when no significant locus-specific interaction takes place. Here, we use data from diverse 3C-derived methods to explore chromatin dynamics within mouse and Drosophila TADs. In mouse Embryonic Stem Cells (mESC), which possess large TADs (median size of 840 kb), we show that the statistical helix model, but not globule models, is relevant not only in gene-rich TADs, but also in gene-poor and gene-desert TADs. Interestingly, this statistical helix organization is considerably relaxed in mESC compared to liver cells, indicating that the impact of the constraints responsible for this organization is weaker in pluripotent cells. Finally, depletion of histone H1 in mESC alters local chromatin flexibility but not the statistical helix organization. In Drosophila, which possesses TADs of smaller sizes (median size of 70 kb), we show that, while chromatin compaction and flexibility are finely tuned according to the epigenetic landscape, chromatin dynamics within TADs are generally compatible with an unconstrained polymer configuration. Models issued from polymer physics can accurately describe the organization principles governing chromatin dynamics in both mouse and Drosophila TADs. However, constraints applied on these dynamics within mammalian TADs have a peculiar impact resulting in a statistical helix organization.
Zhuang, Chengxu; Wang, Yulong; Yamins, Daniel; Hu, Xiaolin
2017-01-01
Visual information in the visual cortex is processed in a hierarchical manner. Recent studies show that higher visual areas, such as V2, V3, and V4, respond more vigorously to images with naturalistic higher-order statistics than to images lacking them. This property is a functional signature of higher areas, as it is much weaker or even absent in the primary visual cortex (V1). However, the mechanism underlying this signature remains elusive. We studied this problem using computational models. In several typical hierarchical visual models including the AlexNet, VggNet, and SHMAX, this signature was found to be prominent in higher layers but much weaker in lower layers. By changing both the model structure and experimental settings, we found that the signature strongly correlated with sparse firing of units in higher layers but not with any other factors, including model structure, training algorithm (supervised or unsupervised), receptive field size, and property of training stimuli. The results suggest an important role of sparse neuronal activity underlying this special feature of higher visual areas.
NASA Astrophysics Data System (ADS)
Meshgin, Pania
2011-12-01
This research focuses on two important subjects: (1) Characterization of the heterogeneous microstructure of multi-phase composites and the effect of microstructural features on effective properties of the material. (2) Utilization of phase change materials and recycled rubber particles from waste tires to improve thermal properties of insulation materials used in building envelopes. Spatial patterns of the multi-phase and multidimensional internal structures of most composite materials are highly random. A quantitative description of the spatial distribution should be developed based on proper statistical models, which characterize the morphological features. For a composite material with multiple phases, the volume fraction of the phases as well as the morphological parameters of the phases have very strong influences on the effective property of the composite. These morphological parameters depend on the microstructure of each phase. This study intends to include the effect of higher order morphological details of the microstructure in the composite models. The higher order statistics, called two-point correlation functions, characterize various behaviors of the composite at any two points in a stochastic field. Specifically, correlation functions of mosaic patterns are used in the study for characterizing transport properties of composite materials. One of the most effective methods to improve energy efficiency of buildings is to enhance thermal properties of insulation materials. The idea of using phase change materials and recycled rubber particles such as scrap tires in insulation materials for building envelopes has been studied.
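The two-point correlation (two-point probability) function of a two-phase microstructure can be estimated efficiently by FFT autocorrelation. A minimal sketch on a synthetic binary field, assuming periodic boundary conditions:

```python
import numpy as np

def two_point_correlation(phase):
    """S2 of a binary (0/1) phase indicator field via FFT autocorrelation,
    assuming periodic boundaries; returns correlation at every lag vector."""
    f = np.fft.fftn(phase)
    return np.fft.ifftn(f * np.conj(f)).real / phase.size

rng = np.random.default_rng(1)
field = (rng.random((128, 128)) < 0.3).astype(float)   # ~30% volume fraction
s2 = two_point_correlation(field)

# Known limits: S2 at zero lag equals the volume fraction; at large
# separation it approaches the volume fraction squared for an
# uncorrelated field.
print(s2[0, 0], field.mean())
```

For a real composite micrograph, `field` would be the segmented phase indicator, and S2 would feed into the effective-property bounds mentioned above.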
Selecting relevant 3D image features of margin sharpness and texture for lung nodule retrieval.
Ferreira, José Raniery; de Azevedo-Marques, Paulo Mazzoncini; Oliveira, Marcelo Costa
2017-03-01
Lung cancer is the leading cause of cancer-related deaths in the world. Its diagnosis is a challenging task for specialists due to several aspects of the classification of lung nodules. Therefore, it is important to integrate content-based image retrieval methods into the lung nodule classification process, since they are capable of retrieving similar cases from databases that were previously diagnosed. However, this mechanism depends on extracting relevant image features in order to obtain high efficiency. The goal of this paper is to perform the selection of 3D image features of margin sharpness and texture that can be relevant to the retrieval of similar cancerous and benign lung nodules. A total of 48 3D image attributes were extracted from the nodule volume. Border sharpness features were extracted from perpendicular lines drawn over the lesion boundary. Second-order texture features were extracted from a co-occurrence matrix. Relevant features were selected by a correlation-based method and a statistical significance analysis. Retrieval performance was assessed according to the nodule's potential malignancy on the 10 most similar cases and by the parameters of precision and recall. Statistically significant features reduced retrieval performance. The correlation-based method selected 2 margin sharpness attributes and 6 texture attributes and obtained higher precision compared to all 48 extracted features on similar nodule retrieval. A feature-space dimensionality reduction of 83% yielded higher retrieval performance and proved to be a computationally low-cost method of retrieving similar nodules for the diagnosis of lung cancer.
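Second-order texture features of the kind used above come from a grey-level co-occurrence matrix (GLCM). A minimal sketch for one pixel offset, with two classic Haralick-style features (the toy image and offset are illustrative assumptions):

```python
import numpy as np

def glcm(img, levels):
    """Normalised grey-level co-occurrence matrix for the (0, 1) offset,
    i.e. horizontally adjacent pixel pairs."""
    a = img[:, :-1].ravel()
    b = img[:, 1:].ravel()
    p = np.zeros((levels, levels))
    np.add.at(p, (a, b), 1)
    return p / p.sum()

def contrast(p):
    """Sum of p(i, j) * (i - j)^2: high for locally varying texture."""
    i, j = np.indices(p.shape)
    return np.sum(p * (i - j) ** 2)

def energy(p):
    """Sum of squared entries: high for uniform, orderly texture."""
    return np.sum(p ** 2)

rng = np.random.default_rng(2)
img = rng.integers(0, 8, size=(64, 64))   # toy 8-level image
p = glcm(img, 8)
print(contrast(p), energy(p))
```

Real pipelines typically average such features over several offsets and directions before feeding them to feature selection.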
Lima, C S; Barbosa, D; Ramos, J; Tavares, A; Monteiro, L; Carvalho, L
2008-01-01
This paper presents a system to support medical diagnosis and detection of abnormal lesions by processing capsule endoscopic images. Endoscopic images possess rich information expressed by texture. Texture information can be efficiently extracted from medium scales of the wavelet transform. The set of features proposed in this paper to code textural information is named color wavelet covariance (CWC). CWC coefficients are based on the covariances of second-order textural measures; an optimum subset of them is proposed. Third- and fourth-order moments are added to cope with distributions that tend to become non-Gaussian, especially in some pathological cases. The proposed approach is supported by a classifier based on a radial basis function procedure for the characterization of the image regions along the video frames. The whole methodology has been applied to real data containing 6 full endoscopic exams and reached 95% specificity and 93% sensitivity.
NASA Astrophysics Data System (ADS)
Sanatkhani, Soroosh; Menon, Prahlad G.
2018-03-01
Left atrial appendage (LAA) is the source of 91% of the thrombi in patients with atrial arrhythmias (approximately 2.3 million US adults), turning this region into a potential threat for stroke. LAA geometries have been clinically categorized into four appearance groups viz. Cauliflower, Cactus, Chicken-Wing and WindSock, based on visual appearance in 3D volume visualizations of contrast-enhanced computed tomography (CT) imaging, and have further been correlated with stroke risk by considering clinical mortality statistics. However, such classification from visual appearance is limited by human subjectivity and is not sophisticated enough to address all the characteristics of the geometries. Quantification of LAA geometry metrics can reveal a more repeatable and reliable estimate of the characteristics of the LAA which correspond with stasis risk, and in turn cardioembolic risk. We present an approach to quantify the appearance of the LAA in patients in atrial fibrillation (AF) using a weighted set of baseline eigen-modes of LAA appearance variation, as a means to objectify classification of patient-specific LAAs into the four accepted clinical appearance groups. Clinical images of 16 patients (4 per LAA appearance category) with AF were identified and visualized as volume images. All the volume images were rigidly reoriented in order to be spatially co-registered, normalized in terms of intensity, resampled and finally reshaped appropriately to carry out principal component analysis (PCA), in order to parametrize the LAA region's appearance based on principal components (PCs/eigen-modes) of greyscale appearance, generating 16 eigen-modes of appearance variation. Our pilot studies show that the most dominant LAA appearance (i.e. reconstructable using the fewest eigen-modes) resembles the Chicken-Wing class, which is known to have the lowest stroke risk per clinical mortality statistics.
Our findings indicate the possibility that LAA geometries with high risk of stroke are higher-order statistical variants of underlying lower risk shapes.
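The eigen-mode analysis described above is, at its core, PCA over flattened, co-registered images. A minimal sketch with toy sizes (the dimensions and random data are illustrative, not the clinical volumes):

```python
import numpy as np

rng = np.random.default_rng(3)
n_patients, n_voxels = 16, 1000            # toy sizes, one row per patient
X = rng.normal(size=(n_patients, n_voxels))

Xc = X - X.mean(axis=0)                    # centre on the mean appearance
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
eigen_modes = Vt                           # rows: modes of appearance variation
weights = U * s                            # per-patient eigen-mode weights

# A patient image is reconstructed from the k leading modes; the "dominant"
# appearance is the one well reconstructed with the fewest modes.
k = 5
X_hat = weights[:, :k] @ eigen_modes[:k] + X.mean(axis=0)
err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(err)
```

The per-patient weight vectors can then serve as an objective, low-dimensional feature for assigning an LAA to one of the clinical appearance classes.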
A modified weighted function method for parameter estimation of Pearson type three distribution
NASA Astrophysics Data System (ADS)
Liang, Zhongmin; Hu, Yiming; Li, Binquan; Yu, Zhongbo
2014-04-01
In this paper, an unconventional method called Modified Weighted Function (MWF) is presented for the conventional moment estimation of a probability distribution function. The aim of MWF is to shift the estimation of the coefficient of variation (CV) and coefficient of skewness (CS) from higher-order moment computations to first-order moment calculations. The estimators for CV and CS of the Pearson type three distribution function (PE3) were derived by weighting the moments of the distribution with two weight functions, which were constructed by combining two negative exponential-type functions. The selection of these weight functions was based on two considerations: (1) to relate weight functions to sample size in order to reflect the relationship between the quantity of sample information and the role of weight function and (2) to allocate more weights to data close to medium-tail positions in a sample series ranked in ascending order. A Monte-Carlo experiment was conducted to simulate a large number of samples upon which statistical properties of MWF were investigated. For the PE3 parent distribution, results of MWF were compared to those of the original Weighted Function (WF) and Linear Moments (L-M). The results indicate that MWF was superior to WF and slightly better than L-M, in terms of statistical unbiasedness and effectiveness. In addition, the robustness of MWF, WF, and L-M was compared by designing Monte-Carlo experiments in which samples were drawn from the Log-Pearson type three distribution (LPE3), the three-parameter Log-Normal distribution (LN3), and the Generalized Extreme Value distribution (GEV), respectively, but all treated as samples from the PE3 distribution. The results show that in terms of statistical unbiasedness, no single method possesses an overwhelming advantage among MWF, WF, and L-M, while in terms of statistical effectiveness, MWF is superior to WF and L-M.
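The motivation for avoiding higher-order moments can be illustrated with a small Monte-Carlo experiment: the conventional moment estimator of CS is noticeably biased at hydrological sample sizes. The sketch below uses a gamma parent (PE3 with zero location), for which CV = 1/sqrt(k) and CS = 2/sqrt(k); the sample size and shape parameter are illustrative assumptions, and this is the conventional estimator, not the MWF of the paper.

```python
import numpy as np

def cv_cs(x):
    """Conventional (simple moment) estimators of CV and CS."""
    m, s = x.mean(), x.std(ddof=1)
    cs = np.mean(((x - m) / s) ** 3)
    return s / m, cs

rng = np.random.default_rng(4)
k = 4.0                                    # true CV = 0.5, true CS = 1.0
cvs, css = [], []
for _ in range(500):
    x = rng.gamma(k, 1.0, size=50)         # n = 50, a typical record length
    cv, cs = cv_cs(x)
    cvs.append(cv)
    css.append(cs)
print(np.mean(cvs), np.mean(css))          # CS is biased low at this n
```

Weighted-function methods sidestep exactly this third-moment bias by recasting CS estimation as a first-order moment calculation.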
Assessment of synthetic image fidelity
NASA Astrophysics Data System (ADS)
Mitchell, Kevin D.; Moorhead, Ian R.; Gilmore, Marilyn A.; Watson, Graham H.; Thomson, Mitch; Yates, T.; Troscianko, Tomasz; Tolhurst, David J.
2000-07-01
Computer-generated imagery is increasingly used for a wide variety of purposes ranging from computer games to flight simulators to camouflage and sensor assessment. The fidelity required for this imagery is dependent on the anticipated use - for example, when used for camouflage design it must be physically correct spectrally and spatially. The rendering techniques used will also depend upon the waveband being simulated, the spatial resolution of the sensor and the required frame rate. Rendering of natural outdoor scenes is particularly demanding, because of the statistical variation in materials and illumination, atmospheric effects and the complex geometric structures of objects such as trees. The accuracy of simulated imagery has tended to be assessed subjectively in the past. First- and second-order statistics do not capture many of the essential characteristics of natural scenes. Direct pixel comparison would impose an unachievable demand on the synthetic imagery. For many applications, such as camouflage design, it is important that any metrics used will work in both visible and infrared wavebands. We are investigating a variety of different methods of comparing real and synthetic imagery and comparing synthetic imagery rendered to different levels of fidelity. These techniques include neural networks, independent component analysis (ICA), higher-order statistics and models of human contrast perception. This paper will present an overview of the analyses we have carried out and some initial results, along with some preliminary conclusions regarding the fidelity of synthetic imagery.
Gao, Xin; Niu, Cuijuan; Chen, Yushun; Yin, Xuwang
2014-04-01
Understanding the effects of watershed land uses (e.g., agriculture, urban industry) on stream ecological conditions is important for the management of large river basins. A total of 41 and 56 stream sites (from first to fourth order) that were under a gradient of watershed land uses were monitored in 2009 and 2010, respectively, in the Liao River Basin, Northeast China. A total of 192 taxa belonging to four phyla, seven classes, 21 orders and 91 families were identified. The composition of the macroinvertebrate community in the Liao River Basin was dominated by aquatic insect taxa (Ephemeroptera and Diptera), Oligochaeta and Molluscs. The functional feeding group GC (Gatherer/Collector) was dominant in the whole basin. Statistical results showed that sites with fewer watershed impacts (lower-order sites) were characterized by higher current velocity and habitat score, more sensitive taxa (e.g., Ephemeroptera), and a substrate dominated by a high percentage of cobble and pebble. The sites with more impacts from agriculture and urban industry (higher-order sites) were characterized by higher biochemical (BOD5) and chemical (COD) oxygen demand, more tolerant taxa (e.g., Chironominae), and a substrate dominated by silt and sand. Agriculture and urban-industry activities have reduced habitat condition, increased organic pollutants, and reduced macroinvertebrate abundance, diversity, and sensitive taxa in streams of the lower Liao River Basin. Restoration of degraded habitat condition and control of watershed organic pollutants could be potential management priorities for the Basin.
Heavy Metal Content in Chilean Fish Related to Habitat Use, Tissue Type and River of Origin.
Copaja, S V; Pérez, C A; Vega-Retter, C; Véliz, D
2017-12-01
In this study, we analyze the concentration of ten metals in two freshwater fish, the benthic catfish Trichomycterus areolatus and the limnetic silverside Basilichthys microlepidotus, in order to detect possible accumulation differences related to fish habitat (benthic or pelagic), tissue type (gill, liver and muscle), and the river of origin (four different rivers) in central Chile. The MANOVA performed with all variables and metals revealed independent effects of fish, tissue and river. In the case of the fish factor, Cu, Cr, Mo and Zn showed statistically higher concentrations in catfish compared with silverside for all tissues and in all rivers (p < 0.05). In the case of the tissue factor, Al, Cr, Fe and Mn had statistically higher concentrations in liver and gills than in muscle (p < 0.05). For the river effect, the analysis showed the highest concentrations of Cr, Mn and Pb in the Cogoti River and the lowest in the Recoleta River. These results suggest that not all metals have the same pattern of accumulation; however, some metals tend to accumulate more readily in catfish, probably due to their benthic habit, and in liver and gill tissue, probably as a result of accumulation from food sources and respiration.
Slade, Stephen G; Durrie, Daniel S; Binder, Perry S
2009-06-01
To determine the differences in the visual results, pain response, biomechanical effect, quality of vision, and higher-order aberrations, among other parameters, in eyes undergoing either photorefractive keratectomy (PRK) or thin-flap LASIK/sub-Bowman keratomileusis (SBK; intended flap thickness of approximately 100 microm and 8.5-mm diameter) at 1, 3, and 6 months after surgery. A contralateral eye pilot study. Fifty patients (100 eyes) were enrolled at 2 sites. The mean preoperative spherical refraction was -3.66 diopters (D) and the mean cylinder was -0.66 D for all eyes. Eyes in the PRK group underwent 8.5-mm ethanol-assisted PRK, whereas in eyes in the SBK group, an 8.5-mm, (intended) 100-microm flap was created with a 60-kHz IntraLase femtosecond laser (Advanced Medical Optics, Santa Ana, CA). All eyes underwent a customized laser ablation using an Alcon LADARVision 4000 CustomCornea excimer laser (Alcon Laboratories, Fort Worth, TX). Preoperative and postoperative tests included best spectacle-corrected visual acuity, uncorrected visual acuity (UCVA), corneal topography, wavefront aberrometry, retinal image quality, and contrast sensitivity. Patients completed subjective questionnaires at each visit. One- and 3-month UCVA results showed a statistically significant difference: SBK, 88% 20/20 or better vs. 48% 20/20 or better for PRK. At 6 months, UCVA was 94% 20/20 or better for PRK and 92% for SBK. At 1 and 3 months, the SBK group had lower higher-order aberrations (coma and spherical aberration; P
Blackledge, Matthew D; Tunariu, Nina; Orton, Matthew R; Padhani, Anwar R; Collins, David J; Leach, Martin O; Koh, Dow-Mu
2016-01-01
Quantitative whole-body diffusion-weighted MRI (WB-DWI) is now possible using semi-automatic segmentation techniques. The method enables whole-body estimates of global Apparent Diffusion Coefficient (gADC) and total Diffusion Volume (tDV), both of which have demonstrated considerable utility for assessing treatment response in patients with bone metastases from primary prostate and breast cancers. Here we investigate the agreement (inter-observer repeatability) between two radiologists in their definition of Volumes Of Interest (VOIs) and subsequent assessment of tDV and gADC on an exploratory cohort of nine patients. Furthermore, each radiologist was asked to repeat his or her measurements on the same patient data sets one month later to identify the intra-observer repeatability of the technique. Using a Markov Chain Monte Carlo (MCMC) estimation method provided full posterior probabilities of repeatability measures along with maximum a posteriori values and 95% confidence intervals. Our estimates of the inter-observer Intraclass Correlation Coefficient (ICCinter) for log-tDV and median gADC were 1.00 (0.97-1.00) and 0.99 (0.89-0.99) respectively, indicating excellent observer agreement for these metrics. Mean gADC values were found to have ICCinter = 0.97 (0.81-0.99), indicating a slight sensitivity to outliers in the derived distributions of gADC. Of the higher order gADC statistics, skewness demonstrated good inter-user agreement with ICCinter = 0.99 (0.86-1.00), whereas gADC variance and kurtosis performed relatively poorly: 0.89 (0.39-0.97) and 0.96 (0.69-0.99) respectively. Estimates of intra-observer repeatability (ICCintra) demonstrated similar results: 0.99 (0.95-1.00) for log-tDV, 0.98 (0.89-0.99) and 0.97 (0.83-0.99) for median and mean gADC respectively, 0.64 (0.25-0.88) for gADC variance, 0.85 (0.57-0.95) for gADC skewness and 0.85 (0.57-0.95) for gADC kurtosis.
Further investigation of two anomalous patient cases revealed that a very small proportion of voxels with outlying gADC values led to instability in higher order gADC statistics. We therefore conclude that estimates of median/mean gADC and tumour volume demonstrate excellent inter- and intra-observer repeatability, whilst higher order statistics of gADC should be used with caution when ascribing significance to clinical changes.
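For readers unfamiliar with the agreement metric, a minimal sketch of a classical two-way random-effects intraclass correlation, ICC(2,1), for absolute agreement between observers is shown below (the toy data are ours; the paper itself uses an MCMC posterior estimator rather than this ANOVA form).

```python
import numpy as np

def icc2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    Y has shape (subjects, observers)."""
    n, k = Y.shape
    grand = Y.mean()
    MSR = k * np.sum((Y.mean(axis=1) - grand) ** 2) / (n - 1)   # rows
    MSC = n * np.sum((Y.mean(axis=0) - grand) ** 2) / (k - 1)   # columns
    MSE = (np.sum((Y - Y.mean(axis=1, keepdims=True)
                     - Y.mean(axis=0, keepdims=True) + grand) ** 2)
           / ((n - 1) * (k - 1)))                               # residual
    return (MSR - MSE) / (MSR + (k - 1) * MSE + k * (MSC - MSE) / n)

rng = np.random.default_rng(5)
truth = rng.normal(size=9)                 # nine "patients", as in the study
obs = truth[:, None] + 0.05 * rng.normal(size=(9, 2))   # two close observers
print(icc2_1(obs))                         # near 1: excellent agreement
```

The same statistic applied to a noisy, outlier-contaminated metric (such as a higher-order moment) drops sharply, which is the behaviour the paper reports for gADC variance and kurtosis.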
f-lacunary statistical convergence of order (α, β)
NASA Astrophysics Data System (ADS)
Sengul, Hacer; Isik, Mahmut; Et, Mikail
2017-09-01
The main purpose of this paper is to introduce the concepts of f-lacunary statistical convergence of order (α, β) and strong f-lacunary summability of order (α, β) of sequences of real numbers for 0 < α ≤ β ≤ 1, where f is an unbounded modulus.
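For orientation, the classical special case underlying these generalisations (f the identity modulus and α = β = 1) is Fridy-Orhan lacunary statistical convergence; the order-(α, β) and modulus-f refinements of the paper modify the normalisation and the counting below. A standard formulation:

```latex
% \theta = (k_r) is a lacunary sequence, with intervals
% I_r = (k_{r-1}, k_r] and lengths h_r = k_r - k_{r-1} \to \infty.
% A sequence (x_k) is lacunary statistically convergent to L if,
% for every \varepsilon > 0,
\lim_{r \to \infty} \frac{1}{h_r}\,
  \bigl|\{\, k \in I_r : |x_k - L| \ge \varepsilon \,\}\bigr| = 0 .
```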
NASA Astrophysics Data System (ADS)
Qi, Di
Turbulent dynamical systems are ubiquitous in science and engineering. Uncertainty quantification (UQ) in turbulent dynamical systems is a grand challenge where the goal is to obtain statistical estimates for key physical quantities. In the development of a proper UQ scheme for systems characterized by both a high-dimensional phase space and a large number of instabilities, significant model errors compared with the true natural signal are always unavoidable due to both the imperfect understanding of the underlying physical processes and the limited computational resources available. One central issue in contemporary research is the development of a systematic methodology for reduced order models that can recover the crucial features both with model fidelity in statistical equilibrium and with model sensitivity in response to perturbations. In the first part, we discuss a general mathematical framework to construct statistically accurate reduced-order models that have skill in capturing the statistical variability in the principal directions of a general class of complex systems with quadratic nonlinearity. A systematic hierarchy of simple statistical closure schemes, which are built through new global statistical energy conservation principles combined with statistical equilibrium fidelity, are designed and tested for UQ of these problems. Second, the capacity of imperfect low-order stochastic approximations to model extreme events in a passive scalar field advected by turbulent flows is investigated. The effects in complicated flow systems are considered including strong nonlinear and non-Gaussian interactions, and much simpler and cheaper imperfect models with model error are constructed to capture the crucial statistical features in the stationary tracer field. Several mathematical ideas are introduced to improve the prediction skill of the imperfect reduced-order models. 
Most importantly, empirical information theory and statistical linear response theory are applied in the training phase for calibrating model errors to achieve optimal imperfect model parameters; and total statistical energy dynamics are introduced to improve the model sensitivity in the prediction phase especially when strong external perturbations are exerted. The validity of reduced-order models for predicting statistical responses and intermittency is demonstrated on a series of instructive models with increasing complexity, including the stochastic triad model, the Lorenz '96 model, and models for barotropic and baroclinic turbulence. The skillful low-order modeling methods developed here should also be useful for other applications such as efficient algorithms for data assimilation.
Koltun, G.F.
2014-01-01
This report presents the results of a study to assess potential water availability from the Charles Mill, Clendening, Piedmont, Pleasant Hill, Senecaville, and Wills Creek Lakes, located within the Muskingum River Watershed, Ohio. The assessment was based on the criterion that water withdrawals should not appreciably affect maintenance of recreation-season pool levels in current use. To facilitate and simplify the assessment, it was assumed that historical lake operations were successful in maintaining seasonal pool levels, and that any discharges from lakes constituted either water that was discharged to prevent exceeding seasonal pool levels or discharges intended to meet minimum in-stream flow targets downstream from the lakes. It further was assumed that the volume of water discharged in excess of the minimum in-stream flow target is available for use without negatively impacting seasonal pool levels or downstream water uses and that all or part of it is subject to withdrawal. Historical daily outflow data for the lakes were used to determine the quantity of water that potentially could be withdrawn and the resulting quantity of water that would flow downstream (referred to as “flow-by”) on a daily basis as a function of all combinations of three hypothetical target minimum flow-by amounts (1, 2, and 3 times current minimum in-stream flow targets) and three pumping capacities (1, 2, and 3 million gallons per day). Using both U.S. Geological Survey streamgage data (where available) and lake-outflow data provided by the U.S. Army Corps of Engineers resulted in analytical periods ranging from 51 calendar years for Charles Mill, Clendening, and Piedmont Lakes to 74 calendar years for Pleasant Hill, Senecaville, and Wills Creek Lakes. 
The observed outflow time series and the computed time series of daily flow-by amounts and potential withdrawals were analyzed to compute and report order statistics (95th, 75th, 50th, 25th, 10th, and 5th percentiles) and means for the analytical period, in aggregate, and broken down by calendar month. In addition, surplus-water mass curve data were tabulated for each of the lakes. Monthly order statistics of computed withdrawals indicated that, for the three pumping capacities considered, increasing the target minimum flow-by amount tended to reduce the amount of water that can be withdrawn. The reduction was greatest in the lower percentiles of withdrawal; however, increasing the flow-by amount had no impact on potential withdrawals during high flow. In addition, for a given target minimum flow-by amount, increasing the pumping rate typically increased the total amount of water that could be withdrawn; however, that increase was less than a direct multiple of the increase in pumping rate for most flow statistics. Potential monthly withdrawals were observed to be more variable and more limited in some calendar months than others. Monthly order statistics and means of computed daily mean flow-by amounts indicated that flow-by amounts generally tended to be lowest during June–October. Increasing the target minimum flow-by amount for a given pumping rate resulted in some small increases in the magnitudes of the mean and 50th percentile and lower order statistics of computed mean flow-by, but had no effect on the magnitudes of the higher percentile statistics. Increasing the pumping rate for a given target minimum flow-by amount resulted in decreases in magnitudes of higher-percentile flow-by statistics by an amount equal to the flow equivalent of the increase in pumping rate; however, some lower percentile statistics remained unchanged.
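The daily withdrawal accounting described above can be sketched in a few lines: the withdrawal is limited both by pumping capacity and by the requirement that the remaining flow-by meet the target minimum. The flow series and parameter values below are hypothetical, not the Muskingum River data.

```python
import numpy as np

rng = np.random.default_rng(6)
outflow = rng.lognormal(mean=3.0, sigma=1.0, size=365)  # toy daily outflow
target_flowby = 5.0                                     # minimum flow-by
pump_capacity = 2.0                                     # pumping capacity

# Withdraw only water in excess of the flow-by target, up to pump capacity.
withdrawal = np.clip(outflow - target_flowby, 0.0, pump_capacity)
flowby = outflow - withdrawal

percentiles = [95, 75, 50, 25, 10, 5]                   # as in the report
print(np.percentile(withdrawal, percentiles))
print(np.percentile(flowby, percentiles))
```

Repeating this over each combination of target flow-by and pumping capacity, and summarising by calendar month, reproduces the kind of order-statistic tables the report presents.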
DOT National Transportation Integrated Search
1998-01-01
These statistics are broken down for each country into four sets of tables: I. State of the orderbook, II. Ships completed, III. New orders, and IV. Specifications in compensation tonnage. Statistics for the United States and the United Kingdom can b...
Response properties of ON-OFF retinal ganglion cells to high-order stimulus statistics.
Xiao, Lei; Gong, Han-Yan; Gong, Hai-Qing; Liang, Pei-Ji; Zhang, Pu-Ming
2014-10-17
Visual stimulus statistics are fundamental parameters that provide a reference for studying visual coding rules. In this study, multi-electrode extracellular recording experiments were designed and implemented on bullfrog retinal ganglion cells to explore the neural response properties to changes in stimulus statistics. Changes in low-order stimulus statistics, such as intensity and contrast, were clearly reflected in the neuronal firing rate. However, it was difficult to distinguish changes in high-order statistics, such as skewness and kurtosis, based on the neuronal firing rate alone. The neuronal temporal filtering and sensitivity characteristics were further analyzed. We observed that the peak-to-peak amplitude of the temporal filter and the neuronal sensitivity, which were obtained from either neuronal ON spikes or OFF spikes, could exhibit significant changes when the high-order stimulus statistics were changed. These results indicate that in the retina, the neuronal response properties may be reliable and powerful in carrying some complex and subtle visual information. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
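The experimental logic relies on stimulus ensembles that are matched in low-order statistics but differ in higher-order ones. A minimal sketch of two such ensembles (Gaussian versus mean-shifted exponential, both with mean 0 and variance 1; the distributions are illustrative choices, not the paper's stimuli):

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(7)
gauss = rng.normal(0.0, 1.0, size=100000)
expo = rng.exponential(1.0, size=100000) - 1.0   # mean 0, variance 1

# Low-order statistics (mean, variance) match across the two ensembles;
# skewness and excess kurtosis separate them.
for x in (gauss, expo):
    print(x.mean(), x.var(), skew(x), kurtosis(x))
```

A firing-rate code sensitive only to mean and contrast would treat these two ensembles identically, which is why filter shape and sensitivity, rather than rate, are examined.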
The integrated bispectrum and beyond
NASA Astrophysics Data System (ADS)
Munshi, Dipak; Coles, Peter
2017-02-01
The position-dependent power spectrum has been recently proposed as a descriptor of gravitationally induced non-Gaussianity in galaxy clustering, as it is sensitive to the "soft limit" of the bispectrum (i.e. when one of the wave numbers tends to zero). We generalise this concept to higher order and clarify the relationship of these generalisations to other known statistics such as the skew-spectrum, the kurt-spectra and their real-space counterparts, the cumulant correlators. Using the Hierarchical Ansatz (HA) as a toy model for the higher order correlation hierarchy, we show how, in the soft limit, polyspectra at a given order can be identified with lower order polyspectra with the same geometrical dependence but with renormalised amplitudes expressed in terms of amplitudes of the original polyspectra. We extend the concept of the position-dependent bispectrum to the bispectrum of the divergence of the velocity field Θ and mixed multispectra involving δ and Θ in the 3D perturbative regime. To quantify the effects of transients in numerical simulations, we also present results at lowest order in Lagrangian perturbation theory (LPT), or the Zel'dovich approximation (ZA). We then discuss how to extend the position-dependent spectrum concept to encompass cross-spectra. Finally, we study the application of this concept in two dimensions (2D), to projected galaxy maps, convergence κ maps from weak-lensing surveys, or maps of CMB secondaries, e.g. frequency-cleaned y-parameter maps of the thermal Sunyaev-Zel'dovich (tSZ) effect from CMB surveys.
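The position-dependent power spectrum estimator itself is simple: split the field into subvolumes, and correlate each subvolume's mean overdensity with its local power spectrum. A 1D toy sketch (the field, sizes, and quadratic non-Gaussianity are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(8)
n_sub, L = 64, 256
field = rng.normal(size=n_sub * L)
field = field + 0.5 * field ** 2            # mild quadratic non-Gaussianity
field -= field.mean()

means, powers = [], []
for seg in field.reshape(n_sub, L):         # subvolumes of the survey
    means.append(seg.mean())                # local mean overdensity
    d = seg - seg.mean()
    powers.append(np.abs(np.fft.rfft(d)) ** 2 / L)   # local power spectrum

means = np.array(means)
powers = np.array(powers)                   # shape (n_sub, n_modes)

# Integrated-bispectrum-style estimate: covariance, mode by mode, of the
# local mean with the local power; nonzero values probe the squeezed limit.
ib = np.mean((means[:, None] - means.mean())
             * (powers - powers.mean(axis=0)), axis=0)
print(ib.mean())
```

The higher-order generalisations discussed in the paper replace the local power spectrum with local higher-order polyspectra in the same split-and-correlate construction.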
Orbital State Uncertainty Realism
NASA Astrophysics Data System (ADS)
Horwood, J.; Poore, A. B.
2012-09-01
Fundamental to the success of the space situational awareness (SSA) mission is the rigorous inclusion of uncertainty in the space surveillance network. The proper characterization of uncertainty in the orbital state of a space object is a common requirement of many SSA functions, including tracking and data association, resolution of uncorrelated tracks (UCTs), conjunction analysis and probability of collision, sensor resource management, and anomaly detection. While tracking environments such as air and missile defense make extensive use of Gaussian and local linearity assumptions within algorithms for uncertainty management, space surveillance is inherently different due to long time gaps between updates, high misdetection rates, nonlinear and non-conservative dynamics, and non-Gaussian phenomena. The latter implies that "covariance realism" is not always sufficient. SSA also requires "uncertainty realism": the proper characterization of the state, the covariance, and all non-zero higher-order cumulants. In other words, a proper characterization of a space object's full state probability density function (PDF) is required. In order to provide a more statistically rigorous treatment of uncertainty in the space surveillance tracking environment and to better support the aforementioned SSA functions, a new class of multivariate PDFs is formulated which more accurately characterizes the uncertainty of a space object's state or orbit. The new distribution contains a parameter set controlling the higher-order cumulants, which gives the level sets a distinctive "banana" or "boomerang" shape, and degenerates to a Gaussian in a suitable limit.
Using the new class of PDFs within the general Bayesian nonlinear filter, the resulting filter prediction step (i.e., uncertainty propagation) is shown to have the same computational cost as the traditional unscented Kalman filter, with the former able to maintain a proper characterization of the uncertainty for up to ten times as long as the latter. The filter correction step also furnishes a statistically rigorous prediction error which appears in the likelihood ratios for scoring the association of one report or observation to another. Thus, the new filter can be used to support multi-target tracking within a general multiple hypothesis tracking framework. Additionally, the new distribution admits a distance metric which extends the classical Mahalanobis distance (χ² statistic). This metric provides a test for statistical significance and facilitates single-frame data association methods with the potential to easily extend the covariance-based track association algorithm of Hill, Sabol, and Alfriend. The filtering, data fusion, and association methods using the new class of orbital state PDFs are shown to be mathematically tractable and operationally viable.
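A minimal sketch of the classical Mahalanobis/chi-square gating test that the paper's new distance metric extends (the state vector, covariance and gate threshold below are illustrative assumptions, not values from the paper):

```python
import numpy as np

def mahalanobis2(x, mean, cov):
    """Squared Mahalanobis distance of a report x from a track (mean, cov)."""
    d = x - mean
    return float(d @ np.linalg.solve(cov, d))

# Illustrative 2-D track state (e.g. range [km], range-rate [km/s]) and an
# incoming report; all numbers are invented for the sketch.
track_mean = np.array([7000.0, 7.5])
track_cov = np.array([[25.0, 0.5],
                      [0.5, 0.04]])
report = np.array([7003.0, 7.55])

d2 = mahalanobis2(report, track_mean, track_cov)
# Under the Gaussian hypothesis, d2 is chi-square distributed with 2
# degrees of freedom; the 99% gate is chi2.ppf(0.99, 2) = 9.21.
associated = d2 < 9.21
```

The non-Gaussian "banana"-shaped PDFs described above replace the quadratic form with a metric adapted to their curved level sets, but the gating logic is the same.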
Cavallari, Stefano; Panzeri, Stefano; Mazzoni, Alberto
2014-01-01
Models of networks of Leaky Integrate-and-Fire (LIF) neurons are a widely used tool for theoretical investigations of brain function. These models have been used both with current- and conductance-based synapses. However, the differences in the dynamics expressed by these two approaches have been so far mainly studied at the single neuron level. To investigate how these synaptic models affect network activity, we compared the single neuron and neural population dynamics of conductance-based networks (COBNs) and current-based networks (CUBNs) of LIF neurons. These networks were endowed with sparse excitatory and inhibitory recurrent connections, and were tested in conditions including both low- and high-conductance states. We developed a novel procedure to obtain comparable networks by properly tuning the synaptic parameters not shared by the models. The comparable networks so defined displayed an excellent and robust match of first order statistics (average single neuron firing rates and average frequency spectrum of network activity). However, these comparable networks showed profound differences in the second order statistics of neural population interactions and in the modulation of these properties by external inputs. The correlation between inhibitory and excitatory synaptic currents and the cross-neuron correlation between synaptic inputs, membrane potentials and spike trains were stronger and more stimulus-modulated in the COBN. Because of these properties, the spike train correlation carried more information about the strength of the input in the COBN, although the firing rates were equally informative in both network models. Moreover, the network activity of the COBN showed stronger synchronization in the gamma band, and the spectral information about the input was higher and spread over a broader range of frequencies. These results suggest that the second order statistics of network dynamics depend strongly on the choice of synaptic model. PMID:24634645
Pump RIN-induced impairments in unrepeatered transmission systems using distributed Raman amplifier.
Cheng, Jingchi; Tang, Ming; Lau, Alan Pak Tao; Lu, Chao; Wang, Liang; Dong, Zhenhua; Bilal, Syed Muhammad; Fu, Songnian; Shum, Perry Ping; Liu, Deming
2015-05-04
Unrepeatered transmission systems based on high-spectral-efficiency modulation formats and distributed Raman amplification (DRA) have attracted much attention recently. To enhance the reach and optimize system performance, careful design of the DRA is required, based on an analysis of the various types of impairments and their balance. In this paper, we study various pump-RIN-induced distortions of high-spectral-efficiency modulation formats. The vector theory of both 1st- and higher-order stimulated Raman scattering (SRS) using the Jones-matrix formalism is presented. Pump RIN induces three types of distortion on high-spectral-efficiency signals: intensity noise stemming from SRS, phase noise stemming from cross-phase modulation (XPM), and polarization crosstalk stemming from cross-polarization modulation (XPolM). An analytical model for the statistical properties of the relative phase noise (RPN) in higher-order DRA is derived without recourse to the full vector theory. The impact of pump-RIN-induced impairments is analyzed in polarization-multiplexed (PM)-QPSK and PM-16QAM-based unrepeatered system simulations using 1st-, 2nd- and 3rd-order forward-pumped Raman amplifiers. It is shown that at realistic RIN levels, negligible impairments are induced on PM-QPSK signals in 1st- and 2nd-order DRA, while non-negligible impairments occur in the 3rd-order case. PM-16QAM signals suffer more penalties than PM-QPSK at the same on-off gain, where both 2nd- and 3rd-order DRA cause non-negligible performance degradation. We also investigate the performance of digital signal processing (DSP) algorithms to mitigate such impairments.
Socioeconomic disadvantage and schizophrenia in migrants under mental health detention orders.
Bulla, Jan; Hoffmann, Klaus; Querengässer, Jan; Ross, Thomas
2017-09-01
Migrants with mental hospital orders according to section 63 of the German criminal code are overrepresented in relation to their numbers in the general population. Subgroups originating from certain world regions are diagnosed with schizophrenia at a much higher rate than others. In the present literature, there is strong evidence for a substantial correlation between migration, social disadvantage and the prevalence of schizophrenia. This study investigates the relationship between countries of origin, the risk of becoming a forensic patient and the proportion of schizophrenia spectrum disorders. Data from a comprehensive evaluation tool of forensic inpatients in the German federal state of Baden-Württemberg (FoDoBa) were compared with population statistics and correlated with the Human Development Index (HDI) and the Multidimensional Poverty Index (MPI). For residents with a migration background, the risk ratio for receiving a mental hospital order is 1.3 in comparison to non-migrants. There was a highly significant correlation between the HDI of the country of origin and the risk ratio for detention in a forensic psychiatric hospital. The proportion of schizophrenia diagnoses also correlated significantly with the HDI. In contrast, the MPI country rankings were not associated with schizophrenia diagnoses. Two lines of explanation are discussed: first, a higher prevalence of schizophrenia in migrants originating from low-income countries, and second, a specific bias in court rulings with regard to involuntary forensic treatment orders for these migrant groups.
Ankle plantarflexion strength in rearfoot and forefoot runners: a novel cluster-analytic approach.
Liebl, Dominik; Willwacher, Steffen; Hamill, Joseph; Brüggemann, Gert-Peter
2014-06-01
The purpose of the present study was to test for differences in ankle plantarflexion strength between habitual rearfoot and forefoot runners. To approach this issue, we revisit the problem of classifying different footfall patterns in human runners. A dataset of 119 subjects running shod and barefoot (speed: 3.5 m/s) was analyzed. The footfall patterns were clustered by a novel statistical approach motivated by advances in the statistical literature on functional data analysis. We explain the novel statistical approach in detail and compare it to the classically used strike index of Cavanagh and Lafortune (1980). The two groups found by the new cluster approach are readily interpretable as forefoot and rearfoot footfall groups. The subsequent comparison of the clustered subjects reveals that runners with a forefoot footfall pattern are capable of producing significantly higher joint moments in a maximum voluntary contraction (MVC) of their ankle plantarflexor muscle-tendon units (difference in means: 0.28 Nm/kg). This effect remains significant after controlling for an additional gender effect and for differences in training levels. Our analysis confirms the hypothesis that forefoot runners have a higher mean MVC plantarflexion strength than rearfoot runners. Furthermore, we demonstrate that our proposed stochastic cluster analysis provides a robust and useful framework for clustering foot strikes. Copyright © 2014 Elsevier B.V. All rights reserved.
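The paper's functional-data clustering is beyond a short sketch, but the classical strike-index rule of Cavanagh and Lafortune (1980) that it is compared against can be stated in a few lines (the conventional thirds-of-foot-length thresholds are assumed):

```python
def strike_index_class(si):
    """Classify a footfall from the strike index: the centre-of-pressure
    location at touchdown as a fraction of foot length, split into the
    conventional thirds (Cavanagh & Lafortune, 1980)."""
    if si < 1 / 3:
        return "rearfoot"
    if si < 2 / 3:
        return "midfoot"
    return "forefoot"
```

The cluster approach described above replaces these fixed cut-offs with groups learned from the full time-course of the data.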
Lago-Peñas, Carlos; Lago-Ballesteros, Joaquín; Dellal, Alexandre; Gómez, Maite
2010-01-01
The aim of the present study was to analyze men's football competitions in order to identify which game-related statistics discriminate between winning, drawing and losing teams. The sample corresponded to 380 games from the 2008-2009 season of the Spanish Men's Professional League. The game-related statistics gathered were: total shots, shots on goal, effectiveness, assists, crosses, offsides committed and received, corners, ball possession, crosses against, fouls committed and received, corners against, yellow and red cards, and venue. A univariate (t-test) and multivariate (discriminant) analysis of the data was done. The results showed that winning teams had significantly higher averages for the following game statistics: total shots (p < 0.001), shots on goal (p < 0.01), effectiveness (p < 0.01), assists (p < 0.01), offsides committed (p < 0.01) and crosses against (p < 0.01). Losing teams had significantly higher averages for crosses (p < 0.01), offsides received (p < 0.01) and red cards (p < 0.01). The discriminant analysis led to the following conclusion: the variables that discriminate between winning, drawing and losing teams are total shots, shots on goal, crosses, crosses against, ball possession and venue. Coaches and players should be aware of these different profiles in order to increase knowledge about the cognitive and motor demands of the game and, therefore, to evaluate specificity in practice and game planning. Key points This paper increases the knowledge about soccer match analysis. Gives normative values to establish practice and match objectives. Gives application ideas to connect research with coaches' practice. PMID:24149698
Numerical solution for weight reduction model due to health campaigns in Spain
NASA Astrophysics Data System (ADS)
Mohammed, Maha A.; Noor, Noor Fadiya Mohd; Siri, Zailan; Ibrahim, Adriana Irawati Nur
2015-10-01
A transition model between three subpopulations, based on the Body Mass Index (BMI) of the Valencia community in Spain, is considered. No changes in the population's nutritional habits or in public health strategies on weight reduction until 2030 are assumed. The system of ordinary differential equations is solved using a higher-order Runge-Kutta method. The numerical results obtained are compared with the predicted values of the subpopulation proportions based on statistical estimation in 2013, 2015 and 2030. The relative approximate error is calculated. The consistency of the Runge-Kutta method in solving the model is discussed.
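A minimal sketch of a classical 4th-order Runge-Kutta integrator applied to a hypothetical three-compartment transition system (the rate constants and initial proportions below are invented for illustration; the paper's model is fitted to Valencia BMI data):

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Hypothetical yearly transition rates between normal-weight (n),
# overweight (o) and obese (b) proportions.
r1, r2, r3 = 0.03, 0.02, 0.01     # n->o, o->n, o->b

def f(t, y):
    n, o, b = y
    return np.array([-r1 * n + r2 * o,
                     r1 * n - (r2 + r3) * o,
                     r3 * o])

y = np.array([0.40, 0.35, 0.25])  # initial subpopulation proportions
h = 0.1
for _ in range(int(17 / h)):      # e.g. integrate 2013 -> 2030
    y = rk4_step(f, 0.0, y, h)
```

Because the rates only move mass between compartments, the proportions sum to 1 at every step, a linear invariant that Runge-Kutta methods preserve exactly.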
NASA Astrophysics Data System (ADS)
Kerr, Laura T.; Adams, Aine; O'Dea, Shirley; Domijan, Katarina; Cullen, Ivor; Hennelly, Bryan M.
2014-05-01
Raman microspectroscopy can be applied to the urinary bladder for highly accurate classification and diagnosis of bladder cancer. This technique can be applied in vitro to bladder epithelial cells obtained from urine cytology, or in vivo as an "optical biopsy" to provide results in real time with higher sensitivity and specificity than current clinical methods. However, there exists a high degree of variability across experimental parameters which needs to be standardised before this technique can be utilized in an everyday clinical environment. In this study, we investigate different laser wavelengths (473 nm and 532 nm), sample substrates (glass, fused silica and calcium fluoride) and multivariate statistical methods in order to gain insight into how these various experimental parameters impact the sensitivity and specificity of Raman cytology.
Development of superalloys by powder metallurgy for use at 1000 - 1400 F
NASA Technical Reports Server (NTRS)
Calhoun, C. D.
1971-01-01
Consolidated powders of four nickel-base superalloys were studied for potential application as compressor and turbine discs in jet engines. All of the alloys were based on the Rene' 95 chemistry. Three of these had variations in carbon and Al2O3 contents, and the fourth alloy was chemically modified to a higher volume fraction of strengthening phase. The Al2O3 was added by preoxidation of the powders prior to extrusion. Various levels of four experimental factors, (1) alloy composition, (2) grain size, (3) thermomechanical processing, and (4) room temperature deformation plus final age, were evaluated by tensile and stress rupture testing at 1200 F. Various levels of the four factors were assumed in order to construct the statistically designed experiment, but the actual levels investigated were established in preliminary studies that preceded the statistical process development study.
ERIC Educational Resources Information Center
Center for Education Statistics (ED/OERI), Washington, DC.
The Financial Statistics machine-readable data file (MRDF) is a subfile of the larger Higher Education General Information Survey (HEGIS). It contains basic financial statistics for over 3,000 institutions of higher education in the United States and its territories. The data are arranged sequentially by institution, with institutional…
Smith, Ben J; Zehle, Katharina; Bauman, Adrian E; Chau, Josephine; Hawkshaw, Barbara; Frost, Steven; Thomas, Margaret
2006-04-01
This study examined the use of quantitative methods in Australian health promotion research in order to identify methodological trends and priorities for strengthening the evidence base for health promotion. Australian health promotion articles were identified by hand-searching publications from 1992-2002 in six journals: Health Promotion Journal of Australia, Australian and New Zealand Journal of Public Health, Health Promotion International, Health Education Research, Health Education and Behavior and the American Journal of Health Promotion. The study designs and statistical methods used in articles presenting quantitative research were recorded. 591 (57.7%) of the 1,025 articles used quantitative methods. Cross-sectional designs were used in the majority (54.3%) of studies, with pre- and post-test (14.6%) and post-test only (9.5%) the next most common designs. Bivariate statistical methods were used in 45.9% of papers, multivariate methods in 27.1% and simple numbers and proportions in 25.4%. Few studies used higher-level statistical techniques. While most studies used quantitative methods, the majority were descriptive in nature. The study designs and statistical methods used provided limited scope for demonstrating intervention effects or understanding the determinants of change.
A Novel Approach for Adaptive Signal Processing
NASA Technical Reports Server (NTRS)
Chen, Ya-Chin; Juang, Jer-Nan
1998-01-01
Adaptive linear predictors have been used extensively in practice in a wide variety of forms. In the main, their theoretical development is based upon the assumption of stationarity of the signals involved, particularly with respect to the second order statistics. On this basis, the well-known normal equations can be formulated. If high-order statistical stationarity is assumed, then the equivalent normal equations involve high-order signal moments. In either case, the cross moments (second order or higher) are needed. This renders the adaptive prediction procedure non-blind. A novel procedure for blind adaptive prediction has been proposed, and considerable progress on its implementation was made in our contributions over the past year. The approach is based upon a suitable interpretation of blind equalization methods that satisfy the constant modulus property, and it departs significantly from the standard prediction methods. These blind adaptive algorithms are derived by formulating Lagrange equivalents from mechanisms of constrained optimization. In this report, other new update algorithms are derived from the fundamental concepts of advanced system identification to carry out the proposed blind adaptive prediction. The results of the work can be extended to a number of control-related problems, such as disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. The applications implemented are in speech processing, such as coding and synthesis. Simulations are included to verify the novel modelling method.
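A minimal sketch of a constant-modulus-driven blind adaptive filter of the kind the constant-modulus property enables (a standard CMA tap update under assumed parameters: tap count, step size and a simple two-tap test channel, not the report's exact algorithm):

```python
import numpy as np

def cma_equalizer(x, n_taps=11, mu=1e-3, modulus=1.0):
    """Blind adaptive filter driven by the constant-modulus criterion.

    Minimizes E[(|y|^2 - R)^2] by stochastic gradient descent; no training
    (reference) signal is used -- only the statistical property that the
    transmitted symbols have constant modulus.
    """
    w = np.zeros(n_taps)
    w[n_taps // 2] = 1.0                 # centre-spike initialization
    y = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]        # tap-delay line
        y[n] = w @ u
        e = y[n] * (y[n] ** 2 - modulus) # constant-modulus cost gradient term
        w -= mu * e * u                  # stochastic gradient update
    return y, w

# BPSK (+/-1) symbols through a mild two-tap channel; the filter blindly
# restores the constant modulus of the transmitted sequence.
rng = np.random.default_rng(2)
s = rng.choice([-1.0, 1.0], 20_000)
x = np.convolve(s, [1.0, 0.4], mode="same")
y, w = cma_equalizer(x)
```

The dispersion of |y|² around the target modulus drops well below that of the channel output, without any reference signal, which is the sense in which the prediction becomes "blind".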
Pérez-García, Débora; Brun-Gasca, Carme; Pérez-Jurado, Luis A; Mervis, Carolyn B
2017-03-01
To identify similarities and differences in the behavioral profile of children with Williams syndrome from Spain (n = 53) and the United States (n = 145), we asked parents of 6- to 14-year-olds with Williams syndrome to complete the Child Behavior Checklist 6-18. The distribution of raw scores was significantly higher for the Spanish sample than the American sample for all of the higher-order factors and half of both the empirically based and Diagnostic and Statistical Manual of Mental Disorders (DSM)-oriented scales. In contrast, analyses based on country-specific T-scores indicated that the distribution for the Spanish sample was significantly higher than for the American sample only on the Social Problems scale. No gender differences were found. Genetic and cultural influences on children's behavior and cultural influences on parental ratings of behavior are discussed.
Huang, Chun-Ta; Chuang, Yu-Chung; Tsai, Yi-Ju; Ko, Wen-Je; Yu, Chong-Jen
2016-01-01
Severe sepsis is a potentially deadly illness and always requires intensive care. Do-not-resuscitate (DNR) orders remain a debated issue in critical care and limited data exist about their impact on the care of septic patients, particularly in East Asia. We sought to assess the outcome of severe sepsis patients with regard to DNR status in Taiwan. A retrospective cohort study was conducted in intensive care units (ICUs) between 2008 and 2010. All severe sepsis patients were included for analysis. The primary outcome was the association between DNR orders and ICU mortality. Volume of interventions was used as a proxy indicator of the aggressiveness of care. Sixty-seven (9.4%) of 712 patients had DNR orders on ICU admission, and these patients were older and had higher disease severity compared with patients without DNR orders. Notably, DNR patients experienced high ICU mortality (90%). Multivariate analysis revealed that the presence of DNR orders was independently associated with ICU mortality (odds ratio: 6.13; 95% confidence interval: 2.66-14.10). In the propensity score-matched cohort, the ICU mortality rate (91%) in the DNR group was statistically higher than that (62%) in the non-DNR group (p < 0.001). Regarding ICU interventions, arterial and central venous catheterization were more commonly used in DNR patients than in non-DNR patients. From the Asian perspective, septic patients placed on DNR orders on ICU admission had exceptionally high mortality. In contrast to Western reports, DNR patients received more ICU interventions, reflecting a more aggressive approach to dealing with this patient population. The findings in some ways reflect differences between East and West cultures and suggest that DNR status is an important confounder in ICU studies involving severely septic patients.
Ventral and dorsal streams for choosing word order during sentence production
Thothathiri, Malathi; Rattinger, Michelle
2015-01-01
Proficient language use requires speakers to vary word order and choose between different ways of expressing the same meaning. Prior statistical associations between individual verbs and different word orders are known to influence speakers’ choices, but the underlying neural mechanisms are unknown. Here we show that distinct neural pathways are used for verbs with different statistical associations. We manipulated statistical experience by training participants in a language containing novel verbs and two alternative word orders (agent-before-patient, AP; patient-before-agent, PA). Some verbs appeared exclusively in AP, others exclusively in PA, and yet others in both orders. Subsequently, we used sparse sampling neuroimaging to examine the neural substrates as participants generated new sentences in the scanner. Behaviorally, participants showed an overall preference for AP order, but also increased PA order for verbs experienced in that order, reflecting statistical learning. Functional activation and connectivity analyses revealed distinct networks underlying the increased PA production. Verbs experienced in both orders during training preferentially recruited a ventral stream, indicating the use of conceptual processing for mapping meaning to word order. In contrast, verbs experienced solely in PA order recruited dorsal pathways, indicating the use of selective attention and sensorimotor integration for choosing words in the right order. These results show that the brain tracks the structural associations of individual verbs and that the same structural output may be achieved via ventral or dorsal streams, depending on the type of regularities in the input. PMID:26621706
Statistics Report on TEQSA Registered Higher Education Providers
ERIC Educational Resources Information Center
Australian Government Tertiary Education Quality and Standards Agency, 2015
2015-01-01
This statistics report provides a comprehensive snapshot of national statistics on all parts of the sector for the year 2013, by bringing together data collected directly by TEQSA with data sourced from the main higher education statistics collections managed by the Australian Government Department of Education and Training. The report provides…
Study of angular momentum variation due to entrance channel effect in heavy ion fusion reactions
NASA Astrophysics Data System (ADS)
Kumar, Ajay
2014-05-01
A systematic investigation of the properties of hot nuclei may be studied by detecting the evaporated particles. These emissions reflect the behavior of the nucleus at various stages of the deexcitation cascade. When the nucleus is formed by the collision of a heavy nucleus with a light particle, the statistical model has done a good job of predicting the distribution of evaporated particles when reasonable choices were made for the level densities and yrast lines. Comparison to more specific measurements could, of course, provide a more severe test of the model and enable one to identify the deviations from the statistical model as the signature of other effects not included in the model. Some papers have claimed that experimental evaporation spectra from heavy-ion fusion reactions at higher excitation energies and angular momenta are no longer consistent with the predictions of the standard statistical model. In order to confirm this prediction we have employed two systems, a mass-symmetric (31P+45Sc) and a mass-asymmetric channel (12C+64Zn), leading to the same compound nucleus 76Kr* at the excitation energy of 75 MeV. Neutron energy spectra of the asymmetric system (12C+64Zn) at different angles are well described by the statistical model predictions using the normal value of the level density parameter a = A/8 MeV-1. However, in the case of the symmetric system (31P+45Sc), the statistical model interpretation of the data requires the change in the value of a = A/10 MeV-1. 
The delayed evolution of the compound system in case of the symmetric 31P+45Sc system may lead to the formation of a temperature equilibrated dinuclear complex, which may be responsible for the neutron emission at higher temperature, while the protons and alpha particles are evaporated after neutron emission when the system is sufficiently cooled down and the higher g-values do not contribute in the formation of the compound nucleus for the symmetric entrance channel in case of charged particle emission.
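The role of the level density parameter can be seen from the simplest Fermi-gas relation E* = aT², using the numbers quoted above; this one-line estimate ignores the rotational-energy, yrast-line and spin-distribution effects of a full statistical-model calculation:

```python
import numpy as np

# Fermi-gas relation E* = a T^2 for the compound nucleus 76Kr* at
# E* = 75 MeV, with the two level density parameters from the abstract.
A, E_star = 76, 75.0
T_asym = np.sqrt(E_star / (A / 8.0))    # a = A/8 MeV^-1  (12C + 64Zn)
T_sym = np.sqrt(E_star / (A / 10.0))    # a = A/10 MeV^-1 (31P + 45Sc)

# Maxwellian evaporation spectrum dN/dE ~ E exp(-E/T): the smaller level
# density parameter implies a hotter source and a harder neutron spectrum.
E = np.linspace(0.0, 15.0, 301)
spec_sym = E * np.exp(-E / T_sym)
```

The required shift from a = A/8 to a = A/10 for the symmetric channel thus corresponds to raising the apparent nuclear temperature from about 2.8 MeV to about 3.1 MeV, consistent with neutron emission from a hotter, not fully fused dinuclear system.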
AlOtaiba, Stephanie
2011-01-01
In this study, we examined the development of beginning writing skills in kindergarten children and the contribution of spelling and handwriting to these writing skills after accounting for early language, literacy, cognitive skills, and student characteristics. Two hundred and forty two children were given a battery of cognitive, oral language, reading, and writing measures. They exhibited a range of competency in spelling, handwriting, written expression, and in their ability to express ideas. Handwriting and spelling made statistically significant contributions to written expression, demonstrating the importance of these lower-order transcription skills to higher order text-generation skills from a very early age. The contributions of oral language and reading skills were not significant. Implications of these findings for writing development and instruction are addressed. PMID:23087544
Student Solution Manual for Mathematical Methods for Physics and Engineering Third Edition
NASA Astrophysics Data System (ADS)
Riley, K. F.; Hobson, M. P.
2006-03-01
Preface; 1. Preliminary algebra; 2. Preliminary calculus; 3. Complex numbers and hyperbolic functions; 4. Series and limits; 5. Partial differentiation; 6. Multiple integrals; 7. Vector algebra; 8. Matrices and vector spaces; 9. Normal modes; 10. Vector calculus; 11. Line, surface and volume integrals; 12. Fourier series; 13. Integral transforms; 14. First-order ordinary differential equations; 15. Higher-order ordinary differential equations; 16. Series solutions of ordinary differential equations; 17. Eigenfunction methods for differential equations; 18. Special functions; 19. Quantum operators; 20. Partial differential equations: general and particular; 21. Partial differential equations: separation of variables; 22. Calculus of variations; 23. Integral equations; 24. Complex variables; 25. Application of complex variables; 26. Tensors; 27. Numerical methods; 28. Group theory; 29. Representation theory; 30. Probability; 31. Statistics.
Quadratic RK shooting solution for an environmental parameter prediction boundary value problem
NASA Astrophysics Data System (ADS)
Famelis, Ioannis Th.; Tsitouras, Ch.
2014-10-01
Using tools of Information Geometry, the minimum distance between two elements of a statistical manifold is defined by the corresponding geodesic, i.e. the minimum-length curve that connects them. Such a curve, where the probability distribution functions in the case of our meteorological data are two-parameter Weibull distributions, satisfies a 2nd-order Boundary Value (BV) system. We study the numerical treatment of the resulting special quadratic-form system using the shooting method. We compare the solutions of the problem when we employ a classical Singly Diagonally Implicit Runge-Kutta (SDIRK) 4(3) pair of methods and a quadratic SDIRK 5(3) pair. Both pairs have the same computational cost, whereas the second attains higher order as it is specially constructed for quadratic problems.
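A minimal sketch of the shooting method for a 2nd-order BV problem, using a classical explicit RK4 integrator and a secant iteration on the unknown initial slope (the linear test problem below is illustrative; the paper's Weibull geodesic system and SDIRK pairs are more elaborate):

```python
import numpy as np

def rk4_solve(f, y0, t0, t1, n):
    """Integrate y' = f(t, y) from t0 to t1 with n classical RK4 steps."""
    h = (t1 - t0) / n
    t, y = t0, np.asarray(y0, dtype=float)
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

def shoot(slope, f, ya, t0, t1, n=200):
    """Value of y at t1 when integrating from (ya, slope)."""
    return rk4_solve(f, [ya, slope], t0, t1, n)[0]

# Illustrative linear BV problem (not the paper's geodesic system):
#   y'' = -y,  y(0) = 0,  y(pi/2) = 1,  exact solution y(t) = sin(t).
f = lambda t, y: np.array([y[1], -y[0]])
t0, t1, ya, yb = 0.0, np.pi / 2, 0.0, 1.0

# Secant iteration on the unknown initial slope s until y(t1; s) = yb.
s0, s1 = 0.0, 2.0
g0 = shoot(s0, f, ya, t0, t1) - yb
g1 = shoot(s1, f, ya, t0, t1) - yb
for _ in range(20):
    if abs(g1 - g0) < 1e-14:
        break
    s0, s1 = s1, s1 - g1 * (s1 - s0) / (g1 - g0)
    g0, g1 = g1, shoot(s1, f, ya, t0, t1) - yb
    if abs(g1) < 1e-12:
        break
# s1 converges to the exact initial slope y'(0) = cos(0) = 1.
```

Replacing the RK4 integrator with an SDIRK pair, as the paper does, changes only the inner integration; the outer shooting iteration on the initial slope is unchanged.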
Compressible Boundary Layer Predictions at High Reynolds Number using Hybrid LES/RANS Methods
NASA Technical Reports Server (NTRS)
Choi, Jung-Il; Edwards, Jack R.; Baurle, Robert A.
2008-01-01
Simulations of compressible boundary layer flow at three different Reynolds numbers (Re_δ = 5.59×10^4, 1.78×10^5, and 1.58×10^6) are performed using a hybrid large-eddy/Reynolds-averaged Navier-Stokes method. Variations in the recycling/rescaling method, the higher-order extension, the choice of primitive variables, the RANS/LES transition parameters, and the mesh resolution are considered in order to assess the model. The results indicate that the present model can provide good predictions of the mean flow properties and second-moment statistics of the boundary layers considered. Normalized Reynolds stresses in the outer layer are found to be independent of Reynolds number, similar to incompressible turbulent boundary layers.
On the bispectra of very massive tracers in the Effective Field Theory of Large-Scale Structure
Nadler, Ethan O.; Perko, Ashley; Senatore, Leonardo
2018-02-01
The Effective Field Theory of Large-Scale Structure (EFTofLSS) provides a consistent perturbative framework for describing the statistical distribution of cosmological large-scale structure. In a previous EFTofLSS calculation that involved the one-loop power spectra and tree-level bispectra, it was shown that the k-reach of the prediction for biased tracers is comparable for all investigated masses if suitable higher-derivative biases, which are less suppressed for more massive tracers, are added. However, it is possible that the non-linear biases grow faster with tracer mass than the linear bias, implying that loop contributions could be the leading correction to the bispectra. To check this, we include the one-loop contributions in a fit to numerical data in the limit of strongly enhanced higher-order biases. Here, we show that the resulting one-loop power spectra and higher-derivative plus leading one-loop bispectra fit the two- and three-point functions respectively up to k ≃ 0.19 h Mpc⁻¹ and k ≃ 0.14 h Mpc⁻¹ at the percent level. We find that the higher-order bias coefficients are not strongly enhanced, and we argue that the gain in perturbative reach due to the leading one-loop contributions to the bispectra is relatively small. Thus, we conclude that higher-derivative biases provide the leading correction to the bispectra for tracers of a very wide range of masses.
Deyhimi, Parviz; Hashemzadeh, Zahra
2014-04-01
Odontogenic keratocyst (OKC) is an aggressive cyst, and its recurrence rate is higher than that of other odontogenic cysts. Orthokeratinized odontogenic cyst (OOC) is less aggressive than OKC, but bears the probability of carcinomatous changes. In this study, we evaluated the expression and intensity of P53 and TGF-alpha in order to compare the biologic behavior and probable carcinomatous changes of these two cysts. In this cross-sectional study, 15 OKCs and 15 OOCs were stained immunohistochemically for P53 and TGF-alpha using the Novolink polymer method. Then, all slides were examined by an optical microscope at 400× magnification, and the stained cells in the basal and parabasal layers were counted. Finally, the results were analyzed by the Mann-Whitney and Wilcoxon tests (P < 0.05). The difference between the expression of P53 and TGF-alpha in the basal layer of OKC and OOC was not statistically significant (P > 0.05), but the expression of P53 and TGF-alpha in the parabasal layer in OKC was statistically higher compared to OOC (P < 0.05). Considering the known role of P53 and TGF-alpha in malignant changes and the higher expression of P53 and TGF-alpha in OKC compared to those in OOC, the probability of carcinomatous changes was higher in OKC than in OOC. Copyright © 2013 Elsevier GmbH. All rights reserved.
Characterization of the Body-to-Body Propagation Channel for Subjects during Sports Activities.
Mohamed, Marshed; Cheffena, Michael; Moldsvor, Arild
2018-02-18
Body-to-body wireless networks (BBWNs) have great potential to find applications in team sports activities, among others. However, successful design of such systems requires a thorough understanding of the communication channel, as the movement of body components causes time-varying shadowing and fading effects. In this study, we present results of a measurement campaign of BBWNs during running and cycling activities. Among other findings, the results indicated the presence of good and bad states, with each state following a specific distribution for the considered propagation scenarios. This motivated the development of a two-state semi-Markov model for simulation of the communication channels. The simulation model was validated against the available measurement data in terms of first- and second-order statistics and showed good agreement. The first-order statistics obtained from the simulation model, as well as the measured results, were then used to analyze the performance of the BBWN channels under running and cycling activities in terms of capacity and outage probability. Cycling channels showed better performance than running channels, having higher channel capacity and lower outage probability, regardless of the speed of the subjects involved in the measurement campaign.
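A two-state (good/bad) semi-Markov channel of the kind described above can be sketched as follows. The dwell-time distributions, gain distributions, and outage threshold are invented placeholders, not the measured BBWN parameters.

```python
import random

def simulate_channel(n_samples, seed=1):
    """Return n_samples of channel gain (dB) from a two-state semi-Markov chain."""
    rng = random.Random(seed)
    gains, state = [], "good"
    while len(gains) < n_samples:
        if state == "good":
            dwell = max(1, int(rng.expovariate(1 / 50)))           # mean dwell 50 samples
            gains += [rng.gauss(-60, 2) for _ in range(dwell)]     # strong signal
            state = "bad"
        else:
            dwell = max(1, int(rng.expovariate(1 / 10)))           # mean dwell 10 samples
            gains += [rng.gauss(-80, 4) for _ in range(dwell)]     # deep shadowing
            state = "good"
    return gains[:n_samples]

def outage_probability(gains, threshold_db=-75.0):
    """Fraction of time the channel gain falls below the outage threshold."""
    return sum(g < threshold_db for g in gains) / len(gains)

p_out = outage_probability(simulate_channel(20000))
```

Being semi-Markov rather than Markov means the dwell time in each state can follow an arbitrary distribution; swapping the exponential dwell times for the empirically fitted ones would recover an ordinary channel-simulation workflow.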
NASA Astrophysics Data System (ADS)
Launiainen, Samuli; Vesala, Timo; Mölder, Meelis; Mammarella, Ivan; Smolander, Sampo; Rannik, Üllar; Kolari, Pasi; Hari, Pertti; Lindroth, Anders; Katul, Gabriel G.
2007-11-01
Among the fundamental problems in canopy turbulence, particularly near the forest floor, remain the local diabatic effects and the linkages between turbulent length scales and canopy morphology. To progress on these problems, mean and higher-order turbulence statistics are collected in a uniform pine forest across a wide range of atmospheric stability conditions using five 3-D anemometers in the subcanopy. The main novelties from this experiment are: (1) The agreement between second-order closure model results and measurements suggests that diabatic states in the layer above the canopy explain much of the modulation of the key velocity statistics inside the canopy, except in the immediate vicinity of the trunk space and for very stable conditions. (2) The dimensionless turbulent kinetic energy in the trunk space is large due to a large longitudinal velocity variance, but it is inactive and contributes little to momentum fluxes. (3) Near the floor layer, a logarithmic mean velocity profile is formed and vertical eddies are strongly suppressed, modifying all power spectra. (4) A spectral peak in the vertical velocity near the ground, commensurate with the trunk diameter, emerged at a moderate element Reynolds number, consistent with Strouhal instabilities describing wake production.
NASA Astrophysics Data System (ADS)
Böhm, Fabian; Grosse, Nicolai B.; Kolarczik, Mirco; Herzog, Bastian; Achtstein, Alexander; Owschimikow, Nina; Woggon, Ulrike
2017-09-01
Quantum state tomography and the reconstruction of the photon number distribution are techniques to extract the properties of a light field from measurements of its mean and fluctuations. These techniques are particularly useful when dealing with macroscopic or mesoscopic systems, where a description limited to the second order autocorrelation soon becomes inadequate. In particular, the emission of nonclassical light is expected from mesoscopic quantum dot systems strongly coupled to a cavity or in systems with large optical nonlinearities. We analyze the emission of a quantum dot-semiconductor optical amplifier system by quantifying the modifications of a femtosecond laser pulse propagating through the device. Using a balanced detection scheme in a self-heterodyning setup, we achieve precise measurements of the quadrature components and their fluctuations at the quantum noise limit. We resolve the photon number distribution and the thermal-to-coherent evolution in the photon statistics of the emission. The interferometric detection achieves a high sensitivity in the few photon limit. From our data, we can also reconstruct the second order autocorrelation function with higher precision and time resolution compared with classical Hanbury Brown-Twiss experiments.
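The second-order autocorrelation mentioned above can be reconstructed from a photon-number distribution via g2(0) = ⟨n(n−1)⟩/⟨n⟩². A small sketch, using textbook thermal and coherent (Poisson) distributions to illustrate the thermal-to-coherent evolution; the mean photon number is an arbitrary choice.

```python
import math

def g2_zero(pn):
    """g2(0) = <n(n-1)> / <n>^2 from a photon-number distribution {n: p(n)}."""
    mean = sum(n * p for n, p in pn.items())
    fact = sum(n * (n - 1) * p for n, p in pn.items())
    return fact / mean ** 2

def thermal(nbar, nmax=100):
    """Bose-Einstein (thermal) distribution, truncated at nmax."""
    return {n: nbar ** n / (1 + nbar) ** (n + 1) for n in range(nmax)}

def coherent(nbar, nmax=100):
    """Poisson distribution, built by recurrence to avoid huge factorials."""
    p, out = math.exp(-nbar), {}
    for n in range(nmax):
        out[n] = p
        p *= nbar / (n + 1)
    return out

g2_th = g2_zero(thermal(2.0))     # thermal light: g2(0) = 2 analytically
g2_coh = g2_zero(coherent(2.0))   # coherent light: g2(0) = 1 analytically
```

A distribution evolving from thermal toward coherent statistics would show g2(0) sliding from 2 toward 1, which is the signature tracked in the measurement above.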
The Higher Education System in Israel: Statistical Abstract and Analysis.
ERIC Educational Resources Information Center
Herskovic, Shlomo
This edition of a statistical abstract published every few years on the higher education system in Israel presents the most recent data available through 1990-91. The data were gathered through the cooperation of the Central Bureau of Statistics and institutions of higher education. Chapter 1 presents a summary of principal findings covering the…
Sensitivity of an Antarctic Ice Sheet Model to Sub-Ice-Shelf Melting
NASA Astrophysics Data System (ADS)
Lipscomb, W. H.; Leguy, G.; Urban, N. M.; Berdahl, M.
2017-12-01
Theory and observations suggest that marine-based sectors of the Antarctic ice sheet could retreat rapidly under ocean warming and increased melting beneath ice shelves. Numerical models of marine ice sheets vary widely in sensitivity, depending on grid resolution and the parameterization of key processes (e.g., calving and hydrofracture). Here we study the sensitivity of the Antarctic ice sheet to ocean warming and sub-shelf melting in standalone simulations of the Community Ice Sheet Model (CISM). Melt rates either are prescribed based on observations and high-resolution ocean model output, or are derived from a plume model forced by idealized ocean temperature profiles. In CISM, we vary the model resolution (between 1 and 8 km), Stokes approximation (shallow-shelf, depth-integrated higher-order, or 3D higher-order) and calving scheme to create an ensemble of plausible responses to sub-shelf melting. This work supports a broader goal of building statistical and reduced models that can translate large-scale Earth-system model projections to changes in Antarctic ocean temperatures and ice sheet discharge, thus better quantifying uncertainty in Antarctic-sourced sea-level rise.
Understanding the latent structure of the emotional disorders in children and adolescents.
Trosper, Sarah E; Whitton, Sarah W; Brown, Timothy A; Pincus, Donna B
2012-05-01
Investigators are persistently aiming to clarify structural relationships among the emotional disorders in efforts to improve diagnostic classification. The high co-occurrence of anxiety and mood disorders, however, has led investigators to portray the current structure of anxiety and depression in the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV, APA 2000) as more descriptive than empirical. This study assesses various structural models in a clinical sample of youths with emotional disorders. Three a priori factor models were tested, and the model that provided the best fit to the data showed the dimensions of anxiety and mood disorders to be hierarchically organized within a single, higher-order factor. This supports the prevailing view that the co-occurrence of anxiety and mood disorders in children is in part due to a common vulnerability (e.g., negative affectivity). Depression and generalized anxiety loaded more highly onto the higher-order factor than the other disorders, a possible explanation for the particularly high rates of comorbidity between the two. Implications for the taxonomy and treatment of mood and anxiety disorders for children and adolescents are discussed.
Comparison of friction produced by two types of orthodontic bracket protectors
Mendonça, Steyner de Lima; Praxedes Neto, Otávio José; de Oliveira, Patricia Teixeira; dos Santos, Patricia Bittencourt Dutra; Pinheiro, Fábio Henrique de Sá Leitão
2014-01-01
Introduction Fixed orthodontic appliances have been regarded as a common causative factor of oral lesions. To manage soft tissue discomfort, most orthodontists recommend using a small amount of utility wax over the brackets in order to alleviate trauma. This in vitro study aimed at evaluating friction generated by two types of bracket protectors (customized acetate protector [CAP] and temporary resin protector [TRP]) during the initial stages of orthodontic treatment. Methods An experimental model (test unit) was used to assess friction. In order to measure the friction produced in each test, the model was attached to a mechanical testing machine which simulated maxillary canine alignment. Intergroup comparison was carried out by one-way ANOVA with the level of significance set at 5%. Results The friction presented by the TRP group was statistically higher than that of the control group at 6 mm. It was also higher than in the control and CAP groups in terms of maximum friction. Conclusion The customized acetate protector (CAP) was shown not to interfere with friction between the wire and the orthodontic bracket slot. PMID:24713564
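The intergroup comparison above used one-way ANOVA. As a sketch of that test, the following computes the F statistic by hand for three invented friction samples standing in for the control, CAP, and TRP groups (not the paper's data).

```python
def one_way_anova(*groups):
    """F statistic for a one-way ANOVA across the given groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # Between-group and within-group sums of squares.
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical friction readings (N): CAP tracks the control, TRP is higher.
control = [1.1, 1.3, 1.2, 1.0, 1.2]
cap     = [1.2, 1.1, 1.3, 1.2, 1.1]
trp     = [2.0, 2.2, 1.9, 2.1, 2.3]
F = one_way_anova(control, cap, trp)
```

A large F here would be referred to the F(k−1, n−k) distribution for a P value; in practice `scipy.stats.f_oneway` does both steps at once.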
Scheimpflug imaged corneal changes on anterior and posterior surfaces after collagen cross-linking
Hassan, Ziad; Modis, Laszlo; Szalai, Eszter; Berta, Andras; Nemeth, Gabor
2014-01-01
AIM To compare the anterior and posterior corneal parameters before and after collagen cross-linking therapy for keratoconus. METHODS Collagen cross-linking was performed in 31 eyes of 31 keratoconus patients (mean age 30.6±8.9 years). Prior to treatment and an average of 7 months after therapy, Scheimpflug analysis was performed using the Pentacam HR. In addition to corneal thickness assessments, corneal radius, elevation, and aberrometric measurements were performed on both the anterior and posterior corneal surfaces. Data obtained before and after surgery were statistically analyzed. RESULTS In terms of horizontal and vertical corneal radius and central corneal thickness, no deviations were observed an average of 7 months after the operation. Corneal higher-order aberration showed no difference on either the anterior or posterior corneal surface. During the follow-up period, no significant deviation was detected in elevation values measured in mm units between the 3.0-8.0 mm zones. CONCLUSION Corneal stabilization could be observed in terms of anterior and posterior corneal surfaces, elevation, and higher-order aberration values 7 months after collagen cross-linking therapy for keratoconus. PMID:24790876
Assessing Complex Learning Objectives through Analytics
NASA Astrophysics Data System (ADS)
Horodyskyj, L.; Mead, C.; Buxner, S.; Semken, S. C.; Anbar, A. D.
2016-12-01
A significant obstacle to improving the quality of education is the lack of easy-to-use assessments of higher-order thinking. Most existing assessments focus on recall and understanding questions, which demonstrate lower-order thinking. Traditionally, higher-order thinking is assessed with practical tests and written responses, which are time-consuming to analyze and are not easily scalable. Computer-based learning environments offer the possibility of assessing such learning outcomes based on analysis of students' actions within an adaptive learning environment. Our fully online introductory science course, Habitable Worlds, uses an intelligent tutoring system that collects and responds to a range of behavioral data, including actions within the keystone project. This central project is a summative, game-like experience in which students synthesize and apply what they have learned throughout the course to identify and characterize a habitable planet from among hundreds of stars. Student performance is graded based on completion and accuracy, but two additional properties can be utilized to gauge higher-order thinking: (1) how efficient a student is with the virtual currency within the project and (2) how many of the optional milestones a student reached. In the project, students can use the currency to check their work and "unlock" convenience features. High-achieving students spend close to the minimum amount required to reach these goals, indicating a high level of concept mastery and efficient methodology. Average students spend more, indicating effort, but lower mastery. Low-achieving students were more likely to spend very little, which indicates low effort. Differences on these metrics were statistically significant between all three of these populations. We interpret this as evidence that high-achieving students develop and apply efficient problem-solving skills, as compared to lower-achieving students who use more brute-force approaches.
Hospital nurses' individual priorities, internal psychological states and work motivation.
Toode, K; Routasalo, P; Helminen, M; Suominen, T
2014-09-01
This study looks to describe the relationships between hospital nurses' individual priorities, internal psychological states and their work motivation. Connections between hospital nurses' work-related needs, values and work motivation are essential for providing safe and high-quality health care. However, there is insufficient empirical knowledge concerning these connections for practice development. A cross-sectional empirical research study was undertaken. A total of 201 registered nurses from all types of Estonian hospitals filled out an electronic self-reported questionnaire. Descriptive statistics, Mann-Whitney, Kruskal-Wallis and Spearman's correlation were used for data analysis. In individual priorities, higher-order need strength was negatively correlated with age and duration of service. Regarding nurses' internal psychological states, central hospital nurses had less sense of meaningfulness of work. Nurses' individual priorities (i.e., their higher-order need strength and values shared with the organization) correlated with their work motivation. Their internal psychological states (i.e., their experienced meaningfulness of work, experienced responsibility for work outcomes and their knowledge of results) correlated with intrinsic work motivation. Nurses who prioritize their higher-order needs are more motivated to work. The more their own values are compatible with those of the organization, the more intrinsically motivated they are likely to be. Nurses' individual achievements, autonomy and training are key factors which influence their motivation to work. The small sample size and low response rate of the study limit the direct transferability of the findings to the wider nurse population, so further research is needed. This study highlights the need and importance of supporting nurses' professional development and self-determination, in order to develop and retain motivated nurses.
It also indicates a need to value both nurses and nursing in healthcare policy and management. © 2014 International Council of Nurses.
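The correlation analyses above relied on Spearman's rank correlation. A minimal tie-free sketch, applied to invented age versus higher-order need strength values chosen to show the negative trend reported.

```python
import math

def ranks(xs):
    """1-based ranks; ties are not handled, which is adequate for this illustration."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank + 1.0
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation computed on the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = math.sqrt(sum((a - mx) ** 2 for a in rx)
                    * sum((b - my) ** 2 for b in ry))
    return num / den

# Invented nurse data: age vs. higher-order need strength (negative trend).
age = [25, 30, 35, 40, 45, 50, 55]
need_strength = [4.8, 4.5, 4.6, 4.0, 3.9, 3.6, 3.4]
rho = spearman(age, need_strength)
```

For survey data with ties, a tie-aware implementation such as `scipy.stats.spearmanr` would be the practical choice.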
Linear and non-linear bias: predictions versus measurements
NASA Astrophysics Data System (ADS)
Hoffmann, K.; Bel, J.; Gaztañaga, E.
2017-02-01
We study the linear and non-linear bias parameters which determine the mapping between the distributions of galaxies and the full matter density fields, comparing different measurements and predictions. Associating galaxies with dark matter haloes in the Marenostrum Institut de Ciències de l'Espai (MICE) Grand Challenge N-body simulation, we directly measure the bias parameters by comparing the smoothed density fluctuations of haloes and matter in the same region at different positions as a function of smoothing scale. Alternatively, we measure the bias parameters by matching the probability distributions of halo and matter density fluctuations, which can be applied to observations. These direct bias measurements are compared to corresponding measurements from two-point and different third-order correlations, as well as predictions from the peak-background model, which we presented in previous papers using the same data. We find an overall variation of the linear bias measurements and predictions of ∼5 per cent with respect to results from two-point correlations for different halo samples with masses between ∼10¹² and 10¹⁵ h⁻¹ M⊙ at the redshifts z = 0.0 and 0.5. Variations between the second- and third-order bias parameters from the different methods show larger variations, but with consistent trends in mass and redshift. The various bias measurements reveal a tight relation between the linear and the quadratic bias parameters, which is consistent with results from the literature based on simulations with different cosmologies. Such a universal relation might improve constraints on cosmological models, derived from second-order clustering statistics at small scales or higher-order clustering statistics.
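The direct bias measurement described above, comparing smoothed halo and matter density fluctuations, can be sketched as a least-squares fit of the quadratic bias expansion δ_h = b₁δ + (b₂/2)(δ² − ⟨δ²⟩). The mock Gaussian fields and true bias values below are invented for illustration.

```python
import random

rng = random.Random(0)
b1_true, b2_true = 1.5, 0.4                 # invented "true" bias values
delta_m = [rng.gauss(0.0, 0.3) for _ in range(5000)]
var_m = sum(d * d for d in delta_m) / len(delta_m)
# Mock halo fluctuations from the quadratic bias expansion plus small noise.
delta_h = [b1_true * d + 0.5 * b2_true * (d * d - var_m) + rng.gauss(0.0, 0.01)
           for d in delta_m]

# Two-parameter least squares via the normal equations, with basis
# functions f1 = delta_m and f2 = (delta_m^2 - <delta_m^2>) / 2.
f2 = [0.5 * (d * d - var_m) for d in delta_m]
a11 = sum(x * x for x in delta_m)
a12 = sum(x * y for x, y in zip(delta_m, f2))
a22 = sum(y * y for y in f2)
c1 = sum(x * h for x, h in zip(delta_m, delta_h))
c2 = sum(y * h for y, h in zip(f2, delta_h))
det = a11 * a22 - a12 * a12
b1_hat = (c1 * a22 - c2 * a12) / det
b2_hat = (c2 * a11 - c1 * a12) / det
```

Repeating such a fit at several smoothing scales is what produces the scale dependence of b₁ and b₂ discussed in the abstract.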
NASA Technical Reports Server (NTRS)
Zimmerman, G. A.; Olsen, E. T.
1992-01-01
Noise power estimation in the High-Resolution Microwave Survey (HRMS) sky survey element is considered as an example of a constant false alarm rate (CFAR) signal detection problem. Order-statistic-based noise power estimators for CFAR detection are considered in terms of required estimator accuracy and estimator dynamic range. By limiting the dynamic range of the value to be estimated, the performance of an order-statistic estimator can be achieved by simpler techniques requiring only a single pass of the data. Simple threshold-and-count techniques are examined, and it is shown how several parallel threshold-and-count estimation devices can be used to expand the dynamic range to meet HRMS system requirements with minimal hardware complexity. An input/output (I/O) efficient limited-precision order-statistic estimator with wide but limited dynamic range is also examined.
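The threshold-and-count idea can be sketched for exponentially distributed noise power samples: the fraction exceeding a fixed threshold T is exp(−T/P), so an exceedance count gives a single-pass estimate of P, and several thresholds in parallel extend the dynamic range. The threshold values, the nearest-1/e selection rule, and the sample sizes here are illustrative assumptions, not the HRMS design.

```python
import math
import random

def threshold_and_count(samples, thresholds):
    """Estimate the noise power P from exceedance counts at each threshold,
    keeping the threshold whose exceedance fraction is best conditioned
    (closest to 1/e, i.e., threshold near the true power)."""
    best = None
    for t in thresholds:
        frac = sum(s > t for s in samples) / len(samples)
        if 0.0 < frac < 1.0:
            est = t / -math.log(frac)        # invert frac = exp(-t/P)
            score = abs(frac - 1 / math.e)
            if best is None or score < best[0]:
                best = (score, est)
    return best[1]

rng = random.Random(42)
true_power = 3.0
samples = [rng.expovariate(1 / true_power) for _ in range(50000)]
p_hat = threshold_and_count(samples, thresholds=[0.5, 1, 2, 4, 8, 16])
```

Each counter needs only a comparator and an accumulator, which is the hardware simplicity argument made in the abstract; the geometric spacing of the thresholds is what buys the dynamic range.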
Azim, M Ekram; Kumarappah, Ananthavalli; Bhavsar, Satyendra P; Backus, Sean M; Arhonditsis, George
2011-03-15
The temporal trends of total mercury (THg) in four fish species in Lake Erie were evaluated based on 35 years of fish contaminant data. Our Bayesian statistical approach consists of three steps aiming to address different questions. First, we used the exponential and mixed-order decay models to assess the declining rates in four intensively sampled fish species, i.e., walleye (Stizostedion vitreum), yellow perch (Perca flavescens), smallmouth bass (Micropterus dolomieui), and white bass (Morone chrysops). Because the two models postulate monotonic decrease of the THg levels, we included first- and second-order random walk terms in our statistical formulations to accommodate nonmonotonic patterns in the data time series. Our analysis identified a recent increase in the THg concentrations, particularly after the mid-1990s. In the second step, we used double exponential models to quantify the relative magnitude of the THg trends depending on the type of data used (skinless-boneless fillet versus whole fish data) and the fish species examined. The observed THg concentrations were significantly higher in skinless boneless fillet than in whole fish portions, while the whole fish portions of walleye exhibited faster decline rates and slower rates of increase relative to the skinless boneless fillet data. Our analysis also shows lower decline rates and higher rates of increase in walleye relative to the other three fish species examined. The food web structural shifts induced by the invasive species (dreissenid mussels and round goby) may be associated with the recent THg trends in Lake Erie fish.
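The first-order (exponential) decay model used as one of the trend models above can be fitted by simple log-linear least squares, shown here on synthetic THg-like data; this is a stand-in for the paper's Bayesian formulation, and all values are invented.

```python
import math
import random

rng = random.Random(7)
years = list(range(35))                      # 35 years of annual data
c0_true, k_true = 1.2, 0.05                  # invented ppm and 1/yr values
# Synthetic concentrations with multiplicative (lognormal) noise.
conc = [c0_true * math.exp(-k_true * t) * math.exp(rng.gauss(0.0, 0.05))
        for t in years]

# log C = log C0 - k*t, so ordinary least squares on (t, log C) recovers both.
n = len(years)
logc = [math.log(c) for c in conc]
tbar = sum(years) / n
ybar = sum(logc) / n
slope = sum((t - tbar) * (y - ybar) for t, y in zip(years, logc)) \
        / sum((t - tbar) ** 2 for t in years)
k_hat = -slope
c0_hat = math.exp(ybar - slope * tbar)
```

The abstract's point about nonmonotonic behavior is exactly where such a fit fails: a single decay constant cannot represent the post-1990s uptick, which is why the authors add random-walk terms.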
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rozo, Eduardo; /U. Chicago /Chicago U., KICP; Wu, Hao-Yi
2011-11-04
When extracting the weak lensing shear signal, one may employ either locally normalized or globally normalized shear estimators. The former is the standard approach when estimating cluster masses, while the latter is the more common method among peak finding efforts. While both approaches have identical signal-to-noise in the weak lensing limit, it is possible that higher order corrections or systematic considerations make one estimator preferable over the other. In this paper, we consider the efficacy of both estimators within the context of stacked weak lensing mass estimation in the Dark Energy Survey (DES). We find that the two estimators have nearly identical statistical precision, even after including higher order corrections, but that these corrections must be incorporated into the analysis to avoid observationally relevant biases in the recovered masses. We also demonstrate that finite bin-width effects may be significant if not properly accounted for, and that the two estimators exhibit different systematics, particularly with respect to contamination of the source catalog by foreground galaxies. Thus, the two estimators may be employed as a systematic cross-check of each other. Stacked weak lensing in the DES should allow for the mean mass of galaxy clusters to be calibrated to ~2% precision (statistical only), which can improve the figure of merit of the DES cluster abundance experiment by a factor of ~3 relative to the self-calibration expectation. A companion paper investigates how the two types of estimators considered here impact weak lensing peak finding efforts.
Paton, Susan; Thompson, Katy-Anne; Parks, Simon R; Bennett, Allan M
2015-08-01
The aim of this study was to quantify reaerosolization of microorganisms caused by walking on contaminated flooring to assess the risk to individuals accessing areas contaminated with pathogenic organisms, for example, spores of Bacillus anthracis. Industrial carpet and polyvinyl chloride (PVC) floor coverings were contaminated with aerosolized spores of Bacillus atrophaeus by using an artist airbrush to produce deposition of ∼10³ to 10⁴ CFU · cm⁻². Microbiological air samplers were used to quantify the particle size distribution of the aerosol generated when a person walked over the floorings in an environmental chamber. Results were expressed as reaerosolization factors (percent per square centimeter per liter), to represent the ratio of air concentration to surface concentration generated. Walking on carpet generated a statistically significantly higher reaerosolization factor value than did walking on PVC (t = 20.42; P < 0.001). Heavier walking produced a statistically significantly higher reaerosolization factor value than did lighter walking (t = 12.421; P < 0.001). Height also had a statistically significant effect on the reaerosolization factor, with higher rates of recovery of B. atrophaeus at lower levels, demonstrating a height-dependent gradient of particle reaerosolization. Particles in the respirable size range were recovered in all sampling scenarios (mass mean diameters ranged from 2.6 to 4.1 μm). The results of this study can be used to produce a risk assessment of the potential aerosol exposure of a person accessing areas with contaminated flooring in order to inform the choice of appropriate respiratory protective equipment and may aid in the selection of the most suitable flooring types for use in health care environments, to reduce aerosol transmission in the event of contamination. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
NASA Astrophysics Data System (ADS)
Birsan, Marius-Victor; Dumitrescu, Alexandru; Cǎrbunaru, Felicia
2016-04-01
The role of statistical downscaling is to model the relationship between large-scale atmospheric circulation and climatic variables at regional and sub-regional scales, making use of the future circulation predicted by General Circulation Models (GCMs) in order to capture the effects of climate change on smaller areas. The study presents a statistical downscaling model based on a neural network approach, by means of multi-layer perceptron networks. Sub-daily temperature data series from 81 meteorological stations over Romania with full data records are used as predictands. As the large-scale predictor, NCEP/NCAR air temperature data at 850 hPa over the domain 20-30E / 40-50N were used, at a spatial resolution of 2.5×2.5 degrees. The period 1961-1990 was used for calibration, while the validation was carried out over the 1991-2010 interval. Further, in order to estimate future changes in air temperature for 2021-2050 and 2071-2100, air temperature data at 850 hPa corresponding to the IPCC A1B scenario were extracted from the CNCM33 model (Meteo-France) and used as predictor. This work has been realized within the research project "Changes in climate extremes and associated impact in hydrological events in Romania" (CLIMHYDEX), code PN II-ID-2011-2-0073, financed by the Romanian Executive Agency for Higher Education Research, Development and Innovation Funding (UEFISCDI).
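A one-hidden-layer perceptron of the kind used for downscaling above can be sketched in pure Python with plain stochastic gradient descent. The predictor/predictand pairs, network size, and learning rate are toy assumptions, not the CLIMHYDEX configuration.

```python
import math
import random

rng = random.Random(3)
# Toy predictor/predictand pairs standing in for (normalized) large-scale
# 850 hPa temperature vs. station temperature; relation and noise invented.
X = [rng.uniform(-1.0, 1.0) for _ in range(400)]
Y = [2.0 * x + 0.5 + rng.gauss(0.0, 0.05) for x in X]

H = 4                                   # hidden units
w1 = [rng.gauss(0.0, 0.5) for _ in range(H)]
b1 = [0.0] * H
w2 = [rng.gauss(0.0, 0.5) for _ in range(H)]
b2 = 0.0
lr = 0.01                               # SGD learning rate

def forward(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return h, b2 + sum(w2[j] * h[j] for j in range(H))

for _ in range(300):                    # epochs of plain per-sample SGD
    for x, y in zip(X, Y):
        h, yhat = forward(x)
        err = yhat - y                  # d(loss)/d(yhat) for 0.5*err^2 loss
        for j in range(H):
            grad_h = err * w2[j] * (1.0 - h[j] ** 2)   # backprop through tanh
            w2[j] -= lr * err * h[j]
            w1[j] -= lr * grad_h * x
            b1[j] -= lr * grad_h
        b2 -= lr * err

rmse = math.sqrt(sum((forward(x)[1] - y) ** 2 for x, y in zip(X, Y)) / len(X))
```

In a real downscaling setting the input would be a vector of gridded predictors per station, and a library implementation (e.g. scikit-learn's MLPRegressor) would replace this hand-rolled training loop.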
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kooperman, Gabriel J.; Pritchard, Michael S.; Burt, Melissa A.
Changes in the character of rainfall are assessed using a holistic set of statistics based on rainfall frequency and amount distributions in climate change experiments with three conventional and superparameterized versions of the Community Atmosphere Model (CAM and SPCAM). Previous work has shown that high-order statistics of present-day rainfall intensity are significantly improved with superparameterization, especially in regions of tropical convection. Globally, the two modeling approaches project a similar future increase in mean rainfall, especially across the Inter-Tropical Convergence Zone (ITCZ) and at high latitudes, but over land, SPCAM predicts a smaller mean change than CAM. Changes in high-order statistics are similar at high latitudes in the two models but diverge at lower latitudes. In the tropics, SPCAM projects a large intensification of moderate and extreme rain rates in regions of organized convection associated with the Madden-Julian Oscillation, ITCZ, monsoons, and tropical waves. In contrast, this signal is missing in all versions of CAM, which are found to be prone to predicting increases in the amount but not intensity of moderate rates. Predictions from SPCAM exhibit a scale-insensitive behavior with little dependence on horizontal resolution for extreme rates, while lower resolution (~2°) versions of CAM are not able to capture the response simulated with higher resolution (~1°). Furthermore, moderate rain rates analyzed by the “amount mode” and “amount median” are found to be especially telling as a diagnostic for evaluating climate model performance and tracing future changes in rainfall statistics to tropical wave modes in SPCAM.
The Short-Term Effects of Lying, Sitting and Standing on Energy Expenditure in Women
POPP, COLLIN J.; BRIDGES, WILLIAM C.; JESCH, ELLIOT D.
2018-01-01
The deleterious health effects of too much sitting have been associated with an increased risk for overweight and obesity. Replacing sitting with standing is the proposed intervention to increase daily energy expenditure (EE). The purpose of this study was to determine the short-term effects of lying, sitting, and standing postures on EE, and to determine the magnitude of the effect each posture has on EE using indirect calorimetry (IC). Twenty-eight healthy females performed three separate positions (lying, sitting, standing) in random order. Inspired and expired gases were collected for 45 minutes (15 minutes for each position) using breath-by-breath indirect calorimetry. Oxygen consumption (VO2) and carbon dioxide production (VCO2) were measured to estimate EE. Statistical analyses used repeated-measures ANOVA for all variables, followed by post hoc t-tests. Based on the ANOVA, the individual, time period, and order terms were not statistically significant. Lying EE and sitting EE were not different from each other (P = 0.56). However, standing EE (kcal/min) was 9.0% greater than lying EE (kcal/min) (P = 0.003), and 7.1% greater than sitting EE (kcal/min) (P = 0.02). The energetic cost of standing was higher compared with lying and sitting. While this is statistically significant, the magnitude of the effect of standing compared with sitting was small (Cohen's d = 0.31). Short-term standing does not offer an energetic advantage when compared with sitting.
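The effect size quoted above (Cohen's d = 0.31) is conventionally computed with the pooled standard deviation of the two conditions. A minimal sketch of that calculation; the numbers in the assertions are illustrative, not the study's data:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    # Cohen's d for two groups, using the pooled standard deviation
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd
```

Values around 0.2 are conventionally read as "small" effects, which matches the authors' interpretation of standing versus sitting.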
Serum albumin and haloperidol pharmacokinetics. A study using a computational model
NASA Astrophysics Data System (ADS)
de Morais e Coura, Carla Patrícia; Paulino, Erica Tex; Cortez, Celia Martins; da Silva Fragoso, Viviane Muniz
2016-12-01
Here we study the binding of haloperidol to human and bovine serum albumins, applying a computational model that combines spectrofluorimetric, statistical, and mathematical methods. Haloperidol is one of the oldest antipsychotic drugs in use for the therapy of patients with acute and chronic schizophrenia. It was found that the fluorescence of HSA was quenched by 18.0 (± 0.2)% and that of BSA by 24.0 (± 0.9)%, for a haloperidol/albumin ratio of 1/1000. Results suggested that the primary binding site is located in subdomain IB. Quenching constants of albumin fluorescence by haloperidol were on the order of 10⁷, approximately 100-fold higher than that found for risperidone, and about 1000-fold higher than those estimated for chlorpromazine and sulpiride.
Aydin, Y; Atis, A; Tutuman, T; Goker, N
2010-01-01
We aimed to determine the prevalence of human papilloma virus (HPV), covering 100 genotypes and a subset of 14 oncogenic genotypes, in pregnant Turkish women, and to compare it with that in non-pregnant women. Cervical thin-prep specimens were obtained from 164 women in the first trimester of pregnancy and 153 non-pregnant women. Among pregnant Turkish women, 29.2% had at least one of the 100 HPV types, versus 19.6% of non-pregnant women, a statistically significant difference. The rate of infection with the 14 high-risk HPV genotypes was significantly higher in pregnant (14.6%) than in non-pregnant Turkish women (9.6%). Pregnant Turkish women are at higher risk for all HPV infections, including high-risk cervical cancer genotypes.
Wu, Johnny C; Gardner, David P; Ozer, Stuart; Gutell, Robin R; Ren, Pengyu
2009-08-28
The accurate prediction of the secondary and tertiary structure of an RNA with different folding algorithms is dependent on several factors, including the energy functions. However, an RNA higher-order structure cannot be predicted accurately from its sequence based on a limited set of energy parameters. The inter- and intramolecular forces between this RNA and other small molecules and macromolecules, in addition to other factors in the cell such as pH, ionic strength, and temperature, influence the complex dynamics associated with the transition of a single-stranded RNA to its secondary and tertiary structure. Since all of the factors that affect the formation of an RNA's 3D structure cannot be determined experimentally, statistically derived potential energies have been used, as in the prediction of protein structure. In the current work, we evaluate the statistical free energy of various secondary structure motifs, including base-pair stacks, hairpin loops, and internal loops, using their statistical frequency obtained from the comparative analysis of more than 50,000 RNA sequences stored in the RNA Comparative Analysis Database (rCAD) at the Comparative RNA Web (CRW) Site. Statistical energy was computed from the structural statistics for several datasets. While the statistical energy for a base-pair stack correlates with experimentally derived free energy values, suggesting a Boltzmann-like distribution, variation is observed between different molecules and their location on the phylogenetic tree of life. Our statistical energy values calculated for several structural elements were utilized in the Mfold RNA-folding algorithm. The combined statistical energy values for base-pair stacks, hairpins, and internal loop flanks result in a significant improvement in the accuracy of secondary structure prediction; the hairpin flanks contribute the most.
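The "Boltzmann-like" relationship between motif frequency and statistical energy mentioned above can be sketched with an inverse-Boltzmann pseudo-energy. The gas constant, the temperature, and the idea of a single reference frequency are illustrative assumptions here, not values or choices taken from the paper:

```python
import math

R_KCAL = 1.987e-3  # gas constant in kcal/(mol*K)

def statistical_energy(observed_freq, reference_freq, temperature_k=310.0):
    # Inverse-Boltzmann pseudo-energy: motifs observed more often than a
    # reference state are assigned a lower (more favorable) energy.
    return -R_KCAL * temperature_k * math.log(observed_freq / reference_freq)
```

A base-pair stack seen twice as often as the reference thus gets a negative (stabilizing) energy, and a rare one a positive (destabilizing) energy.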
Romanyk, Dan L; George, Andrew; Li, Yin; Heo, Giseon; Carey, Jason P; Major, Paul W
2016-05-01
To investigate the influence of a rotational second-order bracket-archwire misalignment on the loads generated during third-order torque procedures. Specifically, torque in the second- and third-order directions was considered. An orthodontic torque simulator (OTS) was used to simulate the third-order torque between Damon Q brackets and 0.019 × 0.025-inch stainless steel archwires. Second-order misalignments were introduced in 0.5° increments from a neutral position, 0.0°, up to 3.0° of misalignment. A sample size of 30 brackets was used for each misalignment. The archwire was then rotated in the OTS from its neutral position up to 30° in 3° increments and then unloaded in the same increments. At each position, all forces and torques were recorded. Repeated-measures analysis of variance was used to determine if the second-order misalignments significantly affected torque values in the second- and third-order directions. From statistical analysis of the experimental data, it was found that the only statistically significant differences in third-order torque between a misaligned state and the neutral position occurred for 2.5° and 3.0° of misalignment, with mean differences of 2.54 Nmm and 2.33 Nmm, respectively. In addition, in pairwise comparisons of second-order torque for each misalignment increment, statistical differences were observed in all comparisons except for 0.0° vs 0.5° and 1.5° vs 2.0°. The introduction of a second-order misalignment during third-order torque simulation resulted in statistically significant differences in both second- and third-order torque response; however, the former is arguably clinically insignificant.
Sapsis, Themistoklis P; Majda, Andrew J
2013-08-20
A framework for low-order predictive statistical modeling and uncertainty quantification in turbulent dynamical systems is developed here. These reduced-order, modified quasilinear Gaussian (ROMQG) algorithms apply to turbulent dynamical systems in which there is significant linear instability or linear nonnormal dynamics in the unperturbed system and energy-conserving nonlinear interactions that transfer energy from the unstable modes to the stable modes where dissipation occurs, resulting in a statistical steady state; such turbulent dynamical systems are ubiquitous in geophysical and engineering turbulence. The ROMQG method involves constructing a low-order, nonlinear, dynamical system for the mean and covariance statistics in the reduced subspace that has the unperturbed statistics as a stable fixed point and optimally incorporates the indirect effect of non-Gaussian third-order statistics for the unperturbed system in a systematic calibration stage. This calibration procedure is achieved through information involving only the mean and covariance statistics for the unperturbed equilibrium. The performance of the ROMQG algorithm is assessed on two stringent test cases: the 40-mode Lorenz 96 model mimicking midlatitude atmospheric turbulence and two-layer baroclinic models for high-latitude ocean turbulence with over 125,000 degrees of freedom. In the Lorenz 96 model, the ROMQG algorithm with just a single mode captures the transient response to random or deterministic forcing. For the baroclinic ocean turbulence models, the inexpensive ROMQG algorithm with 252 modes, less than 0.2% of the total, captures the nonlinear response of the energy, the heat flux, and even the one-dimensional energy and heat flux spectra.
Ka-Band Atmospheric Phase Stability Measurements in Goldstone, CA; White Sands, NM; and Guam
NASA Technical Reports Server (NTRS)
Zemba, Michael J.; Morse, Jacquelynne Rose; Nessel, James A.
2014-01-01
As spacecraft communication links are driven to higher frequencies (e.g. Ka-band) both by spectrum congestion and the appeal of higher data rates, the propagation phenomena at these frequencies must be well characterized for effective system design. In particular, the phase stability of a site at a given frequency will govern whether or not the site is a practical location for an antenna array, particularly if uplink capabilities are desired. Propagation studies to characterize such phenomena must be done on a site-by-site basis due to the wide variety of climates and weather conditions at each ground terminal. Accordingly, in order to statistically characterize the atmospheric effects on Ka-Band links, site test interferometers (STIs) have been deployed at three of NASA's operational sites to directly measure each site's tropospheric phase stability. Using three years of results from these experiments, this paper will statistically characterize the simultaneous atmospheric phase noise measurements recorded by the STIs deployed at the following ground station sites: the Goldstone Deep Space Communications Complex near Barstow, CA; the White Sands Ground Terminal near Las Cruces, NM; and the Guam Remote Ground Terminal on the island of Guam.
The Soil Moisture Dependence of TRMM Microwave Imager Rainfall Estimates
NASA Astrophysics Data System (ADS)
Seyyedi, H.; Anagnostou, E. N.
2011-12-01
This study presents an in-depth analysis of the dependence of overland rainfall estimates from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) on the soil moisture conditions at the land surface. TMI retrievals are verified against rainfall fields derived from a high-resolution rain-gauge network (MESONET) covering Oklahoma. Soil moisture (SOM) patterns are extracted based on recorded data from 2000-2007 with 30-minute temporal resolution. The area is divided into wet and dry regions based on normalized SOM (Nsom) values. Statistical comparison between the two groups is conducted based on recorded ground station measurements and the corresponding passive microwave retrievals from TMI overpasses at the respective MESONET station location and time. The zero-order error statistics show that the Probability of Detection (POD) for the wet regions (higher Nsom values) is higher than for the dry regions. The False Alarm Ratio (FAR) and volumetric FAR are lower for the wet regions. The volumetric missed rain for the wet regions is lower than for the dry regions. Analysis of the MESONET-to-TMI ratio values shows that TMI tends to overestimate surface rainfall intensities of less than 12 mm/h; however, the magnitude of the overestimation over the wet regions is lower than over the dry regions.
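The categorical verification scores used here (POD, FAR) follow from a standard 2×2 contingency table of satellite detections versus gauge observations. A minimal sketch of the standard definitions, not tied to the study's data:

```python
def pod(hits, misses):
    # Probability of Detection: fraction of observed rain events that
    # the satellite retrieval also detected
    return hits / (hits + misses)

def far(hits, false_alarms):
    # False Alarm Ratio: fraction of satellite detections for which
    # no rain was observed at the gauge
    return false_alarms / (hits + false_alarms)
```

A wetter surface raising POD while lowering FAR, as reported above, means the retrieval both catches more events and issues fewer spurious detections there.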
CAVALCANTI, Andrea Nóbrega; MARCHI, Giselle Maria; AMBROSANO, Gláucia Maria Bovi
2010-01-01
Interpretation of statistical analyses is a critical aspect of scientific research. When more than one main variable is studied, the effect of the interaction between those variables is fundamental to the discussion of experiments. However, doubts can arise when the p-value of the interaction is greater than the significance level. Objective: To determine the most adequate interpretation for factorial experiments with p-values of the interaction nearly higher than the significance level. Materials and methods: The p-values of the interactions found in two restorative dentistry experiments (0.053 and 0.068) were interpreted in two distinct ways: considering the interaction as not significant and as significant. Results: Different findings were observed between the two analyses, and the studies' results became more coherent when the significant interaction was used. Conclusion: The p-value of the interaction between main variables must be analyzed with caution because it can change the outcomes of research studies. Researchers are strongly advised to interpret the results of their statistical analyses carefully in order to discuss the findings of their experiments properly. PMID:20857003
Statistical Equilibria of Turbulence on Surfaces of Different Symmetry
NASA Astrophysics Data System (ADS)
Qi, Wanming; Marston, Brad
2012-02-01
We test the validity of statistical descriptions of freely decaying 2D turbulence by performing direct numerical simulations (DNS) of the Euler equation with hyperviscosity on a square torus and on a sphere. DNS shows, at long times, a dipolar coherent structure in the vorticity field on the torus but a quadrupole on the sphere (J. Y-K. Cho and L. Polvani, Phys. Fluids 8, 1531 (1996)). A truncated Miller-Robert-Sommeria theory (A. J. Majda and X. Wang, Nonlinear Dynamics and Statistical Theories for Basic Geophysical Flows, Cambridge University Press, 2006) can explain the difference. The theory conserves up to the second-order Casimir, while also respecting conservation laws that reflect the symmetry of the domain. We further show that it is equivalent to the phenomenological minimum-enstrophy principle by generalizing the work of Naso et al. (A. Naso, P. H. Chavanis, and B. Dubrulle, Eur. Phys. J. B 77, 284 (2010)) to the sphere. To explain finer structures of the coherent states seen in DNS, especially the phenomenon of confinement, we investigate the perturbative inclusion of the higher Casimir constraints.
Nguyen, Thanh H; Cho, Hyun-Hee; Poster, Dianne L; Ball, William P
2007-02-15
Sorption isotherms for five aromatic hydrocarbons were obtained with a natural wood char (NC1) and its residue after solvent extraction (ENC1). Substantial isotherm nonlinearity was observed in all cases. ENC1 showed higher BET surface area, higher nitrogen-accessible micropore volume, and lower mass of extractable organic chemicals, including quantifiable polycyclic aromatic hydrocarbons (PAHs), while the two chars showed identical surface oxygen/carbon (O/C) ratio. For two chlorinated benzenes that normally condense as liquids at the temperatures used, sorption isotherms with NC1 and ENC1 were found to be statistically identical. For the solid-phase compounds (1,4-dichlorobenzene (1,4-DCB) and two PAHs), sorption was statistically higher with ENC1, thus demonstrating sorption effects due to both (1) authigenic organic content in the sorbent and (2) the sorbate's condensed state. Polanyi-based isotherm modeling, pore size measurements, and comparisons with activated carbon show the relative importance of adsorptive pore filling and help explain results. With both chars, maximum sorption increased in the order of decreasing molecular diameter: phenanthrene < naphthalene < 1,2-dichlorobenzene/1,2,4-trichlorobenzene < 1,4-DCB. Comparison of 1,4- and 1,2-DCB shows that the critical molecular diameter was apparently more important than the condensed state, suggesting that 1,4-DCB sorbed in the liquid state for ENC1.
van Schie, J T; Bakker, E M; van Weeren, P R
1999-01-01
The objective of the in vitro experiments described in this paper was to quantify the effects of some instrumental variables on the quantitative evaluation, by means of first-order gray-level statistics, of ultrasonographic images of equine tendons. The experiments were done on three isolated equine superficial digital flexor tendons that were mounted in a frame and submerged in a waterbath. Sections with either normal tendon tissue, an acute lesion, or a chronic scar, were selected. In these sections, the following experiments were done: 1) a gradual increase of total amplifier gain output subdivided in 12 equal steps; 2) a transducer tilt plus or minus 3 degrees from perpendicular, with steps of 1 degree; and 3) a transducer displacement along, and perpendicular to, the tendon long axis, with 16 steps of 0.25 mm each. Transverse ultrasonographic images were collected, and in the regions of interest (ROI) first-order gray-level statistics were calculated to quantify the effects of each experiment. Some important observations were: 1) the total amplifier gain output has a substantial influence on the ultrasonographic image; for example, in the case of an acute lesion, a low gain setting results in an almost completely black image; whereas, with higher gain settings, a marked "filling in" effect on the lesion can be observed; 2) the relative effects of the tilting of the transducer are substantial in normal tendon tissue (18%) and chronic scar (12%); whereas, in the event of an acute lesion, the effects on the mean gray level are dramatic (40%); and 3) the relative effects of displacement of the transducer are small in normal tendon tissue, but on the other hand, the mean gray-level changes 7% in chronic scar, and even 20% in an acute lesion. In general, slight variations in scanner settings and transducer handling can have considerable effects on the gray levels of the ultrasonographic image. 
Furthermore, there is a strong indication that this quantitative method, insofar as it is based exclusively on first-order gray-level statistics, may not be discriminative enough to accurately assess the integrity of the tendon. Therefore, the value of a quantitative evaluation of first-order gray-level statistics for assessing the integrity of the equine tendon is questionable.
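First-order gray-level statistics of the kind evaluated above are computed directly from the distribution of pixel intensities in a region of interest, ignoring spatial arrangement. A minimal sketch (mean, variance, and skewness from a flat list of gray levels; an illustration, not the authors' implementation):

```python
def first_order_stats(gray_levels):
    # Mean, variance, and skewness of the pixel gray-level
    # distribution within a region of interest (ROI)
    n = len(gray_levels)
    mean = sum(gray_levels) / n
    var = sum((g - mean) ** 2 for g in gray_levels) / n
    sd = var ** 0.5
    skew = (sum((g - mean) ** 3 for g in gray_levels) / n) / sd**3 if sd > 0 else 0.0
    return mean, var, skew
```

Because these statistics discard all spatial texture, two ROIs with very different structure can share identical first-order statistics, which is one reason the method may lack discriminative power.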
The discrimination of sea ice types using SAR backscatter statistics
NASA Technical Reports Server (NTRS)
Shuchman, Robert A.; Wackerman, Christopher C.; Maffett, Andrew L.; Onstott, Robert G.; Sutherland, Laura L.
1989-01-01
X-band (HH) synthetic aperture radar (SAR) data of sea ice collected during the Marginal Ice Zone Experiment in March and April of 1987 were statistically analyzed with respect to discriminating open water, first-year ice, multiyear ice, and Odden. Odden are large expanses of nilas ice that rapidly form in the Greenland Sea and transform into pancake ice. A first-order statistical analysis indicated that mean versus variance can segment out open water and first-year ice, and skewness versus modified skewness can segment the Odden and multiyear categories. In addition to first-order statistics, a model has been generated for the distribution function of the SAR ice data. Segmentation of ice types was also attempted using textural measurements; in this case, the gray-level co-occurrence matrix was evaluated. The textural method did not generate better results than the first-order statistical approach.
Effective potentials in nonlinear polycrystals and quadrature formulae
NASA Astrophysics Data System (ADS)
Michel, Jean-Claude; Suquet, Pierre
2017-08-01
This study presents a family of estimates for effective potentials in nonlinear polycrystals. Noting that these potentials are given as averages, several quadrature formulae are investigated to express these integrals of nonlinear functions of local fields in terms of the moments of these fields. Two of these quadrature formulae reduce to known schemes, including a recent proposition (Ponte Castañeda 2015 Proc. R. Soc. A 471, 20150665 (doi:10.1098/rspa.2015.0665)) obtained by completely different means. Other formulae are also reviewed that make use of statistical information on the fields beyond their first and second moments. These quadrature formulae are applied to the estimation of effective potentials in polycrystals governed by two potentials, by means of a reduced-order model proposed by the authors (non-uniform transformation field analysis). It is shown how the quadrature formulae improve on the tangent second-order approximation in porous crystals at high stress triaxiality. It is found that, in order to retrieve a satisfactory accuracy for highly nonlinear porous crystals under high stress triaxiality, a quadrature formula of higher order is required.
Generalized statistical convergence of order β for sequences of fuzzy numbers
NASA Astrophysics Data System (ADS)
Altınok, Hıfsı; Karakaş, Abdulkadir; Altın, Yavuz
2018-01-01
In the present paper, we introduce the concepts of Δm-statistical convergence of order β for sequences of fuzzy numbers and strongly Δm-summable of order β for sequences of fuzzy numbers by using a modulus function f and taking supremum on metric d for 0 < β ≤ 1 and give some inclusion relations between them.
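For background, the classical (scalar) notion of statistical convergence of order β, which the paper generalizes to sequences of fuzzy numbers via the difference operator Δm, the modulus function f, and the metric d, can be written as follows; this is the standard scalar definition, not the paper's fuzzy-number version:

```latex
% Statistical convergence of order beta (scalar case, 0 < beta <= 1):
% a sequence x = (x_k) is statistically convergent of order beta to L
% if, for every eps > 0, the density of "bad" indices vanishes:
\lim_{n \to \infty} \frac{1}{n^{\beta}}
  \left|\, \{ k \le n : |x_k - L| \ge \varepsilon \} \,\right| = 0 .
```

Taking β = 1 recovers ordinary statistical convergence; smaller β makes the condition stronger, since the count of exceptional indices is divided by a smaller power of n.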
Scheraga, H A; Paine, G H
1986-01-01
We are using a variety of theoretical and computational techniques to study protein structure, protein folding, and higher-order structures. Our earlier work involved treatments of liquid water and aqueous solutions of nonpolar and polar solutes, computations of the stabilities of the fundamental structures of proteins and their packing arrangements, conformations of small cyclic and open-chain peptides, structures of fibrous proteins (collagen), structures of homologous globular proteins, introduction of special procedures as constraints during energy minimization of globular proteins, and structures of enzyme-substrate complexes. Recently, we presented a new methodology for predicting polypeptide structure (described here); the method is based on the calculation of the probable and average conformation of a polypeptide chain by the application of equilibrium statistical mechanics in conjunction with an adaptive, importance sampling Monte Carlo algorithm. As a test, it was applied to Met-enkephalin.
Higher certainty of the laser-induced damage threshold test with a redistributing data treatment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jensen, Lars; Mrohs, Marius; Gyamfi, Mark
2015-10-15
As a consequence of its statistical nature, the measurement of the laser-induced damage threshold always carries a risk of over- or underestimating the real threshold value. As one of the established measurement procedures, the results of S-on-1 (and 1-on-1) tests outlined in the corresponding ISO standard 21254 depend on the amount of data points and their distribution over the fluence scale. With the limited space on a test sample as well as the requirements on test site separation and beam sizes, the amount of data from one test is restricted. This paper reports on a way to treat damage test data in order to reduce the statistical error and therefore the measurement uncertainty. Three simple assumptions allow for the assignment of one data point to multiple data bins and therefore virtually increase the available data base.
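The underlying S-on-1 evaluation bins test sites by fluence and estimates a damage probability per bin. A simplified sketch of the conventional single-bin assignment is given below; the paper's redistribution of one data point over multiple bins is not reproduced here, and the bin edges and shot data are illustrative:

```python
from bisect import bisect_right

def damage_probabilities(shots, bin_edges):
    # shots: list of (fluence, damaged) pairs from a damage test.
    # bin_edges: ascending fluence values delimiting the bins.
    # Returns the damage probability per bin (None where a bin has no data).
    n_bins = len(bin_edges) - 1
    damaged = [0] * n_bins
    total = [0] * n_bins
    for fluence, hit in shots:
        b = bisect_right(bin_edges, fluence) - 1  # locate the fluence bin
        if 0 <= b < n_bins:
            total[b] += 1
            damaged[b] += int(hit)
    return [d / t if t else None for d, t in zip(damaged, total)]
```

With few sites per bin, each probability rests on a handful of shots, which is exactly the statistical weakness the redistributing treatment aims to reduce.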
NASA Astrophysics Data System (ADS)
Matsubara, Takahiko
2003-02-01
We formulate a general method for perturbative evaluations of statistics of smoothed cosmic fields and provide useful formulae for application of the perturbation theory to various statistics. This formalism is an extensive generalization of the method used by Matsubara, who derived a weakly nonlinear formula of the genus statistic in a three-dimensional density field. After describing the general method, we apply the formalism to a series of statistics, including genus statistics, level-crossing statistics, Minkowski functionals, and a density extrema statistic, regardless of the dimensions in which each statistic is defined. The relation between the Minkowski functionals and other geometrical statistics is clarified. These statistics can be applied to several cosmic fields, including three-dimensional density field, three-dimensional velocity field, two-dimensional projected density field, and so forth. The results are detailed for second-order theory of the formalism. The effect of the bias is discussed. The statistics of smoothed cosmic fields as functions of rescaled threshold by volume fraction are discussed in the framework of second-order perturbation theory. In CDM-like models, their functional deviations from linear predictions plotted against the rescaled threshold are generally much smaller than that plotted against the direct threshold. There is still a slight meatball shift against rescaled threshold, which is characterized by asymmetry in depths of troughs in the genus curve. A theory-motivated asymmetry factor in the genus curve is proposed.
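For reference, the linear (Gaussian random field) prediction for the 3D genus curve against the threshold ν, from which weakly nonlinear deviations of the kind described above are measured, takes the standard form below; the normalization convention is one common choice, not necessarily the one used in the paper:

```latex
% Genus per unit volume of a Gaussian random field,
% threshold nu = delta / sigma:
g(\nu) = A \,(1 - \nu^2)\, e^{-\nu^2/2},
% with the amplitude A set by the second moment of the power spectrum:
A = \frac{1}{(2\pi)^2} \left( \frac{\langle k^2 \rangle}{3} \right)^{3/2}.
```

The "meatball shift" mentioned above appears as an asymmetry between the two troughs of this curve once nonlinear evolution or bias is included.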
A note on generalized Genome Scan Meta-Analysis statistics
Koziol, James A; Feng, Anne C
2005-01-01
Background: Wise et al. introduced a rank-based statistical technique for meta-analysis of genome scans, the Genome Scan Meta-Analysis (GSMA) method. Levinson et al. recently described two generalizations of the GSMA statistic: (i) a weighted version of the GSMA statistic, so that different studies could be ascribed different weights for analysis; and (ii) an order-statistic approach, reflecting the fact that a GSMA statistic can be computed for each chromosomal region or bin width across the various genome scan studies. Results: We provide an Edgeworth approximation to the null distribution of the weighted GSMA statistic, and we examine the limiting distribution of the GSMA statistics under the order-statistic formulation, quantifying the relevance of the pairwise correlations of the GSMA statistics across different bins to this limiting distribution. We also remark on aggregate criteria and multiple testing for determining significance of GSMA results. Conclusion: Theoretical considerations detailed herein can lead to clarification and simplification of testing criteria for generalizations of the GSMA statistic. PMID:15717930
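The rank-based GSMA statistic sums, for each chromosomal bin, the within-study ranks of a linkage score across studies; the weighted generalization scales each study's ranks by a weight. A minimal sketch of that summed-rank computation with illustrative inputs (ties and the null-distribution machinery of the paper are not handled here):

```python
def within_study_ranks(scores):
    # Rank bins within one study: 1 = weakest evidence ... k = strongest
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0] * len(scores)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def gsma(scores_by_study, weights=None):
    # Summed (optionally weighted) within-study ranks per bin across studies
    if weights is None:
        weights = [1.0] * len(scores_by_study)
    ranked = [within_study_ranks(s) for s in scores_by_study]
    n_bins = len(scores_by_study[0])
    return [sum(w * r[b] for w, r in zip(weights, ranked))
            for b in range(n_bins)]
```

Bins whose summed rank is extreme relative to the null distribution of such sums are the candidates for replicated linkage.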
Bumgarner, Johnathan R; McCray, John E
2007-06-01
During operation of an onsite wastewater treatment system, a low-permeability biozone develops at the infiltrative surface (IS) during application of wastewater to soil. Inverse numerical-model simulations were used to estimate the biozone saturated hydraulic conductivity (K(biozone)) under variably saturated conditions for 29 wastewater infiltration test cells installed in a sandy loam field soil. Test cells employed two loading rates (4 and 8 cm/day) and three IS designs: open chamber, gravel, and synthetic bundles. The ratio of K(biozone) to the saturated hydraulic conductivity of the natural soil (K(s)) was used to quantify the reductions in the IS hydraulic conductivity. A smaller value of K(biozone)/K(s) reflects a greater reduction in hydraulic conductivity. The IS hydraulic conductivity was reduced by 1-3 orders of magnitude. The reduction in IS hydraulic conductivity was primarily influenced by wastewater loading rate and IS type, and not by the K(s) of the native soil. The higher loading rate yielded greater reductions in IS hydraulic conductivity than the lower loading rate for bundle and gravel cells, but the difference was not statistically significant for chamber cells. Bundle and gravel cells exhibited a greater reduction in IS hydraulic conductivity than chamber cells at the higher loading rates, while the difference between gravel and bundle systems was not statistically significant. At the lower rate, bundle cells exhibited generally lower K(biozone)/K(s) values, but not at a statistically significant level, while gravel and chamber cells were statistically similar. Gravel cells exhibited the greatest variability in measured values, which may complicate design efforts based on K(biozone) evaluations for these systems. These results suggest that chamber systems may provide for a more robust design, particularly for high or variable wastewater infiltration rates.
Petersson, K M; Nichols, T E; Poline, J B; Holmes, A P
1999-01-01
Functional neuroimaging (FNI) provides experimental access to the intact living brain making it possible to study higher cognitive functions in humans. In this review and in a companion paper in this issue, we discuss some common methods used to analyse FNI data. The emphasis in both papers is on assumptions and limitations of the methods reviewed. There are several methods available to analyse FNI data indicating that none is optimal for all purposes. In order to make optimal use of the methods available it is important to know the limits of applicability. For the interpretation of FNI results it is also important to take into account the assumptions, approximations and inherent limitations of the methods used. This paper gives a brief overview over some non-inferential descriptive methods and common statistical models used in FNI. Issues relating to the complex problem of model selection are discussed. In general, proper model selection is a necessary prerequisite for the validity of the subsequent statistical inference. The non-inferential section describes methods that, combined with inspection of parameter estimates and other simple measures, can aid in the process of model selection and verification of assumptions. The section on statistical models covers approaches to global normalization and some aspects of univariate, multivariate, and Bayesian models. Finally, approaches to functional connectivity and effective connectivity are discussed. In the companion paper we review issues related to signal detection and statistical inference. PMID:10466149
Bettenbühl, Mario; Rusconi, Marco; Engbert, Ralf; Holschneider, Matthias
2012-01-01
Complex biological dynamics often generate sequences of discrete events which can be described as a Markov process. The order of the underlying Markovian stochastic process is fundamental for characterizing statistical dependencies within sequences. As an example for this class of biological systems, we investigate the Markov order of sequences of microsaccadic eye movements from human observers. We calculate the integrated likelihood of a given sequence for various orders of the Markov process and use this in a Bayesian framework for statistical inference on the Markov order. Our analysis shows that data from most participants are best explained by a first-order Markov process. This is compatible with recent findings of a statistical coupling of subsequent microsaccade orientations. Our method might prove to be useful for a broad class of biological systems.
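The integrated-likelihood comparison described above can be sketched as follows. This is a minimal illustration assuming a symmetric Dirichlet prior over each context's transition probabilities; the paper's exact prior and inference details may differ.

```python
from math import lgamma
from collections import defaultdict

def log_evidence(seq, order, alphabet, alpha=1.0):
    """Integrated (marginal) log-likelihood of a symbol sequence under a
    Markov model of the given order, with a symmetric Dirichlet(alpha)
    prior on each context's transition probabilities."""
    A = len(alphabet)
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(order, len(seq)):
        counts[tuple(seq[i - order:i])][seq[i]] += 1
    logp = 0.0
    for row in counts.values():
        n_c = sum(row.values())
        logp += lgamma(A * alpha) - lgamma(A * alpha + n_c)
        for s in alphabet:
            logp += lgamma(alpha + row.get(s, 0)) - lgamma(alpha)
    return logp

# A strictly alternating sequence has first-order structure (each symbol
# depends on its predecessor) that a memoryless order-0 model cannot capture.
seq = "LR" * 10
evidence = {k: log_evidence(seq, k, "LR") for k in (0, 1, 2)}
```

Comparing `evidence[k]` across orders (e.g., via Bayes factors) implements the model selection step; for this toy sequence the first-order evidence far exceeds the memoryless one, mirroring the paper's finding that microsaccade sequences are best explained by a first-order process.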
Rasch analysis of the hospital anxiety and depression scale among Chinese cataract patients.
Lin, Xianchai; Chen, Ziyan; Jin, Ling; Gao, Wuyou; Qu, Bo; Zuo, Yajing; Liu, Rongjiao; Yu, Minbin
2017-01-01
To analyze the validity of the Hospital Anxiety and Depression Scale (HADS) in the Chinese cataract population. A total of 275 participants with unilateral or bilateral cataract were recruited to complete the Chinese version of the HADS. The patients' demographic and ophthalmic characteristics were documented. Rasch analysis was conducted to examine the model fit statistics, threshold ordering of the polytomous items, targeting, person separation index and reliability, local dependency, unidimensionality, differential item functioning (DIF) and construct validity of the HADS individual and summary measures. Rasch analysis was performed on the anxiety and depression subscales as well as the HADS-Total score. The items of the original HADS-Anxiety, HADS-Depression and HADS-Total demonstrated evidence of misfit to the Rasch model. Removing item A7 from the anxiety subscale and rescoring item D14 on the depression subscale significantly improved Rasch model fit. A 12-item higher-order total scale with further removal of D12 was found to fit the Rasch model. The modified items had ordered response thresholds. No uniform DIF was detected, whereas notable non-uniform DIF in the high-ability group was found. Revised cut-off points are given for the modified anxiety and depression subscales. The modified version of the HADS, with HADS-A and HADS-D as subscales and HADS-T as a higher-order measure, is a reliable and valid instrument that may be useful for assessing anxiety and depression states in the Chinese cataract population.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eriksen, Janus J., E-mail: janusje@chem.au.dk; Jørgensen, Poul; Matthews, Devin A.
The accuracy at which total energies of open-shell atoms and organic radicals may be calculated is assessed for selected coupled cluster perturbative triples expansions, all of which augment the coupled cluster singles and doubles (CCSD) energy by a non-iterative correction for the effect of triple excitations. Namely, the second- through sixth-order models of the recently proposed CCSD(T–n) triples series [J. J. Eriksen et al., J. Chem. Phys. 140, 064108 (2014)] are compared to the acclaimed CCSD(T) model for both unrestricted and restricted open-shell Hartree-Fock (UHF/ROHF) reference determinants. By comparing UHF- and ROHF-based statistical results for a test set of 18 modest-sized open-shell species with comparable RHF-based results, no behavioral differences are observed for the higher-order models of the CCSD(T–n) series in their correlated descriptions of closed- and open-shell species. In particular, we find that the convergence rate throughout the series towards the coupled cluster singles, doubles, and triples (CCSDT) solution is identical for the two cases. For the CCSD(T) model, on the other hand, not only its numerical consistency, but also its established, yet fortuitous cancellation of errors breaks down in the transition from closed- to open-shell systems. The higher-order CCSD(T–n) models (orders n > 3) thus offer a consistent and significant improvement in accuracy relative to CCSDT over the CCSD(T) model, equally for RHF, UHF, and ROHF reference determinants, albeit at an increased computational cost.
The integrated bispectrum and beyond
DOE Office of Scientific and Technical Information (OSTI.GOV)
Munshi, Dipak; Coles, Peter, E-mail: D.Munshi@sussex.ac.uk, E-mail: P.Coles@sussex.ac.uk
2017-02-01
The position-dependent power spectrum has recently been proposed as a descriptor of gravitationally induced non-Gaussianity in galaxy clustering, as it is sensitive to the 'soft limit' of the bispectrum (i.e. when one of the wave numbers tends to zero). We generalise this concept to higher order and clarify its relationship to other known statistics such as the skew-spectrum, the kurt-spectra and their real-space counterparts, the cumulant correlators. Using the Hierarchical Ansatz (HA) as a toy model for the higher-order correlation hierarchy, we show how, in the soft limit, polyspectra at a given order can be identified with lower-order polyspectra with the same geometrical dependence but with renormalised amplitudes expressed in terms of the amplitudes of the original polyspectra. We extend the concept of the position-dependent bispectrum to the bispectrum of the divergence of the velocity field Θ and to mixed multispectra involving δ and Θ in the 3D perturbative regime. To quantify the effects of transients in numerical simulations, we also present results at lowest order in Lagrangian perturbation theory (LPT), the Zel'dovich approximation (ZA). We then discuss how to extend the position-dependent spectrum concept to encompass cross-spectra. Finally, we study the application of this concept in two dimensions (2D), for projected galaxy maps, convergence κ maps from weak-lensing surveys, and maps of CMB secondaries, e.g. the frequency-cleaned y-parameter maps of the thermal Sunyaev-Zel'dovich (tSZ) effect from CMB surveys.
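The position-dependent power spectrum idea can be sketched in one dimension: correlate each subvolume's mean overdensity with its local power spectrum. The toy field below, in which a single long mode modulates small-scale amplitudes, is an illustrative assumption (not the paper's construction) chosen because that modulation is exactly the squeezed-limit coupling the estimator detects.

```python
import numpy as np

rng = np.random.default_rng(4)

def integrated_bispectrum(field, n_sub):
    """Position-dependent power spectrum estimator: average, over subvolumes,
    of each subvolume's mean overdensity times its local power spectrum.
    By construction it is sensitive to the squeezed ('soft') limit of the
    bispectrum, where one wave number tends to zero."""
    segs = np.split(field - field.mean(), n_sub)
    ib = np.zeros(len(segs[0]) // 2 + 1)
    for seg in segs:
        dbar = seg.mean()
        local = seg - dbar
        ib += dbar * np.abs(np.fft.rfft(local)) ** 2 / len(local)
    return ib / n_sub

n, n_sub = 1 << 16, 64
x = np.arange(n)
# Short-wavelength Gaussian fluctuations; in the coupled field a single long
# mode modulates their amplitude, tying local power to the local mean.
s = np.convolve(rng.standard_normal(n), np.ones(16) / 16, mode="same")
ell = 0.2 * np.cos(2 * np.pi * x / n)
gaussian_field = s                    # no coupling: estimator consistent with zero
coupled_field = ell + (1 + ell) * s   # local power tracks the long mode

ib_g = integrated_bispectrum(gaussian_field, n_sub)
ib_c = integrated_bispectrum(coupled_field, n_sub)
```

For the Gaussian control the estimator fluctuates around zero, while the coupled field gives a clearly positive signal at low wave numbers, the soft-limit signature discussed in the abstract.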
On the application of Rice's exceedance statistics to atmospheric turbulence.
NASA Technical Reports Server (NTRS)
Chen, W. Y.
1972-01-01
Discrepancies produced by the application of Rice's exceedance statistics to atmospheric turbulence are examined. First- and second-order densities from several data sources have been measured for this purpose. Particular care was paid to each selection of turbulence that provides stationary mean and variance over the entire segment. Results show that even for a stationary segment of turbulence, the process is still highly non-Gaussian, in spite of a Gaussian appearance for its first-order distribution. Data also indicate strongly non-Gaussian second-order distributions. It is therefore concluded that even stationary atmospheric turbulence with a normal first-order distribution cannot be considered a Gaussian process, and consequently the application of Rice's exceedance statistics should be approached with caution.
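Rice's Gaussian exceedance-rate prediction, which the paper tests turbulence data against, can be checked on synthetic data. The moving-average process and threshold below are assumptions for illustration; for a genuinely Gaussian process the crossing-count ratio should follow the Rice form, which is precisely what fails for turbulence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Band-limited Gaussian process: white noise smoothed by a moving average.
x = np.convolve(rng.standard_normal(200_000), np.ones(20) / 20, mode="valid")
sigma = x.std()

def upcrossings(x, level):
    """Number of upward crossings of `level` in the sampled record."""
    return int(np.sum((x[:-1] < level) & (x[1:] >= level)))

# Rice: for a stationary Gaussian process the expected exceedance rate is
# N(y) = N(0) exp(-y^2 / (2 sigma^2)), so the ratio of crossing counts at
# level y to counts at zero depends only on y/sigma, not on the bandwidth.
y = 2 * sigma
ratio = upcrossings(x, y) / upcrossings(x, 0.0)
predicted = float(np.exp(-y**2 / (2 * sigma**2)))
```

Here `ratio` matches `predicted` (about 0.135 at two standard deviations) to within sampling error; a stationary turbulence record with Gaussian first-order statistics but non-Gaussian higher-order structure would not satisfy this check.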
An order statistics approach to the halo model for galaxies
NASA Astrophysics Data System (ADS)
Paul, Niladri; Paranjape, Aseem; Sheth, Ravi K.
2017-04-01
We use the halo model to explore the implications of assuming that galaxy luminosities in groups are randomly drawn from an underlying luminosity function. We show that even the simplest of such order statistics models - one in which this luminosity function p(L) is universal - naturally produces a number of features associated with previous analyses based on the 'central plus Poisson satellites' hypothesis. These include the monotonic relation of mean central luminosity with halo mass, the lognormal distribution around this mean and the tight relation between the central and satellite mass scales. In stark contrast to observations of galaxy clustering, however, this model predicts no luminosity dependence of large-scale clustering. We then show that an extended version of this model, based on the order statistics of a halo mass dependent luminosity function p(L|m), is in much better agreement with the clustering data as well as satellite luminosities, but systematically underpredicts central luminosities. This brings into focus the idea that central galaxies constitute a distinct population that is affected by different physical processes than are the satellites. We model this physical difference as a statistical brightening of the central luminosities, over and above the order statistics prediction. The magnitude gap between the brightest and second brightest group galaxy is predicted as a by-product, and is also in good agreement with observations. We propose that this order statistics framework provides a useful language in which to compare the halo model for galaxies with more physically motivated galaxy formation models.
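The simplest universal-p(L) order statistics model can be sketched with a toy luminosity function. A unit exponential is assumed here purely because its order statistics are known in closed form; the paper's p(L) is of course different.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy universal luminosity function p(L): unit exponential. Each group's
# galaxies are i.i.d. draws from p(L); the central is identified with the
# brightest draw, i.e. the largest order statistic.
def mean_central(n_gal, n_groups=20_000):
    lum = rng.exponential(1.0, size=(n_groups, n_gal))
    return lum.max(axis=1).mean()

richness = (2, 5, 20)
centrals = [mean_central(n) for n in richness]
# For Exp(1) the expected maximum of n draws is the n-th harmonic number,
# so mean central luminosity grows monotonically (if slowly) with richness,
# mirroring the monotonic central-luminosity/halo-mass relation in the text.
```

With richer groups standing in for more massive haloes, the monotonic trend of the brightest order statistic reproduces the qualitative central-luminosity/halo-mass relation, while the remaining draws play the role of satellites.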
Generalization of Entropy Based Divergence Measures for Symbolic Sequence Analysis
Ré, Miguel A.; Azad, Rajeev K.
2014-01-01
Entropy based measures have been frequently used in symbolic sequence analysis. A symmetrized and smoothed form of Kullback-Leibler divergence or relative entropy, the Jensen-Shannon divergence (JSD), is of particular interest because of its sharing properties with families of other divergence measures and its interpretability in different domains including statistical physics, information theory and mathematical statistics. The uniqueness and versatility of this measure arise because of a number of attributes including generalization to any number of probability distributions and association of weights to the distributions. Furthermore, its entropic formulation allows its generalization in different statistical frameworks, such as non-extensive Tsallis statistics and higher order Markovian statistics. We revisit these generalizations and propose a new generalization of JSD in the integrated Tsallis and Markovian statistical framework. We show that this generalization can be interpreted in terms of mutual information. We also investigate the performance of different JSD generalizations in deconstructing chimeric DNA sequences assembled from bacterial genomes including that of E. coli, S. enterica typhi, Y. pestis and H. influenzae. Our results show that the JSD generalizations bring in more pronounced improvements when the sequences being compared are from phylogenetically proximal organisms, which are often difficult to distinguish because of their compositional similarity. While small but noticeable improvements were observed with the Tsallis statistical JSD generalization, relatively large improvements were observed with the Markovian generalization. In contrast, the proposed Tsallis-Markovian generalization yielded more pronounced improvements relative to the Tsallis and Markovian generalizations, specifically when the sequences being compared arose from phylogenetically proximal organisms. PMID:24728338
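The weighted JSD on empirical k-mer distributions can be sketched as below, standing in for the higher-order Markovian generalization (the Tsallis variants would replace the Shannon entropy with the q-entropy). The sequences are toy examples, not genomic data.

```python
from collections import Counter
from math import log2

def kmer_dist(seq, k):
    """Empirical distribution of overlapping k-mers (order k-1 context)."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def entropy(p):
    return -sum(v * log2(v) for v in p.values() if v > 0)

def jsd(p, q, w1=0.5, w2=0.5):
    """Weighted Jensen-Shannon divergence between two distributions:
    JSD = H(w1 p + w2 q) - w1 H(p) - w2 H(q)."""
    keys = set(p) | set(q)
    m = {k: w1 * p.get(k, 0.0) + w2 * q.get(k, 0.0) for k in keys}
    return entropy(m) - w1 * entropy(p) - w2 * entropy(q)

a = "AT" * 9          # AT-rich toy sequence
b = "GC" * 9          # compositionally disjoint sequence
c = "AT" * 8 + "TA"   # near-identical variant of a
d_ab = jsd(kmer_dist(a, 2), kmer_dist(b, 2))
d_ac = jsd(kmer_dist(a, 2), kmer_dist(c, 2))
```

Disjoint 2-mer supports give the maximal JSD of one bit, while the near-identical pair scores close to zero, the behavior exploited when segmenting chimeric sequences.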
Nogami, M; Takatsu, A; Endo, N; Ishiyama, I
1999-01-01
The immediate early gene product c-fos is known to be induced in neurons under noxious stimuli. Therefore, the immunohistochemistry of c-fos expression in human brains might offer information on the localization of stimulated neurons. In this study, the immunohistochemical localization of c-fos was studied in the neurons of the hypoglossal nucleus (XII), the dorsal motor nucleus of the vagal nerve (X), the nucleus solitarius (Sol), the accessory cuneate nucleus (Cun), the spinal trigeminal nucleus (V) and the inferior olive (Oli) of the human medulla oblongata from forensic autopsy cases. The neurons in the X nucleus showed the highest percentage of positive reactions for c-fos, followed in descending order by the Cun, V, Oli, XII and Sol. The c-fos immunoreactivity in the Cun and X was statistically significantly higher than in the Sol, XII and Oli. Although neurons in the Sol are known to be involved in respiration, there was no statistically significant difference in the c-fos immunoreactivity in the neurons in the Sol between asphyxia and non-asphyxia cases. On the other hand, the percentage of neurons positive for the c-fos immunoreactivity was statistically significantly higher in the Oli of asphyxia cases than of non-asphyxia cases. Our results indicate the difference in the immunoreactivity of c-fos among the nuclei of the human medulla oblongata and that the c-fos immunoreactivity in the Oli might assist the diagnosis of asphyxia.
On the spontaneous collective motion of active matter
Wang, Shenshen; Wolynes, Peter G.
2011-01-01
Spontaneous directed motion, a hallmark of cell biology, is unusual in classical statistical physics. Here we study, using both numerical and analytical methods, organized motion in models of the cytoskeleton in which constituents are driven by energy-consuming motors. Although systems driven by small-step motors are described by an effective temperature and are thus quiescent, at higher order in step size, both homogeneous and inhomogeneous, flowing and oscillating behavior emerges. Motors that respond with a negative susceptibility to imposed forces lead to an apparent negative-temperature system in which beautiful structures form resembling the asters seen in cell division. PMID:21876141
Kurtosis Approach for Nonlinear Blind Source Separation
NASA Technical Reports Server (NTRS)
Duong, Vu A.; Stubberud, Allen R.
2005-01-01
In this paper, we introduce a new algorithm for blind source signal separation for post-nonlinear mixtures. The mixtures are assumed to be linearly mixed from unknown sources first and then distorted by memoryless nonlinear functions. The nonlinear functions are assumed to be smooth and can be approximated by polynomials. Both the coefficients of the unknown mixing matrix and the coefficients of the approximating polynomials are estimated by the gradient descent method, subject to higher-order statistical constraints. The results of simulation experiments presented in this paper demonstrate the validity and usefulness of our approach for nonlinear blind source signal separation.
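A minimal sketch of the kurtosis-driven linear stage only: one source is extracted from a whitened linear mixture by a FastICA-style fixed point on the kurtosis contrast, rather than the paper's gradient method on the full post-nonlinear model. The sources, mixing matrix and update rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two independent non-Gaussian sources (sub-Gaussian uniform, super-Gaussian
# Laplacian), mixed by an unknown linear matrix.
n = 20_000
s = np.vstack([rng.uniform(-1, 1, n), rng.laplace(0.0, 1.0, n)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])
x = A @ s

# Whiten the mixtures: afterwards every unit vector w yields unit-variance
# output, so the kurtosis of w.z alone ranks candidate directions.
x = x - x.mean(axis=1, keepdims=True)
evals, evecs = np.linalg.eigh(np.cov(x))
z = (evecs / np.sqrt(evals)) @ evecs.T @ x

# One-unit extraction: fixed-point update w <- E[z (w.z)^3] - 3w, which
# extremizes |kurtosis| of w.z on the unit sphere.
w = rng.standard_normal(2)
w /= np.linalg.norm(w)
for _ in range(50):
    y = w @ z
    w = (z * y**3).mean(axis=1) - 3 * w
    w /= np.linalg.norm(w)

y = w @ z
recovery = max(abs(np.corrcoef(y, s[0])[0, 1]),
               abs(np.corrcoef(y, s[1])[0, 1]))
```

At convergence the extracted signal is (up to sign and scale) one of the original sources; the post-nonlinear setting of the paper additionally requires inverting the memoryless polynomial distortions before this linear stage applies.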
NASA Astrophysics Data System (ADS)
Zhou, Hui
Implementing a target responsibility system for offices and departments is an inevitable outcome of higher education reform, and the statistical processing of student information is an important part of reviewing student performance. Based on an analysis of student evaluation, a student information management database application system is designed in this paper using relational database management system software. To implement the student information management functions, the functional requirements, overall structure, data sheets and fields, data sheet associations and software code are designed in detail.
Model-independent test for scale-dependent non-Gaussianities in the cosmic microwave background.
Räth, C; Morfill, G E; Rossmanith, G; Banday, A J; Górski, K M
2009-04-03
We present a model-independent method to test for scale-dependent non-Gaussianities, using scaling indices as test statistics. To this end, surrogate data sets are generated in which the power spectrum of the original data is preserved, while the higher-order correlations are partly randomized by applying a scale-dependent shuffling procedure to the Fourier phases. We apply this method to the Wilkinson Microwave Anisotropy Probe data of the cosmic microwave background and find signatures for non-Gaussianities on large scales. Further tests are required to elucidate the origin of the detected anomalies.
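The surrogate construction can be sketched in its all-scale limit; the paper's procedure shuffles phases only within selected scale bands, whereas this illustration randomizes every phase while preserving the power spectrum exactly.

```python
import numpy as np

rng = np.random.default_rng(3)

def phase_surrogate(x):
    """Surrogate with the power spectrum of x but fully randomized Fourier
    phases, destroying higher-order correlations. (A scale-dependent
    variant would randomize phases only within chosen bands.)"""
    n = len(x)
    f = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(f))
    phases[0] = 0.0            # keep the mean (DC bin) real
    if n % 2 == 0:
        phases[-1] = 0.0       # keep the Nyquist bin real
    return np.fft.irfft(np.abs(f) * np.exp(1j * phases), n)

# A skewed, hence non-Gaussian, test signal: squared smoothed noise.
x = np.convolve(rng.standard_normal(4096), np.ones(8) / 8, mode="same") ** 2
y = phase_surrogate(x)

skew = lambda v: float(np.mean(((v - v.mean()) / v.std()) ** 3))
spec_x = np.abs(np.fft.rfft(x))
spec_y = np.abs(np.fft.rfft(y))
```

The surrogate's spectrum matches the original's to machine precision while its skewness collapses toward zero; comparing a test statistic (here skewness, scaling indices in the paper) between data and surrogates isolates the higher-order, non-Gaussian content.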
Park, Cheol Eon; Shin, Seung Youp; Lee, Kun Hee; Cho, Joong Saeng; Kim, Sung Wan
2012-09-01
Both allergic rhinitis (AR) and obstructive sleep apnea (OSA) are known to increase stress and fatigue, but the result of their coexistence has not been studied. The objective of this study was to evaluate the amount of stress and fatigue when AR is combined with OSA. One hundred and twelve patients diagnosed with OSA by polysomnography were enrolled. Among them, 37 patients were diagnosed with AR by a skin prick test and symptoms (OSA-AR group) and 75 patients were classified into the OSA group since they tested negative for allergies. We evaluated the Epworth sleepiness scale (ESS), stress score, fatigue score, ability to cope with stress, and rhinosinusitis quality of life questionnaire (RQLQ) with questionnaires and statistically compared the scores of both groups. There were no significant differences in BMI and sleep parameters such as LSAT, AHI, and RERA between the two groups. However, the OSA-AR group showed a significantly higher ESS score compared to the OSA group (13.7 ± 4.7 vs. 9.3 ± 4.8). Fatigue scores were also significantly higher in the OSA-AR group than in the OSA group (39.8 ± 11.0 vs. 30.6 ± 5.4). The OSA-AR group had a significantly higher stress score (60.4 ± 18.6 vs. 51.2 ± 10.4). The ability to cope with stress was higher in the OSA group, although this difference was not statistically significant. RQLQ scores were higher in the OSA-AR group (60.2 ± 16.7 compared to 25.1 ± 13.9). In conclusion, management of allergic rhinitis is very important in treating OSA patients in order to eliminate stress and fatigue, minimize daytime sleepiness, and improve quality of life.
Groen, Iris I. A.; Ghebreab, Sennay; Lamme, Victor A. F.; Scholte, H. Steven
2012-01-01
The visual world is complex and continuously changing. Yet, our brain transforms patterns of light falling on our retina into a coherent percept within a few hundred milliseconds. Possibly, low-level neural responses already carry substantial information to facilitate rapid characterization of the visual input. Here, we computationally estimated low-level contrast responses to computer-generated naturalistic images, and tested whether spatial pooling of these responses could predict image similarity at the neural and behavioral level. Using EEG, we show that statistics derived from pooled responses explain a large amount of variance between single-image evoked potentials (ERPs) in individual subjects. Dissimilarity analysis on multi-electrode ERPs demonstrated that large differences between images in pooled response statistics are predictive of more dissimilar patterns of evoked activity, whereas images with little difference in statistics give rise to highly similar evoked activity patterns. In a separate behavioral experiment, images with large differences in statistics were judged as different categories, whereas images with little differences were confused. These findings suggest that statistics derived from low-level contrast responses can be extracted in early visual processing and can be relevant for rapid judgment of visual similarity. We compared our results with two other, well-known contrast statistics: Fourier power spectra and higher-order properties of contrast distributions (skewness and kurtosis). Interestingly, whereas these statistics allow for accurate image categorization, they do not predict ERP response patterns or behavioral categorization confusions. These converging computational, neural and behavioral results suggest that statistics of pooled contrast responses contain information that corresponds with perceived visual similarity in a rapid, low-level categorization task. PMID:23093921
Tekin, Kemal; Sonmez, Kenan; Inanc, Merve; Ozdemir, Kubra; Goker, Yasin Sakir; Yilmazbas, Pelin
2018-04-01
To evaluate the corneal topographic changes and postvitrectomy astigmatism after 27-gauge (g) microincision vitrectomy surgery (MIVS) using the Pentacam HR-Scheimpflug imaging system. This prospective descriptive study included 30 eyes of 30 patients who underwent 27-g MIVS. All eyes underwent a Pentacam HR examination preoperatively and on the first week, first month and third month postoperatively. The power of the corneal astigmatism; mean keratometry (Km), K1 and K2 values and corneal asphericity (Q value) for both the front and back surfaces of the cornea; index of surface variance (ISV); index of vertical asymmetry (IVA); index of height asymmetry (IHA); index of height decentration (IHD); and higher-order aberrations including coma, trefoil, spherical aberration, higher-order root-mean-square (RMS) and total RMS were recorded. Additionally, the mean induced astigmatism was estimated by vector analysis. No statistically significant changes were observed in the mean power of corneal astigmatism, mean keratometry, K1 and K2 values, corneal asphericity values, ISV, IVA, IHA, IHD or higher-order aberrations on the first week, first month and third month after the operation. The mean surgically induced astigmatism was calculated as 0.23 ± 0.11 D on the first week, 0.19 ± 0.10 D on the first month and 0.19 ± 0.08 D on the third month postoperatively. Minor corneal surface and induced astigmatic changes are expected to result in rapid visual rehabilitation after pars plana vitrectomy with the 27-g MIVS system.
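Vector analysis of surgically induced astigmatism, as referenced above, is conventionally done by double-angle decomposition. The sketch below uses hypothetical cylinder values, not the study's data.

```python
import numpy as np

def induced_astigmatism(c_pre, axis_pre, c_post, axis_post):
    """Magnitude of surgically induced astigmatism (SIA) by double-angle
    vector subtraction: a cylinder C at axis theta maps to the vector
    (C cos 2*theta, C sin 2*theta); SIA is the length of the difference
    between the post- and pre-operative vectors."""
    def to_vec(c, axis_deg):
        a = np.deg2rad(2.0 * axis_deg)
        return np.array([c * np.cos(a), c * np.sin(a)])
    d = to_vec(c_post, axis_post) - to_vec(c_pre, axis_pre)
    return float(np.hypot(d[0], d[1]))

# Hypothetical example (not study data): 0.50 D @ 90 deg pre-op and
# 0.60 D @ 95 deg post-op yield an induced astigmatism of roughly 0.14 D,
# on the order of the first-week mean (0.23 D) reported above.
sia = induced_astigmatism(0.50, 90.0, 0.60, 95.0)
```

The doubling of the axis angle is what makes, e.g., cylinders at 0 and 90 degrees opposite rather than orthogonal vectors, so purely axis rotations register as induced astigmatism even when cylinder power is unchanged.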
Teaching Statistics in Integration with Psychology
ERIC Educational Resources Information Center
Wiberg, Marie
2009-01-01
The aim was to revise a statistics course in order to get the students motivated to learn statistics and to integrate statistics more throughout a psychology course. Further, we wish to make students become more interested in statistics and to help them see the importance of using statistics in psychology research. To achieve this goal, several…
Analytical theory of mesoscopic Bose-Einstein condensation in an ideal gas
NASA Astrophysics Data System (ADS)
Kocharovsky, Vitaly V.; Kocharovsky, Vladimir V.
2010-03-01
We find the universal structure and scaling of the Bose-Einstein condensation (BEC) statistics and thermodynamics (Gibbs free energy, average energy, heat capacity) for a mesoscopic canonical-ensemble ideal gas in a trap with an arbitrary number of atoms, any volume, and any temperature, including the whole critical region. We identify a universal constraint-cutoff mechanism that makes BEC fluctuations strongly non-Gaussian and is responsible for all unusual critical phenomena of the BEC phase transition in the ideal gas. The main result is an analytical solution to the problem of critical phenomena. It is derived by, first, calculating analytically the universal probability distribution of the noncondensate occupation, or a Landau function, and then using it for the analytical calculation of the universal functions for the particular physical quantities via the exact formulas which express the constraint-cutoff mechanism. We find asymptotics of that analytical solution as well as its simple analytical approximations which describe the universal structure of the critical region in terms of the parabolic cylinder or confluent hypergeometric functions. The obtained results for the order parameter, all higher-order moments of BEC fluctuations, and thermodynamic quantities perfectly match the known asymptotics outside the critical region for both low and high temperature limits. We suggest two- and three-level trap models of BEC and find their exact solutions in terms of the cutoff negative binomial distribution (which tends to the cutoff gamma distribution in the continuous limit) and the confluent hypergeometric distribution, respectively. Also, we present an exactly solvable cutoff Gaussian model of BEC in a degenerate interacting gas. All these exact solutions confirm the universality and constraint-cutoff origin of the strongly non-Gaussian BEC statistics. 
We introduce a regular refinement scheme for the condensate statistics approximations on the basis of the infrared universality of higher-order cumulants and the method of superposition and show how to model BEC statistics in the actual traps. In particular, we find that the three-level trap model with matching the first four or five cumulants is enough to yield remarkably accurate results for all interesting quantities in the whole critical region. We derive an exact multinomial expansion for the noncondensate occupation probability distribution and find its high-temperature asymptotics (Poisson distribution) and corrections to it. Finally, we demonstrate that the critical exponents and a few known terms of the Taylor expansion of the universal functions, which were calculated previously from fitting the finite-size simulations within the phenomenological renormalization-group theory, can be easily obtained from the presented full analytical solutions for the mesoscopic BEC as certain approximations in the close vicinity of the critical point.
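The constraint-cutoff mechanism on the negative binomial model can be sketched numerically. The parameters r and p below are illustrative, not fitted to any trap; the point is only how truncation at the total atom number reshapes the statistics.

```python
from math import lgamma
import numpy as np

def cutoff_negative_binomial(r, p, n_max):
    """Distribution of the noncondensate occupation n: an unconstrained
    negative binomial truncated (cut off) at the total atom number n_max
    and renormalized; the condensate occupation is n_max - n. Toy
    illustration of the constraint-cutoff mechanism."""
    n = np.arange(n_max + 1)
    logw = np.array([lgamma(k + r) - lgamma(r) - lgamma(k + 1) for k in n])
    logw += r * np.log(p) + n * np.log(1 - p)
    w = np.exp(logw - logw.max())
    return n, w / w.sum()

def skewness(n, pmf):
    mu = (n * pmf).sum()
    var = ((n - mu) ** 2 * pmf).sum()
    return ((n - mu) ** 3 * pmf).sum() / var ** 1.5

# Far above the mean the cutoff is irrelevant and the right-skewed negative
# binomial is recovered; a cutoff near the mean (the critical region) makes
# the distribution strongly non-Gaussian with the opposite skew.
n_loose, p_loose = cutoff_negative_binomial(50, 0.5, 500)
n_tight, p_tight = cutoff_negative_binomial(50, 0.5, 50)
sk_loose = skewness(n_loose, p_loose)
sk_tight = skewness(n_tight, p_tight)
```

The sign change of the third cumulant under truncation is a simple fingerprint of the strongly non-Gaussian BEC fluctuations described in the abstract; the refinement scheme in the text matches several such low-order cumulants rather than just the skewness.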
Fish consumption and hair mercury levels in women of childbearing age, Martin County, Florida.
Nair, Anil; Jordan, Melissa; Watkins, Sharon; Washam, Robert; DuClos, Chris; Jones, Serena; Palcic, Jason; Pawlowicz, Marek; Blackmore, Carina
2014-12-01
The health effects of mercury in humans fall mostly on the developing nervous system. Pregnant and breastfeeding women must be targeted in order to decrease mercury exposure in the populations at highest risk: infants, unborn fetuses, and young children. The purpose of this study is to understand the demographics of fish-consumption patterns among women of childbearing age (including pregnant women) in Martin County, Florida, and to analyze the associations of mercury levels in participants' hair with socio-demographic variables in order to better design prevention messages and campaigns. Mercury concentrations in hair samples of 408 women ages 18-49 were assessed. Data on demographic factors, pregnancy status, fish consumption, and awareness of fish advisories were collected during personal interviews. Data were analyzed using descriptive statistics and multivariate logistic regression. The geometric and arithmetic means of hair mercury concentration were 0.371 and 0.676 µg/g of hair. One-fourth of the respondents had a concentration ≥1 µg/g of hair. Consuming a higher number of fish meals per month, consumption of commercially purchased or locally caught fish higher in mercury, White race and income ≥$75,000 were positively associated with the likelihood of having higher hair mercury levels. This study confirms the existence of a higher overall mean hair mercury level and a higher percentage of women with ≥1 µg/g hair mercury level than those reported at the national level and in other regional studies. This suggests the need for region-specific fish consumption advisories to minimize mercury exposure in humans.
Lollier, Allison; Rodriguez, Elisa M; Saad-Harfouche, Frances G; Widman, Christy A; Mahoney, Martin C
2018-06-01
This pilot study was undertaken to identify characteristics and approaches (e.g., social, behavioral, and/or systems factors) which differentiate primary care medical offices achieving higher rates of HPV vaccination. Eligible primary care practice sites providing care to adolescent patients were recruited within an eight-county region of western New York State between June 2016 and July 2016. Practice sites were categorized as higher (n = 3) or lower performing (n = 2) based on three-dose series completion rates for HPV vaccinations among females aged 13-17 years. Interviewer-administered surveys were completed with office staff (n = 37) and focused on understanding approaches to adolescent vaccination. Results were summarized using basic descriptive statistics. Higher performing offices reported more full-time clinical staff (median = 25 vs. 9.5 in lower performing clinics), larger panels of patients ages 11-17 years (median = 3541 vs. 925), and more timely completion of NYSIIS data entry after vaccination (less than one month vs. two months). Staff in higher performing offices reviewed medical charts prior to scheduled visits (100% vs. 50%) and identified their office vaccine champion as a physician and/or a nurse manager (75% vs. 22%). Also, staff from higher performing offices were more likely to report the combination of having an office vaccine champion, previewing charts and using standing orders. These preliminary findings support future research examining implementation of organizational processes, including identifying a vaccine champion, using standing orders and previewing medical charts prior to office visits, as strategies to increase rates of HPV vaccination in primary care offices.
Financial Statistics of Institutions of Higher Education: Property, 1969-70.
ERIC Educational Resources Information Center
Mertins, Paul F.; Brandt, Norman J.
This publication presents a part of the data provided by institutions of higher education in response to a questionnaire entitled "Financial Statistics of Institutions of Higher Education, 1969-70," which was included in the fifth annual Higher Education General Information Survey (HEGIS). This publication deals with the property related data.…
The Attitude of Iranian Nurses About Do Not Resuscitate Orders
Mogadasian, Sima; Abdollahzadeh, Farahnaz; Rahmani, Azad; Ferguson, Caleb; Pakanzad, Fermisk; Pakpour, Vahid; Heidarzadeh, Hamid
2014-01-01
Background: Do not resuscitate (DNR) orders are one of many challenging issues in end of life care. Previous research has not investigated Muslim nurses’ attitudes towards DNR orders. Aims: This study aims to investigate the attitudes of Iranian nurses towards DNR orders and determine the role of religious sects in forming those attitudes. Materials and Methods: In this descriptive-comparative study, 306 nurses from five hospitals affiliated to Tabriz University of Medical Sciences (TUOMS) in East Azerbaijan Province and three hospitals in Kurdistan Province participated. Data were gathered using a survey on attitudes toward DNR orders. Data were analyzed using Statistical Package for the Social Sciences (SPSS Inc., Chicago, IL) software, examining descriptive and inferential statistics. Results: Participants expressed a willingness to learn more about DNR orders and highlighted the importance of respecting patients and their families in DNR orders. In contrast, on many key items participants reported negative attitudes towards DNR orders. There were statistical differences on two items between the attitudes of Shiite and Sunni nurses. Conclusions: Iranian nurses, regardless of their religious sect, reported negative attitudes towards many aspects of DNR orders. It may be possible to change the attitudes of Iranian nurses towards DNR through education. PMID:24600178
Kiekkas, Panagiotis; Panagiotarou, Aliki; Malja, Alvaro; Tahirai, Daniela; Zykai, Rountina; Bakalis, Nick; Stefanopoulos, Nikolaos
2015-12-01
Although statistical knowledge and skills are necessary for promoting evidence-based practice, health sciences students have expressed anxiety about statistics courses, which may hinder their learning of statistical concepts. To evaluate the effects of a biostatistics course on nursing students' attitudes toward statistics and to explore the association between these attitudes and their performance in the course examination. One-group quasi-experimental pre-test/post-test design. Undergraduate nursing students of the fifth or higher semester of studies, who attended a biostatistics course. Participants were asked to complete the pre-test and post-test forms of The Survey of Attitudes Toward Statistics (SATS)-36 scale at the beginning and end of the course respectively. Pre-test and post-test scale scores were compared, while correlations between post-test scores and participants' examination performance were estimated. Among 156 participants, post-test scores of the overall SATS-36 scale and of the Affect, Cognitive Competence, Interest and Effort components were significantly higher than pre-test ones, indicating that the course was followed by more positive attitudes toward statistics. Among 104 students who participated in the examination, higher post-test scores of the overall SATS-36 scale and of the Affect, Difficulty, Interest and Effort components were significantly but weakly correlated with higher examination performance. Students' attitudes toward statistics can be improved through appropriate biostatistics courses, while positive attitudes contribute to higher course achievements and possibly to improved statistical skills in later professional life. Copyright © 2015 Elsevier Ltd. All rights reserved.
Statistical Cost Estimation in Higher Education: Some Alternatives.
ERIC Educational Resources Information Center
Brinkman, Paul T.; Niwa, Shelley
Recent developments in econometrics that are relevant to the task of estimating costs in higher education are reviewed. The relative effectiveness of alternative statistical procedures for estimating costs is also tested. Statistical cost estimation involves three basic parts: a model, a data set, and an estimation procedure. Actual data are used…
NASA Astrophysics Data System (ADS)
Xue, Lixia; Dai, Yun; Rao, Xuejun; Wang, Cheng; Hu, Yiyun; Liu, Qian; Jiang, Wenhan
2008-01-01
Correction of higher-order aberrations can improve the visual performance of the human eye to some extent. To evaluate how much visual benefit can be obtained from higher-order aberration correction, we developed an adaptive optics vision simulator (AOVS). Dynamic, real-time optimized modal compensation was used to implement various customized higher-order ocular aberration correction strategies. The experimental results indicate that higher-order aberration correction improves visual performance compared with lower-order correction alone, but the degree of improvement and the appropriate correction strategy differ between individuals. Some subjects gained a large visual benefit when higher-order aberrations were corrected, while others gained little benefit even when all higher-order aberrations were corrected. Therefore, relative to a general lower-order correction strategy, a customized higher-order aberration correction strategy is needed to obtain the optimal visual improvement for each individual. The AOVS provides an effective tool for higher-order ocular aberrometry and for customized ocular aberration correction.
Simulated performance of an order statistic threshold strategy for detection of narrowband signals
NASA Technical Reports Server (NTRS)
Satorius, E.; Brady, R.; Deich, W.; Gulkis, S.; Olsen, E.
1988-01-01
The application of order statistics to signal detection is becoming an increasingly active area of research. This is due to the inherent robustness of rank estimators in the presence of large outliers that would significantly degrade more conventional mean-level-based detection systems. A detection strategy is presented in which the threshold estimate is obtained using order statistics. The performance of this algorithm in the presence of simulated interference and broadband noise is evaluated. In this way, the robustness of the proposed strategy in the presence of the interference can be fully assessed as a function of the interference, noise, and detector parameters.
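The robustness argument above can be illustrated with a small simulation (a sketch only; the bin count, interference model, rank choice, and scale factors are illustrative assumptions, not the paper's detector parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated power-spectrum bins: broadband noise (exponential) plus a
# few strong narrowband interference lines (outliers).
bins = rng.exponential(scale=1.0, size=1024)
lines = [100, 400, 700]
bins[lines] += 50.0

def os_threshold(b, rank_frac=0.5, scale=10.0):
    """Threshold from an order statistic: take the value at a chosen
    rank of the sorted bins and scale it. A few large outliers cannot
    move a mid-rank value, so the noise-floor estimate stays robust."""
    return scale * np.sort(b)[int(rank_frac * len(b))]

def mean_threshold(b, scale=10.0):
    """Conventional mean-level threshold; outliers inflate the mean."""
    return scale * b.mean()

t_os, t_mean = os_threshold(bins), mean_threshold(bins)
detected = np.flatnonzero(bins > t_os)
```

The order-statistic (median-based) threshold tracks the noise floor while the mean-level threshold is dragged upward by the interference, which is exactly the degradation the abstract describes.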
Reassessing the Link Between Premarital Cohabitation and Marital Instability
REINHOLD, STEFFEN
2010-01-01
Premarital cohabitation has been found to be positively correlated with the likelihood of marital dissolution in the United States. To reassess this link, I estimate proportional hazard models of marital dissolution for first marriages by using pooled data from the 1988, 1995, and 2002 surveys of the National Survey of Family Growth (NSFG). These results suggest that the positive relationship between premarital cohabitation and marital instability has weakened for more recent birth and marriage cohorts. Using multiple marital outcomes for a person to account for one source of unobserved heterogeneity, panel models suggest that cohabitation is not selective of individuals with higher risk of marital dissolution and may be a stabilizing factor for higher-order marriages. Further research with more recent data is needed to assess whether these results are statistical artifacts caused by data weaknesses in the NSFG. PMID:20879685
EEG artifact removal-state-of-the-art and guidelines.
Urigüen, Jose Antonio; Garcia-Zapirain, Begoña
2015-06-01
This paper presents an extensive review on the artifact removal algorithms used to remove the main sources of interference encountered in the electroencephalogram (EEG), specifically ocular, muscular and cardiac artifacts. We first introduce background knowledge on the characteristics of EEG activity, of the artifacts and of the EEG measurement model. Then, we present algorithms commonly employed in the literature and describe their key features. Lastly, principally on the basis of the results provided by various researchers, but also supported by our own experience, we compare the state-of-the-art methods in terms of reported performance, and provide guidelines on how to choose a suitable artifact removal algorithm for a given scenario. With this review we have concluded that, without prior knowledge of the recorded EEG signal or the contaminants, the safest approach is to correct the measured EEG using independent component analysis-to be precise, an algorithm based on second-order statistics such as second-order blind identification (SOBI). Other effective alternatives include extended information maximization (InfoMax) and an adaptive mixture of independent component analyzers (AMICA), based on higher order statistics. All of these algorithms have proved particularly effective with simulations and, more importantly, with data collected in controlled recording conditions. Moreover, whenever prior knowledge is available, then a constrained form of the chosen method should be used in order to incorporate such additional information. Finally, since which algorithm is the best performing is highly dependent on the type of the EEG signal, the artifacts and the signal to contaminant ratio, we believe that the optimal method for removing artifacts from the EEG consists in combining more than one algorithm to correct the signal using multiple processing stages, even though this is an option largely unexplored by researchers in the area.
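As a rough illustration of the second-order-statistics family that SOBI belongs to, the sketch below implements AMUSE, a simpler single-lag relative of SOBI (the two-source mixture, mixing matrix, sampling rate, and lag are invented for the demo; real EEG cleaning would use SOBI, InfoMax, or AMICA from an established toolbox):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(2000) / 250.0  # 8 s at a hypothetical 250 Hz rate

# Two invented latent sources with distinct temporal structure: a
# 10 Hz "EEG-like" rhythm and a slow 0.5 Hz "ocular-like" drift.
s = np.vstack([np.sin(2 * np.pi * 10 * t),
               np.sin(2 * np.pi * 0.5 * t)])
A = np.array([[1.0, 0.8],
              [0.5, 1.0]])              # hypothetical mixing matrix
x = A @ s + 0.01 * rng.standard_normal(s.shape)

def amuse(x, lag=5):
    """AMUSE: whiten the data with the zero-lag covariance, then
    diagonalize one symmetrized time-lagged covariance matrix. SOBI
    extends this by jointly diagonalizing many lagged covariances."""
    xc = x - x.mean(axis=1, keepdims=True)
    d, e = np.linalg.eigh(np.cov(xc))
    z = (e @ np.diag(d ** -0.5) @ e.T) @ xc      # whitened data
    c = z[:, :-lag] @ z[:, lag:].T / (z.shape[1] - lag)
    _, u = np.linalg.eigh((c + c.T) / 2)
    return u.T @ z                               # estimated sources

y = amuse(x)
```

Because the two sources have different autocorrelations at the chosen lag, the lagged-covariance eigenvectors separate them; an artifact component recovered this way could then be zeroed before remixing.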
ERIC Educational Resources Information Center
Higher Education Funding Council for England, 2016
2016-01-01
The Higher Education Funding Council for England (HEFCE) asked the Higher Education Statistics Agency (HESA) and the Higher Education Academy (HEA) to undertake research to explore the current issues around academic teaching qualifications in the HESA Staff record and to offer recommendations to improve data quality and coverage in future…
NASA Astrophysics Data System (ADS)
Rubtsov, Vladimir; Kapralov, Sergey; Chalyk, Iuri; Ulianova, Onega; Ulyanov, Sergey
2013-02-01
The statistical properties of laser speckles formed in the skin and in the mucosa of the colon have been analyzed and compared. It is demonstrated that the first- and second-order statistics of "skin" speckles and "mucosal" speckles are quite different, and that speckles formed in the mucosa are not Gaussian. The layered structure of the colon mucosa causes the formation of speckled ("doubly scattered") biospeckles. First- and second-order statistics of speckled speckles are reviewed in this paper, and the statistical properties of Fresnel and Fraunhofer doubly scattered and cascade speckles are described. The non-Gaussian statistics of biospeckles may lead to strong localization of coherent light intensity in human tissue during laser surgery; a way of suppressing such highly localized non-Gaussian speckles is suggested.
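The contrast difference between Gaussian and doubly scattered speckle can be sketched numerically. The product-of-intensities model below is a common idealization of cascaded speckle, not the paper's tissue model:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Fully developed (Gaussian) speckle: the intensity of a circular
# complex Gaussian field is negative-exponentially distributed, and
# the speckle contrast (std/mean of intensity) equals 1.
a1 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
i_single = np.abs(a1) ** 2
c_single = i_single.std() / i_single.mean()

# Crude model of doubly scattered ("speckled") speckle: the product of
# two independent single-scatter intensities. Its contrast is sqrt(3),
# i.e., the intensity is far more strongly localized (non-Gaussian).
a2 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
i_double = i_single * np.abs(a2) ** 2
c_double = i_double.std() / i_double.mean()
```

The contrast above 1 for the cascaded model is one way to see why non-Gaussian biospeckle can concentrate intensity into unusually bright spots.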
Two-Dimensional Hermite Filters Simplify the Description of High-Order Statistics of Natural Images.
Hu, Qin; Victor, Jonathan D
2016-09-01
Natural image statistics play a crucial role in shaping biological visual systems, in understanding their function and design principles, and in designing effective computer-vision algorithms. High-order statistics are critical for conveying local features, but they are challenging to study, largely because their number and variety are large. Here, via the use of two-dimensional Hermite (TDH) functions, we identify a covert symmetry in the high-order statistics of natural images that simplifies this task. This emerges from the structure of TDH functions, which are an orthogonal set of functions organized into a hierarchy of ranks. Specifically, we find that the shape (skewness and kurtosis) of the distribution of filter coefficients depends only on the projection of the function onto a one-dimensional subspace specific to each rank. The characterization of natural image statistics provided by TDH filter coefficients reflects both their phase and amplitude structure, and we suggest an intuitive interpretation for the special subspace within each rank.
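The "shape of the filter-coefficient distribution" idea can be sketched as follows. Everything here is a stand-in: a piecewise-constant synthetic image in place of a natural photograph, and a simple derivative kernel in place of the paper's two-dimensional Hermite functions:

```python
import numpy as np
from scipy.signal import convolve2d
from scipy.stats import kurtosis, skew

rng = np.random.default_rng(3)

# Stand-in for a natural image: piecewise-constant random blocks, which
# produce the sparse, heavy-tailed filter responses typical of natural
# scenes (a synthetic surrogate, not a photograph).
img = np.repeat(np.repeat(rng.standard_normal((32, 32)), 8, axis=0),
                8, axis=1)

# Stand-in filter: a small derivative kernel; the paper uses
# two-dimensional Hermite functions, so this is only illustrative.
kern = np.array([[-1.0, 0.0, 1.0]])
coef = convolve2d(img, kern, mode='valid').ravel()

shape_img = (skew(coef), kurtosis(coef))    # heavy tails: large kurtosis
shape_gauss = (skew(rng.standard_normal(coef.size)),
               kurtosis(rng.standard_normal(coef.size)))
```

The edge-dominated image yields a strongly leptokurtic coefficient distribution, while a Gaussian reference has near-zero excess kurtosis; skewness and kurtosis are exactly the shape descriptors the abstract refers to.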
NASA Astrophysics Data System (ADS)
Erfanifard, Y.; Rezayan, F.
2014-10-01
Vegetation heterogeneity biases second-order summary statistics, e.g., Ripley's K-function, applied for spatial pattern analysis in ecology. Second-order investigation based on Ripley's K-function and related statistics (i.e., the L-function and the pair correlation function g) is widely used in ecology to develop hypotheses about underlying processes by characterizing spatial patterns of vegetation. The aim of this study was to demonstrate the effects of underlying heterogeneity of wild pistachio (Pistacia atlantica Desf.) trees on the second-order summary statistics of point pattern analysis in a part of the Zagros woodlands, Iran. The spatial distribution of 431 wild pistachio trees was accurately mapped in a 40 ha stand in the Wild Pistachio & Almond Research Site, Fars Province, Iran. Three commonly used second-order summary statistics (the K-, L-, and g-functions) were applied to analyse their spatial pattern. The two-sample Kolmogorov-Smirnov goodness-of-fit test showed that the observed pattern significantly followed an inhomogeneous Poisson process null model in the study region. The results also showed that the heterogeneous pattern of wild pistachio trees biased the homogeneous forms of the K-, L-, and g-functions, indicating stronger aggregation of the trees at scales of 0-50 m than actually existed, and indicating aggregation at scales of 150-200 m where the trees were in fact regularly distributed. Consequently, we showed that heterogeneity of point patterns may bias the results of homogeneous second-order summary statistics, and we suggest applying inhomogeneous summary statistics with appropriate null models for spatial pattern analysis of heterogeneous vegetation.
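A minimal sketch of the K-function behaviour described above, using a naive estimator without edge correction and invented point patterns (not the pistachio data):

```python
import numpy as np

rng = np.random.default_rng(4)

def ripley_k(pts, r, area=1.0):
    """Naive Ripley's K estimate (no edge correction): mean number of
    further points within distance r of a point, divided by the
    overall intensity lambda = n / area."""
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)
    lam = len(pts) / area
    return (d < r).sum() / len(pts) / lam

# Complete spatial randomness in the unit square: K(r) ~ pi r^2.
csr = rng.random((500, 2))
k_csr = ripley_k(csr, 0.05)

# A clustered pattern (20 tight clusters of 25 points) inflates K at
# short scales, mimicking the aggregation signal discussed in the text.
centers = rng.random((20, 2))
clustered = (centers[:, None, :]
             + 0.01 * rng.standard_normal((20, 25, 2))).reshape(-1, 2)
k_clu = ripley_k(clustered, 0.05)
```

Under complete spatial randomness the estimate stays near pi r^2, while clustering (whether from true aggregation or from unmodelled intensity heterogeneity) pushes K well above it, which is precisely why an inhomogeneous null model matters.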
The frequency of dyscalculia among primary school children.
Jovanović, Gordana; Jovanović, Zoran; Banković-Gajić, Jelena; Nikolić, Anđelka; Svetozarević, Srđana; Ignjatović-Ristić, Dragana
2013-06-01
Formal education, daily living activities and jobs require knowledge and application skills of counting and simple mathematical operations. Problems with mathematics start in primary school and persist into adulthood. This is known as dyscalculia, and its prevalence in the school population ranges from 3 to 6.5%. The study included 1424 third-grade students (aged 9-10) of all primary schools in the City of Kragujevac, Serbia. Tests in mathematics were given in order to determine their mathematical achievement; 1078 students (538 boys and 540 girls) completed all five tests. The frequency of dyscalculia in the sample was 9.9%. The difference between boys and girls according to the total score on the test was statistically significant (p<0.005). The difference between students according to their school achievement (excellent, very good, good, sufficient and insufficient) was statistically significant for all tests (p<0.0005). The influence of place of residence/school was significant for all tests (p<0.0005). Independent prognostic variables associated with dyscalculia are marks in mathematics and Serbian language. The frequency of dyscalculia of 9.9% in this sample is higher than in other similar studies. Further research should identify possible causes of such a frequency of dyscalculia in order to improve students' mathematical abilities.
NASA Astrophysics Data System (ADS)
Spicher, A.; Miloch, W.; Moen, J. I.; Clausen, L. B. N.
2015-12-01
Small-scale plasma irregularities and turbulence are common phenomena in the F layer of the ionosphere, both in the equatorial and polar regions. A common approach to analyzing data from experiments on space and ionospheric plasma irregularities is the power spectrum. Power spectra give no information about the phases of the waveforms, and thus do not allow one to determine whether some of the phases are correlated or whether they exhibit a random character. The former case would imply the presence of nonlinear wave-wave interactions, while the latter suggests a more turbulent-like process. Discerning between these mechanisms is crucial for understanding high latitude plasma irregularities and can be addressed with bispectral analysis and higher order statistics. In this study, we use higher order spectra and statistics to analyze electron density data observed with the ICI-2 sounding rocket experiment at a meter-scale resolution. The main objective of ICI-2 was to investigate plasma irregularities in the cusp in the F layer ionosphere. We study in detail two regions intersected during the rocket flight which are characterized by large density fluctuations: a trailing edge of a cold polar cap patch, and a density enhancement subject to cusp auroral particle precipitation. While these two regions exhibit similar power spectra, our analysis reveals that their internal structure is different. The structures on the edge of the polar cap patch are characterized by significant coherent mode coupling and intermittency, while the plasma enhancement associated with precipitation exhibits stronger random characteristics. This indicates that particle precipitation may play a fundamental role in ionospheric plasma structuring by creating turbulent-like structures.
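The phase-coupling detection that bispectral analysis provides can be sketched with synthetic segments (invented tone frequencies and segment counts, not the ICI-2 data): the bispectrum is large when the phase at the sum frequency is locked to the other two phases, and averages toward zero when phases are independent.

```python
import numpy as np

rng = np.random.default_rng(5)
nseg, nfft = 256, 128
n = np.arange(nfft)

def make_segments(coupled):
    """Segments containing tones at bins 10, 15 and 25. If coupled, the
    bin-25 tone's phase is the sum of the other two phases (quadratic
    phase coupling); otherwise it is independent and random."""
    segs = []
    for _ in range(nseg):
        p1, p2 = rng.uniform(0, 2 * np.pi, 2)
        p3 = p1 + p2 if coupled else rng.uniform(0, 2 * np.pi)
        segs.append(np.cos(2 * np.pi * 10 * n / nfft + p1)
                    + np.cos(2 * np.pi * 15 * n / nfft + p2)
                    + np.cos(2 * np.pi * 25 * n / nfft + p3))
    return np.array(segs)

def bispec_at(segs, f1, f2):
    """Segment-averaged bispectrum B(f1,f2) = <X(f1) X(f2) X*(f1+f2)>.
    Coupled phases add coherently; random phases average toward zero."""
    X = np.fft.fft(segs, axis=1)
    return np.abs(np.mean(X[:, f1] * X[:, f2] * np.conj(X[:, f1 + f2])))

b_coupled = bispec_at(make_segments(True), 10, 15)
b_random = bispec_at(make_segments(False), 10, 15)
```

Both signal classes have identical power spectra, yet only the coupled one gives a large bispectrum, which is exactly the distinction the abstract exploits between wave-wave interaction and turbulence-like randomness.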
Mechanical properties of commercial high strength ceramic core materials.
Rizkalla, A S; Jones, D W
2004-02-01
The objective of the present study is to evaluate and compare the flexural strength, dynamic elastic moduli and true hardness (H(o)) values of commercial Vita In-Ceram alumina core and Vita In-Ceram matrix glass with the standard aluminous porcelain (Hi-Ceram and Vitadur), Vitadur N and Dicor glass and glass-ceramic. The flexural strength was evaluated (n=5) using 3-point loading and a servo-hydraulic Instron testing machine at a crosshead speed of 0.5 mm/min. The density of the specimens (n=3) was measured by means of the water displacement technique. Dynamic Young's, shear and bulk moduli and Poisson's ratio (n=3) were measured using a non-destructive ultrasonic technique with 10 MHz lithium niobate crystals. The true hardness (n=3) was measured using a Knoop indenter, and the fracture toughness (n=3) was determined using a Vickers indenter and a Tukon hardness tester. Statistical analysis of the data was conducted using ANOVA and a Student-Newman-Keuls (SNK) rank order multiple comparative test. The SNK rank order test analysis of the mean flexural strength was able to separate five commercial core materials into three significant groups at p=0.05. Vita In-Ceram alumina and IPS Empress 2 exhibited significantly higher flexural strength than aluminous porcelains and IPS Empress at p=0.05. The dynamic elastic moduli and true hardness of Vita In-Ceram alumina core were significantly higher than those of the rest of the commercial ceramic core materials at p=0.05. The ultrasonic test method is a valuable mechanical characterization tool and was able to statistically discriminate between the chemical and structural differences within dental ceramic materials. Significant correlation was obtained between the dynamic Young's modulus and true hardness, p=0.05.
A fixed mass method for the Kramers-Moyal expansion--application to time series with outliers.
Petelczyc, M; Żebrowski, J J; Orłowska-Baranowska, E
2015-03-01
Extraction of stochastic and deterministic components from empirical data, necessary for the reconstruction of the dynamics of the system, is discussed. We determine both components using the Kramers-Moyal expansion. In our earlier papers, we obtained large fluctuations in the magnitude of both terms for rare or extreme valued events in the data. Calculations for such events are burdened by an unsatisfactory quality of the statistics. In general, the method is sensitive to the binning procedure applied for the construction of histograms. Instead of the commonly used constant width of bins, we use here a constant number of counts for each bin. This approach, the fixed mass method, allows us to include in the calculation events which do not yield satisfactory statistics in the fixed bin width method. The method developed is general. To demonstrate its properties, here, we present the modified Kramers-Moyal expansion method and discuss its properties by the application of the fixed mass method to four representative heart rate variability recordings with different numbers of ectopic beats. These beats may be rare events as well as outlying, i.e., very small or very large heart cycle lengths. The properties of ectopic beats are important not only for medical diagnostic purposes, but the occurrence of ectopic beats is a general example of the kind of variability that occurs in a signal with outliers. To show that the method is general, we also present results for two examples of data from very different areas of science: daily temperatures in a large European city and recordings of traffic on a highway. Using the fixed mass method, to assess the dynamics leading to the outlying events we studied the occurrence of higher order terms of the Kramers-Moyal expansion in the recordings. We found that the higher order terms of the Kramers-Moyal expansion are negligible for heart rate variability.
This finding opens the possibility of applying the Langevin equation to the whole range of empirical signals containing rare or outlying events. Note, however, that the higher order terms are non-negligible for the other data studied here, and for those data the Langevin equation is not applicable as a model.
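The fixed-mass (equal-count) binning idea can be sketched on a toy process. The example below estimates the first Kramers-Moyal coefficient (the drift) of a simulated Ornstein-Uhlenbeck series; the process and all parameters are invented for the demonstration, not the paper's heart-rate data:

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated Ornstein-Uhlenbeck series, dx = -x dt + sqrt(2 D) dW, so
# the true drift (first Kramers-Moyal coefficient) is D1(x) = -x.
dt, n, D = 0.01, 200_000, 0.5
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = x[i - 1] - x[i - 1] * dt \
        + np.sqrt(2 * D * dt) * rng.standard_normal()

incr = (x[1:] - x[:-1]) / dt

# Fixed-mass binning: sort by state value and cut into bins holding an
# equal number of samples, so sparse tails still give usable statistics
# (unlike fixed-width bins, which leave tail bins nearly empty).
nbins = 50
chunks = np.array_split(np.argsort(x[:-1]), nbins)
centers = np.array([x[:-1][idx].mean() for idx in chunks])
d1 = np.array([incr[idx].mean() for idx in chunks])

slope = np.polyfit(centers, d1, 1)[0]   # should be close to -1
```

Each bin carries the same number of counts, so the drift estimate has comparable statistical quality in the tails and in the bulk, which is the point of the fixed-mass method.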
Verbeek, Lianne; Zhao, Depeng P; Te Pas, Arjan B; Middeldorp, Johanna M; Hooper, Stuart B; Oepkes, Dick; Lopriore, Enrico
2016-06-01
To determine the differences in hemoglobin (Hb) levels in the first 2 days after birth in uncomplicated monochorionic twins in relation to birth order and mode of delivery. All consecutive uncomplicated monochorionic pregnancies with two live-born twins delivered at our center were included in this retrospective study. We recorded Hb levels at birth and on day 2, and analyzed Hb levels in association with birth order, mode of delivery, and time interval between delivery of twin 1 and 2. A total of 290 monochorionic twin pairs were analyzed, including 171 (59%) twins delivered vaginally and 119 (41%) twins born by cesarean section (CS). In twins delivered vaginally, mean Hb levels at birth and on day 2 were significantly higher in second-born twins compared to first-born twins: 17.8 versus 16.1 g/dL and 18.0 versus 14.8 g/dL, respectively (p < .01). Polycythemia was detected more often in second-born twins (12%, 20/166) compared to first-born twins (1%, 2/166; p < .01). Hb differences within twin pairs delivered by CS were not statistically or clinically significant. We found no association between inter-twin delivery time intervals and Hb differences. Second-born twins after vaginal delivery have higher Hb levels and more often polycythemia than their co-twin, but not when born by CS.
NASA Astrophysics Data System (ADS)
Brazhnik, Olga D.; Freed, Karl F.
1996-07-01
The lattice cluster theory (LCT) is extended to enable inclusion of longer range correlation contributions to the partition function of lattice model polymers in the athermal limit. A diagrammatic technique represents the expansion of the partition function in powers of the inverse lattice coordination number. Graph theory is applied to sort, classify, and evaluate the numerous diagrams appearing in higher orders. New general theorems are proven that provide a significant reduction in the computational labor required to evaluate the contributions from higher order correlations. The new algorithm efficiently generates the correction to the Flory mean field approximation from as many as eight sterically interacting bonds. While the new results contain the essential ingredients for treating a system of flexible chains with arbitrary lengths and concentrations, the complexity of our new algorithm motivates us to test the theory here for the simplest case of a system of lattice dimers by comparison to the dimer packing entropies from the work of Gaunt. This comparison demonstrates that the eight bond LCT is exact through order φ5 for dimers in one through three dimensions, where φ is the volume fraction of dimers. A subsequent work will use the contracted diagrams, derived and tested here, to treat the packing entropy for a system of flexible N-mers at a volume fraction of φ on hypercubic lattices.
Statistical Abstract of Tennessee Higher Education, 1982-1983.
ERIC Educational Resources Information Center
Tennessee Higher Education Commission, Nashville.
Statistics are presented on higher education in Tennessee for 1982-1983 and previous years. Attention is directed to: enrollment trends, undergraduate transfers, student finances, degrees conferred, faculty salaries, institutional finances, and actions of the Tennessee Higher Education Commission. Tables include: student headcount enrollment by…
Statistical Abstract of Tennessee Higher Education, 1984-85.
ERIC Educational Resources Information Center
Tennessee Higher Education Commission, Nashville.
Statistics are presented on higher education in Tennessee for 1984-1985 and previous years. Attention is directed to: enrollment trends, undergraduate transfers, student finances, degrees conferred, faculty salaries, institutional finances, and actions of the Tennessee Higher Education Commission. Tables include: student headcount enrollment by…
Some limit theorems for ratios of order statistics from uniform random variables.
Xu, Shou-Fang; Miao, Yu
2017-01-01
In this paper, we study the ratios of order statistics based on samples drawn from uniform distribution and establish some limit properties such as the almost sure central limit theorem, the large deviation principle, the Marcinkiewicz-Zygmund law of large numbers and complete convergence.
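A quick Monte Carlo check of the object of study (a sketch using a classical distributional fact, not the paper's limit theorems): for uniform order statistics U_(1) < … < U_(n), the variables (U_(k)/U_(k+1))^k are themselves i.i.d. Uniform(0,1), so ratios of consecutive order statistics have fully known distributions.

```python
import numpy as np

rng = np.random.default_rng(7)
m, n = 100_000, 5

u = np.sort(rng.random((m, n)), axis=1)   # uniform order statistics

# (U_(k)/U_(k+1))**k should be Uniform(0,1) for each k, so the sample
# means of these transformed ratios should be close to 1/2.
r1 = (u[:, 0] / u[:, 1]) ** 1
r3 = (u[:, 2] / u[:, 3]) ** 3
means = (r1.mean(), r3.mean())
```

This representation is one reason limit theorems (laws of large numbers, large deviations) for such ratios are tractable: the transformed ratios are independent and identically distributed.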
Schlomann, Brandon H
2018-06-06
A central problem in population ecology is understanding the consequences of stochastic fluctuations. Analytically tractable models with Gaussian driving noise have led to important, general insights, but they fail to capture rare, catastrophic events, which are increasingly observed at scales ranging from global fisheries to intestinal microbiota. Due to mathematical challenges, growth processes with random catastrophes are less well characterized and it remains unclear how their consequences differ from those of Gaussian processes. In the face of a changing climate and predicted increases in ecological catastrophes, as well as increased interest in harnessing microbes for therapeutics, these processes have never been more relevant. To better understand them, I revisit here a differential equation model of logistic growth coupled to density-independent catastrophes that arrive as a Poisson process, and derive new analytic results that reveal its statistical structure. First, I derive exact expressions for the model's stationary moments, revealing a single effective catastrophe parameter that largely controls low order statistics. Then, I use weak convergence theorems to construct its Gaussian analog in a limit of frequent, small catastrophes, keeping the stationary population mean constant for normalization. Numerically computing statistics along this limit shows how they transform as the dynamics shifts from catastrophes to diffusions, enabling quantitative comparisons. For example, the mean time to extinction increases monotonically by orders of magnitude, demonstrating significantly higher extinction risk under catastrophes than under diffusions. Together, these results provide insight into a wide range of stochastic dynamical systems important for ecology and conservation. Copyright © 2018 Elsevier Ltd. All rights reserved.
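The model the abstract revisits can be sketched directly (a minimal simulation with invented parameter values; the paper's analytic moments and weak-convergence limits are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(8)

def simulate(r=1.0, K=1.0, rate=0.1, frac=0.5, T=2000.0, dt=0.01):
    """Logistic growth dx/dt = r x (1 - x/K) interrupted by catastrophes
    arriving as a Poisson process with the given rate; each catastrophe
    multiplies the population by (1 - frac), density-independently."""
    steps = int(T / dt)
    x = np.empty(steps)
    x[0] = K
    for i in range(1, steps):
        x[i] = x[i - 1] + r * x[i - 1] * (1 - x[i - 1] / K) * dt
        if rng.random() < rate * dt:   # discretized Poisson arrivals
            x[i] *= 1.0 - frac
    return x

traj = simulate()
mean_pop = traj[len(traj) // 10:].mean()   # discard initial transient
```

Between catastrophes the population relaxes back toward the carrying capacity K, while the random multiplicative crashes hold the stationary mean below K; sweeping `rate` and `frac` toward frequent, small catastrophes is the diffusion limit the abstract describes.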
Cantarero, Samuel; Zafra-Gómez, Alberto; Ballesteros, Oscar; Navalón, Alberto; Reis, Marco S; Saraiva, Pedro M; Vílchez, José L
2011-01-01
In this work we present a monitoring study of linear alkylbenzene sulfonates (LAS) and insoluble soap in Spanish sewage sludge samples. The work focuses on finding statistical relations between the concentrations of LAS and insoluble soap in sewage sludge and variables related to the wastewater treatment plants, such as water hardness, population and treatment type. In total, 38 samples collected from different Spanish regions were studied. Principal component analysis (PCA) was used to reduce the number of response variables. An analysis of variance (ANOVA) and a non-parametric Kruskal-Wallis test were also applied, with p-values (the probability of obtaining a test statistic at least as extreme as the one actually observed, assuming the null hypothesis is true) estimated in order to study possible relations between the concentrations of both analytes and the remaining variables. We also compared the behavior of LAS and insoluble soap. In addition, the mean result obtained for LAS was compared with the limit value proposed by the future Directive entitled "Working Document on Sludge". According to the results, the mean concentrations obtained for soap and LAS were 26.49 g kg(-1) and 6.15 g kg(-1), respectively; it is worth noting that the LAS mean was significantly higher than the proposed limit value (2.6 g kg(-1)). In addition, LAS and soap concentrations depend largely on water hardness; however, only the LAS concentration depends on treatment type.
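The two tests named above can be sketched side by side (the group sizes and concentration values below are synthetic stand-ins, not data from the study):

```python
import numpy as np
from scipy.stats import f_oneway, kruskal

rng = np.random.default_rng(9)

# Hypothetical LAS concentrations (g/kg) grouped by water-hardness
# class; the numbers are invented for illustration.
soft = rng.normal(4.0, 1.0, 12)
medium = rng.normal(6.0, 1.0, 13)
hard = rng.normal(8.5, 1.0, 13)

# Parametric one-way ANOVA and its non-parametric counterpart, the
# Kruskal-Wallis test: a small p-value suggests the concentration
# differs across hardness classes.
f_stat, p_anova = f_oneway(soft, medium, hard)
h_stat, p_kw = kruskal(soft, medium, hard)
```

Running both tests, as the study does, guards against the ANOVA normality assumption failing for small, skewed environmental samples: Kruskal-Wallis only uses ranks.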
South Carolina Higher Education Statistical Abstract, 2014. 36th Edition
ERIC Educational Resources Information Center
Armour, Mim, Ed.
2014-01-01
The South Carolina Higher Education Statistical Abstract is a comprehensive, single-source compilation of tables and graphs which report data frequently requested by the Governor, Legislators, college and university staff, other state government officials, and the general public. The 2014 edition of the Statistical Abstract marks the 36th year of…
South Carolina Higher Education Statistical Abstract, 2015. 37th Edition
ERIC Educational Resources Information Center
Armour, Mim, Ed.
2015-01-01
The South Carolina Higher Education Statistical Abstract is a comprehensive, single-source compilation of tables and graphs which report data frequently requested by the Governor, Legislators, college and university staff, other state government officials, and the general public. The 2015 edition of the Statistical Abstract marks the 37th year of…
Statistics Report on TEQSA Registered Higher Education Providers, 2016
ERIC Educational Resources Information Center
Australian Government Tertiary Education Quality and Standards Agency, 2016
2016-01-01
This Statistics Report is the third release of selected higher education sector data held by the Australian Government Tertiary Education Quality and Standards Agency (TEQSA) for its quality assurance activities. It provides a snapshot of national statistics on all parts of the sector by bringing together data collected directly by TEQSA with data…
Navarro, Manuel Carmen; Sosa, Manuel; Saavedra, Pedro; Gil-Antullano, Santiago Palacios; Castro, Rosa; Bonet, Mario; Travesí, Isabel; de Miguel, Emilio
2010-03-01
Less advantaged social classes usually have unhealthier lifestyles and more difficult access to health resources. In this work we study the possible association between poverty and the prevalence of obesity and oophorectomy in a population of postmenopausal women. Cross-sectional observational study. The aim was to study, in a population of postmenopausal women in poverty, possible differences in the prevalence of obesity and oophorectomy, and to compare other gynaecological data: age at menarche, age at menopause, fertile years, number of pregnancies, breastfeeding and the use of hormone replacement therapy (HRT). All patients were interviewed personally. A questionnaire was used to find out about their lifestyles and the medication they were taking. Their medical records were reviewed to confirm the existence of certain diseases. A complete physical examination was performed on every patient. Weight and height were measured with the patient dressed in light clothes. Fasting blood samples were obtained for laboratory analyses. Poverty was defined according to the Spanish National Institute of Statistics criteria. We enrolled 1225 postmenopausal women; 449 (36.6%) were under the poverty threshold. Postmenopausal women in poverty had a higher body mass index (29.2 +/- 4.8 versus 27.0 +/- 4.7 kg/m(2), P < 0.001) and a higher prevalence of obesity than postmenopausal women not in poverty (44.2% versus 24.3%, P = 0.001). The prevalence of oophorectomy was also higher in women in poverty (32.7% versus 27.2%, P < 0.04). Women in poverty had had a greater number of pregnancies (3 versus 2, P = 0.001). They also showed a higher rate of breastfeeding than women in medium and high social classes (65% versus 59%, P = 0.037). There were no statistically significant differences between the groups in age at menopause, fertile years, or the use of HRT.
Postmenopausal women in poverty have higher levels of obesity and a greater prevalence of oophorectomy than women of medium and high social classes. They also had a higher rate of breastfeeding and a greater number of pregnancies than women not in poverty.
Fetit, Ahmed E; Novak, Jan; Peet, Andrew C; Arvanitits, Theodoros N
2015-09-01
The aim of this study was to assess the efficacy of three-dimensional texture analysis (3D TA) of conventional MR images for the classification of childhood brain tumours in a quantitative manner. The dataset comprised pre-contrast T1- and T2-weighted MRI series obtained from 48 children diagnosed with brain tumours (medulloblastoma, pilocytic astrocytoma and ependymoma). 3D and 2D TA were carried out on the images using first-, second- and higher-order statistical methods. Six supervised classification algorithms were trained with the most influential 3D and 2D textural features, and their performances in the classification of tumour types, using the two feature sets, were compared. Model validation was carried out using the leave-one-out cross-validation (LOOCV) approach, as well as stratified 10-fold cross-validation, in order to provide additional reassurance. McNemar's test was used to test the statistical significance of any improvements demonstrated by 3D-trained classifiers. Supervised learning models trained with 3D textural features showed improved classification performance compared with those trained with conventional 2D features. For instance, a neural network classifier showed a 12% improvement in area under the receiver operating characteristic curve (AUC) and 19% in overall classification accuracy. These improvements were statistically significant for four of the tested classifiers, according to McNemar's test. This study shows that 3D textural features extracted from conventional T1- and T2-weighted images can improve the diagnostic classification of childhood brain tumours. Long-term benefits of accurate, yet non-invasive, diagnostic aids include a reduction in surgical procedures, improvement in surgical and therapy planning, and support of discussions with patients' families. It remains necessary, however, to extend the analysis to a multicentre cohort in order to assess the scalability of the techniques used. Copyright © 2015 John Wiley & Sons, Ltd.
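The LOOCV-plus-McNemar validation scheme described above can be sketched in a few lines; this is a hedged illustration on synthetic features, not the study's pipeline or data, and it compares two arbitrary classifiers rather than the six actually tested.

```python
# Hedged sketch: leave-one-out cross-validation of two classifiers, then a
# continuity-corrected McNemar test on their paired disagreements.
# Features/labels are synthetic stand-ins (48 cases, as in the abstract).
import numpy as np
from scipy.stats import chi2
from sklearn.datasets import make_classification
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.linear_model import LogisticRegression
from sklearn.dummy import DummyClassifier

X, y = make_classification(n_samples=48, n_features=10, random_state=0)

loo = LeaveOneOut()
pred_a = cross_val_predict(LogisticRegression(max_iter=1000), X, y, cv=loo)
pred_b = cross_val_predict(DummyClassifier(strategy="most_frequent"), X, y, cv=loo)

# McNemar test uses only the discordant pairs (one classifier right, one wrong)
b = int(np.sum((pred_a == y) & (pred_b != y)))   # A right, B wrong
c = int(np.sum((pred_a != y) & (pred_b == y)))   # A wrong, B right
stat = (abs(b - c) - 1) ** 2 / (b + c)           # continuity-corrected chi-square
p_value = chi2.sf(stat, df=1)
print(f"acc A = {np.mean(pred_a == y):.2f}, acc B = {np.mean(pred_b == y):.2f}, "
      f"McNemar p = {p_value:.4f}")
```

McNemar's test is appropriate here because both classifiers are evaluated on the same held-out cases, so their errors are paired rather than independent.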
Ding, Jiao; Jiang, Yuan; Liu, Qi; Hou, Zhaojiang; Liao, Jianyu; Fu, Lan; Peng, Qiuzhi
2016-05-01
Understanding the relationships between land use patterns and water quality in low-order streams is useful for effective landscape planning to protect downstream water quality. A clear understanding of these relationships remains elusive due to the heterogeneity of land use patterns and scale effects. To better assess land use influences, we developed empirical models relating land use patterns to the water quality of low-order streams at different geomorphic regions across multi-scales in the Dongjiang River basin using multivariate statistical analyses. The land use pattern was quantified in terms of the composition, configuration and hydrological distance of land use types at the reach buffer, riparian corridor and catchment scales. Water was sampled under summer base flow at 56 low-order catchments, which were classified into two homogenous geomorphic groups. The results indicated that the water quality of low-order streams was most strongly affected by the configuration metrics of land use. Poorer water quality was associated with higher patch densities of cropland, orchards and grassland in the mountain catchments, whereas it was associated with a higher value for the largest patch index of urban land use in the plain catchments. The overall water quality variation was explained better by catchment scale than by riparian- or reach-scale land use, whereas the spatial scale over which land use influenced water quality also varied across specific water parameters and the geomorphic basis. Our study suggests that watershed management should adopt better landscape planning and multi-scale measures to improve water quality. Copyright © 2016 Elsevier B.V. All rights reserved.
Development of high performance particle-in-cell code for the exascale age
NASA Astrophysics Data System (ADS)
Lapenta, Giovanni; Amaya, Jorge; Gonzalez, Diego; Deep-Est H2020 Consortium Collaboration
2017-10-01
Magnetized plasmas are most effectively described by magneto-hydrodynamics, MHD, a fluid theory based on describing some fields defined in space: electromagnetic fields, density, velocity and temperature of the plasma. However, microphysics processes need kinetic theory, where statistical distributions of particles are governed by the Boltzmann equation. While fluid models are based on the ordinary space and time, kinetic models require a six dimensional space, called phase space, besides time. The two methods are not separated but rather interact to determine the system evolution. Arriving at a single self-consistent model is the goal of our research. We present a new approach developed with the goal of extending the reach of kinetic models to the fluid scales. Kinetic models are a higher order description and all fluid effects are included in them. However, the cost in terms of computing power is much higher and it has been so far prohibitively expensive to treat space weather events fully kinetically. We have now designed a new method capable of reducing that cost by several orders of magnitude making it possible for kinetic models to study macroscopic systems. H2020 Deep-EST consortium (European Commission).
NASA Astrophysics Data System (ADS)
Gruneisen, Mark T.; Sickmiller, Brett A.; Flanagan, Michael B.; Black, James P.; Stoltenberg, Kurt E.; Duchane, Alexander W.
2016-02-01
Spatial filtering is an important technique for reducing sky background noise in a satellite quantum key distribution downlink receiver. Atmospheric turbulence limits the extent to which spatial filtering can reduce sky noise without introducing signal losses. Using atmospheric propagation and compensation simulations, the potential benefit of adaptive optics (AO) to secure key generation (SKG) is quantified. Simulations are performed assuming optical propagation from a low-Earth-orbit satellite to a terrestrial receiver that includes AO. Higher-order AO correction is modeled assuming a Shack-Hartmann wavefront sensor and a continuous-face-sheet deformable mirror. The effects of atmospheric turbulence, tracking, and higher-order AO on the photon capture efficiency are simulated using statistical representations of turbulence and a time-domain wave-optics hardware emulator. SKG rates are calculated for a decoy-state protocol as a function of the receiver field of view for various strengths of turbulence, sky radiances, and pointing angles. The results show that at fields of view smaller than those discussed by others, AO technologies can enhance SKG rates in daylight and enable SKG where it would otherwise be prohibited as a consequence of background optical noise and signal loss due to propagation and turbulence effects.
Convergence properties of η → 3π decays in chiral perturbation theory
NASA Astrophysics Data System (ADS)
Kolesár, Marián; Novotný, Jiří
2017-01-01
The convergence of the decay widths and some of the Dalitz plot parameters of the decay η → 3π seems problematic in low energy QCD. In the framework of resummed chiral perturbation theory, we explore the question of compatibility of experimental data with a reasonable convergence of a carefully defined chiral series. By treating the uncertainties in the higher orders statistically, we numerically generate a large set of theoretical predictions, which are then confronted with experimental information. In the case of the decay widths, the experimental values can be reconstructed for a reasonable range of the free parameters and thus no tension is observed, in spite of what some of the traditional calculations suggest. The Dalitz plot parameters a and d can be described very well too. When the parameters b and α are concerned, we find a mild tension for the whole range of the free parameters, at less than 2σ C.L. This can be interpreted in two ways - either some of the higher order corrections are indeed unexpectedly large or there is a specific configuration of the remainders, which is, however, not completely improbable.
TADs are 3D structural units of higher-order chromosome organization in Drosophila
Szabo, Quentin; Jost, Daniel; Chang, Jia-Ming; Cattoni, Diego I.; Papadopoulos, Giorgio L.; Bonev, Boyan; Sexton, Tom; Gurgo, Julian; Jacquier, Caroline; Nollmann, Marcelo; Bantignies, Frédéric; Cavalli, Giacomo
2018-01-01
Deciphering the rules of genome folding in the cell nucleus is essential to understand its functions. Recent chromosome conformation capture (Hi-C) studies have revealed that the genome is partitioned into topologically associating domains (TADs), which demarcate functional epigenetic domains defined by combinations of specific chromatin marks. However, whether TADs are true physical units in each cell nucleus or whether they reflect statistical frequencies of measured interactions within cell populations is unclear. Using a combination of Hi-C, three-dimensional (3D) fluorescent in situ hybridization, super-resolution microscopy, and polymer modeling, we provide an integrative view of chromatin folding in Drosophila. We observed that repressed TADs form a succession of discrete nanocompartments, interspersed by less condensed active regions. Single-cell analysis revealed a consistent TAD-based physical compartmentalization of the chromatin fiber, with some degree of heterogeneity in intra-TAD conformations and in cis and trans inter-TAD contact events. These results indicate that TADs are fundamental 3D genome units that engage in dynamic higher-order inter-TAD connections. This domain-based architecture is likely to play a major role in regulatory transactions during DNA-dependent processes. PMID:29503869
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krelowski, J.; Galazutdinov, G.; Kolos, R., E-mail: jacek@astri.uni.torun.pl, E-mail: runizag@gmail.com, E-mail: robert.kolos@ichf.edu.pl
The recent assignment of two broad diffuse interstellar bands (DIBs) near 4882 and 5450 Å to the propadienylidene (l-C3H2) molecule is examined using a statistically meaningful sample of targets. Our spectra clearly show that the strength ratio of the two broad DIBs is strongly variable, contrary to what should be observed if both features are due to l-C3H2, since the proposed transitions are lifetime broadened and start from the same level. Moreover, even in directions where the 4882 DIB and 5450 DIB are strong, the third expected l-C3H2 band, in the 5165-5185 Å region, is absent. Another puzzling characteristic of l-C3H2 as the proposed carrier of both broad diffuse bands is its column density of several 10^14 cm^-2, inferred from the equivalent width of the 5450 DIB. This value is one order of magnitude higher than N(CH) toward the same objects and two to three orders of magnitude higher than N(H2CCC), measured at radio frequencies in absorption, for comparable samples of the diffuse medium. We conclude that the proposed identification of the broad DIBs is unjustified.
[The role of acoustic impedance test in the diagnosis for occupational noise induced deafness].
Chen, H; Xue, L J; Yang, A C; Liang, X Y; Chen, Z Q; Zheng, Q L
2018-01-20
Objective: To investigate the characteristics of the acoustic impedance test and its role in the diagnosis of occupational noise-induced deafness, in order to provide an objective basis for its differential diagnosis. Methods: A retrospective study was conducted on cases assessed for occupational noise-induced deafness at the Guangdong province hospital for occupational disease prevention and treatment from January 2016 to January 2017. A total of 198 cases (396 ears) were divided into an occupational disease group and a non-occupational disease group based on the 2014 edition of the diagnostic criteria for occupational noise-induced deafness. The acoustic impedance test results of the two groups were compared, including tympanogram type, external auditory canal volume, tympanic pressure, static compliance and slope. Results: In the occupational disease group, 187 of 204 ears (91.67%) showed a type A tympanogram, significantly more than in the non-occupational disease group (143/192, 74.48%); the difference was statistically significant (χ(2)=21.038, P<0.01). The proportions of type Ad or As tympanograms (16/204, 7.84%) and of other types (3/204, 1.47%) in the occupational disease group were lower than the corresponding proportions in the non-occupational disease group (15.63% and 9.38%, respectively), and both differences were statistically significant (χ(2)=5.834, P<0.05; χ(2)=12.306, P<0.01). The mean external auditory canal volume in the occupational disease group, (1.68±0.39) ml, was higher than in the non-occupational disease group, (1.57±0.47) ml (t=2.756, P<0.01), as was the mean static compliance, (1.06±0.82) ml versus (0.89±0.64) ml (t=2.59, P<0.01).
Conclusion: The acoustic impedance test has a clear auxiliary role in the differential diagnosis of occupational noise-induced deafness. More than 90% of the confirmed cases showed a type A tympanogram, and the test is one of the objective examination methods that can be used in the differential diagnosis of pseudo-deafness.
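The group comparison of tympanogram proportions quoted above is a standard chi-square test on a 2x2 contingency table; using the counts reported in the abstract reproduces the quoted statistic.

```python
# Sketch: chi-square test of the type A tympanogram proportions between the
# occupational (187/204) and non-occupational (143/192) groups, using the
# counts quoted in the abstract.
from scipy.stats import chi2_contingency

# rows: occupational vs non-occupational group; cols: type A vs other types
table = [[187, 204 - 187],
         [143, 192 - 143]]

# correction=False gives the uncorrected Pearson chi-square, which matches
# the value reported in the abstract (chi2 ≈ 21.04)
chi2_stat, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2_stat:.3f}, dof = {dof}, p = {p:.3g}")
```

Note that SciPy applies Yates' continuity correction to 2x2 tables by default; it is disabled here because the paper evidently reports the uncorrected statistic.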
Feitosa, Fernanda A; de Araújo, Rodrigo M; Tay, Franklin R; Niu, Lina; Pucci, César R
2017-12-12
The present study evaluated the effect of different high-power laser surface treatments on the bond strength between resin cement and lithium disilicate ceramic. Truncated-cone-shaped lithium disilicate ceramic specimens were prepared and divided into 5 groups: HF (hydrofluoric acid-etching), Er:YAG laser + HF, Graphite + Er:YAG laser + HF, Nd:YAG laser + HF, and Graphite + Nd:YAG laser + HF. The treated ceramic surfaces were characterized with scanning electron microscopy and surface roughness measurement. Hourglass-shaped ceramic-resin bond specimens were prepared, thermomechanically cycled and stressed to failure under tension. The results showed statistically significant differences for both the "laser" and "graphite" factors (p < 0.05). Multiple-comparison tests on the "laser" factor gave the order Er:YAG > Nd:YAG (p < 0.05), and on the "graphite" factor the order graphite coating < without coating (p < 0.05). The Dunnett test showed that Er:YAG + HF had significantly higher tensile strength (p = 0.00). Higher surface roughness was achieved after Er:YAG laser treatment. Thus Er:YAG laser treatment produces higher bond strength to resin cement than the other surface treatment protocols. Surface coating with graphite does not improve bonding of the laser-treated lithium disilicate ceramic to resin cement.
Wong, Andrew C; Kowalenko, Terry; Roahen-Harrison, Stephanie; Smith, Barbara; Maio, Ronald F; Stanley, Rachel M
2011-03-01
The objective of the study was to determine whether fear of malpractice is associated with emergency physicians' decision to order head computed tomography (CT) in 3 age-specific scenarios of pediatric minor head trauma. We hypothesized that physicians with higher fear of malpractice scores will be more likely to order head CT scans. Board-eligible/board-certified members of the Michigan College of Emergency Physicians were sent a 2-part survey consisting of case scenarios and demographic questions. Effect of fear of malpractice on the decision to order a CT scan was evaluated using a cumulative logit model. Two hundred forty-six members (36.5%) completed the surveys. In scenario 1 (infant), being a male and working in a university setting were associated with reduced odds of ordering a CT scan (odds ratio [OR], 0.40; 95% confidence interval [CI], 0.18-0.88; and OR, 0.35; 95% CI, 0.13-0.96, respectively). In scenario 2 (toddler), working for 15 years or more, at multiple hospitals, and for a private group were associated with reduced odds of ordering a CT scan (OR, 0.46; 95% CI, 0.26-0.79; OR, 0.36; 95% CI, 0.16-0.80; and OR, 0.51; 95% CI, 0.27-0.94, respectively). No demographic variables were significantly associated with ordering a CT scan in scenario 3 (teen). Overall, the fear of malpractice was not significantly associated with ordering a CT scan (OR, 1.28; 95% CI, 0.73-2.26; and OR, 1.70; 95% CI, 0.97-3.0). Only in scenario 2 was high fear significantly associated with increased odds of ordering a CT scan (OR, 2.09; 95% CI, 1.08-4.05). Members of Michigan College of Emergency Physicians with a higher fear of malpractice score tended to order more head CT scans in pediatric minor head trauma. However, this trend was shown to be statistically significant only in 1 case and not overall.
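The odds ratios with 95% confidence intervals reported throughout the abstract above come from a logistic model, but the basic Wald construction from a 2x2 table can be sketched directly; the counts below are illustrative, not the survey's data.

```python
# Hedged sketch: odds ratio and 95% Wald confidence interval from a 2x2
# table (exposure = high fear of malpractice, outcome = CT ordered).
# Counts are made up for illustration.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a,b = exposed who did / did not order CT; c,d = unexposed counterparts."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR), Wald
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(40, 60, 25, 75)
print(f"OR = {or_:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

As in the abstract, an association is read as significant at the 5% level when the confidence interval excludes 1; the study's cumulative logit model additionally adjusts for physician demographics, which this bare 2x2 sketch does not.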
Ramasamy, V; Paramasivam, K; Suresh, G; Jose, M T
2014-01-03
Using gamma-ray and Fourier transform infrared (FTIR) spectroscopic techniques, the level of natural radioactivity ((238)U, (232)Th and (40)K) and the mineralogical characterization of Vaigai River sediments were analyzed with a view to evaluating the radiation risk and its relation to the available minerals. Different radiological parameters were calculated to give a complete radiological characterization. The average activity concentrations and all radiological parameters are below the recommended safety limits; however, some sites have radioactivity values above the safety limit. From the FTIR spectroscopic technique, minerals such as quartz, microcline feldspar, orthoclase feldspar, kaolinite, gibbsite, calcite, montmorillonite and organic carbon are identified and characterized. Extinction coefficient values are calculated to determine the relative distribution of the major minerals quartz, microcline feldspar, orthoclase feldspar and kaolinite. The calculated values indicate that the amount of quartz is higher than that of orthoclase feldspar and microcline feldspar, and much higher than that of kaolinite. The crystallinity index is calculated to assess the crystalline nature of quartz, and the results indicate the presence of ordered crystalline quartz in these sediments. The role of minerals in the level of radioactivity is assessed by multivariate statistical analysis (Pearson's correlation and cluster analysis). The statistical analysis confirms that the clay mineral kaolinite, more than the other major minerals, drives the important radioactivity variables such as the absorbed dose rate and the concentrations of (232)Th and (238)U. Copyright © 2013 Elsevier B.V. All rights reserved.
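The Pearson-correlation step of the multivariate analysis above can be sketched as follows; the mineral abundances and activity concentrations are synthetic stand-ins chosen only to show the mechanics.

```python
# Hedged sketch: Pearson correlation between a mineral-content proxy and a
# radioactivity variable, as in the study's multivariate step.
# All data are synthetic; the injected correlation is illustrative.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_sites = 30
kaolinite = rng.uniform(0.0, 1.0, n_sites)                  # abundance proxy
th232 = 40.0 * kaolinite + rng.normal(0.0, 3.0, n_sites)    # Bq/kg, correlated
quartz = rng.uniform(0.0, 1.0, n_sites)                     # uncorrelated control

r_clay, p_clay = pearsonr(kaolinite, th232)
r_qtz, p_qtz = pearsonr(quartz, th232)
print(f"kaolinite vs 232Th: r = {r_clay:.2f} (p = {p_clay:.2g}); "
      f"quartz vs 232Th: r = {r_qtz:.2f}")
```

A strong, significant r for the clay proxy and a weak r for quartz would mirror the paper's conclusion that kaolinite, not the framework silicates, tracks the radioactivity variables.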
Oravec, Daniel; Quazi, Abrar; Xiao, Angela; Yang, Ellen; Zauel, Roger; Flynn, Michael J; Yeni, Yener N
2015-12-01
Endplate morphology is understood to play an important role in the mechanical behavior of vertebral bone as well as in degenerative processes in spinal tissues; however, the utility of clinical imaging modalities in assessment of the vertebral endplate has been limited. The objective of this study was to evaluate the ability of two clinical imaging modalities (digital tomosynthesis, DTS; high-resolution computed tomography, HRCT) to assess endplate topography by correlating the measurements to a microcomputed tomography (μCT) standard. DTS, HRCT, and μCT images of 117 cadaveric thoracolumbar vertebrae (T10-L1; 23 male, 19 female; ages 36-100 years) were segmented, and inferior and superior endplate surface topographical distribution parameters were calculated. Both DTS and HRCT showed statistically significant correlations with μCT, approaching a moderate level of correlation at the superior endplate for all measured parameters (R(2)Adj=0.19-0.57), including averages, variability, and higher-order statistical moments. Correlation of average depths at the inferior endplate was comparable to the superior case for both DTS and HRCT (R(2)Adj=0.14-0.51), while correlations became weak or nonsignificant for higher moments of the topography distribution. DTS was able to capture variations in the endplate topography to a slightly better extent than HRCT; taken together with the higher speed and lower radiation cost of DTS compared with HRCT, DTS appears preferable for endplate measurements. Copyright © 2015 Elsevier Inc. All rights reserved.
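The topographical distribution parameters compared above (averages, variability, and higher-order moments) can be computed from a depth map in a few lines; the surface below is a synthetic, skewed stand-in rather than a segmented endplate.

```python
# Hedged sketch: summarizing an endplate depth map by the moments of its
# depth distribution: mean, variability, skewness, and kurtosis, i.e. the
# kinds of parameters correlated across imaging modalities above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Synthetic depth map (mm): gamma-distributed, so skewed like a concave surface
depth = rng.gamma(shape=2.0, scale=0.5, size=(64, 64)).ravel()

summary = {
    "mean": float(np.mean(depth)),             # average depth
    "sd": float(np.std(depth, ddof=1)),        # variability
    "skewness": float(stats.skew(depth)),      # 3rd standardized moment
    "kurtosis": float(stats.kurtosis(depth)),  # 4th moment, excess (normal -> 0)
}
print(summary)
```

The higher-order moments (skewness, kurtosis) are the quantities whose inter-modality correlations degraded at the inferior endplate, which is plausible since they are the most sensitive to segmentation noise.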
A new statistical methodology predicting chip failure probability considering electromigration
NASA Astrophysics Data System (ADS)
Sun, Ted
In this research thesis, we present a new approach to analyzing chip reliability subject to electromigration (EM); the fundamental causes of EM and its manifestation in different materials are presented in this thesis. The new approach utilizes the statistical nature of EM failure in order to assess overall EM risk. It includes within-die temperature variations from the chip's temperature map, extracted by an Electronic Design Automation (EDA) tool, to estimate the failure probability of a design. Both the power estimation and the thermal analysis are performed in the EDA flow. We first used the traditional EM approach to analyze the design, which involves 6 metal and 5 via layers, with a single temperature across the entire chip. Next, we used the same traditional approach but with a realistic temperature map. The traditional EM analysis approach, the same approach coupled with a temperature map, and the comparison between the results with and without the temperature map are presented in this research. The comparison confirms that using a temperature map yields a less pessimistic estimate of the chip's EM risk. Finally, we employed the statistical methodology we developed, considering a temperature map and different use-condition voltages and frequencies, to estimate the overall failure probability of the chip. The statistical model accounts for scaling through the traditional Black equation and four major conditions. The results of this statistical analysis confirm that the chip-level failure probability is higher i) at higher use-condition frequencies for all use-condition voltages, and ii) when a single temperature instead of a temperature map across the chip is considered. In this thesis, I start with an overall review of current design types, common flows, and the necessary verification and reliability-checking steps used in the IC design industry.
Furthermore, the important concepts of "scripting automation", used to integrate the diverse EDA tools in this research, are described in detail with several examples, and my complete code is included in the appendix for reference. This structure should give readers a thorough understanding of the research work, from the automation of EDA tools to the statistical data generation, and from the nature of EM to the statistical model construction and the comparisons between the traditional and statistical EM analysis approaches.
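The core of the methodology above, Black's equation evaluated per segment against a temperature map, then combined into a chip-level failure probability, can be sketched as follows. All constants, segment data, and the lognormal shape parameter are illustrative assumptions, not the thesis's calibrated values.

```python
# Hedged sketch: chip-level EM failure probability from per-segment Black's-
# equation lifetimes and a lognormal failure-time model, in the spirit of the
# statistical methodology described above. Constants are illustrative.
import math

K_B = 8.617e-5              # Boltzmann constant, eV/K
A, N, EA = 1e-8, 2.0, 0.9   # assumed prefactor, current exponent, activation energy (eV)
SIGMA = 0.5                 # assumed lognormal shape parameter

def mttf_hours(j_ma_um2, temp_k):
    """Black's equation: MTTF = A * J^-n * exp(Ea / kT)."""
    return A * j_ma_um2 ** (-N) * math.exp(EA / (K_B * temp_k))

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def segment_fail_prob(t, mttf):
    """Lognormal failure CDF with median MTTF and shape SIGMA."""
    return normal_cdf(math.log(t / mttf) / SIGMA)

# Per-segment (current density mA/um^2, local temperature K), as would be
# read from the EDA tool's temperature map; values are made up.
segments = [(1.0, 358.0), (1.2, 370.0), (0.8, 350.0)]
t = 10.0 * 365 * 24         # 10-year mission time in hours

# Chip fails if any segment fails (weakest-link / series-system assumption)
p_chip = 1.0 - math.prod(1.0 - segment_fail_prob(t, mttf_hours(j, T))
                         for j, T in segments)
print(f"chip failure probability over 10 years: {p_chip:.3e}")
```

The sketch shows why a temperature map matters: the exponential Ea/kT term makes each segment's lifetime very sensitive to its local temperature, so a single chip-wide worst-case temperature inflates every segment's failure probability at once.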
NASA Astrophysics Data System (ADS)
Roy, Abhishek; Chen, Xiao; Teo, Jeffrey
2013-03-01
We investigate homological orders in two, three and four dimensions by studying Zk toric code models on simplicial, cellular or, in general, differential complexes. The ground state degeneracy is obtained from Wilson loop and surface operators, and the homological intersection form. We compute these for a series of closed three- and four-dimensional manifolds and study the projective representations of mapping class groups (modular transformations). Braiding statistics between point and string excitations in (3+1) dimensions, or between dual string excitations in (4+1) dimensions, are topologically determined by the higher-dimensional linking number, and can be understood via an effective topological field theory. An algorithm for calculating the entanglement entropy of any bipartition of closed manifolds is presented, and its topological signature is completely characterized homologically. Extrinsic twist defects (or disclinations) are studied in two, three and four dimensions and are shown to carry exotic fusion and braiding properties. Simons Fellowship
Hidden Markov model tracking of continuous gravitational waves from young supernova remnants
NASA Astrophysics Data System (ADS)
Sun, L.; Melatos, A.; Suvorova, S.; Moran, W.; Evans, R. J.
2018-02-01
Searches for persistent gravitational radiation from nonpulsating neutron stars in young supernova remnants are computationally challenging because of rapid stellar braking. We describe a practical, efficient, semicoherent search based on a hidden Markov model tracking scheme, solved by the Viterbi algorithm, combined with a maximum likelihood matched filter, the F statistic. The scheme is well suited to analyzing data from advanced detectors like the Advanced Laser Interferometer Gravitational Wave Observatory (Advanced LIGO). It can track rapid phase evolution from secular stellar braking and stochastic timing noise torques simultaneously without searching second- and higher-order derivatives of the signal frequency, providing an economical alternative to stack-slide-based semicoherent algorithms. One implementation tracks the signal frequency alone. A second implementation tracks the signal frequency and its first time derivative. It improves the sensitivity by a factor of a few over the first implementation, but the computational cost increases by two to three orders of magnitude.
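The Viterbi tracking idea above can be sketched with a toy version: a hidden frequency path wanders by at most one bin per time step, and the algorithm recovers the most probable path through a time-frequency map of detection statistics. The injected "F statistic" values and transition constraint are illustrative, not the search's actual configuration.

```python
# Hedged sketch: Viterbi tracking of a slowly wandering frequency through a
# time-frequency grid of detection statistics, analogous to the hidden
# Markov scheme described above. Data are synthetic.
import numpy as np

def viterbi_track(log_like, max_jump=1):
    """Most probable frequency-bin path; the hidden frequency may move by
    at most `max_jump` bins per step (the HMM transition constraint)."""
    n_t, n_f = log_like.shape
    score = log_like[0].copy()
    back = np.zeros((n_t, n_f), dtype=int)
    for t in range(1, n_t):
        new_score = np.empty(n_f)
        for f in range(n_f):
            lo, hi = max(0, f - max_jump), min(n_f, f + max_jump + 1)
            best = lo + int(np.argmax(score[lo:hi]))  # best reachable predecessor
            back[t, f] = best
            new_score[f] = score[best] + log_like[t, f]
        score = new_score
    path = [int(np.argmax(score))]        # backtrack from the best final bin
    for t in range(n_t - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Synthetic detection statistic: Gaussian noise plus a weak drifting line
rng = np.random.default_rng(3)
n_t, n_f = 50, 40
stat = rng.normal(0.0, 1.0, (n_t, n_f))
true_path = (10 + 0.2 * np.arange(n_t)).astype(int)  # slow secular drift
stat[np.arange(n_t), true_path] += 4.0               # inject the signal

recovered = viterbi_track(stat)
hits = float(np.mean(np.array(recovered) == true_path))
print(f"fraction of time steps on the injected track: {hits:.2f}")
```

The appeal of the scheme is visible even in this toy: the path is found in a single dynamic-programming pass, with no explicit search over frequency-derivative templates.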
Towards Petascale DNS of High Reynolds-Number Turbulent Boundary Layer
NASA Astrophysics Data System (ADS)
Webster, Keegan R.
In flight vehicles, a large portion of fuel consumption is due to skin-friction drag. Reducing this drag will significantly reduce the fuel consumption of flight vehicles and help our nation reduce CO2 emissions. In order to reduce skin-friction drag, an increased understanding of wall turbulence is needed. Direct numerical simulation (DNS) of spatially developing turbulent boundary layers (SDTBL) can provide the fundamental understanding of wall turbulence needed to produce models for Reynolds-averaged Navier-Stokes (RANS) and large-eddy simulations (LES). DNS of SDTBL over a flat plate at Re_theta = 1430-2900 were performed. Improvements were made to the DNS code, allowing higher-Reynolds-number simulations on the path towards petascale DNS of turbulent boundary layers. Mesh refinement and improvements to the inflow and outflow boundary conditions have resulted in turbulence statistics that match experimental results more closely. The Reynolds stresses and the terms of their evolution equations are reported.
Bs and Ds decay constants in three-flavor lattice QCD.
Wingate, Matthew; Davies, Christine T H; Gray, Alan; Lepage, G Peter; Shigemitsu, Junko
2004-04-23
Capitalizing on recent advances in lattice QCD, we present a calculation of the leptonic decay constants f(B(s)) and f(D(s)) that includes effects of one strange sea quark and two light sea quarks via an improved staggered action. By shedding the quenched approximation and the associated lattice scale uncertainty, lattice QCD greatly increases its predictive power. Nonrelativistic QCD is used to simulate heavy quarks with masses between 1.5m(c) and m(b). We arrive at the following results: f(B(s))=260+/-7+/-26+/-8+/-5 MeV and f(D(s))=290+/-20+/-29+/-29+/-6 MeV. The first quoted error is the statistical uncertainty, and the rest estimate the sizes of higher-order terms neglected in this calculation. All of these uncertainties are systematically improvable by including another order in the weak coupling expansion, the nonrelativistic expansion, or the Symanzik improvement program.
Prompt gravity signal induced by the 2011 Tohoku-Oki earthquake
Montagner, Jean-Paul; Juhel, Kévin; Barsuglia, Matteo; Ampuero, Jean Paul; Chassande-Mottin, Eric; Harms, Jan; Whiting, Bernard; Bernard, Pascal; Clévédé, Eric; Lognonné, Philippe
2016-01-01
Transient gravity changes are expected to occur at all distances during an earthquake rupture, even before the arrival of seismic waves. Here we report on the search of such a prompt gravity signal in data recorded by a superconducting gravimeter and broadband seismometers during the 2011 Mw 9.0 Tohoku-Oki earthquake. During the earthquake rupture, a signal exceeding the background noise is observed with a statistical significance higher than 99% and an amplitude of a fraction of μGal, consistent in sign and order of magnitude with theoretical predictions from a first-order model. While prompt gravity signal detection with state-of-the-art gravimeters and seismometers is challenged by background seismic noise, its robust detection with gravity gradiometers under development could open new directions in earthquake seismology, and overcome fundamental limitations of current earthquake early-warning systems imposed by the propagation speed of seismic waves. PMID:27874858
Long-Range Correlation in alpha-Wave Predominant EEG in Human
NASA Astrophysics Data System (ADS)
Sharif, Asif; Chyan Lin, Der; Kwan, Hon; Borette, D. S.
2004-03-01
The background noise in the alpha-predominant EEG taken from eyes-open and eyes-closed neurophysiological states is studied. A scale-free characteristic is found in both cases using the wavelet approach developed by Simonsen and Nes [1]. The numerical results further show that the scaling exponent during the eyes-closed state is consistently lower than during eyes-open. We conjecture that the origin of this difference is related to the temporal reconfiguration of the neural network in the brain. To further investigate the scaling structure of the EEG background noise, we extended the second-order statistics to higher-order moments using the EEG increment process. We found that the background fluctuation in the alpha-predominant EEG is predominantly monofractal. Preliminary results are given to support this finding, and its implication for brain functioning is discussed. [1] A.H. Simonsen and O.M. Nes, Physical Review E, 58, 2779–2748 (1998).
Prompt gravity signal induced by the 2011 Tohoku-Oki earthquake.
Montagner, Jean-Paul; Juhel, Kévin; Barsuglia, Matteo; Ampuero, Jean Paul; Chassande-Mottin, Eric; Harms, Jan; Whiting, Bernard; Bernard, Pascal; Clévédé, Eric; Lognonné, Philippe
2016-11-22
Transient gravity changes are expected to occur at all distances during an earthquake rupture, even before the arrival of seismic waves. Here we report on the search of such a prompt gravity signal in data recorded by a superconducting gravimeter and broadband seismometers during the 2011 Mw 9.0 Tohoku-Oki earthquake. During the earthquake rupture, a signal exceeding the background noise is observed with a statistical significance higher than 99% and an amplitude of a fraction of μGal, consistent in sign and order of magnitude with theoretical predictions from a first-order model. While prompt gravity signal detection with state-of-the-art gravimeters and seismometers is challenged by background seismic noise, its robust detection with gravity gradiometers under development could open new directions in earthquake seismology, and overcome fundamental limitations of current earthquake early-warning systems imposed by the propagation speed of seismic waves.
Prompt gravity anomaly induced by the 2011 Tohoku-Oki earthquake
NASA Astrophysics Data System (ADS)
Montagner, Jean-Paul; Juhel, Kevin; Barsuglia, Matteo; Ampuero, Jean-Paul; Harms, Jan; Chassande-Mottin, Eric; Whiting, Bernard; Bernard, Pascal; Clévédé, Eric; Lognonné, Philippe
2017-04-01
Transient gravity changes are expected to occur at all distances during an earthquake rupture, even before the arrival of seismic waves. Here we report on the search of such a prompt gravity signal in data recorded by a superconducting gravimeter and broadband seismometers during the 2011 Mw 9.0 Tohoku-Oki earthquake. During the earthquake rupture, a signal exceeding the background noise is observed with a statistical significance higher than 99% and an amplitude of a fraction of μGal, consistent in sign and order-of-magnitude with theoretical predictions from a first-order model. While prompt gravity signal detection with state-of-the-art gravimeters and seismometers is challenged by background seismic noise, its robust detection with gravity gradiometers under development could open new directions in earthquake seismology, and overcome fundamental limitations of current earthquake early-warning systems (EEWS) imposed by the propagation speed of seismic waves.
Higher Education in the U.S.S.R.: Curriculums, Schools, and Statistics.
ERIC Educational Resources Information Center
Rosen, Seymour M.
This study is designed to provide more comprehensive information on Soviet higher learning emphasizing its increasingly close alignment with Soviet national planning and economy. Following introductory material, Soviet curriculums in higher education and schools and statistics are reviewed. Highlights include: (1) A major development in Soviet…
NASA Astrophysics Data System (ADS)
Goodman, J. W.
This book is based on the thesis that some training in the area of statistical optics should be included as a standard part of any advanced optics curriculum. Random variables are discussed, taking into account definitions of probability and random variables, distribution functions and density functions, an extension to two or more random variables, statistical averages, transformations of random variables, sums of real random variables, Gaussian random variables, complex-valued random variables, and random phasor sums. Other subjects examined are related to random processes, some first-order properties of light waves, the coherence of optical waves, some problems involving high-order coherence, effects of partial coherence on imaging systems, imaging in the presence of randomly inhomogeneous media, and fundamental limits in photoelectric detection of light. Attention is given to deterministic versus statistical phenomena and models, the Fourier transform, and the fourth-order moment of the spectrum of a detected speckle image.
Ten Years of Cloud Properties from MODIS: Global Statistics and Use in Climate Model Evaluation
NASA Technical Reports Server (NTRS)
Platnick, Steven E.
2011-01-01
The NASA Moderate Resolution Imaging Spectroradiometer (MODIS), launched onboard the Terra and Aqua spacecraft, began Earth observations on February 24, 2000 and June 24, 2002, respectively. Among the algorithms developed and applied to this sensor, a suite of cloud products includes cloud masking/detection, cloud-top properties (temperature, pressure), and optical properties (optical thickness, effective particle radius, water path, and thermodynamic phase). All cloud algorithms underwent numerous changes and enhancements for the latest Collection 5 production version; this process continues with the current Collection 6 development. We will show example MODIS Collection 5 cloud climatologies derived from global spatial and temporal aggregations provided in the archived gridded Level-3 MODIS atmosphere team product (product names MOD08 and MYD08 for MODIS Terra and Aqua, respectively). Data sets in this Level-3 product include scalar statistics as well as 1- and 2-D histograms of many cloud properties, allowing for higher-order information and correlation studies. In addition to these statistics, we will show trends and statistical significance in annual and seasonal means for a variety of the MODIS cloud properties, as well as the time required for detection given assumed trends. To assist in climate model evaluation, we have developed a MODIS cloud simulator with an accompanying netCDF file containing subsetted monthly Level-3 statistical data sets that correspond to the simulator output. Correlations of cloud properties with ENSO offer the potential to evaluate model cloud sensitivity; initial results will be discussed.
NASA Astrophysics Data System (ADS)
Varner, Gary Sim
1999-11-01
Utilizing the world's largest sample of resonant ψ′ decays, as measured by the Beijing Experimental Spectrometer (BES) during 1993-1995, a comprehensive study of the hadronic decay modes of the χ_c (³P_J charmonium) states has been undertaken. Compared with the data set of the Mark I detector, whose published measurements of many of these hadronic decays have been definitive for almost 20 years, a sample with roughly an order of magnitude larger statistics has been obtained. Taking advantage of these larger statistics, many new hadronic decay modes have been discovered, while measurements of others have been refined. An array of first observations, improvements, confirmations, and limits is reported with respect to current world values. These higher-precision and newly discovered decay modes are an excellent testing ground for recent theoretical interest in the contribution of higher Fock states and the color-octet mechanism in heavy-quarkonium annihilation and subsequent light hadronization. Because these calculations are largely tractable only for two-body decays, these are the focus of this dissertation. A comparison of current theoretical calculations and experimental results is presented, indicating the success of these phenomenological advances. Measurements for which there is as yet no suitable theoretical prediction are indicated.
Chen, Ming-Jen; Hsu, Hui-Tsung; Lin, Cheng-Li; Ju, Wei-Yuan
2012-10-01
Human exposure to acrylamide (AA) through consumption of French fries and other foods has been recognized as a potential health concern. Here, we used a statistical non-linear regression model, based on the two most influential factors, cooking temperature and time, to estimate AA concentrations in French fries. The R² of the predictive model is 0.83, suggesting the developed model was significant and valid. Based on French fry intake survey data collected in this study and eight frying temperature-time schemes that can produce tasty and visually appealing French fries, the Monte Carlo simulation results showed that if the AA concentration is higher than 168 ppb, the estimated cancer risk for adolescents aged 13-18 years in Taichung City would already be higher than the target excess lifetime cancer risk (ELCR), even when only this limited life span is taken into account. In order to reduce the cancer risk associated with AA intake, the AA levels in French fries might have to be reduced even further if the epidemiological observations are valid. Our mathematical model can serve as a basis for further investigations of the ELCR covering different life stages, behaviors, and population groups. Copyright © 2012 Elsevier Ltd. All rights reserved.
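The risk calculation described above can be sketched as a small Monte Carlo propagation. This is a minimal illustration of the approach, not the study's fitted model: the slope factor, body weight, exposure fraction, and the concentration and intake distributions below are all assumed values for demonstration.

```python
import random

# Hypothetical sketch of the paper's Monte Carlo step: propagate uncertainty
# in acrylamide (AA) concentration and French fry intake through a standard
# excess lifetime cancer risk (ELCR) calculation. All parameters below are
# assumptions for illustration, not the study's fitted values.
SLOPE_FACTOR = 0.5               # (mg/kg/day)^-1, assumed oral slope factor
BODY_WEIGHT = 55.0               # kg, assumed adolescent body weight
EXPOSURE_FRACTION = 12.0 / 70.0  # assumed exposure years / lifetime years

def elcr(conc_ppb, intake_g_per_day):
    """ELCR for a given AA concentration (ppb = ng/g) and intake (g/day)."""
    dose = conc_ppb * 1e-6 * intake_g_per_day / BODY_WEIGHT  # mg/kg/day
    return dose * SLOPE_FACTOR * EXPOSURE_FRACTION

def monte_carlo(n=10000, seed=1):
    """Mean ELCR over n random draws of concentration and intake."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        conc = rng.lognormvariate(5.1, 0.4)      # ppb, assumed distribution
        intake = max(rng.gauss(20.0, 8.0), 0.0)  # g/day, assumed distribution
        total += elcr(conc, intake)
    return total / n

mean_risk = monte_carlo()
```

The mean risk can then be compared against a target ELCR (e.g. 10⁻⁶) to decide whether AA levels need to be reduced.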
Nonparametric Bayesian predictive distributions for future order statistics
Richard A. Johnson; James W. Evans; David W. Green
1999-01-01
We derive the predictive distribution for a specified order statistic, determined from a future random sample, under a Dirichlet process prior. Two variants of the approach are treated and some limiting cases studied. A practical application to monitoring the strength of lumber is discussed including choices of prior expectation and comparisons made to a Bayesian...
Statistics of contractive cracking patterns. [frozen soil-water rheology
NASA Technical Reports Server (NTRS)
Noever, David A.
1991-01-01
The statistics of contractive soil crack patterns are analyzed using statistical crystallography. An underlying hierarchy of order is found to span four orders of magnitude in characteristic pattern length. Strict mathematical requirements determine the two-dimensional (2D) topology, such that random partitioning of space yields a predictable statistical geometry for polygons. For all lengths, Aboav's and Lewis's laws are verified; this result is consistent both with the need to fill 2D space and, most significantly, with energy carried not by the patterns' interiors but by their boundaries. Together, this suggests a common mechanism of formation for both micro- and macro-freezing patterns.
Self-similarity in high Atwood number Rayleigh-Taylor experiments
NASA Astrophysics Data System (ADS)
Mikhaeil, Mark; Suchandra, Prasoon; Pathikonda, Gokul; Ranjan, Devesh
2017-11-01
Self-similarity is a critical concept in turbulent and mixing flows. In the Rayleigh-Taylor instability, theory and simulations have shown that the flow exhibits properties of self-similarity as the mixing Reynolds number exceeds 20000 and the flow enters the turbulent regime. Here, we present results from the first large Atwood number (0.7) Rayleigh-Taylor experimental campaign for mixing Reynolds number beyond 20000 in an effort to characterize the self-similar nature of the instability. Experiments are performed in a statistically steady gas tunnel facility, allowing for the evaluation of turbulence statistics. A visualization diagnostic is used to study the evolution of the mixing width as the instability grows. This allows for computation of the instability growth rate. For the first time in such a facility, stereoscopic particle image velocimetry is used to resolve three-component velocity information in a plane. Velocity means, fluctuations, and correlations are considered as well as their appropriate scaling. Probability density functions of velocity fields, energy spectra, and higher-order statistics are also presented. The energy budget of the flow is described, including the ratio of the kinetic energy to the released potential energy. This work was supported by the DOE-NNSA SSAA Grant DE-NA0002922.
Extraction of phase information in daily stock prices
NASA Astrophysics Data System (ADS)
Fujiwara, Yoshi; Maekawa, Satoshi
2000-06-01
It is known that, on an intermediate time scale such as days, stock market fluctuations possess several statistical properties that are common to different markets. Namely, logarithmic returns of an asset price have (i) a truncated Pareto-Lévy distribution, (ii) vanishing linear correlation, and (iii) volatility clustering with power-law autocorrelation. Fact (ii) is a consequence of the nonexistence of arbitragers with simple strategies, but this does not mean statistical independence of market fluctuations. Little attention has been paid to the temporal structure of higher-order statistics, although it contains important information on market dynamics. We applied a signal-separation technique, called Independent Component Analysis (ICA), to actual data of daily stock prices on the Tokyo and New York Stock Exchanges (TSE/NYSE). ICA performs a linear transformation of lag vectors from the time series to find independent components by a nonlinear algorithm. We obtained a similar impulse response for these datasets. If the process were a martingale, it can be shown that the impulse response should be a delta function under a few conditions that can be numerically checked, as was verified by surrogate data. This result provides information on market dynamics, including speculative bubbles and arbitrage processes.
Liu, Huiling; Xia, Bingbing; Yi, Dehui
2016-01-01
We propose a new feature extraction method for liver pathological images based on multispatial mapping and statistical properties. For liver pathological images with hematoxylin-eosin staining, the images of the R and B channels reflect the sensitivity of liver pathological images better, while the entropy space and Local Binary Pattern (LBP) space reflect the texture features of the image better. To obtain more comprehensive information, we map liver pathological images to the entropy space, LBP space, R space, and B space. The traditional Higher Order Local Autocorrelation Coefficients (HLAC) cannot reflect the overall information of the image, so we propose an average-correction HLAC feature. We calculate the statistical properties and the average gray value of pathological images and then update the current pixel value as the absolute value of the difference between the current pixel gray value and the average gray value, which is more sensitive to gray-value changes in pathological images. Lastly, the HLAC template is used to calculate the features of the updated image. The experimental results show that the improved features of the multispatial mapping have better classification performance for liver cancer. PMID:27022407
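The average-correction step described in the abstract (replace each pixel by the absolute difference from the image's mean gray value, then accumulate local autocorrelation features) can be sketched as follows. The 3×3 mask subset is a tiny illustration, not the full HLAC template family used in the paper.

```python
# Minimal sketch of the average-corrected HLAC idea, assuming a grayscale
# image as a list of lists. Only three illustrative masks are shown
# (order-0 center, and two order-1 neighbour products).

def average_correct(img):
    """Replace each pixel with |pixel - mean gray value| of the image."""
    flat = [p for row in img for p in row]
    mean = sum(flat) / len(flat)
    return [[abs(p - mean) for p in row] for row in img]

def hlac_features(img):
    """Accumulate three autocorrelation features over interior pixels."""
    h, w = len(img), len(img[0])
    f0 = f_right = f_down = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y][x]
            f0 += c                       # mask: center only
            f_right += c * img[y][x + 1]  # mask: center x right neighbour
            f_down += c * img[y + 1][x]   # mask: center x lower neighbour
    return [f0, f_right, f_down]

img = [[10, 10, 10, 10],
       [10, 200, 200, 10],
       [10, 200, 200, 10],
       [10, 10, 10, 10]]
feats = hlac_features(average_correct(img))
```

Because the correction subtracts the image mean first, the features respond to deviations from the average gray level rather than to absolute intensity, which is the sensitivity property the method relies on.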
Hosseini Koupaie, E; Barrantes Leiva, M; Eskicioglu, C; Dutil, C
2014-01-01
The feasibility of anaerobic co-digestion of two juice-based beverage industrial wastes, screen cake (SC) and thickened waste activated sludge (TWAS), along with municipal sludge cake (MC) was investigated. Experiments were conducted in twenty mesophilic batch 160 mL serum bottles, with no inhibition observed. The statistical analysis proved that the substrate type had a statistically significant effect on both ultimate biogas and methane yields (P=0.0003<0.05). The maximum and minimum ultimate cumulative methane yields were 890.90 and 308.34 mL/g-VSremoved, from the digesters containing only TWAS and only SC as substrate, respectively. A first-order reaction model described VS utilization well in all digesters. The first 2-day and 10-day specific biodegradation rate constants were statistically higher in the digesters containing SC (P=0.004<0.05) and MC (P=0.0005<0.05), respectively. The cost-benefit analysis showed that the capital, operating, and total costs can be decreased by 21.5%, 29.8%, and 27.6%, respectively, by using a co-digester rather than two separate digesters. Copyright © 2013 Elsevier Ltd. All rights reserved.
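The first-order kinetic description used above can be sketched as B(t) = B0·(1 − exp(−k·t)), where B0 is the ultimate methane yield and k the first-order rate constant. The parameter values below are assumptions for illustration, not the paper's fitted values.

```python
import math

# Minimal sketch of a first-order biogas kinetic model, with assumed
# parameters (b0 loosely inspired by the maximum yield quoted in the
# abstract; k is purely illustrative).

def methane_yield(t_days, b0, k):
    """Cumulative methane yield (mL/g-VS) after t_days of digestion."""
    return b0 * (1.0 - math.exp(-k * t_days))

def specific_rate(t_days, b0, k):
    """Average specific production rate over the first t_days."""
    return methane_yield(t_days, b0, k) / t_days

b0, k = 890.9, 0.15   # assumed ultimate yield (mL/g-VS) and rate (1/day)
y10 = methane_yield(10, b0, k)
```

Comparing `specific_rate(2, ...)` and `specific_rate(10, ...)` across substrates mirrors the 2-day and 10-day rate-constant comparisons reported in the study.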
Pasta Nucleosynthesis: Molecular dynamics simulations of nuclear statistical equilibrium
NASA Astrophysics Data System (ADS)
Caplan, Matthew; Horowitz, Charles; da Silva Schneider, Andre; Berry, Donald
2014-09-01
We simulate the decompression of cold dense nuclear matter, near the nuclear saturation density, in order to study the role of nuclear pasta in r-process nucleosynthesis in neutron star mergers. Our simulations are performed using a classical molecular dynamics model with 51 200 and 409 600 nucleons, and are run on GPUs. We expand our simulation region to decompress systems from initial densities of 0.080 fm-3 down to 0.00125 fm-3. We study proton fractions of YP = 0.05, 0.10, 0.20, 0.30, and 0.40 at T = 0.5, 0.75, and 1 MeV. We calculate the composition of the resulting systems using a cluster algorithm. This composition is in good agreement with nuclear statistical equilibrium models for temperatures of 0.75 and 1 MeV. However, for proton fractions greater than YP = 0.2 at a temperature of T = 0.5 MeV, the MD simulations produce non-equilibrium results with large rod-like nuclei. Our MD model is valid at higher densities than simple nuclear statistical equilibrium models and may help determine the initial temperatures and proton fractions of matter ejected in mergers.
Indoor Location Sensing with Invariant Wi-Fi Received Signal Strength Fingerprinting
Husen, Mohd Nizam; Lee, Sukhan
2016-01-01
A method of location fingerprinting based on the Wi-Fi received signal strength (RSS) in an indoor environment is presented. The method aims to overcome the RSS instability due to varying channel disturbances in time by introducing the concept of invariant RSS statistics. The invariant RSS statistics represent here the RSS distributions collected at individual calibration locations under minimal random spatiotemporal disturbances in time. The invariant RSS statistics thus collected serve as the reference pattern classes for fingerprinting. Fingerprinting is carried out at an unknown location by identifying the reference pattern class that maximally supports the spontaneous RSS sensed from individual Wi-Fi sources. A design guideline is also presented as a rule of thumb for estimating the number of Wi-Fi signal sources required to be available for any given number of calibration locations under a certain level of random spatiotemporal disturbances. Experimental results show that the proposed method not only provides a 17% higher success rate than conventional ones but also removes the need for recalibration. Furthermore, the resolution is shown to be finer by 40%, with an execution time more than an order of magnitude faster than the conventional methods. These results are also backed up by theoretical analysis. PMID:27845711
Indoor Location Sensing with Invariant Wi-Fi Received Signal Strength Fingerprinting.
Husen, Mohd Nizam; Lee, Sukhan
2016-11-11
A method of location fingerprinting based on the Wi-Fi received signal strength (RSS) in an indoor environment is presented. The method aims to overcome the RSS instability due to varying channel disturbances in time by introducing the concept of invariant RSS statistics. The invariant RSS statistics represent here the RSS distributions collected at individual calibration locations under minimal random spatiotemporal disturbances in time. The invariant RSS statistics thus collected serve as the reference pattern classes for fingerprinting. Fingerprinting is carried out at an unknown location by identifying the reference pattern class that maximally supports the spontaneous RSS sensed from individual Wi-Fi sources. A design guideline is also presented as a rule of thumb for estimating the number of Wi-Fi signal sources required to be available for any given number of calibration locations under a certain level of random spatiotemporal disturbances. Experimental results show that the proposed method not only provides a 17% higher success rate than conventional ones but also removes the need for recalibration. Furthermore, the resolution is shown to be finer by 40%, with an execution time more than an order of magnitude faster than the conventional methods. These results are also backed up by theoretical analysis.
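The core fingerprinting step, picking the reference pattern class that "maximally supports" an observed RSS vector, can be sketched as a maximum-likelihood match against per-location RSS statistics. The locations, RSS means, and standard deviations below are made-up illustrative values; the paper's invariant-statistics collection procedure is not reproduced here.

```python
import math

# Minimal sketch: each calibration location is summarized by per-AP RSS
# statistics (mean, std in dBm), and an unknown reading is assigned to the
# class with maximum Gaussian log-likelihood. All numbers are hypothetical.
REFERENCE = {
    "room_a": [(-40.0, 3.0), (-70.0, 4.0), (-60.0, 3.5)],
    "room_b": [(-65.0, 3.0), (-45.0, 4.0), (-72.0, 3.5)],
}

def log_likelihood(reading, stats):
    """Gaussian log-likelihood of an RSS reading under one reference class."""
    ll = 0.0
    for r, (mu, sigma) in zip(reading, stats):
        ll += -0.5 * ((r - mu) / sigma) ** 2 - math.log(sigma)
    return ll

def locate(reading):
    """Pick the location whose reference statistics best support the reading."""
    return max(REFERENCE, key=lambda loc: log_likelihood(reading, REFERENCE[loc]))

print(locate([-42.0, -68.0, -61.0]))  # reading close to room_a's means
```

With more calibration locations, the same rule generalizes unchanged; the design guideline in the paper concerns how many Wi-Fi sources are needed for the classes to stay separable.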
No-Impact Threshold Values for NRAP's Reduced Order Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Last, George V.; Murray, Christopher J.; Brown, Christopher F.
2013-02-01
The purpose of this study was to develop methodologies for establishing baseline datasets and statistical protocols for determining statistically significant changes between background concentrations and predicted concentrations that would be used to represent a contamination plume in the Gen II models being developed by NRAP's Groundwater Protection team. The initial effort examined selected portions of two aquifer systems: the urban shallow unconfined aquifer system of the Edwards-Trinity Aquifer System (being used to develop the ROM for carbonate-rock aquifers), and a portion of the High Plains Aquifer (an unconsolidated and semi-consolidated sand and gravel aquifer, being used to develop the ROM for sandstone aquifers). Threshold values were determined for Cd, Pb, As, pH, and TDS that could be used to identify contamination due to predicted impacts from carbon sequestration storage reservoirs, based on recommendations found in the EPA's "Unified Guidance for Statistical Analysis of Groundwater Monitoring Data at RCRA Facilities" (US Environmental Protection Agency 2009). Results from this effort can be used to inform a "no change" scenario with respect to groundwater impacts, rather than the use of an MCL that could be significantly higher than existing concentrations in the aquifer.
Laser Vision Correction with Q Factor Modification for Keratoconus Management.
Pahuja, Natasha Kishore; Shetty, Rohit; Sinha Roy, Abhijit; Thakkar, Maithil Mukesh; Jayadev, Chaitra; Nuijts, Rudy Mma; Nagaraja, Harsha
2017-04-01
To evaluate the outcomes of corneal laser ablation with Q factor modification for vision correction in patients with progressive keratoconus. In this prospective study, 50 eyes of 50 patients were divided into two groups based on the Q factor (>-1 in Group I and ≤-1 in Group II). All patients underwent a detailed ophthalmic examination including uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), subjective acceptance, and corneal topography using the Pentacam. The Topolyzer was used to measure the corneal asphericity (Q). Ablation was performed based on the preoperative Q values and thinnest pachymetry to obtain a target of near-normal Q. This was followed by corneal collagen crosslinking to stabilize the progression. Statistically significant improvement (p ≤ 0.05) was observed in refractive, topographic, and Q values post-treatment in both groups. The improvements in higher-order aberrations and total aberrations were statistically significant in both groups; however, the spherical aberration showed statistically significant improvement only in Group II. Ablation based on the preoperative Q and pachymetry toward a near-normal postoperative Q value appears to be an effective method to improve visual acuity and quality in patients with keratoconus.
Correlations between corneal and total wavefront aberrations
NASA Astrophysics Data System (ADS)
Mrochen, Michael; Jankov, Mirko; Bueeler, Michael; Seiler, Theo
2002-06-01
Purpose: Corneal topography data expressed as corneal aberrations are frequently used to report corneal laser surgery results. However, the optical image quality at the retina depends on all optical elements of the eye, such as the human lens. Thus, the aim of this study was to investigate the correlations between corneal and total wavefront aberrations and to discuss the importance of corneal aberrations for representing corneal laser surgery results. Methods: Thirty-three eyes of 22 myopic subjects were measured with a corneal topography system and a Tscherning-type wavefront analyzer after the pupils were dilated to at least 6 mm in diameter. All measurements were centered with respect to the line of sight. Corneal and total wavefront aberrations were calculated up to the 6th Zernike order in the same reference plane. Results: Statistically significant correlations (p < 0.05) between corneal and total wavefront aberrations were found for astigmatism (C3, C5) and all 3rd-order Zernike coefficients such as coma (C7, C8). No statistically significant correlations were found for the 4th- to 6th-order Zernike coefficients, except for the 5th-order horizontal coma C18 (p = 0.003). On average, all Zernike coefficients for the corneal aberrations were found to be larger than the corresponding coefficients for the total wavefront aberrations. Conclusions: Corneal aberrations are of only limited use for representing the optical quality of the human eye after corneal laser surgery, owing to the lack of correlation between corneal and total wavefront aberrations in most of the higher-order aberrations. Moreover, the data presented in this study point toward an aberration balancing between the corneal aberrations and the other optical elements within the eye that reduces the aberration contribution of the cornea by a certain degree. Consequently, ideal customized ablations have to take both corneal and total wavefront aberrations into consideration.
Statistical Analysis of Large Scale Structure by the Discrete Wavelet Transform
NASA Astrophysics Data System (ADS)
Pando, Jesus
1997-10-01
The discrete wavelet transform (DWT) is developed as a general statistical tool for the study of large-scale structure (LSS) in astrophysics. The DWT is used in all aspects of structure identification, including cluster analysis, spectrum and two-point correlation studies, scale-scale correlation analysis, and measuring deviations from Gaussian behavior. The techniques developed are demonstrated on 'academic' signals, on simulated models of the Lyman-α (Lyα) forests, and on observational data of the Lyα forests. This technique can detect clustering in the Lyα clouds where traditional techniques such as the two-point correlation function have failed. The position and strength of these clusters in both real and simulated data are determined, and it is shown that clusters exist on scales as large as at least 20 h⁻¹ Mpc at significance levels of 2-4 σ. Furthermore, it is found that the strength distribution of the clusters can be used to distinguish between real data and simulated samples even where other traditional methods have failed to detect differences. Second, a method for measuring the power spectrum of a density field using the DWT is developed. All common features determined by the usual Fourier power spectrum can be calculated by the DWT. These features, such as the index of a power law or typical scales, can be detected even when the samples are geometrically complex, the samples are incomplete, or the mean density on larger scales is not known (the infrared uncertainty). Using this method the spectra of Lyα forests in both simulated and real samples are calculated. Third, a method for measuring hierarchical clustering is introduced. Because hierarchical evolution is characterized by a set of rules for how larger dark matter halos are formed by the merging of smaller halos, scale-scale correlations of the density field should be one of the most sensitive quantities for determining the merging history.
We show that these correlations can be completely determined by the correlations between discrete wavelet coefficients on adjacent scales and at nearly the same spatial position, C_{j,j+1}^{2·2}. Scale-scale correlations for two samples of QSO Lyα forest absorption spectra are computed. Lastly, higher-order statistics are developed to detect deviations from Gaussian behavior. These higher-order statistics are necessary to fully characterize the Lyα forests because the usual 2nd-order statistics, such as the two-point correlation function or power spectrum, give inconclusive results. It is shown how this technique takes advantage of the locality of the DWT to circumvent the central limit theorem. A non-Gaussian spectrum is defined; this spectrum reveals not only the magnitude, but also the scales, of non-Gaussianity. When applied to simulated and observational samples of the Lyα clouds, it is found that different popular models of structure formation have different spectra, while two independent observational data sets have the same spectrum. Moreover, the non-Gaussian spectra of real data sets are significantly different from the spectra of various possible random samples. (Abstract shortened by UMI.)
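A scale-scale correlation of the kind referred to above can be sketched with a Haar DWT: compute detail coefficients on two adjacent scales and form the normalized correlation between squared parent and child coefficients. This is an illustrative pure-Python reconstruction in the spirit of the C_{j,j+1}^{2·2} statistic, not the dissertation's exact estimator.

```python
# Minimal sketch of a scale-scale correlation between squared Haar wavelet
# coefficients on adjacent scales. A value near 1 indicates no scale-scale
# correlation; values above 1 indicate correlated structure across scales.

def haar_details(signal):
    """One Haar DWT step: (smooth, detail) coefficient lists."""
    smooth = [(signal[2*i] + signal[2*i+1]) / 2 for i in range(len(signal)//2)]
    detail = [(signal[2*i] - signal[2*i+1]) / 2 for i in range(len(signal)//2)]
    return smooth, detail

def scale_scale_corr(signal):
    """Correlation of squared Haar details on two adjacent scales."""
    smooth, child = haar_details(signal)   # finer scale j+1
    _, parent = haar_details(smooth)       # coarser scale j
    # pair each parent coefficient with its two children at the same position
    num = sum(parent[l] ** 2 * child[2*l + m] ** 2
              for l in range(len(parent)) for m in (0, 1))
    den = (sum(p ** 2 for p in parent) / len(parent)) * \
          (sum(c ** 2 for c in child) / len(child))
    return (num / (2 * len(parent))) / den if den else 0.0

sig = [1.0, 3.0, 2.0, 6.0, 1.0, 5.0, 9.0, 4.0]  # toy density field
c22 = scale_scale_corr(sig)
```

The locality of the wavelet basis is what makes this pairing of parent and child coefficients at the same spatial position meaningful, which is exactly the property the hierarchical-clustering argument relies on.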
Graczyk, Michelle B; Duarte Queirós, Sílvio M
2016-01-01
We study the intraday behaviour of the statistical moments of the trading volume of the blue-chip equities that composed the Dow Jones Industrial Average index between 2003 and 2014. By splitting that time interval into semesters, we provide a quantitative account of the nonstationary nature of the intraday statistical properties as well. Explicitly, we show that the well-known ∪ shape exhibited by the average trading volume, as well as by the volatility of the price fluctuations, experienced a significant change from 2008 (the year of the "subprime" financial crisis) onwards. That has resulted in a faster relaxation after the market opening and relates to a consistent decrease in the convexity of the average trading volume intraday profile. Simultaneously, the last part of the session has become steeper as well, a modification that is likely to have been triggered by the new short-selling rules that were introduced in 2007 by the Securities and Exchange Commission. The combination of both results reveals that the ∪ has been turning into a ⊔. Additionally, the analysis of higher-order cumulants, namely the skewness and the kurtosis, shows that the morning and afternoon parts of the trading session are each clearly associated with different statistical features and hence dynamical rules. Concretely, we claim that the large initial trading volume is due to wayward stocks, whereas the large volume during the last part of the session hinges on a cohesive increase of the trading volume. That dissimilarity between the two parts of the trading session is stressed in periods of greater uproar in the market.
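The higher-order cumulants mentioned above, skewness and (excess) kurtosis, can be sketched directly from their moment definitions. The volume figures in the example are made up for illustration.

```python
import math

# Minimal sketch of sample skewness and excess kurtosis, the higher-order
# cumulant estimates used in intraday-moment studies. Population (1/n)
# normalization is used for simplicity.

def skewness(xs):
    """Third standardized moment of a sample."""
    n = len(xs)
    m = sum(xs) / n
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / n)
    return sum(((x - m) / s) ** 3 for x in xs) / n

def excess_kurtosis(xs):
    """Fourth standardized moment minus 3 (zero for a Gaussian)."""
    n = len(xs)
    m = sum(xs) / n
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / n)
    return sum(((x - m) / s) ** 4 for x in xs) / n - 3.0

# hypothetical intraday volumes showing the familiar U-shaped session
volumes = [120.0, 95.0, 80.0, 70.0, 65.0, 60.0, 58.0, 62.0, 75.0, 140.0]
```

Computing these per intraday bin and per semester, as the study does, exposes how the morning and afternoon parts of the session follow different statistics.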
Substance-dependence rehab treatment in Thailand: a meta-analysis.
Verachai, Viroj; Kittipichai, Wirin; Konghom, Suwapat; Lukanapichonchut, Lumsum; Sinlapasacran, Narong; Kimsongneun, Nipa; Rergarun, Prachern; Doungnimit, Amawasee
2009-12-01
To synthesize substance-dependence research focusing on the rehab treatment phase. Several criteria were used to select studies for meta-analysis. Firstly, the research must have focused on the rehab period of substance-dependence treatment; secondly, only quantitative studies that used statistics allowing effect sizes to be calculated were selected; and thirdly, all studies were from Thai libraries and were conducted during 1997-2006. The instrument used for data collection comprised two sets: the first was used to collect general information about the studies, including the crucial statistics and test statistics; the second was used to assess the quality of the studies. Synthesizing 32 separate studies yielded 323 effect sizes, computed in terms of the correlation coefficient r. The psychology-approach rehab program had a higher effect size than the network approach (p < 0.05). Additionally, quasi-experimental studies had higher effect sizes than correlation studies (p < 0.05). Among the quasi-experimental studies, TCs revealed the highest effect size (r = 0.76). Among the correlation studies, the motivation program revealed the highest effect size (r = 0.84). The substance-use rehab treatment programs in Thailand that revealed high effect sizes should be incorporated into the current program. However, narcotics studies that focus on the rehab phase should be synthesized every 5-10 years in order to integrate new concepts into the development of future substance-dependence rehab treatment programs, especially at the research units of the Drug Dependence Treatment Institute/Centers in Thailand.
Numerical comparison of Riemann solvers for astrophysical hydrodynamics
NASA Astrophysics Data System (ADS)
Klingenberg, Christian; Schmidt, Wolfram; Waagan, Knut
2007-11-01
The idea of this work is to compare a new positive and entropy-stable approximate Riemann solver by Francois Bouchut with a state-of-the-art algorithm for astrophysical fluid dynamics. We implemented the new Riemann solver in an astrophysical PPM code, the Prometheus code, and also made a version with a different, more theoretically grounded higher-order algorithm than PPM. We present shock tube tests, two-dimensional instability tests and forced turbulence simulations in three dimensions. We find subtle differences between the codes in the shock tube tests and in the statistics of the turbulence simulations. The new Riemann solver increases the computational speed without significant loss of accuracy.
Evidence of tampering in watermark identification
NASA Astrophysics Data System (ADS)
McLauchlan, Lifford; Mehrübeoglu, Mehrübe
2009-08-01
In this work, watermarks are embedded in digital images in the discrete wavelet transform (DWT) domain. Principal component analysis (PCA) is performed on the DWT coefficients. Next, higher-order statistics based on the principal components and the eigenvalues are determined for different sets of images. Feature sets are analyzed for different types of attacks in m-dimensional space. The results demonstrate the separability of the features for the tampered digital copies. Different feature sets are studied to determine more effective tamper-evident feature sets. In digital forensics, the probable manipulation(s) or modification(s) performed on the digital information can be identified using the described technique.
Slow kinetics of Brownian maxima.
Ben-Naim, E; Krapivsky, P L
2014-07-18
We study extreme-value statistics of Brownian trajectories in one dimension. We define the maximum as the largest position to date and compare maxima of two particles undergoing independent Brownian motion. We focus on the probability P(t) that the two maxima remain ordered up to time t and find the algebraic decay P ∼ t^(-β) with exponent β = 1/4. When the two particles have diffusion constants D_1 and D_2, the exponent depends on the mobilities, β = (1/π) arctan √(D_2/D_1). We also use numerical simulations to investigate maxima of multiple particles in one dimension and the largest extension of particles in higher dimensions.
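The t^(-1/4) decay can be checked with a quick Monte Carlo sketch, here a discrete-time random-walk approximation in which the leading maximum is given an arbitrary head start (all parameters are illustrative choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

def ordered_fraction(n_steps, n_pairs=4000, head_start=1.0):
    """Fraction of independent walker pairs whose running maxima stay
    ordered (m1 > m2) for all of n_steps unit-variance Gaussian steps."""
    x1 = np.zeros(n_pairs)
    x2 = np.zeros(n_pairs)
    m1 = np.full(n_pairs, head_start)  # walker 1's maximum starts ahead
    m2 = np.zeros(n_pairs)
    alive = np.ones(n_pairs, dtype=bool)
    for _ in range(n_steps):
        x1 += rng.normal(size=n_pairs)
        x2 += rng.normal(size=n_pairs)
        m1 = np.maximum(m1, x1)
        m2 = np.maximum(m2, x2)
        alive &= m1 > m2               # pair "dies" once maxima cross
    return alive.mean()
```

For equal diffusion constants the surviving fraction should shrink roughly like t^(-1/4) at times long compared with the head-start scale.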
Hu, Ting; Chen, Yuanzhu; Kiralis, Jeff W; Collins, Ryan L; Wejse, Christian; Sirugo, Giorgio; Williams, Scott M; Moore, Jason H
2013-01-01
Background: Epistasis has been historically used to describe the phenomenon that the effect of a given gene on a phenotype can be dependent on one or more other genes, and is an essential element for understanding the association between genetic and phenotypic variations. Quantifying epistasis of orders higher than two is very challenging due to both the computational complexity of enumerating all possible combinations in genome-wide data and the lack of efficient and effective methodologies. Objectives: In this study, we propose a fast, non-parametric, and model-free measure for three-way epistasis. Methods: Such a measure is based on information gain, and is able to separate all lower-order effects from pure three-way epistasis. Results: Our method was verified on synthetic data and applied to real data from a candidate-gene study of tuberculosis in a West African population. In the tuberculosis data, we found a statistically significant pure three-way epistatic interaction effect that was stronger than any lower-order associations. Conclusion: Our study provides a methodological basis for detecting and characterizing high-order gene-gene interactions in genetic association studies. PMID:23396514
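The inclusion-exclusion idea behind an information-gain measure of pure three-way epistasis can be sketched generically as follows (a minimal illustration of the technique, not the authors' actual implementation):

```python
from collections import Counter
from math import log2

def entropy(seq):
    """Shannon entropy (bits) of a sequence of hashable symbols."""
    n = len(seq)
    return -sum(c / n * log2(c / n) for c in Counter(seq).values())

def mi(columns, y):
    """Mutual information I(X1,...,Xk ; Y) between joint columns and Y."""
    x = list(zip(*columns))
    return entropy(x) + entropy(list(y)) - entropy(list(zip(x, y)))

def three_way_ig(a, b, c, y):
    """Pure three-way information gain: the joint information of (A,B,C)
    about Y with all main effects and pairwise interaction effects
    subtracted out by inclusion-exclusion."""
    return (mi([a, b, c], y)
            - mi([a, b], y) - mi([a, c], y) - mi([b, c], y)
            + mi([a], y) + mi([b], y) + mi([c], y))
```

A three-way XOR phenotype is the canonical check: every main effect and pairwise term carries zero information, yet the pure three-way term recovers one full bit.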
Exact extreme-value statistics at mixed-order transitions.
Bar, Amir; Majumdar, Satya N; Schehr, Grégory; Mukamel, David
2016-05-01
We study extreme-value statistics for spatially extended models exhibiting mixed-order phase transitions (MOT). These are phase transitions that exhibit features common to both first-order (discontinuity of the order parameter) and second-order (diverging correlation length) transitions. We consider here the truncated inverse-distance-squared Ising model, which is a prototypical model exhibiting MOT, and study analytically the extreme-value statistics of the domain lengths. The lengths of the domains are identically distributed random variables, except for the global constraint that their sum equals the total system size L. In addition, the number of such domains is also a fluctuating variable rather than fixed. In the paramagnetic phase, we show that the distribution of the largest domain length l_max converges, in the large-L limit, to a Gumbel distribution. However, at the critical point (for a certain range of parameters) and in the ferromagnetic phase, we show that the fluctuations of l_max are governed by novel distributions, which we compute exactly. Our main analytical results are verified by numerical simulations.
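The Gumbel convergence in the unconstrained case can be illustrated with a toy calculation: for n independent lengths with an exponential tail, the exact CDF of the maximum approaches the Gumbel form (this is the classical extreme-value result; the paper's point is that the global sum constraint modifies it at criticality and in the ferromagnetic phase):

```python
import math

def exact_max_cdf(x, n, rate=1.0):
    """Exact CDF of the maximum of n iid Exponential(rate) lengths."""
    return (1.0 - math.exp(-rate * x)) ** n

def gumbel_cdf(x, n, rate=1.0):
    """Gumbel limit: location log(n)/rate, scale 1/rate."""
    return math.exp(-math.exp(-(rate * x - math.log(n))))
```

For large n the two agree to O(1/n) near the mode, which is why the largest domain in the paramagnetic phase behaves classically.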
Schmidt, P J; Pintar, K D M; Fazil, A M; Flemming, C A; Lanthier, M; Laprade, N; Sunohara, M D; Simhon, A; Thomas, J L; Topp, E; Wilkes, G; Lapen, D R
2013-06-15
Human campylobacteriosis is the leading bacterial gastrointestinal illness in Canada; environmental transmission has been implicated in addition to transmission via consumption of contaminated food. Information about Campylobacter spp. occurrence at the watershed scale will enhance our understanding of the associated public health risks and the efficacy of source water protection strategies. The overriding purpose of this study is to provide a quantitative framework to assess and compare the relative public health significance of watershed microbial water quality associated with agricultural BMPs. A microbial monitoring program was expanded from fecal indicator analyses and Campylobacter spp. presence/absence tests to the development of a novel, 11-tube most probable number (MPN) method that targeted Campylobacter jejuni, Campylobacter coli, and Campylobacter lari. These three types of data were used to make inferences about theoretical risks in a watershed in which controlled tile drainage is widely practiced, an adjacent watershed with conventional (uncontrolled) tile drainage, and reference sites elsewhere in the same river basin. E. coli concentrations (MPN and plate count) in the controlled tile drainage watershed were statistically higher (2008-11), relative to the uncontrolled tile drainage watershed, but yearly variation was high as well. Escherichia coli loading for years 2008-11 combined were statistically higher in the controlled watershed, relative to the uncontrolled tile drainage watershed, but Campylobacter spp. loads for 2010-11 were generally higher for the uncontrolled tile drainage watershed (but not statistically significant). Using MPN data and a Bayesian modelling approach, higher mean Campylobacter spp. concentrations were found in the controlled tile drainage watershed relative to the uncontrolled tile drainage watershed (2010, 2011). 
A second-order quantitative microbial risk assessment (QMRA) was used, in a relative way, to identify differences in mean Campylobacter spp. infection risks among monitoring sites for a hypothetical exposure scenario. Greater relative mean risks were obtained for sites in the controlled tile drainage watershed than in the uncontrolled tile drainage watershed in each year of monitoring with pair-wise posterior probabilities exceeding 0.699, and the lowest relative mean risks were found at a downstream drinking water intake reference site. The second-order modelling approach was used to partition sources of uncertainty, which revealed that an adequate representation of the temporal variation in Campylobacter spp. concentrations for risk assessment was achieved with as few as 10 MPN data per site. This study demonstrates for the first time how QMRA can be implemented to evaluate, in a relative sense, the public health implications of controlled tile drainage on watershed-scale water quality. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
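The MPN method mentioned above is at heart a small maximum-likelihood problem: each tube is positive with probability 1 - exp(-λv) if organisms arrive as a Poisson process. A generic sketch (not the authors' 11-tube protocol; tube counts and volumes below are illustrative):

```python
import math

def mpn_loglik(lam, tubes, positives, volumes):
    """Log-likelihood of concentration lam (organisms per unit volume)
    for a multi-dilution tube assay with Poisson-distributed organisms.
    tubes/positives/volumes: per-dilution tube count, positive count,
    inoculum volume. Constant binomial coefficients are dropped."""
    ll = 0.0
    for n, p, v in zip(tubes, positives, volumes):
        theta = 1.0 - math.exp(-lam * v)   # P(tube positive)
        ll += p * math.log(theta) + (n - p) * (-lam * v)
    return ll

def mpn_estimate(tubes, positives, volumes, lo=1e-6, hi=100.0, iters=200):
    """Ternary search for the ML concentration; the log-likelihood is
    concave in lam, so the search converges to the global maximum.
    (Assumes at least one positive tube; all-negative data push the
    estimate to the lower bound.)"""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if mpn_loglik(m1, tubes, positives, volumes) < mpn_loglik(m2, tubes, positives, volumes):
            lo = m1
        else:
            hi = m2
    return 0.5 * (lo + hi)
```

For a single dilution the ML estimate has the closed form λ = -ln(1 - p/n)/v, which makes a convenient sanity check.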
Comparing international crash statistics
DOT National Transportation Integrated Search
1999-12-01
In order to examine national developments in traffic safety, crash statistics from several other countries were compared with those of the United States. Data obtained from the Fatality Analysis Reporting System (FARS) and the Internati...
Ma, Jing; Cao, Nan-Jue; Xia, Li-Kun
2016-01-01
To identify possible differences in efficacy, safety, predictability, higher-order aberrations and corneal biomechanical parameters after small-incision lenticule extraction (SMILE) and femtosecond lenticule extraction (FLEx). A systematic literature retrieval was conducted in Medline, Embase and the Cochrane Library, up to October 2015. The included studies were subjected to a Meta-analysis. Comparisons between SMILE and FLEx were measured as pooled odds ratios (OR) or weighted mean differences (WMD), with 95% confidence intervals (CI). A total of seven studies were included. Firstly, there were no differences in uncorrected distance visual acuity (UDVA) of 20/20 or better (OR, 1.37; 95% CI, 0.69 to 2.69; P=0.37) or logMAR UDVA (WMD, -0.02; 95% CI, -0.05 to 0.01; P=0.17) after SMILE versus FLEx. We found no differences in unchanged corrected distance visual acuity (CDVA) (OR, 0.98; 95% CI, 0.46 to 2.11; P=0.97) or logMAR CDVA (WMD, -0.00; 95% CI, -0.01 to 0.01; P=0.90) either. Secondly, we found no differences in refraction within ±1.00 D (OR, 0.98; 95% CI, 0.13 to 7.28; P=0.99) or ±0.50 D (OR, 1.62; 95% CI, 0.62 to 4.28; P=0.33) of target postoperatively. Thirdly, for higher-order aberrations, we found no differences in total higher-order aberrations (WMD, -0.04; 95% CI, -0.09 to 0.01; P=0.14), coma (WMD, -0.04; 95% CI, -0.09 to 0.01; P=0.11), spherical aberration (WMD, 0.01; 95% CI, -0.02 to 0.03; P=0.60) or trefoil (WMD, -0.00; 95% CI, -0.04 to 0.03; P=0.76). Furthermore, for corneal biomechanical parameters, we also found no differences (WMD, 0.08; 95% CI, -0.17 to 0.33; P=0.54) after SMILE versus FLEx. There are no statistically significant differences in efficacy, safety, predictability, higher-order aberrations or corneal biomechanical parameters postoperatively between SMILE and FLEx.
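The pooled WMD figures quoted above come from inverse-variance weighting; a generic fixed-effect sketch (illustrative numbers, not the review's data):

```python
import math

def fixed_effect_wmd(diffs, ses):
    """Fixed-effect (inverse-variance) pooling of per-study mean
    differences. diffs: mean differences; ses: their standard errors.
    Returns the pooled WMD and its 95% confidence interval."""
    weights = [1.0 / se ** 2 for se in ses]
    wmd = sum(w * d for w, d in zip(weights, diffs)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return wmd, (wmd - 1.96 * se, wmd + 1.96 * se)
```

A meta-analysis would normally also assess heterogeneity (e.g. I²) before trusting a fixed-effect pool; random-effects pooling differs only in the weights.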
Lee, L.; Helsel, D.
2005-01-01
Trace contaminants in water, including metals and organics, often are measured at sufficiently low concentrations to be reported only as values below the instrument detection limit. Interpretation of these "less thans" is complicated when multiple detection limits occur. Statistical methods for multiply censored, or multiple-detection limit, datasets have been developed for medical and industrial statistics, and can be employed to estimate summary statistics or model the distributions of trace-level environmental data. We describe S-language-based software tools that perform robust linear regression on order statistics (ROS). The ROS method has been evaluated as one of the most reliable procedures for developing summary statistics of multiply censored data. It is applicable to any dataset that has 0 to 80% of its values censored. These tools are a part of a software library, or add-on package, for the R environment for statistical computing. This library can be used to generate ROS models and associated summary statistics, plot modeled distributions, and predict exceedance probabilities of water-quality standards. © 2005 Elsevier Ltd. All rights reserved.
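A stripped-down version of the ROS idea, restricted to a single detection limit, might look like the following (a sketch of the general technique for illustration, not the R library's implementation, which also handles multiple detection limits):

```python
import math
from statistics import NormalDist

def ros_mean(values, censored):
    """Simplified regression-on-order-statistics for one detection limit.

    values: measurements; censored entries hold the detection limit.
    censored: parallel booleans, True = reported as "less than".
    Fits log(value) against normal quantiles of plotting positions on
    the detected data, imputes the censored values from the fit, and
    returns the mean of the combined dataset.
    """
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    # Blom plotting positions over the full ranking; with one detection
    # limit the censored values sort below the detected ones.
    z = [NormalDist().inv_cdf((r + 1 - 0.375) / (n + 0.25)) for r in range(n)]
    det = [(z[r], math.log(values[i]))
           for r, i in enumerate(order) if not censored[i]]
    mz = sum(p for p, _ in det) / len(det)
    my = sum(q for _, q in det) / len(det)
    slope = (sum((p - mz) * (q - my) for p, q in det)
             / sum((p - mz) ** 2 for p, _ in det))
    intercept = my - slope * mz
    total = sum(math.exp(intercept + slope * z[r]) if censored[i] else values[i]
                for r, i in enumerate(order))
    return total / n
```

Unlike substituting the detection limit (or half of it), the imputed values follow the fitted lognormal tail, which is what makes ROS robust for summary statistics.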
Non-local Second Order Closure Scheme for Boundary Layer Turbulence and Convection
NASA Astrophysics Data System (ADS)
Meyer, Bettina; Schneider, Tapio
2017-04-01
There is scientific consensus that the cloud feedback remains the largest source of uncertainty in the prediction of climate parameters like climate sensitivity. To narrow down this uncertainty, not only is a better physical understanding of cloud and boundary layer processes required, but specifically the representation of boundary layer processes in models has to be improved. General climate models use separate parameterisation schemes to model the different boundary layer processes like small-scale turbulence and shallow and deep convection. Small-scale turbulence is usually modelled by local diffusive parameterisation schemes, which truncate the hierarchy of moment equations at first order and use second-order equations only to estimate closure parameters. In contrast, the representation of convection requires higher-order statistical moments to capture its more complex structure, such as narrow updrafts in a quasi-steady environment. Truncations of moment equations at second order may lead to more accurate parameterizations. At the same time, they offer an opportunity to take spatially correlated structures (e.g., plumes) into account, which are known to be important for convective dynamics. In this project, we study the potential and limits of local and non-local second-order closure schemes. A truncation of the moment equations at second order represents the same dynamics as a quasi-linear version of the equations of motion. We study the three-dimensional quasi-linear dynamics in dry and moist convection by implementing it in an LES model (PyCLES) and comparing it to a fully non-linear LES. In the quasi-linear LES, interactions among turbulent eddies are suppressed but nonlinear eddy-mean flow interactions are retained, as they are in the second-order closure.
In physical terms, suppressing eddy-eddy interactions amounts to suppressing, e.g., interactions among convective plumes, while retaining interactions between plumes and the environment (e.g., entrainment and detrainment). In a second part, we exploit the possibility of including non-local statistical correlations in a second-order closure scheme. Such non-local correlations allow us to directly incorporate the spatially coherent structures that occur in the form of convective updrafts penetrating the boundary layer. This allows us to extend, in a non-local sense, the work that has been done using assumed-PDF schemes for parameterising boundary layer turbulence and shallow convection.
Financial Statistics of Institutions of Higher Education: Fiscal Year 1979. State Data.
ERIC Educational Resources Information Center
Brandt, Norman J.
Financial statistics of institutions of higher education were surveyed. The 14th annual Higher Education General Information Survey (HEGIS XIV) was mailed to all institutions listed in the Educational Directory, Colleges and Universities, 1978-79. Completed survey forms were received from 2,909 institutions (91.7 percent). Data were imputed for…
Perception of second- and third-order orientation signals and their interactions
Victor, Jonathan D.; Thengone, Daniel J.; Conte, Mary M.
2013-01-01
Orientation signals, which are crucial to many aspects of visual function, are more complex and varied in the natural world than in the stimuli typically used for laboratory investigation. Gratings and lines have a single orientation, but in natural stimuli, local features have multiple orientations, and multiple orientations can occur even at the same location. Moreover, orientation cues can arise not only from pairwise spatial correlations, but from higher-order ones as well. To investigate these orientation cues and how they interact, we examined segmentation performance for visual textures in which the strengths of different kinds of orientation cues were varied independently, while controlling potential confounds such as differences in luminance statistics. Second-order cues (the kind present in gratings) at different orientations are largely processed independently: There is no cancellation of positive and negative signals at orientations that differ by 45°. Third-order orientation cues are readily detected and interact only minimally with second-order cues. However, they combine across orientations in a different way: Positive and negative signals largely cancel if the orientations differ by 90°. Two additional elements are superimposed on this picture. First, corners play a special role. When second-order orientation cues combine to produce corners, they provide a stronger signal for texture segregation than can be accounted for by their individual effects. Second, while the object versus background distinction does not influence processing of second-order orientation cues, this distinction influences the processing of third-order orientation cues. PMID:23532909
Chan, Tommy C Y; Ng, Alex L K; Cheng, George P M; Wang, Zheng; Woo, Victor C P; Jhanji, Vishal
2016-10-01
To investigate the stability of corneal astigmatism and higher-order aberrations after combined femtosecond-assisted phacoemulsification and arcuate keratotomy. Retrospective, interventional case series. Surgery was performed using a VICTUS (Bausch & Lomb Inc, Dornach, Germany) platform. A single, 450-μm deep, arcuate keratotomy was paired at the 8-mm zone with the main phacoemulsification incision in the opposite meridian. The keratotomy incisions were not opened. Corneal astigmatism and higher-order aberration measurements obtained preoperatively and at 2 months and 2 years postoperatively were analyzed. Fifty eyes of 50 patients (mean age 66.2 ± 10.5 years) were included. The mean preoperative corneal astigmatism was 1.35 ± 0.48 diopters (D). This was reduced to 0.67 ± 0.54 D at 2 months and 0.74 ± 0.53 D at 2 years postoperatively (P < .001). There was no statistically significant difference between postoperative corneal astigmatism over 2 years (P = .392). Both magnitude of error and absolute angle of error were comparable between the 2 postoperative time points (P > .283). At postoperative 2 months and 2 years, 72% and 70% of eyes were within 15 degrees of preoperative meridian of astigmatism, respectively. All wavefront measurements increased significantly at 2 months and 2 years (P < .007), except spherical aberration (P > .150). There was no significant difference in higher-order aberrations between 2 months and 2 years postoperatively (P > .486). Our study showed the stability of femtosecond-assisted arcuate keratotomy. Further studies using other platforms and nomograms are needed to corroborate the findings of this study. Copyright © 2016 Elsevier Inc. All rights reserved.
Higher-order phase transitions on financial markets
NASA Astrophysics Data System (ADS)
Kasprzak, A.; Kutner, R.; Perelló, J.; Masoliver, J.
2010-08-01
Statistical and thermodynamic properties of the anomalous multifractal structure of random interevent (or intertransaction) times were thoroughly studied by using the extended continuous-time random walk (CTRW) formalism of Montroll, Weiss, Scher, and Lax. Although this formalism is quite general (and can be applied to any interhuman communication with nontrivial priority), we consider it in the context of a financial market, where heterogeneous agent activities can occur within a wide spectrum of time scales. As the main general consequence, we found (by additionally using the saddle-point approximation) the scaling or power-dependent form of the partition function, Z(q'). It diverges for any negative scaling power q' (which justifies the name anomalous), while for positive ones it shows scaling with the general exponent τ(q'). This exponent is a nonanalytic (singular) or noninteger power of q', which is one of the pillars of higher-order phase transitions. In the definition of the partition function we used the pausing-time distribution (PTD) as the central quantity, which takes the form of a convolution (or superstatistics, used e.g. for describing turbulence as well as the financial market). Its integral kernel is given by the stretched exponential distribution (often used in disordered systems). This kernel extends both the exponential distribution assumed in the original version of the CTRW formalism (for the description of the transient photocurrent measured in amorphous glassy material) and the Gaussian one sometimes used in this context (e.g. for diffusion of hydrogen in amorphous metals or for aging effects in glasses). Our most important finding is the third- and higher-order phase transitions, which can be roughly interpreted as transitions between the phase where high-frequency trading is most visible and the phase defined by low-frequency trading.
The specific order of the phase transition directly depends upon the shape exponent α defining the stretched exponential integral kernel. On this basis a simple practical hint for investors was formulated.
Health profiles in people with intellectual developmental disorders.
Folch-Mas, Anabel; Cortés-Ruiz, María José; Vicens Calderón, Paloma; Martínez-Leal, Rafael
2017-01-01
To better understand the health profiles of people with intellectual developmental disorders (IDD), focusing on the variables that are associated with a poorer health status. Data were collected from the Survey on Disability, Personal Autonomy and Dependency (EDAD 2008) of the Spanish National Statistics Institute (INE). The health data of 2840 subjects with IDD were analyzed in order to verify the impact of different variables on their health profiles. People with severe and profound levels of IDD presented a higher number of medical diagnoses. Residential centers housed a larger proportion of individuals with a higher prevalence of chronic diseases and more severe conditions; age was also an important factor. The health profiles of individuals with IDD differ depending on the severity level of their IDD and their degree of institutionalization. Further research is needed to provide better health care for people with IDD.
E-Learning in Croatian Higher Education: An Analysis of Students' Perceptions
NASA Astrophysics Data System (ADS)
Dukić, Darko; Andrijanić, Goran
2010-06-01
In recent years, e-learning has taken an important role in Croatian higher education as a result of the strategies defined and measures undertaken. Nonetheless, in comparison with developed countries, the achievements in e-learning implementation are still unsatisfactory. Therefore, the efforts to advance e-learning within Croatian higher education need to be intensified. It is further necessary to undertake ongoing activities in order to solve possible problems in the functioning of the e-learning system, which requires the development of adequate evaluation instruments and methods. One of the key steps in this process is examining and analyzing users' attitudes. This paper presents a study of Croatian students' perceptions with regard to certain aspects of e-learning usage. Given the character of this research, adequate statistical methods were required for the data processing. The results of the analysis indicate that, for the most part, Croatian students have positive perceptions of e-learning, particularly as support for time-honored forms of teaching. However, they are not prepared to completely give up the traditional classroom. Using factor analysis, we identified four underlying factors in a collection of variables related to students' perceptions of e-learning. Furthermore, a number of statistically significant differences in student attitudes were confirmed in terms of gender and year of study. In our study we used discriminant analysis to determine discriminant functions that distinguished the defined groups of students. With this research we managed, to a certain degree, to alleviate the current data insufficiency in the area of e-learning evaluation among Croatian students. Since this type of learning is gaining in importance within higher education, such analyses have to be conducted continuously.
Sero-epidemiological study of Lyme disease among high-risk population groups in eastern Slovakia.
Zákutná, Ľubica; Dorko, Erik; Mattová, Eva; Rimárová, Kvetoslava
2015-01-01
Introduction and objective. The aim of the presented cross-sectional sero-epidemiological study was to determine the current presence of antibodies against B. burgdorferi s.l. in high-risk groups of the Slovak population, and to identify potential risk factors for LB infection. A group of 277 agricultural and forestry workers, persons with frequent stays in the countryside, and employees of the State Border and Customs Police from the years 2011-2012 in eastern Slovakia were examined in order to assess the seroprevalence of anti-Borrelia antibodies. Sera were screened by a commercial enzyme-linked immunosorbent assay (ELISA). The study subjects completed a questionnaire with general demographic, epidemiological and clinical data. The results were evaluated statistically. A 25.3% prevalence of positive and 8.7% prevalence of borderline IgG antibodies was detected across all investigated groups. The seroprevalence of B. burgdorferi s.l. was significantly higher (P<0.05) among the agricultural and forestry workers when compared with employees of the State Border and Customs Police. Higher seropositivity was observed in subjects over 30 years of age (P=0.004) than in younger ones, and also in males (P=0.045). The number of persons with rheumatologic conditions was significantly higher (P=0.020) in the seropositive group than in the seronegative group. The presented study confirms the higher risk of Borrelia infection in individuals with frequent exposure to ticks in eastern Slovakia. The seropositivity tests confirmed the highest seropositivity in agricultural and forestry workers, intermediate positivity among other-sector workers, and the lowest positivity in policemen and employees of the Customs and Border Inspection. The results also fill the gap of missing seroprevalence data for these exposed groups.
Pattern statistics on Markov chains and sensitivity to parameter estimation
Nuel, Grégory
2006-01-01
Background: In order to compute pattern statistics in computational biology, a Markov model is commonly used to take into account the sequence composition. Usually its parameters must be estimated. The aim of this paper is to determine how sensitive these statistics are to parameter estimation, and what the consequences of this variability are for pattern studies (finding the most over-represented words in a genome, the most significant words common to a set of sequences, ...). Results: In the particular case where pattern statistics (overlap counting only) are computed through binomial approximations, we use the delta method to give an explicit expression for σ, the standard deviation of a pattern statistic. This result is validated using simulations, and a simple pattern study is also considered. Conclusion: We establish that the use of high-order Markov models could easily lead to major mistakes due to the high sensitivity of pattern statistics to parameter estimation. PMID:17044916
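The sensitivity the paper quantifies is easy to see in a toy case: the expected count of a word under a first-order Markov model multiplies the estimated transition probabilities together, so small estimation errors compound with word length (an illustrative sketch, not the paper's delta-method derivation):

```python
def expected_count(word, n, pi, mu):
    """Approximate expected number of (overlapping) occurrences of `word`
    in a length-n sequence under a first-order Markov model.
    pi: dict {(a, b): P(b | a)}; mu: dict {a: stationary P(a)}."""
    p = mu[word[0]]
    for a, b in zip(word, word[1:]):
        p *= pi[(a, b)]          # each transition multiplies in its estimate
    return (n - len(word) + 1) * p
```

With a two-letter alphabet, perturbing one transition probability by 10% inflates the expected count of "aaa" by 21%, and the distortion grows further with pattern length and model order.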
Empirical performance of interpolation techniques in risk-neutral density (RND) estimation
NASA Astrophysics Data System (ADS)
Bahaludin, H.; Abdullah, M. H.
2017-03-01
The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. Firstly, the empirical performance is evaluated by using statistical analysis based on the implied mean and the implied variance of the RND. Secondly, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection purposes. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial and smoothing spline. The LOOCV pricing errors show that interpolation using a fourth-order polynomial provides the best fit to option prices, in that it has the lowest error.
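The LOOCV criterion is straightforward to implement for polynomial interpolation; a minimal sketch with numpy (hypothetical data, not the study's option prices):

```python
import numpy as np

def loocv_mse(x, y, degree):
    """Mean squared leave-one-out prediction error of a polynomial fit
    of the given degree, the criterion proposed for choosing among
    interpolation schemes."""
    errors = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i          # hold out point i
        coeffs = np.polyfit(x[mask], y[mask], degree)
        errors.append((np.polyval(coeffs, x[i]) - y[i]) ** 2)
    return float(np.mean(errors))
```

Applied to option prices or implied volatilities across strikes, the candidate (second-order polynomial, fourth-order polynomial, smoothing spline) with the smallest LOOCV error would be selected.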
Depression and Self-Esteem in Early Adolescence.
Tripković, Ingrid; Roje, Romilda; Krnić, Silvana; Nazor, Mirjana; Karin, Željka; Čapkun, Vesna
2015-06-01
Depression prevalence has increased in the last few decades, affecting younger age groups. The aim of this research was to determine the extent of depression and low self-esteem in elementary school children in the city of Split. Testing was carried out at school, and the sample comprised 1,549 children (714 boys and 832 girls, aged 13). Two psychological instruments were used: the Coopersmith Self-Esteem Inventory (SEI) and the Children and Adolescent Depression Scale (SDD). The average SEI score was 17.8 for all tested children. No statistically significant difference was found between boys and girls. It was found that 11.9% of children showed signs of clinically significant depression, and 16.2% showed signs of depression. A statistically significant association between low self-esteem and clinically significant depression was found. No statistically significant difference between boys and girls was found for the cognitive dimension of depression, whereas the level of emotional depression was significantly higher in girls than in boys. Both the cognitive and the emotional dimensions of depression decreased proportionally as SEI test scores increased. The results of this study show that it is necessary to provide early detection of emotional difficulties in order to prevent serious mental disorders. Copyright© by the National Institute of Public Health, Prague 2015.
A robust embedded vision system feasible white balance algorithm
NASA Astrophysics Data System (ADS)
Wang, Yuan; Yu, Feihong
2018-01-01
White balance is a very important part of the color image processing pipeline. In order to meet the need for efficiency and accuracy in an embedded machine vision processing system, an efficient and robust white balance algorithm combining several classical ones is proposed. The proposed algorithm has three main parts. Firstly, in order to guarantee higher efficiency, an initial parameter calculated from the statistics of the R, G and B components of the raw data is used to initialize the subsequent iterative method. After that, a bilinear interpolation algorithm is utilized to implement the demosaicing procedure. Finally, an adaptive step-adjustment scheme is introduced to ensure the controllability and robustness of the algorithm. In order to verify the proposed algorithm's performance on an embedded vision system, a smart camera based on the IMX6 DualLite, IMX291 and XC6130 is designed. Extensive experiments on a large number of images under different color temperatures and exposure conditions illustrate that the proposed white balance algorithm avoids the color deviation problem effectively, achieves a good balance between efficiency and quality, and is suitable for embedded machine vision processing systems.
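As a point of reference for the "classical" algorithms being combined, the gray-world assumption is the simplest channel-statistics approach; a numpy sketch (an illustration of that baseline, not the paper's combined iterative method):

```python
import numpy as np

def gray_world(img):
    """Gray-world white balance: scale each channel so that its mean
    equals the overall mean intensity. Assumes the scene averages to
    gray and that no channel mean is zero. img: float array (H, W, 3)
    with values in [0, 1]."""
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means   # per-channel gains
    return np.clip(img * gains, 0.0, 1.0)
```

The paper's adaptive-step iterative scheme can be read as refining per-channel gains of this kind while guarding against overshoot on scenes that violate the gray-world assumption.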
IMPRINTS OF EXPANSION ON THE LOCAL ANISOTROPY OF SOLAR WIND TURBULENCE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verdini, Andrea; Grappin, Roland
2015-08-01
We study the anisotropy of second-order structure functions (SFs) defined in a frame attached to the local mean field in three-dimensional (3D) direct numerical simulations of magnetohydrodynamic turbulence, with the solar wind expansion both included and not included. We simulate spacecraft flybys through the numerical domain by taking increments along the radial (wind) direction that forms an angle of 45° with the ambient magnetic field. We find that only when expansion is taken into account do the synthetic observations match the 3D anisotropy observed in the solar wind, including the change of anisotropy with scale. Our simulations also show that the anisotropy changes dramatically when considering increments oblique to the radial direction. Both results can be understood by noting that expansion reduces the radial component of the magnetic field at all scales, thus confining fluctuations to the plane perpendicular to the radial. Expansion is thus shown to affect not only the (global) spectral anisotropy, but also the local anisotropy of the second-order SF, by influencing the distribution of the local mean field, which enters this higher-order statistic.
Isotropic–Nematic Phase Transitions in Gravitational Systems. II. Higher Order Multipoles
NASA Astrophysics Data System (ADS)
Takács, Ádám; Kocsis, Bence
2018-04-01
The gravitational interaction among bodies orbiting in a spherical potential leads to the rapid relaxation of the distribution of orbital planes, a process called vector resonant relaxation. We examine the statistical equilibrium of this process for a system of bodies with similar semimajor axes and eccentricities. We extend the previous model of Roupas et al. by accounting for the multipole moments beyond the quadrupole, which dominate the interaction for radially overlapping orbits. Nevertheless, we find no qualitative differences between the behavior of this system and that of the model restricted to the quadrupole interaction. The equilibrium distribution resembles a counterrotating disk at low temperature and a spherical structure at high temperature. The system exhibits a first-order phase transition between the disk and the spherical phase in the canonical ensemble if the total angular momentum is below a critical value. We find that the phase transition erases the high-order multipoles, i.e., small-scale structure in angular momentum space, most efficiently. The system admits a maximum entropy and a maximum energy, which lead to the existence of negative-temperature equilibria.
Local Versus Remote Contributions of Soil Moisture to Near-Surface Temperature Variability
NASA Technical Reports Server (NTRS)
Koster, R.; Schubert, S.; Wang, H.; Chang, Y.
2018-01-01
Soil moisture variations have a straightforward impact on overlying air temperatures: wetter soils can induce greater evaporative cooling of the soil and thus, locally, cooler temperatures overall. Not known, however, is the degree to which soil moisture variations can affect remote air temperatures through their impact on the atmospheric circulation. In this talk we describe a two-pronged analysis that addresses this question. In the first segment, an extensive ensemble of NASA/GSFC GEOS-5 atmospheric model simulations is analyzed statistically to isolate and quantify the contributions of various soil moisture states, both local and remote, to the variability of air temperature at a given local site. In the second segment, the relevance of the derived statistical relationships is evaluated by applying them to observations-based data. Results from the second segment suggest that the GEOS-5-based relationships do, at least to first order, hold in nature and thus may provide some skill to forecasts of air temperature at subseasonal time scales, at least in certain regions.
Gender and Age Related Effects While Watching TV Advertisements: An EEG Study.
Cartocci, Giulia; Cherubino, Patrizia; Rossi, Dario; Modica, Enrica; Maglione, Anton Giulio; di Flumeri, Gianluca; Babiloni, Fabio
2016-01-01
The aim of the present paper is to show how variation in EEG frontal cortical asymmetry relates to the general appreciation perceived during the observation of TV advertisements, in particular considering the influence of gender and age. Specifically, we investigated the influence of gender on the perception of a car advertisement (Experiment 1) and the influence of age on a chewing gum commercial (Experiment 2). Experiment 1 results showed statistically significantly higher approach values for the men group throughout the commercial. Results from Experiment 2 showed significantly lower values for older adults during the spot, which contained scenes they did not particularly enjoy. In both studies, there was no statistically significant difference between the experimental populations in the scene showing the product offering, suggesting the absence in our study of a bias towards the specific product in the evaluated populations. These findings underscore the importance of creativity in advertising in order to attract the target population.
A method to preserve trends in quantile mapping bias correction of climate modeled temperature
NASA Astrophysics Data System (ADS)
Grillakis, Manolis G.; Koutroulis, Aristeidis G.; Daliakopoulos, Ioannis N.; Tsanis, Ioannis K.
2017-09-01
Bias correction of climate variables is a standard practice in climate change impact (CCI) studies. Various methodologies have been developed within the framework of quantile mapping. However, it is well known that quantile mapping may significantly modify the long-term statistics due to the time dependency of the temperature bias. Here, a method to overcome this issue without compromising the day-to-day correction statistics is presented. The methodology separates the modeled temperature signal into a normalized and a residual component relative to the modeled reference-period climatology, in order to adjust the biases only for the former and preserve the signal of the latter. The results show that this method allows for the preservation of the originally modeled long-term signal in the mean, the standard deviation, and higher and lower percentiles of temperature. To illustrate the improvements, the methodology is tested on daily time series obtained from five EURO-CORDEX regional climate models (RCMs).
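As a generic illustration of the framework the method builds on (plain empirical quantile mapping, not the paper's trend-preserving variant), each modeled value can be placed at its quantile in the reference-period model distribution and mapped to the observed value at that same quantile; the bias and data below are synthetic.

```python
import numpy as np

def quantile_map(model_ref, obs_ref, model_fut):
    """Empirical quantile mapping bias correction.

    Each future modeled value is located at its quantile within the
    reference-period model distribution, and the observed quantile
    function is read off at that quantile. This is plain QM, not the
    trend-preserving variant proposed in the paper.
    """
    m_sorted = np.sort(model_ref)
    o_sorted = np.sort(obs_ref)
    q = np.searchsorted(m_sorted, model_fut, side="right") / len(m_sorted)
    q = np.clip(q, 0.0, 1.0)
    return np.quantile(o_sorted, q)

# A model running 2 K too warm: the mapping removes the mean bias.
rng = np.random.default_rng(0)
obs = rng.normal(15.0, 3.0, 5000)     # "observed" daily temperatures
mod = obs + 2.0                       # biased model, same variability
corrected = quantile_map(mod, obs, mod)
```

The paper's point is precisely that applying this mapping unchanged to a future period can distort the long-term trend, which motivates correcting only the normalized component.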
Quantitative EEG analysis of the maturational changes associated with childhood absence epilepsy
NASA Astrophysics Data System (ADS)
Rosso, O. A.; Hyslop, W.; Gerlach, R.; Smith, R. L. L.; Rostas, J. A. P.; Hunter, M.
2005-10-01
This study aimed to examine the background electroencephalography (EEG) in children with childhood absence epilepsy, a condition whose presentation has strong developmental links. EEG hallmarks of absence seizure activity are widely accepted and there is recognition that the bulk of inter-ictal EEG in this group is normal to the naked eye. This multidisciplinary study aimed to use the normalized total wavelet entropy (NTWS) (Signal Processing 83 (2003) 1275) to examine the background EEG of those patients demonstrating absence seizure activity, and compare it with children without absence epilepsy. This calculation can be used to define the degree of order in a system, with higher levels of entropy indicating a more disordered (chaotic) system. Results were subjected to further statistical analyses of significance. Entropy values were calculated for patients versus controls. For all channels combined, patients with absence epilepsy showed (statistically significant) lower entropy values than controls. The size of the difference in entropy values was not uniform, with certain EEG electrodes consistently showing greater differences than others.
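The NTWS normalizes the Shannon entropy of the relative wavelet energies per resolution level; a minimal sketch using a hand-rolled Haar decomposition (the wavelet choice and signals here are assumptions for illustration, not the study's actual EEG pipeline):

```python
import numpy as np

def ntws(signal):
    """Normalized total wavelet entropy via a plain Haar decomposition.

    Relative energies p_j of the detail coefficients at each level give
    S = -sum(p_j ln p_j) / ln(N_levels), so S lies in [0, 1]; higher
    values mean energy spread over many scales (a more disordered signal).
    """
    a = np.asarray(signal, dtype=float)
    energies = []
    while len(a) > 1:
        even, odd = a[0::2], a[1::2]
        detail = (even - odd) / np.sqrt(2.0)
        a = (even + odd) / np.sqrt(2.0)
        energies.append(np.sum(detail ** 2))
    p = np.array(energies) / np.sum(energies)
    p = p[p > 0]                          # drop empty levels
    return -np.sum(p * np.log(p)) / np.log(len(energies))

t = np.arange(1024)
rng = np.random.default_rng(1)
s_sine = ntws(np.sin(2 * np.pi * 4 * t / 1024))   # energy in few scales
s_noise = ntws(rng.normal(size=1024))             # energy across scales
```

An ordered oscillation concentrates energy in few levels (low entropy), while noise spreads it across scales, matching the paper's reading of entropy as a degree of disorder.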
Relevant principal component analysis applied to the characterisation of Portuguese heather honey.
Martins, Rui C; Lopes, Victor V; Valentão, Patrícia; Carvalho, João C M F; Isabel, Paulo; Amaral, Maria T; Batista, Maria T; Andrade, Paula B; Silva, Branca M
2008-01-01
The main purpose of this study was the characterisation of 'Serra da Lousã' heather honey by using novel statistical methodology, relevant principal component analysis, in order to assess the correlations between production year, locality and composition. Herein, we also report its chemical composition in terms of sugars, glycerol and ethanol, and physicochemical parameters. Sugars profiles from 'Serra da Lousã' heather and 'Terra Quente de Trás-os-Montes' lavender honeys were compared and allowed the discrimination: 'Serra da Lousã' honeys do not contain sucrose, generally exhibit lower contents of turanose, trehalose and maltose and higher contents of fructose and glucose. Different localities from 'Serra da Lousã' provided groups of samples with high and low glycerol contents. Glycerol and ethanol contents were revealed to be independent of the sugars profiles. These data and statistical models can be very useful in the comparison and detection of adulterations during the quality control analysis of 'Serra da Lousã' honey.
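Relevant PCA is a refinement of standard PCA, whose core decomposition can be sketched with an SVD of the centered data matrix; the toy "composition" table below is an assumption, and the relevance-weighting step of the paper's method is not reproduced.

```python
import numpy as np

def pca(X, n_components=2):
    """Standard PCA via SVD of the centered data matrix.

    Returns sample scores and the fraction of variance explained by each
    retained component. Relevant PCA, as used in the paper, adds a
    relevance-weighting step not shown here.
    """
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T
    explained = s ** 2 / np.sum(s ** 2)
    return scores, explained[:n_components]

# Toy samples x variables table (e.g., sugar contents per honey sample).
rng = np.random.default_rng(2)
X = rng.normal(size=(30, 5))
X[:, 0] = 3 * X[:, 1] + rng.normal(scale=0.1, size=30)  # correlated pair
scores, explained = pca(X)
```

Correlated variables (here columns 0 and 1) collapse onto the first component, which is how the sugar profiles can separate honey groups in score space.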
NASA Astrophysics Data System (ADS)
Amor, T. A.; Russo, R.; Diez, I.; Bharath, P.; Zirovich, M.; Stramaglia, S.; Cortes, J. M.; de Arcangelis, L.; Chialvo, D. R.
2015-09-01
The brain exhibits a wide variety of spatiotemporal patterns of neuronal activity, recorded using functional magnetic resonance imaging as the so-called blood-oxygen-level-dependent (BOLD) signal. An active area of work includes efforts to best describe the plethora of these patterns evolving continuously in the brain. Here we explore the third-moment statistics of the brain's BOLD signals in the resting state as a proxy to capture extreme BOLD events. We find that the brain signal typically exhibits nonzero skewness, with positive values for cortical regions and negative values for subcortical regions. Furthermore, the combined analysis of structural and functional connectivity demonstrates that relatively more connected regions exhibit activity with high negative skewness. Overall, these results highlight the relevance of recent results emphasizing that the spatiotemporal location of the relatively large-amplitude events in the BOLD time series contains relevant information to reproduce a number of features of the brain dynamics during resting state in health and disease.
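The third-moment statistic in question is the sample skewness of each region's time series; a minimal sketch (synthetic series stand in for real BOLD data):

```python
import numpy as np

def skewness(x):
    """Sample skewness: third central moment over the cubed standard
    deviation. Positive values indicate a heavy right tail (rare
    large-amplitude upward events), negative values a heavy left tail."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return np.mean(d ** 3) / np.mean(d ** 2) ** 1.5

rng = np.random.default_rng(3)
symmetric = rng.normal(size=100000)        # skewness near 0
right_tail = rng.exponential(size=100000)  # clearly positive skewness
```

A region whose BOLD series carries rare large dips would, by the same measure, come out with negative skewness, the pattern the paper reports for subcortical regions.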
Milla Tobarra, Marta; García Hermoso, Antonio; Lahoz García, Noelia; Notario Pacheco, Blanca; Lucas de la Cruz, Lidia; Pozuelo Carrascosa, Diana P; García Meseguer, María José; Martínez Vizcaíno, Vicente A
2018-01-19
Beverage consumption constitutes a source of children's daily energy intake. Some authors have suggested that consumption of caloric beverages is higher in children with a low socioeconomic position because families limit their spending on healthy food in order to save money. The aim of this study was to explore the relationship between socioeconomic status and Spanish children's beverage consumption. A cross-sectional study was conducted in a sub-sample of 182 children (74 girls) aged 9-11 from the province of Cuenca (Spain). Beverage consumption was assessed using the YANA-C assessment tool, validated for the HELENA study. Data on parental socioeconomic status were gathered using self-reported occupation and education questions answered by parents and classified according to the scale proposed by the Spanish Society of Epidemiology. Beverage intake was higher in children belonging to middle-status families than in those of upper socioeconomic status (p = 0.037). The energy from beverages was similar in most water intake categories, except for water from beverages (p = 0.046). Regarding the other beverage categories, middle-status children had higher consumption levels, whereas lower-status children drank more fruit juices and skimmed milk; none of these differences reached statistical significance. Our study did not find significant associations between beverage consumption and socioeconomic status in children. In fact, intake for most beverage categories was higher in middle-status children than in either of the other socioeconomic groups. Future research is needed in order to disentangle the complex relation between socioeconomic inequality and beverage intake behavior.
Birth order and selected work-related personality variables.
Phillips, A S; Bedeian, A G; Mossholder, K W; Touliatos, J
1988-12-01
A possible link between birth order and various individual characteristics (e.g., intelligence, potential eminence, need for achievement, sociability) has been suggested by personality theorists such as Adler for over a century. The present study examines whether birth order is associated with selected personality variables that may be related to various work outcomes. Three of seven hypotheses were supported, and the effect sizes for these were small. Firstborns scored significantly higher than later-borns on measures of dominance, good impression, and achievement via conformity. No differences between firstborns and later-borns were found in managerial potential, work orientation, achievement via independence, and sociability. The study's sample consisted of 835 public, government, and industrial accountants responding to a national US survey of accounting professionals. The nature of the sample may have been partially responsible for the results obtained: its homogeneity may have caused any birth order effects to wash out. It can be argued that successful membership in the accountancy profession requires internalization of a set of prescribed rules and standards, and that accountants as a group are therefore locked into a behavioral framework; any differentiation would then result from spurious interpersonal differences, not from predictable birth-order-related characteristics. A final interpretation is that birth order effects are nonexistent or statistical artifacts. Given the present data and particularistic sample, however, the authors have insufficient information from which to draw such a conclusion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kravtsov, V.E., E-mail: kravtsov@ictp.it; Landau Institute for Theoretical Physics, 2 Kosygina st., 117940 Moscow; Yudson, V.I., E-mail: yudson@isan.troitsk.ru
Highlights: > Statistics of normalized eigenfunctions in one-dimensional Anderson localization at E = 0 is studied. > Moments of the inverse participation ratio are calculated. > An equation for the generating function is derived at E = 0. > An exact solution for the generating function at E = 0 is obtained. > The relation of the generating function to the phase distribution function is established. - Abstract: The one-dimensional (1d) Anderson model (AM), i.e. a tight-binding chain with random uncorrelated on-site energies, has statistical anomalies at any rational point f = 2a/λ_E, where a is the lattice constant and λ_E is the de Broglie wavelength. We develop a regular approach to the anomalous statistics of normalized eigenfunctions ψ(r) at such commensurability points. The approach is based on an exact integral transfer-matrix equation for a generating function Φ_r(u, φ) (u and φ have the meaning of the squared amplitude and phase of the eigenfunctions; r is the position of the observation point). This generating function can be used to compute local statistics of eigenfunctions of the 1d AM at any disorder and to address the problem of higher-order anomalies at f = p/q with q > 2. The descender of the generating function, P_r(φ) ≡ Φ_r(u = 0, φ), is shown to be the distribution function of the phase, which determines the Lyapunov exponent and the local density of states. In the leading order in the small disorder we derive a second-order partial differential equation for the r-independent ('zero-mode') component Φ(u, φ) at the E = 0 (f = 1/2) anomaly. This equation is nonseparable in the variables u and φ. Yet, we show that due to a hidden symmetry it is integrable, and we construct an exact solution for Φ(u, φ) explicitly in quadratures.
Using this solution we computed the moments I_m = N⟨|ψ|^{2m}⟩ (m ≥ 1) for a chain of length N → ∞ and found an essential difference between their m-behavior in the center-of-band anomaly and for energies outside this anomaly. Outside the anomaly, the 'extrinsic' localization length defined from the Lyapunov exponent coincides with that defined from the inverse participation ratio (the 'intrinsic' localization length). This is not the case at the E = 0 anomaly, where the extrinsic localization length is smaller than the intrinsic one. At E = 0 one also observes an anomalous enhancement of large moments, compatible with the existence of yet another, much smaller characteristic length scale.
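The Lyapunov exponent that fixes the 'extrinsic' localization length can be estimated numerically from the standard transfer-matrix recursion of the 1d Anderson model; the box-distributed disorder and parameter values below are assumptions for illustration, not the paper's analytical setup.

```python
import math
import numpy as np

def lyapunov(E, W, N=100000, seed=0):
    """Estimate the Lyapunov exponent of the 1d Anderson model at energy E.

    Uses the tight-binding recursion psi_{n+1} = (E - eps_n) psi_n - psi_{n-1}
    with on-site energies eps_n uniform on [-W/2, W/2] (an assumed disorder
    distribution), accumulating the growth rate of the transfer-matrix
    product with step-by-step renormalization to avoid overflow.
    """
    rng = np.random.default_rng(seed)
    a, b = 1.0, 0.0                    # (psi_n, psi_{n-1})
    log_growth = 0.0
    for eps in rng.uniform(-W / 2, W / 2, N):
        a, b = (E - eps) * a - b, a
        norm = math.hypot(a, b)
        log_growth += math.log(norm)
        a /= norm
        b /= norm
    return log_growth / N

gamma = lyapunov(E=0.0, W=1.0)   # positive: all states are localized
```

At weak disorder the exponent is small but strictly positive; E = 0 is exactly the f = 1/2 commensurability point where the paper's anomalous corrections to this quantity arise.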
A New Statistic for Evaluating Item Response Theory Models for Ordinal Data. CRESST Report 839
ERIC Educational Resources Information Center
Cai, Li; Monroe, Scott
2014-01-01
We propose a new limited-information goodness-of-fit test statistic C₂ for ordinal IRT models. The construction of the new statistic lies formally between the M₂ statistic of Maydeu-Olivares and Joe (2006), which utilizes first- and second-order marginal probabilities, and the M₂* statistic of Cai and Hansen…
ERIC Educational Resources Information Center
Shukla, Divya; Dungsungnoen, Aj Pattaradanai
2016-01-01
Higher order thinking skills (HOTS) are in immense demand in industry, and a major goal of educational institutions in imparting education is to inculcate higher order thinking skills. This compels and mandates institutions and instructors to develop higher order thinking skills among students in order to prepare them for effective…
ERIC Educational Resources Information Center
Center for Education Statistics (ED/OERI), Washington, DC.
Information on revenues and expenditures at U.S. colleges and universities is reported for fiscal years (FY) 1983, 1984, and 1985, based on findings from the Financial Statistics of Institutions of Higher Education survey, which is part of the Higher Education General Information Survey. Narrative and statistical information is presented on:…
Uncertainty Analysis and Order-by-Order Optimization of Chiral Nuclear Interactions
Carlsson, Boris; Forssen, Christian; Fahlin Strömberg, D.; ...
2016-02-24
Chiral effective field theory (χEFT) provides a systematic approach to describe low-energy nuclear forces. Moreover, χEFT is able to provide well-founded estimates of statistical and systematic uncertainties, although this unique advantage has not yet been fully exploited. We fill this gap by performing an optimization and statistical analysis of all the low-energy constants (LECs) up to next-to-next-to-leading order. Our optimization protocol corresponds to a simultaneous fit to scattering and bound-state observables in the pion-nucleon, nucleon-nucleon, and few-nucleon sectors, thereby utilizing the full model capabilities of χEFT. Finally, we study the effect on other observables by demonstrating forward-error-propagation methods that can easily be adopted by future works. We employ mathematical optimization and implement automatic differentiation to attain efficient and machine-precise first- and second-order derivatives of the objective function with respect to the LECs. This is also vital for the regression analysis. We use power-counting arguments to estimate the systematic uncertainty that is inherent to χEFT and we construct chiral interactions at different orders with quantified uncertainties. Statistical error propagation is compared with Monte Carlo sampling, showing that statistical errors are in general small compared to systematic ones. In conclusion, we find that a simultaneous fit to different sets of data is critical to (i) identify the optimal set of LECs, (ii) capture all relevant correlations, (iii) reduce the statistical uncertainty, and (iv) attain order-by-order convergence in χEFT. Furthermore, certain systematic uncertainties in the few-nucleon sector are shown to get substantially magnified in the many-body sector, in particular when varying the cutoff in the chiral potentials. The methodology and results presented in this paper open a new frontier for uncertainty quantification in ab initio nuclear theory.
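Forward propagation of statistical parameter uncertainties to an observable can be illustrated with first-order (sandwich) propagation through a finite-difference Jacobian; the observable and covariance below are toy stand-ins, not χEFT quantities.

```python
import numpy as np

def propagate_variance(f, params, cov, h=1e-6):
    """First-order statistical error propagation: var(f) ≈ J C Jᵀ, with
    the Jacobian J estimated by central finite differences around the
    best-fit parameter values."""
    params = np.asarray(params, dtype=float)
    J = np.empty_like(params)
    for i in range(len(params)):
        step = np.zeros_like(params)
        step[i] = h
        J[i] = (f(params + step) - f(params - step)) / (2 * h)
    return J @ cov @ J

# Toy observable depending on two "LECs" with unit, uncorrelated errors.
obs = lambda p: p[0] + 2.0 * p[1]
var = propagate_variance(obs, [1.0, 1.0], np.eye(2))
```

Comparing such linearized variances against direct Monte Carlo sampling of the parameter covariance is the consistency check the paper performs.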
Beltsios, Michail; Mavrogenis, Andreas F; Savvidou, Olga D; Karamanis, Eirineos; Kokkalis, Zinon T; Papagelopoulos, Panayiotis J
2014-07-01
To compare modular monolateral external fixators with single monolateral external fixators for the treatment of open and complex tibial shaft fractures, and to determine the optimal construct for fracture union. A total of 223 tibial shaft fractures in 212 patients were treated with a monolateral external fixator from 2005 to 2011; 112 fractures were treated with a modular external fixator with ball-joints (group A), and 111 fractures were treated with a single external fixator without ball-joints (group B). The mean follow-up was 2.9 years. We retrospectively evaluated the operative time for fracture reduction with the external fixator, pain and range of motion of the knee and ankle joints, time to union, rate of malunion, reoperations and revisions of the external fixators, and complications. The time for fracture reduction was significantly longer in group B; the union rate was significantly higher in group B; the nonunion rate, the reoperation rate, and the rate of revision of the external fixator were all significantly higher in group A; and the mean time to union was significantly longer in group A. Pain, range of motion of the knee and ankle joints, and the rates of delayed union, malunion and complications were similar. Although modular external fixators allow faster intraoperative fracture reduction, single external fixators are associated with significantly better union and reoperation rates; the rates of delayed union, malunion and complications are similar.
Higher-order clustering in networks
NASA Astrophysics Data System (ADS)
Yin, Hao; Benson, Austin R.; Leskovec, Jure
2018-05-01
A fundamental property of complex networks is the tendency for edges to cluster. The extent of the clustering is typically quantified by the clustering coefficient, which is the probability that a length-2 path is closed, i.e., induces a triangle in the network. However, higher-order cliques beyond triangles are crucial to understanding complex networks, and the clustering behavior with respect to such higher-order network structures is not well understood. Here we introduce higher-order clustering coefficients that measure the closure probability of higher-order network cliques and provide a more comprehensive view of how the edges of complex networks cluster. Our higher-order clustering coefficients are a natural generalization of the traditional clustering coefficient. We derive several properties about higher-order clustering coefficients and analyze them under common random graph models. Finally, we use higher-order clustering coefficients to gain new insights into the structure of real-world networks from several domains.
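The traditional (second-order) clustering coefficient is the wedge-closure probability the abstract describes; a dependency-free sketch computes it on a small graph (the higher-order generalizations replace the wedge with a clique plus an adjacent edge, which is not implemented here).

```python
from itertools import combinations

def global_clustering(adj):
    """Global clustering coefficient: 3 * (# triangles) / (# length-2 paths).

    adj: dict mapping node -> set of neighbors (undirected graph).
    This is the ordinary second-order coefficient; the paper's
    higher-order versions measure closure of cliques beyond single edges.
    """
    triangles = sum(
        1 for u, v, w in combinations(adj, 3)
        if v in adj[u] and w in adj[u] and w in adj[v]
    )
    wedges = sum(len(adj[u]) * (len(adj[u]) - 1) // 2 for u in adj)
    return 3 * triangles / wedges if wedges else 0.0

# A triangle with one pendant edge: 1 triangle, 5 wedges -> 3/5.
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
c = global_clustering(g)
```

The factor 3 counts each triangle once per wedge it closes, which is what makes the ratio a closure probability.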
Lomperis, A M
1991-10-01
The determinants of the severity of childhood malnutrition among a low income population in Cali, Colombia in 1974-76 were examined. Sections are devoted to the welfare maximization and household production model and methodology, the data set, the empirical results, the policy implications, and conclusions. The nutritional health of each preschooler is produced within the household with goods and time inputs (food, environmental sanitation, medical care, time invested in child care, and breastfeeding), and is conditioned by the state of household production technology (mother's literacy as a dummy variable -- version 1, and mother's level of schooling -- version 2) as well as by each child's sex, birth order, age, household size, and sociocultural setting. Constraints are total available income and time available (dummy variable). Reinhardt's version of the translog function is used to represent the production process. Household survey data were made available from a pilot study of a maternal and child health program (PRIMOPS) and includes 421 preschool children and 280 households, and food expenditure data for 197 children and 123 households. The main finding is that teaching Third World mothers to read holds the greatest promise of permanently improving the nutritional status of preschool children. The linear regression results show that the determinants of short-term nutritional status as reflected in weight for age (w/a) are the duration of breastfeeding, literacy, 1-3 years of schooling, and the available food in the household. The levels of significance are higher for version 2, but significance is achieved only with the lower levels of schooling. Birth order is statistically significant but weak and negative; i.e., higher birth orders are at higher risk of malnutrition. 
Long-term nutritional status is statistically significantly influenced by educational level, birth order, and food available; older preschoolers are likely to experience stunting but not necessarily wasting, and the last-born suffers the most nutritionally. The results for the proportion of time spent in child rearing versus employment need further clarification. Breastfeeding effects are largely short term. Of the factors affecting children's nutritional status, the data show that food transfer approaches are not the most cost-effective means for solving chronic malnutrition. Implementing literacy programs would be a more successful strategy, and literacy lasts a lifetime. Breastfeeding must continue for at least 4 months, and preferably a year, to show a significant improvement in nutrition, and it does not by itself eliminate the risk of malnutrition. Smaller families produce healthier children. A mother who works at least part time increases the household's income potential to provide for nutritional needs; income and other factors such as literacy are critical determinants of preschool nutritional well-being, as supported by the findings of Wolfe and Behrman.
Skinner-Rusk unified formalism for higher-order systems
NASA Astrophysics Data System (ADS)
Prieto-Martínez, Pedro Daniel; Román-Roy, Narciso
2012-07-01
The Lagrangian-Hamiltonian unified formalism of R. Skinner and R. Rusk was originally stated for autonomous dynamical systems in classical mechanics. It has been generalized for non-autonomous first-order mechanical systems, first-order and higher-order field theories, and higher-order autonomous systems. In this work we present a generalization of this formalism for higher-order non-autonomous mechanical systems.
Tavarez, Melissa M; Ayers, Brandon; Jeong, Jong H; Coombs, Carmen M; Thompson, Ann; Hickey, Robert W
2017-08-01
Higher resource utilization in the management of pediatric patients with undifferentiated vomiting and/or diarrhea does not correlate consistently with improved outcomes or quality of care. Performance feedback has been shown to change physician practice behavior and may be a mechanism to minimize practice variation. We aimed to evaluate the effects of e-mail-only, provider-level performance feedback on the ordering and admission practice variation of pediatric emergency physicians for patients presenting with undifferentiated vomiting and/or diarrhea. We conducted a prospective, quality improvement intervention and collected data over 3 consecutive fiscal years. The setting was a single, tertiary care pediatric emergency department. We collected admission and ordering practices data on 19 physicians during baseline, intervention, and postintervention periods. We provided physicians with quarterly e-mail-based performance reports during the intervention phase. We measured admission rate and created four categories for ordering practices: no orders, laboratory orders, pharmacy orders, and radiology orders. There was wide (two- to threefold) practice variation among physicians. Admission rates ranged from 15% to 30%, laboratory orders from 19% to 43%, pharmacy orders from 29% to 57%, and radiology orders from 11% to 30%. There was no statistically significant difference in the proportion of patients admitted or with radiology or pharmacy orders placed between preintervention, intervention, or postintervention periods (p = 0.58, p = 0.19, and p = 0.75, respectively). There was a significant but very small decrease in laboratory orders between the preintervention and postintervention periods. Performance feedback provided only via e-mail to pediatric emergency physicians on a quarterly basis does not seem to significantly impact management practices for patients with undifferentiated vomiting and/or diarrhea. © 2017 by the Society for Academic Emergency Medicine.
HYPOTHESIS SETTING AND ORDER STATISTIC FOR ROBUST GENOMIC META-ANALYSIS.
Song, Chi; Tseng, George C
2014-01-01
Meta-analysis techniques have been widely developed and applied in genomic applications, especially for combining multiple transcriptomic studies. In this paper, we propose an order statistic of p-values (rth ordered p-value, rOP) across combined studies as the test statistic. We illustrate different hypothesis settings that detect gene markers differentially expressed (DE) "in all studies", "in the majority of studies", or "in one or more studies", and specify rOP as a suitable method for detecting DE genes "in the majority of studies". We develop methods to estimate the parameter r in rOP for real applications. Statistical properties such as its asymptotic behavior and a one-sided testing correction for detecting markers of concordant expression changes are explored. Power calculation and simulation show better performance of rOP compared to classical Fisher's method, Stouffer's method, minimum p-value method and maximum p-value method under the focused hypothesis setting. Theoretically, rOP is found connected to the naïve vote counting method and can be viewed as a generalized form of vote counting with better statistical properties. The method is applied to three microarray meta-analysis examples including major depressive disorder, brain cancer and diabetes. The results demonstrate rOP as a more generalizable, robust and sensitive statistical framework to detect disease-related markers.
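Under the null hypothesis where each study's p-value is uniform, the rth ordered of K independent p-values follows a Beta(r, K − r + 1) distribution, which yields a direct combined test; a sketch (SciPy assumed available, and the choice of r shown is illustrative):

```python
from scipy.stats import beta

def rop_pvalue(pvals, r):
    """rth ordered p-value (rOP) meta-analysis statistic.

    Under the null, the rth smallest of K independent uniform p-values
    is Beta(r, K - r + 1) distributed, so its CDF value at the observed
    statistic is the combined p-value. Choosing r around K/2 targets
    genes DE "in the majority of studies".
    """
    k = len(pvals)
    stat = sorted(pvals)[r - 1]          # rth smallest p-value
    return beta.cdf(stat, r, k - r + 1)

# A gene DE in 2 of 3 studies: rOP with r = 2 keys on the 2nd smallest p.
combined = rop_pvalue([0.001, 0.002, 0.90], r=2)
```

With r = K this reduces to the maximum p-value method and with r = 1 to the minimum p-value method, which is why rOP interpolates between the classical extremes.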
Kooperman, Gabriel J.; Pritchard, Michael S.; Burt, Melissa A.; ...
2016-09-26
Changes in the character of rainfall are assessed using a holistic set of statistics based on rainfall frequency and amount distributions in climate change experiments with three conventional and superparameterized versions of the Community Atmosphere Model (CAM and SPCAM). Previous work has shown that high-order statistics of present-day rainfall intensity are significantly improved with superparameterization, especially in regions of tropical convection. Globally, the two modeling approaches project a similar future increase in mean rainfall, especially across the Inter-Tropical Convergence Zone (ITCZ) and at high latitudes, but over land, SPCAM predicts a smaller mean change than CAM. Changes in high-order statistics are similar at high latitudes in the two models but diverge at lower latitudes. In the tropics, SPCAM projects a large intensification of moderate and extreme rain rates in regions of organized convection associated with the Madden Julian Oscillation, ITCZ, monsoons, and tropical waves. In contrast, this signal is missing in all versions of CAM, which are found to be prone to predicting increases in the amount but not intensity of moderate rates. Predictions from SPCAM exhibit a scale-insensitive behavior with little dependence on horizontal resolution for extreme rates, while lower resolution (~2°) versions of CAM are not able to capture the response simulated with higher resolution (~1°). Furthermore, moderate rain rates analyzed by the “amount mode” and “amount median” are found to be especially telling as a diagnostic for evaluating climate model performance and tracing future changes in rainfall statistics to tropical wave modes in SPCAM.
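The amount-distribution diagnostics can be sketched by binning rain rates logarithmically and summing the rain amount falling in each bin; the "amount mode" is then the bin contributing the most rain. The synthetic lognormal rates below are an assumption standing in for model output.

```python
import numpy as np

def amount_distribution(rates, bins=50):
    """Rain-amount distribution: total rainfall contributed by each
    logarithmic rain-rate bin. The 'amount mode' is the rate bin that
    contributes the most rain. Illustrative version of the diagnostic,
    not the papers' exact binning."""
    wet = rates[rates > 0]
    edges = np.logspace(np.log10(wet.min()), np.log10(wet.max()), bins + 1)
    amounts, _ = np.histogram(wet, bins=edges, weights=wet)
    centers = np.sqrt(edges[:-1] * edges[1:])    # geometric bin centers
    return centers, amounts

rng = np.random.default_rng(4)
rates = rng.lognormal(mean=0.0, sigma=1.0, size=10000)  # daily rates, mm/day
centers, amounts = amount_distribution(rates)
amount_mode = centers[np.argmax(amounts)]
```

Because the bins are amount-weighted, the distribution emphasizes the moderate rates that deliver most rain, which is precisely what makes the amount mode a sensitive model diagnostic.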
Investigation of advanced UQ for CRUD prediction with VIPRE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eldred, Michael Scott
2011-09-01
This document summarizes the results from a level 3 milestone study within the CASL VUQ effort. It demonstrates the application of 'advanced UQ,' in particular dimension-adaptive p-refinement for polynomial chaos and stochastic collocation. The study calculates statistics for several quantities of interest that are indicators for the formation of CRUD (Chalk River unidentified deposit), which can lead to CIPS (CRUD induced power shift). Stochastic expansion methods are attractive methods for uncertainty quantification due to their fast convergence properties. For smooth functions (i.e., analytic, infinitely differentiable) in L^2 (i.e., possessing finite variance), exponential convergence rates can be obtained under order refinement for integrated statistical quantities of interest such as mean, variance, and probability. Two stochastic expansion methods are of interest: nonintrusive polynomial chaos expansion (PCE), which computes coefficients for a known basis of multivariate orthogonal polynomials, and stochastic collocation (SC), which forms multivariate interpolation polynomials for known coefficients. Within the DAKOTA project, recent research in stochastic expansion methods has focused on automated polynomial order refinement ('p-refinement') of expansions to support scalability to higher dimensional random input spaces [4, 3]. By preferentially refining only in the most important dimensions of the input space, the applicability of these methods can be extended from O(10^0)-O(10^1) random variables to O(10^2) and beyond, depending on the degree of anisotropy (i.e., the extent to which random input variables have differing degrees of influence on the statistical quantities of interest (QOIs)).
Thus, the purpose of this study is to investigate the application of these adaptive stochastic expansion methods to the analysis of CRUD using the VIPRE simulation tools for two different plant models of differing random dimension, anisotropy, and smoothness.
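The nonintrusive PCE idea can be shown in one dimension: for a Gaussian input, project the response onto probabilists' Hermite polynomials by quadrature, then read the mean and variance off the coefficients. This is a minimal fixed-order sketch under our own naming, not DAKOTA's multivariate, dimension-adaptive implementation:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pce_coeffs(f, order, nquad=30):
    """Non-intrusive PCE of f(xi), xi ~ N(0,1), in probabilists' Hermite
    polynomials He_k: c_k = E[f(xi) He_k(xi)] / k!."""
    x, w = hermegauss(nquad)   # Gauss-Hermite nodes/weights for e^{-x^2/2}
    w = w / w.sum()            # normalize to the standard normal measure
    fx = f(x)
    return [float(np.sum(w * fx * hermeval(x, [0.0] * k + [1.0]))
                  / math.factorial(k)) for k in range(order + 1)]

# Integrated statistics follow directly from the coefficients:
#   mean = c_0,  variance = sum_{k>=1} c_k^2 * k!
c = pce_coeffs(lambda x: x**2, 2)
mean = c[0]
var = sum(ck**2 * math.factorial(k) for k, ck in enumerate(c) if k >= 1)
```

For f(xi) = xi^2 the expansion is exact at order 2 (xi^2 = He_2(xi) + 1), so the recovered mean is 1 and the variance is 2, matching E[xi^4] - E[xi^2]^2 for a standard normal.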
Kourgialas, Nektarios N; Dokou, Zoi; Karatzas, George P
2015-05-01
The purpose of this study was to create a modeling management tool for the simulation of extreme flow events under current and future climatic conditions. This tool is a combination of different components and can be applied in complex hydrogeological river basins, where frequent flood and drought phenomena occur. The first component is the statistical analysis of the available hydro-meteorological data. Specifically, principal component analysis was performed in order to quantify the importance of the hydro-meteorological parameters that affect the generation of extreme events. The second component is a prediction-forecasting artificial neural network (ANN) model that simulates river flow on an hourly basis, accurately and efficiently. This model is based on a methodology that attempts to resolve a very difficult problem related to the accurate estimation of extreme flows. For this purpose, the available measurements (5 years of hourly data) were divided into two subsets: one for the dry and one for the wet periods of the hydrological year. This way, two ANNs were created, trained, tested and validated for a complex Mediterranean river basin in Crete, Greece. As part of the second management component, a statistical downscaling tool was used for the creation of meteorological data according to the higher and lower emission climate change scenarios A2 and B1. These data are used as input in the ANN for the forecasting of river flow for the next two decades. The final component is the application of a meteorological index on the measured and forecasted precipitation and flow data, in order to assess the severity and duration of extreme events. Copyright © 2015 Elsevier Ltd. All rights reserved.
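The first component, ranking inputs by PCA, amounts to standardizing the hydro-meteorological variables and reading off the explained-variance shares of the principal components. A minimal sketch (our own helper name; the study's full analysis also inspects the loadings):

```python
import numpy as np

def explained_variance_ratios(X):
    """Share of total variance carried by each principal component of
    the standardized input matrix X (rows = observations, columns =
    hydro-meteorological parameters)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)          # standardize columns
    eigvals = np.linalg.eigvalsh(np.cov(Z, rowvar=False))[::-1]
    return eigvals / eigvals.sum()
```

A leading ratio near 1 signals strongly redundant inputs, which is exactly the situation where pruning parameters before ANN training pays off.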
Nasr, Deena M; Flemming, Kelly D; Lanzino, Giuseppe; Cloft, Harry J; Kallmes, David F; Murad, Mohammad Hassan; Brinjikji, Waleed
2018-01-01
Vertebrobasilar non-saccular and dolichoectatic aneurysms (VBDAs) are a rare type of aneurysm and are generally associated with poor prognosis. In order to better characterize the natural history of VBDAs, we performed a systematic review and meta-analysis of the literature to determine rates of mortality, growth, rupture, ischemia, and intraparenchymal hemorrhage. We searched the literature for longitudinal natural history studies of VBDA patients reporting clinical and imaging outcomes. Studied outcomes included annualized rates of growth, rupture, ischemic stroke, intracerebral hemorrhage (ICH), and mortality. We also studied the association between aneurysm morphology (dolichoectatic versus fusiform) and natural history. Meta-analysis was performed using a random-effects model using summary statistics from included studies. Fifteen studies with 827 patients and 5,093 patient-years were included. The overall annual mortality rate among patients with VBDAs was 13%/year (95% CI 8-19). Patients with fusiform aneurysms had a higher mortality rate than those with dolichoectatic aneurysms, but this did not reach statistical significance (12 vs. 8%, p = 0.11). The overall growth rate was 6%/year (95% CI 4-13). Patients with fusiform aneurysms had higher growth rates than those with dolichoectatic aneurysms (12 vs. 3%, p < 0.0001). The overall rupture rate was 3%/year (95% CI 1-5). Patients with fusiform aneurysms had higher rupture rates than those with dolichoectatic aneurysms (3 vs. 0%, p < 0.0001). The overall rate of ischemic stroke was 6%/year (95% CI 4-9). Patients with dolichoectatic aneurysms had higher ischemic stroke rates than those with fusiform aneurysms, but this did not reach statistical significance (8 vs. 4%, p = 0.13). The overall rate of ICH was 2%/year (95% CI 0-8) with no difference in rates between dolichoectatic and fusiform aneurysms (2 vs. 2%, p = 0.65). In general, the natural history of VBDAs is poor.
However, dolichoectatic and fusiform VBDAs appear to have distinct natural histories, with substantially higher growth and rupture rates associated with fusiform aneurysms. These findings suggest that these aneurysms should be considered separate entities. Further studies on the natural history of vertebrobasilar dolichoectatic and fusiform aneurysms with more complete follow-up are needed to better understand the risk factors for progression of these aneurysms. © 2018 S. Karger AG, Basel.
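The random-effects pooling used above can be sketched with the standard DerSimonian-Laird estimator, which inflates the study weights by the between-study variance tau^2. This is a generic illustration of the model class, not the exact computation of the cited meta-analysis:

```python
def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate (DerSimonian-Laird).

    effects:   per-study effect estimates (e.g. annualized event rates)
    variances: per-study sampling variances
    """
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    Q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # heterogeneity
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (Q - (len(effects) - 1)) / c)   # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]  # random-effects weights
    return sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
```

With homogeneous studies tau^2 collapses to zero and the estimate reduces to the fixed-effect (inverse-variance) average.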
Statistical Entropy of the G-H-S Black Hole to All Orders in Planck Length
NASA Astrophysics Data System (ADS)
Sun, Hangbin; He, Feng; Huang, Hai
2012-02-01
Considering corrections to all orders in the Planck length to the quantum state density from the generalized uncertainty principle, we calculate the statistical entropy of the scalar field near the horizon of the Garfinkle-Horowitz-Strominger (G-H-S) black hole without any artificial cutoff. It is shown that the entropy is proportional to the horizon area.
Effect of inlet conditions on the turbulent statistics in a buoyant jet
NASA Astrophysics Data System (ADS)
Kumar, Rajesh; Dewan, Anupam
2015-11-01
Buoyant jets have been the subject of research due to their technological and environmental importance in many physical processes, such as the spread of smoke and toxic gases from fires and the release of gases from volcanic eruptions and industrial stacks. The flow near the source is initially laminar and quickly transitions to turbulence. We present a large eddy simulation of a buoyant jet. In the present study, a careful investigation has been carried out of the influence of inlet conditions at the source on the turbulent statistics far from the source. It has been observed that the influence of the initial conditions on the second-order buoyancy terms extends further in the axial direction from the source than their influence on the time-averaged flow and second-order velocity statistics. We have studied the evolution of vortical structures in the buoyant jet. It has been shown that the generation of helical vortex rings in the vicinity of the source around a laminar core could be the reason for the larger influence of the inlet conditions on the second-order buoyancy terms as compared to the second-order velocity statistics.
Statistical physics of nucleosome positioning and chromatin structure
NASA Astrophysics Data System (ADS)
Morozov, Alexandre
2012-02-01
Genomic DNA is packaged into chromatin in eukaryotic cells. The fundamental building block of chromatin is the nucleosome, a 147 bp-long DNA molecule wrapped around the surface of a histone octamer. Arrays of nucleosomes are positioned along DNA according to their sequence preferences and folded into higher-order chromatin fibers whose structure is poorly understood. We have developed a framework for predicting sequence-specific histone-DNA interactions and the effective two-body potential responsible for ordering nucleosomes into regular higher-order structures. Our approach is based on the analogy between nucleosomal arrays and a one-dimensional fluid of finite-size particles with nearest-neighbor interactions. We derive simple rules which allow us to predict nucleosome occupancy solely from the dinucleotide content of the underlying DNA sequences. Dinucleotide content determines the degree of stiffness of the DNA polymer and thus defines its ability to bend into the nucleosomal superhelix. As expected, the nucleosome positioning rules are universal for chromatin assembled in vitro on genomic DNA from baker's yeast and from the nematode worm C. elegans, where nucleosome placement follows intrinsic sequence preferences and steric exclusion. However, the positioning rules inferred from in vivo C. elegans chromatin are affected by global nucleosome depletion from chromosome arms relative to central domains, likely caused by the attachment of the chromosome arms to the nuclear membrane. Furthermore, intrinsic nucleosome positioning rules are overwritten in transcribed regions, indicating that chromatin organization is actively managed by the transcriptional and splicing machinery.
NASA Astrophysics Data System (ADS)
Croft, Stephen; Favalli, Andrea
2017-10-01
Neutron multiplicity counting using shift-register calculus is an established technique in the science of international nuclear safeguards for the identification, verification, and assay of special nuclear materials. Typically, passive counting is used for Pu and mixed Pu-U items and active methods are used for U materials. Three counting rates, singles, doubles, and triples, are measured and, in combination with a simple analytical point-model, are used to calculate characteristics of the measurement item in terms of known detector and nuclear parameters. However, the measurement problem usually involves more than three quantities of interest, but even in cases where the next higher order count rate, quads, is statistically viable, it is not quantitatively applied because corrections for dead time losses are currently not available in the predominant analysis paradigm. In this work we overcome this limitation by extending the commonly used dead time correction method, developed by Dytlewski, to quads. We also give results for pents, which may be of interest for certain special investigations. Extension to still higher orders may be accomplished by inspection based on the sequence presented. We discuss the foundations of the Dytlewski method, give limiting cases, and highlight the opportunities and implications that these new results expose. In particular, there exist a number of ways in which the new results may be combined with other approaches to extract the correlated rates, and this leads to various practical implementations.
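The singles/doubles/triples/quads hierarchy is built on factorial moments of the measured multiplicity histogram; quads correspond to the fourth such moment. A sketch of that building block (gate-fraction and dead-time factors, the subject of the paper, are deliberately omitted):

```python
def factorial_moment(counts, k):
    """kth factorial moment E[n(n-1)...(n-k+1)] of a multiplicity
    histogram, where counts[n] is the number of gates in which n
    pulses were observed.  Multiplicity rates in shift-register
    analysis are assembled from the first few such moments."""
    total = sum(counts)

    def falling(n):
        out = 1
        for j in range(k):
            out *= n - j
        return out

    return sum(c * falling(n) for n, c in enumerate(counts)) / total
```

For example, for a histogram [1, 2, 1] over multiplicities 0..2 the first factorial moment (the mean) is 1.0 and the second, E[n(n-1)], is 0.5.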
The Meaning of Higher-Order Factors in Reflective-Measurement Models
ERIC Educational Resources Information Center
Eid, Michael; Koch, Tobias
2014-01-01
Higher-order factor analysis is a widely used approach for analyzing the structure of a multidimensional test. Whenever first-order factors are correlated, researchers are tempted to apply a higher-order factor model. But is this reasonable? What do the higher-order factors measure? What is their meaning? Willoughby, Holochwost, Blanton, and Blair…
Statistical physics of the symmetric group.
Williams, Mobolaji
2017-04-01
Ordered chains (such as chains of amino acids) are ubiquitous in biological cells, and these chains perform specific functions contingent on the sequence of their components. Using the existence and general properties of such sequences as a theoretical motivation, we study the statistical physics of systems whose state space is defined by the possible permutations of an ordered list, i.e., the symmetric group, and whose energy is a function of how certain permutations deviate from some chosen correct ordering. Such a nonfactorizable state space is quite different from the state spaces typically considered in statistical physics systems and consequently has novel behavior in systems with interacting and even noninteracting Hamiltonians. Various parameter choices of a mean-field model reveal the system to contain five different physical regimes defined by two transition temperatures, a triple point, and a quadruple point. Finally, we conclude by discussing how the general analysis can be extended to state spaces with more complex combinatorial properties and to other standard questions of statistical mechanics models.
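The setup can be made concrete for small n by brute force: enumerate the symmetric group, assign each permutation an energy measuring its deviation from the correct ordering, and sum the Boltzmann weights. The displacement energy below is an illustrative choice, not the paper's mean-field model:

```python
from itertools import permutations
from math import exp

def partition_function(n, beta):
    """Z = sum over sigma in S_n of exp(-beta * E(sigma)), with
    E(sigma) = total displacement of elements from the correct
    ordering (an illustrative energy on the symmetric group)."""
    Z = 0.0
    for sigma in permutations(range(n)):
        E = sum(abs(sigma[i] - i) for i in range(n))
        Z += exp(-beta * E)
    return Z
```

At infinite temperature (beta = 0) every permutation contributes equally, so Z = n!; at low temperature only the correct ordering survives and Z approaches 1, the two limits between which the nontrivial regimes of the model live.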
Perturbative Gaussianizing transforms for cosmological fields
NASA Astrophysics Data System (ADS)
Hall, Alex; Mead, Alexander
2018-01-01
Constraints on cosmological parameters from large-scale structure have traditionally been obtained from two-point statistics. However, non-linear structure formation renders these statistics insufficient in capturing the full information content available, necessitating the measurement of higher order moments to recover information which would otherwise be lost. We construct quantities based on non-linear and non-local transformations of weakly non-Gaussian fields that Gaussianize the full multivariate distribution at a given order in perturbation theory. Our approach does not require a model of the fields themselves and takes as input only the first few polyspectra, which could be modelled or measured from simulations or data, making our method particularly suited to observables lacking a robust perturbative description such as the weak-lensing shear. We apply our method to simulated density fields, finding a significantly reduced bispectrum and an enhanced correlation with the initial field. We demonstrate that our method reconstructs a large proportion of the linear baryon acoustic oscillations, improving the information content over the raw field by 35 per cent. We apply the transform to toy 21 cm intensity maps, showing that our method still performs well in the presence of complications such as redshift-space distortions, beam smoothing, pixel noise and foreground subtraction. We discuss how this method might provide a route to constructing a perturbative model of the fully non-Gaussian multivariate likelihood function.
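The flavor of Gaussianization can be seen with a toy stand-in: a lognormal "field" is strongly skewed, and the local log transform (the lowest-order local Gaussianizer) removes that skewness. This is only an illustration of the goal; the paper's transforms are non-local, perturbative, and built from measured polyspectra:

```python
import math
import random

def skewness(xs):
    """Sample skewness, the lowest-order non-Gaussian statistic."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    return sum((x - m) ** 3 for x in xs) / (n * s2 ** 1.5)

# Toy "density field": lognormal values are heavily right-skewed;
# taking the log restores (exact, by construction) Gaussianity.
random.seed(0)
field = [math.exp(random.gauss(0.0, 1.0)) for _ in range(20000)]
gaussianized = [math.log(x) for x in field]
```

A Gaussianized field has a (near-)zero bispectrum, which is what makes a simple multivariate Gaussian likelihood a better description of the transformed data.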
NASA Astrophysics Data System (ADS)
Kononova, Olga; Jones, Lee; Barsegov, V.
2013-09-01
Cooperativity is a hallmark of proteins, many of which show a modular architecture comprising discrete structural domains. Detecting and describing dynamic couplings between structural regions is difficult in view of the many-body nature of protein-protein interactions. By utilizing GPU-based computational acceleration, we carried out forced-unfolding simulations of the WW-WW dimer of all-β-sheet WW domains, used as a model multidomain protein. We found that while the physically non-interacting identical protein domains (WW) show nearly symmetric mechanical properties at low tension, reflected, e.g., in the similarity of their distributions of unfolding times, these properties become distinctly different when tension is increased. Moreover, the uncorrelated unfolding transitions at a low pulling force become increasingly more correlated (dependent) at higher forces. Hence, the applied force not only breaks "the mechanical symmetry" but also couples the physically non-interacting protein domains forming a multi-domain protein. We call this effect "the topological coupling." We developed a new theory, inspired by order statistics, to characterize protein-protein interactions in multi-domain proteins. The method utilizes the squared-Gaussian model, but it can also be used in conjunction with other parametric models for the distribution of unfolding times. The formalism can be taken to the single-molecule experimental lab to probe mechanical cooperativity and domain communication in multi-domain proteins.
Impact of neutral density fluctuations on gas puff imaging diagnostics
NASA Astrophysics Data System (ADS)
Wersal, C.; Ricci, P.
2017-11-01
A three-dimensional turbulence simulation of the SOL and edge regions of a toroidally limited tokamak is carried out. The simulation couples self-consistently the drift-reduced two-fluid Braginskii equations to a kinetic equation for neutral atoms. A diagnostic neutral gas puff on the low-field side midplane is included and the impact of neutral density fluctuations on D_α light emission is investigated. We find that neutral density fluctuations affect the D_α emission. In particular, at a radial distance from the gas puff smaller than the neutral mean free path, neutral density fluctuations are anti-correlated with plasma density, electron temperature, and D_α fluctuations. It follows that the neutral fluctuations reduce the D_α emission in most of the observed region and, therefore, have to be taken into account when interpreting the amplitude of the D_α emission. On the other hand, higher-order statistical moments (skewness, kurtosis) and turbulence characteristics (such as the correlation length or the autocorrelation time) are not significantly affected by the neutral fluctuations. At distances from the gas puff larger than the neutral mean free path, a non-local shadowing effect influences the neutral density fluctuations. There, the D_α fluctuations are correlated with the neutral density fluctuations, and the high-order statistical moments and measurements of other turbulence properties are strongly affected by the neutral density fluctuations.
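One of the turbulence characteristics named above, the autocorrelation time, has a simple working estimate: the smallest lag at which the normalized autocorrelation drops below 1/e. A sketch (the 1/e criterion is one common convention among several):

```python
import math

def autocorr_time(xs):
    """Smallest lag (in samples) at which the normalized
    autocorrelation of the signal drops below 1/e."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    for lag in range(1, n):
        c = sum((xs[i] - m) * (xs[i + lag] - m)
                for i in range(n - lag)) / ((n - lag) * var)
        if c < math.exp(-1):
            return lag
    return n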
Migration and prostate cancer: an international perspective.
Angwafo, F. F.
1998-01-01
There are intra- and interracial differences in prostate cancer incidence and mortality rates worldwide. The environment and migration patterns seem to influence the disparities in cancer statistics. The lowest incidence rate is recorded in Chinese, followed by other Asians, South Americans, southern Europeans, and northern Europeans, in ascending order. However, people of African descent have the highest incidence so far. Until recently, African Americans in Alameda County (California) in the United States had the highest reported incidence (160/100,000). An incidence of 314/100,000 recently was reported in African Caribbeans from Jamaica. These high rates contrast with the low incidence rates reported in continental (Sub-Saharan) Africa. Angwafo et al. have reported higher age-adjusted incidence rates in Yaounde, Cameroon (93.8/100,000). They highlighted the importance of diagnostic methodology, availability of and access to diagnostic techniques and trained manpower, and adjustments for the age distribution of populations when comparing incidence rates between regions. The great disparity in cancer statistics over large geographic areas and races has oriented studies toward genes and gene products susceptible to environmental risk factors such as diet, ultraviolet rays, and cadmium, which may be associated with or causative of prostate cancer. Randomized studies on suspected risk factors and promoters of prostate cancer need to be conducted worldwide. However, caution is in order when inferences are made comparing populations with access to health care to those without. PMID:9828589
NASA Astrophysics Data System (ADS)
Merrill, Alison Saricks
The purpose of this quasi-experimental quantitative mixed design study was to compare the effectiveness of brain-based teaching strategies versus a traditional lecture format in the acquisition of higher order cognition as determined by test scores. A second purpose was to elicit student feedback about the two teaching approaches. The design was a 2 x 2 x 2 factorial design study with repeated measures on the last factor. The independent variables were type of student, teaching method, and a within group change over time. Dependent variables were a between group comparison of pre-test, post-test gain scores and a within and between group comparison of course examination scores. A convenience sample of students enrolled in medical-surgical nursing was used. One group (n=36) was made up of traditional students and the other group (n=36) consisted of second-degree students. Four learning units were included in this study. Pre- and post-tests were given on the first two units. Course examinations scores from all four units were compared. In one cohort two of the units were taught via lecture format and two using constructivist activities. These methods were reversed for the other cohort. The conceptual basis for this study derives from neuroscience and cognitive psychology. Learning is defined as the growth of new dendrites. Cognitive psychologists view learning as a constructive activity in which new knowledge is built on an internal foundation of existing knowledge. Constructivist teaching strategies are designed to stimulate the brain's natural learning ability. There was a statistically significant difference based on type of teaching strategy (t = -2.078, df = 270, p = .039, d = .25), with higher mean scores on the examinations covering brain-based learning units. There was no statistical significance based on type of student. Qualitative data collection was conducted in an on-line forum at the end of the semester.
Students had overall positive responses about the constructivist activities. Major themes were described. Constructivist strategies help bridge the gap between neurological and cognitive sciences and classroom teaching and learning. A variety of implications for nursing educators are outlined as well as directions for future research.
NASA Astrophysics Data System (ADS)
Matthaios, Vasileios N.; Triantafyllou, Athanasios G.; Albanis, Triantafyllos A.; Sakkas, Vasileios; Garas, Stelios
2018-05-01
Atmospheric modeling is considered an important tool with several applications, such as prediction of air pollution levels, air quality management, and environmental impact assessment studies. Therefore, evaluation studies must be made continuously in order to improve the accuracy and the approaches of air quality models. In the present work, an attempt is made to examine the efficiency of the air pollution model (TAPM) in simulating the surface meteorology, as well as the SO2 concentrations, in a mountainous complex terrain industrial area. Three configurations, the first with default datasets, the second with data assimilation, and the third with updated land use, were run to investigate the surface meteorology for a 3-year period (2009-2011), and one configuration was applied to predict SO2 concentration levels for the year 2011. The modeled hourly averaged meteorological and SO2 concentration values were statistically compared with those from five monitoring stations across the domain to evaluate the model's performance. Statistical measures showed that the surface temperature and relative humidity are predicted well in all three simulations, with index of agreement (IOA) higher than 0.94 and 0.70, respectively, in all monitoring sites, while an overprediction of extreme low temperature values is noted, with mountain altitudes having an important role. However, the results also showed that the model's performance regarding the wind is related to the configuration. The TAPM default dataset predicted the wind variables better in the center of the simulation domain than at its boundaries, while the improvement in the boundary horizontal winds demonstrated the performance of TAPM with updated land use. TAPM with assimilation predicted the wind variables fairly well in the whole domain, with IOA higher than 0.83 for the wind speed and higher than 0.85 for the horizontal wind components.
Finally, the SO2 concentrations were assessed by the model with IOA varying from 0.37 to 0.57, mostly dependent on the grid/monitoring station of the simulated domain. The present study can be used, with relevant adaptations, as a user guideline for conducting future simulations in mountainous complex terrain.
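The skill score used throughout this evaluation, the index of agreement (IOA), is Willmott's d statistic: 1 for a perfect match, 0 for no agreement. It can be computed in a few lines:

```python
def index_of_agreement(pred, obs):
    """Willmott's index of agreement:
    d = 1 - sum (P-O)^2 / sum (|P - Obar| + |O - Obar|)^2."""
    ob = sum(obs) / len(obs)
    num = sum((p - o) ** 2 for p, o in zip(pred, obs))
    den = sum((abs(p - ob) + abs(o - ob)) ** 2 for p, o in zip(pred, obs))
    return 1.0 - num / den
```

Note that a constant prediction at the observed mean scores d = 0, which is why IOA is a stricter check than correlation for flat model output.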
Lin, Ju; Li, Jie; Li, Xiaolei; Wang, Ning
2016-10-01
An acoustic reciprocity theorem is generalized, for a smoothly varying perturbed medium, to a hierarchy of reciprocity theorems including higher-order derivatives of acoustic fields. The standard reciprocity theorem is the first member of the hierarchy. It is shown that the conservation of higher-order interaction quantities is closely related to higher-order derivative distributions of perturbed media. Then integral reciprocity theorems are obtained by applying Gauss's divergence theorem, which give explicit integral representations connecting higher-order interactions and higher-order derivative distributions of perturbed media. Some possible applications to an inverse problem are also discussed.
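For orientation, the first member of the hierarchy, the standard reciprocity theorem, can be written in a familiar textbook form (frequency domain, two states A and B with volume-injection sources q_A, q_B in the same medium; signs depend on convention, and this is not the paper's generalized version):

```latex
\nabla \cdot \left( p_A \mathbf{v}_B - p_B \mathbf{v}_A \right)
  = p_B \, q_A - p_A \, q_B ,
\qquad
\oint_S \left( p_A \mathbf{v}_B - p_B \mathbf{v}_A \right) \cdot \mathbf{n} \, dS
  = \int_V \left( p_B \, q_A - p_A \, q_B \right) dV .
```

The higher-order members of the hierarchy replace the fields by their spatial derivatives, which is what ties the conserved interaction quantities to derivative distributions of the medium perturbation.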
Ordering statistics of four random walkers on a line
NASA Astrophysics Data System (ADS)
Helenbrook, Brian; ben-Avraham, Daniel
2018-05-01
We study the ordering statistics of four random walkers on the line, obtaining a much improved estimate for the long-time decay exponent of the probability that a particle leads to time t, P_lead(t) ~ t^(-0.91287850), and that a particle lags to time t (never assumes the lead), P_lag(t) ~ t^(-0.30763604). Exponents of several other ordering statistics for N = 4 walkers are obtained to eight-digit accuracy as well. The subtle correlations between n walkers that lag jointly, out of a field of N, are discussed: for N = 3 there are no correlations and P_lead(t) ~ P_lag(t)^2. In contrast, our results rule out the possibility that P_lead(t) ~ P_lag(t)^3 for N = 4, although the correlations in this borderline case are tiny.
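The quantity P_lead(t) can be estimated by direct simulation: launch N lattice walkers from the origin and count the runs in which walker 0 never falls behind the field. This Monte Carlo sketch is for illustration only; the paper's eight-digit exponents come from a high-precision deterministic computation, not sampling:

```python
import random

def p_lead(N, t, trials=3000, seed=7):
    """Monte Carlo estimate of P_lead(t): the probability that walker 0
    stays in (weak, tie-allowing) lead of N - 1 other lattice random
    walkers for t steps."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        pos = [0] * N
        lead = True
        for _ in range(t):
            for i in range(N):
                pos[i] += rng.choice((-1, 1))
            if pos[0] < max(pos[1:]):
                lead = False
                break
        hits += lead
    return hits / trials
```

Plotting log p_lead(4, t) against log t over a range of t gives a rough slope near -0.9, consistent in spirit with the quoted exponent, though sampling alone cannot approach eight-digit accuracy.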
Correlative weighted stacking for seismic data in the wavelet domain
Zhang, S.; Xu, Y.; Xia, J.; ,
2004-01-01
Horizontal stacking plays a crucial role in modern seismic data processing, for it not only compresses random noise and multiple reflections, but also provides foundational data for subsequent migration and inversion. However, a number of examples showed that random noise in adjacent traces exhibits correlation and coherence. Average stacking and weighted stacking based on the conventional correlative function both result in false events, which are caused by noise. The wavelet transform and higher-order statistics are very useful methods for modern signal processing. Multiresolution analysis in wavelet theory can decompose a signal on different scales, and the high-order correlative function can inhibit correlative noise for which the conventional correlative function is of no use. Based on the theory of wavelet transform and higher-order statistics, a high-order correlative weighted stacking (HOCWS) technique is presented in this paper. Its essence is to stack common midpoint gathers after normal moveout correction by weights that are calculated through high-order correlative statistics in the wavelet domain. Synthetic examples demonstrate its advantages in improving the signal-to-noise (S/N) ratio and compressing the correlative random noise.
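The weighted-stacking idea can be sketched in its plain second-order, time-domain form: weight each NMO-corrected trace by its correlation with the pilot (mean) stack. The HOCWS method of the paper instead derives the weights from higher-order correlative statistics in the wavelet domain; this simpler version only shows the stacking mechanics:

```python
def weighted_stack(traces):
    """Correlation-weighted stack of equal-length NMO-corrected traces.
    Weights are each trace's normalized correlation with the pilot
    (plain mean) stack, clipped at zero."""
    n = len(traces[0])
    pilot = [sum(tr[j] for tr in traces) / len(traces) for j in range(n)]

    def corr(a, b):
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return sum(x * y for x, y in zip(a, b)) / (na * nb)

    w = [max(corr(tr, pilot), 0.0) for tr in traces]  # suppress anti-correlated traces
    s = sum(w)
    return [sum(wi * tr[j] for wi, tr in zip(w, traces)) / s for j in range(n)]
```

A trace that is anti-correlated with the pilot (pure coherent noise in this toy setting) receives zero weight and drops out of the stack, which is exactly the failure mode of the plain average that weighting is meant to fix.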
More than words: Adults learn probabilities over categories and relationships between them.
Hudson Kam, Carla L
2009-04-01
This study examines whether human learners can acquire statistics over abstract categories and the relationships between them. Adult learners were exposed to miniature artificial languages containing variation in the ordering of the Subject, Object, and Verb constituents. Different orders (e.g., SOV, VSO) occurred in the input with different frequencies, but the occurrence of one order versus another was not predictable. Importantly, the language was constructed such that participants could only match the overall input probabilities if they were tracking statistics over abstract categories, not over individual words. At test, participants reproduced the probabilities present in the input with a high degree of accuracy. Closer examination revealed that learners were matching the probabilities associated with individual verbs rather than with the category as a whole. However, individual nouns had no impact on the word orders produced. Thus, participants learned the probabilities of a particular ordering of the abstract grammatical categories Subject and Object associated with each verb. These results suggest that statistical learning mechanisms are capable of tracking relationships between abstract linguistic categories in addition to individual items.
NASA Astrophysics Data System (ADS)
Havens, Timothy C.; Cummings, Ian; Botts, Jonathan; Summers, Jason E.
2017-05-01
The linear ordered statistic (LOS) is a parameterized ordered statistic (OS) that is a weighted average of a rank-ordered sample. LOS operators are useful generalizations of aggregation, as they can represent any linear aggregation, from minimum to maximum, including conventional aggregations such as the mean and median. In the fuzzy logic field, these aggregations are called ordered weighted averages (OWAs). Here, we present a method for learning LOS operators from training data, viz., data for which the output of the desired LOS is known. We then extend the learning process with regularization, such that a lower-complexity or sparse LOS can be learned. We discuss what 'lower complexity' means in this context and how to represent it in the optimization procedure. Finally, we apply our learning methods to the well-known constant-false-alarm-rate (CFAR) detection problem, specifically for the case of background levels modeled by long-tailed distributions, such as the K-distribution. These backgrounds arise in several pertinent imaging problems, including the modeling of clutter in synthetic aperture radar and sonar (SAR and SAS) and in wireless communications.
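The LOS/OWA operator described above has a very compact definition: sort the sample, then take a weighted average over the ranks. A minimal sketch (the weight-learning and regularization machinery of the paper is not shown):

```python
import numpy as np

def los(x, w):
    """Linear ordered statistic (an OWA operator): sort the sample in
    descending order and take its dot product with weights w (sum to 1)."""
    x_sorted = np.sort(np.asarray(x, dtype=float))[::-1]
    return float(x_sorted @ np.asarray(w, dtype=float))

x = [3.0, 1.0, 4.0, 1.0, 5.0]
print(los(x, [1, 0, 0, 0, 0]))   # all weight on the top rank: the maximum
print(los(x, [0, 0, 0, 0, 1]))   # all weight on the bottom rank: the minimum
print(los(x, [0.2] * 5))         # uniform weights: the mean
print(los(x, [0, 0, 1, 0, 0]))   # weight on the middle rank: the median
```

Because any linear aggregation from minimum to maximum corresponds to some weight vector, learning an LOS from data reduces to fitting w, which is what makes the regularized (sparse) formulation in the paper natural.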
2010-02-25
Responses were made on the following scale: 1 = No emphasis, 2 = ... Means with an asterisk (*) indicate that the group gave significantly higher emphasis ratings (i.e., a statistically significant difference between SOF operators and SOF leaders).
Haque, Md Moinul; Pramanik, Habibur Rahman; Biswas, Jiban Krishna; Iftekharuddaula, K M; Hasanuzzaman, Mirza
2015-01-01
Hybrid rice varieties have higher yield potential than inbred varieties, but this potential is not always translated into grain yield, and its physiological causes are still unclear. To clarify this, two field experiments were conducted with two popular indica hybrids (BRRI hybrid dhan2 and Heera2) and one elite inbred (BRRI dhan45) rice variety. Leaf area index, chlorophyll status and photosynthetic rate of the flag leaf, post-heading crop growth rate, shoot reserve translocation, source-sink relations, and yield and its attributes were comprehensively analyzed for each variety. Both hybrid varieties outyielded the inbred; however, the hybrids and the inbred exhibited statistically identical yields in late planting. Both hybrids accumulated more biomass before heading and remobilized more assimilates to the grain in early plantings than the inbred variety. Filled grain (%) declined significantly at delayed planting in the hybrids compared to the elite inbred, owing to inefficient assimilate transport impaired by increased temperature. Flag leaf photosynthesis parameters were higher in the hybrid varieties than in the inbred variety. The results suggest that greater remobilization of shoot reserves to the grain underlies the higher yield of hybrid rice varieties.
Kien, C Lawrence; Matthews, Dwight E; Poynter, Matthew E; Bunn, Janice Y; Fukagawa, Naomi K; Crain, Karen I; Ebenstein, David B; Tarleton, Emily K; Stevens, Robert D; Koves, Timothy R; Muoio, Deborah M
2015-09-01
Palmitic acid (PA) is associated with higher blood concentrations of medium-chain acylcarnitines (MCACs), and we hypothesized that PA may inhibit progression of FA β-oxidation. Using a cross-over design, 17 adults were fed high PA (HPA) and low PA/high oleic acid (HOA) diets, each for 3 weeks. The [1-(13)C]PA and [13-(13)C]PA tracers were administered with food in random order with each diet, and we assessed PA oxidation (PA OX) and serum AC concentration to determine whether a higher PA intake promoted incomplete PA OX. Dietary PA was completely oxidized during the HOA diet, but only about 40% was oxidized during the HPA diet. The [13-(13)C]PA/[1-(13)C]PA ratio of PA OX had an approximate value of 1.0 for either diet, but the ratio of the serum concentrations of MCACs to long-chain ACs (LCACs) was significantly higher during the HPA diet. Thus, direct measurement of PA OX did not confirm that the HPA diet caused incomplete PA OX, despite the modest, but statistically significant, increase in the ratio of MCACs to LCACs in blood. Copyright © 2015 by the American Society for Biochemistry and Molecular Biology, Inc.
Cavazzotto, Timothy Gustavo; Brasil, Marcos Roberto; Oliveira, Vinicius Machado; da Silva, Schelyne Ribas; Ronque, Enio Ricardo V.; Queiroga, Marcos Roberto; Serassuelo, Helio
2014-01-01
Objective: To investigate the agreement between two international criteria for the classification of children's and adolescents' nutritional status. Methods: The study included 778 girls and 863 boys aged six to 13 years. Body mass and height were measured and used to calculate the body mass index. Nutritional status was classified according to the cut-off points defined by the World Health Organization and the International Obesity Task Force. Agreement was evaluated using the Kappa statistic and weighted Kappa. Results: In classifying nutritional status, agreement between the criteria was higher for boys (Kappa 0.77) than for girls (Kappa 0.61). The weighted Kappa was also higher for boys (0.85) than for girls (0.77). The Kappa index varied with age. When nutritional status was classified into only two categories - appropriate (thinness + accentuated thinness + eutrophy) and overweight (overweight + obesity + severe obesity) - the Kappa index presented higher values than those for the classification into six categories. Conclusions: A substantial agreement was observed between the criteria, higher in males and varying with age. PMID:24676189
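The Kappa and weighted Kappa agreement measures used above can be computed directly from a cross-classification table. A self-contained sketch follows; the 2x2 table is hypothetical, and linear weights |i - j| are assumed here (the study does not specify its weighting scheme in this abstract):

```python
import numpy as np

def cohens_kappa(cm, weighted=False):
    """Cohen's kappa from a square confusion matrix (rows: criterion A,
    columns: criterion B). weighted=True uses linear weights |i - j|,
    penalizing disagreements by how many categories apart they fall."""
    cm = np.asarray(cm, dtype=float)
    p_obs = cm / cm.sum()
    p_exp = np.outer(p_obs.sum(axis=1), p_obs.sum(axis=0))  # chance agreement
    i, j = np.indices(cm.shape)
    w = np.abs(i - j) if weighted else (i != j).astype(float)
    return 1.0 - (w * p_obs).sum() / (w * p_exp).sum()

# Hypothetical 2x2 cross-classification (e.g. appropriate vs. overweight
# under two criteria); values here are illustrative only.
cm = [[20, 5],
      [10, 15]]
print(cohens_kappa(cm))  # about 0.4: observed agreement 0.7 vs. chance 0.5
```

Collapsing six categories into two removes the near-diagonal disagreements between adjacent classes, which is why the two-category Kappa in the study comes out higher than the six-category one.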
Linear and Order Statistics Combiners for Pattern Classification
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Ghosh, Joydeep; Lau, Sonie (Technical Monitor)
2001-01-01
Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This chapter provides an analytical framework to quantify the improvement in classification results due to combining. The results apply to both linear combiners and order statistics combiners. We first show that, to a first-order approximation, the error rate obtained over and above the Bayes error rate is directly proportional to the variance of the actual decision boundaries around the Bayes optimum boundary. Combining classifiers in output space reduces this variance and hence reduces the 'added' error. If N unbiased classifiers are combined by simple averaging, the added error rate can be reduced by a factor of N if the individual errors in approximating the decision boundaries are uncorrelated. Expressions are then derived for linear combiners that are biased or correlated, and the effect of output correlations on ensemble performance is quantified. For order statistics based non-linear combiners, we derive expressions that indicate how much the median, the maximum, and in general the i-th order statistic can improve classifier performance. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space. Experimental results on several public domain data sets are provided to illustrate the benefits of combining and to support the analytical results.
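The factor-of-N variance reduction claimed above is easy to check numerically. This toy experiment (not from the chapter) models each classifier's decision boundary as the Bayes optimum plus independent zero-mean noise, and compares the mean and median combiners:

```python
import numpy as np

rng = np.random.default_rng(7)

# Each "classifier" places its decision boundary at the Bayes optimum
# plus independent zero-mean unit-variance noise; combining in output
# space averages the boundaries, shrinking their variance about the optimum.
bayes_boundary = 0.0
N, trials = 10, 50000
boundaries = bayes_boundary + rng.standard_normal((trials, N))

single_var = boundaries[:, 0].var()              # one classifier alone
mean_var = boundaries.mean(axis=1).var()         # simple-averaging combiner
median_var = np.median(boundaries, axis=1).var() # order-statistic combiner

print(single_var / mean_var)  # close to N for uncorrelated, unbiased classifiers
```

For Gaussian boundary noise the median combiner also shrinks the variance, though less than the mean (asymptotically by pi/2), consistent with the chapter's point that different order statistics yield different, quantifiable improvements.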