Sample records for Gaussian standard deviations

  1. An empirical analysis of the distribution of the duration of overshoots in a stationary gaussian stochastic process

    NASA Technical Reports Server (NTRS)

    Parrish, R. S.; Carter, M. C.

    1974-01-01

    This analysis utilizes computer simulation and statistical estimation. Realizations of stationary Gaussian stochastic processes with selected autocorrelation functions are computer simulated. Analysis of the simulated data revealed that the mean and the variance of a process were functionally dependent upon the autocorrelation parameter and crossing level. Using predicted values for the mean and standard deviation, the distribution parameters were estimated by the method of moments. Thus, given the autocorrelation parameter, crossing level, mean, and standard deviation of a process, the probability of exceeding the crossing level for a particular length of time was calculated.
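
    A minimal sketch of the simulate-then-estimate idea described above, assuming an AR(1) process as the stationary Gaussian process, an arbitrary crossing level, and empirical estimation of overshoot durations; these specific choices are illustrative and not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_ar1(n, rho, sigma=1.0):
        """Stationary Gaussian AR(1) process with lag-one autocorrelation rho."""
        innov = rng.normal(0.0, sigma * np.sqrt(1.0 - rho**2), size=n)
        x = np.empty(n)
        x[0] = rng.normal(0.0, sigma)
        for t in range(1, n):
            x[t] = rho * x[t - 1] + innov[t]
        return x

    def overshoot_durations(x, level):
        """Lengths (in samples) of consecutive runs above the crossing level."""
        durations, run = [], 0
        for above in (x > level):
            if above:
                run += 1
            elif run:
                durations.append(run)
                run = 0
        if run:
            durations.append(run)
        return np.array(durations)

    x = simulate_ar1(100_000, rho=0.9)
    d = overshoot_durations(x, level=1.0)
    # Method-of-moments summary of the duration distribution and an exceedance probability.
    print("mean duration:", d.mean(), "std:", d.std())
    print("P(duration >= 5 samples):", np.mean(d >= 5))
    ```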

  2. Analytical probabilistic proton dose calculation and range uncertainties

    NASA Astrophysics Data System (ADS)

    Bangert, M.; Hennig, P.; Oelfke, U.

    2014-03-01

    We introduce the concept of analytical probabilistic modeling (APM) to calculate the mean and the standard deviation of intensity-modulated proton dose distributions under the influence of range uncertainties in closed form. For APM, range uncertainties are modeled with a multivariate Normal distribution p(z) over the radiological depths z. A pencil beam algorithm that parameterizes the proton depth dose d(z) with a weighted superposition of ten Gaussians is used. Hence, the integrals ∫ dz p(z) d(z) and ∫ dz p(z) d(z)² required for the calculation of the expected value and standard deviation of the dose remain analytically tractable and can be efficiently evaluated. The means μk, widths δk, and weights ωk of the Gaussian components parameterizing the depth dose curves are found with least squares fits for all available proton ranges. We observe less than 0.3% average deviation of the Gaussian parameterizations from the original proton depth dose curves. Consequently, APM yields high accuracy estimates for the expected value and standard deviation of intensity-modulated proton dose distributions for two dimensional test cases. APM can accommodate arbitrary correlation models and account for the different nature of random and systematic errors in fractionated radiation therapy. Beneficial applications of APM in robust planning are feasible.
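
    The tractability claim rests on the fact that the integral of a product of two Gaussians has a closed form. A minimal numerical check of that identity (not the paper's pencil beam code; the means and widths below are made-up values):

    ```python
    import numpy as np
    from scipy.stats import norm
    from scipy.integrate import trapezoid

    # Range uncertainty p(z): Gaussian with mean m and standard deviation s (illustrative).
    m, s = 10.0, 0.3
    # One Gaussian component of the depth-dose parameterization: mean mu_k, width delta_k.
    mu_k, delta_k = 10.2, 0.5

    # Numerical evaluation of  ∫ dz p(z) N(z; mu_k, delta_k^2)
    z = np.linspace(5.0, 15.0, 20001)
    numeric = trapezoid(norm.pdf(z, m, s) * norm.pdf(z, mu_k, delta_k), z)

    # Closed form: N(m; mu_k, s^2 + delta_k^2)
    analytic = norm.pdf(m, mu_k, np.sqrt(s**2 + delta_k**2))

    print(numeric, analytic)   # the two values agree to numerical precision
    ```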

  3. How to model moon signals using 2-dimensional Gaussian function: Classroom activity for measuring nighttime cloud cover

    NASA Astrophysics Data System (ADS)

    Gacal, G. F. B.; Lagrosas, N.

    2016-12-01

    Nowadays, cameras are commonly used by students. In this study, we use this instrument to look at moon signals and relate these signals to Gaussian functions. To implement this as a classroom activity, students need computers, computer software to visualize signals, and moon images. A normalized Gaussian function is often used to represent probability density functions of a normal distribution. It is described by its mean m and standard deviation s. A smaller standard deviation implies less spread from the mean. For the 2-dimensional Gaussian function, the mean can be described by coordinates (x0, y0), while the standard deviations can be described by sx and sy. In modelling moon signals obtained from sky-cameras, the position of the mean (x0, y0) is found by locating the coordinates of the maximum signal of the moon. The two standard deviations are the mean square weighted deviations based on the sums of the total pixel values of all rows/columns. If visualized in three dimensions, the 2D Gaussian function appears as a 3D bell surface (Fig. 1a). This shape is similar to the pixel value distribution of moon signals as captured by a sky-camera. An example of this is illustrated in Fig. 1b, taken around 22:20 (local time) on January 31, 2015. The local time is 8 hours ahead of coordinated universal time (UTC). This image is produced by a commercial camera (Canon Powershot A2300) with 1 s exposure time, f-stop of f/2.8, and 5 mm focal length. One has to choose a camera with high sensitivity when operated at nighttime to effectively detect these signals. Fig. 1b is obtained by converting the red-green-blue (RGB) photo to grayscale values. The grayscale values are then converted to a double data type matrix. The last conversion process is implemented for the purpose of having the same scales for both the Gaussian model and the pixel distribution of raw signals. Subtraction of the Gaussian model from the raw data produces a moonless image as shown in Fig. 1c. This moonless image can be used for quantifying cloud cover as captured by ordinary cameras (Gacal et al., 2016). Cloud cover can be defined as the ratio of the number of pixels whose values exceed 0.07 to the total number of pixels. In this particular image, the cloud cover value is 0.67.
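
    A small sketch of the classroom procedure on a synthetic grayscale frame: place a 2D Gaussian at the brightest pixel, estimate its widths from weighted second moments, subtract it, and compute the 0.07-threshold cloud-cover ratio. The synthetic image and the simple width estimates stand in for real sky-camera data.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic "sky" frame in [0, 1]: faint background plus a bright moon-like blob.
    h, w = 200, 300
    y, x = np.mgrid[0:h, 0:w]
    img = 0.05 * rng.random((h, w))
    img += 0.9 * np.exp(-((x - 180) ** 2 / (2 * 12**2) + (y - 90) ** 2 / (2 * 9**2)))

    # Mean (x0, y0): location of the maximum signal.
    y0, x0 = np.unravel_index(np.argmax(img), img.shape)

    # Widths sx, sy from intensity-weighted second moments of the column/row sums.
    col, row = img.sum(axis=0), img.sum(axis=1)
    sx = np.sqrt(np.sum(col * (np.arange(w) - x0) ** 2) / col.sum())
    sy = np.sqrt(np.sum(row * (np.arange(h) - y0) ** 2) / row.sum())

    # 2D Gaussian model scaled to the peak value, then subtracted to get a "moonless" image.
    model = img[y0, x0] * np.exp(-((x - x0) ** 2 / (2 * sx**2) + (y - y0) ** 2 / (2 * sy**2)))
    moonless = img - model

    # Cloud cover: fraction of pixels whose value exceeds 0.07.
    cloud_cover = np.mean(moonless > 0.07)
    print(f"x0={x0}, y0={y0}, sx={sx:.1f}, sy={sy:.1f}, cloud cover={cloud_cover:.2f}")
    ```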

  4. Skewness and kurtosis analysis for non-Gaussian distributions

    NASA Astrophysics Data System (ADS)

    Celikoglu, Ahmet; Tirnakli, Ugur

    2018-06-01

    In this paper we address a number of pitfalls regarding the use of kurtosis as a measure of deviations from the Gaussian. We treat kurtosis in both its standard definition and that which arises in q-statistics, namely q-kurtosis. We have recently shown that the relation proposed by Cristelli et al. (2012) between skewness and kurtosis can only be verified for relatively small data sets, independently of the type of statistics chosen; however, it fails for sufficiently large data sets if the fourth moment of the distribution is finite. For infinite fourth moments, kurtosis is not defined as the size of the data set tends to infinity. For distributions with finite fourth moments, the size, N, of the data set for which the standard kurtosis saturates to a fixed value depends on the deviation of the original distribution from the Gaussian. Nevertheless, using kurtosis as a criterion for deciding which distribution deviates further from the Gaussian can be misleading for small data sets, even for finite fourth moment distributions. Going over to q-statistics, we find that although the value of q-kurtosis is finite in the range 0 < q < 3, this quantity is not useful for comparing different non-Gaussian distributed data sets unless the appropriate q value, which truly characterizes the data set of interest, is chosen. Finally, we propose a method to determine the correct q value and thereby to compute the q-kurtosis of q-Gaussian distributed data sets.
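
    A quick illustration of the N-dependence discussed above, assuming Student-t samples as a stand-in for a finite-fourth-moment, non-Gaussian distribution (df = 5, arbitrary): the sample excess kurtosis is noisy for small N and only settles near its theoretical value for large N.

    ```python
    import numpy as np
    from scipy.stats import kurtosis

    rng = np.random.default_rng(2)

    df = 5.0                      # Student-t with df > 4 has a finite fourth moment
    theory = 6.0 / (df - 4.0)     # theoretical excess kurtosis of a Student-t distribution

    for n in (10**2, 10**3, 10**4, 10**6):
        sample = rng.standard_t(df, size=n)
        k = kurtosis(sample, fisher=True)   # excess kurtosis (0 for a Gaussian)
        print(f"N={n:>8d}  sample excess kurtosis = {k:6.2f}   (theory {theory:.2f})")
    ```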

  5. Tests for Gaussianity of the MAXIMA-1 cosmic microwave background map.

    PubMed

    Wu, J H; Balbi, A; Borrill, J; Ferreira, P G; Hanany, S; Jaffe, A H; Lee, A T; Rabii, B; Richards, P L; Smoot, G F; Stompor, R; Winant, C D

    2001-12-17

    Gaussianity of the cosmological perturbations is one of the key predictions of standard inflation, but it is violated by other models of structure formation such as cosmic defects. We present the first test of the Gaussianity of the cosmic microwave background (CMB) on subdegree angular scales, where deviations from Gaussianity are most likely to occur. We apply the methods of moments, cumulants, the Kolmogorov test, the χ² test, and Minkowski functionals in eigen, real, Wiener-filtered, and signal-whitened spaces to the MAXIMA-1 CMB anisotropy data. We find that the data, which probe angular scales between 10 arcmin and 5 deg, are consistent with Gaussianity. These results are consistent with standard inflation and place constraints on the existence of cosmic defects.

  6. Joint Entropy for Space and Spatial Frequency Domains Estimated from Psychometric Functions of Achromatic Discrimination

    PubMed Central

    Silveira, Vladímir de Aquino; Souza, Givago da Silva; Gomes, Bruno Duarte; Rodrigues, Anderson Raiol; Silveira, Luiz Carlos de Lima

    2014-01-01

    We used psychometric functions to estimate the joint entropy for space discrimination and spatial frequency discrimination. Space discrimination was taken as discrimination of spatial extent. Seven subjects were tested. Gábor functions comprising unidimensional sinusoidal gratings (0.4, 2, and 10 cpd) and bidimensional Gaussian envelopes (1°) were used as reference stimuli. The experiment comprised the comparison between reference and test stimuli that differed in the grating's spatial frequency or the envelope's standard deviation. We tested 21 different envelope standard deviations around the reference standard deviation to study spatial extent discrimination and 19 different grating spatial frequencies around the reference spatial frequency to study spatial frequency discrimination. Two series of psychometric functions were obtained for 2%, 5%, 10%, and 100% stimulus contrast. The psychometric function data points for spatial extent discrimination or spatial frequency discrimination were fitted with Gaussian functions using the least squares method, and the spatial extent and spatial frequency entropies were estimated from the standard deviation of these Gaussian functions. Then, joint entropy was obtained by multiplying the square root of space extent entropy times the spatial frequency entropy. We compared our results to the theoretical minimum for unidimensional Gábor functions, 1/4π or 0.0796. At low and intermediate spatial frequencies and high contrasts, joint entropy reached levels below the theoretical minimum, suggesting non-linear interactions between two or more visual mechanisms. We concluded that non-linear interactions of visual pathways, such as the M and P pathways, could explain joint entropy values below the theoretical minimum at low and intermediate spatial frequencies and high contrasts. These non-linear interactions might be at work at intermediate and high contrasts at all spatial frequencies once there was a substantial decrease in joint entropy for these stimulus conditions when contrast was raised. PMID:24466158

  7. Joint entropy for space and spatial frequency domains estimated from psychometric functions of achromatic discrimination.

    PubMed

    Silveira, Vladímir de Aquino; Souza, Givago da Silva; Gomes, Bruno Duarte; Rodrigues, Anderson Raiol; Silveira, Luiz Carlos de Lima

    2014-01-01

    We used psychometric functions to estimate the joint entropy for space discrimination and spatial frequency discrimination. Space discrimination was taken as discrimination of spatial extent. Seven subjects were tested. Gábor functions comprising unidimensional sinusoidal gratings (0.4, 2, and 10 cpd) and bidimensional Gaussian envelopes (1°) were used as reference stimuli. The experiment comprised the comparison between reference and test stimuli that differed in the grating's spatial frequency or the envelope's standard deviation. We tested 21 different envelope standard deviations around the reference standard deviation to study spatial extent discrimination and 19 different grating spatial frequencies around the reference spatial frequency to study spatial frequency discrimination. Two series of psychometric functions were obtained for 2%, 5%, 10%, and 100% stimulus contrast. The psychometric function data points for spatial extent discrimination or spatial frequency discrimination were fitted with Gaussian functions using the least squares method, and the spatial extent and spatial frequency entropies were estimated from the standard deviation of these Gaussian functions. Then, joint entropy was obtained by multiplying the square root of space extent entropy times the spatial frequency entropy. We compared our results to the theoretical minimum for unidimensional Gábor functions, 1/4π or 0.0796. At low and intermediate spatial frequencies and high contrasts, joint entropy reached levels below the theoretical minimum, suggesting non-linear interactions between two or more visual mechanisms. We concluded that non-linear interactions of visual pathways, such as the M and P pathways, could explain joint entropy values below the theoretical minimum at low and intermediate spatial frequencies and high contrasts. These non-linear interactions might be at work at intermediate and high contrasts at all spatial frequencies once there was a substantial decrease in joint entropy for these stimulus conditions when contrast was raised.
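
    A minimal sketch of the fitting step described in the two records above: discrimination data points are fitted with a Gaussian by least squares and the fitted standard deviation is taken as the entropy estimate for that dimension. The synthetic data points are invented, and the authors' exact rule for combining the two entropies is not reproduced here.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(x, a, mu, sigma):
        return a * np.exp(-(x - mu) ** 2 / (2 * sigma**2))

    # Illustrative data: proportion of trials judged equal to the reference versus the
    # tested envelope standard deviation (degrees); values are made up for the sketch.
    x = np.linspace(0.6, 1.4, 21)                 # tested envelope SDs around the 1 deg reference
    y = np.exp(-(x - 1.0) ** 2 / (2 * 0.08**2))
    y = np.clip(y + 0.05 * np.random.default_rng(3).normal(size=x.size), 0, 1)

    (a, mu, sigma), _ = curve_fit(gaussian, x, y, p0=(1.0, 1.0, 0.1))
    print(f"fitted mu = {mu:.3f} deg, sigma = {sigma:.3f} deg")

    # The fitted sigma plays the role of the spatial-extent entropy estimate; an analogous
    # fit over spatial frequency gives the second quantity entering the joint entropy,
    # which is then compared with the theoretical minimum 1/(4*pi) for Gabor functions.
    print("theoretical minimum:", 1 / (4 * np.pi))
    ```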

  8. Robust Alternatives to the Standard Deviation in Processing of Physics Experimental Data

    NASA Astrophysics Data System (ADS)

    Shulenin, V. P.

    2016-10-01

    Properties of robust estimations of the scale parameter are studied. It is noted that the median of absolute deviations and the modified estimation of the average Gini differences have asymptotically normal distributions and bounded influence functions, are B-robust estimations, and hence, unlike the estimation of the standard deviation, are protected from the presence of outliers in the sample. Results of comparison of estimations of the scale parameter are given for a Gaussian model with contamination. An adaptive variant of the modified estimation of the average Gini differences is considered.
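
    A small comparison, on an illustrative contaminated-Gaussian sample, of the ordinary standard deviation with the two robust scale estimators mentioned above: the median of absolute deviations (MAD) and a Gini-mean-difference-based estimate. The normal-consistency factors used below are standard, but the adaptive and modified variants studied in the paper are not implemented.

    ```python
    import numpy as np
    from scipy.stats import median_abs_deviation

    rng = np.random.default_rng(4)

    # Gaussian model (sigma_true = 1) with 5% contamination by a wide outlier component.
    n = 2000
    clean = rng.normal(0.0, 1.0, size=n)
    mask = rng.random(n) < 0.05
    data = np.where(mask, rng.normal(0.0, 10.0, size=n), clean)

    # Ordinary (non-robust) estimate.
    sd = data.std(ddof=1)

    # MAD scaled to be consistent for the Gaussian.
    mad = median_abs_deviation(data, scale="normal")

    # Plain (unmodified) Gini mean difference E|X_i - X_j|; for a Gaussian,
    # sigma = G * sqrt(pi) / 2.
    diffs = np.abs(data[:, None] - data[None, :])
    gini = diffs.sum() / (n * (n - 1))
    sigma_gini = gini * np.sqrt(np.pi) / 2.0

    print(f"standard deviation: {sd:.3f}  MAD: {mad:.3f}  Gini-based: {sigma_gini:.3f}")
    ```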

  9. Consistency relation and non-Gaussianity in a Galileon inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asadi, Kosar; Nozari, Kourosh, E-mail: k.asadi@stu.umz.ac.ir, E-mail: knozari@umz.ac.ir

    2016-12-01

    We study a particular Galileon inflation in the light of the Planck2015 observational data in order to constrain the model parameter space. We study the spectrum of the primordial modes of the density perturbations by expanding the action up to the second order in perturbations. We then expand the action up to the third order and compute the three-point correlation functions to find the amplitude of the non-Gaussianity of the primordial perturbations in this setup. We study the amplitude of the non-Gaussianity in both the equilateral and orthogonal configurations and test the model with recent observational data. Our analysis shows that for some ranges of the non-minimal coupling parameter, the model is consistent with observation, and it is also possible to have large non-Gaussianity which would be observable by future improvements in experiments. Moreover, we obtain the tilt of the tensor power spectrum and test the standard inflationary consistency relation (r = −8n_T) against the latest bounds from the Planck2015 dataset. We find a slight deviation from the standard consistency relation in this setup. Nevertheless, such a deviation does not seem to be sufficiently remarkable to be detected confidently.

  10. Evaluation and validity of a LORETA normative EEG database.

    PubMed

    Thatcher, R W; North, D; Biver, C

    2005-04-01

    To evaluate the reliability and validity of a Z-score normative EEG database for Low Resolution Electromagnetic Tomography (LORETA), EEG digital samples (2-second intervals sampled at 128 Hz, 1 to 2 minutes eyes closed) were acquired from 106 normal subjects, and the cross-spectrum was computed and multiplied by the Key Institute's LORETA 2394 gray matter pixel T Matrix. After a log10 transform or a Box-Cox transform, the mean and standard deviation of the *.lor files were computed for each of the 2394 gray matter pixels, from 1 to 30 Hz, for each of the subjects. Tests of Gaussianity were computed in order to best approximate a normal distribution for each frequency and gray matter pixel. The relative sensitivity of a Z-score database was computed by measuring the approximation to a Gaussian distribution. The validity of the LORETA normative database was evaluated by the degree to which confirmed brain pathologies were localized using the LORETA normative database. Log10 and Box-Cox transforms approximated a Gaussian distribution in the range of 95.64% to 99.75% accuracy. The percentage of normative Z-score values at 2 standard deviations ranged from 1.21% to 3.54%, and the percentage of Z-scores at 3 standard deviations ranged from 0% to 0.83%. Left temporal lobe epilepsy, right sensory motor hematoma, and a right hemisphere stroke exhibited maximum Z-score deviations in the same locations as the pathologies. We conclude: (1) adequate approximation to a Gaussian distribution can be achieved using LORETA by using a log10 transform or a Box-Cox transform and parametric statistics, (2) a Z-score normative database is valid with adequate sensitivity when using LORETA, and (3) the Z-score LORETA normative database also consistently localized known pathologies to the expected Brodmann areas as a hypothesis test based on the surface EEG before computing LORETA.
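
    A compact sketch of the normative-database logic on made-up spectral power values for one pixel and frequency bin: transform toward Gaussianity (log10 or Box-Cox), form Z-scores against the normative mean and standard deviation, and check what fraction of values falls beyond 2 and 3 standard deviations. The sample values and the Shapiro-Wilk check are illustrative choices.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)

    # Illustrative "normative" power values: positively skewed, 106 subjects.
    power = rng.lognormal(mean=1.0, sigma=0.4, size=106)

    # Approach 1: log10 transform.  Approach 2: Box-Cox (lambda estimated by MLE).
    log_power = np.log10(power)
    boxcox_power, lam = stats.boxcox(power)
    print(f"estimated Box-Cox lambda: {lam:.2f}")

    for name, x in (("log10", log_power), ("Box-Cox", boxcox_power)):
        z = (x - x.mean()) / x.std(ddof=1)
        p2 = np.mean(np.abs(z) > 2) * 100
        p3 = np.mean(np.abs(z) > 3) * 100
        w, p = stats.shapiro(x)           # Gaussianity check on the transformed values
        print(f"{name:7s}  Shapiro p={p:.2f}  |Z|>2: {p2:.1f}%  |Z|>3: {p3:.1f}%")
    ```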

  11. Correlation Between Analog Noise Measurements and the Expected Bit Error Rate of a Digital Signal Propagating Through Passive Components

    NASA Technical Reports Server (NTRS)

    Warner, Joseph D.; Theofylaktos, Onoufrios

    2012-01-01

    A method of determining the bit error rate (BER) of a digital circuit from the measurement of the analog S-parameters of the circuit has been developed. The method is based on the measurement of the noise and the standard deviation of the noise in the S-parameters. Once the standard deviation and the mean of the S-parameters are known, the BER of the circuit can be calculated using the normal Gaussian function.
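
    The report summary above does not spell out the exact expression, but a common way to obtain a bit error rate from a Gaussian noise model is the Q-function of the decision distance divided by the noise standard deviation. A hedged sketch under that assumption (binary signaling, equally likely symbols, threshold midway between levels):

    ```python
    import math

    def q_function(x: float) -> float:
        """Tail probability of the standard Gaussian, Q(x) = P(Z > x)."""
        return 0.5 * math.erfc(x / math.sqrt(2.0))

    def ber_from_gaussian_noise(level_separation: float, noise_sigma: float) -> float:
        """BER for two equally likely levels separated by `level_separation`, with
        additive Gaussian noise of standard deviation `noise_sigma` and a threshold
        midway between the levels (illustrative model, not the paper's)."""
        return q_function((level_separation / 2.0) / noise_sigma)

    # Example: measured S-parameters give a mean level separation of 1.0 (linear units)
    # and a noise standard deviation of 0.2, so BER = Q(2.5).
    print(ber_from_gaussian_noise(1.0, 0.2))   # ~6.2e-3
    ```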

  12. Characterization of difference of Gaussian filters in the detection of mammographic regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Catarious, David M. Jr.; Baydush, Alan H.; Floyd, Carey E. Jr.

    2006-11-15

    In this article, we present a characterization of the effect of difference of Gaussians (DoG) filters in the detection of mammographic regions. DoG filters have been used previously in mammographic mass computer-aided detection (CAD) systems. As DoG filters are constructed from the subtraction of two bivariate Gaussian distributions, they require the specification of three parameters: the size of the filter template and the standard deviations of the constituent Gaussians. The influence of these three parameters in the detection of mammographic masses has not been characterized. In this work, we aim to determine how the parameters affect (1) the physical descriptors of the detected regions, (2) the true and false positive rates, and (3) the classification performance of the individual descriptors. To this end, 30 DoG filters are created from the combination of three template sizes and four values for each of the Gaussians' standard deviations. The filters are used to detect regions in a study database of 181 craniocaudal-view mammograms extracted from the Digital Database for Screening Mammography. To describe the physical characteristics of the identified regions, morphological and textural features are extracted from each of the detected regions. Differences in the mean values of the features caused by altering the DoG parameters are examined through statistical and empirical comparisons. The parameters' effects on the true and false positive rate are determined by examining the mean malignant sensitivities and false positives per image (FPpI). Finally, the effect on the classification performance is described by examining the variation in FPpI at the point where 81% of the malignant masses in the study database are detected. Overall, the findings of the study indicate that increasing the standard deviations of the Gaussians used to construct a DoG filter results in a dramatic decrease in the number of regions identified at the expense of missing a small number of malignancies. The sharp reduction in the number of identified regions allowed the identification of textural differences between large and small mammographic regions. We find that the classification performances of the features that achieve the lowest average FPpI are influenced by all three of the parameters.
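
    A minimal construction of a DoG filter with the three parameters named above (template size and the two Gaussian standard deviations), applied to a random test image; the parameter values are arbitrary, not those of the study.

    ```python
    import numpy as np
    from scipy.ndimage import convolve

    def dog_kernel(size: int, sigma1: float, sigma2: float) -> np.ndarray:
        """Difference-of-Gaussians template: narrow Gaussian minus wide Gaussian."""
        ax = np.arange(size) - (size - 1) / 2.0
        xx, yy = np.meshgrid(ax, ax)
        r2 = xx**2 + yy**2
        g1 = np.exp(-r2 / (2 * sigma1**2)) / (2 * np.pi * sigma1**2)
        g2 = np.exp(-r2 / (2 * sigma2**2)) / (2 * np.pi * sigma2**2)
        return g1 - g2

    # Example: 31x31 template, standard deviations of 3 and 6 pixels (illustrative values).
    kernel = dog_kernel(31, 3.0, 6.0)

    image = np.random.default_rng(6).random((128, 128))
    response = convolve(image, kernel, mode="reflect")

    # Candidate regions could then be taken as connected components above a threshold.
    print(response.shape, response.max())
    ```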

  13. Dispersion in Rectangular Networks: Effective Diffusivity and Large-Deviation Rate Function

    NASA Astrophysics Data System (ADS)

    Tzella, Alexandra; Vanneste, Jacques

    2016-09-01

    The dispersion of a diffusive scalar in a fluid flowing through a network has many applications including to biological flows, porous media, water supply, and urban pollution. Motivated by this, we develop a large-deviation theory that predicts the evolution of the concentration of a scalar released in a rectangular network in the limit of large time t ≫ 1. This theory provides an approximation for the concentration that remains valid for large distances from the center of mass, specifically for distances up to O(t) and thus much beyond the O(t^(1/2)) range where a standard Gaussian approximation holds. A byproduct of the approach is a closed-form expression for the effective diffusivity tensor that governs this Gaussian approximation. Monte Carlo simulations of Brownian particles confirm the large-deviation results and demonstrate their effectiveness in describing the scalar distribution when t is only moderately large.

  14. Potential of discrete Gaussian edge feathering method for improving abutment dosimetry in eMLC-delivered segmented-field electron conformal therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eley, John G.; Hogstrom, Kenneth R.; Matthews, Kenneth L.

    2011-12-15

    Purpose: The purpose of this work was to investigate the potential of discrete Gaussian edge feathering of the higher energy electron fields for improving abutment dosimetry in the planning volume when using an electron multileaf collimator (eMLC) to deliver segmented-field electron conformal therapy (ECT). Methods: A discrete (five-step) Gaussian edge spread function was used to match dose penumbras of differing beam energies (6-20 MeV) at a specified depth in a water phantom. Software was developed to define the leaf positions of an eMLC that most closely fit each electron field shape. The effect of 1D edge feathering of the higher energy field on dose homogeneity was computed and measured for segmented-field ECT treatment plans for three 2D PTVs in a water phantom, i.e., depth from the water surface to the distal PTV surface varied as a function of the x-axis (parallel to leaf motion) and remained constant along the y-axis (perpendicular to leaf motion). Additionally, the effect of 2D edge feathering was computed and measured for one radially symmetric, 3D PTV in a water phantom, i.e., depth from the water surface to the distal PTV surface varied as a function of both axes. For the 3D PTV, the feathering scheme was evaluated for 0.1-1.0-cm leaf widths. Dose calculations were performed using the pencil beam dose algorithm in the Pinnacle³ treatment planning system. Dose verification measurements were made using a prototype eMLC (1-cm leaf width). Results: 1D discrete Gaussian edge feathering reduced the standard deviation of dose in the 2D PTVs by 34, 34, and 39%. In the 3D PTV, the broad leaf width (1 cm) of the eMLC hindered the 2D application of the feathering solution to the 3D PTV, and the standard deviation of dose increased by 10%. However, 2D discrete Gaussian edge feathering with simulated eMLC leaf widths of 0.1-0.5 cm reduced the standard deviation of dose in the 3D PTV by 33-28%, respectively. Conclusions: A five-step discrete Gaussian edge spread function applied in 2D improves the abutment dosimetry but requires an eMLC leaf resolution better than 1 cm.

  15. Robust Gaussian Graphical Modeling via l1 Penalization

    PubMed Central

    Sun, Hokeun; Li, Hongzhe

    2012-01-01

    Summary Gaussian graphical models have been widely used as an effective method for studying the conditional independency structure among genes and for constructing genetic networks. However, gene expression data typically have heavier tails or more outlying observations than the standard Gaussian distribution. Such outliers in gene expression data can lead to wrong inference on the dependency structure among the genes. We propose an l1-penalized estimation procedure for the sparse Gaussian graphical models that is robustified against possible outliers. The likelihood function is weighted according to how much the observation deviates, where the deviation of the observation is measured based on its own likelihood. An efficient computational algorithm based on the coordinate gradient descent method is developed to obtain the minimizer of the negative penalized robustified likelihood, where nonzero elements of the concentration matrix represent the graphical links among the genes. After the graphical structure is obtained, we re-estimate the positive definite concentration matrix using an iterative proportional fitting algorithm. Through simulations, we demonstrate that the proposed robust method performs much better than the graphical Lasso for the Gaussian graphical models in terms of both graph structure selection and estimation when outliers are present. We apply the robust estimation procedure to an analysis of yeast gene expression data and show that the resulting graph has better biological interpretation than that obtained from the graphical Lasso. PMID:23020775
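
    For orientation, a minimal (non-robust) baseline using the standard graphical lasso from scikit-learn; the paper's robustified, likelihood-weighted estimator and the iterative proportional fitting re-estimation step are not implemented here, and the synthetic data are illustrative.

    ```python
    import numpy as np
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(7)

    # Synthetic expression-like data: 200 samples of 10 variables with a sparse,
    # chain-structured precision matrix, plus a few gross outliers.
    p = 10
    prec = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
    cov = np.linalg.inv(prec)
    X = rng.multivariate_normal(np.zeros(p), cov, size=200)
    X[:5] += 10.0            # outlying observations that distort the non-robust fit

    model = GraphicalLasso(alpha=0.1).fit(X)
    est_prec = model.precision_

    # Nonzero off-diagonal entries of the estimated precision matrix define the graph edges.
    edges = np.argwhere(np.abs(np.triu(est_prec, k=1)) > 1e-3)
    print("number of estimated edges:", len(edges))
    ```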

  16. Modeling Multi-Variate Gaussian Distributions and Analysis of Higgs Boson Couplings with the ATLAS Detector

    NASA Astrophysics Data System (ADS)

    Krohn, Olivia; Armbruster, Aaron; Gao, Yongsheng; Atlas Collaboration

    2017-01-01

    Software tools developed for the purpose of modeling CERN LHC pp collision data to aid in its interpretation are presented. Some measurements are not adequately described by a Gaussian distribution; thus an interpretation assuming Gaussian uncertainties will inevitably introduce bias, necessitating analytical tools to recreate and evaluate non-Gaussian features. One example is the measurement of Higgs boson production rates in different decay channels, and the interpretation of these measurements. The ratios of data to Standard Model expectations (μ) for five arbitrary signals were modeled by building five Poisson distributions with mixed signal contributions such that the measured values of μ are correlated. Algorithms were designed to recreate probability distribution functions of μ as multi-variate Gaussians, where the standard deviation (σ) and correlation coefficients (ρ) are parametrized. There was good success with modeling 1-D likelihood contours of μ, and the multi-dimensional distributions were well modeled within 1σ, but the model began to diverge beyond 2σ due to unmerited assumptions in developing ρ. Future plans to improve the algorithms and develop a user-friendly analysis package will also be discussed. NSF International Research Experiences for Students

  17. Packing Fraction of a Two-dimensional Eden Model with Random-Sized Particles

    NASA Astrophysics Data System (ADS)

    Kobayashi, Naoki; Yamazaki, Hiroshi

    2018-01-01

    We have performed a numerical simulation of a two-dimensional Eden model with random-size particles. In the present model, the particle radii are generated from a Gaussian distribution with mean μ and standard deviation σ. First, we have examined the bulk packing fraction for the Eden cluster and investigated the effects of the standard deviation and the total number of particles NT. We show that the bulk packing fraction depends on the number of particles and the standard deviation. In particular, for the dependence on the standard deviation, we have determined the asymptotic value of the bulk packing fraction in the limit of the dimensionless standard deviation. This value is larger than the packing fraction obtained in a previous study of the Eden model with uniform-size particles. Secondly, we have investigated the packing fraction of the entire Eden cluster including the effect of the interface fluctuation. We find that the entire packing fraction depends on the number of particles while it is independent of the standard deviation, in contrast to the bulk packing fraction. In a similar way to the bulk packing fraction, we have obtained the asymptotic value of the entire packing fraction in the limit NT → ∞. The obtained value of the entire packing fraction is smaller than that of the bulk value. This fact suggests that the interface fluctuation of the Eden cluster influences the packing fraction.

  18. Quantifying relative importance: Computing standardized effects in models with binary outcomes

    USGS Publications Warehouse

    Grace, James B.; Johnson, Darren; Lefcheck, Jonathan S.; Byrnes, Jarrett E.K.

    2018-01-01

    Results from simulation studies show that both the LT and OE methods of standardization support a similarly-broad range of coefficient comparisons. The LT method estimates effects that reflect underlying latent-linear propensities, while the OE method computes a linear approximation for the effects of predictors on binary responses. The contrast between assumptions for the two methods is reflected in persistently weaker standardized effects associated with OE standardization. Reliance on standard deviations for standardization (the traditional approach) is critically examined and shown to introduce substantial biases when predictors are non-Gaussian. The use of relevant ranges in place of standard deviations has the capacity to place LT and OE standardized coefficients on a more comparable scale. As ecologists address increasingly complex hypotheses, especially those that involve comparing the influences of different controlling factors (e.g., top-down versus bottom-up or biotic versus abiotic controls), comparable coefficients become a necessary component for evaluations.

  19. Non-Gaussian Analysis of Turbulent Boundary Layer Fluctuating Pressure on Aircraft Skin Panels

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Steinwolf, Alexander

    2005-01-01

    The purpose of the study is to investigate the probability density function (PDF) of turbulent boundary layer fluctuating pressures measured on the outer sidewall of a supersonic transport aircraft and to approximate these PDFs by analytical models. Experimental flight results show that the fluctuating pressure PDFs differ from the Gaussian distribution even for standard smooth surface conditions. The PDF tails are wider and longer than those of the Gaussian model. For pressure fluctuations in front of forward-facing step discontinuities, deviations from the Gaussian model are more significant and the PDFs become asymmetrical. There is a certain spatial pattern of the skewness and kurtosis behavior depending on the distance upstream from the step. All characteristics related to non-Gaussian behavior are highly dependent upon the distance from the step and the step height, less dependent on aircraft speed, and not dependent on the fuselage location. A Hermite polynomial transform model and a piecewise-Gaussian model fit the flight data well both for the smooth and stepped conditions. The piecewise-Gaussian approximation can be additionally regarded for convenience in usage after the model is constructed.

  20. The Laplace method for probability measures in Banach spaces

    NASA Astrophysics Data System (ADS)

    Piterbarg, V. I.; Fatalov, V. R.

    1995-12-01

    Contents
    §1. Introduction
    Chapter I. Asymptotic analysis of continual integrals in Banach space, depending on a large parameter
    §2. The large deviation principle and logarithmic asymptotics of continual integrals
    §3. Exact asymptotics of Gaussian integrals in Banach spaces: the Laplace method
      3.1. The Laplace method for Gaussian integrals taken over the whole Hilbert space: isolated minimum points ([167], I)
      3.2. The Laplace method for Gaussian integrals in Hilbert space: the manifold of minimum points ([167], II)
      3.3. The Laplace method for Gaussian integrals in Banach space ([90], [174], [176])
      3.4. Exact asymptotics of large deviations of Gaussian norms
    §4. The Laplace method for distributions of sums of independent random elements with values in Banach space
      4.1. The case of a non-degenerate minimum point ([137], I)
      4.2. A degenerate isolated minimum point and the manifold of minimum points ([137], II)
    §5. Further examples
      5.1. The Laplace method for the local time functional of a Markov symmetric process ([217])
      5.2. The Laplace method for diffusion processes, a finite number of non-degenerate minimum points ([116])
      5.3. Asymptotics of large deviations for Brownian motion in the Hölder norm
      5.4. Non-asymptotic expansion of a strong stable law in Hilbert space ([41])
    Chapter II. The double sum method - a version of the Laplace method in the space of continuous functions
    §6. Pickands' method of double sums
      6.1. General situations
      6.2. Asymptotics of the distribution of the maximum of a Gaussian stationary process
      6.3. Asymptotics of the probability of a large excursion of a Gaussian non-stationary process
    §7. Probabilities of large deviations of trajectories of Gaussian fields
      7.1. Homogeneous fields and fields with constant dispersion
      7.2. Finitely many maximum points of dispersion
      7.3. Manifold of maximum points of dispersion
      7.4. Asymptotics of distributions of maxima of Wiener fields
    §8. Exact asymptotics of large deviations of the norm of Gaussian vectors and processes with values in the spaces L_k^p and l^2. Gaussian fields with the set of parameters in Hilbert space
      8.1. Exact asymptotics of the distribution of the l_k^p-norm of a Gaussian finite-dimensional vector with dependent coordinates, p > 1
      8.2. Exact asymptotics of probabilities of high excursions of trajectories of processes of type χ²
      8.3. Asymptotics of the probabilities of large deviations of Gaussian processes with a set of parameters in Hilbert space [74]
      8.4. Asymptotics of distributions of maxima of the norms of l^2-valued Gaussian processes
      8.5. Exact asymptotics of large deviations for the l^2-valued Ornstein-Uhlenbeck process
    Bibliography

  1. Robust non-Gaussian statistics and long-range correlation of total ozone

    NASA Astrophysics Data System (ADS)

    Toumi, R.; Syroka, J.; Barnes, C.; Lewis, P.

    2001-01-01

    Three long-term total ozone time series at Camborne, Lerwick and Arosa are examined for their statistical properties. Non-Gaussian behaviour is seen for all locations. There are large interannual fluctuations in the higher moments of the probability distribution. However, only the mean for all stations and the summer standard deviation at Lerwick show significant trends. This suggests that there has been no long-term change in the stratospheric circulation, but there are decadal variations. The time series can also be characterised as scale invariant, with a Hurst exponent of about 0.8 for all three sites. The Arosa time series was found to be weakly intermittent, in agreement with the non-Gaussian characteristics of the data set.

  2. Analysis of a first order phase locked loop in the presence of Gaussian noise

    NASA Technical Reports Server (NTRS)

    Blasche, P. R.

    1977-01-01

    A first-order digital phase locked loop is analyzed by application of a Markov chain model. Steady state loop error probabilities, phase standard deviation, and mean loop transient times are determined for various input signal to noise ratios. Results for direct loop simulation are presented for comparison.

  3. USING THE HERMITE POLYNOMIALS IN RADIOLOGICAL MONITORING NETWORKS.

    PubMed

    Benito, G; Sáez, J C; Blázquez, J B; Quiñones, J

    2018-03-15

    The most interesting events in a Radiological Monitoring Network correspond to higher values of H*(10). The higher doses cause skewness in the probability density function (PDF) of the records, which are then no longer Gaussian. Within this work, the probability of having a dose more than 2 standard deviations above the mean is proposed for the surveillance of higher doses. Such probability is estimated by using the Hermite polynomials to reconstruct the PDF. The result is that the probability is ~6 ± 1%, much greater than the 2.5% corresponding to Gaussian PDFs, which may be of interest in the design of alarm levels for higher doses.
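
    A sketch of the Hermite-polynomial (Gram-Charlier A) reconstruction idea on synthetic, positively skewed dose-rate records; the contamination model and the truncation of the series after the third- and fourth-order terms are illustrative assumptions, not the paper's procedure.

    ```python
    import numpy as np
    from numpy.polynomial.hermite_e import hermeval
    from scipy.stats import norm, skew, kurtosis
    from scipy.integrate import quad

    rng = np.random.default_rng(8)

    # Synthetic H*(10) records: mostly background with occasional elevated readings.
    dose = np.where(rng.random(20000) < 0.03,
                    rng.normal(0.35, 0.08, 20000),
                    rng.normal(0.10, 0.02, 20000))

    z = (dose - dose.mean()) / dose.std(ddof=1)
    g1, g2 = skew(z), kurtosis(z, fisher=True)     # sample skewness and excess kurtosis

    def gram_charlier_pdf(x):
        """Gram-Charlier A series truncated after the He3 and He4 terms."""
        correction = hermeval(x, [1, 0, 0, g1 / 6.0, g2 / 24.0])
        return norm.pdf(x) * correction

    # Probability of a standardized dose exceeding 2 (i.e. > 2 standard deviations).
    p_reconstructed, _ = quad(gram_charlier_pdf, 2.0, 12.0)
    print(f"reconstructed P(Z > 2) = {p_reconstructed:.3f}   Gaussian value = {1 - norm.cdf(2):.3f}")
    ```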

  4. MARSnet: Mission-aware Autonomous Radar Sensor Network for Future Combat Systems

    DTIC Science & Technology

    2008-07-31

    ... Deviation: Consider the case of a Gaussian primary MF having a fixed mean, m_l, and an uncertain standard deviation that takes on values in an interval, i.e., a fuzzy set, so that ... (k = 1, ..., p), the upper and lower MFs of P_kk merge into one MF, A_Xk(x_k), in which case Theorem 1 simplifies to: Corollary ... the upper and lower MFs of A_k(x_k) merge into one crisp value, namely 1, in which case Theorem 1 simplifies further to: Corollary 2 ...

  5. Investigation of interpolation techniques for the reconstruction of the first dimension of comprehensive two-dimensional liquid chromatography-diode array detector data.

    PubMed

    Allen, Robert C; Rutan, Sarah C

    2011-10-31

    Simulated and experimental data were used to measure the effectiveness of common interpolation techniques during chromatographic alignment of comprehensive two-dimensional liquid chromatography-diode array detector (LC×LC-DAD) data. Interpolation was used to generate a sufficient number of data points in the sampled first chromatographic dimension to allow for alignment of retention times from different injections. Five different interpolation methods, linear interpolation followed by cross correlation, piecewise cubic Hermite interpolating polynomial, cubic spline, Fourier zero-filling, and Gaussian fitting, were investigated. The fully aligned chromatograms, in both the first and second chromatographic dimensions, were analyzed by parallel factor analysis to determine the relative area for each peak in each injection. A calibration curve was generated for the simulated data set. The standard error of prediction and percent relative standard deviation were calculated for the simulated peak for each technique. The Gaussian fitting interpolation technique resulted in the lowest standard error of prediction and average relative standard deviation for the simulated data. However, upon applying the interpolation techniques to the experimental data, most of the interpolation methods were not found to produce statistically different relative peak areas from each other. While most of the techniques were not statistically different, the performance was improved relative to the PARAFAC results obtained when analyzing the unaligned data. Copyright © 2011 Elsevier B.V. All rights reserved.
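
    An illustrative comparison of three of the interpolation approaches named above (PCHIP, cubic spline, and Gaussian fitting) on an undersampled synthetic first-dimension peak; the peak shape and sampling interval are invented, and the cross-correlation alignment and PARAFAC steps are not shown.

    ```python
    import numpy as np
    from scipy.interpolate import PchipInterpolator, CubicSpline
    from scipy.optimize import curve_fit

    def gaussian(t, a, t0, s):
        return a * np.exp(-(t - t0) ** 2 / (2 * s**2))

    # "True" first-dimension peak, sampled coarsely by the second-dimension cycle time.
    t_fine = np.linspace(0, 10, 501)
    true_peak = gaussian(t_fine, 1.0, 5.2, 0.8)
    t_sampled = np.arange(0, 10.5, 1.0)            # coarse sampling (one point per cycle)
    y_sampled = gaussian(t_sampled, 1.0, 5.2, 0.8)

    # Reconstruct the peak on the fine grid with three of the tested methods.
    pchip = PchipInterpolator(t_sampled, y_sampled)(t_fine)
    spline = CubicSpline(t_sampled, y_sampled)(t_fine)
    (a, t0, s), _ = curve_fit(gaussian, t_sampled, y_sampled, p0=(1.0, 5.0, 1.0))
    gauss_fit = gaussian(t_fine, a, t0, s)

    for name, y in (("PCHIP", pchip), ("cubic spline", spline), ("Gaussian fit", gauss_fit)):
        rmse = np.sqrt(np.mean((y - true_peak) ** 2))
        print(f"{name:12s} RMSE vs true peak: {rmse:.4f}")
    ```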

  6. Computing approximate random Delta v magnitude probability densities. [for spacecraft trajectory correction

    NASA Technical Reports Server (NTRS)

    Chadwick, C.

    1984-01-01

    This paper describes the development and use of an algorithm to compute approximate statistics of the magnitude of a single random trajectory correction maneuver (TCM) Delta v vector. The TCM Delta v vector is modeled as a three-component Cartesian vector, each of whose components is a random variable having a normal (Gaussian) distribution with zero mean and possibly unequal standard deviations. The algorithm uses these standard deviations as input to produce approximations to (1) the mean and standard deviation of the magnitude of Delta v, (2) points of the probability density function of the magnitude of Delta v, and (3) points of the cumulative and inverse cumulative distribution functions of Delta v. The approximations are based on Monte Carlo techniques developed in a previous paper by the author and extended here. The algorithm described is expected to be useful in both pre-flight planning and in-flight analysis of maneuver propellant requirements for space missions.
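
    A small Monte Carlo sketch of the quantity being approximated: the magnitude of a Delta v vector whose Cartesian components are zero-mean Gaussians with possibly unequal standard deviations. The component standard deviations are arbitrary, and the paper's closed-form approximations are not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # Component standard deviations of the TCM Delta-v vector (m/s, illustrative).
    sigmas = np.array([0.5, 0.8, 1.2])

    # Monte Carlo sample of the Delta-v magnitude.
    components = rng.normal(0.0, sigmas, size=(1_000_000, 3))
    dv = np.linalg.norm(components, axis=1)

    print("mean |Delta v|:", dv.mean())
    print("std  |Delta v|:", dv.std())
    # Points of the cumulative distribution, e.g. the 50th, 95th and 99th percentiles.
    print("percentiles 50/95/99:", np.percentile(dv, [50, 95, 99]))
    ```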

  7. The semantic Stroop effect: An ex-Gaussian analysis.

    PubMed

    White, Darcy; Risko, Evan F; Besner, Derek

    2016-10-01

    Previous analyses of the standard Stroop effect (which typically uses color words that form part of the response set) have documented effects on mean reaction times in hundreds of experiments in the literature. Less well known is the fact that ex-Gaussian analyses reveal that such effects are seen in (a) the mean of the normal distribution (mu), as well as in (b) the standard deviation of the normal distribution (sigma) and (c) the tail (tau). No ex-Gaussian analysis exists in the literature with respect to the semantically based Stroop effect (which contrasts incongruent color-associated words with, e.g., neutral controls). In the present experiments, we investigated whether the semantically based Stroop effect is also seen in the three ex-Gaussian parameters. Replicating previous reports, color naming was slower when the color was carried by an irrelevant (but incongruent) color-associated word (e.g., sky, tomato) than when the control items consisted of neutral words (e.g., keg, palace) in each of four experiments. An ex-Gaussian analysis revealed that this semantically based Stroop effect was restricted to the arithmetic mean and mu; no semantic Stroop effect was observed in tau. These data are consistent with the views (1) that there is a clear difference in the source of the semantic Stroop effect, as compared to the standard Stroop effect (evidenced by the presence vs. absence of an effect on tau), and (2) that interference associated with response competition on incongruent trials in tau is absent in the semantic Stroop effect.
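
    A minimal example of fitting an ex-Gaussian to a reaction-time sample and recovering mu, sigma, and tau, using SciPy's exponnorm distribution (whose shape parameter K equals tau/sigma). The simulated RTs are illustrative, not the experiments' data.

    ```python
    import numpy as np
    from scipy.stats import exponnorm

    rng = np.random.default_rng(10)

    # Simulated reaction times (ms): Gaussian component plus exponential tail.
    mu_true, sigma_true, tau_true = 500.0, 50.0, 120.0
    rt = rng.normal(mu_true, sigma_true, 2000) + rng.exponential(tau_true, 2000)

    # Fit: exponnorm is parameterized as (K, loc, scale) with K = tau / sigma.
    K, loc, scale = exponnorm.fit(rt)
    mu_hat, sigma_hat, tau_hat = loc, scale, K * scale

    print(f"mu ≈ {mu_hat:.0f} ms, sigma ≈ {sigma_hat:.0f} ms, tau ≈ {tau_hat:.0f} ms")
    ```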

  8. Acoustic response variability in automotive vehicles

    NASA Astrophysics Data System (ADS)

    Hills, E.; Mace, B. R.; Ferguson, N. S.

    2009-03-01

    A statistical analysis of a series of measurements of the audio-frequency response of a large set of automotive vehicles is presented: a small hatchback model with both a three-door (411 vehicles) and five-door (403 vehicles) derivative and a mid-sized family five-door car (316 vehicles). The sets included vehicles of various specifications, engines, gearboxes, interior trim, wheels and tyres. The tests were performed in a hemianechoic chamber with the temperature and humidity recorded. Two tests were performed on each vehicle and the interior cabin noise measured. In the first, the excitation was acoustically induced by sets of external loudspeakers. In the second test, predominantly structure-borne noise was induced by running the vehicle at a steady speed on a rough roller. For both types of excitation, it is seen that the effects of temperature are small, indicating that manufacturing variability is larger than that due to temperature for the tests conducted. It is also observed that there are no significant outlying vehicles, i.e. there are at most only a few vehicles that consistently have the lowest or highest noise levels over the whole spectrum. For the acoustically excited tests, measured 1/3-octave noise reduction levels typically have a spread of 5 dB or so and the normalised standard deviation of the linear data is typically 0.1 or higher. Regarding the statistical distribution of the linear data, a lognormal distribution is a somewhat better fit than a Gaussian distribution for lower 1/3-octave bands, while the reverse is true at higher frequencies. For the distribution of the overall linear levels, a Gaussian distribution is generally the most representative. As a simple description of the response variability, it is sufficient for this series of measurements to assume that the acoustically induced airborne cabin noise is best described by a Gaussian distribution with a normalised standard deviation between 0.09 and 0.145. There is generally considerable variability in the roller-induced noise, with individual 1/3-octave levels varying by typically 15 dB or so and with the normalised standard deviation being in the range 0.2-0.35 or more. These levels are strongly affected by wheel rim and tyre constructions. For vehicles with nominally identical wheel rims and tyres, the normalised standard deviation for 1/3-octave levels in the frequency range 40-600 Hz is 0.2 or so. The distribution of the linear roller-induced noise level in each 1/3-octave frequency band is well described by a lognormal distribution as is the overall level. As a simple description of the response variability, it is sufficient for this series of measurements to assume that the roller-induced road noise is best described by a lognormal distribution with a normalised standard deviation of 0.2 or so, but that this can be significantly affected by the tyre and rim type, especially at lower frequencies.

  9. Offshore fatigue design turbulence

    NASA Astrophysics Data System (ADS)

    Larsen, Gunner C.

    2001-07-01

    Fatigue damage on wind turbines is mainly caused by stochastic loading originating from turbulence. While onshore sites display large differences in terrain topology, and thereby also in turbulence conditions, offshore sites are far more homogeneous, as the majority of them are likely to be associated with shallow water areas. However, despite this fact, specific recommendations on offshore turbulence intensities, applicable for fatigue design purposes, are lacking in the present IEC code. This article presents specific guidelines for such loading. These guidelines are based on the statistical analysis of a large number of wind data originating from two Danish shallow water offshore sites. The turbulence standard deviation depends on the mean wind speed, upstream conditions, measuring height and thermal convection. Defining a population of turbulence standard deviations, at a given measuring position, uniquely by the mean wind speed, variations in upstream conditions and atmospheric stability will appear as variability of the turbulence standard deviation. Distributions of such turbulence standard deviations, conditioned on the mean wind speed, are quantified by fitting the measured data to logarithmic Gaussian distributions. By combining a simple heuristic load model with the parametrized conditional probability density functions of the turbulence standard deviations, an empirical offshore design turbulence intensity is determined. For pure stochastic loading (as associated with standstill situations), the design turbulence intensity yields a fatigue damage equal to the average fatigue damage caused by the distributed turbulence intensity. If the stochastic loading is combined with a periodic deterministic loading (as in the normal operating situation), the proposed design turbulence intensity is shown to be conservative.

  10. Experimental Validation of Thermal Retinal Models of Damage from Laser Radiation

    DTIC Science & Technology

    1979-08-01

    ... for measuring relative intensity profile with a thermocouple or fiber-optic sensor ... B-2 Calculated relative intensity profiles measured by 5- and 10-μm-radius sensors of a Gaussian beam, with standard deviation of 10 μm ... the Air Force developed a model for the mathematical prediction of thermal effects of laser radiation on the eye (8). Given the characteristics ...

  11. First flavor-tagged determination of bounds on mixing-induced CP violation in B_s^0 → J/ψφ decays.

    PubMed

    Aaltonen, T; Adelman, J; Akimoto, T; Albrow, M G; Alvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Aoki, M; Apollinari, G; Apresyan, A; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Azzi-Bacchetta, P; Azzurri, P; Bacchetta, N; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Baroiant, S; Bartsch, V; Bauer, G; Beauchemin, P-H; Bedeschi, F; Bednar, P; Behari, S; Bellettini, G; Bellinger, J; Belloni, A; Benjamin, D; Beretvas, A; Beringer, J; Berry, T; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bolshov, A; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bridgeman, A; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Cooper, B; Copic, K; Cordelli, M; Cortiana, G; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cully, J C; Dagenhart, D; Datta, M; Davies, T; de Barbaro, P; De Cecco, S; Deisher, A; De Lentdecker, G; De Lorenzo, G; Dell'Orso, M; Demortier, L; Deng, J; Deninno, M; De Pedis, D; Derwent, P F; Di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Forrester, S; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garberson, F; Garcia, J E; Garfinkel, A F; Genser, K; Gerberich, H; Gerdes, D; Giagu, S; Giakoumopolou, V; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C M; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Hamilton, A; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, D; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Hays, C; Heck, M; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hewamanage, S; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; Iyutin, B; James, E; Jayatilaka, B; Jeans, D; Jeon, E J; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Kar, D; Karchin, P E; Kato, Y; Kephart, R; Kerzel, U; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Klute, M; Knuteson, B; Ko, B R; Koay, S A; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kraus, J; Kreps, M; Kroll, J; Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhlmann, S E; Kuhr, T; Kulkarni, N P; Kusakabe, Y; 
Kwang, S; Laasanen, A T; Labarga, L; Lai, S; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; LeCompte, T; Lee, J; Lee, J; Lee, Y J; Lee, S W; Lefèvre, R; Leonardo, N; Leone, S; Levy, S; Lewis, J D; Lin, C; Lin, C S; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, C; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Lovas, L; Lu, R-S; Lucchesi, D; Lueck, J; Luci, C; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; MacQueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Malik, S; Manca, G; Manousakis, A; Margaroli, F; Marino, C; Marino, C P; Martin, A; Martin, M; Martin, V; Martínez, M; Martínez-Ballarín, R; Maruyama, T; Mastrandrea, P; Masubuchi, T; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzemer, S; Menzione, A; Merkel, P; Mesropian, C; Messina, A; Miao, T; Miladinovic, N; Miles, J; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyake, H; Moed, S; Moggi, N; Moon, C S; Moore, R; Morello, M; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Nagano, A; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norman, M; Norniella, O; Nurse, E; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Oldeman, R; Orava, R; Osterberg, K; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Piedra, J; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Portell, X; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Safonov, A; Sakumoto, W K; Salamanna, G; Saltó, O; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, E E; Schmidt, M A; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyrla, A; Shalhout, S Z; Shapiro, M D; Shears, T; Shepard, P F; Sherman, D; Shimojima, M; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soderberg, M; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spinella, F; Spreitzer, T; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Sun, H; Suslov, I; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thompson, G A; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Tourneur, S; Trischuk, W; Tu, Y; Turini, N; Ukegawa, F; Uozumi, S; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Veszpremi, V; Vidal, M; Vidal, R; Vila, I; Vilar, R; Vine, T; Vogel, M; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner-Kuhr, J; Wagner, W; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, 
M; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Yagil, A; Yamamoto, K; Yamaoka, J; Yamashita, T; Yang, C; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zheng, Y; Zucchelli, S

    2008-04-25

    This Letter describes the first determination of bounds on the CP-violation parameter 2β_s using B_s^0 decays in which the flavor of the bottom meson at production is identified. The result is based on approximately 2000 B_s^0 → J/ψφ decays reconstructed in a 1.35 fb^-1 data sample collected with the CDF II detector using pp̄ collisions produced at the Fermilab Tevatron. We report confidence regions in the two-dimensional space of 2β_s and the decay-width difference ΔΓ. Assuming the standard model predictions of 2β_s and ΔΓ, the probability of a deviation as large as the level of the observed data is 15%, corresponding to 1.5 Gaussian standard deviations.

  12. Image contrast enhancement based on a local standard deviation model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Dah-Chung; Wu, Wen-Rong

    1996-12-31

    The adaptive contrast enhancement (ACE) algorithm is a widely used image enhancement method, which needs a contrast gain to adjust high frequency components of an image. In the literature, the gain is usually inversely proportional to the local standard deviation (LSD) or is a constant. But these cause two problems in practical applications, i.e., noise overenhancement and ringing artifacts. In this paper a new gain is developed based on Hunt's Gaussian image model to prevent the two defects. The new gain is a nonlinear function of LSD and has the desired characteristic of emphasizing the LSD regions in which details are concentrated. We have applied the new ACE algorithm to chest x-ray images and the simulations show the effectiveness of the proposed algorithm.
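
    A compact sketch of the ACE idea on a synthetic image: estimate the local mean and local standard deviation (LSD) with a sliding window, then amplify the high-frequency part with a gain that depends on the LSD. The gain function below is a placeholder, not the nonlinear gain derived in the paper from Hunt's Gaussian image model.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def ace_enhance(img: np.ndarray, window: int = 15, max_gain: float = 3.0) -> np.ndarray:
        """Adaptive contrast enhancement with an LSD-dependent gain (illustrative)."""
        local_mean = uniform_filter(img, size=window)
        local_sq = uniform_filter(img * img, size=window)
        lsd = np.sqrt(np.clip(local_sq - local_mean**2, 0.0, None))

        # Placeholder gain: proportional to the normalized LSD and clipped to [1, max_gain];
        # the paper's gain is a different nonlinear function of LSD chosen to avoid noise
        # over-enhancement and ringing.
        gain = np.clip(lsd / (lsd.mean() + 1e-6), 1.0, max_gain)

        return local_mean + gain * (img - local_mean)

    img = np.random.default_rng(11).random((256, 256))
    enhanced = ace_enhance(img)
    print(enhanced.shape, float(enhanced.min()), float(enhanced.max()))
    ```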

  13. Impact of combustion products from Space Shuttle launches on ambient air quality

    NASA Technical Reports Server (NTRS)

    Dumbauld, R. K.; Bowers, J. F.; Cramer, H. E.

    1974-01-01

    The present work describes some multilayer diffusion models and a computer program for these models developed to predict the impact of ground clouds formed during Space Shuttle launches on ambient air quality. The diffusion models are based on the Gaussian plume equation for an instantaneous volume source. Cloud growth is estimated on the basis of measurable meteorological parameters: standard deviation of the wind azimuth angle, standard deviation of wind elevation angle, vertical wind-speed shear, vertical wind-direction shear, and depth of the surface mixing layer. Calculations using these models indicate that Space Shuttle launches under a variety of meteorological regimes at Kennedy Space Center and Vandenberg AFB are unlikely to endanger the exposure standards for HCl; similar results have been obtained for CO and Al2O3. However, the possibility that precipitation scavenging of the ground cloud might result in an acidic rain that could damage vegetation has not been investigated.
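
    For reference, the instantaneous (puff) form of the Gaussian diffusion equation on which such models are built, in a minimal single-layer sketch that ignores ground reflection, plume rise, and the multilayer structure of the NASA models; the release mass and dispersion parameters are illustrative.

    ```python
    import numpy as np

    def gaussian_puff(x, y, z, x0, y0, z0, q, sx, sy, sz):
        """Concentration from an instantaneous release of mass q centred at (x0, y0, z0),
        with dispersion standard deviations sx, sy, sz (single layer, no reflection)."""
        norm = q / ((2 * np.pi) ** 1.5 * sx * sy * sz)
        return norm * np.exp(-((x - x0) ** 2 / (2 * sx**2)
                               + (y - y0) ** 2 / (2 * sy**2)
                               + (z - z0) ** 2 / (2 * sz**2)))

    # Example: 1000 kg released, cloud centroid advected to (5000 m, 0, 800 m), with
    # dispersion coefficients grown from the measured wind-direction standard deviations.
    c = gaussian_puff(5200.0, 100.0, 2.0, 5000.0, 0.0, 800.0,
                      q=1.0e6, sx=300.0, sy=250.0, sz=200.0)   # q in grams -> c in g/m^3
    print(f"near-ground concentration: {c:.3e} g/m^3")
    ```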

  14. Effects of smoking abstinence on reaction time variability in smokers with and without ADHD: an ex-Gaussian analysis.

    PubMed

    Kollins, Scott H; McClernon, F Joseph; Epstein, Jeff N

    2009-02-01

Smoking abstinence differentially affects cognitive functioning in smokers with ADHD, compared to non-ADHD smokers. Alternative approaches for analyzing reaction time data from these tasks may further elucidate important group differences. Adults smoking ≥15 cigarettes with (n=12) or without (n=14) a diagnosis of ADHD completed a continuous performance task (CPT) during two sessions under two separate laboratory conditions--a 'Satiated' condition wherein participants smoked up to and during the session; and an 'Abstinent' condition, in which participants were abstinent overnight and during the session. Reaction time (RT) distributions from the CPT were modeled to fit an ex-Gaussian distribution. The indicator of central tendency for RT from the normal component of the RT distribution (mu) showed a main effect of Group (ADHD < Control) and a Group x Session interaction (ADHD group RTs decreased when abstinent). RT standard deviation for the normal component of the distribution (sigma) showed no effects. The ex-Gaussian parameter tau, which describes the mean and standard deviation of the non-normal component of the distribution, showed significant effects of session (Abstinent > Satiated), Group x Session interaction (ADHD increased significantly under Abstinent condition compared to Control), and a trend toward a main effect of Group (ADHD > Control). Alternative approaches to analyzing RT data provide a more detailed description of the effects of smoking abstinence in ADHD and non-ADHD smokers and results differ from analyses using more traditional approaches. These findings have implications for understanding the neuropsychopharmacology of nicotine and nicotine withdrawal.
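
    The ex-Gaussian parameters (mu, sigma, tau) referred to above are typically obtained by maximum-likelihood fitting of an exponentially modified Gaussian to the RT distribution. A minimal sketch using SciPy's exponnorm (whose shape parameter K relates to tau via tau = K * scale) is given below; the RT sample is simulated, not data from the study.

```python
import numpy as np
from scipy.stats import exponnorm

rng = np.random.default_rng(0)
# Simulated reaction times (ms): normal part (mu, sigma) plus an exponential tail (tau).
mu, sigma, tau = 400.0, 40.0, 120.0
rts = rng.normal(mu, sigma, 500) + rng.exponential(tau, 500)

# Fit an ex-Gaussian: exponnorm uses shape K with tau = K * scale.
K, loc, scale = exponnorm.fit(rts)
print("mu =", loc, "sigma =", scale, "tau =", K * scale)
```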

  15. Effects of smoking abstinence on reaction time variability in smokers with and without ADHD: An ex-Gaussian analysis

    PubMed Central

    Kollins, Scott H.; McClernon, F. Joseph; Epstein, Jeff N.

    2009-01-01

Smoking abstinence differentially affects cognitive functioning in smokers with ADHD, compared to non-ADHD smokers. Alternative approaches for analyzing reaction time data from these tasks may further elucidate important group differences. Adults smoking ≥15 cigarettes with (n = 12) or without (n = 14) a diagnosis of ADHD completed a continuous performance task (CPT) during two sessions under two separate laboratory conditions—a 'Satiated' condition wherein participants smoked up to and during the session; and an 'Abstinent' condition, in which participants were abstinent overnight and during the session. Reaction time (RT) distributions from the CPT were modeled to fit an ex-Gaussian distribution. The indicator of central tendency for RT from the normal component of the RT distribution (mu) showed a main effect of Group (ADHD < Control) and a Group × Session interaction (ADHD group RTs decreased when abstinent). RT standard deviation for the normal component of the distribution (sigma) showed no effects. The ex-Gaussian parameter tau, which describes the mean and standard deviation of the non-normal component of the distribution, showed significant effects of Session (Abstinent > Satiated), a Group × Session interaction (ADHD increased significantly under Abstinent condition compared to Control), and a trend toward a main effect of Group (ADHD > Control). Alternative approaches to analyzing RT data provide a more detailed description of the effects of smoking abstinence in ADHD and non-ADHD smokers and results differ from analyses using more traditional approaches. These findings have implications for understanding the neuropsychopharmacology of nicotine and nicotine withdrawal. PMID:19041198

  16. Axial acoustic radiation force on a sphere in Gaussian field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Rongrong; Liu, Xiaozhou, E-mail: xzliu@nju.edu.cn; Gong, Xiufen

    2015-10-28

Based on the finite series method, the acoustic radiation force resulting from a Gaussian beam incident on a spherical object is investigated analytically. When the position of the particle deviates from the center of the beam, the Gaussian beam is expanded in spherical functions about the particle's center and the expansion coefficients of the Gaussian beam are calculated. The analytical expression for the acoustic radiation force on spherical particles deviating from the Gaussian beam center is deduced. The dependence of the acoustic radiation force on the acoustic frequency and the offset distance from the Gaussian beam center is investigated. Results have been presented for Gaussian beams with different wavelengths, and it has been shown that the interaction of a Gaussian beam with a sphere can result in an attractive axial force under specific operational conditions. The results indicate the capability of manipulating and separating spherical particles based on their mechanical and acoustical properties, and may provide a theoretical basis for the development of single-beam acoustical tweezers.

  17. Non-gaussianity versus nonlinearity of cosmological perturbations.

    PubMed

    Verde, L

    2001-06-01

Following the discovery of the cosmic microwave background, the hot big-bang model has become the standard cosmological model. In this theory, small primordial fluctuations are subsequently amplified by gravity to form the large-scale structure seen today. Different theories for unified models of particle physics lead to different predictions for the statistical properties of the primordial fluctuations, which can be divided into two classes: gaussian and non-gaussian. Convincing evidence against or for gaussian initial conditions would rule out many scenarios and point us toward a physical theory for the origin of structures. The statistical distribution of cosmological perturbations, as we observe them, can deviate from the gaussian distribution in several different ways. Even if perturbations start off gaussian, nonlinear gravitational evolution can introduce non-gaussian features. Additionally, our knowledge of the Universe comes principally from the study of luminous material such as galaxies, but galaxies might not be faithful tracers of the underlying mass distribution. The relationship between fluctuations in the mass and in the galaxy distribution (bias) is often assumed to be local, but could well be nonlinear. Moreover, galaxy catalogues use the redshift as the third spatial coordinate: the resulting redshift-space map of the galaxy distribution is nonlinearly distorted by peculiar velocities. Nonlinear gravitational evolution, biasing, and redshift-space distortion introduce non-gaussianity, even in an initially gaussian fluctuation field. I investigate the statistical tools that allow us, in principle, to disentangle the above different effects, and the observational datasets we require to do so in practice.

  18. Lévy-like diffusion in eye movements during spoken-language comprehension.

    PubMed

    Stephen, Damian G; Mirman, Daniel; Magnuson, James S; Dixon, James A

    2009-05-01

    This study explores the diffusive properties of human eye movements during a language comprehension task. In this task, adults are given auditory instructions to locate named objects on a computer screen. Although it has been convention to model visual search as standard Brownian diffusion, we find evidence that eye movements are hyperdiffusive. Specifically, we use comparisons of maximum-likelihood fit as well as standard deviation analysis and diffusion entropy analysis to show that visual search during language comprehension exhibits Lévy-like rather than Gaussian diffusion.

  19. Lévy-like diffusion in eye movements during spoken-language comprehension

    NASA Astrophysics Data System (ADS)

    Stephen, Damian G.; Mirman, Daniel; Magnuson, James S.; Dixon, James A.

    2009-05-01

    This study explores the diffusive properties of human eye movements during a language comprehension task. In this task, adults are given auditory instructions to locate named objects on a computer screen. Although it has been convention to model visual search as standard Brownian diffusion, we find evidence that eye movements are hyperdiffusive. Specifically, we use comparisons of maximum-likelihood fit as well as standard deviation analysis and diffusion entropy analysis to show that visual search during language comprehension exhibits Lévy-like rather than Gaussian diffusion.

  20. Characterizing the Spatial Density Functions of Neural Arbors

    NASA Astrophysics Data System (ADS)

    Teeter, Corinne Michelle

Recently, it has been proposed that a universal function describes the way in which all arbors (axons and dendrites) spread their branches over space. Data from fish retinal ganglion cells as well as cortical and hippocampal arbors from mouse, rat, cat, monkey and human provide evidence that all arbor density functions (adf) can be described by a Gaussian function truncated at approximately two standard deviations. A Gaussian density function implies that there is a minimal set of parameters needed to describe an adf: two or three standard deviations (depending on the dimensionality of the arbor) and an amplitude. However, the parameters needed to completely describe an adf could be further constrained by a scaling law found between the product of the standard deviations and the amplitude of the function. In the following document, I examine the scaling law relationship in order to determine the minimal set of parameters needed to describe an adf. First, I find that the flat, two-dimensional arbors of fish retinal ganglion cells require only two out of the three fundamental parameters to completely describe their density functions. Second, the three-dimensional, volume filling, cortical arbors require four fundamental parameters: three standard deviations and the total length of an arbor (which corresponds to the amplitude of the function). Next, I characterize the shape of arbors in the context of the fundamental parameters. I show that the parameter distributions of the fish retinal ganglion cells are largely homogenous. In general, axons are bigger and less dense than dendrites; however, they are similarly shaped. The parameter distributions of these two arbor types overlap and, therefore, can only be differentiated from one another probabilistically based on their adfs. Despite artifacts in the cortical arbor data, different types of arbors (apical dendrites, non-apical dendrites, and axons) can generally be differentiated based on their adfs. In addition, within arbor type, there is evidence of different neuron classes (such as interneurons and pyramidal cells). How well different types and classes of arbors can be differentiated is quantified using the Random Forest™ supervised learning algorithm.

  1. Gaussian pre-filtering for uncertainty minimization in digital image correlation using numerically-designed speckle patterns

    NASA Astrophysics Data System (ADS)

    Mazzoleni, Paolo; Matta, Fabio; Zappa, Emanuele; Sutton, Michael A.; Cigada, Alfredo

    2015-03-01

    This paper discusses the effect of pre-processing image blurring on the uncertainty of two-dimensional digital image correlation (DIC) measurements for the specific case of numerically-designed speckle patterns having particles with well-defined and consistent shape, size and spacing. Such patterns are more suitable for large measurement surfaces on large-scale specimens than traditional spray-painted random patterns without well-defined particles. The methodology consists of numerical simulations where Gaussian digital filters with varying standard deviation are applied to a reference speckle pattern. To simplify the pattern application process for large areas and increase contrast to reduce measurement uncertainty, the speckle shape, mean size and on-center spacing were selected to be representative of numerically-designed patterns that can be applied on large surfaces through different techniques (e.g., spray-painting through stencils). Such 'designer patterns' are characterized by well-defined regions of non-zero frequency content and non-zero peaks, and are fundamentally different from typical spray-painted patterns whose frequency content exhibits near-zero peaks. The effect of blurring filters is examined for constant, linear, quadratic and cubic displacement fields. Maximum strains between ±250 and ±20,000 με are simulated, thus covering a relevant range for structural materials subjected to service and ultimate stresses. The robustness of the simulation procedure is verified experimentally using a physical speckle pattern subjected to constant displacements. The stability of the relation between standard deviation of the Gaussian filter and measurement uncertainty is assessed for linear displacement fields at varying image noise levels, subset size, and frequency content of the speckle pattern. It is shown that bias error as well as measurement uncertainty are minimized through Gaussian pre-filtering. This finding does not apply to typical spray-painted patterns without well-defined particles, for which image blurring is only beneficial in reducing bias errors.

  2. Computational Characterization of Impact Induced Multi-Scale Dissipation in Reactive Solid Composites

    DTIC Science & Technology

    2016-07-01

Predicted variation in (a) hot-spot number density, (b) hot-spot volume fraction, and (c) hot-spot specific surface area for each ensemble with piston speed...packing density, characterized by its effective solid volume fraction φs,0, affects hot-spot statistics for pressure dominated waves corresponding to...distribution in solid volume fraction within each ensemble was nearly Gaussian, and its standard deviation decreased with increasing density. Analysis of

  3. Evaluating the Impact of the Number of Satellite Altimeters Used in an Assimilative Ocean Prediction System

    DTIC Science & Technology

    2009-10-20

standard deviation. The y axis indicates the scaled MB, MB' = MB / [(1/N) Σ_{j=1}^{N} (O_j - Ō)²]^{1/2} (12), or the biweight version, MB'_bw = MB_bw / ⟨⟨O_j⟩⟩_bw ... RMSE'_bw,unbiased = RMSE_bw,unbiased / ⟨⟨O_j⟩⟩_bw (15). To investigate the impact of outliers, results from both the Gaussian statistics [Eqs. (12) and

  4. Nonstationary stochastic charge fluctuations of a dust particle in plasmas.

    PubMed

    Shotorban, B

    2011-06-01

    Stochastic charge fluctuations of a dust particle that are due to discreteness of electrons and ions in plasmas can be described by a one-step process master equation [T. Matsoukas and M. Russell, J. Appl. Phys. 77, 4285 (1995)] with no exact solution. In the present work, using the system size expansion method of Van Kampen along with the linear noise approximation, a Fokker-Planck equation with an exact Gaussian solution is developed by expanding the master equation. The Gaussian solution has time-dependent mean and variance governed by two ordinary differential equations modeling the nonstationary process of dust particle charging. The model is tested via the comparison of its results to the results obtained by solving the master equation numerically. The electron and ion currents are calculated through the orbital motion limited theory. At various times of the nonstationary process of charging, the model results are in a very good agreement with the master equation results. The deviation is more significant when the standard deviation of the charge is comparable to the mean charge in magnitude.

  5. Finding SDSS Galaxy Clusters in 4-dimensional Color Space Using the False Discovery Rate

    NASA Astrophysics Data System (ADS)

    Nichol, R. C.; Miller, C. J.; Reichart, D.; Wasserman, L.; Genovese, C.; SDSS Collaboration

    2000-12-01

We describe a recently developed statistical technique that provides a meaningful cut-off in probability-based decision making. We are concerned with multiple testing, where each test produces a well-defined probability (or p-value). By well-defined, we mean that the null hypothesis used to determine the p-value is fully understood and appropriate. The method is entitled False Discovery Rate (FDR) and its largest advantage over other measures is that it allows one to specify a maximal amount of acceptable error. As an example of this tool, we apply FDR to a four-dimensional clustering algorithm using SDSS data. For each galaxy (or test galaxy), we count the number of neighbors that fit within one standard deviation of a four dimensional Gaussian centered on that test galaxy. The mean and standard deviation of that Gaussian are determined from the colors and errors of the test galaxy. We then take that same Gaussian and place it on a random selection of n galaxies and make a similar count. In the limit of large n, we expect the median count around these random galaxies to represent a typical field galaxy. For every test galaxy we determine the probability (or p-value) that it is a field galaxy based on these counts. A low p-value implies that the test galaxy is in a cluster environment. Once we have a p-value for every galaxy, we use FDR to determine at what level we should make our probability cut-off. Once this cut-off is made, we have a final sample of galaxies that are cluster-like galaxies. Using FDR, we also know the maximum amount of field contamination in our cluster galaxy sample. We present our preliminary galaxy clustering results using these methods.
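
    The FDR cut-off described above is usually implemented with the Benjamini-Hochberg step-up procedure. The sketch below applies that generic procedure to a vector of per-galaxy p-values; the p-values are illustrative and the function is not the SDSS pipeline code.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of rejected tests at FDR level alpha."""
    p = np.asarray(pvals)
    n = p.size
    order = np.argsort(p)
    thresholds = alpha * (np.arange(1, n + 1) / n)
    below = p[order] <= thresholds
    reject = np.zeros(n, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()      # largest rank passing the step-up test
        reject[order[:k + 1]] = True        # reject that p-value and all smaller ones
    return reject

# Example: p-values for test galaxies; True marks cluster-like galaxies.
pvals = np.array([0.001, 0.008, 0.039, 0.041, 0.20, 0.74])
print(benjamini_hochberg(pvals, alpha=0.05))
```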

  6. Common inputs in subthreshold membrane potential: The role of quiescent states in neuronal activity

    NASA Astrophysics Data System (ADS)

    Montangie, Lisandro; Montani, Fernando

    2018-06-01

Experiments in certain regions of the cerebral cortex suggest that the spiking activity of neuronal populations is regulated by common non-Gaussian inputs across neurons. We model these deviations from random-walk processes with q-Gaussian distributions into simple threshold neurons, and investigate the scaling properties in large neural populations. We show that deviations from the Gaussian statistics provide a natural framework to regulate population statistics such as sparsity, entropy, and specific heat. This type of description allows us to provide an adequate strategy to explain the information encoding in the case of low neuronal activity and its possible implications on information transmission.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fried, D; Meier, J; Mawlawi, O

Purpose: Use a NEMA-IEC PET phantom to assess the robustness of FDG-PET-based radiomics features to changes in reconstruction parameters across different scanners. Methods: We scanned a NEMA-IEC PET phantom on 3 different scanners (GE Discovery VCT, GE Discovery 710, and Siemens mCT) using a FDG source-to-background ratio of 10:1. Images were retrospectively reconstructed using different iterations (2–3), subsets (21–24), Gaussian filter widths (2, 4, 6mm), and matrix sizes (128,192,256). The 710 and mCT used time-of-flight and point-spread-functions in reconstruction. The axial image through the center of the 6 active spheres was used for analysis. A region-of-interest containing all spheres was able to simulate a heterogeneous lesion due to partial volume effects. Maximum voxel deviations from all retrospectively reconstructed images (18 per scanner) were compared to our standard clinical protocol. PET images from 195 non-small cell lung cancer patients were used to compare feature variation. The ratio of a feature's standard deviation from the patient cohort versus the phantom images was calculated to assess feature robustness. Results: Across all images, the percentage of voxels differing by <1SUV and <2SUV ranged from 61–92% and 88–99%, respectively. Voxel-voxel similarity decreased when using higher resolution image matrices (192/256 versus 128) and was comparable across scanners. Taking the ratio of patient and phantom feature standard deviation was able to identify features that were not robust to changes in reconstruction parameters (e.g. co-occurrence correlation). Metrics found to be reasonably robust (standard deviation ratios > 3) were observed for routinely used SUV metrics (e.g. SUVmean and SUVmax) as well as some radiomics features (e.g. co-occurrence contrast, co-occurrence energy, standard deviation, and uniformity). Similar standard deviation ratios were observed across scanners. Conclusions: Our method enabled a comparison of feature variability across scanners and was able to identify features that were not robust to changes in reconstruction parameters.

  8. Analytical performance specifications for changes in assay bias (Δbias) for data with logarithmic distributions as assessed by effects on reference change values.

    PubMed

    Petersen, Per H; Lund, Flemming; Fraser, Callum G; Sölétormos, György

    2016-11-01

Background The distributions of within-subject biological variation are usually described as coefficients of variation, as are analytical performance specifications for bias, imprecision and other characteristics. Estimation of the specifications required for reference change values is traditionally done using the relationship between the batch-related changes during routine performance, described as Δbias, and the coefficient of variation for analytical imprecision (CV_A): the original theory is based on standard deviations or coefficients of variation calculated as if distributions were Gaussian. Methods The distribution of between-subject biological variation can generally be described as log-Gaussian. Moreover, recent analyses of within-subject biological variation suggest that many measurands have log-Gaussian distributions. In consequence, we generated a model for the estimation of analytical performance specifications for the reference change value, combining Δbias and CV_A based on log-Gaussian distributions of CV_I as natural logarithms. The model was tested using plasma prolactin and glucose as examples. Results Analytical performance specifications for the reference change value generated using the new model based on log-Gaussian distributions were practically identical to those from the traditional model based on Gaussian distributions. Conclusion The traditional and simple-to-apply model used to generate analytical performance specifications for the reference change value, based on the use of coefficients of variation and assuming Gaussian distributions for both CV_I and CV_A, is generally useful.
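
    For reference, the traditional Gaussian-based reference change value combines analytical and within-subject variation as RCV = 2^(1/2) * Z * (CV_A^2 + CV_I^2)^(1/2). The log-Gaussian variant studied in the paper modifies this; only the classic baseline form is sketched below, with Z = 1.96 for 95% two-sided significance and illustrative CV values.

```python
import math

def rcv_gaussian(cv_a, cv_i, z=1.96):
    """Classic reference change value (%) from the analytical (CV_A) and
    within-subject biological (CV_I) coefficients of variation, both in %."""
    return math.sqrt(2.0) * z * math.sqrt(cv_a**2 + cv_i**2)

# Example with illustrative glucose-like figures.
print(rcv_gaussian(cv_a=2.0, cv_i=5.5))   # roughly a 16% change is needed to be significant
```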

  9. Color image enhancement based on particle swarm optimization with Gaussian mixture

    NASA Astrophysics Data System (ADS)

    Kattakkalil Subhashdas, Shibudas; Choi, Bong-Seok; Yoo, Ji-Hoon; Ha, Yeong-Ho

    2015-01-01

This paper proposes a Gaussian mixture based image enhancement method which uses particle swarm optimization (PSO) to gain an edge over other contemporary methods. The proposed method uses a Gaussian mixture model to model the lightness histogram of the input image in CIEL*a*b* space. The intersection points of the Gaussian components in the model are used to partition the lightness histogram. The enhanced lightness image is generated by transforming the lightness values in each interval to an appropriate output interval according to a transformation function that depends on the PSO-optimized parameters: the weight and standard deviation of each Gaussian component and the cumulative distribution of the input histogram interval. In addition, chroma compensation is applied to the resulting image to reduce washout appearance. Experimental results show that the proposed method produces a better enhanced image compared to the traditional methods. Moreover, the enhanced image is free from several side effects such as washout appearance, information loss and gradation artifacts.

  10. Multi-fidelity Gaussian process regression for prediction of random fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parussini, L.; Venturi, D., E-mail: venturi@ucsc.edu; Perdikaris, P.

We propose a new multi-fidelity Gaussian process regression (GPR) approach for prediction of random fields based on observations of surrogate models or hierarchies of surrogate models. Our method builds upon recent work on recursive Bayesian techniques, in particular recursive co-kriging, and extends it to vector-valued fields and various types of covariances, including separable and non-separable ones. The framework we propose is general and can be used to perform uncertainty propagation and quantification in model-based simulations, multi-fidelity data fusion, and surrogate-based optimization. We demonstrate the effectiveness of the proposed recursive GPR techniques through various examples. Specifically, we study the stochastic Burgers equation and the stochastic Oberbeck–Boussinesq equations describing natural convection within a square enclosure. In both cases we find that the standard deviation of the Gaussian predictors as well as the absolute errors relative to benchmark stochastic solutions are very small, suggesting that the proposed multi-fidelity GPR approaches can yield highly accurate results.
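
    A minimal flavor of multi-fidelity GPR (far simpler than the recursive co-kriging framework of the paper) trains a GP on cheap low-fidelity data and then feeds its prediction as an extra input feature to a second GP trained on sparse high-fidelity data. The functions, kernels and sample sizes below are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def f_low(x):  return 0.5 * np.sin(8 * x) + 0.2 * x      # cheap surrogate model
def f_high(x): return np.sin(8 * x) + 0.1 * x            # expensive "truth" model

x_lo = np.linspace(0, 1, 40)[:, None]                    # many low-fidelity samples
x_hi = np.linspace(0, 1, 8)[:, None]                     # few high-fidelity samples

gp_lo = GaussianProcessRegressor(kernel=ConstantKernel(1.0) * RBF(length_scale=0.2),
                                 normalize_y=True).fit(x_lo, f_low(x_lo.ravel()))

# High-fidelity GP sees [x, low-fidelity prediction at x] as its inputs.
X_hi = np.hstack([x_hi, gp_lo.predict(x_hi)[:, None]])
gp_hi = GaussianProcessRegressor(kernel=ConstantKernel(1.0) * RBF(length_scale=[0.2, 1.0]),
                                 normalize_y=True).fit(X_hi, f_high(x_hi.ravel()))

x_test = np.linspace(0, 1, 5)[:, None]
X_test = np.hstack([x_test, gp_lo.predict(x_test)[:, None]])
mean, std = gp_hi.predict(X_test, return_std=True)       # predictive mean and standard deviation
print(mean, std)
```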

  11. Minding Impacting Events in a Model of Stochastic Variance

    PubMed Central

    Duarte Queirós, Sílvio M.; Curado, Evaldo M. F.; Nobre, Fernando D.

    2011-01-01

We introduce a generalization of the well-known ARCH process, widely used for generating uncorrelated stochastic time series with long-term non-Gaussian distributions and long-lasting correlations in the (instantaneous) standard deviation exhibiting a clustering profile. Specifically, inspired by the fact that in a variety of systems impacting events are hardly forgotten, we split the process into two different regimes: a first one for regular periods, where the average volatility of the fluctuations within a certain period of time is below a certain threshold, and another one in which the local standard deviation exceeds that threshold. In the former situation we use standard rules for heteroscedastic processes, whereas in the latter case the system starts recalling past values that surpassed the threshold. Our results show that for appropriate parameter values the model is able to provide fat-tailed probability density functions and strong persistence of the instantaneous variance characterized by large values of the Hurst exponent, which are ubiquitous features in complex systems. PMID:21483864
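
    The generalization above builds on the standard ARCH recursion. For orientation, a plain ARCH(1) simulation (without the threshold and memory regime introduced in the paper) looks roughly like the sketch below, with illustrative parameter values.

```python
import numpy as np

def simulate_arch1(n, a0=0.1, a1=0.8, seed=0):
    """Standard ARCH(1): sigma_t^2 = a0 + a1 * x_{t-1}^2, x_t = sigma_t * z_t."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    sigma2 = np.full(n, a0 / (1.0 - a1))     # start at the stationary variance
    for t in range(1, n):
        sigma2[t] = a0 + a1 * x[t - 1] ** 2
        x[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
    return x, np.sqrt(sigma2)

returns, vol = simulate_arch1(10000)
print("kurtosis proxy:", np.mean(returns**4) / np.mean(returns**2) ** 2)  # > 3 indicates fat tails
```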

  12. Observation and mass measurement of the baryon Xib-.

    PubMed

    Aaltonen, T; Abulencia, A; Adelman, J; Affolder, T; Akimoto, T; Albrow, M G; Amerio, S; Amidei, D; Anastassov, A; Anikeev, K; Annovi, A; Antos, J; Aoki, M; Apollinari, G; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Azzi-Bacchetta, P; Azzurri, P; Bacchetta, N; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Baroiant, S; Bartsch, V; Bauer, G; Beauchemin, P-H; Bedeschi, F; Behari, S; Bellettini, G; Bellinger, J; Belloni, A; Benjamin, D; Beretvas, A; Beringer, J; Berry, T; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bolshov, A; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carrillo, S; Carlsmith, D; Carosi, R; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, I; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Cilijak, M; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Coca, M; Compostella, G; Convery, M E; Conway, J; Cooper, B; Copic, K; Cordelli, M; Cortiana, G; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cully, J C; DaRonco, S; Datta, M; D'Auria, S; Davies, T; Dagenhart, D; de Barbaro, P; De Cecco, S; Deisher, A; De Lentdecker, G; De Lorenzo, G; Dell'Orso, M; Delli Paoli, F; Demortier, L; Deng, J; Deninno, M; De Pedis, D; Derwent, P F; Di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Dörr, C; Donati, S; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, I; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Field, R; Flanagan, G; Forrest, R; Forrester, S; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garcia, J E; Garberson, F; Garfinkel, A F; Gay, C; Gerberich, H; Gerdes, D; Giagu, S; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Goldstein, J; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Group, R C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Hamilton, A; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, D; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Hays, C; Heck, M; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Holloway, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; Iyutin, B; James, E; Jang, D; Jayatilaka, B; Jeans, D; Jeon, E J; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Karchin, P E; Kato, Y; Kemp, Y; Kephart, R; Kerzel, U; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Klute, M; Knuteson, B; Ko, B R; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kraan, A C; Kraus, J; Kreps, M; Kroll, J; Krumnack, N; Kruse, M; 
Krutelyov, V; Kubo, T; Kuhlmann, S E; Kuhr, T; Kulkarni, N P; Kusakabe, Y; Kwang, S; Laasanen, A T; Lai, S; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; LeCompte, T; Lee, J; Lee, J; Lee, Y J; Lee, S W; Lefèvre, R; Leonardo, N; Leone, S; Levy, S; Lewis, J D; Lin, C; Lin, C S; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Lu, R-S; Lucchesi, D; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; MacQueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Malik, S; Manca, G; Manousakis, A; Margaroli, F; Marginean, R; Marino, C; Marino, C P; Martin, A; Martin, M; Martin, V; Martínez, M; Martínez-Ballarín, R; Maruyama, T; Mastrandrea, P; Masubuchi, T; Matsunaga, H; Mattson, M E; Mazini, R; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzemer, S; Menzione, A; Merkel, P; Mesropian, C; Messina, A; Miao, T; Miladinovic, N; Miles, J; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyamoto, A; Moed, S; Moggi, N; Mohr, B; Moon, C S; Moore, R; Morello, M; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Mussini, M; Nachtman, J; Nagano, A; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norniella, O; Nurse, E; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Oldeman, R; Orava, R; Osterberg, K; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Piedra, J; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Portell, X; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Safonov, A; Sakumoto, W K; Salamanna, G; Saltó, O; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savard, P; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, E E; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyrla, A; Shalhout, S Z; Shapiro, M D; Shears, T; Shepard, P F; Sherman, D; Shimojima, M; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soderberg, M; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spinella, F; Spreitzer, T; Squillacioti, P; Stanitzki, M; Staveris-Polykalas, A; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Sun, H; Suslov, I; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Tourneur, S; Trischuk, W; Tsuno, S; Tu, Y; Turini, N; Ukegawa, F; Uozumi, S; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vazquez, F; Velev, G; Vellidis, C; Veramendi, G; Veszpremi, V; Vidal, M; Vidal, R; Vila, I; Vilar, R; Vine, T; Vogel, M; Vollrath, I; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner, J; Wagner, W; Wallny, 
R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Yagil, A; Yamamoto, K; Yamaoka, J; Yamashita, T; Yang, C; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zhou, J; Zucchelli, S

    2007-08-03

    We report the observation and measurement of the mass of the bottom, strange baryon Xi(b)- through the decay chain Xi(b)- -->J/psiXi-, where J/psi-->mu+mu-, Xi- -->Lambdapi-, and Lambda-->ppi-. A signal is observed whose probability of arising from a background fluctuation is 6.6 x 10(-15), or 7.7 Gaussian standard deviations. The Xi(b)- mass is measured to be 5792.9+/-2.5(stat) +/- 1.7(syst) MeV/c2.

  13. Modeling of skin cancer dermatoscopy images

    NASA Astrophysics Data System (ADS)

    Iralieva, Malica B.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Zakharov, Valery P.

    2018-04-01

A cancer identified early is more likely to respond effectively to treatment and is also less expensive to treat. Dermatoscopy is one of the general diagnostic techniques for early skin cancer detection that allows in vivo evaluation of colors and microstructures of skin lesions. Digital phantoms with known properties are required during the development of new instruments to compare a sample's features with data from the instrument. An algorithm for image modeling of skin cancer is proposed in the paper. The steps of the algorithm include setting the shape, texture generation, adding the texture, and setting a normal skin background. A Gaussian represents the shape, texture generation based on a fractal noise algorithm is responsible for the spatial chromophore distributions, and the colormap applied to the values corresponds to the spectral properties. Finally, a normal skin image simulated by a mixed Monte Carlo method using a special online tool is added as a background. Varying the Asymmetry, Borders, Colors and Diameter settings is shown to fully match the ABCD clinical recognition algorithm. The asymmetry is specified by setting different standard deviation values of the Gaussian in different parts of the image. The noise amplitude is increased to set the irregular-borders score. The standard deviation is changed to determine the size of the lesion. Colors are set by changing the colormap. An algorithm for simulating different structural elements is required to match other recognition algorithms.

  14. Analyses and assessments of span wise gust gradient data from NASA B-57B aircraft

    NASA Technical Reports Server (NTRS)

    Frost, Walter; Chang, Ho-Pen; Ringnes, Erik A.

    1987-01-01

Analysis of turbulence measured across the airfoil of a Canberra B-57 aircraft is reported. The aircraft is instrumented with probes for measuring wind at both wing tips and at the nose. Statistical properties of the turbulence are reported. These consist of the standard deviations of turbulence measured by each individual probe, standard deviations and probability distributions of differences in turbulence measured between probes, and auto- and two-point spatial correlations and spectra. Procedures associated with the calculation of two-point spatial correlations and spectra from the data are addressed. Methods and correction procedures for assuring the accuracy of aircraft-measured winds are also described. Results are found, in general, to agree with correlations existing in the literature. The velocity spatial differences fit a Gaussian/Bessel type probability distribution. The turbulence agrees with the von Karman turbulence correlation and with two-point spatial correlations developed from the von Karman correlation.

  15. Laser transit anemometer software development program

    NASA Technical Reports Server (NTRS)

    Abbiss, John B.

    1989-01-01

Algorithms were developed for the extraction of two components of mean velocity, the standard deviations, and the associated correlation coefficient from laser transit anemometry (LTA) data ensembles. The solution method is based on an assumed two-dimensional Gaussian probability density function (PDF) model of the flow field under investigation. The procedure consists of transforming the data ensembles from the data acquisition domain (consisting of time and angle information) to the velocity space domain (consisting of velocity component information). The mean velocity results are obtained from the data ensemble centroid. Through a least squares fitting of the transformed data to an ellipse representing the intersection of a plane with the PDF, the standard deviations and correlation coefficient are obtained. A data set simulation method is presented to test the data reduction process. Results of using the simulation system with a limited test matrix of input values are also given.
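
    Once the LTA data are transformed to velocity space, the quantities of interest reduce to first and second moments of the two velocity components. A minimal sketch (ignoring the ellipse-fitting refinement described above, and using placeholder samples) is:

```python
import numpy as np

rng = np.random.default_rng(1)
# Placeholder velocity-space samples (u, v) in m/s; real values come from the
# time/angle-to-velocity transformation of the LTA data ensemble.
uv = rng.multivariate_normal(mean=[50.0, 5.0],
                             cov=[[4.0, 1.2], [1.2, 1.0]], size=2000)

mean_u, mean_v = uv.mean(axis=0)            # ensemble centroid -> mean velocity components
cov = np.cov(uv, rowvar=False)
sd_u, sd_v = np.sqrt(np.diag(cov))          # standard deviations
rho = cov[0, 1] / (sd_u * sd_v)             # correlation coefficient
print(mean_u, mean_v, sd_u, sd_v, rho)
```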

  16. Fisher information and Cramér-Rao lower bound for experimental design in parallel imaging.

    PubMed

    Bouhrara, Mustapha; Spencer, Richard G

    2018-06-01

The Cramér-Rao lower bound (CRLB) is widely used in the design of magnetic resonance (MR) experiments for parameter estimation. Previous work has considered only Gaussian or Rician noise distributions in this calculation. However, the noise distribution for multi-coil acquisitions, such as in parallel imaging, obeys the noncentral χ-distribution under many circumstances. The purpose of this paper is to present the CRLB calculation for parameter estimation from multi-coil acquisitions. We perform explicit calculations of Fisher matrix elements and the associated CRLB for noise distributions following the noncentral χ-distribution. The special case of diffusion kurtosis is examined as an important example. For comparison with analytic results, Monte Carlo (MC) simulations were conducted to evaluate experimental minimum standard deviations (SDs) in the estimation of diffusion kurtosis model parameters. Results were obtained for a range of signal-to-noise ratios (SNRs), and for both the conventional case of Gaussian noise distribution and noncentral χ-distribution with different numbers of coils, m. At low-to-moderate SNR, the noncentral χ-distribution deviates substantially from the Gaussian distribution. Our results indicate that this departure is more pronounced for larger values of m. As expected, the minimum SDs (i.e., CRLB) in derived diffusion kurtosis model parameters assuming a noncentral χ-distribution provided a closer match to the MC simulations as compared to the Gaussian results. Estimates of minimum variance for parameter estimation and experimental design provided by the CRLB must account for the noncentral χ-distribution of noise in multi-coil acquisitions, especially in the low-to-moderate SNR regime. Magn Reson Med 79:3249-3255, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  17. Tests for qualitative treatment-by-centre interaction using a 'pushback' procedure.

    PubMed

    Ciminera, J L; Heyse, J F; Nguyen, H H; Tukey, J W

    1993-06-15

    In multicentre clinical trials using a common protocol, the centres are usually regarded as being a fixed factor, thus allowing any treatment-by-centre interaction to be omitted from the error term for the effect of treatment. However, we feel it necessary to use the treatment-by-centre interaction as the error term if there is substantial evidence that the interaction with centres is qualitative instead of quantitative. To make allowance for the estimated uncertainties of the centre means, we propose choosing a reference value (for example, the median of the ordered array of centre means) and converting the individual centre results into standardized deviations from the reference value. The deviations are then reordered, and the results 'pushed back' by amounts appropriate for the corresponding order statistics in a sample from the relevant distribution. The pushed-back standardized deviations are then restored to the original scale. The appearance of opposite signs among the destandardized values for the various centres is then taken as 'substantial evidence' of qualitative interaction. Procedures are presented using, in any combination: (i) Gaussian, or Student's t-distribution; (ii) order-statistic medians or outward 90 per cent points of the corresponding order statistic distributions; (iii) pooling or grouping and pooling the internally estimated standard deviations of the centre means. The use of the least conservative combination--Student's t, outward 90 per cent points, grouping and pooling--is recommended.

  18. The Barberplaid Illusion

    NASA Technical Reports Server (NTRS)

    Beutter, B. R.; Mulligan, J. B.; Stone, L. S.; Statler, Irving C. (Technical Monitor)

    1994-01-01

Mulligan showed that the perceived direction of a moving grating can be biased by the shape of the Gaussian window in which it is viewed. We sought to determine if a 2-D pattern with an unambiguous velocity would also show such biases. Observers viewed a drifting plaid (the sum of two orthogonal 2.5 c/d sinusoidal gratings of 12% contrast, each with a TF of 4 Hz) whose contrast was modulated spatially by a stationary, asymmetric 2-D Gaussian window (i.e. unequal standard deviations in the principal directions). The direction of plaid motion with respect to the orientation of the window's major axis (Delta Theta) was varied while all other motion parameters were held fixed. Observers reported the perceived plaid direction of motion by adjusting the orientation of a pointer. All five observers showed systematic biases in perceived plaid direction that depended on Delta Theta and the aspect ratio of the Gaussian window (lambda). For circular Gaussian windows (lambda = 1), plaid direction was perceived veridically. However, biases of up to 10 deg were found for lambda = 2 and Delta Theta = 30 deg. These data present a challenge to models of motion perception which do not explicitly consider the integration of information across the visual field.

  19. MO-F-CAMPUS-T-03: Data Driven Approaches for Determination of Treatment Table Tolerance Values for Record and Verification Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, N; DiCostanzo, D; Fullenkamp, M

    2015-06-15

Purpose: To determine appropriate couch tolerance values for modern radiotherapy linac R&V systems with indexed patient setup. Methods: Treatment table tolerance values have been the most difficult to lower, due to many factors including variations in patient positioning and differences in table tops between machines. We recently installed nine linacs with similar tables and started indexing every patient in our clinic. In this study we queried our R&V database and analyzed the deviation of couch position values from the values acquired at verification simulation for all patients treated with indexed positioning. Means and standard deviations of daily setup deviations were computed in the longitudinal, lateral and vertical directions for 343 patient plans. The mean, median and standard error of the standard deviations across the whole patient population, and for some disease sites, were computed to determine tolerance values. Results: The plot of our couch deviation values showed a Gaussian distribution, with some small deviations corresponding to setup uncertainties on non-imaging days and to SRS/SRT/SBRT patients, as well as some large deviations which were spot checked and found to correspond to indexing errors that were overridden. Setting our tolerance values based on the median + 1 standard error resulted in tolerance values of 1 cm lateral and longitudinal, and 0.5 cm vertical, for all non-SRS/SRT/SBRT cases. Re-analyzing the data, we found that about 92% of the treated fractions would be within these tolerance values (ignoring the mis-indexed patients). We also analyzed data for disease-site-based subpopulations and found no difference in the tolerance values that needed to be used. Conclusion: With the use of automation, auto-setup and other workflow efficiency tools being introduced into the radiotherapy workflow, it is essential to set table tolerances that allow safe treatments but flag setup errors that need to be reassessed before treatment.
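
    The tolerance rule described above (median of the per-plan standard deviations plus one standard error) can be sketched as follows; the deviation array is a random placeholder for the values queried from the R&V database.

```python
import numpy as np

rng = np.random.default_rng(2)
# Placeholder: daily couch deviations (cm) from the verification-simulation value,
# one row per plan, one column per treated fraction.
deviations = rng.normal(0.0, 0.3, size=(343, 25))

per_plan_sd = deviations.std(axis=1, ddof=1)                 # SD of daily setup deviations per plan
median_sd = np.median(per_plan_sd)
std_err = per_plan_sd.std(ddof=1) / np.sqrt(per_plan_sd.size)
tolerance = median_sd + std_err                              # median + 1 standard error
print(round(tolerance, 2), "cm")
```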

  20. A new method for the identification of non-Gaussian line profiles in elliptical galaxies

    NASA Technical Reports Server (NTRS)

    Van Der Marel, Roeland P.; Franx, Marijn

    1993-01-01

A new parameterization for the line profiles of elliptical galaxies, the Gauss-Hermite series, is proposed. This approach expands the line profile as a sum of orthogonal functions, which minimizes the correlations between the errors in the parameters of the fit. The method also makes use of the fact that Gaussians provide good low-order fits to observed line profiles. The method yields measurements of the line strength, mean radial velocity, and velocity dispersion, as well as two extra parameters, h3 and h4, that measure asymmetric and symmetric deviations of the line profiles from a Gaussian, respectively. The new method was used to derive profiles for three elliptical galaxies, which all have asymmetric line profiles on the major axis with symmetric deviations from a Gaussian. The results confirm that elliptical galaxies have complex structures due to their complex formation history.
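
    A Gauss-Hermite profile of this kind can be evaluated with a few lines of code. The sketch below assumes the commonly used normalization of the H3 and H4 terms; the parameter values in the example are illustrative only.

```python
import numpy as np

def gauss_hermite_profile(v, gamma, v0, sigma, h3, h4):
    """Line profile as a Gaussian multiplied by a Gauss-Hermite series.

    h3 captures asymmetric and h4 symmetric deviations from a pure Gaussian.
    """
    w = (v - v0) / sigma
    H3 = (2.0 * np.sqrt(2.0) * w**3 - 3.0 * np.sqrt(2.0) * w) / np.sqrt(6.0)
    H4 = (4.0 * w**4 - 12.0 * w**2 + 3.0) / np.sqrt(24.0)
    gauss = np.exp(-0.5 * w**2) / (sigma * np.sqrt(2.0 * np.pi))
    return gamma * gauss * (1.0 + h3 * H3 + h4 * H4)

v = np.linspace(-1500, 1500, 7)    # km/s, illustrative velocity grid
print(gauss_hermite_profile(v, gamma=1.0, v0=100.0, sigma=250.0, h3=-0.05, h4=0.03))
```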

  1. Normative Bilateral Brainstem Evoked Response Data for a Naval Aviation Student Population: Group Statistics

    DTIC Science & Technology

    1979-08-07

laus levels of the present study all fall within the plus and minus one-standard-deviation boundary limits of the composite laboratory data plotted by...to be the case in the present study in that the amplitude of the contralateral response produced by a given stimulus level followed, in general, that...equivalent Gaussian distribution was applied to the study data. Such an analysis, performed by Thornton (36) on the latency and amplitude measurements

  2. Characterization of Adrenal Adenoma by Gaussian Model-Based Algorithm.

    PubMed

    Hsu, Larson D; Wang, Carolyn L; Clark, Toshimasa J

    2016-01-01

We confirmed that computed tomography (CT) attenuation values of pixels in an adrenal nodule approximate a Gaussian distribution. Building on this and the previously described histogram analysis method, we created an algorithm that uses the mean and standard deviation to estimate the percentage of negative attenuation pixels in an adrenal nodule, thereby allowing differentiation of adenomas and nonadenomas. The institutional review board approved both components of this study, in which we developed and then validated our criteria. In the first, we retrospectively assessed CT attenuation values of adrenal nodules for normality using a 2-sample Kolmogorov-Smirnov test. In the second, we evaluated a separate cohort of patients with adrenal nodules using both the conventional 10 HU mean attenuation method and our Gaussian model-based algorithm. We compared the sensitivities of the 2 methods using McNemar's test. A total of 183 of 185 observations (98.9%) demonstrated a Gaussian distribution in adrenal nodule pixel attenuation values. The sensitivity and specificity of our Gaussian model-based algorithm for identifying adrenal adenoma were 86.1% and 83.3%, respectively. The sensitivity and specificity of the mean attenuation method were 53.2% and 94.4%, respectively. The sensitivities of the 2 methods were significantly different (P value < 0.001). In conclusion, the CT attenuation values within an adrenal nodule follow a Gaussian distribution. Our Gaussian model-based algorithm can characterize adrenal adenomas with higher sensitivity than the conventional mean attenuation method. The use of our algorithm, which does not require additional postprocessing, may increase workflow efficiency and reduce unnecessary workup of benign nodules. Copyright © 2016 Elsevier Inc. All rights reserved.
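
    The Gaussian-model step can be sketched directly: given the mean and standard deviation of a nodule's attenuation values, the fraction of negative-attenuation pixels is the normal CDF evaluated at 0 HU. The decision threshold used below is an illustrative placeholder, not the study's validated cut-off.

```python
from scipy.stats import norm

def negative_pixel_fraction(mean_hu, sd_hu):
    """Estimated fraction of pixels below 0 HU under a Gaussian attenuation model."""
    return norm.cdf(0.0, loc=mean_hu, scale=sd_hu)

def looks_like_adenoma(mean_hu, sd_hu, threshold=0.10):
    # threshold is a placeholder; the paper derives its own operating point.
    return negative_pixel_fraction(mean_hu, sd_hu) >= threshold

print(negative_pixel_fraction(18.0, 20.0))   # illustrative lipid-rich-looking nodule
print(looks_like_adenoma(18.0, 20.0))
```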

  3. A GPS Phase-Locked Loop Performance Metric Based on the Phase Discriminator Output

    PubMed Central

    Stevanovic, Stefan; Pervan, Boris

    2018-01-01

    We propose a novel GPS phase-lock loop (PLL) performance metric based on the standard deviation of tracking error (defined as the discriminator’s estimate of the true phase error), and explain its advantages over the popular phase jitter metric using theory, numerical simulation, and experimental results. We derive an augmented GPS phase-lock loop (PLL) linear model, which includes the effect of coherent averaging, to be used in conjunction with this proposed metric. The augmented linear model allows more accurate calculation of tracking error standard deviation in the presence of additive white Gaussian noise (AWGN) as compared to traditional linear models. The standard deviation of tracking error, with a threshold corresponding to half of the arctangent discriminator pull-in region, is shown to be a more reliable/robust measure of PLL performance under interference conditions than the phase jitter metric. In addition, the augmented linear model is shown to be valid up until this threshold, which facilitates efficient performance prediction, so that time-consuming direct simulations and costly experimental testing can be reserved for PLL designs that are much more likely to be successful. The effect of varying receiver reference oscillator quality on the tracking error metric is also considered. PMID:29351250

  4. Analysis of piezoelectric energy harvester under modulated and filtered white Gaussian noise

    NASA Astrophysics Data System (ADS)

    Quaranta, Giuseppe; Trentadue, Francesco; Maruccio, Claudio; Marano, Giuseppe C.

    2018-05-01

    This paper proposes a comprehensive method for the electromechanical probabilistic analysis of piezoelectric energy harvesters subjected to modulated and filtered white Gaussian noise (WGN) at the base. Specifically, the dynamic excitation is simulated by means of an amplitude-modulated WGN, which is filtered through the Clough-Penzien filter. The considered piezoelectric harvester is a cantilever bimorph modeled as Euler-Bernoulli beam with a concentrated mass at the free-end, and its global behavior is approximated by the fundamental vibration mode (which is tuned with the dominant frequency of the dynamic input). A resistive electrical load is considered in the circuit. Once the Lyapunov equation of the coupled electromechanical problem has been formulated, an original and efficient semi-analytical procedure is proposed to estimate mean and standard deviation of the electrical energy extracted from the piezoelectric layers.
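
    For a linear electromechanical state-space model driven by white noise, the stationary second moments follow from a Lyapunov equation. The sketch below solves A P + P A^T + B S B^T = 0 for an illustrative single-mode harvester; the matrices and parameter values are assumptions, and the actual modulated, Clough-Penzien-filtered excitation of the paper would add filter states and time dependence.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative single-mode electromechanical model: states [x, x_dot, voltage].
wn, zeta = 2 * np.pi * 50.0, 0.02        # natural frequency (rad/s) and damping ratio
theta, cp, rl = 1e-3, 1e-7, 1e5          # coupling, capacitance, load resistance (assumed)
A = np.array([[0.0, 1.0, 0.0],
              [-wn**2, -2 * zeta * wn, -theta],
              [0.0, theta / cp, -1.0 / (rl * cp)]])
B = np.array([[0.0], [1.0], [0.0]])      # white-noise base excitation enters the acceleration equation
S = np.array([[1.0]])                    # white-noise intensity (assumed)

P = solve_continuous_lyapunov(A, -B @ S @ B.T)   # solves A P + P A^T = -B S B^T
print("stationary std of states:", np.sqrt(np.diag(P)))
```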

  5. Superdiffusion in a non-Markovian random walk model with a Gaussian memory profile

    NASA Astrophysics Data System (ADS)

    Borges, G. M.; Ferreira, A. S.; da Silva, M. A. A.; Cressoni, J. C.; Viswanathan, G. M.; Mariz, A. M.

    2012-09-01

    Most superdiffusive Non-Markovian random walk models assume that correlations are maintained at all time scales, e.g., fractional Brownian motion, Lévy walks, the Elephant walk and Alzheimer walk models. In the latter two models the random walker can always "remember" the initial times near t = 0. Assuming jump size distributions with finite variance, the question naturally arises: is superdiffusion possible if the walker is unable to recall the initial times? We give a conclusive answer to this general question, by studying a non-Markovian model in which the walker's memory of the past is weighted by a Gaussian centered at time t/2, at which time the walker had one half the present age, and with a standard deviation σt which grows linearly as the walker ages. For large widths we find that the model behaves similarly to the Elephant model, but for small widths this Gaussian memory profile model behaves like the Alzheimer walk model. We also report that the phenomenon of amnestically induced persistence, known to occur in the Alzheimer walk model, arises in the Gaussian memory profile model. We conclude that memory of the initial times is not a necessary condition for generating (log-periodic) superdiffusion. We show that the phenomenon of amnestically induced persistence extends to the case of a Gaussian memory profile.
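
    A simulation of such a walk can be sketched as follows: at each step the walker recalls one past step, drawn with Gaussian weights centred at t/2 and width proportional to t, and repeats it with probability p or reverses it otherwise. The elephant-walk-style update rule and the parameter values here are assumptions for illustration, not the paper's exact prescription.

```python
import numpy as np

def gaussian_memory_walk(n_steps, p=0.8, width=0.2, seed=0):
    """Non-Markovian walk whose memory kernel is a Gaussian centred at t/2."""
    rng = np.random.default_rng(seed)
    steps = np.empty(n_steps, dtype=int)
    steps[0] = 1
    for t in range(1, n_steps):
        past = np.arange(t)
        w = np.exp(-0.5 * ((past - t / 2.0) / max(width * t, 1.0)) ** 2)
        k = rng.choice(past, p=w / w.sum())          # recalled time, Gaussian-weighted
        same = rng.random() < p                      # repeat or reverse that step
        steps[t] = steps[k] if same else -steps[k]
    return np.cumsum(steps)

x = gaussian_memory_walk(5000)
print(x[-1], np.abs(x).max())
```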

  6. Confronting Passive and Active Sensors with Non-Gaussian Statistics

    PubMed Central

    Rodríguez-Gonzálvez, Pablo.; Garcia-Gago, Jesús.; Gomez-Lahoz, Javier.; González-Aguilera, Diego.

    2014-01-01

This paper has two motivations: firstly, to compare the Digital Surface Models (DSM) derived by passive (digital camera) and by active (terrestrial laser scanner) remote sensing systems when applied to specific architectural objects, and secondly, to test how well classic Gaussian statistics, with its Least Squares principle, adapts to data sets where asymmetrical gross errors may appear, and whether this approach should be changed for a non-parametric one. The field of geomatic technology automation is immersed in a highly demanding competition in which any innovation by one of the contenders immediately challenges the opponents to propose a better improvement. Nowadays, we seem to be witnessing an improvement of terrestrial photogrammetry and its integration with computer vision to overcome the performance limitations of laser scanning methods. Through this contribution some of the issues of this “technological race” are examined from the point of view of photogrammetry. New software is introduced and an experimental test is designed, performed and assessed to try to cast some light on this thrilling match. For the case considered in this study, the results show good agreement between both sensors, despite considerable asymmetry. This asymmetry suggests that the standard Normal parameters are not adequate to assess this type of data, especially when accuracy is of importance. In this case, the standard deviation fails to provide a good estimation of the results, whereas the results obtained for the Median Absolute Deviation and for the Biweight Midvariance are more appropriate measures. PMID:25196104
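
    The two robust spread measures named above are straightforward to compute; the sketch below uses the common definitions (MAD without the Gaussian rescaling factor, and Tukey's biweight midvariance with the usual tuning constant c = 9) on a simulated data set with a few asymmetric gross errors.

```python
import numpy as np

def mad(x):
    """Median absolute deviation (not rescaled to the Gaussian sigma)."""
    x = np.asarray(x, dtype=float)
    return np.median(np.abs(x - np.median(x)))

def biweight_midvariance(x, c=9.0):
    """Tukey biweight midvariance with the usual tuning constant c = 9."""
    x = np.asarray(x, dtype=float)
    m = np.median(x)
    u = (x - m) / (c * mad(x))
    mask = np.abs(u) < 1.0
    num = np.sum(((x[mask] - m) ** 2) * (1 - u[mask] ** 2) ** 4)
    den = np.sum((1 - u[mask] ** 2) * (1 - 5 * u[mask] ** 2)) ** 2
    return x.size * num / den

# DSM discrepancies with a few asymmetric gross errors: robust spread vs. standard deviation.
d = np.concatenate([np.random.default_rng(3).normal(0, 2, 1000), [40, 55, 60]])
print(np.std(d), mad(d), np.sqrt(biweight_midvariance(d)))
```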

  7. Noise Estimation and Quality Assessment of Gaussian Noise Corrupted Images

    NASA Astrophysics Data System (ADS)

    Kamble, V. M.; Bhurchandi, K.

    2018-03-01

Evaluating the exact quantity of noise present in an image, and the quality of an image in the absence of a reference image, is a challenging task. We propose a near-perfect noise estimation method and a no-reference image quality assessment method for images corrupted by Gaussian noise. The proposed methods obtain an initial estimate of the noise standard deviation present in an image using the median of the wavelet transform coefficients and then obtain a nearly exact estimate using curve fitting. The proposed noise estimation method provides the noise estimate within an average error of +/-4%. For quality assessment, this noise estimate is mapped to fit the Differential Mean Opinion Score (DMOS) using a nonlinear function. The proposed methods require minimal training and yield the noise estimate and image quality score. Images from the Laboratory for Image and Video Processing (LIVE) database and the Computational Perception and Image Quality (CSIQ) database are used for validation of the proposed quality assessment method. Experimental results show that the performance of the proposed quality assessment method is on par with existing no-reference image quality assessment metrics for Gaussian noise corrupted images.
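
    The initial, median-of-wavelet-coefficients estimate is usually computed with the classic robust estimator sigma ≈ median(|d|)/0.6745 applied to the level-1 diagonal detail coefficients; a minimal sketch is shown below. The curve-fitting refinement and the DMOS mapping of the paper are not reproduced.

```python
import numpy as np
import pywt

def estimate_noise_sigma(image):
    """Initial Gaussian-noise estimate from level-1 diagonal wavelet detail coefficients."""
    _, (_, _, cD) = pywt.dwt2(image.astype(float), 'db1')
    return np.median(np.abs(cD)) / 0.6745      # robust median-based estimator

rng = np.random.default_rng(4)
clean = np.tile(np.linspace(0, 255, 256), (256, 1))     # smooth synthetic image
noisy = clean + rng.normal(0.0, 10.0, clean.shape)      # add sigma = 10 Gaussian noise
print(estimate_noise_sigma(noisy))                      # should come out close to 10
```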

  8. Propagation of rotational Risley-prism-array-based Gaussian beams in turbulent atmosphere

    NASA Astrophysics Data System (ADS)

    Chen, Feng; Ma, Haotong; Dong, Li; Ren, Ge; Qi, Bo; Tan, Yufeng

    2018-03-01

    Given the limits imposed by prism size, weight, and optical assembly, a rotational Risley-prism-array system is a simple but effective way to realize deflected laser output with high power and superior beam quality. In this paper, the propagation of a rotational Risley-prism-array-based Gaussian beam array in atmospheric turbulence is studied in detail. An analytical expression for the average intensity distribution at the receiving plane is derived based on a nonparaxial ray-tracing method and the extended Huygens-Fresnel principle. Power in the diffraction-limited bucket is chosen to evaluate beam quality. The effects of deviation angle, propagation distance, and turbulence strength on beam quality are studied in detail by quantitative simulation. The results reveal that, as the propagation distance increases in weak turbulence, the intensity distribution gradually evolves from a multiple-petal-like shape into a pattern with one main lobe in the center and multiple side lobes. The beam quality of a rotational Risley-prism-array-based Gaussian beam array with a lower deviation angle is better than that of its counterpart with a higher deviation angle when propagating in weak and medium turbulence (i.e. Cn2 < 10-13 m-2/3), and the beam quality of higher-deviation-angle arrays degrades faster as the turbulence gets stronger. In the case of propagation in strong turbulence, long propagation distance (i.e. z > 10 km) and deviation angle have no influence on beam quality.

  9. Spectral combination of spherical gravitational curvature boundary-value problems

    NASA Astrophysics Data System (ADS)

    Pitoňák, Martin; Eshagh, Mehdi; Šprlák, Michal; Tenzer, Robert; Novák, Pavel

    2018-04-01

    Four solutions of the spherical gravitational curvature boundary-value problems can be exploited for the determination of the Earth's gravitational potential. In this article we discuss the combination of simulated satellite gravitational curvatures, i.e., components of the third-order gravitational tensor, by merging these solutions using the spectral combination method. For this purpose, integral estimators of biased and unbiased types are derived. In numerical studies, we investigate the performance of the developed mathematical models for gravitational field modelling in the area of Central Europe based on simulated satellite measurements. Firstly, we verify the correctness of the integral estimators for the spectral downward continuation by a closed-loop test. Estimated errors of the combined solution are about eight orders of magnitude smaller than those from the individual solutions. Secondly, we perform a numerical experiment by considering Gaussian noise with a standard deviation of 6.5 × 10-17 m-1 s-2 in the input data at the satellite altitude of 250 km above the mean Earth sphere. This value of standard deviation is equivalent to a signal-to-noise ratio of 10. Superior results with respect to the global geopotential model TIM-r5 are obtained by the spectral downward continuation of the vertical-vertical-vertical component, with a standard deviation of 2.104 m2 s-2, but its root mean square error is the largest and reaches 9.734 m2 s-2. Using the spectral combination of all gravitational curvatures, the root mean square error is more than 400 times smaller, but the standard deviation reaches 17.234 m2 s-2. Combining more components decreases the root mean square error of the corresponding solutions, while the standard deviations of the combined solutions do not improve compared to the solution from the vertical-vertical-vertical component alone. The presented method represents a weighted mean in the spectral domain that minimizes the root mean square error of the combined solutions and improves the standard deviation of solutions based only on the least accurate components.

  10. Large deviations and portfolio optimization

    NASA Astrophysics Data System (ADS)

    Sornette, Didier

    Risk control and optimal diversification constitute a major focus in the finance and insurance industries as well as, more or less consciously, in our everyday life. We present a discussion of the characterization of risks and of the optimization of portfolios that starts from a simple illustrative model and ends with a general functional integral formulation. A major point is that risk, usually thought of as one-dimensional in the conventional mean-variance approach, has to be addressed by the full distribution of losses. Furthermore, the time-horizon of the investment is shown to play a major role. We show the importance of accounting for large fluctuations and use the theory of Cramér for large deviations in this context. We first treat a simple model with a single risky asset that exemplifies the distinction between the average return and the typical return, the role of large deviations in multiplicative processes, and the different optimal strategies for investors depending on their size. We then analyze the case of assets whose price variations are distributed according to exponential laws, a situation that is found to describe daily price variations reasonably well. Several portfolio optimization strategies are presented that aim at controlling large risks. We end by extending the standard mean-variance portfolio optimization theory, first within the quasi-Gaussian approximation and then using a general formulation for non-Gaussian correlated assets in terms of the formalism of functional integrals developed in the field theory of critical phenomena.

  11. Crossing statistics of laser light scattered through a nanofluid.

    PubMed

    Arshadi Pirlar, M; Movahed, S M S; Razzaghi, D; Karimzadeh, R

    2017-09-01

    In this paper, we investigate the crossing statistics of speckle patterns formed in the Fresnel diffraction region by a laser beam scattering through a nanofluid. We extend zero-crossing statistics to assess the dynamical properties of the nanofluid. Based on the joint probability density function of the laser beam fluctuation and its time derivative, the theoretical frameworks for the Gaussian and non-Gaussian regimes are revisited. We count the number of crossings not only at the zero level but also at all available thresholds to determine the average speed of the moving particles. In this probabilistic framework for determining crossing statistics, Gaussianity is not assumed a priori; therefore, even in the presence of deviations from Gaussian fluctuations, the modified approach is capable of computing relevant quantities, such as the mean speed, more precisely. The generalized total crossing, a weighted summation of crossings over all thresholds that quantifies small deviations from Gaussian statistics, is introduced. This criterion can also control the contribution of noise and trends to infer reliable physical quantities. The characteristic time scale for successive crossings at a given threshold is defined. In our experimental setup, we find that increasing the sample temperature leads to greater consistency between the Gaussian and perturbative non-Gaussian predictions. The maximum number of crossings does not necessarily occur at the mean level, indicating that levels other than the zero level should be taken into account to achieve more accurate assessments.
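
    The level-crossing counting that underlies this analysis can be sketched in a few lines: count sign changes of the level-shifted signal at each threshold and scan over thresholds. The Gaussian white-noise test signal and the plain unweighted scan below are stand-ins; they do not reproduce the paper's specific weighting used in the generalized total crossing.

    ```python
    import numpy as np

    def crossings(signal, level):
        """Number of times the signal crosses the given threshold level
        (counted as sign changes of the level-shifted samples)."""
        s = np.sign(signal - level)
        s = s[s != 0]                        # drop samples that sit exactly on the level
        return int(np.count_nonzero(np.diff(s) != 0))

    rng = np.random.default_rng(0)
    x = rng.normal(size=100_000)             # stand-in for intensity fluctuations
    levels = np.linspace(-3, 3, 61)
    counts = np.array([crossings(x, lv) for lv in levels])
    print("level with most crossings:", levels[counts.argmax()])
    print("crossings at zero level  :", crossings(x, 0.0))
    ```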

  12. Fatigue assessment of vibrating rail vehicle bogie components under non-Gaussian random excitations using power spectral densities

    NASA Astrophysics Data System (ADS)

    Wolfsteiner, Peter; Breuer, Werner

    2013-10-01

    The assessment of fatigue load under random vibrations is usually based on load spectra. Typically they are computed with counting methods (e.g. Rainflow) applied to a time domain signal. Alternatively, methods are available (e.g. Dirlik) that enable the estimation of load spectra directly from power spectral densities (PSDs) of the corresponding time signals; knowledge of the time signal is then not necessary. These PSD based methods have the enormous advantage that, if the signal to be assessed results for example from a finite element based vibration analysis, the simulation of PSDs in the frequency domain is far faster than the simulation of time signals in the time domain. This is especially true for random vibrations with very long signals in the time domain. The disadvantage of the PSD based simulation of vibrations, and also of the PSD based load spectra estimation, is their limitation to Gaussian distributed time signals. Deviations from this Gaussian distribution cause relevant deviations in the estimated load spectra. In these cases usually only computation time intensive time domain calculations produce accurate results. This paper presents a method for dealing with non-Gaussian signals with realistic statistical properties that is still able to use the efficient PSD approach with its computation time advantages. Essentially it is based on a decomposition of the non-Gaussian signal into Gaussian distributed parts. The PSDs of these rearranged signals are then used to perform the usual PSD analyses. In particular, detailed methods are described for the decomposition of time signals and for the derivation of PSDs and cross power spectral densities (CPSDs) from multiple real measurements without using inaccurate standard procedures. Furthermore, the basic intention is to design a general and integrated method that is not just able to analyse a single load case for a small time interval, but to generate representative PSD and CPSD spectra replacing extensive measured loads in the time domain without losing the necessary accuracy of the fatigue load results. These long measurements may even represent the whole application range of the railway vehicle. The presented work demonstrates the application of this method to railway vehicle components subjected to random vibrations caused by the wheel-rail contact. Extensive measurements of axle box accelerations have been used to verify the proposed procedure for this class of railway vehicle applications. The assumption of linearity is not a real limitation, because the structural vibrations caused by the random excitations are usually small for rail vehicle applications. The impact of nonlinearities is usually covered by separate nonlinear models and is only needed for the deterministic part of the loads. Linear vibration systems subjected to Gaussian excitations respond with vibrations that also have a Gaussian distribution. A non-Gaussian distribution in the excitation signal produces a non-Gaussian response with statistical properties different from those of the excitation. A drawback is the fact that there is no simple mathematical relation between excitation and response concerning these deviations from the Gaussian distribution (see e.g. Ito calculus [6], which is usually not part of commercial codes!).
There are a couple of well-established procedures for the prediction of fatigue load spectra from PSDs designed for Gaussian loads (see [4]); the question of the impact of non-Gaussian distributions on fatigue load prediction has been studied for decades (see e.g. [3,4,11-13]) and is still the subject of ongoing research; e.g. [13] proposed a procedure capable of considering non-Gaussian broadbanded loads. It is based on knowledge of the response PSD and some statistical data defining the non-Gaussian character of the underlying time signal. As already described above, these statistical data are usually not available for a PSD vibration response that has been calculated in the frequency domain. Summarizing the above, and considering the highly non-Gaussian excitations on railway vehicles caused by the wheel-rail contact, this means that the fast PSD analysis in the frequency domain cannot simply be combined with load spectra prediction methods for PSDs.

  13. The statistical treatment implemented to obtain the planetary protection bioburdens for the Mars Science Laboratory mission

    NASA Astrophysics Data System (ADS)

    Beaudet, Robert A.

    2013-06-01

    NASA Planetary Protection Policy requires that Category IV missions such as those going to the surface of Mars include detailed assessment and documentation of the bioburden on the spacecraft at launch. In prior missions to Mars, the approaches used to estimate the bioburden could easily be conservative without penalizing the project, because spacecraft elements such as the descent and landing stages had relatively small surface areas and volumes. With the advent of a large spacecraft such as Mars Science Laboratory (MSL), it became necessary to use a modified statistical treatment, still conservative but more pragmatic, to obtain the standard deviations and the bioburden densities at about the 99.9% confidence limits. This article describes both the Gaussian and Poisson statistics that were implemented to analyze the bioburden data from the MSL spacecraft prior to launch. The standard deviations were weighted by the areas sampled with each swab or wipe. Some typical cases are given and discussed.

  14. Effect of central obscuration on the LDR point spread function

    NASA Technical Reports Server (NTRS)

    Vanzyl, Jakob J.

    1988-01-01

    It is well known that Gaussian apodization of an aperture reduces the sidelobe levels of its point spread function (PSF). In the limit where the standard deviation of the Gaussian function is much smaller than the diameter of the aperture, the sidelobes completely disappear. However, when Gaussian apodization is applied to the Large Deployable Reflector (LDR) array consisting of 84 hexagonal panels, it is found that the sidelobe level only decreases by about 2.5 dB. The reason for this is explained. The PSF is shown for an array consisting of 91 uniformly illuminated hexagonal apertures; this array is identical to the LDR array, except that the central hole in the LDR array is filled with seven additional panels. For comparison, the PSF of the uniformly illuminated LDR array is shown. It is already evident that the sidelobe structure of the LDR array is different from that of the full array of 91 panels. The PSFs of the same two arrays are shown, but with the illumination apodized with a Gaussian function to have 20 dB tapering at the edges of the arrays. While the sidelobes of the full array have decreased dramatically, those of the LDR array changed in structure but stayed at almost the same level. This result is not completely surprising, since the Gaussian apodization tends to emphasize the contributions from the central portion of the array, which is exactly where the hole in the LDR array is located. The two most important conclusions are: the size of the central hole should be minimized, and a simple Gaussian apodization scheme to suppress the sidelobes in the PSF should not be used. A more suitable apodization scheme would be a Gaussian annular ring.

  15. Asymptotics of small deviations of the Bogoliubov processes with respect to a quadratic norm

    NASA Astrophysics Data System (ADS)

    Pusev, R. S.

    2010-10-01

    We obtain results on small deviations of Bogoliubov’s Gaussian measure occurring in the theory of the statistical equilibrium of quantum systems. For some random processes related to Bogoliubov processes, we find the exact asymptotic probability of their small deviations with respect to a Hilbert norm.

  16. Accelerator test of the coded aperture mask technique for gamma-ray astronomy

    NASA Technical Reports Server (NTRS)

    Jenkins, T. L.; Frye, G. M., Jr.; Owens, A.; Carter, J. N.; Ramsden, D.

    1982-01-01

    A prototype gamma-ray telescope employing the coded aperture mask technique has been constructed and its response to a point source of 20 MeV gamma-rays has been measured. The point spread function is approximately a Gaussian with a standard deviation of 12 arc minutes. This resolution is consistent with the cell size of the mask used and the spatial resolution of the detector. In the context of the present experiment, the error radius of the source position (90 percent confidence level) is 6.1 arc minutes.

  17. Simulated laser fluorosensor signals from subsurface chlorophyll distributions

    NASA Technical Reports Server (NTRS)

    Venable, D. D.; Khatun, S.; Punjabi, A.; Poole, L.

    1986-01-01

    A semianalytic Monte Carlo model has been used to simulate laser fluorosensor signals returned from subsurface distributions of chlorophyll. This study assumes the only constituent of the ocean medium is the common coastal zone dinoflagellate Prorocentrum minimum. The concentration is represented by Gaussian distributions in which the location of the distribution maximum and the standard deviation are variable. Most of the qualitative features observed in the fluorescence signal for total chlorophyll concentrations up to 1.0 microg/liter can be accounted for with a simple analytic solution assuming a rectangular chlorophyll distribution function.

  18. Hybrid approach of selecting hyperparameters of support vector machine for regression.

    PubMed

    Jeng, Jin-Tsong

    2006-06-01

    To select the hyperparameters of the support vector machine for regression (SVR), a hybrid approach is proposed to determine the kernel parameter of the Gaussian kernel function and the epsilon value of Vapnik's epsilon-insensitive loss function. The proposed hybrid approach includes a competitive agglomeration (CA) clustering algorithm and a repeated SVR (RSVR) approach. Since the CA clustering algorithm finds a nearly "optimal" number of clusters and the centers of the clusters during the clustering process, it is applied to select the Gaussian kernel parameter. Additionally, an RSVR approach that relies on the standard deviation of the training error is proposed to obtain the epsilon value in the loss function. Finally, two functions, one real data set (a time series of the quarterly unemployment rate for West Germany), and the identification of a nonlinear plant are used to verify the usefulness of the hybrid approach.
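
    A rough sketch of the second ingredient, choosing epsilon from the standard deviation of the training error, is shown below with scikit-learn's SVR; the CA clustering step is replaced here by a simple variance-based heuristic for the RBF width, and the noisy sinc test function is only illustrative, not one of the paper's benchmarks.

    ```python
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    X = np.linspace(-5, 5, 200).reshape(-1, 1)
    y = np.sinc(X).ravel() + rng.normal(0, 0.1, X.shape[0])

    # Stand-in for the CA clustering step: pick the RBF width from the data spread.
    gamma = 1.0 / (2 * np.var(X))

    # Repeated-SVR idea: fit once, set epsilon from the standard deviation of the
    # training residuals, then refit with that epsilon.
    svr = SVR(kernel='rbf', gamma=gamma, epsilon=0.1).fit(X, y)
    eps = np.std(y - svr.predict(X))
    svr = SVR(kernel='rbf', gamma=gamma, epsilon=eps).fit(X, y)
    print("chosen epsilon:", round(eps, 3), "support vectors:", len(svr.support_))
    ```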

  19. Reconstructing the interaction between dark energy and dark matter using Gaussian processes

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Guo, Zong-Kuan; Cai, Rong-Gen

    2015-06-01

    We present a nonparametric approach to reconstruct the interaction between dark energy and dark matter directly from SNIa Union 2.1 data using Gaussian processes, which is a fully Bayesian approach for smoothing data. In this method, once the equation of state (w) of dark energy is specified, the interaction can be reconstructed as a function of redshift. For the decaying vacuum energy case with w = -1, the reconstructed interaction is consistent with the standard ΛCDM model, namely, there is no evidence for the interaction. This also holds for the constant w cases from -0.9 to -1.1 and for the Chevallier-Polarski-Linder (CPL) parametrization case. If the equation of state deviates appreciably from -1, the reconstructed interaction exists at the 95% confidence level. This shows the degeneracy between the interaction and the equation of state of dark energy when they are constrained by the observational data.
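
    The smoothing step of such a reconstruction can be sketched with scikit-learn's Gaussian process regressor, as below; the redshift grid, kernel choice and the synthetic distance-like data are assumptions, and the subsequent step of turning the smoothed curve and its derivatives into an interaction term is not reproduced here.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Hypothetical supernova-like data: redshift z and a noisy distance-related quantity D(z).
    rng = np.random.default_rng(0)
    z = np.sort(rng.uniform(0.01, 1.4, 100))
    D_true = z / (1 + 0.3 * z)                      # made-up smooth trend
    D_obs = D_true + rng.normal(0, 0.02, z.size)

    kernel = 1.0 * RBF(length_scale=0.5) + WhiteKernel(noise_level=4e-4)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(z[:, None], D_obs)
    mean, std = gp.predict(z[:, None], return_std=True)
    print("max 1-sigma width of the smoothed reconstruction:", std.max())
    ```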

  20. Interstellar Pickup Ion Acceleration in the Turbulent Magnetic Field at the Solar Wind Termination Shock Using a Focused Transport Approach

    NASA Astrophysics Data System (ADS)

    Ye, Junye; le Roux, Jakobus A.; Arthur, Aaron D.

    2016-08-01

    We study the physics of locally born interstellar pickup proton acceleration at the nearly perpendicular solar wind termination shock (SWTS) in the presence of a random magnetic field spiral angle using a focused transport model. Guided by Voyager 2 observations, the spiral angle is modeled with a q-Gaussian distribution. The spiral angle fluctuations, which are used to generate the perpendicular diffusion of pickup protons across the SWTS, play a key role in enabling efficient injection and rapid diffusive shock acceleration (DSA) when these particles follow field lines. Our simulations suggest that variation of both the shape (q-value) and the standard deviation (σ-value) of the q-Gaussian distribution significantly affect the injection speed, pitch-angle anisotropy, radial distribution, and the efficiency of the DSA of pickup protons at the SWTS. For example, increasing q and especially reducing σ enhances the DSA rate.

  1. Probing the statistical properties of CMB B-mode polarization through Minkowski functionals

    NASA Astrophysics Data System (ADS)

    Santos, Larissa; Wang, Kai; Zhao, Wen

    2016-07-01

    The detection of the magnetic type B-mode polarization is the main goal of future cosmic microwave background (CMB) experiments. In the standard model, the B-mode map is a strongly non-Gaussian field due to the CMB lensing component. Besides the two-point correlation function, other statistics are also very important for extracting the information in the polarization map. In this paper, we employ the Minkowski functionals to study the morphological properties of the lensed B-mode maps. We find that the deviations from Gaussianity are very significant for both full- and partial-sky surveys. As an application of the analysis, we investigate the morphological imprints of the foreground residuals in the B-mode map. We find that even for very tiny foreground residuals, the effects on the map can be detected by the Minkowski functional analysis. Therefore, it provides a complementary way to investigate foreground contamination in CMB studies.

  2. The Nonsubsampled Contourlet Transform Based Statistical Medical Image Fusion Using Generalized Gaussian Density

    PubMed Central

    Yang, Guocheng; Li, Meiling; Chen, Leiting; Yu, Jie

    2015-01-01

    We propose a novel medical image fusion scheme based on the statistical dependencies between coefficients in the nonsubsampled contourlet transform (NSCT) domain, in which the probability density function of the NSCT coefficients is concisely fitted using a generalized Gaussian density (GGD), and the similarity of two subbands is accurately computed by the Jensen-Shannon divergence of the two GGDs. To preserve more useful information from the source images, new fusion rules are developed to combine the subbands at the various frequencies. That is, the low frequency subbands are fused by utilizing two activity measures based on the regional standard deviation and Shannon entropy, and the high frequency subbands are merged via weight maps determined by the saliency values of pixels. The experimental results demonstrate that the proposed method significantly outperforms conventional NSCT based medical image fusion approaches in both visual perception and evaluation indices. PMID:26557871

  3. Sunspot cycle-dependent changes in the distribution of GSE latitudinal angles of IMF observed near 1 AU

    NASA Astrophysics Data System (ADS)

    Felix Pereira, B.; Girish, T. E.

    2004-05-01

    The solar cycle variations in the characteristics of the GSE latitudinal angles of the Interplanetary Magnetic Field (θGSE) observed near 1 AU have been studied for the period 1967-2000. It is observed that the statistical parameters mean, standard deviation, skewness and kurtosis vary with sunspot cycle. The θGSE distribution resembles the Gaussian curve during sunspot maximum and is clearly non-Gaussian during sunspot minimum. The width of the θGSE distribution is found to increase with sunspot activity, which is likely to depend on the occurrence of solar transients. Solar cycle variations in skewness are ordered by the solar polar magnetic field changes. This can be explained in terms of the dependence of the dominant polarity of the north-south component of IMF in the GSE system near 1 AU on the IMF sector polarity and the structure of the heliospheric current sheet.

  4. Optimal random search for a single hidden target.

    PubMed

    Snider, Joseph

    2011-01-01

    A single target is hidden at a location chosen from a predetermined probability distribution. Then, a searcher must find a second probability distribution from which random search points are sampled such that the target is found in the minimum number of trials. Here it will be shown that if the searcher must get very close to the target to find it, then the best search distribution is proportional to the square root of the target distribution regardless of dimension. For a Gaussian target distribution, the optimum search distribution is approximately a Gaussian with a standard deviation that varies inversely with how close the searcher must be to the target to find it. For a network where the searcher randomly samples nodes and looks for the fixed target along edges, the optimum is either to sample a node with probability proportional to the square root of the out-degree plus 1 or not to do so at all.
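
    The square-root rule can be checked with a small discretized example: if the target sits in one of N cells with Gaussian probabilities p and the searcher samples cells independently from a distribution s, the expected number of trials is the sum of p/s, which is minimized by s proportional to sqrt(p). The grid and cell count below are arbitrary choices.

    ```python
    import numpy as np

    # Gaussian target probabilities on a truncated grid of cells.
    x = np.linspace(-6, 6, 241)
    p = np.exp(-0.5 * x ** 2)
    p /= p.sum()

    def expected_trials(s):
        """Expected number of independent search samples until the target cell is hit."""
        s = s / s.sum()
        return np.sum(p / s)

    print("search with s = p      :", round(expected_trials(p.copy()), 1))
    print("search with s = sqrt(p):", round(expected_trials(np.sqrt(p)), 1))
    ```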

  5. Valid analytical performance specifications for combined analytical bias and imprecision for the use of common reference intervals.

    PubMed

    Hyltoft Petersen, Per; Lund, Flemming; Fraser, Callum G; Sandberg, Sverre; Sölétormos, György

    2018-01-01

    Background Many clinical decisions are based on comparison of patient results with reference intervals. Therefore, an estimation of the analytical performance specifications for the quality that would be required to allow sharing common reference intervals is needed. The International Federation of Clinical Chemistry (IFCC) recommended a minimum of 120 reference individuals to establish reference intervals. This number implies a certain level of quality, which could then be used for defining analytical performance specifications as the maximum combination of analytical bias and imprecision required for sharing common reference intervals, which is the aim of this investigation. Methods Two methods were investigated for defining the maximum combination of analytical bias and imprecision that would give the same quality of common reference intervals as the IFCC recommendation. Method 1 is based on a formula for the combination of analytical bias and imprecision, and Method 2 is based on the Microsoft Excel formula NORMINV, including the fractional probability of reference individuals outside each limit and the Gaussian variables of mean and standard deviation. The combinations of normalized bias and imprecision are illustrated for both methods. The formulae are identical for Gaussian and log-Gaussian distributions. Results Method 2 gives the correct results, with a constant percentage of 4.4% for all combinations of bias and imprecision. Conclusion The Microsoft Excel formula NORMINV is useful for the estimation of analytical performance specifications for both Gaussian and log-Gaussian distributions of reference intervals.
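
    Excel's NORMINV(p, mean, sd) corresponds to scipy.stats.norm.ppf(p, loc=mean, scale=sd); the sketch below uses it to compute the fraction of a Gaussian reference population falling outside nominal 2.5%/97.5% limits once a normalized bias and extra imprecision are added. This is a simplified reading of the approach, not the paper's exact formula or its 4.4% criterion.

    ```python
    from scipy.stats import norm

    def fraction_outside(bias, imprecision):
        """Fraction of results outside nominal 2.5%/97.5% reference limits when an
        analytical bias (in biological-SD units) and extra imprecision (ratio to the
        biological SD) are added to a standard Gaussian of reference values."""
        lower, upper = norm.ppf(0.025), norm.ppf(0.975)   # Excel: NORMINV(p, 0, 1)
        total_sd = (1 + imprecision ** 2) ** 0.5
        below = norm.cdf(lower, loc=bias, scale=total_sd)
        above = 1 - norm.cdf(upper, loc=bias, scale=total_sd)
        return below + above

    print(round(100 * fraction_outside(0.0, 0.0), 2))     # 5.00% with no analytical error
    print(round(100 * fraction_outside(0.25, 0.5), 2))    # example bias/imprecision combination
    ```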

  6. Large-deviation probabilities for correlated Gaussian processes and intermittent dynamical systems

    NASA Astrophysics Data System (ADS)

    Massah, Mozhdeh; Nicol, Matthew; Kantz, Holger

    2018-05-01

    In its classical version, the theory of large deviations makes quantitative statements about the probability of outliers when estimating time averages, if time series data are identically independently distributed. We study large-deviation probabilities (LDPs) for time averages in short- and long-range correlated Gaussian processes and show that long-range correlations lead to subexponential decay of LDPs. A particular deterministic intermittent map can, depending on a control parameter, also generate long-range correlated time series. We illustrate numerically, in agreement with the mathematical literature, that this type of intermittency leads to a power law decay of LDPs. The power law decay holds irrespective of whether the correlation time is finite or infinite, and hence irrespective of whether the central limit theorem applies or not.

  7. Temperature dependence of current-and capacitance-voltage characteristics of an Au/4H-SiC Schottky diode

    NASA Astrophysics Data System (ADS)

    Gülnahar, Murat

    2014-12-01

    In this study, the current-voltage (I-V) and capacitance-voltage (C-V) measurements of an Au/4H-SiC Schottky diode are characterized as a function of temperature in the 50-300 K range. The experimental parameters such as the ideality factor and the apparent barrier height are found to be strongly temperature dependent; that is, the ideality factor increases and the apparent barrier height decreases with decreasing temperature, whereas the barrier height values obtained from the C-V data increase with temperature. Likewise, the Richardson plot deviates at low temperatures. These anomalous behaviors observed for Au/4H-SiC are attributed to Schottky barrier inhomogeneities. The barrier anomaly, which relates to the Au/4H-SiC interface, is also confirmed by C-V measurements versus frequency at 300 K and is interpreted by both Tung's lateral inhomogeneity model and the multi-Gaussian distribution approach. The values of the weighting coefficients, standard deviations and mean barrier heights are calculated for each distribution region of Au/4H-SiC using the multi-Gaussian distribution approach. In addition, the total effective area of the patches NAe is obtained at separate temperatures and, as a result, it is shown that the low barrier regions contribute significantly to the current transport at the junction. The homogeneous barrier height value is calculated from the correlation between the ideality factor and the barrier height, and it is noted that the values of the standard deviation from the ideality factor versus q/3kT curve are in close agreement with the values obtained from the barrier height versus q/2kT variation. As a result, it can be concluded that the temperature dependent electrical characteristics of Au/4H-SiC can be successfully explained on the basis of thermionic emission theory combined with both models.

  8. Gaussian mixture models for detection of autism spectrum disorders (ASD) in magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Almeida, Javier; Velasco, Nelson; Alvarez, Charlens; Romero, Eduardo

    2017-11-01

    Autism Spectrum Disorder (ASD) is a complex neurological condition characterized by a triad of signs: stereotyped behaviors, verbal and non-verbal communication problems. The scientific community has been interested in quantifying the anatomical brain alterations of this disorder. Several studies have focused on measuring brain cortical and sub-cortical volumes. This article presents a fully automatic method that identifies differences between patients diagnosed with autism and control subjects. After the usual pre-processing, a template (MNI152) is registered to the brain under evaluation, which is then divided into a set of regions. Each of these regions is represented by its normalized histogram of intensities, which is approximated by a Gaussian mixture model (GMM). The gray and white matter are separated to calculate the mean and standard deviation of each Gaussian. These features are then used to train, region per region, a binary SVM classifier. The method was evaluated on an adult population aged from 18 to 35 years, from the public database Autism Brain Imaging Data Exchange (ABIDE). The highest discrimination values were found for the Right Middle Temporal Gyrus, with an area under the receiver operating characteristic (ROC) curve (AUC) of 0.72.
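
    The per-region feature extraction can be sketched as below: fit a two-component GMM to a region's intensity values, take the component means and standard deviations as features, and feed them to an SVM. The two-component assumption (standing in for gray/white matter), the synthetic intensities and the linear kernel are illustrative choices, not the paper's exact pipeline.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.svm import SVC

    def region_features(intensities, n_components=2):
        """Fit a GMM to one region's intensity values and return the sorted
        component means and standard deviations as a feature vector."""
        gmm = GaussianMixture(n_components=n_components, random_state=0)
        gmm.fit(np.asarray(intensities, dtype=float).reshape(-1, 1))
        order = np.argsort(gmm.means_.ravel())
        means = gmm.means_.ravel()[order]
        stds = np.sqrt(gmm.covariances_.ravel()[order])
        return np.concatenate([means, stds])

    # Synthetic stand-in for one region across subjects (two tissue-like classes each).
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, 40)                  # 0 = control, 1 = ASD (synthetic)
    subjects = [np.concatenate([rng.normal(0.35, 0.05, 500),
                                rng.normal(0.70, 0.05 + 0.02 * lbl, 500)]) for lbl in labels]
    X = np.array([region_features(s) for s in subjects])
    clf = SVC(kernel='linear').fit(X, labels)
    print("training accuracy (illustrative only):", clf.score(X, labels))
    ```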

  9. Optimal Filter Estimation for Lucas-Kanade Optical Flow

    PubMed Central

    Sharmin, Nusrat; Brad, Remus

    2012-01-01

    Optical flow algorithms offer a way to estimate motion from a sequence of images. The computation of optical flow plays a key role in several computer vision applications, including motion detection and segmentation, frame interpolation, three-dimensional scene reconstruction, robot navigation and video compression. In the case of gradient based optical flow implementations, the pre-filtering step plays a vital role, not only for accurate computation of optical flow, but also for the improvement of performance. Generally, in optical flow computation, filtering is applied initially to the original input images and afterwards the images are resized. In this paper, we propose an image filtering approach as a pre-processing step for the Lucas-Kanade pyramidal optical flow algorithm. Based on a study of different types of filtering methods applied to the Iterative Refined Lucas-Kanade algorithm, we have determined the best filtering practice. As the Gaussian smoothing filter was selected, an empirical approach for estimating the Gaussian variance was introduced. Tested on the Middlebury image sequences, a correlation between the image intensity value and the standard deviation of the Gaussian function was established. Finally, we have found that our selection method offers better performance for the Lucas-Kanade optical flow algorithm.

  10. Intra-individual reaction time variability based on ex-Gaussian distribution as a potential endophenotype for attention-deficit/hyperactivity disorder.

    PubMed

    Lin, H-Y; Hwang-Gu, S-L; Gau, S S-F

    2015-07-01

    Intra-individual variability in reaction time (IIV-RT), defined by standard deviation of RT (RTSD), is considered as an endophenotype for attention-deficit/hyperactivity disorder (ADHD). Ex-Gaussian distributions of RT, rather than RTSD, could better characterize moment-to-moment fluctuations in neuropsychological performance. However, data of response variability based on ex-Gaussian parameters as an endophenotypic candidate for ADHD are lacking. We assessed 411 adolescents with clinically diagnosed ADHD based on the DSM-IV-TR criteria as probands, 138 unaffected siblings, and 138 healthy controls. The output parameters, mu, sigma, and tau, of an ex-Gaussian RT distribution were derived from the Conners' continuous performance test. Multi-level models controlling for sex, age, comorbidity, and use of methylphenidate were applied. Compared with unaffected siblings and controls, ADHD probands had elevated sigma value, omissions, commissions, and mean RT. Unaffected siblings formed an intermediate group in-between probands and controls in terms of tau value and RTSD. There was no between-group difference in mu value. Conforming to a context-dependent nature, unaffected siblings still had an intermediate tau value in-between probands and controls across different interstimulus intervals. Our findings suggest IIV-RT represented by tau may be a potential endophenotype for inquiry into genetic underpinnings of ADHD in the context of heterogeneity. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
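
    For reference, the ex-Gaussian decomposition of reaction times into mu, sigma and tau can be fitted with SciPy's exponnorm distribution, as sketched below on simulated data; the parameter values are arbitrary, and maximum-likelihood fitting is not necessarily the estimation method used in the study.

    ```python
    import numpy as np
    from scipy.stats import exponnorm

    # Simulated reaction times (seconds): Gaussian component plus exponential tail.
    rng = np.random.default_rng(0)
    mu, sigma, tau = 0.45, 0.05, 0.15
    rt = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

    # SciPy parameterizes the ex-Gaussian as exponnorm(K, loc, scale) with tau = K * scale.
    K, loc, scale = exponnorm.fit(rt)
    print("mu ~", round(loc, 3), "sigma ~", round(scale, 3), "tau ~", round(K * scale, 3))
    ```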

  11. Novel theory for propagation of tilted Gaussian beam through aligned optical system

    NASA Astrophysics Data System (ADS)

    Xia, Lei; Gao, Yunguo; Han, Xudong

    2017-03-01

    A novel theory for tilted beam propagation is established in this paper. By setting the propagation direction of the tilted beam as the new optical axis, we establish a virtual optical system that is aligned with the new optical axis. Within the first-order approximation of the tilt and off-axis displacement, the propagation of the tilted beam is studied in the virtual system instead of the actual system. To achieve more accurate optical field distributions of tilted Gaussian beams, a complete diffraction integral for a misaligned optical system is derived by using the matrix theory with angular momenta. The theory demonstrates that a tilted TEM00 Gaussian beam passing through an aligned optical element transforms into a decentered Gaussian beam along the propagation direction. The deviations between the peak intensity axis of the decentered Gaussian beam and the new optical axis have linear relationships with the misalignments in the virtual system. A ZEMAX simulation of a tilted beam through a thick lens in air shows that the errors between the simulation results and the theoretical calculations of the position deviations are less than 2‰ when the misalignments εx, εy, εx', εy' are in the range of [-0.5, 0.5] mm and [-0.5, 0.5]°.

  12. Application of the thermorheologically complex nonlinear Adam-Gibbs model for the glass transition to molecular motion in hydrated proteins.

    PubMed

    Hodge, Ian M

    2006-08-01

    The nonlinear thermorheologically complex Adam Gibbs (extended "Scherer-Hodge") model for the glass transition is applied to enthalpy relaxation data reported by Sartor, Mayer, and Johari for hydrated methemoglobin. A sensible range in values for the average localized activation energy is obtained (100-200 kJ mol(-1)). The standard deviation in the inferred Gaussian distribution of activation energies, computed from the reported KWW beta-parameter, is approximately 30% of the average, consistent with the suggestion that some relaxation processes in hydrated proteins have exceptionally low activation energies.

  13. Hunting high and low: disentangling primordial and late-time non-Gaussianity with cosmic densities in spheres

    NASA Astrophysics Data System (ADS)

    Uhlemann, C.; Pajer, E.; Pichon, C.; Nishimichi, T.; Codis, S.; Bernardeau, F.

    2018-03-01

    Non-Gaussianities of dynamical origin are disentangled from primordial ones using the formalism of large deviation statistics with spherical collapse dynamics. This is achieved by relying on accurate analytical predictions for the one-point probability distribution function and the two-point clustering of spherically averaged cosmic densities (sphere bias). Sphere bias extends the idea of halo bias to intermediate density environments and voids as underdense regions. In the presence of primordial non-Gaussianity, sphere bias displays a strong scale dependence relevant for both high- and low-density regions, which is predicted analytically. The statistics of densities in spheres are built to model primordial non-Gaussianity via an initial skewness with a scale dependence that depends on the bispectrum of the underlying model. The analytical formulas with the measured non-linear dark matter variance as input are successfully tested against numerical simulations. For local non-Gaussianity with a range from fNL = -100 to +100, they are found to agree within 2 per cent or better for densities ρ ∈ [0.5, 3] in spheres of radius 15 Mpc h-1 down to z = 0.35. The validity of the large deviation statistics formalism is thereby established for all observationally relevant local-type departures from perfectly Gaussian initial conditions. The corresponding estimators for the amplitude of the non-linear variance σ8 and primordial skewness fNL are validated using a fiducial joint maximum likelihood experiment. The influence of observational effects and the prospects for a future detection of primordial non-Gaussianity from joint one- and two-point densities-in-spheres statistics are discussed.

  14. Model-independent analyses of non-Gaussianity in Planck CMB maps using Minkowski functionals

    NASA Astrophysics Data System (ADS)

    Buchert, Thomas; France, Martin J.; Steiner, Frank

    2017-05-01

    Despite the wealth of Planck results, there are difficulties in disentangling the primordial non-Gaussianity of the Cosmic Microwave Background (CMB) from the secondary and the foreground non-Gaussianity (NG). For each of these forms of NG the lack of complete data introduces model-dependences. Aiming at detecting the NGs of the CMB temperature anisotropy δT, while paying particular attention to a model-independent quantification of NGs, our analysis is based upon statistical and morphological univariate descriptors, respectively: the probability density function P(δT), related to v0, the first Minkowski Functional (MF), and the two other MFs, v1 and v2. From their analytical Gaussian predictions we build the discrepancy functions Δk (k = P, 0, 1, 2) which are applied to an ensemble of 105 CMB realization maps of the ΛCDM model and to the Planck CMB maps. In our analysis we use general Hermite expansions of the Δk up to the 12th order, where the coefficients are explicitly given in terms of cumulants. Assuming hierarchical ordering of the cumulants, we obtain the perturbative expansions generalizing the second order expansions of Matsubara to arbitrary order in the standard deviation σ0 for P(δT) and v0, where the perturbative expansion coefficients are explicitly given in terms of complete Bell polynomials. The comparison of the Hermite expansions and the perturbative expansions is performed for the ΛCDM map sample and the Planck data. We confirm the weak level of non-Gaussianity (1-2)σ of the foreground corrected masked Planck 2015 maps.

  15. Investigating the TeV Morphology of MGRO J1908+06 with VERITAS

    NASA Astrophysics Data System (ADS)

    Aliu, E.; Archambault, S.; Aune, T.; Behera, B.; Beilicke, M.; Benbow, W.; Berger, K.; Bird, R.; Buckley, J. H.; Bugaev, V.; Cardenzana, J. V.; Cerruti, M.; Chen, X.; Ciupik, L.; Collins-Hughes, E.; Connolly, M. P.; Cui, W.; Dumm, J.; Dwarkadas, V. V.; Errando, M.; Falcone, A.; Federici, S.; Feng, Q.; Finley, J. P.; Fleischhack, H.; Fortin, P.; Fortson, L.; Furniss, A.; Galante, N.; Gall, D.; Gillanders, G. H.; Griffin, S.; Griffiths, S. T.; Grube, J.; Gyuk, G.; Hanna, D.; Holder, J.; Hughes, G.; Humensky, T. B.; Kaaret, P.; Kertzman, M.; Khassen, Y.; Kieda, D.; Krennrich, F.; Kumar, S.; Lang, M. J.; Madhavan, A. S.; Maier, G.; McCann, A. J.; Meagher, K.; Millis, J.; Moriarty, P.; Mukherjee, R.; Nieto, D.; O'Faoláin de Bhróithe, A.; Ong, R. A.; Otte, A. N.; Pandel, D.; Park, N.; Pohl, M.; Popkow, A.; Prokoph, H.; Quinn, J.; Ragan, K.; Rajotte, J.; Ratliff, G.; Reyes, L. C.; Reynolds, P. T.; Richards, G. T.; Roache, E.; Rousselle, J.; Sembroski, G. H.; Shahinyan, K.; Sheidaei, F.; Smith, A. W.; Staszak, D.; Telezhinsky, I.; Tsurusaki, K.; Tucci, J. V.; Tyler, J.; Varlotta, A.; Vassiliev, V. V.; Vincent, S.; Wakely, S. P.; Ward, J. E.; Weinstein, A.; Welsing, R.; Wilhelm, A.

    2014-06-01

    We report on deep observations of the extended TeV gamma-ray source MGRO J1908+06 made with the VERITAS very high energy gamma-ray observatory. Previously, the TeV emission has been attributed to the pulsar wind nebula (PWN) of the Fermi-LAT pulsar PSR J1907+0602. We detect MGRO J1908+06 at a significance level of 14 standard deviations (14σ) and measure a photon index of 2.20 ± 0.10 (stat) ± 0.20 (sys). The TeV emission is extended, covering the region near PSR J1907+0602 and also extending toward SNR G40.5-0.5. When fitted with a two-dimensional Gaussian, the intrinsic extension has a standard deviation of σsrc = 0.44° ± 0.02°. In contrast to other TeV PWNe of similar age in which the TeV spectrum softens with distance from the pulsar, the TeV spectrum measured near the pulsar location is consistent with that measured at a position near the rim of G40.5-0.5, 0.33° away.

  16. Investigations of internal noise levels for different target sizes, contrasts, and noise structures

    NASA Astrophysics Data System (ADS)

    Han, Minah; Choi, Shinkook; Baek, Jongduk

    2014-03-01

    To describe internal noise levels for different target sizes, contrasts, and noise structures, Gaussian targets with four different sizes (i.e., standard deviations of 2, 4, 6 and 8) and three different noise structures (i.e., white, low-pass, and high-pass) were generated. The generated noise images were scaled to have a standard deviation of 0.15. For each noise type, target contrasts were adjusted to have the same detectability based on NPW, and the detectability of CHO was calculated accordingly. For the human observer study, 3 trained observers performed 2AFC detection tasks, and the correct-response rate, Pc, was calculated for each task. By adding an appropriate internal noise level to the numerical observers (i.e., NPW and CHO), the detectability of the human observer was matched to that of the numerical observers. Even though target contrasts were adjusted to give the same NPW detectability, the detectability of the human observer decreases as the target size increases. The internal noise level varies for different target sizes, contrasts, and noise structures, demonstrating that different internal noise levels should be considered in numerical observers to predict the detection performance of the human observer.

  17. UV-light-assisted functionalization for sensing of light molecules

    NASA Astrophysics Data System (ADS)

    Funari, Riccardo; Della Ventura, Bartolomeo; Ambrosio, Antonio; Lettieri, Stefano; Maddalena, Pasqualino; Altucci, Carlo; Velotta, Raffaele

    2013-05-01

    An antibody immobilization technique based on the formation of thiol groups after UV irradiation of the proteins is shown to be able to orient antibodies upright on the gold electrode of a Quartz Crystal Microbalance (QCM). This greatly improves the ability of the antibodies to recognize small antigens, thereby increasing the sensitivity of the QCM. The capability of this procedure to orient antibodies is confirmed by Atomic Force Microscopy (AFM) of the surface, which shows different statistical distributions for the heights of the detected peaks depending on whether the irradiation is performed or not. In particular, the distributions are Gaussian, with a smaller standard deviation when irradiated antibodies are used compared to that obtained with untreated antibodies. The reduction in standard deviation is explained in terms of the higher order induced on the host surface, resulting from the tendency of irradiated antibodies to be anchored upright on the surface with their antigen binding sites free to catch the recognized analytes. As a result, the sensitivity of the realized biosensor is increased by more than one order of magnitude.

  18. Non-Gaussian PDF Modeling of Turbulent Boundary Layer Fluctuating Pressure Excitation

    NASA Technical Reports Server (NTRS)

    Steinwolf, Alexander; Rizzi, Stephen A.

    2003-01-01

    The purpose of the study is to investigate properties of the probability density function (PDF) of turbulent boundary layer fluctuating pressures measured on the exterior of a supersonic transport aircraft. It is shown that fluctuating pressure PDFs differ from the Gaussian distribution even for surface conditions having no significant discontinuities. The PDF tails are wider and longer than those of the Gaussian model. For pressure fluctuations upstream of forward-facing step discontinuities and downstream of aft-facing step discontinuities, deviations from the Gaussian model are more significant and the PDFs become asymmetrical. Various analytical PDF distributions are used and further developed to model this behavior.

  19. Electrokinetic transport properties of deoxynucleotide monophosphates (dNMPs) through thermoplastic nanochannels.

    PubMed

    O'Neil, Colleen; Amarasekara, Charuni A; Weerakoon-Ratnayake, Kumuditha M; Gross, Bethany; Jia, Zheng; Singh, Varshni; Park, Sunggook; Soper, Steven A

    2018-10-16

    The electrokinetic behavior of molecules in nanochannels (<100 nm) has generated interest due to the unique transport properties observed that are not seen in microscale channels. These nanoscale-dependent transport properties include transverse electromigration arising from partial electrical double layer overlap, enhanced solute/wall interactions due to the small channel diameter, and field-dependent intermittent motion produced by surface roughness. In this study, the electrokinetic transport properties of deoxynucleotide monophosphates (dNMPs) were investigated, including the effects of electric field strength, surface effects, and composition of the carrier electrolyte (ionic concentration and pH). The dNMPs were labeled with a fluorescent reporter (ATTO 532) to allow tracking of the electrokinetic transport of the dNMPs through a thermoplastic nanochannel fabricated via nanoimprinting (110 nm × 110 nm, width × depth, and 100 μm in length). We discovered that the transport properties of the dye-labeled dNMPs in plastic nanochannels produced differences in their apparent mobilities that were not seen using microscale columns. We built histograms for each dNMP from their apparent mobilities under different operating conditions and fit the histograms to Gaussian functions, from which the separation resolution could be deduced as a metric to gauge the ability to identify each molecule based on its apparent mobility. We found that the resolution ranged from 0.73 to 2.13 at pH = 8.3. Raising the carrier electrolyte pH above 10 significantly improved the separation resolution (0.80-4.84) and reduced the standard deviation in the Gaussian fit to the apparent mobilities. At low buffer concentrations, decreases in separation resolution and increased standard deviations in the Gaussian fits to the apparent mobilities of the dNMPs were observed due to the increased thickness of the electric double layer leading to a partial parabolic flow profile. The results obtained for the dNMPs in thermoplastic nanochannels revealed a high identification efficiency (>99%) in most cases for the dNMPs due to differences in their apparent mobilities when using nanochannels, which could not be achieved using microscale columns. Copyright © 2018. Published by Elsevier B.V.
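
    A minimal version of the histogram-fitting step reads as follows: bin the apparent mobilities, fit a Gaussian to each histogram, and compute a peak-resolution metric from the fitted means and widths. The two synthetic mobility populations and the resolution definition Rs = |μ2 - μ1| / (2(σ1 + σ2)) are assumptions; the paper's exact definition may differ.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(x, a, mu, sigma):
        return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

    def fit_mobility_histogram(mobilities, bins=50):
        """Fit a Gaussian to the histogram of single-molecule apparent mobilities."""
        counts, edges = np.histogram(mobilities, bins=bins)
        centers = 0.5 * (edges[:-1] + edges[1:])
        p0 = [counts.max(), mobilities.mean(), mobilities.std()]
        (a, mu, sigma), _ = curve_fit(gaussian, centers, counts, p0=p0)
        return mu, abs(sigma)

    rng = np.random.default_rng(0)
    mu1, s1 = fit_mobility_histogram(rng.normal(1.00, 0.05, 2000))  # hypothetical dNMP 1
    mu2, s2 = fit_mobility_histogram(rng.normal(1.25, 0.06, 2000))  # hypothetical dNMP 2
    resolution = abs(mu2 - mu1) / (2 * (s1 + s2))                   # common peak-resolution metric
    print(round(resolution, 2))
    ```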

  20. A new algorithm to reduce noise in microscopy images implemented with a simple program in python.

    PubMed

    Papini, Alessio

    2012-03-01

    All microscopy images (e.g., from a transmission electron microscope or a light microscope) contain noise, which increases when approaching the resolution limit. Many methods are available to reduce noise; one of the most commonly used is image averaging. We propose here to use the mode of the pixel values. Simple Python programs process a given number of images recorded consecutively from the same subject. The programs calculate the mode of the pixel values at a given position (a, b). The result is a new image containing at (a, b) the mode of the values. Therefore, the final pixel value corresponds to one that was read in at least two of the pixels at position (a, b). Applying the program to a set of images corrupted by salt-and-pepper noise and GIMP hurl noise with 10-90% standard deviation showed that the mode performs better than averaging with three to eight images. The data suggest that the mode would be more efficient (in the sense of a lower number of recorded images to process to reduce noise below a given limit) for a lower number of total noisy pixels and a high standard deviation (as in impulse noise and salt-and-pepper noise), while averaging would be more efficient when the number of varying pixels is high and the standard deviation is low, as in many cases of images affected by Gaussian noise. The two methods may be used serially. Copyright © 2011 Wiley Periodicals, Inc.
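
    A minimal version of the pixel-wise mode in Python (the paper's own programs are not reproduced here) can be written with NumPy as below; ties are resolved to the smallest value by bincount, which is one of several reasonable conventions.

    ```python
    import numpy as np

    def modal_image(frames):
        """Pixel-wise mode of a stack of 8-bit frames recorded from the same subject.
        Whenever a value repeats across frames at a pixel, that value wins; otherwise
        bincount's tie-breaking picks the smallest value."""
        stack = np.stack([np.asarray(f, dtype=np.uint8) for f in frames], axis=0)
        n, h, w = stack.shape
        flat = stack.reshape(n, -1)
        out = np.empty(flat.shape[1], dtype=np.uint8)
        for i in range(flat.shape[1]):               # plain loop keeps the idea explicit
            out[i] = np.bincount(flat[:, i], minlength=256).argmax()
        return out.reshape(h, w)

    # Demo: three copies of a gradient image corrupted by 10% salt-and-pepper noise.
    rng = np.random.default_rng(0)
    clean = np.tile(np.arange(128, dtype=np.uint8), (128, 1))
    frames = []
    for _ in range(3):
        f = clean.copy()
        mask = rng.random(clean.shape) < 0.10
        f[mask] = rng.choice([0, 255], size=mask.sum()).astype(np.uint8)
        frames.append(f)
    print("mean abs error:", np.abs(modal_image(frames).astype(int) - clean.astype(int)).mean())
    ```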

  1. Generic evolution of mixing in heterogeneous media

    NASA Astrophysics Data System (ADS)

    De Dreuzy, J.; Carrera, J.; Dentz, M.; Le Borgne, T.

    2011-12-01

    Mixing in heterogeneous media results from the competition between flow fluctuations and local scale diffusion. Flow fluctuations quickly create concentration contrasts, and thus heterogeneity of the concentration field, which is slowly homogenized by local scale diffusion. Mixing first deviates from Gaussian mixing, which represents the potential mixing induced by spreading, before approaching it. This deviation fundamentally expresses the evolution of the interaction between spreading and local scale diffusion. We characterize it by the ratio γ of the non-Gaussian to the Gaussian mixing states. We define the Gaussian mixing state as the integrated squared concentration of the Gaussian plume that has the same longitudinal dispersion as the real plume. The non-Gaussian mixing state is the difference between the overall mixing state, defined as the integrated squared concentration, and the Gaussian mixing state. The main advantage of this definition is that it uses the full knowledge previously acquired on dispersion to characterize mixing even when the solute concentration field is highly non-Gaussian. Using high precision numerical simulations, we show that γ quickly increases, peaks and slowly decreases. γ can be derived from two scales characterizing spreading and local mixing, at least for large flux-weighted solute injection conditions into classical log-normal, Gaussian-correlated permeability fields. The spreading scale is directly related to the longitudinal dispersion. The local mixing scale is the largest scale over which solute concentrations can be considered locally uniform. More generally, beyond the characteristics of its maximum, γ turns out to have a highly generic scaling form. Its fast increase and slow decrease depend neither on the heterogeneity level, nor on the ratio of diffusion to advection, nor on the injection conditions. They might not even depend on the particularities of the flow fields, as the same generic features also prevail for Taylor dispersion. This generic characterization of mixing can offer new ways to set up transport equations that honor not only advection and spreading (dispersion), but also mixing.

  2. Evaluation of bacterial motility from non-Gaussianity of finite-sample trajectories using the large deviation principle

    NASA Astrophysics Data System (ADS)

    Hanasaki, Itsuo; Kawano, Satoyuki

    2013-11-01

    Motility of bacteria is usually recognized in the trajectory data and compared with Brownian motion, but the diffusion coefficient is insufficient to evaluate it. In this paper, we propose a method based on the large deviation principle. We show that it can be used to evaluate the non-Gaussian characteristics of model Escherichia coli motions and to distinguish combinations of the mean running duration and running speed that lead to the same diffusion coefficient. Our proposed method does not require chemical stimuli to induce the chemotaxis in a specific direction, and it is applicable to various types of self-propelling motions for which no a priori information of, for example, threshold parameters for run and tumble or head/tail direction is available. We also address the issue of the finite-sample effect on the large deviation quantities, but we propose to make use of it to characterize the nature of motility.

  3. Comparing mode-crosstalk and mode-dependent loss of laterally displaced orbital angular momentum and Hermite-Gaussian modes for free-space optical communication.

    PubMed

    Ndagano, Bienvenu; Mphuthi, Nokwazi; Milione, Giovanni; Forbes, Andrew

    2017-10-15

    There is interest in using orbital angular momentum (OAM) modes to increase the data speed of free-space optical communication. A prevalent challenge is the mitigation of mode-crosstalk and mode-dependent loss that is caused by the modes' lateral displacement at the data receiver. Here, the mode-crosstalk and mode-dependent loss of laterally displaced OAM modes (LG 0,+1 , LG 0,-1 ) are experimentally compared to those of a Hermite-Gaussian (HG) mode subset (HG 0,1 , HG 1,0 ). It is shown that, for an aperture larger than the modes' waist sizes, some of the HG modes can experience less mode-crosstalk and mode-dependent loss when laterally displaced along a symmetry axis. It is also shown that, over a normal distribution of lateral displacements whose standard deviation is 2× the modes' waist sizes, the HG modes experience, on average, 66% less mode-crosstalk and 17% less mode-dependent loss.

  4. Optical analysis of cylindrical-parabolic concentrators: validity limits for models of solar disk intensity.

    PubMed

    Nicolás, R O

    1987-09-15

    Different optical analyses of cylindrical-parabolic concentrators were made by utilizing four models of the intensity distribution of the solar disk, i.e., square, uniform, real, and Gaussian. In this paper, the validity conditions for using such distributions are determined by calculating, for each model, the intensity distribution on the receiver plane of perfect and nonperfect cylindrical-parabolic concentrators. We call nonperfect concentrators those in which the normal to each differential element of the specular surface departs from its correct position by an angle whose possible values follow a Gaussian distribution of mean value ε and standard deviation σ(ε). In particular, the results obtained with the models considered for a concentrator with an aperture half-angle of 45 degrees are shown and compared. An important conclusion is that for σ(ε) ≳ 4 mrad, and in some cases for σ(ε) ≳ 2 mrad, the results obtained are practically independent of the model used.

  5. Monte Carlo based toy model for fission process

    NASA Astrophysics Data System (ADS)

    Kurniadi, R.; Waris, A.; Viridi, S.

    2014-09-01

    There are many models and calculation techniques to obtain a visible image of the fission yield process. In particular, fission yield can be calculated by using two calculation approaches, namely a macroscopic approach and a microscopic approach. This work proposes another calculation approach in which the nucleus is treated as a toy model. Hence, the fission process does not represent the real fission process in nature completely. The toy model is formed by a Gaussian distribution of random numbers that randomizes distances, such as the distance between a particle and the central point. The scission process is started by smashing the compound nucleus central point into two parts, a left central point and a right central point. These three points have different Gaussian distribution parameters, namely the means (μCN, μL, μR) and standard deviations (σCN, σL, σR). By overlaying the three distributions, the number of particles (NL, NR) trapped by the central points can be obtained. This process is iterated until (NL, NR) become constant numbers. The smashing process is then repeated by changing σL and σR randomly.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaidheeswaran, Avinash; Shaffer, Franklin; Gopalan, Balaji

    Here, the statistics of fluctuating velocity components are studied in the riser of a closed-loop circulating fluidized bed with fluid catalytic cracking catalyst particles. Our analysis shows distinct similarities as well as deviations compared to existing theories and bench-scale experiments. The study confirms anisotropic and non-Maxwellian distribution of fluctuating velocity components. The velocity distribution functions (VDFs) corresponding to transverse fluctuations exhibit symmetry, and follow a stretched-exponential behavior up to three standard deviations. The form of the transverse VDF is largely determined by interparticle interactions. The tails become more overpopulated with an increase in particle loading. The observed deviations from the Gaussian distribution are represented using the leading order term in the Sonine expansion, which is commonly used to approximate the VDFs in kinetic theory for granular flows. The vertical fluctuating VDFs are asymmetric and the skewness shifts as the wall is approached. In comparison to transverse fluctuations, the vertical VDF is determined by the local hydrodynamics. This is an observation of particle velocity fluctuations in a large-scale system and their quantitative comparison with the Maxwell-Boltzmann statistics.

  7. Stochastic resonance in a piecewise nonlinear model driven by multiplicative non-Gaussian noise and additive white noise

    NASA Astrophysics Data System (ADS)

    Guo, Yongfeng; Shen, Yajun; Tan, Jianguo

    2016-09-01

    The phenomenon of stochastic resonance (SR) in a piecewise nonlinear model driven by a periodic signal and correlated noises for the cases of a multiplicative non-Gaussian noise and an additive Gaussian white noise is investigated. Applying the path integral approach, the unified colored noise approximation and the two-state model theory, the analytical expression of the signal-to-noise ratio (SNR) is derived. It is found that conventional stochastic resonance exists in this system. From numerical computations we obtain that: (i) As a function of the non-Gaussian noise intensity, the SNR is increased when the non-Gaussian noise deviation parameter q is increased. (ii) As a function of the Gaussian noise intensity, the SNR is decreased when q is increased. This demonstrates that the effect of the non-Gaussian noise on SNR is different from that of the Gaussian noise in this system. Moreover, we further discuss the effect of the correlation time of the non-Gaussian noise, cross-correlation strength, the amplitude and frequency of the periodic signal on SR.

  8. Gaussian mixture models as flux prediction method for central receivers

    NASA Astrophysics Data System (ADS)

    Grobler, Annemarie; Gauché, Paul; Smit, Willie

    2016-05-01

    Flux prediction methods are crucial to the design and operation of central receiver systems. Current methods such as the circular and elliptical (bivariate) Gaussian prediction methods are often used in field layout design and aiming strategies. For experimental or small central receiver systems, the flux profile of a single heliostat often deviates significantly from the circular and elliptical Gaussian models. Therefore a novel method of flux prediction was developed by incorporating the fitting of Gaussian mixture models onto flux profiles produced by flux measurement or ray tracing. A method was also developed to predict the Gaussian mixture model parameters of a single heliostat for a given time using image processing. Recording the predicted parameters in a database ensures that more accurate predictions are made in a shorter time frame.
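
    A sketch of the core fitting step is shown below: pixels of a (here synthetic) flux map are resampled with probability proportional to their flux, a Gaussian mixture is fitted with scikit-learn, and the mixture order is chosen by BIC. The resampling workaround, the synthetic map, and the model-selection criterion are assumptions, not necessarily the procedure used in the paper.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(1)

        # synthetic "flux map": an off-axis, slightly astigmatic heliostat image
        ny, nx = 120, 160
        y, x = np.mgrid[0:ny, 0:nx]
        flux = (np.exp(-0.5 * (((x - 70) / 12.0) ** 2 + ((y - 55) / 8.0) ** 2))
                + 0.5 * np.exp(-0.5 * (((x - 95) / 20.0) ** 2 + ((y - 65) / 10.0) ** 2)))

        # draw sample points with probability proportional to the local flux, since
        # GaussianMixture has no per-sample weights
        p = flux.ravel() / flux.sum()
        idx = rng.choice(flux.size, size=20_000, p=p)
        samples = np.column_stack((x.ravel()[idx], y.ravel()[idx])).astype(float)

        # fit mixtures of increasing order and keep the one with the lowest BIC
        fits = [GaussianMixture(n_components=k, random_state=0).fit(samples) for k in (1, 2, 3)]
        best = min(fits, key=lambda g: g.bic(samples))
        print("selected components:", best.n_components)
        print("component means:\n", best.means_.round(1))
        print("component weights:", best.weights_.round(2))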

  9. Non-Gaussian behavior in jamming / unjamming transition in dense granular materials

    NASA Astrophysics Data System (ADS)

    Atman, A. P. F.; Kolb, E.; Combe, G.; Paiva, H. A.; Martins, G. H. B.

    2013-06-01

    Experiments of penetration of a cylindrical intruder inside a bidimensional dense and disordered granular medium were reported recently showing the jamming / unjamming transition. In the present work, we perform molecular dynamics simulations with the same geometry in order to assess both kinematic and static features of the jamming / unjamming transition. We study the statistics of the particle velocities in the neighborhood of the intruder to show that both experiments and simulations present the same qualitative behavior. We observe that the probability density functions (PDFs) of velocities deviate from Gaussian depending on the packing fraction of the granular assembly. In order to quantify these deviations we consider a q-Gaussian (Tsallis) function to fit the PDFs. The q-value can be an indication of the presence of long range correlations along the system. We compare the fitted PDFs with those obtained using a stretched exponential, and sketch some conclusions concerning the nature of the correlations along a granular confined flow.
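
    The q-Gaussian fitting step can be sketched as follows, using the Tsallis form f(v) ∝ [1 + (q-1)βv²]^(-1/(q-1)) for q > 1 and scipy's curve_fit; the parameterization, the heavy-tailed surrogate data, and the initial guesses are assumptions for illustration.

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(2)

        def q_gaussian(v, amp, beta, q):
            # Tsallis form for q > 1; the ordinary Gaussian is recovered as q -> 1
            return amp * (1.0 + (q - 1.0) * beta * v ** 2) ** (-1.0 / (q - 1.0))

        # heavy-tailed surrogate velocity sample (a Student-t variable is q-Gaussian distributed)
        v_samples = rng.standard_t(df=4, size=50_000)
        hist, edges = np.histogram(v_samples, bins=80, density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])

        popt, _ = curve_fit(q_gaussian, centers, hist, p0=(hist.max(), 1.0, 1.3),
                            bounds=([0.0, 1e-6, 1.001], [np.inf, np.inf, 3.0]))
        print(f"fitted q = {popt[2]:.3f}  (q = 1 would indicate a Gaussian PDF)")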

  10. LD-SPatt: large deviations statistics for patterns on Markov chains.

    PubMed

    Nuel, G

    2004-01-01

    Statistics on Markov chains are widely used for the study of patterns in biological sequences. Statistics on these models can be computed through several approaches. Central limit theorem (CLT) approaches producing Gaussian approximations are among the most popular ones. Unfortunately, in order to find a pattern of interest, these methods have to deal with tail distribution events where the CLT approximation is especially bad. In this paper, we propose a new approach based on large deviations theory to assess pattern statistics. We first recall theoretical results for empirical mean (level 1) as well as empirical distribution (level 2) large deviations on Markov chains. Then, we present applications of these results, focusing on numerical issues. LD-SPatt is the name of the GPL software implementing these algorithms. We compare this approach to several existing ones in terms of complexity and reliability and show that the large deviations are more reliable than the Gaussian approximations in absolute values as well as in terms of ranking, and are at least as reliable as compound Poisson approximations. We finally discuss some further possible improvements and applications of this new method.

  11. Langevin equation with fluctuating diffusivity: A two-state model

    NASA Astrophysics Data System (ADS)

    Miyaguchi, Tomoshige; Akimoto, Takuma; Yamamoto, Eiji

    2016-07-01

    Recently, anomalous subdiffusion, aging, and scatter of the diffusion coefficient have been reported in many single-particle-tracking experiments, though the origins of these behaviors are still elusive. Here, as a model to describe such phenomena, we investigate a Langevin equation with diffusivity fluctuating between a fast and a slow state. Namely, the diffusivity follows a dichotomous stochastic process. We assume that the sojourn time distributions of these two states are given by power laws. It is shown that, for a nonequilibrium ensemble, the ensemble-averaged mean-square displacement (MSD) shows transient subdiffusion. In contrast, the time-averaged MSD shows normal diffusion, but an effective diffusion coefficient transiently shows aging behavior. The propagator is non-Gaussian for short time and converges to a Gaussian distribution in a long-time limit; this convergence to Gaussian is extremely slow for some parameter values. For equilibrium ensembles, both ensemble-averaged and time-averaged MSDs show only normal diffusion and thus we cannot detect any traces of the fluctuating diffusivity with these MSDs. Therefore, as an alternative approach to characterizing the fluctuating diffusivity, the relative standard deviation (RSD) of the time-averaged MSD is utilized and it is shown that the RSD exhibits slow relaxation as a signature of the long-time correlation in the fluctuating diffusivity. Furthermore, it is shown that the RSD is related to a non-Gaussian parameter of the propagator. To obtain these theoretical results, we develop a two-state renewal theory as an analytical tool.
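
    A compact simulation sketch of the two-state idea is given below: a Langevin walker switches between a fast and a slow diffusivity, and the relative standard deviation (RSD) of the time-averaged MSD is estimated over an ensemble of trajectories. Exponential sojourn times are used purely to keep the sketch short, whereas the paper assumes power-law sojourn-time distributions; all names and parameter values are illustrative.

        import numpy as np

        rng = np.random.default_rng(3)

        def trajectory(n_steps, dt, d_fast, d_slow, tau):
            # Langevin walker with a dichotomous diffusivity; exponential sojourn times
            # keep the sketch short, whereas the paper uses power-law sojourn times
            x = np.zeros(n_steps + 1)
            d, t, t_switch = d_fast, 0.0, rng.exponential(tau)
            for i in range(n_steps):
                while t >= t_switch:
                    d = d_slow if d == d_fast else d_fast
                    t_switch += rng.exponential(tau)
                x[i + 1] = x[i] + np.sqrt(2.0 * d * dt) * rng.normal()
                t += dt
            return x

        def tamsd(x, lag):
            # time-averaged mean-square displacement of one trajectory at a single lag
            return np.mean((x[lag:] - x[:-lag]) ** 2)

        dt, lag = 0.01, 10
        values = np.array([tamsd(trajectory(5_000, dt, 1.0, 0.01, 5.0), lag)
                           for _ in range(100)])
        rsd = values.std() / values.mean()   # ordinary Brownian motion would give RSD -> 0
        print(f"RSD of the time-averaged MSD at lag {lag * dt:.2f}: {rsd:.2f}")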

  12. SU-E-T-558: Assessing the Effect of Inter-Fractional Motion in Esophageal Sparing Plans.

    PubMed

    Williamson, R; Bluett, J; Niedzielski, J; Liao, Z; Gomez, D; Court, L

    2012-06-01

    To compare esophageal dose distributions in esophageal sparing IMRT plans with predicted dose distributions which include the effect of inter-fraction motion. Seven lung cancer patients were used, each with a standard and an esophageal sparing plan (74Gy, 2Gy fractions). The average max dose to esophagus was 8351cGy and 7758cGy for the standard and sparing plans, respectively. The average length of esophagus for which the total circumference was treated above 60Gy (LETT60) was 9.4cm in the standard plans and 5.8cm in the sparing plans. In order to simulate inter-fractional motion, a three-dimensional rigid shift was applied to the calculated dose field. A simulated course of treatment consisted of a single systematic shift applied throughout the treatment as well as a random shift for each of the 37 fractions. Both systematic and random shifts were generated from Gaussian distributions of 3mm and 5mm standard deviation. Each treatment course was simulated 1000 times to obtain an expected distribution of the delivered dose. The simulated treatment dose received by the esophagus was less than the dose seen in the treatment plan. The average reduction in maximum esophageal dose for the standard plans was 234cGy and 386cGy for the 3mm and 5mm Gaussian distributions, respectively. The average reduction in LETT60 was 0.6cm and 1.7cm, for the 3mm and 5mm distributions respectively. For the esophageal sparing plans, the average reduction in maximum esophageal dose was 94cGy and 202cGy for 3mm and 5mm Gaussian distributions, respectively. The average change in LETT60 for the esophageal sparing plans was smaller, at 0.1cm (increase) and 0.6cm (reduction), for the 3mm and 5mm distributions, respectively. Interfraction motion consistently reduced the maximum doses to the esophagus for both standard and esophageal sparing plans. © 2012 American Association of Physicists in Medicine.
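
    The simulation scheme described above (one systematic shift per course plus an independent random shift per fraction, both Gaussian) can be sketched as follows on a toy dose cube; the dose grid, esophagus mask, voxel size, and use of scipy.ndimage.shift are assumptions standing in for the clinical dose data.

        import numpy as np
        from scipy.ndimage import gaussian_filter, shift as nd_shift

        rng = np.random.default_rng(4)

        # toy 74 Gy dose cube on a 2 mm grid, smoothed to create a penumbra, plus an
        # esophagus-like mask sitting just outside the high-dose edge (all placeholders)
        dose = np.zeros((60, 60, 60))
        dose[20:40, 20:40, 20:40] = 74.0
        dose = gaussian_filter(dose, sigma=2.0)
        esophagus = np.zeros(dose.shape, dtype=bool)
        esophagus[25:35, 41:44, 22:38] = True

        voxel_mm, n_fractions, n_courses, sigma_mm = 2.0, 37, 50, 3.0  # paper also used 5 mm

        max_doses = []
        for _ in range(n_courses):
            systematic = rng.normal(0.0, sigma_mm, 3)          # one systematic shift per course
            accumulated = np.zeros_like(dose)
            for _ in range(n_fractions):
                random_shift = rng.normal(0.0, sigma_mm, 3)    # independent shift per fraction
                shift_vox = (systematic + random_shift) / voxel_mm
                accumulated += nd_shift(dose, shift_vox, order=1) / n_fractions
            max_doses.append(accumulated[esophagus].max())

        print(f"planned maximum esophageal dose: {dose[esophagus].max():5.1f} Gy")
        print(f"simulated mean of maximum dose : {np.mean(max_doses):5.1f} Gy")
        print(f"spread (std dev) over courses  : {np.std(max_doses):5.2f} Gy")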

  13. Evidence for D⁰-D̄⁰ mixing using the CDF II detector.

    PubMed

    Aaltonen, T; Adelman, J; Akimoto, T; Albrow, M G; González, B Alvarez; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Aoki, M; Apollinari, G; Apresyan, A; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Azzi-Bacchetta, P; Azzurri, P; Bacchetta, N; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Baroiant, S; Bartsch, V; Bauer, G; Beauchemin, P-H; Bedeschi, F; Bednar, P; Behari, S; Bellettini, G; Bellinger, J; Belloni, A; Benjamin, D; Beretvas, A; Beringer, J; Berry, T; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bolshov, A; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bridgeman, A; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Cooper, B; Copic, K; Cordelli, M; Cortiana, G; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cully, J C; Dagenhart, D; Datta, M; Davies, T; de Barbaro, P; De Cecco, S; Deisher, A; De Lentdecker, G; De Lorenzo, G; Dell'orso, M; Demortier, L; Deng, J; Deninno, M; De Pedis, D; Derwent, P F; Di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Forrester, S; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garberson, F; Garcia, J E; Garfinkel, A F; Gerberich, H; Gerdes, D; Giagu, S; Giakoumopolou, V; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C M; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Hamilton, A; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, D; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Hays, C; Heck, M; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hewamanage, S; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; Iyutin, B; James, E; Jayatilaka, B; Jeans, D; Jeon, E J; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Kar, D; Karchin, P E; Kato, Y; Kephart, R; Kerzel, U; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Klute, M; Knuteson, B; Ko, B R; Koay, S A; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kraus, J; Kreps, M; Kroll, J; Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhlmann, S E; Kuhr, T; Kulkarni, N P; Kusakabe, Y; Kwang, S; Laasanen, A 
T; Lai, S; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; Lecompte, T; Lee, J; Lee, J; Lee, Y J; Lee, S W; Lefèvre, R; Leonardo, N; Leone, S; Levy, S; Lewis, J D; Lin, C; Lin, C S; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Lovas, L; Lu, R-S; Lucchesi, D; Lueck, J; Luci, C; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; Macqueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Malik, S; Manca, G; Manousakis, A; Margaroli, F; Marino, C; Marino, C P; Martin, A; Martin, M; Martin, V; Martínez, M; Martínez-Ballarín, R; Maruyama, T; Mastrandrea, P; Masubuchi, T; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzemer, S; Menzione, A; Merkel, P; Mesropian, C; Messina, A; Miao, T; Miladinovic, N; Miles, J; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyake, H; Moed, S; Moggi, N; Moon, C S; Moore, R; Morello, M; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Nagano, A; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norman, M; Norniella, O; Nurse, E; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Oldeman, R; Orava, R; Osterberg, K; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Piedra, J; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Portell, X; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Safonov, A; Sakumoto, W K; Salamanna, G; Saltó, O; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, E E; Schmidt, M A; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyria, A; Shalhout, S Z; Shapiro, M D; Shears, T; Shepard, P F; Sherman, D; Shimojima, M; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soderberg, M; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spinella, F; Spreitzer, T; Squillacioti, P; Stanitzki, M; Denis, R St; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Sun, H; Suslov, I; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thompson, G A; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Tourneur, S; Trischuk, W; Tu, Y; Turini, N; Ukegawa, F; Uozumi, S; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Veszpremi, V; Vidal, M; Vidal, R; Vila, I; Vilar, R; Vine, T; Vogel, M; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner-Kuhr, J; Wagner, W; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W C; Whitehouse, B; Whiteson, 
D; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Yagil, A; Yamamoto, K; Yamaoka, J; Yamashita, T; Yang, C; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zheng, Y; Zucchelli, S

    2008-03-28

    We measure the time dependence of the ratio of decay rates for the rare decay D⁰ → K⁺π⁻ to the Cabibbo-favored decay D⁰ → K⁻π⁺. A signal of 12.7×10³ D⁰ → K⁺π⁻ decays was obtained using the Collider Detector at Fermilab II detector at the Fermilab Tevatron with an integrated luminosity of 1.5 fb⁻¹. We measure the D⁰-D̄⁰ mixing parameters (R_D, y′, x′²), and find that the data are inconsistent with the no-mixing hypothesis with a probability equivalent to 3.8 Gaussian standard deviations.

  14. SU-F-P-23: Setup Uncertainties for the Lung Stereotactic Body Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Q; Vigneri, P; Madu, C

    2016-06-15

    Purpose: The ExacTrack X-ray system with six degree-of-freedom (6DoF) adjustment ability can be used for setup of lung stereotactic body radiation therapy. The setup uncertainties from the ExacTrack 6D system were analyzed. Methods: The ExacTrack X-ray 6D image guided radiotherapy system is used in our clinic. The system is an integration of 2 subsystems: (1) an infrared based optical positioning system and (2) a radiographic kV x-ray imaging system. The infrared system monitors reflective body markers on the patient’s skin to assist in the initial setup. The radiographic kV devices were used for patient position verification and adjustment. The position verification was made by fusing the radiographs with the digitally reconstructed radiograph (DRR) images generated from simulation CT images using 6DoF fusion algorithms. Those results were recorded in our system. Gaussian functions were used to fit the data. Results: For 37 lung SBRT patients, the image registration results for the initial setup using surface markers and for the subsequent verifications were measured. The results were analyzed for 143 treatments. The mean values for the lateral, longitudinal, and vertical directions were 0.1, 0.3 and 0.3mm, respectively. The standard deviations for the lateral, longitudinal and vertical directions were 0.62, 0.78 and 0.75mm respectively. The mean values for the rotations around the lateral, longitudinal and vertical directions were 0.1, 0.2 and 0.4 degrees respectively, with standard deviations of 0.36, 0.34, and 0.42 degrees. Conclusion: The setup uncertainties for the lung SBRT cases using the ExacTrack 6D system were analyzed. The standard deviations of the setup errors were within 1mm for all three directions, and the standard deviations for rotations were within 0.5 degree.
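
    The Gaussian fitting of the recorded corrections can be sketched in a few lines; the synthetic shifts below are placeholders drawn with roughly the reported means and standard deviations, and scipy's maximum-likelihood normal fit stands in for whatever fitting routine was actually used.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(5)

        # placeholder setup corrections (mm) for 143 treatments in the three directions;
        # real data would come from the recorded image-fusion results
        shifts = {
            "lateral":      rng.normal(0.1, 0.62, 143),
            "longitudinal": rng.normal(0.3, 0.78, 143),
            "vertical":     rng.normal(0.3, 0.75, 143),
        }

        for axis, data in shifts.items():
            mu, sigma = norm.fit(data)      # maximum-likelihood Gaussian fit
            print(f"{axis:12s}: mean = {mu:+.2f} mm, standard deviation = {sigma:.2f} mm")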

  15. On the Distribution of Protein Refractive Index Increments

    PubMed Central

    Zhao, Huaying; Brown, Patrick H.; Schuck, Peter

    2011-01-01

    The protein refractive index increment, dn/dc, is an important parameter underlying the concentration determination and the biophysical characterization of proteins and protein complexes in many techniques. In this study, we examine the widely used assumption that most proteins have dn/dc values in a very narrow range, and reappraise the prediction of dn/dc of unmodified proteins based on their amino acid composition. Applying this approach in large scale to the entire set of known and predicted human proteins, we obtain, for the first time, to our knowledge, an estimate of the full distribution of protein dn/dc values. The distribution is close to Gaussian with a mean of 0.190 ml/g (for unmodified proteins at 589 nm) and a standard deviation of 0.003 ml/g. However, small proteins <10 kDa exhibit a larger spread, and almost 3000 proteins have values deviating by more than two standard deviations from the mean. Due to the widespread availability of protein sequences and the potential for outliers, the compositional prediction should be convenient and provide greater accuracy than an average consensus value for all proteins. We discuss how this approach should be particularly valuable for certain protein classes where a high dn/dc is coincidental to structural features, or may be functionally relevant such as in proteins of the eye. PMID:21539801

  16. On the distribution of protein refractive index increments.

    PubMed

    Zhao, Huaying; Brown, Patrick H; Schuck, Peter

    2011-05-04

    The protein refractive index increment, dn/dc, is an important parameter underlying the concentration determination and the biophysical characterization of proteins and protein complexes in many techniques. In this study, we examine the widely used assumption that most proteins have dn/dc values in a very narrow range, and reappraise the prediction of dn/dc of unmodified proteins based on their amino acid composition. Applying this approach in large scale to the entire set of known and predicted human proteins, we obtain, for the first time, to our knowledge, an estimate of the full distribution of protein dn/dc values. The distribution is close to Gaussian with a mean of 0.190 ml/g (for unmodified proteins at 589 nm) and a standard deviation of 0.003 ml/g. However, small proteins <10 kDa exhibit a larger spread, and almost 3000 proteins have values deviating by more than two standard deviations from the mean. Due to the widespread availability of protein sequences and the potential for outliers, the compositional prediction should be convenient and provide greater accuracy than an average consensus value for all proteins. We discuss how this approach should be particularly valuable for certain protein classes where a high dn/dc is coincidental to structural features, or may be functionally relevant such as in proteins of the eye. Copyright © 2011 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  17. SU-F-T-158: Experimental Characterization of Field Size Dependence of Dose and Lateral Beam Profiles of Scanning Proton and Carbon Ion Beams for Empirical Model in Air

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; Hsi, W; Zhao, J

    2016-06-15

    Purpose: The Gaussian model for the lateral profiles in air is crucial for an accurate treatment planning system. The field size dependence of dose and the lateral beam profiles of scanning proton and carbon ion beams are due mainly to particles undergoing multiple Coulomb scattering in the beam line components and secondary particles produced by nuclear interactions in the target, both of which depend upon the energy and species of the beam. In this work, lateral profile shape parameters were fitted to measurements of the field-size dependence of dose at the field center in air. Methods: Previous studies have employed empirical fits to measured profile data to significantly reduce the QA time required for measurements. Following this approach to derive the weights and sigmas of lateral profiles in air, empirical model formulations were simulated for three selected energies for both proton and carbon beams. Results: The 20%–80% lateral penumbras predicted by the double Gaussian model for proton and the single Gaussian model for carbon with the error functions agreed with the measurements within 1 mm. The standard deviation between the measured and fitted field-size dependence of dose for the empirical model in air was at most 0.74% for proton with the double Gaussian, and 0.57% for carbon with the single Gaussian. Conclusion: We have demonstrated that the double Gaussian model of lateral beam profiles is significantly better than the single Gaussian model for proton, while a single Gaussian model is sufficient for carbon. The empirical equation may be used to double check the separately obtained model that is currently used by the planning system. The empirical model in air for the dose of spot scanning proton and carbon ion beams cannot be directly used for irregularly shaped patient fields, but can be used to provide reference values for clinical use and quality assurance.
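
    The comparison of single and double Gaussian parameterizations can be sketched as below for one in-air lateral profile: a narrow core plus a low, wide halo is fitted with both models and the residuals are compared. The surrogate profile, parameter names, and halo fraction are assumptions for illustration only.

        import numpy as np
        from scipy.optimize import curve_fit

        def single_gauss(x, a, sigma):
            return a * np.exp(-0.5 * (x / sigma) ** 2)

        def double_gauss(x, a, sigma1, w, sigma2):
            # narrow core plus a low, wide halo from scattering and nuclear interactions
            core = np.exp(-0.5 * (x / sigma1) ** 2)
            halo = np.exp(-0.5 * (x / sigma2) ** 2)
            return a * ((1.0 - w) * core + w * halo)

        # surrogate proton-like profile: 4 mm core with a 3% halo of 12 mm
        x = np.linspace(-40.0, 40.0, 161)
        profile = double_gauss(x, 1.0, 4.0, 0.03, 12.0) \
                  + np.random.default_rng(6).normal(0.0, 2e-4, x.size)

        p1, _ = curve_fit(single_gauss, x, profile, p0=(1.0, 4.0))
        p2, _ = curve_fit(double_gauss, x, profile, p0=(1.0, 4.0, 0.05, 10.0))
        rms1 = np.sqrt(np.mean((profile - single_gauss(x, *p1)) ** 2))
        rms2 = np.sqrt(np.mean((profile - double_gauss(x, *p2)) ** 2))
        print(f"RMS residual: single Gaussian {rms1:.2e}, double Gaussian {rms2:.2e}")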

  18. The modulation transfer function and signal-to-noise ratio of different digital filters: a technical approach.

    PubMed

    Brüllmann, D D; d'Hoedt, B

    2011-05-01

    The aim of this study was to illustrate the influence of digital filters on the signal-to-noise ratio (SNR) and modulation transfer function (MTF) of digital images. The article will address image pre-processing that may be beneficial for the production of clinically useful digital radiographs with lower radiation dose. Three filters, an arithmetic mean filter, a median filter and a Gaussian filter (standard deviation (SD) = 0.4), with kernel sizes of 3 × 3 pixels and 5 × 5 pixels were tested. Synthetic images with exactly increasing amounts of Gaussian noise were created to gather linear regression of SNR before and after application of digital filters. Artificial stripe patterns with defined amounts of line pairs per millimetre were used to calculate MTF before and after the application of the digital filters. The Gaussian filter with a 5 × 5 kernel size caused the highest noise suppression (SNR increased from 2.22, measured in the synthetic image, to 11.31 in the filtered image). The smallest noise reduction was found with the 3 × 3 median filter. The application of the median filters resulted in no changes in MTF at the different resolutions but did result in the deletion of smaller structures. The 5 × 5 Gaussian filter and the 5 × 5 arithmetic mean filter showed the strongest changes of MTF. The application of digital filters can improve the SNR of a digital sensor; however, MTF can be adversely affected. As such, imaging systems should not be judged solely on their quoted spatial resolutions because pre-processing may influence image quality.
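
    The filter comparison can be reproduced in outline with scipy.ndimage, as in the sketch below: a flat synthetic image with additive Gaussian noise is passed through mean, median, and Gaussian filters and a simple SNR (mean over standard deviation of a uniform region) is reported. The noise level and the SNR definition are assumptions, and the Gaussian filter's kernel handling differs from the fixed 3 × 3 / 5 × 5 kernels used in the study.

        import numpy as np
        from scipy.ndimage import gaussian_filter, median_filter, uniform_filter

        rng = np.random.default_rng(7)

        # synthetic flat-field image with additive Gaussian noise
        noisy = 100.0 + rng.normal(0.0, 45.0, (256, 256))

        def snr(img):
            # simple SNR of a nominally uniform region: mean divided by standard deviation
            return img.mean() / img.std()

        filtered = {
            "no filter":           noisy,
            "3x3 arithmetic mean": uniform_filter(noisy, size=3),
            "5x5 arithmetic mean": uniform_filter(noisy, size=5),
            "3x3 median":          median_filter(noisy, size=3),
            "5x5 median":          median_filter(noisy, size=5),
            "Gaussian (SD = 0.4)": gaussian_filter(noisy, sigma=0.4),
        }
        for name, img in filtered.items():
            print(f"{name:22s} SNR = {snr(img):5.2f}")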

  19. Inefficiency in Latin-American market indices

    NASA Astrophysics Data System (ADS)

    Zunino, L.; Tabak, B. M.; Pérez, D. G.; Garavaglia, M.; Rosso, O. A.

    2007-11-01

    We explore the deviations from efficiency in the returns and volatility returns of Latin-American market indices. Two different approaches are considered. The dynamics of the Hurst exponent is obtained via a wavelet rolling sample approach, quantifying the degree of long memory exhibited by the stock market indices under analysis. On the other hand, the Tsallis q entropic index is measured in order to take into account the deviations from the Gaussian hypothesis. Different dynamic rankings of inefficiency are obtained, each of which reflects a different source of inefficiency. Comparing with the results obtained for a developed country (US), we confirm a similar degree of long-range dependence for our emerging markets. Moreover, we show that the inefficiency in the Latin-American countries comes principally from the non-Gaussian form of the probability distributions.

  20. Statistics of velocity fluctuations of Geldart A particles in a circulating fluidized bed riser

    DOE PAGES

    Vaidheeswaran, Avinash; Shaffer, Franklin; Gopalan, Balaji

    2017-11-21

    Here, the statistics of fluctuating velocity components are studied in the riser of a closed-loop circulating fluidized bed with fluid catalytic cracking catalyst particles. Our analysis shows distinct similarities as well as deviations compared to existing theories and bench-scale experiments. The study confirms anisotropic and non-Maxwellian distribution of fluctuating velocity components. The velocity distribution functions (VDFs) corresponding to transverse fluctuations exhibit symmetry, and follow a stretched-exponential behavior up to three standard deviations. The form of the transverse VDF is largely determined by interparticle interactions. The tails become more overpopulated with an increase in particle loading. The observed deviations from the Gaussian distribution are represented using the leading order term in the Sonine expansion, which is commonly used to approximate the VDFs in kinetic theory for granular flows. The vertical fluctuating VDFs are asymmetric and the skewness shifts as the wall is approached. In comparison to transverse fluctuations, the vertical VDF is determined by the local hydrodynamics. This is an observation of particle velocity fluctuations in a large-scale system and their quantitative comparison with the Maxwell-Boltzmann statistics.
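
    The leading-order Sonine correction mentioned above can be written as a Maxwellian multiplied by [1 + a₂S₂(c²)]. The sketch below uses the form S₂(x) = x²/2 − (d+2)x/2 + d(d+2)/8 in the scaled velocity, as commonly quoted in granular kinetic theory; that form, the dimensionality d = 1 for a single velocity component, and the value of a₂ are assumptions, not parameters taken from the paper.

        import numpy as np

        def sonine_corrected_vdf(c, a2, d=1):
            # Maxwellian in the scaled velocity c multiplied by the leading-order Sonine
            # correction [1 + a2 * S2(c^2)], with S2(x) = x^2/2 - (d+2)*x/2 + d*(d+2)/8
            # (assumed form, as commonly written in granular kinetic theory)
            maxwellian = np.pi ** (-d / 2.0) * np.exp(-c ** 2)
            x = c ** 2
            s2 = 0.5 * x ** 2 - 0.5 * (d + 2) * x + d * (d + 2) / 8.0
            return maxwellian * (1.0 + a2 * s2)

        c = np.array([0.0, 1.0, 2.0, 3.0])
        ratio = sonine_corrected_vdf(c, a2=0.1) / sonine_corrected_vdf(c, a2=0.0)
        # a positive a2 overpopulates the tails relative to the Maxwellian
        print("corrected / Maxwellian at c =", c, ":", np.round(ratio, 3))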

  1. Visual photometry: accuracy and precision

    NASA Astrophysics Data System (ADS)

    Whiting, Alan

    2018-01-01

    Visual photometry, estimation by eye of the brightness of stars, remains an important source of data even in the age of widespread precision instruments. However, the eye-brain system differs from electronic detectors and its results may be expected to differ in several respects. I examine a selection of well-observed variables from the AAVSO database to determine several internal characteristics of this data set. Visual estimates scatter around the fitted curves with a standard deviation of 0.14 to 0.34 magnitudes, most clustered in the 0.21-0.25 range. The variation of the scatter does not seem to correlate with color, type of variable, or depth or speed of variation of the star’s brightness. The scatter of an individual observer’s observations changes from star to star, in step with the overall scatter. The shape of the deviations from the fitted curve is non-Gaussian, with positive excess kurtosis (more outlying observations). These results have implications for use of visual data, as well as other citizen science efforts.

  2. Brownian motion under dynamic disorder: effects of memory on the decay of the non-Gaussianity parameter

    NASA Astrophysics Data System (ADS)

    Tyagi, Neha; Cherayil, Binny J.

    2018-03-01

    The increasingly widespread occurrence in complex fluids of particle motion that is both Brownian and non-Gaussian has recently been found to be successfully modeled by a process (frequently referred to as ‘diffusing diffusivity’) in which the white noise that governs Brownian diffusion is itself stochastically modulated by either Ornstein–Uhlenbeck dynamics or by two-state noise. But the model has so far not been able to account for an aspect of non-Gaussian Brownian motion that is also commonly observed: a non-monotonic decay of the parameter that quantifies the extent of deviation from Gaussian behavior. In this paper, we show that the inclusion of memory effects in the model—via a generalized Langevin equation—can rationalise this phenomenon.
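
    The quantity usually tracked in this context is the non-Gaussian parameter, in one dimension α₂(t) = ⟨x⁴⟩/(3⟨x²⟩²) − 1, which vanishes for a Gaussian propagator. The sketch below computes it for a crude 'diffusing diffusivity' surrogate in which the diffusivity is renewed at every step, so α₂ decays as displacements become Gaussian; the surrogate dynamics and all parameter values are assumptions, not the model of the paper.

        import numpy as np

        rng = np.random.default_rng(8)

        def non_gaussian_parameter(x):
            # 1D definition: alpha_2 = <x^4> / (3 <x^2>^2) - 1, zero for a Gaussian
            return np.mean(x ** 4) / (3.0 * np.mean(x ** 2) ** 2) - 1.0

        # crude "diffusing diffusivity" surrogate: the diffusivity is redrawn at every
        # step, so displacements are strongly non-Gaussian at short times and become
        # Gaussian (alpha_2 -> 0) as more steps are summed
        n_particles, dt = 100_000, 1.0
        for n_steps in (1, 10, 50):
            d = rng.exponential(1.0, (n_particles, n_steps))
            steps = rng.normal(0.0, 1.0, (n_particles, n_steps)) * np.sqrt(2.0 * d * dt)
            displacement = steps.sum(axis=1)
            print(f"t = {n_steps:2d} steps: alpha_2 = {non_gaussian_parameter(displacement):.3f}")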

  3. Stochastic uncertainty analysis for unconfined flow systems

    USGS Publications Warehouse

    Liu, Gaisheng; Zhang, Dongxiao; Lu, Zhiming

    2006-01-01

    A new stochastic approach proposed by Zhang and Lu (2004), called the Karhunen-Loeve decomposition-based moment equation (KLME), has been extended to solving nonlinear, unconfined flow problems in randomly heterogeneous aquifers. This approach is based on an innovative combination of Karhunen-Loeve decomposition, polynomial expansion, and perturbation methods. The random log-transformed hydraulic conductivity field ln K_S is first expanded into a series in terms of orthogonal Gaussian standard random variables, with the coefficients obtained from the eigenvalues and eigenfunctions of the covariance function of ln K_S. Next, the head h is decomposed as a perturbation expansion series Σ_m h^(m), where h^(m) represents the mth-order head term with respect to the standard deviation of ln K_S. Then h^(m) is further expanded into a polynomial series of m products of orthogonal Gaussian standard random variables whose coefficients h^(m)_{i1,i2,...,im} are deterministic and solved sequentially from low to high expansion orders using MODFLOW-2000. Finally, the statistics of head and flux are computed using simple algebraic operations on the h^(m)_{i1,i2,...,im}. A series of numerical test results in 2-D and 3-D unconfined flow systems indicated that the KLME approach is effective in estimating the mean and (co)variance of both heads and fluxes and requires much less computational effort as compared to the traditional Monte Carlo simulation technique.

  4. Effect of inhomogeneous Schottky barrier height of SnO2 nanowires device

    NASA Astrophysics Data System (ADS)

    Amorim, Cleber A.; Bernardo, Eric P.; Leite, Edson R.; Chiquito, Adenilson J.

    2018-05-01

    The current–voltage (I–V) characteristics of the metal–semiconductor junction (Au–Ni/SnO2/Au–Ni) Schottky barrier in SnO2 nanowires were investigated over a wide temperature range. By using the Schottky–Mott model, the zero bias barrier height Φ_B was estimated from the I–V characteristics, and it was found to increase with increasing temperature; on the other hand, the ideality factor (n) was found to decrease with increasing temperature. The variation in the Schottky barrier and n was attributed to the spatial inhomogeneity of the Schottky barrier height. The experimental I–V characteristics exhibited a Gaussian distribution of barrier heights with mean barrier height Φ̄_B of 0.30 eV and standard deviation σ_s of 60 meV. Additionally, the modified Richardson constant was obtained as 70 A cm⁻² K⁻², leading to an effective mass of 0.58m₀. Consequently, the temperature dependence of the I–V characteristics of the SnO2 nanowire devices can be successfully explained within the Schottky–Mott theory framework, taking into account a Gaussian distribution of barrier heights.
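
    Analyses of this kind commonly extract the mean barrier and its spread from the relation Φ_ap(T) = Φ̄_B − σ_s²/(2kT) (all quantities in eV), i.e., a straight line of the apparent barrier height against 1/(2kT). The sketch below fits that line to synthetic data generated with the values quoted in the abstract; the temperature grid, noise level, and fitting routine are assumptions.

        import numpy as np

        rng = np.random.default_rng(9)
        K_B = 8.617e-5                          # Boltzmann constant, eV/K
        PHI_MEAN, SIGMA_S = 0.30, 0.060         # eV, values quoted in the abstract

        # synthetic apparent barrier heights following Phi_ap(T) = Phi_mean - sigma_s^2/(2kT)
        temps = np.arange(120.0, 401.0, 20.0)
        phi_ap = PHI_MEAN - SIGMA_S ** 2 / (2.0 * K_B * temps) \
                 + rng.normal(0.0, 1e-3, temps.size)

        # straight line of Phi_ap against 1/(2kT): intercept = mean barrier, slope = -sigma_s^2
        x = 1.0 / (2.0 * K_B * temps)
        slope, intercept = np.polyfit(x, phi_ap, 1)
        print(f"mean barrier height = {intercept:.3f} eV")
        print(f"sigma_s             = {1e3 * np.sqrt(-slope):.1f} meV")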

  5. The determination of modified barrier heights in Ti/GaN nano-Schottky diodes at high temperature.

    PubMed

    Lee, Seung-Yong; Kim, Tae-Hong; Chol, Nam-Kyu; Seong, Han-Kyu; Choi, Heon-Jin; Ahn, Byung-Guk; Lee, Sang-Kwon

    2008-10-01

    We have investigated the size-effect of the nano-Schottky diodes on the electrical transport properties and the temperature-dependent current transport mechanism in a metal-semiconductor nanowire junction (a Ti/GaN nano-Schottky diode) using current-voltage characterization in the range of 300-423 K. We found that the modified mean Schottky barrier height (SBH) was approximately 0.7 eV with a standard deviation of approximately 0.14 V using a Gaussian distribution model of the barrier heights. The slightly high value of the modified mean SBH (approximately 0.11 eV) compared to the results from the thin-film based Ti/GaN Schottky diodes could be due to an additional oxide layer at the interface between the Ti and GaN nanowires. Moreover, we found that the abnormal behavior of the barrier heights and the ideality factors in a Ti/GaN nano-Schottky diode at a temperature below 423 K could be explained by a combination of the enhancement of the tunneling current and a model with a Gaussian distribution of the barrier heights.

  6. Steady-state distributions of probability fluxes on complex networks

    NASA Astrophysics Data System (ADS)

    Chełminiak, Przemysław; Kurzyński, Michał

    2017-02-01

    We consider a simple model of the Markovian stochastic dynamics on complex networks to examine the statistical properties of the probability fluxes. The additional transition, called hereafter a gate, powered by the external constant force breaks a detailed balance in the network. We argue, using a theoretical approach and numerical simulations, that the stationary distributions of the probability fluxes emergent under such conditions converge to the Gaussian distribution. By virtue of the stationary fluctuation theorem, its standard deviation depends directly on the square root of the mean flux. In turn, the nonlinear relation between the mean flux and the external force, which provides the key result of the present study, allows us to calculate the two parameters that entirely characterize the Gaussian distribution of the probability fluxes both close to as well as far from the equilibrium state. Also, the other effects that modify these parameters, such as the addition of shortcuts to the tree-like network, the extension and configuration of the gate and a change in the network size studied by means of computer simulations are widely discussed in terms of the rigorous theoretical predictions.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wulff, J; Huggins, A

    Purpose: The shape of a single beam in proton PBS influences the resulting dose distribution. Spot profiles are modelled as two-dimensional Gaussian (single/double) distributions in treatment planning systems (TPS). The impact of slight deviations from an ideal Gaussian on the resulting dose distributions is typically assumed to be small due to alleviation by multiple Coulomb scattering (MCS) in tissue and superposition of many spots. Quantitative limits are however not clear per se. Methods: A set of 1250 deliberately deformed profiles with sigma=4 mm for a Gaussian fit was constructed. Profiles and fit were normalized to the same area, resembling output calibration in the TPS. Depth-dependent MCS was considered. The deviation between deformed and ideal profiles was characterized by the root-mean-squared deviation (RMSD), skewness/kurtosis (SK) and the full width at different percentages of maximum (FWxM). The profiles were convolved with different fluence patterns (regular/random) resulting in hypothetical dose distributions. The resulting deviations were analyzed by applying a gamma-test. Results were compared to measured spot profiles. Results: A clear correlation between pass-rate and profile metrics could be determined. The largest impact occurred for a regular fluence pattern with increasing distance between single spots, followed by a random distribution of spot weights. The results are strongly dependent on the gamma-analysis dose and distance levels. Pass-rates of >95% at 2%/2 mm and 40 mm depth (=70 MeV) could only be achieved for RMSD<10%, deviation in FWxM at 20% and root of quadratic sum of SK <0.8. As expected, the results improve for larger depths. The trends were well reproduced for measured spot profiles. Conclusion: All measured profiles from ProBeam sites passed the criteria. Given the fact that beam-line tuning can result in shape distortions, the derived criteria represent a useful QA tool for commissioning and design of future beam-line optics.
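
    Two of the profile metrics named above can be sketched directly: the RMS deviation of a deformed profile from its area-normalized Gaussian fit, and the full width at a given percentage of maximum (FWxM). The synthetic 4 mm profile with a small shoulder, the normalization convention, and the grid-limited width estimate are assumptions; the gamma analysis itself is omitted.

        import numpy as np
        from scipy.optimize import curve_fit

        def gauss(x, a, mu, sigma):
            return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

        def fwxm(x, y, fraction):
            # full width at `fraction` of the profile maximum (grid-resolution limited)
            above = x[y >= fraction * y.max()]
            return above[-1] - above[0]

        # deliberately deformed spot: nominal 4 mm sigma plus a small off-axis shoulder
        x = np.linspace(-20.0, 20.0, 401)
        dx = x[1] - x[0]
        profile = gauss(x, 1.0, 0.0, 4.0) + 0.05 * gauss(x, 1.0, 6.0, 3.0)
        profile /= profile.sum() * dx              # unit area

        popt, _ = curve_fit(gauss, x, profile, p0=(profile.max(), 0.0, 4.0))
        fit = gauss(x, *popt)
        fit /= fit.sum() * dx                      # same-area normalization, as in the TPS

        rmsd = np.sqrt(np.mean((profile - fit) ** 2)) / profile.max()
        print(f"RMSD relative to peak: {100.0 * rmsd:.1f} %")
        for frac in (0.8, 0.5, 0.2):
            print(f"FW at {int(100 * frac):2d}% of max: profile {fwxm(x, profile, frac):5.2f} mm, "
                  f"Gaussian fit {fwxm(x, fit, frac):5.2f} mm")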

  8. Search for new phenomena with photon+jet events in proton-proton collisions at √s = 13 TeV with the ATLAS detector

    DOE PAGES

    Aad, G.; Abbott, B.; Abdallah, J.; ...

    2016-03-08

    A search is performed for the production of high-mass resonances decaying into a photon and a jet in 3.2 fb⁻¹ of proton-proton collisions at a centre-of-mass energy of √s = 13 TeV collected by the ATLAS detector at the Large Hadron Collider. Selected events have an isolated photon and a jet, each with transverse momentum above 150 GeV. No significant deviation of the γ+jet invariant mass distribution from the background-only hypothesis is found. Limits are set at 95% confidence level on the cross sections of generic Gaussian-shaped signals and of a few benchmark phenomena beyond the Standard Model: excited quarks with vector-like couplings to the Standard Model particles, and non-thermal quantum black holes in two models of extra spatial dimensions. The minimum excluded visible cross sections for Gaussian-shaped resonances with width-to-mass ratios of 2% decrease from about 6 fb for a mass of 1.5 TeV to about 0.8 fb for a mass of 5 TeV. The minimum excluded visible cross sections for Gaussian-shaped resonances with width-to-mass ratios of 15% decrease from about 50 fb for a mass of 1.5 TeV to about 1.0 fb for a mass of 5 TeV. As a result, excited quarks are excluded below masses of 4.4 TeV, and non-thermal quantum black holes are excluded below masses of 3.8 (6.2) TeV for Randall-Sundrum (Arkani-Hamed-Dimopoulos-Dvali) models with one (six) extra dimensions.

  9. Empirical Model of Precipitating Ion Oval

    NASA Astrophysics Data System (ADS)

    Goldstein, Jerry

    2017-10-01

    In this brief technical report, published maps of ion integral flux are used to constrain an empirical model of the precipitating ion oval. The ion oval is modeled as a Gaussian function of ionospheric latitude that depends on local time and the Kp geomagnetic index. The three parameters defining this function are the centroid latitude, width, and amplitude. The local time dependences of these three parameters are approximated by Fourier series expansions whose coefficients are constrained by the published ion maps. The Kp dependence of each coefficient is modeled by a linear fit. Optimization of the number of terms in the expansion is achieved via minimization of the global standard deviation between the model and the published ion map at each Kp. The empirical model is valid near the peak flux of the auroral oval; inside its centroid region the model reproduces the published ion maps with standard deviations of less than 5% of the peak integral flux. On the subglobal scale, average local errors (measured as a fraction of the point-to-point integral flux) are below 30% in the centroid region. Outside its centroid region the model deviates significantly from the H89 integral flux maps. The model's performance is assessed by comparing it with both local and global data from a 17 April 2002 substorm event. The model can reproduce important features of the macroscale auroral region but none of its subglobal structure, and not immediately following a substorm.
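
    A model of this general form can be evaluated as in the sketch below: each Gaussian parameter (centroid latitude, width, amplitude) is a Fourier series in magnetic local time whose coefficients vary linearly with Kp. All coefficients shown are hypothetical placeholders; the actual fitted values are given in the paper, not here.

        import numpy as np

        def fourier_series(mlt_hours, coeffs):
            # coeffs = [a0, a1, b1, a2, b2, ...]; MLT enters as an angle with a 24 h period
            phi = 2.0 * np.pi * mlt_hours / 24.0
            out = np.full_like(phi, coeffs[0], dtype=float)
            for n, (a, b) in enumerate(zip(coeffs[1::2], coeffs[2::2]), start=1):
                out += a * np.cos(n * phi) + b * np.sin(n * phi)
            return out

        def ion_oval_flux(mlat_deg, mlt_hours, kp, params):
            # each Gaussian parameter is a Fourier series in MLT whose coefficients
            # vary linearly with Kp:  c = c0 + c1 * Kp
            evaluate = lambda key: fourier_series(mlt_hours, params[key][0] + kp * params[key][1])
            centroid, width, amplitude = evaluate("centroid"), evaluate("width"), evaluate("amplitude")
            return amplitude * np.exp(-0.5 * ((mlat_deg - centroid) / width) ** 2)

        # hypothetical placeholder coefficients: [constant, cos(1), sin(1)] terms, Kp-linear
        params = {
            "centroid":  (np.array([67.0, 1.5, -0.5]), np.array([-1.0, 0.0, 0.0])),
            "width":     (np.array([3.0, 0.5, 0.2]),   np.array([0.3, 0.0, 0.0])),
            "amplitude": (np.array([1.0, 0.3, -0.1]),  np.array([0.2, 0.0, 0.0])),
        }
        mlat = np.linspace(50.0, 80.0, 7)
        print(ion_oval_flux(mlat, mlt_hours=np.full_like(mlat, 22.0), kp=4, params=params).round(3))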

  10. Uncovering the single top: observation of electroweak top quark production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benitez, Jorge Armando

    2009-01-01

    The top quark is generally produced in quark and anti-quark pairs. However, the Standard Model also predicts the production of only one top quark, which is mediated by the electroweak interaction, known as 'Single Top'. Single Top quark production is important because it provides a unique and direct way to measure the CKM matrix element V_tb, and can be used to explore physics possibilities beyond the Standard Model predictions. This dissertation presents the results of the observation of Single Top using 2.3 fb⁻¹ of data collected with the D0 detector at the Fermilab Tevatron collider. The analysis includes the Single Top muon+jets and electron+jets final states and employs Boosted Decision Trees as a method to separate the signal from the background. The resulting Single Top cross section measurement is: (1) σ(pp̄ → tb + X, tqb + X) = 3.74 +0.95/-0.74 pb, where the errors include both statistical and systematic uncertainties. The probability to measure a cross section at this value or higher in the absence of signal is p = 1.9 × 10⁻⁶. This corresponds to a Gaussian-equivalent significance of 4.6 standard deviations. When combining this result with two other analysis methods, the resulting cross section measurement is: (2) σ(pp̄ → tb + X, tqb + X) = 3.94 ± 0.88 pb, and the corresponding measurement significance is 5.0 standard deviations.

  11. Anomalous optogalvanic line shapes of argon metastable transitions in a hollow cathode lamp

    NASA Technical Reports Server (NTRS)

    Ruyten, W. M.

    1993-01-01

    Anomalous optogalvanic line shapes were observed in a commercial hollow cathode lamp containing argon buffer gas. Deviations from Gaussian line shapes were particularly strong for transitions originating from the 3P2 metastable level of argon. The anomalous line shapes can be described reasonably well by the assumption that two regions in the discharge are excited simultaneously, each giving rise to a purely Gaussian line shape, but with different polarities, amplitudes, and linewidths.

  12. Electric dipole moment of the deuteron in the standard model with NN - ΛN - ΣN coupling

    NASA Astrophysics Data System (ADS)

    Yamanaka, Nodoka

    2017-07-01

    We calculate the electric dipole moment (EDM) of the deuteron in the standard model with | ΔS | = 1 interactions by taking into account the NN - ΛN - ΣN channel coupling, which is an important nuclear level systematics. The two-body problem is solved with the Gaussian Expansion Method using the realistic Argonne v18 nuclear force and the YN potential which can reproduce the binding energies of Λ3H, Λ3He, and Λ4He. The | ΔS | = 1 interbaryon potential is modeled by the one-meson exchange process. It is found that the deuteron EDM is modified by less than 10%, and the main contribution to this deviation is due to the polarization of the hyperon-nucleon channels. The effect of the YN interaction is small, and treating ΛN and ΣN channels as free is a good approximation for the EDM of the deuteron.

  13. Measurement of Form-Factor-Independent Observables in the Decay B0→K*0μ+μ-

    NASA Astrophysics Data System (ADS)

    Aaij, R.; Adeva, B.; Adinolfi, M.; Adrover, C.; Affolder, A.; Ajaltouni, Z.; Albrecht, J.; Alessio, F.; Alexander, M.; Ali, S.; Alkhazov, G.; Alvarez Cartelle, P.; Alves, A. A., Jr.; Amato, S.; Amerio, S.; Amhis, Y.; Anderlini, L.; Anderson, J.; Andreassen, R.; Andrews, J. E.; Appleby, R. B.; Aquines Gutierrez, O.; Archilli, F.; Artamonov, A.; Artuso, M.; Aslanides, E.; Auriemma, G.; Baalouch, M.; Bachmann, S.; Back, J. J.; Baesso, C.; Balagura, V.; Baldini, W.; Barlow, R. J.; Barschel, C.; Barsuk, S.; Barter, W.; Bauer, Th.; Bay, A.; Beddow, J.; Bedeschi, F.; Bediaga, I.; Belogurov, S.; Belous, K.; Belyaev, I.; Ben-Haim, E.; Bencivenni, G.; Benson, S.; Benton, J.; Berezhnoy, A.; Bernet, R.; Bettler, M.-O.; van Beuzekom, M.; Bien, A.; Bifani, S.; Bird, T.; Bizzeti, A.; Bjørnstad, P. M.; Blake, T.; Blanc, F.; Blouw, J.; Blusk, S.; Bocci, V.; Bondar, A.; Bondar, N.; Bonivento, W.; Borghi, S.; Borgia, A.; Bowcock, T. J. V.; Bowen, E.; Bozzi, C.; Brambach, T.; van den Brand, J.; Bressieux, J.; Brett, D.; Britsch, M.; Britton, T.; Brook, N. H.; Brown, H.; Burducea, I.; Bursche, A.; Busetto, G.; Buytaert, J.; Cadeddu, S.; Callot, O.; Calvi, M.; Calvo Gomez, M.; Camboni, A.; Campana, P.; Campora Perez, D.; Carbone, A.; Carboni, G.; Cardinale, R.; Cardini, A.; Carranza-Mejia, H.; Carson, L.; Carvalho Akiba, K.; Casse, G.; Castillo Garcia, L.; Cattaneo, M.; Cauet, Ch.; Cenci, R.; Charles, M.; Charpentier, Ph.; Chen, P.; Chiapolini, N.; Chrzaszcz, M.; Ciba, K.; Cid Vidal, X.; Ciezarek, G.; Clarke, P. E. L.; Clemencic, M.; Cliff, H. V.; Closier, J.; Coca, C.; Coco, V.; Cogan, J.; Cogneras, E.; Collins, P.; Comerma-Montells, A.; Contu, A.; Cook, A.; Coombes, M.; Coquereau, S.; Corti, G.; Couturier, B.; Cowan, G. A.; Craik, D. C.; Cunliffe, S.; Currie, R.; D'Ambrosio, C.; David, P.; David, P. N. Y.; Davis, A.; De Bonis, I.; De Bruyn, K.; De Capua, S.; De Cian, M.; De Miranda, J. M.; De Paula, L.; De Silva, W.; De Simone, P.; Decamp, D.; Deckenhoff, M.; Del Buono, L.; Déléage, N.; Derkach, D.; Deschamps, O.; Dettori, F.; Di Canto, A.; Dijkstra, H.; Dogaru, M.; Donleavy, S.; Dordei, F.; Dosil Suárez, A.; Dossett, D.; Dovbnya, A.; Dupertuis, F.; Durante, P.; Dzhelyadin, R.; Dziurda, A.; Dzyuba, A.; Easo, S.; Egede, U.; Egorychev, V.; Eidelman, S.; van Eijk, D.; Eisenhardt, S.; Eitschberger, U.; Ekelhof, R.; Eklund, L.; El Rifai, I.; Elsasser, Ch.; Falabella, A.; Färber, C.; Fardell, G.; Farinelli, C.; Farry, S.; Ferguson, D.; Fernandez Albor, V.; Ferreira Rodrigues, F.; Ferro-Luzzi, M.; Filippov, S.; Fiore, M.; Fitzpatrick, C.; Fontana, M.; Fontanelli, F.; Forty, R.; Francisco, O.; Frank, M.; Frei, C.; Frosini, M.; Furcas, S.; Furfaro, E.; Gallas Torreira, A.; Galli, D.; Gandelman, M.; Gandini, P.; Gao, Y.; Garofoli, J.; Garosi, P.; Garra Tico, J.; Garrido, L.; Gaspar, C.; Gauld, R.; Gersabeck, E.; Gersabeck, M.; Gershon, T.; Ghez, Ph.; Gibson, V.; Giubega, L.; Gligorov, V. V.; Göbel, C.; Golubkov, D.; Golutvin, A.; Gomes, A.; Gorbounov, P.; Gordon, H.; Gotti, C.; Grabalosa Gándara, M.; Graciani Diaz, R.; Granado Cardoso, L. A.; Graugés, E.; Graziani, G.; Grecu, A.; Greening, E.; Gregson, S.; Griffith, P.; Grünberg, O.; Gui, B.; Gushchin, E.; Guz, Yu.; Gys, T.; Hadjivasiliou, C.; Haefeli, G.; Haen, C.; Haines, S. C.; Hall, S.; Hamilton, B.; Hampson, T.; Hansmann-Menzemer, S.; Harnew, N.; Harnew, S. T.; Harrison, J.; Hartmann, T.; He, J.; Head, T.; Heijne, V.; Hennessy, K.; Henrard, P.; Hernando Morata, J. 
A.; van Herwijnen, E.; Hess, M.; Hicheur, A.; Hicks, E.; Hill, D.; Hoballah, M.; Hombach, C.; Hopchev, P.; Hulsbergen, W.; Hunt, P.; Huse, T.; Hussain, N.; Hutchcroft, D.; Hynds, D.; Iakovenko, V.; Idzik, M.; Ilten, P.; Jacobsson, R.; Jaeger, A.; Jans, E.; Jaton, P.; Jawahery, A.; Jing, F.; John, M.; Johnson, D.; Jones, C. R.; Joram, C.; Jost, B.; Kaballo, M.; Kandybei, S.; Kanso, W.; Karacson, M.; Karbach, T. M.; Kenyon, I. R.; Ketel, T.; Keune, A.; Khanji, B.; Kochebina, O.; Komarov, I.; Koopman, R. F.; Koppenburg, P.; Korolev, M.; Kozlinskiy, A.; Kravchuk, L.; Kreplin, K.; Kreps, M.; Krocker, G.; Krokovny, P.; Kruse, F.; Kucharczyk, M.; Kudryavtsev, V.; Kurek, K.; Kvaratskheliya, T.; La Thi, V. N.; Lacarrere, D.; Lafferty, G.; Lai, A.; Lambert, D.; Lambert, R. W.; Lanciotti, E.; Lanfranchi, G.; Langenbruch, C.; Latham, T.; Lazzeroni, C.; Le Gac, R.; van Leerdam, J.; Lees, J.-P.; Lefèvre, R.; Leflat, A.; Lefrançois, J.; Leo, S.; Leroy, O.; Lesiak, T.; Leverington, B.; Li, Y.; Li Gioi, L.; Liles, M.; Lindner, R.; Linn, C.; Liu, B.; Liu, G.; Lohn, S.; Longstaff, I.; Lopes, J. H.; Lopez-March, N.; Lu, H.; Lucchesi, D.; Luisier, J.; Luo, H.; Machefert, F.; Machikhiliyan, I. V.; Maciuc, F.; Maev, O.; Malde, S.; Manca, G.; Mancinelli, G.; Maratas, J.; Marconi, U.; Marino, P.; Märki, R.; Marks, J.; Martellotti, G.; Martens, A.; Martín Sánchez, A.; Martinelli, M.; Martinez Santos, D.; Martins Tostes, D.; Martynov, A.; Massafferri, A.; Matev, R.; Mathe, Z.; Matteuzzi, C.; Maurice, E.; Mazurov, A.; McCarthy, J.; McNab, A.; McNulty, R.; McSkelly, B.; Meadows, B.; Meier, F.; Meissner, M.; Merk, M.; Milanes, D. A.; Minard, M.-N.; Molina Rodriguez, J.; Monteil, S.; Moran, D.; Morawski, P.; Mordà, A.; Morello, M. J.; Mountain, R.; Mous, I.; Muheim, F.; Müller, K.; Muresan, R.; Muryn, B.; Muster, B.; Naik, P.; Nakada, T.; Nandakumar, R.; Nasteva, I.; Needham, M.; Neubert, S.; Neufeld, N.; Nguyen, A. D.; Nguyen, T. D.; Nguyen-Mau, C.; Nicol, M.; Niess, V.; Niet, R.; Nikitin, N.; Nikodem, T.; Nomerotski, A.; Novoselov, A.; Oblakowska-Mucha, A.; Obraztsov, V.; Oggero, S.; Ogilvy, S.; Okhrimenko, O.; Oldeman, R.; Orlandea, M.; Otalora Goicochea, J. M.; Owen, P.; Oyanguren, A.; Pal, B. K.; Palano, A.; Palczewski, T.; Palutan, M.; Panman, J.; Papanestis, A.; Pappagallo, M.; Parkes, C.; Parkinson, C. J.; Passaleva, G.; Patel, G. D.; Patel, M.; Patrick, G. N.; Patrignani, C.; Pavel-Nicorescu, C.; Pazos Alvarez, A.; Pellegrino, A.; Penso, G.; Pepe Altarelli, M.; Perazzini, S.; Perez Trigo, E.; Pérez-Calero Yzquierdo, A.; Perret, P.; Perrin-Terrin, M.; Pescatore, L.; Pesen, E.; Petridis, K.; Petrolini, A.; Phan, A.; Picatoste Olloqui, E.; Pietrzyk, B.; Pilař, T.; Pinci, D.; Playfer, S.; Plo Casasus, M.; Polci, F.; Polok, G.; Poluektov, A.; Polycarpo, E.; Popov, A.; Popov, D.; Popovici, B.; Potterat, C.; Powell, A.; Prisciandaro, J.; Pritchard, A.; Prouve, C.; Pugatch, V.; Puig Navarro, A.; Punzi, G.; Qian, W.; Rademacker, J. H.; Rakotomiaramanana, B.; Rangel, M. S.; Raniuk, I.; Rauschmayr, N.; Raven, G.; Redford, S.; Reid, M. M.; dos Reis, A. C.; Ricciardi, S.; Richards, A.; Rinnert, K.; Rives Molina, V.; Roa Romero, D. A.; Robbe, P.; Roberts, D. A.; Rodrigues, E.; Rodriguez Perez, P.; Roiser, S.; Romanovsky, V.; Romero Vidal, A.; Rouvinet, J.; Ruf, T.; Ruffini, F.; Ruiz, H.; Ruiz Valls, P.; Sabatino, G.; Saborido Silva, J. 
J.; Sagidova, N.; Sail, P.; Saitta, B.; Salustino Guimaraes, V.; Sanmartin Sedes, B.; Sannino, M.; Santacesaria, R.; Santamarina Rios, C.; Santovetti, E.; Sapunov, M.; Sarti, A.; Satriano, C.; Satta, A.; Savrie, M.; Savrina, D.; Schaack, P.; Schiller, M.; Schindler, H.; Schlupp, M.; Schmelling, M.; Schmidt, B.; Schneider, O.; Schopper, A.; Schune, M.-H.; Schwemmer, R.; Sciascia, B.; Sciubba, A.; Seco, M.; Semennikov, A.; Senderowska, K.; Sepp, I.; Serra, N.; Serrano, J.; Seyfert, P.; Shapkin, M.; Shapoval, I.; Shatalov, P.; Shcheglov, Y.; Shears, T.; Shekhtman, L.; Shevchenko, O.; Shevchenko, V.; Shires, A.; Silva Coutinho, R.; Sirendi, M.; Skwarnicki, T.; Smith, N. A.; Smith, E.; Smith, J.; Smith, M.; Sokoloff, M. D.; Soler, F. J. P.; Soomro, F.; Souza, D.; Souza De Paula, B.; Spaan, B.; Sparkes, A.; Spradlin, P.; Stagni, F.; Stahl, S.; Steinkamp, O.; Stevenson, S.; Stoica, S.; Stone, S.; Storaci, B.; Straticiuc, M.; Straumann, U.; Subbiah, V. K.; Sun, L.; Swientek, S.; Syropoulos, V.; Szczekowski, M.; Szczypka, P.; Szumlak, T.; T'Jampens, S.; Teklishyn, M.; Teodorescu, E.; Teubert, F.; Thomas, C.; Thomas, E.; van Tilburg, J.; Tisserand, V.; Tobin, M.; Tolk, S.; Tonelli, D.; Topp-Joergensen, S.; Torr, N.; Tournefier, E.; Tourneur, S.; Tran, M. T.; Tresch, M.; Tsaregorodtsev, A.; Tsopelas, P.; Tuning, N.; Ubeda Garcia, M.; Ukleja, A.; Urner, D.; Ustyuzhanin, A.; Uwer, U.; Vagnoni, V.; Valenti, G.; Vallier, A.; Van Dijk, M.; Vazquez Gomez, R.; Vazquez Regueiro, P.; Vázquez Sierra, C.; Vecchi, S.; Velthuis, J. J.; Veltri, M.; Veneziano, G.; Vesterinen, M.; Viaud, B.; Vieira, D.; Vilasis-Cardona, X.; Vollhardt, A.; Volyanskyy, D.; Voong, D.; Vorobyev, A.; Vorobyev, V.; Voß, C.; Voss, H.; Waldi, R.; Wallace, C.; Wallace, R.; Wandernoth, S.; Wang, J.; Ward, D. R.; Watson, N. K.; Webber, A. D.; Websdale, D.; Whitehead, M.; Wicht, J.; Wiechczynski, J.; Wiedner, D.; Wiggers, L.; Wilkinson, G.; Williams, M. P.; Williams, M.; Wilson, F. F.; Wimberley, J.; Wishahi, J.; Wislicki, W.; Witek, M.; Wotton, S. A.; Wright, S.; Wu, S.; Wyllie, K.; Xie, Y.; Xing, Z.; Yang, Z.; Young, R.; Yuan, X.; Yushchenko, O.; Zangoli, M.; Zavertyaev, M.; Zhang, F.; Zhang, L.; Zhang, W. C.; Zhang, Y.; Zhelezov, A.; Zhokhov, A.; Zhong, L.; Zvyagin, A.

    2013-11-01

    We present a measurement of form-factor-independent angular observables in the decay B⁰→K*(892)⁰μ⁺μ⁻. The analysis is based on a data sample corresponding to an integrated luminosity of 1.0 fb⁻¹, collected by the LHCb experiment in pp collisions at a center-of-mass energy of 7 TeV. Four observables are measured in six bins of the dimuon invariant mass squared q² in the range 0.1

  14. Finite-Difference Modeling of Seismic Wave Scattering in 3D Heterogeneous Media: Generation of Tangential Motion from an Explosion Source

    NASA Astrophysics Data System (ADS)

    Hirakawa, E. T.; Pitarka, A.; Mellors, R. J.

    2015-12-01

    One challenging task in explosion seismology is the development of physical models for explaining the generation of S-waves during underground explosions. Pitarka et al. (2015) used finite difference simulations of SPE-3 (part of the Source Physics Experiment, SPE, an ongoing series of underground chemical explosions at the Nevada National Security Site) and found that while a large component of shear motion was generated directly at the source, additional scattering from heterogeneous velocity structure and topography is necessary to better match the data. Large-scale features in the velocity model used in the SPE simulations are well constrained; however, small-scale heterogeneity is poorly constrained. In our study we used a stochastic representation of small-scale variability in order to produce additional high-frequency scattering. Two methods for generating the distributions of random scatterers are tested. The first is done in the spatial domain by essentially smoothing a set of random numbers over an ellipsoidal volume using a Gaussian weighting function. The second method consists of filtering a set of random numbers in the wavenumber domain to obtain a set of heterogeneities with a desired statistical distribution (Frankel and Clayton, 1986). This method is capable of generating distributions with either Gaussian or von Karman autocorrelation functions. The key parameters that affect scattering are the correlation length, the standard deviation of velocity for the heterogeneities, and the Hurst exponent, which is only present in the von Karman media. Overall, we find that shorter correlation lengths as well as higher standard deviations result in increased tangential motion in the frequency band of interest (0-10 Hz). This occurs partially through S-wave refraction, but mostly by P-S and Rg-S wave conversions. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
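
    The wavenumber-domain approach described above (Frankel and Clayton, 1986) can be sketched in a few lines: filter white noise with a spectral shape chosen for a Gaussian or von Karman autocorrelation, then rescale to the desired standard deviation. The sketch below is a 2D illustration under assumed, illustrative parameter values (grid size, spacing, correlation length, sigma, Hurst exponent); it is not the 3D implementation used in the SPE simulations.

    ```python
    import numpy as np

    def random_medium_2d(n, dx, corr_len, sigma, kind="von_karman", hurst=0.3, seed=0):
        """Generate a 2D random velocity-perturbation field by filtering white
        noise in the wavenumber domain (Frankel and Clayton, 1986 style)."""
        rng = np.random.default_rng(seed)
        noise = rng.standard_normal((n, n))

        kx = np.fft.fftfreq(n, d=dx) * 2.0 * np.pi
        ky = np.fft.fftfreq(n, d=dx) * 2.0 * np.pi
        k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)

        # Spectral filter for the chosen autocorrelation family; the exact
        # normalization is absorbed by the rescaling below.
        if kind == "gaussian":
            amp = np.exp(-(k * corr_len) ** 2 / 4.0)
        else:  # von Karman, controlled by the Hurst exponent
            amp = (1.0 + (k * corr_len) ** 2) ** (-(hurst + 1.0) / 2.0)

        field = np.real(np.fft.ifft2(np.fft.fft2(noise) * amp))
        # Zero mean and the requested standard deviation of the perturbations.
        return (field - field.mean()) / field.std() * sigma

    # Example: 5% RMS perturbations with a 500 m correlation length on a 10 m grid.
    dv = random_medium_2d(n=512, dx=0.01, corr_len=0.5, sigma=0.05)
    print(dv.std())  # ~0.05
    ```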

  15. Kurtosis, skewness, and non-Gaussian cosmological density perturbations

    NASA Technical Reports Server (NTRS)

    Luo, Xiaochun; Schramm, David N.

    1993-01-01

    Cosmological topological defects as well as some nonstandard inflation models can give rise to non-Gaussian density perturbations. Skewness and kurtosis are the third and fourth moments that measure the deviation of a distribution from a Gaussian. Measurement of these moments for the cosmological density field and for the microwave background temperature anisotropy can provide a test of the Gaussian nature of the primordial fluctuation spectrum. In the case of the density field, the importance of measuring the kurtosis is stressed since it will be preserved through the weakly nonlinear gravitational evolution epoch. Current constraints on skewness and kurtosis of primeval perturbations are obtained from the observed density contrast on small scales and from recent COBE observations of temperature anisotropies on large scales. It is also shown how, in principle, future microwave anisotropy experiments might be able to reveal the initial skewness and kurtosis. It is shown that present data argue that if the initial spectrum is adiabatic, then it is probably Gaussian, but non-Gaussian isocurvature fluctuations are still allowed, and these are what topological defects provide.
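
    As a minimal numerical illustration of these moments, the snippet below computes sample skewness and excess kurtosis (both near zero for a Gaussian) for a Gaussian field and for a mildly non-Gaussian one; the quadratic distortion used to break Gaussianity is purely illustrative and is not a cosmological model.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    gaussian_field = rng.standard_normal(100_000)              # Gaussian reference
    non_gaussian = gaussian_field + 0.3 * gaussian_field ** 2  # mildly skewed field

    for name, delta in [("Gaussian", gaussian_field), ("non-Gaussian", non_gaussian)]:
        delta = delta - delta.mean()
        print(name,
              "skewness =", round(float(stats.skew(delta)), 3),             # ~0 for a Gaussian
              "excess kurtosis =", round(float(stats.kurtosis(delta)), 3))  # ~0 for a Gaussian
    ```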

  16. Gaussian-input Gaussian mixture model for representing density maps and atomic models.

    PubMed

    Kawabata, Takeshi

    2018-07-01

    A new Gaussian mixture model (GMM) has been developed for better representation of both atomic models and electron microscopy 3D density maps. The standard GMM algorithm employs an EM algorithm to determine the parameters. It accepted a set of 3D points with weights, corresponding to voxel or atomic centers. Although the standard algorithm worked reasonably well, it had three problems. First, it ignored the size (voxel width or atomic radius) of the input, and thus it could lead to a GMM with a smaller spread than the input. Second, the algorithm had a singularity problem, as it sometimes stopped the iterative procedure due to a Gaussian function with almost zero variance. Third, a map with a large number of voxels required a long computation time for conversion to a GMM. To solve these problems, we have introduced a Gaussian-input GMM algorithm, which considers the input atoms or voxels as a set of Gaussian functions. The standard EM algorithm of GMM was extended to optimize the new GMM. The new GMM has a radius of gyration identical to that of the input and does not suddenly stop due to the singularity problem. For fast computation, we have introduced down-sampled Gaussian functions (DSG) by merging neighboring voxels into anisotropic Gaussian functions. This provides a GMM with thousands of Gaussian functions in a short computation time. We have also introduced a DSG-input GMM: the Gaussian-input GMM with the DSG as the input. This new algorithm is much faster than the standard algorithm. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
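
    For contrast with the Gaussian-input variant described above, a standard point-input GMM (the baseline the paper improves on) can be fitted with an off-the-shelf EM implementation such as scikit-learn's GaussianMixture. The two-blob point cloud below is synthetic and stands in for atomic centers; the Gaussian-input and DSG extensions themselves are not implemented here.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Synthetic "atomic centers": two 3D blobs standing in for a structure.
    rng = np.random.default_rng(0)
    points = np.vstack([
        rng.normal(loc=(0.0, 0.0, 0.0), scale=1.5, size=(500, 3)),
        rng.normal(loc=(8.0, 2.0, -1.0), scale=1.0, size=(500, 3)),
    ])

    # Standard EM-based GMM on point inputs; the finite size of each input
    # point is ignored, which is exactly the limitation the Gaussian-input
    # GMM is designed to remove.
    gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
    gmm.fit(points)

    print("weights:", gmm.weights_)
    print("means:\n", gmm.means_)
    ```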

  17. A perturbative approach to the redshift space correlation function: beyond the Standard Model

    NASA Astrophysics Data System (ADS)

    Bose, Benjamin; Koyama, Kazuya

    2017-08-01

    We extend our previous redshift space power spectrum code to the redshift space correlation function. Here we focus on the Gaussian Streaming Model (GSM). Again, the code accommodates a wide range of modified gravity and dark energy models. For the non-linear real space correlation function used in the GSM we use the Fourier transform of the RegPT 1-loop matter power spectrum. We compare predictions of the GSM for a Vainshtein screened and Chameleon screened model as well as GR. These predictions are compared to the Fourier transform of the Taruya, Nishimichi and Saito (TNS) redshift space power spectrum model which is fit to N-body data. We find very good agreement between the Fourier transform of the TNS model and the GSM predictions, with <= 6% deviations in the first two correlation function multipoles for all models for redshift space separations in 50 Mpc/h <= s <= 180 Mpc/h. Excellent agreement is found in the differences between the modified gravity and GR multipole predictions for both approaches to the redshift space correlation function, highlighting their matched ability in picking up deviations from GR. We elucidate the timeliness of such non-standard templates at the dawn of stage-IV surveys and discuss necessary preparations and extensions needed for upcoming high quality data.

  18. An experimental investigation of gas fuel injection with X-ray radiography

    DOE PAGES

    Swantek, Andrew B.; Duke, D. J.; Kastengren, A. L.; ...

    2017-04-21

    In this paper, an outward-opening compressed natural gas, direct injection fuel injector has been studied with single-shot x-ray radiography. Three dimensional simulations have also been performed to complement the x-ray data. Argon was used as a surrogate gas for experimental and safety reasons. This technique allows the acquisition of a quantitative mapping of the ensemble-average and standard deviation of the projected density throughout the injection event. Two dimensional, ensemble average and standard deviation data are presented to investigate the quasi-steady-state behavior of the jet. Upstream of the stagnation zone, minimal shot-to-shot variation is observed. Downstream of the stagnation zone, bulk mixing is observed as the jet transitions to a subsonic turbulent jet. From the time averaged data, individual slices at all downstream locations are extracted and an Abel inversion was performed to compute the radial density distribution, which was interpolated to create three dimensional visualizations. The Abel reconstructions reveal that upstream of the stagnation zone, the gas forms an annulus with high argon density and large density gradients. Inside this annulus, a recirculation region with low argon density exists. Downstream, the jet transitions to a fully turbulent jet with Gaussian argon density distributions. These experimental data are intended to serve as a quantitative benchmark for simulations.
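
    The Abel-inversion step can be illustrated with a simple onion-peeling scheme, one common discrete approach for axisymmetric data: the projected profile is modelled as chord-length-weighted sums over concentric shells, giving an upper-triangular system that is solved from the outermost shell inward. This is a hedged sketch with an illustrative Gaussian test profile, not the reconstruction code used in the study.

    ```python
    import numpy as np

    def chord_matrix(n, dr):
        """Chord lengths of lines of sight (offsets y_i = i*dr) through
        concentric annular shells of width dr, assuming axial symmetry."""
        edges = np.arange(n + 1) * dr
        A = np.zeros((n, n))
        for i in range(n):
            y = i * dr
            for j in range(i, n):
                outer = np.sqrt(edges[j + 1] ** 2 - y ** 2)
                inner = np.sqrt(max(edges[j] ** 2 - y ** 2, 0.0))
                A[i, j] = 2.0 * (outer - inner)
        return A

    def onion_peel_abel(projection, dr):
        """Recover a radial density profile from a half-profile of projected
        (line-of-sight integrated) density by solving the chord-length system."""
        return np.linalg.solve(chord_matrix(len(projection), dr), projection)

    # Self-test: forward-project a known Gaussian radial profile, then invert.
    dr, n = 0.1, 80
    r = (np.arange(n) + 0.5) * dr
    true_profile = np.exp(-r ** 2 / 2.0)
    projection = chord_matrix(n, dr) @ true_profile
    print(np.max(np.abs(onion_peel_abel(projection, dr) - true_profile)))  # very small
    ```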

  19. Simulation and evaluation of phase noise for optical amplification using semiconductor optical amplifiers in DPSK applications

    NASA Astrophysics Data System (ADS)

    Hong, Wei; Huang, Dexiu; Zhang, Xinliang; Zhu, Guangxi

    2008-01-01

    A thorough simulation and evaluation of phase noise for optical amplification using a semiconductor optical amplifier (SOA) is very important for predicting its performance in differential phase-shift keyed (DPSK) applications. In this paper, the standard deviation and probability distribution of differential phase noise at the SOA output are obtained from the statistics of simulated differential phase noise. By using a full-wave model of the SOA, the noise performance over the entire operation range can be investigated. It is shown that nonlinear phase noise substantially contributes to the total phase noise in the case of a noisy signal amplified by a saturated SOA, and that the nonlinear contribution is larger for shorter SOA carrier lifetimes. It is also shown that a Gaussian distribution can serve as a good approximation of the total differential phase noise statistics over the whole operation range. The power penalty due to differential phase noise is evaluated using a semi-analytical probability density function (PDF) of the receiver noise. An obvious increase of the power penalty at high signal input powers is found for low input OSNR, which is due to both the large nonlinear differential phase noise and the dependence of the curvature of the BER versus received power curve on the differential phase noise standard deviation.

  20. An experimental investigation of gas fuel injection with X-ray radiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swantek, Andrew B.; Duke, D. J.; Kastengren, A. L.

    In this paper, an outward-opening compressed natural gas, direct injection fuel injector has been studied with single-shot x-ray radiography. Three dimensional simulations have also been performed to complement the x-ray data. Argon was used as a surrogate gas for experimental and safety reasons. This technique allows the acquisition of a quantitative mapping of the ensemble-average and standard deviation of the projected density throughout the injection event. Two dimensional, ensemble average and standard deviation data are presented to investigate the quasi-steady-state behavior of the jet. Upstream of the stagnation zone, minimal shot-to-shot variation is observed. Downstream of the stagnation zone, bulk mixing is observed as the jet transitions to a subsonic turbulent jet. From the time averaged data, individual slices at all downstream locations are extracted and an Abel inversion was performed to compute the radial density distribution, which was interpolated to create three dimensional visualizations. The Abel reconstructions reveal that upstream of the stagnation zone, the gas forms an annulus with high argon density and large density gradients. Inside this annulus, a recirculation region with low argon density exists. Downstream, the jet transitions to a fully turbulent jet with Gaussian argon density distributions. These experimental data are intended to serve as a quantitative benchmark for simulations.

  1. Hypo- and hyperglycemia in relation to the mean, standard deviation, coefficient of variation, and nature of the glucose distribution.

    PubMed

    Rodbard, David

    2012-10-01

    We describe a new approach to estimate the risks of hypo- and hyperglycemia based on the mean and SD of the glucose distribution using optional transformations of the glucose scale to achieve a more nearly symmetrical and Gaussian distribution, if necessary. We examine the correlation of risks of hypo- and hyperglycemia calculated using different glucose thresholds and the relationships of these risks to the mean glucose, SD, and percentage coefficient of variation (%CV). Using representative continuous glucose monitoring datasets, one can predict the risk of glucose values above or below any arbitrary threshold if the glucose distribution is Gaussian or can be transformed to be Gaussian. Symmetry and Gaussianity can be tested objectively and used to optimize the transformation. The method performs well, with excellent correlation of predicted and observed risks of hypo- or hyperglycemia for individual subjects by time of day or for a specified range of dates. One can compare observed and calculated risks of hypo- and hyperglycemia for a series of thresholds considering their uncertainties. Thresholds such as 80 mg/dL can be used as surrogates for thresholds such as 50 mg/dL. We observe a high correlation of risk of hypoglycemia with %CV and illustrate the theoretical basis for that relationship. One can estimate the historical risks of hypo- and hyperglycemia by time of day, date, day of the week, or range of dates, using any specified thresholds. Risks of hypoglycemia with one threshold (e.g., 80 mg/dL) can be used as an effective surrogate marker for hypoglycemia at other thresholds (e.g., 50 mg/dL). These estimates of risk can be useful in research studies and in the clinical care of patients with diabetes.
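
    The core calculation is straightforward once the glucose distribution (possibly after transformation) is treated as Gaussian: the risk below or above a threshold is the normal CDF or survival function evaluated with the observed mean and SD. The sketch below uses synthetic glucose values and illustrative thresholds of 70 and 180 mg/dL; it is a minimal sketch, not the published algorithm.

    ```python
    import numpy as np
    from scipy.stats import norm

    # Synthetic continuous-glucose-monitoring values (mg/dL), assumed Gaussian.
    glucose = np.random.default_rng(2).normal(loc=150.0, scale=45.0, size=2000)

    mean, sd = glucose.mean(), glucose.std(ddof=1)
    risk_hypo = norm.cdf(70.0, loc=mean, scale=sd)    # P(glucose < 70)
    risk_hyper = norm.sf(180.0, loc=mean, scale=sd)   # P(glucose > 180)
    observed_hypo = np.mean(glucose < 70.0)
    observed_hyper = np.mean(glucose > 180.0)

    print(f"predicted hypo risk  {risk_hypo:.3f}   observed {observed_hypo:.3f}")
    print(f"predicted hyper risk {risk_hyper:.3f}   observed {observed_hyper:.3f}")
    ```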

  2. A Gaussian method to improve work-of-breathing calculations.

    PubMed

    Petrini, M F; Evans, J N; Wall, M A; Norman, J R

    1995-01-01

    The work of breathing is a calculated index of pulmonary function in ventilated patients that may be useful in deciding when to wean and when to extubate. However, the accuracy of the calculated work of breathing of the patient (WOBp) can suffer from artifacts introduced by coughing, swallowing, and other non-breathing maneuvers. The WOBp in this case will include not only the usual work of inspiration, but also the work of performing these non-breathing maneuvers. The authors developed a method to objectively eliminate the calculated work of these movements from the work of breathing, based on fitting a Gaussian curve to the variable P, which is obtained from the difference between the esophageal pressure change and the airway pressure change during each breath. In spontaneously breathing adults the normal breaths fit the Gaussian curve, while breaths that contain non-breathing maneuvers do not. In this Gaussian breath-elimination method (GM), breaths that lie more than two standard deviations from the mean obtained by the fit are eliminated. For normally breathing control adult subjects, GM had little effect on WOBp, reducing it from 0.49 to 0.47 J/L (n = 8), while there was a 40% reduction in the coefficient of variation. Non-breathing maneuvers were simulated by coughing, which increased WOBp to 0.88 J/L (n = 6); with the GM correction, WOBp was 0.50 J/L, a value not significantly different from that of normal breathing. Occlusion also increased WOBp to 0.60 J/L, but GM-corrected WOBp was 0.51 J/L, a normal value. As predicted, doubling the respiratory rate did not change the WOBp before or after the GM correction. (ABSTRACT TRUNCATED AT 250 WORDS)
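
    The elimination rule itself is compact: fit a Gaussian to the per-breath variable P and discard breaths lying more than two standard deviations from the fitted mean. The sketch below uses a maximum-likelihood normal fit and synthetic P values with a few cough-like outliers; it is an assumption-laden stand-in for the authors' curve-fitting procedure, not their code.

    ```python
    import numpy as np
    from scipy.stats import norm

    def gaussian_breath_filter(p_values, n_sd=2.0):
        """Keep breaths whose P value lies within n_sd standard deviations of
        the mean of a Gaussian fitted to all breaths."""
        mu, sigma = norm.fit(p_values)           # maximum-likelihood mean and SD
        return np.abs(p_values - mu) <= n_sd * sigma

    # Illustrative per-breath P values: normal breaths plus cough-like outliers.
    rng = np.random.default_rng(3)
    p = np.concatenate([rng.normal(10.0, 1.5, size=60), [25.0, 28.0, 31.0]])
    keep = gaussian_breath_filter(p)
    print("breaths kept:", int(keep.sum()), "of", len(p))  # outliers are dropped
    ```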

  3. Elegant Gaussian beams for enhanced optical manipulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alpmann, Christina, E-mail: c.alpmann@uni-muenster.de; Schöler, Christoph; Denz, Cornelia

    2015-06-15

    The generation of micro- and nanostructured complex light beams is attaining increasing impact in photonics and laser applications. In this contribution, we demonstrate the implementation and experimental realization of the relatively unknown, but highly versatile, class of complex-valued Elegant Hermite- and Laguerre-Gaussian beams. These beams create higher trapping forces than standard Gaussian light fields due to their propagation-changing properties. We demonstrate optical trapping and alignment of complex functional particles such as nanocontainers with standard and Elegant Gaussian light beams. Elegant Gaussian beams will inspire manifold applications in optical manipulation, direct laser writing, or microscopy, where the design of the point-spread function is relevant.

  4. Fault Identification Based on Nlpca in Complex Electrical Engineering

    NASA Astrophysics Data System (ADS)

    Zhang, Yagang; Wang, Zengping; Zhang, Jinfang

    2012-07-01

    Faults are inevitable in any complex systems engineering. The electric power system is essentially a nonlinear system, and it is also one of the most complex artificial systems in the world. In our research, based on real-time measurements from phasor measurement units, under the influence of white Gaussian noise (with a standard deviation of 0.01 and zero mean), we mainly used nonlinear principal component analysis (NLPCA) to solve the fault identification problem in complex electrical engineering. The simulation results show that a fault in complex electrical engineering usually corresponds to the variable with the maximum absolute coefficient in the first principal component. This research has significant theoretical value and practical engineering significance.
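
    As a hedged illustration (using ordinary linear PCA as a simplified stand-in for NLPCA), the sketch below adds zero-mean white Gaussian noise with a standard deviation of 0.01 to synthetic phasor-like measurements, injects a fault signature into one channel, and checks which variable carries the largest absolute loading on the first principal component. Channel count, fault shape, and amplitude are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    n_samples, n_channels, faulted = 500, 8, 3
    # Measurement noise: white Gaussian, zero mean, standard deviation 0.01.
    data = rng.normal(0.0, 0.01, size=(n_samples, n_channels))
    # Inject an illustrative oscillatory fault signature into one channel.
    data[:, faulted] += 0.2 * np.sin(np.linspace(0.0, 20.0 * np.pi, n_samples))

    centered = data - data.mean(axis=0)
    # Principal components from the SVD of the centered data matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    first_pc = vt[0]
    print("channel with max |loading| on PC1:", int(np.argmax(np.abs(first_pc))))  # expect 3
    ```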

  5. Validation of Ozone Profiles Retrieved from SAGE III Limb Scatter Measurements

    NASA Technical Reports Server (NTRS)

    Rault, Didier F.; Taha, Ghassan

    2007-01-01

    Ozone profiles retrieved from Stratospheric Aerosol and Gas Experiment (SAGE III) limb scatter measurements are compared with correlative measurements made by occultation instruments (SAGE II, SAGE III and HALOE [Halogen Occultation Experiment]), a limb scatter instrument (Optical Spectrograph and InfraRed Imager System [OSIRIS]) and a series of ozonesondes and lidars, in order to ascertain the accuracy and precision of the SAGE III instrument in limb scatter mode. The measurement relative accuracy is found to be 5-10% from the tropopause to about 45 km whereas the relative precision is found to be less than 10% from 20 to 38 km. The main source of error is height registration uncertainty, which is found to be Gaussian with a standard deviation of about 350 m.

  6. A neurophysiological explanation for biases in visual localization.

    PubMed

    Moreland, James C; Boynton, Geoffrey M

    2017-02-01

    Observers show small but systematic deviations from equal weighting of all elements when asked to localize the center of an array of dots. Counter-intuitively, with small numbers of dots drawn from a Gaussian distribution, this bias results in subjects overweighting the influence of outlier dots - inconsistent with traditional statistical estimators of central tendency. Here we show that this apparent statistical anomaly can be explained by the observation that outlier dots also lie in regions of lower dot density. Using a standard model of V1 processing, which includes spatial integration followed by a compressive static nonlinearity, we can successfully predict the finding that dots in less dense regions of an array have a relatively greater influence on the perceived center.

  7. Estimating the contribution of strong daily export events to total pollutant export from the United States in summer

    NASA Astrophysics Data System (ADS)

    Fang, Yuanyuan; Fiore, Arlene M.; Horowitz, Larry W.; Gnanadesikan, Anand; Levy, Hiram; Hu, Yongtao; Russell, Armistead G.

    2009-12-01

    While the export of pollutants from the United States exhibits notable variability from day to day and is often considered to be "episodic," the contribution of strong daily export events to total export has not been quantified. We use carbon monoxide (CO) as a tracer of anthropogenic pollutants in the Model of OZone And Related Tracers (MOZART) to estimate this contribution. We first identify the major export pathway from the United States to be through the northeast boundary (24-48°N along 67.5°W and 80-67.5°W along 48°N), and then analyze 15 summers of daily CO export fluxes through this boundary. These daily CO export fluxes have a nearly Gaussian distribution with a mean of 1100 Gg CO day⁻¹ and a standard deviation of 490 Gg CO day⁻¹. To focus on the synoptic variability, we define a "synoptic background" export flux equal to the 15 day moving average export flux and classify strong export days according to their fluxes relative to this background. As expected from Gaussian statistics, 16% of summer days are "strong export days," classified as those days when the CO export flux exceeds the synoptic background by one standard deviation or more. Strong export days contribute 25% to the total export, a value determined by the relative standard deviation of the CO flux distribution. Regressing the anomalies of the CO export flux through the northeast U.S. boundary relative to the synoptic background on the daily anomalies in the surface pressure field (also relative to a 15 day running mean) suggests that strong daily export fluxes are correlated with passages of midlatitude cyclones over the Gulf of Saint Lawrence. The associated cyclonic circulation and Warm Conveyor Belts (WCBs) that lift surface pollutants over the northeastern United States have been shown previously to be associated with long-range transport events. Comparison with observations from the 2004 INTEX-NA field campaign confirms that our model captures the observed enhancements in CO outflow and resolves the processes associated with cyclone passages on strong export days. "Moderate export days," defined as days when the CO flux through the northeast boundary exceeds the 15 day running mean by less than one standard deviation, represent an additional 34% of summer days and 40% of total export. These days are also associated with migratory midlatitude cyclones. The remaining 35% of total export occurs on "weak export days" (50% of summer days) when high pressure anomalies occur over the Gulf of Saint Lawrence. Our findings for summer also apply to spring, when the U.S. pollutant export is typically strongest, with similar contributions to total export and associated meteorology on strong, moderate and weak export days. Although cyclone passages are the primary driver for strong daily export events, export during days without cyclone passages also makes a considerable contribution to the total export and thereby to the global pollutant budget.
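
    The day classification described above reduces to comparing each daily flux with a 15 day running mean and with one standard deviation of the flux series. The sketch below applies that rule to synthetic fluxes drawn with roughly the quoted mean and standard deviation; the window handling and the synthetic data are simplifying assumptions, not the MOZART analysis.

    ```python
    import numpy as np

    def classify_export_days(flux, window=15):
        """Label each day as strong / moderate / weak by comparing the flux
        anomaly (flux minus a centered running mean) with one standard
        deviation of the flux series."""
        background = np.convolve(flux, np.ones(window) / window, mode="same")
        anomaly = flux - background
        sd = flux.std(ddof=1)
        return np.where(anomaly >= sd, "strong",
               np.where(anomaly > 0.0, "moderate", "weak"))

    # Synthetic daily CO export fluxes for one summer (Gg CO per day).
    rng = np.random.default_rng(4)
    flux = rng.normal(1100.0, 490.0, size=92)
    labels = classify_export_days(flux)
    for kind in ("strong", "moderate", "weak"):
        frac_days = np.mean(labels == kind)
        frac_export = flux[labels == kind].sum() / flux.sum()
        print(f"{kind:8s} days: {frac_days:.0%}   share of total export: {frac_export:.0%}")
    ```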

  8. Implication of observed cloud variability for parameterizations of microphysical and radiative transfer processes in climate models

    NASA Astrophysics Data System (ADS)

    Huang, D.; Liu, Y.

    2014-12-01

    The effects of subgrid cloud variability on grid-average microphysical rates and radiative fluxes are examined by use of long-term retrieval products at the Tropical West Pacific (TWP), Southern Great Plains (SGP), and North Slope of Alaska (NSA) sites of the Department of Energy's Atmospheric Radiation Measurement (ARM) Program. Four commonly used distribution functions, the truncated Gaussian, Gamma, lognormal, and Weibull distributions, are constrained to have the same mean and standard deviation as the observed cloud liquid water content. The PDFs are then used to upscale relevant physical processes to obtain grid-average process rates. It is found that the truncated Gaussian representation results in up to 30% mean bias in the autoconversion rate, whereas the mean bias for the lognormal representation is about 10%. The Gamma and Weibull distribution functions perform best for the grid-average autoconversion rate, with a mean relative bias of less than 5%. For radiative fluxes, the lognormal and truncated Gaussian representations perform better than the Gamma and Weibull representations. The results show that the optimal choice of subgrid cloud distribution function depends on the nonlinearity of the process of interest and thus there is no single distribution function that works best for all parameterizations. Examination of the scale (window size) dependence of the mean bias indicates that the bias in grid-average process rates monotonically increases with increasing window sizes, suggesting the increasing importance of subgrid variability with increasing grid sizes.
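
    The upscaling step can be made concrete with a small sketch: a nonlinear local process rate (an illustrative power law loosely resembling common autoconversion parameterizations) is averaged over an assumed subgrid distribution of liquid water content constrained to a given mean and standard deviation, and compared with the rate evaluated at the grid mean. The distribution choice, exponent, and moments below are placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    mean_q, sd_q = 0.3, 0.2      # grid-box mean and SD of liquid water content (illustrative)
    exponent = 2.47              # illustrative nonlinearity of the local process rate

    # Lognormal subgrid distribution matched to the prescribed mean and SD.
    sigma2 = np.log(1.0 + (sd_q / mean_q) ** 2)
    mu = np.log(mean_q) - 0.5 * sigma2
    q = rng.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=200_000)

    rate_of_mean = mean_q ** exponent       # ignores subgrid variability
    mean_of_rate = np.mean(q ** exponent)   # PDF-upscaled grid-average rate
    print(f"rate(mean q)                 = {rate_of_mean:.4f}")
    print(f"mean rate over lognormal PDF = {mean_of_rate:.4f}")
    print(f"bias from ignoring variability: {rate_of_mean / mean_of_rate - 1.0:+.0%}")
    ```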

  9. Robust Library Building for Autonomous Classification of Downhole Geophysical Logs Using Gaussian Processes

    NASA Astrophysics Data System (ADS)

    Silversides, Katherine L.; Melkumyan, Arman

    2017-03-01

    Machine learning techniques such as Gaussian Processes can be used to identify stratigraphically important features in geophysical logs. The marker shales in the banded-iron-formation-hosted iron ore deposits of the Hamersley Ranges, Western Australia, form distinctive signatures in the natural gamma logs. The identification of these marker shales is important for stratigraphic identification of unit boundaries for the geological modelling of the deposit. Each machine learning technique has unique properties that impact the results. For Gaussian Processes (GPs), the output values are inclined towards the mean value, particularly when there is not sufficient information in the library. The impact that these inclinations have on the classification can vary depending on the parameter values selected by the user. Therefore, when applying machine learning techniques, care must be taken to fit the technique to the problem correctly. This study focuses on optimising the settings and choices for training a GP system to identify a specific marker shale. We show that the final results converge even when different, but equally valid, starting libraries are used for the training. To analyse the impact on feature identification, GP models were trained so that the output was inclined towards a positive, neutral, or negative value. For this type of classification, the best results were obtained when the pull was towards a negative output. We also show that the GP output can be adjusted by using a standard deviation coefficient that changes the balance between certainty and accuracy in the results.

  10. Automatic arrival time detection for earthquakes based on Modified Laplacian of Gaussian filter

    NASA Astrophysics Data System (ADS)

    Saad, Omar M.; Shalaby, Ahmed; Samy, Lotfy; Sayed, Mohammed S.

    2018-04-01

    Precise identification of the onset time of an earthquake is essential for correctly determining the earthquake's location and the other parameters used to build seismic catalogues. P-wave arrivals of weak events or micro-earthquakes cannot be precisely detected due to background noise. In this paper, we propose a novel approach based on a Modified Laplacian of Gaussian (MLoG) filter to detect the onset time even at very low signal-to-noise ratios (SNRs). The proposed algorithm utilizes a denoising filter to smooth the background noise and employs the MLoG mask to filter the seismic data. Afterward, we apply a dual-threshold comparator to detect the onset time of the event. The results show that the proposed algorithm can detect the onset time of micro-earthquakes accurately, even at an SNR of -12 dB. The proposed algorithm achieves an onset-time picking accuracy of 93% with a standard deviation error of 0.10 s for 407 field seismic waveforms. We also compare the results with the short-term/long-term average (STA/LTA) algorithm and the Akaike Information Criterion (AIC), and the proposed algorithm outperforms both.
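
    A heavily simplified picker in the same spirit (a plain Laplacian-of-Gaussian response plus two thresholds, not the paper's MLoG mask or its denoising stage) is sketched below on a synthetic trace with an arrival at 2 s; the smoothing width, threshold fractions, and signal model are all illustrative assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d, gaussian_laplace

    def pick_onset(trace, fs, sigma_s=0.05, high_frac=0.5, low_frac=0.1):
        """Smooth the trace energy, take the absolute Laplacian-of-Gaussian
        response, find the first sample above a high threshold, then walk
        back to the preceding low-threshold crossing (a dual-threshold pick)."""
        energy = gaussian_filter1d(trace ** 2, sigma=sigma_s * fs)
        response = np.abs(gaussian_laplace(energy, sigma=sigma_s * fs))
        high, low = high_frac * response.max(), low_frac * response.max()
        i = int(np.argmax(response > high))
        while i > 0 and response[i] > low:
            i -= 1
        return i / fs  # onset time in seconds

    # Synthetic noisy trace with an arrival at t = 2 s.
    fs = 100.0
    rng = np.random.default_rng(6)
    t = np.arange(0.0, 5.0, 1.0 / fs)
    trace = rng.normal(scale=0.5, size=t.size)
    late = t >= 2.0
    trace[late] += 2.0 * np.sin(2.0 * np.pi * 5.0 * t[late]) * np.exp(-(t[late] - 2.0))
    print("picked onset (s):", pick_onset(trace, fs))  # expected close to 2.0
    ```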

  11. Fuzzy C-means classification for corrosion evolution of steel images

    NASA Astrophysics Data System (ADS)

    Trujillo, Maite; Sadki, Mustapha

    2004-05-01

    An unavoidable problem of metal structures is their exposure to rust degradation during their operational life. Thus, the surfaces need to be assessed in order to avoid potential catastrophes. There is considerable interest in the use of patch repair strategies, which minimize the project costs. However, to operate such strategies with confidence in the long useful life of the repair, it is essential that the condition of the existing coatings and the steel substrate can be accurately quantified and classified. This paper describes the application of fuzzy set theory to the classification of steel surfaces according to rust time. We propose a semi-automatic technique to obtain image clustering using the Fuzzy C-means (FCM) algorithm, and we analyze two kinds of data to study the classification performance. Firstly, we investigate the use of raw image pixels, without any pre-processing, together with neighborhood pixels. Secondly, we apply Gaussian noise with different standard deviations to the images to study the tolerance of the FCM method to Gaussian noise. The noisy images simulate possible perturbations of the images due to weather or rust deposits on the steel surfaces during typical on-site acquisition procedures.

  12. A Simple Model of Cirrus Horizontal Inhomogeneity and Cloud Fraction

    NASA Technical Reports Server (NTRS)

    Smith, Samantha A.; DelGenio, Anthony D.

    1998-01-01

    A simple model of horizontal inhomogeneity and cloud fraction in cirrus clouds has been formulated on the basis that all internal horizontal inhomogeneity in the ice mixing ratio is due to variations in the cloud depth, which are assumed to be Gaussian. The use of such a model was justified by the observed relationship between the normalized variability of the ice water mixing ratio (and extinction) and the normalized variability of cloud depth. Using radar cloud depth data as input, the model reproduced well the in-cloud ice water mixing ratio histograms obtained from horizontal runs during the FIRE2 cirrus campaign. For totally overcast cases the histograms were almost Gaussian, but changed as cloud fraction decreased to exponential distributions which peaked at the lowest nonzero ice value for cloud fractions below 90%. Cloud fractions predicted by the model were always within 28% of the observed value. The predicted average ice water mixing ratios were within 34% of the observed values. This model could be used in a GCM to produce the ice mixing ratio probability distribution function and to estimate cloud fraction. It only requires basic meteorological parameters, the depth of the saturated layer and the standard deviation of cloud depth as input.

  13. Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'.

    PubMed

    de Nijs, Robin

    2015-07-21

    In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. The redrawing methods were based on a Poisson and a Gaussian distribution. The half-count/full-count ratios of the mean, standard deviation, skewness, and excess kurtosis were determined for all methods and compared to the theoretical values for a Poisson distribution. The statistical parameters showed the same behavior as in the original note and demonstrated the superiority of the Poisson resampling method. Rounding off before saving the half-count image had a severe impact on counting statistics for counts below 100. Only Poisson resampling was not affected by this, while Gaussian redrawing was less affected by it than Poisson redrawing. Poisson resampling is the method of choice when simulating half-count (or less) images from full-count images. It correctly simulates the statistical properties, also in the case of rounding off of the images.
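
    The three simulation strategies compared in the comment can be reproduced in a few lines under the usual assumption that the acquired counts are Poisson distributed: Poisson resampling amounts to binomial thinning of the observed counts, while the redrawing methods draw fresh Poisson or Gaussian values with half the observed mean. This is a sketch of the general idea, not the Matlab code of the comment.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    full = rng.poisson(lam=20.0, size=(256, 256))              # full-count image

    half_resample = rng.binomial(full, 0.5)                    # Poisson resampling (thinning)
    half_poisson = rng.poisson(full / 2.0)                     # Poisson redrawing
    half_gauss = rng.normal(full / 2.0, np.sqrt(full / 2.0))   # Gaussian redrawing

    for name, img in [("resampling", half_resample),
                      ("Poisson redraw", half_poisson),
                      ("Gaussian redraw", half_gauss)]:
        print(f"{name:16s} mean={img.mean():6.2f}  var={img.var():6.2f}  "
              f"skew={stats.skew(img.ravel()):+.3f}")
    # True half-count Poisson data would have mean ~ variance ~ 10; only the
    # thinned (resampled) image reproduces this, while the redrawn images
    # inflate the variance because they add noise on top of the original noise.
    ```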

  14. Chemical Source Inversion using Assimilated Constituent Observations in an Idealized Two-dimensional System

    NASA Technical Reports Server (NTRS)

    Tangborn, Andrew; Cooper, Robert; Pawson, Steven; Sun, Zhibin

    2009-01-01

    We present a source inversion technique for chemical constituents that uses assimilated constituent observations rather than directly using the observations. The method is tested with a simple model problem, which is a two-dimensional Fourier-Galerkin transport model combined with a Kalman filter for data assimilation. Inversion is carried out using a Green's function method, and observations are simulated from a true state with added Gaussian noise. The forecast state uses the same spectral model but differs by an unbiased Gaussian model error and by emissions models with constant errors. The numerical experiments employ both simulated in situ and satellite observation networks. Source inversion was carried out either by direct use of synthetically generated observations with added noise, or by first assimilating the observations and using the analyses to extract observations. We have conducted 20 identical twin experiments for each set of source and observation configurations, and find that in the limiting cases of very few localized observations or an extremely large observation network there is little advantage to carrying out assimilation first. However, at intermediate observation densities, the standard deviation of the source inversion error decreases by 50% to 95% when the Kalman filter algorithm is followed by Green's function inversion.

  15. Consistency relations for sharp inflationary non-Gaussian features

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mooij, Sander; Palma, Gonzalo A.; Panotopoulos, Grigoris

    If cosmic inflation suffered tiny time-dependent deviations from the slow-roll regime, these would induce the existence of small scale-dependent features imprinted in the primordial spectra, with their shapes and sizes revealing information about the physics that produced them. Small sharp features could be suppressed at the level of the two-point correlation function, making them undetectable in the power spectrum, but could be amplified at the level of the three-point correlation function, offering us a window of opportunity to uncover them in the non-Gaussian bispectrum. In this article, we show that sharp features may be analyzed using only data coming from the three-point correlation function parametrizing primordial non-Gaussianity. More precisely, we show that if features appear in a particular non-Gaussian triangle configuration (e.g. equilateral, folded, squeezed), these must reappear in every other configuration according to a specific relation allowing us to correlate features across the non-Gaussian bispectrum. As a result, we offer a method to study scale-dependent features generated during inflation that depends only on data coming from measurements of non-Gaussianity, allowing us to omit data from the power spectrum.

  16. Impact of Non-Gaussian Error Volumes on Conjunction Assessment Risk Analysis

    NASA Technical Reports Server (NTRS)

    Ghrist, Richard W.; Plakalovic, Dragan

    2012-01-01

    An understanding of how an initially Gaussian error volume becomes non-Gaussian over time is an important consideration for space-vehicle conjunction assessment. Traditional assumptions applied to the error volume artificially suppress the true non-Gaussian nature of the space-vehicle position uncertainties. For typical conjunction assessment objects, representation of the error volume by a state error covariance matrix in a Cartesian reference frame is a more significant limitation than is the assumption of linearized dynamics for propagating the error volume. In this study, the impact of each assumption is examined and isolated for each point in the volume. Limitations arising from representing the error volume in a Cartesian reference frame are corrected by employing a Monte Carlo approach to the probability of collision (Pc), using equinoctial samples from the Cartesian position covariance at the time of closest approach (TCA) between the pair of space objects. A set of actual, higher-risk (Pc >= 10^-4) conjunction events in various low-Earth orbits is analyzed using Monte Carlo methods. The impact of non-Gaussian error volumes on Pc for these cases is minimal, even when the deviation from a Gaussian distribution is significant.

  17. Simple reaction time in 8-9-year old children environmentally exposed to PCBs.

    PubMed

    Šovčíková, Eva; Wimmerová, Soňa; Strémy, Maximilián; Kotianová, Janette; Loffredo, Christopher A; Murínová, Ľubica Palkovičová; Chovancová, Jana; Čonka, Kamil; Lancz, Kinga; Trnovec, Tomáš

    2015-12-01

    Simple reaction time (SRT) has been studied in children exposed to polychlorinated biphenyls (PCBs), with variable results. In the current work we examined SRT in 146 boys and 161 girls, aged 8.53 ± 0.65 years (mean ± SD), exposed to PCBs in the environment of eastern Slovakia. We divided the children into tertiles with regard to increasing PCB serum concentration. The mean ± SEM serum concentration of the sum of 15 PCB congeners was 191.15 ± 5.39, 419.23 ± 8.47, and 1315.12 ± 92.57 ng/g lipids in children of the first, second, and third tertiles, respectively. We created probability distribution plots for each child from their multiple trials of the SRT testing. We fitted response time distributions from all valid trials with the ex-Gaussian function, a convolution of a normal and an exponential function, providing estimates of three independent parameters μ, σ, and τ: μ is the mean of the normal component, σ is the standard deviation of the normal component, and τ is the mean of the exponential component. Group response time distributions were calculated using the Vincent averaging technique. A Q-Q plot comparing the probability distributions of the first vs. third tertile indicated that the deviation of the quantiles of the latter tertile from those of the former begins at the 40th percentile and does not show a positive acceleration. This was confirmed by a comparison of the ex-Gaussian parameters of these two tertiles, adjusted for sex, age, Raven IQ of the child, mother's and father's education, behavior at home and school, and BMI: the results showed that the parameters μ and τ increased significantly (p ≤ 0.05) with PCB exposure. Similar increases of the ex-Gaussian parameter τ in children suffering from ADHD have been previously reported and interpreted as intermittent attentional lapses, but were not seen in our cohort. Our study has confirmed that environmental exposure of children to PCBs is associated with a prolongation of simple reaction time, reflecting impairment of cognitive functions. Copyright © 2015 Elsevier Inc. All rights reserved.
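
    The ex-Gaussian fit itself is readily available: scipy's exponnorm distribution is an ex-Gaussian parameterized by K = τ/σ, loc = μ and scale = σ. The sketch below fits synthetic response times with assumed true parameters; it illustrates the parameter recovery only and has no connection to the cohort data.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(8)
    mu_true, sigma_true, tau_true = 0.45, 0.08, 0.20   # seconds (illustrative)
    # Ex-Gaussian samples: normal component plus an independent exponential tail.
    rt = rng.normal(mu_true, sigma_true, 2000) + rng.exponential(tau_true, 2000)

    K, loc, scale = stats.exponnorm.fit(rt)
    mu_hat, sigma_hat, tau_hat = loc, scale, K * scale
    print(f"mu = {mu_hat:.3f}   sigma = {sigma_hat:.3f}   tau = {tau_hat:.3f}")
    # A larger tau corresponds to a heavier exponential tail, i.e. more
    # frequent long response times ("lapses").
    ```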

  18. Theoretical analysis of non-Gaussian heterogeneity effects on subsurface flow and transport

    NASA Astrophysics Data System (ADS)

    Riva, Monica; Guadagnini, Alberto; Neuman, Shlomo P.

    2017-04-01

    Much of the stochastic groundwater literature is devoted to the analysis of flow and transport in Gaussian or multi-Gaussian log hydraulic conductivity (or transmissivity) fields, Y(x) = ln K(x) (x being a position vector), characterized by one or (less frequently) a multiplicity of spatial correlation scales. Yet Y and many other variables and their (spatial or temporal) increments, ΔY, are known to be generally non-Gaussian. One common manifestation of non-Gaussianity is that whereas frequency distributions of Y often exhibit mild peaks and light tails, those of increments ΔY are generally symmetric with peaks that grow sharper, and tails that become heavier, as separation scale or lag between pairs of Y values decreases. A statistical model that captures these disparate, scale-dependent distributions of Y and ΔY in a unified and consistent manner has been recently proposed by us. This new "generalized sub-Gaussian (GSG)" model has the form Y(x) = U(x)G(x), where G(x) is (generally, but not necessarily) a multiscale Gaussian random field and U(x) is a nonnegative subordinator independent of G. The purpose of this paper is to explore analytically, in an elementary manner, lead-order effects that non-Gaussian heterogeneity described by the GSG model has on the stochastic description of flow and transport. Recognizing that perturbation expansion of hydraulic conductivity K = e^Y diverges when Y is sub-Gaussian, we render the expansion convergent by truncating Y's domain of definition. We then demonstrate theoretically and illustrate by way of numerical examples that, as the domain of truncation expands, (a) the variance of truncated Y (denoted by Yt) approaches that of Y and (b) the pdf (and thereby moments) of Yt increments approach those of Y increments and, as a consequence, the variogram of Yt approaches that of Y. This in turn guarantees that perturbing Kt = e^Yt to second order in σ_Yt (the standard deviation of Yt) yields results which approach those we obtain upon perturbing K = e^Y to second order in σ_Y even as the corresponding series diverges. Our analysis is rendered mathematically tractable by considering mean-uniform steady state flow in an unbounded, two-dimensional domain of mildly heterogeneous Y with a single-scale function G having an isotropic exponential covariance. Results consist of expressions for (a) lead-order autocovariance and cross-covariance functions of hydraulic head, velocity, and advective particle displacement and (b) analogues of preasymptotic as well as asymptotic Fickian dispersion coefficients. We compare these theoretically and graphically with corresponding expressions developed in the literature for Gaussian Y. We find the former to differ from the latter by a factor k = ⟨U²⟩/⟨U⟩² (⟨ ⟩ denoting ensemble expectation) and the GSG covariance of longitudinal velocity to contain an additional nugget term depending on this same factor. In the limit as Y becomes Gaussian, k reduces to one and the nugget term drops out.

  19. Exploring conservative islands using correlated and uncorrelated noise

    NASA Astrophysics Data System (ADS)

    da Silva, Rafael M.; Manchein, Cesar; Beims, Marcus W.

    2018-02-01

    In this work, noise is used to analyze the penetration of regular islands in conservative dynamical systems. For this purpose we use the standard map choosing nonlinearity parameters for which a mixed phase space is present. The random variable which simulates noise assumes three distributions, namely equally distributed, normal or Gaussian, and power law (obtained from the same standard map but for other parameters). To investigate the penetration process and explore distinct dynamical behaviors which may occur, we use recurrence time statistics (RTS), Lyapunov exponents and the occupation rate of the phase space. Our main findings are as follows: (i) the standard deviations of the distributions are the most relevant quantity to induce the penetration; (ii) the penetration of islands induce power-law decays in the RTS as a consequence of enhanced trapping; (iii) for the power-law correlated noise an algebraic decay of the RTS is observed, even though sticky motion is absent; and (iv) although strong noise intensities induce an ergodic-like behavior with exponential decays of RTS, the largest Lyapunov exponent is reminiscent of the regular islands.
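
    A minimal version of the noisy standard map is easy to write down: the usual momentum update receives an additive zero-mean random kick whose distribution (Gaussian or uniform) and standard deviation are chosen by the user. The nonlinearity parameter, initial condition, and the coarse occupation-rate diagnostic below are illustrative assumptions, not the paper's setup.

    ```python
    import numpy as np

    def noisy_standard_map(K, n_steps, noise_sd=0.0, dist="gauss", x0=0.5, p0=0.2, seed=0):
        """Iterate the standard map with an additive zero-mean noise term of
        standard deviation noise_sd in the momentum update."""
        rng = np.random.default_rng(seed)
        x, p = np.empty(n_steps), np.empty(n_steps)
        x[0], p[0] = x0, p0
        half_width = noise_sd * np.sqrt(3.0)   # uniform noise with the same SD
        for n in range(n_steps - 1):
            xi = rng.normal(0.0, noise_sd) if dist == "gauss" else rng.uniform(-half_width, half_width)
            p[n + 1] = (p[n] + K * np.sin(x[n]) + xi) % (2.0 * np.pi)
            x[n + 1] = (x[n] + p[n + 1]) % (2.0 * np.pi)
        return x, p

    def occupation_rate(x, p, cells_per_axis=63):
        """Fraction of coarse phase-space cells visited by the trajectory."""
        visited = {(int(xi / (2 * np.pi) * cells_per_axis),
                    int(pi / (2 * np.pi) * cells_per_axis)) for xi, pi in zip(x, p)}
        return len(visited) / cells_per_axis ** 2

    x0_, p0_ = noisy_standard_map(K=1.5, n_steps=50_000, noise_sd=0.0)
    x1_, p1_ = noisy_standard_map(K=1.5, n_steps=50_000, noise_sd=1e-2)
    print("occupation rate, noiseless:", round(occupation_rate(x0_, p0_), 3))
    print("occupation rate, with noise:", round(occupation_rate(x1_, p1_), 3))
    ```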

  20. Elegant Ince-Gaussian beams in a quadratic-index medium

    NASA Astrophysics Data System (ADS)

    Bai, Zhi-Yong; Deng, Dong-Mei; Guo, Qi

    2011-09-01

    Elegant Ince-Gaussian beams, which are exact solutions of the paraxial wave equation in a quadratic-index medium, are derived in elliptical coordinates. These beams are an alternative form of the standard Ince-Gaussian beams, and they display better mathematical symmetry between the Ince polynomials and the Gaussian function. The transverse intensity distribution and the phase of the elegant Ince-Gaussian beams are discussed.

  1. Multiple scattering and the density distribution of a Cs MOT.

    PubMed

    Overstreet, K; Zabawa, P; Tallant, J; Schwettmann, A; Shaffer, J

    2005-11-28

    Multiple scattering is studied in a Cs magneto-optical trap (MOT). We use two Abel inversion algorithms to recover density distributions of the MOT from fluorescence images. Deviations of the density distribution from a Gaussian are attributed to multiple scattering.

  2. Evaluation of non-Gaussian diffusion in cardiac MRI.

    PubMed

    McClymont, Darryl; Teh, Irvin; Carruth, Eric; Omens, Jeffrey; McCulloch, Andrew; Whittington, Hannah J; Kohl, Peter; Grau, Vicente; Schneider, Jürgen E

    2017-09-01

    The diffusion tensor model assumes Gaussian diffusion and is widely applied in cardiac diffusion MRI. However, diffusion in biological tissue deviates from a Gaussian profile as a result of hindrance and restriction from cell and tissue microstructure, and may be quantified better by non-Gaussian modeling. The aim of this study was to investigate non-Gaussian diffusion in healthy and hypertrophic hearts. Thirteen rat hearts (five healthy, four sham, four hypertrophic) were imaged ex vivo. Diffusion-weighted images were acquired at b-values up to 10,000 s/mm². Models of diffusion were fit to the data and ranked based on the Akaike information criterion. The diffusion tensor was ranked best at b-values up to 2000 s/mm² but reflected the signal poorly in the high b-value regime, in which the best model was a non-Gaussian "beta distribution" model. Although there was considerable overlap in apparent diffusivities between the healthy, sham, and hypertrophic hearts, diffusion kurtosis and skewness in the hypertrophic hearts were more than 20% higher in the sheetlet and sheetlet-normal directions. Non-Gaussian diffusion models have a higher sensitivity for the detection of hypertrophy compared with the Gaussian model. In particular, diffusion kurtosis may serve as a useful biomarker for characterization of disease and remodeling in the heart. Magn Reson Med 78:1174-1186, 2017. © 2016 International Society for Magnetic Resonance in Medicine. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine.

  3. Measurement of optical-beat frequency in a photoconductive terahertz-wave generator using microwave higher harmonics.

    PubMed

    Murasawa, Kengo; Sato, Koki; Hidaka, Takehiko

    2011-05-01

    A new method for measuring optical-beat frequencies in the terahertz (THz) region using microwave higher harmonics is presented. A microwave signal was applied to the antenna gap of a photoconductive (PC) device emitting a continuous electromagnetic wave at about 1 THz by the photomixing technique. The microwave higher harmonics with THz frequencies are generated in the PC device owing to the nonlinearity of the biased photoconductance, which is briefly described in this article. Thirteen nearly periodic peaks in the photocurrent were observed when the microwave was swept from 16 to 20 GHz at a power of -48 dBm. The nearly periodic peaks are generated by the homodyne detection of the optical beat with the microwave higher harmonics when the frequency of the harmonics coincides with the optical-beat frequency. Each peak frequency and its peak width were determined by fitting a Gaussian function, and the order of the microwave harmonics was determined using a coarse (i.e., lower resolution) measurement of the optical-beat frequency. By applying the Kalman algorithm to the peak frequencies of the higher harmonics and their standard deviations, the optical-beat frequency near 1 THz was estimated to be 1029.81 GHz with a standard deviation of 0.82 GHz. The proposed method is applicable to a conventional THz-wave generator with a photomixer.
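
    Determining a peak frequency and its uncertainty by fitting a Gaussian, as done here for each harmonic, is a standard least-squares task; the sketch below uses synthetic photocurrent data with an assumed peak position and noise level, and reads the 1-sigma uncertainty from the fit covariance.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(f, amp, f0, width, offset):
        """Gaussian line shape used to locate a photocurrent peak."""
        return amp * np.exp(-0.5 * ((f - f0) / width) ** 2) + offset

    # Synthetic photocurrent peak versus microwave frequency (arbitrary units).
    rng = np.random.default_rng(9)
    f = np.linspace(17.0, 17.2, 400)   # GHz
    data = gaussian(f, amp=1.0, f0=17.11, width=0.008, offset=0.1)
    data = data + rng.normal(0.0, 0.03, f.size)

    popt, pcov = curve_fit(gaussian, f, data, p0=[1.0, 17.1, 0.01, 0.0])
    perr = np.sqrt(np.diag(pcov))      # 1-sigma parameter uncertainties
    print(f"peak frequency = {popt[1]:.4f} GHz +/- {perr[1]:.4f} GHz")
    ```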

  4. The influence of outliers on results of wet deposition measurements as a function of measurement strategy

    NASA Astrophysics Data System (ADS)

    Slanina, J.; Möls, J. J.; Baard, J. H.

    The results of a wet deposition monitoring experiment, carried out with eight identical wet-only precipitation samplers operating on the basis of 24 h samples, have been used to investigate the accuracy and uncertainties of wet deposition measurements. The experiment was conducted near Lelystad, The Netherlands, over the period 1 March 1983-31 December 1985. By rearranging the data for one to eight samplers and sampling periods of 1 day to 1 month, both systematic and random errors were investigated as a function of measuring strategy. A Gaussian distribution of the results was observed. Outliers, detected by a Dixon test (α = 0.05), strongly influenced both the yearly averaged results and the standard deviation of this average as a function of the number of samplers and the length of the sampling period. Using one sampler, the systematic bias typically varies from 2 to 20% for bulk elements and from 10 to 500% for trace elements. Severe problems are encountered in the case of Zn, Cu, Cr, Ni and especially Cd. For the sensitive detection of trends, generally more than one sampler per measuring station is necessary, as the relative standard deviation of the yearly averaged wet deposition is typically 10-20% for one sampler. Using three identical samplers, trends of, e.g., 3% per year will generally be detected within 6 years.
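
    Dixon's test reduces to a simple ratio: the gap between the suspect extreme value and its nearest neighbour, divided by the sample range, compared against a tabulated critical value for the given sample size and significance level. The sketch below uses made-up deposition values and a placeholder critical value; the proper value must be looked up in a published Dixon table.

    ```python
    import numpy as np

    def dixon_q(sample):
        """Dixon's Q statistic for the most extreme value of a small sample:
        gap to the nearest neighbour divided by the full range."""
        s = np.sort(np.asarray(sample, dtype=float))
        gap = max(s[1] - s[0], s[-1] - s[-2])
        return gap / (s[-1] - s[0])

    # Illustrative daily deposition values from eight parallel samplers.
    daily_deposition = [0.42, 0.45, 0.39, 0.41, 0.44, 0.40, 1.10, 0.43]
    q = dixon_q(daily_deposition)
    q_crit = 0.53  # placeholder; use the tabulated value for n = 8, alpha = 0.05
    print(f"Q = {q:.2f}; outlier rejected: {q > q_crit}")
    ```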

  5. A perturbative approach to the redshift space correlation function: beyond the Standard Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bose, Benjamin; Koyama, Kazuya, E-mail: benjamin.bose@port.ac.uk, E-mail: kazuya.koyama@port.ac.uk

    We extend our previous redshift space power spectrum code to the redshift space correlation function. Here we focus on the Gaussian Streaming Model (GSM). Again, the code accommodates a wide range of modified gravity and dark energy models. For the non-linear real space correlation function used in the GSM we use the Fourier transform of the RegPT 1-loop matter power spectrum. We compare predictions of the GSM for a Vainshtein screened and Chameleon screened model as well as GR. These predictions are compared to the Fourier transform of the Taruya, Nishimichi and Saito (TNS) redshift space power spectrum model which is fit to N-body data. We find very good agreement between the Fourier transform of the TNS model and the GSM predictions, with ≤ 6% deviations in the first two correlation function multipoles for all models for redshift space separations in 50 Mpc/h ≤ s ≤ 180 Mpc/h. Excellent agreement is found in the differences between the modified gravity and GR multipole predictions for both approaches to the redshift space correlation function, highlighting their matched ability in picking up deviations from GR. We elucidate the timeliness of such non-standard templates at the dawn of stage-IV surveys and discuss necessary preparations and extensions needed for upcoming high quality data.

  6. The skewed weak lensing likelihood: why biases arise, despite data and theory being sound

    NASA Astrophysics Data System (ADS)

    Sellentin, Elena; Heymans, Catherine; Harnois-Déraps, Joachim

    2018-07-01

    We derive the essentials of the skewed weak lensing likelihood via a simple hierarchical forward model. Our likelihood passes four objective and cosmology-independent tests which a standard Gaussian likelihood fails. We demonstrate that sound weak lensing data are naturally biased low, since they are drawn from a skewed distribution. This occurs already in the framework of Lambda cold dark matter. Mathematically, the biases arise because noisy two-point functions follow skewed distributions. This form of bias is already known from cosmic microwave background analyses, where the low multipoles have asymmetric error bars. Weak lensing is more strongly affected by this asymmetry as galaxies form a discrete set of shear tracer particles, in contrast to a smooth shear field. We demonstrate that the biases can be up to 30 per cent of the standard deviation per data point, dependent on the properties of the weak lensing survey and the employed filter function. Our likelihood provides a versatile framework with which to address this bias in future weak lensing analyses.

  7. The skewed weak lensing likelihood: why biases arise, despite data and theory being sound.

    NASA Astrophysics Data System (ADS)

    Sellentin, Elena; Heymans, Catherine; Harnois-Déraps, Joachim

    2018-04-01

    We derive the essentials of the skewed weak lensing likelihood via a simple Hierarchical Forward Model. Our likelihood passes four objective and cosmology-independent tests which a standard Gaussian likelihood fails. We demonstrate that sound weak lensing data are naturally biased low, since they are drawn from a skewed distribution. This occurs already in the framework of ΛCDM. Mathematically, the biases arise because noisy two-point functions follow skewed distributions. This form of bias is already known from CMB analyses, where the low multipoles have asymmetric error bars. Weak lensing is more strongly affected by this asymmetry as galaxies form a discrete set of shear tracer particles, in contrast to a smooth shear field. We demonstrate that the biases can be up to 30% of the standard deviation per data point, dependent on the properties of the weak lensing survey and the employed filter function. Our likelihood provides a versatile framework with which to address this bias in future weak lensing analyses.

  8. Statistical Characteristics of the Gaussian-Noise Spikes Exceeding the Specified Threshold as Applied to Discharges in a Thundercloud

    NASA Astrophysics Data System (ADS)

    Klimenko, V. V.

    2017-12-01

    We obtain expressions for the probabilities of the normal-noise spikes with the Gaussian correlation function and for the probability density of the inter-spike intervals. As distinct from the delta-correlated noise, in which the intervals are distributed by the exponential law, the probability of the subsequent spike depends on the previous spike and the interval-distribution law deviates from the exponential one for a finite noise-correlation time (frequency-bandwidth restriction). This deviation is the most pronounced for a low detection threshold. Similarity of the behaviors of the distributions of the inter-discharge intervals in a thundercloud and the noise spikes for the varying repetition rate of the discharges/spikes, which is determined by the ratio of the detection threshold to the root-mean-square value of noise, is observed. The results of this work can be useful for the quantitative description of the statistical characteristics of the noise spikes and studying the role of fluctuations for the discharge emergence in a thundercloud.

  9. The maximum entropy method of moments and Bayesian probability theory

    NASA Astrophysics Data System (ADS)

    Bretthorst, G. Larry

    2013-08-01

    The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image; in MRI, many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.

  10. Rapid automatized naming (RAN) in children with ADHD: An ex-Gaussian analysis.

    PubMed

    Ryan, Matthew; Jacobson, Lisa A; Hague, Cole; Bellows, Alison; Denckla, Martha B; Mahone, E Mark

    2017-07-01

    Children with ADHD demonstrate an increased frequency of "lapses" in performance on tasks in which the stimulus presentation rate is externally controlled, leading to increased variability in response times. It is less clear whether these lapses are also evident during performance on self-paced tasks, e.g., rapid automatized naming (RAN), or whether RAN inter-item pause time variability uniquely predicts reading performance. A total of 80 children aged 9 to 14 years (45 children with attention-deficit/hyperactivity disorder (ADHD) and 35 typically developing (TD) children) completed RAN and reading fluency measures. RAN responses were digitally recorded for analyses. Inter-stimulus pause time distributions (excluding between-row pauses) were analyzed using traditional (mean, standard deviation [SD], coefficient of variation [CV]) and ex-Gaussian (mu, sigma, tau) methods. Children with ADHD were found to be significantly slower than TD children (p < .05) on RAN letter naming mean response time as well as on oral and silent reading fluency. RAN response time distributions were also significantly more variable (SD, tau) in children with ADHD. Hierarchical regression revealed that the exponential component (tau) of the letter-naming response time distribution uniquely predicted reading fluency in children with ADHD (p < .001, ΔR² = .16), even after controlling for IQ, basic reading, ADHD symptom severity, and age. The findings suggest that children with ADHD (without word-level reading difficulties) manifest slowed performance on tasks of reading fluency; however, this "slowing" may be due in part to lapses from ongoing performance that can be assessed directly using ex-Gaussian methods that capture excessively long response times.
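
    A minimal sketch of an ex-Gaussian fit of pause times (synthetic data, not the study's recordings), using scipy.stats.exponnorm, whose shape parameter K relates to the ex-Gaussian parameters through tau = K * sigma:

    ```python
    # Minimal sketch: fit an ex-Gaussian (Gaussian + exponential) to pause times and
    # recover mu, sigma, tau. exponnorm uses shape K = tau/sigma, loc = mu, scale = sigma.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    mu, sigma, tau = 0.45, 0.08, 0.20        # seconds (illustrative values)
    pauses = rng.normal(mu, sigma, 500) + rng.exponential(tau, 500)   # ex-Gaussian sample

    K_hat, loc_hat, scale_hat = stats.exponnorm.fit(pauses)
    mu_hat, sigma_hat, tau_hat = loc_hat, scale_hat, K_hat * scale_hat

    print(f"mu    = {mu_hat:.3f}  (true {mu})")
    print(f"sigma = {sigma_hat:.3f} (true {sigma})")
    print(f"tau   = {tau_hat:.3f}  (true {tau})   <- captures excessively long pauses")
    ```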

  11. Problems with Using the Normal Distribution – and Ways to Improve Quality and Efficiency of Data Analysis

    PubMed Central

    Limpert, Eckhard; Stahel, Werner A.

    2011-01-01

    Background The Gaussian or normal distribution is the most established model to characterize quantitative variation of original data. Accordingly, data are summarized using the arithmetic mean and the standard deviation, by x̄ ± SD, or with the standard error of the mean, x̄ ± SEM. This, together with corresponding bars in graphical displays, has become the standard to characterize variation. Methodology/Principal Findings Here we question the adequacy of this characterization, and of the model. The published literature provides numerous examples for which such descriptions appear inappropriate because, based on the “95% range check”, their distributions are obviously skewed. In these cases, the symmetric characterization is a poor description and may trigger wrong conclusions. To solve the problem, it is enlightening to regard causes of variation. Multiplicative causes are by far more important than additive ones, in general, and benefit from a multiplicative (or log-) normal approach. Fortunately, quite similar to the normal, the log-normal distribution can now be handled easily and characterized at the level of the original data with the help of a new sign, x/, “times-divide”, and the corresponding notation. Analogous to x̄ ± SD, it connects the multiplicative (or geometric) mean x̄* and the multiplicative standard deviation s* in the form x̄* x/ s*, which is advantageous and recommended. Conclusions/Significance The corresponding shift from the symmetric to the asymmetric view will substantially increase both recognition of data distributions and interpretation quality. It will allow for savings in sample size that can be considerable. Moreover, this is in line with ethical responsibility. Adequate models will improve concepts and theories, and provide deeper insight into science and life. PMID:21779325
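
    A minimal sketch of the multiplicative ("times-divide") description advocated above, on simulated log-normal data: the geometric mean x̄* and multiplicative standard deviation s* are computed on the log scale, and the interval from x̄*/s* to x̄*·s* covers roughly 68% of the sample.

    ```python
    # Minimal sketch: summarise skewed data by the geometric mean and the
    # multiplicative standard deviation s*, instead of arithmetic mean +/- SD.
    import numpy as np

    rng = np.random.default_rng(3)
    data = rng.lognormal(mean=1.0, sigma=0.5, size=10_000)   # skewed example data

    log_data = np.log(data)
    gm = np.exp(log_data.mean())            # multiplicative (geometric) mean
    s_star = np.exp(log_data.std(ddof=1))   # multiplicative standard deviation s* (dimensionless)

    inside = np.mean((data >= gm / s_star) & (data <= gm * s_star))
    print(f"arithmetic summary : {data.mean():.2f} +/- {data.std(ddof=1):.2f}")
    print(f"multiplicative     : {gm:.2f} x/ {s_star:.2f}")
    print(f"fraction of data within gm x/ s*: {inside:.2%}  (~68% expected)")
    ```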

  12. Problems with using the normal distribution--and ways to improve quality and efficiency of data analysis.

    PubMed

    Limpert, Eckhard; Stahel, Werner A

    2011-01-01

    The Gaussian or normal distribution is the most established model to characterize quantitative variation of original data. Accordingly, data are summarized using the arithmetic mean and the standard deviation, by mean ± SD, or with the standard error of the mean, mean ± SEM. This, together with corresponding bars in graphical displays, has become the standard to characterize variation. Here we question the adequacy of this characterization, and of the model. The published literature provides numerous examples for which such descriptions appear inappropriate because, based on the "95% range check", their distributions are obviously skewed. In these cases, the symmetric characterization is a poor description and may trigger wrong conclusions. To solve the problem, it is enlightening to regard causes of variation. Multiplicative causes are by far more important than additive ones, in general, and benefit from a multiplicative (or log-) normal approach. Fortunately, quite similar to the normal, the log-normal distribution can now be handled easily and characterized at the level of the original data with the help of a new sign, x/, "times-divide", and the corresponding notation. Analogous to mean ± SD, it connects the multiplicative (or geometric) mean x̄* and the multiplicative standard deviation s* in the form x̄* x/ s*, which is advantageous and recommended. The corresponding shift from the symmetric to the asymmetric view will substantially increase both recognition of data distributions and interpretation quality. It will allow for savings in sample size that can be considerable. Moreover, this is in line with ethical responsibility. Adequate models will improve concepts and theories, and provide deeper insight into science and life.

  13. SU-E-T-299: Small Fields Profiles Correction Through Detectors Spatial Response Functions and Field Size Dependence Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Filipuzzi, M; Garrigo, E; Venencia, C

    2014-06-01

    Purpose: To calculate the spatial response function of various radiation detectors, to evaluate its dependence on the field size, and to analyze small-field profile corrections by deconvolution techniques. Methods: Crossline profiles were measured on a Novalis Tx 6 MV beam with an HDMLC. The setup was SSD = 100 cm and depth = 5 cm. Five fields were studied (200×200 mm², 100×100 mm², 20×20 mm², 10×10 mm² and 5×5 mm²) and measurements were made with passive detectors (EBT3 radiochromic films and TLD700 thermoluminescent detectors), ionization chambers (PTW30013, PTW31003, CC04 and PTW31016) and diodes (PTW60012 and IBA SFD). The results of the passive detectors were adopted as the actual beam profile. To calculate the detector kernels, modeled by Gaussian functions, an iterative process based on a least-squares criterion was used. The deconvolutions of the measured profiles were calculated with the Richardson-Lucy method. Results: The profiles of the passive detectors agreed with each other to within a penumbra difference of less than 0.1 mm. Both diodes resolve the profiles with an overestimation of the penumbra smaller than 0.2 mm. For the other detectors, response functions were calculated and resulted in Gaussian functions with a standard deviation approximately equal to the radius of the detector in question (with a variation of less than 3%). The corrected profiles resolve the penumbra with less than 1% error. Major discrepancies were observed for cases in extreme conditions (PTW31003 and the 5×5 mm² field size). Conclusion: This work concludes that the response function of a radiation detector is independent of the field size, even for small radiation beams. The profile correction, using deconvolution techniques and response functions with a standard deviation equal to the radius of the detector, gives penumbra values with less than 1% difference from the real profile. The implementation of this technique allows the real profile to be estimated, free from the effects of the detector used for the acquisition.
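
    A minimal sketch of the correction strategy described above (synthetic profile, assumed 1 mm detector radius, not the measured data): blur a sharp crossline profile with a Gaussian response function whose standard deviation equals the detector radius, then recover it with a plain Richardson-Lucy deconvolution.

    ```python
    # Minimal sketch: Gaussian detector kernel + 1-D Richardson-Lucy deconvolution.
    import numpy as np

    def gaussian_kernel(xk, sigma):
        k = np.exp(-0.5 * (xk / sigma) ** 2)
        return k / k.sum()

    def richardson_lucy(measured, kernel, n_iter=50):
        """Plain 1-D Richardson-Lucy iteration (all quantities non-negative)."""
        estimate = np.full_like(measured, measured.mean())
        kernel_mirror = kernel[::-1]
        for _ in range(n_iter):
            blurred = np.convolve(estimate, kernel, mode="same")
            ratio = measured / np.maximum(blurred, 1e-12)
            estimate = estimate * np.convolve(ratio, kernel_mirror, mode="same")
        return estimate

    # Synthetic 5x5 mm2 field profile on a 0.02 mm grid: flat top, sharp penumbra.
    x = np.arange(-10.0, 10.0, 0.02)
    true_profile = np.clip((2.5 - np.abs(x)) / 0.4, 0.0, 1.0)
    detector_radius_mm = 1.0                                   # assumed detector radius
    kernel = gaussian_kernel(np.arange(-5.0, 5.0, 0.02), detector_radius_mm)

    measured = np.convolve(true_profile, kernel, mode="same")  # detector-broadened profile
    corrected = richardson_lucy(measured, kernel)

    def penumbra_20_80(profile):
        """20%-80% distance on the left penumbra (grid-resolution estimate)."""
        left = slice(0, len(x) // 2)
        p, xx = profile[left], x[left]
        return xx[np.argmin(np.abs(p - 0.8))] - xx[np.argmin(np.abs(p - 0.2))]

    print(f"20-80% penumbra: true {penumbra_20_80(true_profile):.2f} mm, "
          f"measured {penumbra_20_80(measured):.2f} mm, "
          f"corrected {penumbra_20_80(corrected):.2f} mm")
    ```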

  14. Establishing a baseline phase behavior in magnetic resonance imaging to determine normal vs. abnormal iron content in the brain.

    PubMed

    Haacke, E Mark; Ayaz, Muhammad; Khan, Asadullah; Manova, Elena S; Krishnamurthy, Bharani; Gollapalli, Lakshman; Ciulla, Carlo; Kim, I; Petersen, Floyd; Kirsch, Wolff

    2007-08-01

    To establish a baseline of phase differences between tissues in a number of regions of the human brain as a means of detecting iron abnormalities using magnetic resonance imaging (MRI). A fully flow-compensated, three-dimensional (3D), high-resolution, gradient-echo (GRE) susceptibility-weighted imaging (SWI) sequence was used to collect magnitude and phase data at 1.5 T. The phase images were high-pass-filtered and processed region by region with hand-drawn areas. The regions evaluated included the motor cortex (MC), putamen (PUT), globus pallidus (GP), caudate nucleus (CN), substantia nigra (SN), and red nucleus (RN). A total of 75 subjects, ranging in age from 55 to 89 years, were analyzed. The phase was found to have a Gaussian-like distribution with a standard deviation (SD) of 0.046 radians on a pixel-by-pixel basis. Most regions of interest (ROIs) contained at least 100 pixels, giving a standard error of the mean (SEM) of 0.0046 radians or less. In the MC, phase differences were found to be roughly 0.273 radians between CSF and gray matter (GM), and 0.083 radians between CSF and white matter (WM). The difference between CSF and the GP was 0.201 radians, and between CSF and the CN (head) it was 0.213 radians. For CSF and the PUT (the lower outer part) the difference was 0.449 radians, and between CSF and the RN (third slice vascularized region) it was 0.353 radians. Finally, the phase difference between CSF and SN was 0.345 radians. The Gaussian-like distributions in phase make it possible to predict deviations from normal phase behavior for tissues in the brain. Using phase as an iron marker may be useful for studying absorption of iron in diseases such as Parkinson's, Huntington's, neurodegeneration with brain iron accumulation (NBIA), Alzheimer's, and multiple sclerosis (MS), and other iron-related diseases. The phases quoted here will serve as a baseline for future studies that look for changes in iron content. (c) 2007 Wiley-Liss, Inc.

  15. SU-E-T-327: Dosimetric Impact of Beam Energy for Intrabeam Breast IORT with Different Residual Cancer Cell Distributions After Surgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwid, M; Zhang, H

    Purpose: The purpose of this study was to evaluate the dosimetric impact of beam energy on the IORT treatment of residual cancer cells with different cancer cell distributions after breast-conserving surgery. Methods: The three-dimensional (3D) radiation doses of IORT using a 4-cm spherical applicator at energies of 40 keV and 50 keV were separately calculated at different depths of the postsurgical tumor bed. The modified linear quadratic model (MLQ) was used to estimate the radiobiological response of the tumor cells assuming different radio-sensitivities and density distributions. The impact of radiation was evaluated for two types of breast cancer cell lines (α/β = 10 and α/β = 3.8) at a dose of 20 Gy prescribed at the applicator surface. Cancer cell distributions in the postsurgical tissue were assumed to be Gaussian with standard deviations of 0.5, 1 and 2 mm, corresponding to cancer cell infiltration depths of 1.5, 3, and 6 mm, respectively. The surface cancer cell percentage was assumed to be 0.01%, 0.1%, 1% and 10% in separate scenarios. The equivalent uniform doses (EUD) for all the scenarios were calculated. Results: The EUDs were found to be dependent on the distributions of cancer cells, but independent of the cancer cell radio-sensitivities and the density at the surface. EUDs at 50 keV are 1% larger than those at 40 keV. For a prescription dose of 20 Gy, the EUDs of the 50 keV beam are 17.52, 16.21 and 13.14 Gy for standard deviations of the cancer cell Gaussian distribution of 0.5, 1.0 and 2.0 mm, respectively. Conclusion: The impact of the selected IORT beam energy is minimal. When the energy is changed from 50 keV to 40 keV, the EUDs are almost the same for the same cancer cell distribution. 40 keV can be safely used as an alternative to the 50 keV beam in IORT.
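
    A minimal, heavily simplified sketch of an LQ-based equivalent uniform dose for a Gaussian residual-cell distribution (the study uses the modified linear quadratic model and the actual IORT depth doses; the dose fall-off, alpha and beta values below are illustrative assumptions):

    ```python
    # Minimal sketch: EUD = the uniform dose that gives the same overall surviving
    # fraction as the depth-dependent dose, weighted by a Gaussian cell density.
    import numpy as np
    from scipy.optimize import brentq

    alpha, beta = 0.3, 0.3 / 10.0          # Gy^-1, Gy^-2 (alpha/beta = 10; alpha assumed)
    applicator_radius = 20.0               # mm (4-cm spherical applicator)
    depth = np.linspace(0.0, 10.0, 1001)   # mm beyond the applicator surface

    # Steep low-energy x-ray fall-off with depth (illustrative inverse-square times exponential).
    dose = 20.0 * (applicator_radius / (applicator_radius + depth)) ** 2 * np.exp(-depth / 4.0)

    for sigma in (0.5, 1.0, 2.0):                       # mm, Gaussian cell-density spread
        weights = np.exp(-0.5 * (depth / sigma) ** 2)
        weights /= weights.sum()
        surviving = np.sum(weights * np.exp(-alpha * dose - beta * dose**2))
        eud = brentq(lambda d: np.exp(-alpha * d - beta * d**2) - surviving, 0.0, 100.0)
        print(f"sigma = {sigma:.1f} mm  ->  EUD = {eud:.2f} Gy")
    ```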

  16. Toward the detection of gravitational waves under non-Gaussian noises I. Locally optimal statistic.

    PubMed

    Yokoyama, Jun'ichi

    2014-01-01

    After reviewing the standard hypothesis test and the matched filter technique to identify gravitational waves under Gaussian noises, we introduce two methods to deal with non-Gaussian stationary noises. We formulate the likelihood ratio function under weakly non-Gaussian noises through the Edgeworth expansion and under strongly non-Gaussian noises in terms of a new method we call Gaussian mapping, where the observed marginal distribution and the two-body correlation function are fully taken into account. We then apply these two approaches to Student's t-distribution, which has larger tails than a Gaussian. It is shown that while both methods work well when the non-Gaussianity is small, only the latter method works well for the highly non-Gaussian case.

  17. Curvaton scenario within the minimal supersymmetric standard model and predictions for non-Gaussianity.

    PubMed

    Mazumdar, Anupam; Nadathur, Seshadri

    2012-03-16

    We provide a model in which both the inflaton and the curvaton are obtained from within the minimal supersymmetric standard model, with known gauge and Yukawa interactions. Since now both the inflaton and curvaton fields are successfully embedded within the same sector, their decay products thermalize very quickly before the electroweak scale. This results in two important features of the model: first, there will be no residual isocurvature perturbations, and second, observable non-Gaussianities can be generated with the non-Gaussianity parameter f(NL)~O(5-1000) being determined solely by the combination of weak-scale physics and the standard model Yukawa interactions.

  18. The 1997 North American Interagency Intercomparison of Ultraviolet Spectroradiometers Including Narrowband Filter Radiometers

    PubMed Central

    Lantz, Kathleen; Disterhoft, Patrick; Early, Edward; Thompson, Ambler; DeLuisi, John; Berndt, Jerry; Harrison, Lee; Kiedron, Peter; Ehramjian, James; Bernhard, Germar; Cabasug, Lauriana; Robertson, James; Mou, Wanfeng; Taylor, Thomas; Slusser, James; Bigelow, David; Durham, Bill; Janson, George; Hayes, Douglass; Beaubien, Mark; Beaubien, Arthur

    2002-01-01

    The fourth North American Intercomparison of Ultraviolet Monitoring Spectroradiometers was held September 15 to 25, 1997 at Table Mountain outside of Boulder, Colorado, USA. Concern over stratospheric ozone depletion has prompted several government agencies in North America to establish networks of spectroradiometers for monitoring solar ultraviolet irradiance at the surface of the Earth. The main purpose of the Intercomparison was to assess the ability of spectroradiometers to accurately measure solar ultraviolet irradiance, and to compare the results between instruments of different monitoring networks. This Intercomparison was coordinated by NIST and NOAA, and included participants from the ASRC, EPA, NIST, NSF, SERC, USDA, and YES. The UV measuring instruments included scanning spectroradiometers, spectrographs, narrow band multi-filter radiometers, and broadband radiometers. Instruments were characterized for wavelength accuracy, bandwidth, stray-light rejection, and spectral irradiance responsivity. The spectral irradiance responsivity was determined two to three times outdoors to assess temporal stability. Synchronized spectral scans of the solar irradiance were performed over several days. Using the spectral irradiance responsivities determined with the NIST traceable standard lamp, and a simple convolution technique with a Gaussian slit-scattering function to account for the different bandwidths of the instruments, the measured solar irradiance from the spectroradiometers excluding the filter radiometers at 16.5 h UTC had a relative standard deviation of ±4 % for wavelengths greater than 305 nm. The relative standard deviation for the solar irradiance at 16.5 h UTC including the filter radiometer was ±4 % for filter functions above 300 nm. PMID:27446717

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    St James, S; Bloch, C; Saini, J

    Purpose: Proton pencil beam scanning is used clinically across the United States. There are no current guidelines on tolerances for daily QA specific to pencil beam scanning, specifically related to individual spot properties (spot width). Using a stochastic method to determine tolerances has the potential to optimize tolerances on individual spots and decrease the number of false-positive failures in daily QA. Individual and global spot tolerances were evaluated. Methods: As part of daily QA for proton pencil beam scanning, a field of 16 spots (corresponding to 8 energies) is measured using an array of ion chambers (Matrixx, IBA). Each individual spot is fit to two Gaussian functions (x, y). The spot widths (σ) in x and y are recorded (32 parameters). Results from the daily QA were retrospectively analyzed for 100 days of data. The deviations of the spot widths were histogrammed and fit to a Gaussian function. The stochastic spot tolerance was taken to be the mean ± 3σ. Using these results, tolerances were developed and tested against known deviations in spot width. Results: The individual spot tolerances derived with the stochastic method decreased in 30/32 instances. Using the previous tolerances (±20% width), the daily QA would have detected 0/20 days of the deviation. Using a tolerance of any 6 spots failing the stochastic tolerance, 18/20 days of the deviation would have been detected. Conclusion: Using a stochastic method we have been able to decrease daily tolerances on the spot widths for 30/32 spot widths measured. The stochastic tolerances can lead to detection of deviations that previously would have been picked up on monthly QA and missed by daily QA. This method could be easily extended for evaluation of other QA parameters in proton spot scanning.
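
    A minimal sketch of the stochastic tolerance described above, on synthetic QA history rather than clinical data: fit the day-to-day relative deviation of each spot width, set per-spot limits at mean ± 3σ, and flag a run only when at least six spot widths leave their limits.

    ```python
    # Minimal sketch: per-spot stochastic tolerances (mean +/- 3 sigma) from QA history.
    import numpy as np

    rng = np.random.default_rng(4)
    n_days, n_widths = 100, 32                      # 16 spots x (sigma_x, sigma_y)

    baseline = rng.uniform(4.0, 12.0, n_widths)     # nominal spot widths, mm (illustrative)
    history = baseline * (1.0 + rng.normal(0.0, 0.01, (n_days, n_widths)))   # past daily QA

    deviation = history / baseline - 1.0            # relative deviation per spot width
    mu, sig = deviation.mean(axis=0), deviation.std(axis=0, ddof=1)
    low, high = mu - 3 * sig, mu + 3 * sig          # stochastic per-spot tolerances (~ +/-3%)

    def qa_flagged(widths, min_failing_spots=6):
        """Flag the run if at least `min_failing_spots` spot widths leave their limits."""
        dev = widths / baseline - 1.0
        n_out = int(np.sum((dev < low) | (dev > high)))
        return n_out >= min_failing_spots, n_out

    normal_day = baseline * (1.0 + rng.normal(0.0, 0.01, n_widths))
    drifted_day = baseline * 1.05                   # global 5% width drift: inside a +/-20%
                                                    # limit, but outside the stochastic limits
    print("normal day  (flagged, n_out):", qa_flagged(normal_day))
    print("5% drift    (flagged, n_out):", qa_flagged(drifted_day))
    ```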

  20. Statistical properties of effective drought index (EDI) for Seoul, Busan, Daegu, Mokpo in South Korea

    NASA Astrophysics Data System (ADS)

    Park, Jong-Hyeok; Kim, Ki-Beom; Chang, Heon-Young

    2014-08-01

    Time series of drought indices have so far been considered mostly in terms of the temporal and spatial distributions of a drought index. Here we investigate the statistical properties of the daily Effective Drought Index (EDI) itself for Seoul, Busan, Daegu, and Mokpo over the 100-year period from 1913 to 2012. We have found that in both dry and wet seasons the frequency distribution of EDI follows a Gaussian function. In the dry season the Gaussian is characteristically broader than in the wet season. The total number of drought days during the period we have analyzed is related both to the mean value and, more importantly, to the standard deviation. We have also found that the number of occasions on which the EDI values of several consecutive days are all less than a threshold follows an exponential distribution. The slope of the best fit becomes steeper not only as the critical EDI value becomes more negative but also as the number of consecutive days increases. The slope of the exponential distribution also becomes steeper as the number of cities in which the EDI is simultaneously below the critical value increases. Finally, we conclude by pointing out implications of our findings.

  1. Comparison of three NDVI time-series fitting methods in crop phenology detection in Northeast China

    NASA Astrophysics Data System (ADS)

    Wang, Meng; Tao, Fulu

    2014-03-01

    Phenological changes of cropland are the pivotal basis for farm management, agricultural production, and climate change research. Over the past decades, a range of methods have been used to extract phenological events from satellite-derived continuous vegetation index time series; however, large uncertainties still exist. In this study, three smoothing methods were compared to reduce the potential uncertainty and to quantify crop green-up dates over Northeast China. The results indicated that the crop spring onset dates estimated by the three methods vary, but with a similar spatial pattern. In 60% of the study area, the standard deviation (SD) of the estimated starting dates from the different methods is less than 10 days, while 39.5% of the pixels have SDs between 10 and 30 days. Through comparative analysis against observed phenological data, we concluded that the Asymmetric Gaussian method produced results closest to the observations, followed by the Double Logistic algorithm, while the Savitzky-Golay algorithm performed worst. The starting dates of crops occur mostly between May and June in this region. The Savitzky-Golay method gives the earliest estimates, while the Asymmetric Gaussian and Double Logistic fitting methods give similar and later estimates, which are more consistent with the observed data.

  2. Toward the detection of gravitational waves under non-Gaussian noises I. Locally optimal statistic

    PubMed Central

    YOKOYAMA, Jun’ichi

    2014-01-01

    After reviewing the standard hypothesis test and the matched filter technique to identify gravitational waves under Gaussian noises, we introduce two methods to deal with non-Gaussian stationary noises. We formulate the likelihood ratio function under weakly non-Gaussian noises through the Edgeworth expansion and under strongly non-Gaussian noises in terms of a new method we call Gaussian mapping, where the observed marginal distribution and the two-body correlation function are fully taken into account. We then apply these two approaches to Student’s t-distribution, which has larger tails than a Gaussian. It is shown that while both methods work well when the non-Gaussianity is small, only the latter method works well for the highly non-Gaussian case. PMID:25504231

  3. Verification of unfold error estimates in the UFO code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fehl, D.L.; Biggs, F.

    Spectral unfolding is an inverse mathematical operation which attempts to obtain spectral source information from a set of tabulated response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the UFO (UnFold Operator) code. In addition to an unfolded spectrum, UFO also estimates the unfold uncertainty (error) by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). 100 random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low-energy x rays emitted by Z-Pinch and ion-beam driven hohlraums.
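
    A minimal sketch of the Monte Carlo error estimate described above, with a generic regularised least-squares unfold standing in for the UFO algorithm (the response functions, spectrum shape, and regularisation below are illustrative): perturb the data with 5% Gaussian deviates, unfold each replica, and take the channel-wise standard deviation.

    ```python
    # Minimal sketch: Monte Carlo unfold uncertainty from randomly perturbed data sets.
    import numpy as np

    rng = np.random.default_rng(5)
    n_channels, n_detectors, n_replicas = 30, 12, 100

    energy = np.linspace(0.5, 30.0, n_channels)                  # keV bins (illustrative)
    spectrum = energy**2 / (np.exp(energy / 10.0) - 1.0)         # ~10 keV blackbody-like shape
    centers = np.linspace(2.0, 28.0, n_detectors)                # overlapping Gaussian responses
    R = np.exp(-0.5 * ((energy[None, :] - centers[:, None]) / 4.0) ** 2)

    data_true = R @ spectrum
    replicas = data_true * (1.0 + 0.05 * rng.normal(size=(n_replicas, n_detectors)))  # 5% deviates

    # Stand-in unfold: regularised least squares (UFO's internal algorithm differs).
    lam = 1e-3 * np.trace(R.T @ R) / n_channels
    A = np.linalg.inv(R.T @ R + lam * np.eye(n_channels)) @ R.T
    unfolds = replicas @ A.T                                     # one unfolded spectrum per replica

    mc_error = unfolds.std(axis=0, ddof=1)                       # Monte Carlo unfold uncertainty
    print("channel-wise Monte Carlo standard deviation of the unfolded spectrum:")
    print(np.round(mc_error, 3))
    ```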

  4. Measurement of the difference in CP-violating asymmetries in D(0)→K(+)K(-) and D(0)→π(+)π(-) decays at CDF.

    PubMed

    Aaltonen, T; Álvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Apollinari, G; Appel, J A; Arisawa, T; Artikov, A; Asaadi, J; Ashmanskas, W; Auerbach, B; Aurisano, A; Azfar, F; Badgett, W; Bae, T; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Barria, P; Bartos, P; Bauce, M; Bedeschi, F; Behari, S; Bellettini, G; Bellinger, J; Benjamin, D; Beretvas, A; Bhatti, A; Bisello, D; Bizjak, I; Bland, K R; Blumenfeld, B; Bocci, A; Bodek, A; Bortoletto, D; Boudreau, J; Boveia, A; Brigliadori, L; Bromberg, C; Brucken, E; Budagov, J; Budd, H S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Calamba, A; Calancha, C; Camarda, S; Campanelli, M; Campbell, M; Canelli, F; Carls, B; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavaliere, V; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chung, W H; Chung, Y S; Ciocci, M A; Clark, A; Clarke, C; Compostella, G; Convery, M E; Conway, J; Corbo, M; Cordelli, M; Cox, C A; Cox, D J; Crescioli, F; Cuevas, J; Culbertson, R; Dagenhart, D; d'Ascenzo, N; Datta, M; de Barbaro, P; Dell'Orso, M; Demortier, L; Deninno, M; Devoto, F; d'Errico, M; Di Canto, A; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Dorigo, M; Dorigo, T; Ebina, K; Elagin, A; Eppig, A; Erbacher, R; Errede, S; Ershaidat, N; Eusebi, R; Farrington, S; Feindt, M; Fernandez, J P; Field, R; Flanagan, G; Forrest, R; Frank, M J; Franklin, M; Freeman, J C; Funakoshi, Y; Furic, I; Gallinaro, M; Garcia, J E; Garfinkel, A F; Garosi, P; Gerberich, H; Gerchtein, E; Giagu, S; Giakoumopoulou, V; Giannetti, P; Gibson, K; Ginsburg, C M; Giokaris, N; Giromini, P; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldin, D; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Grinstein, S; Grosso-Pilcher, C; Group, R C; Guimaraes da Costa, J; Hahn, S R; Halkiadakis, E; Hamaguchi, A; Han, J Y; Happacher, F; Hara, K; Hare, D; Hare, M; Harr, R F; Hatakeyama, K; Hays, C; Heck, M; Heinrich, J; Herndon, M; Hewamanage, S; Hocker, A; Hopkins, W; Horn, D; Hou, S; Hughes, R E; Hurwitz, M; Husemann, U; Hussain, N; Hussein, M; Huston, J; Introzzi, G; Iori, M; Ivanov, A; James, E; Jang, D; Jayatilaka, B; Jeon, E J; Jindariani, S; Jones, M; Joo, K K; Jun, S Y; Junk, T R; Kamon, T; Karchin, P E; Kasmi, A; Kato, Y; Ketchum, W; Keung, J; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kim, Y J; Kimura, N; Kirby, M; Klimenko, S; Knoepfel, K; Kondo, K; Kong, D J; Konigsberg, J; Kotwal, A V; Kreps, M; Kroll, J; Krop, D; Kruse, M; Krutelyov, V; Kuhr, T; Kurata, M; Kwang, S; Laasanen, A T; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; LeCompte, T; Lee, E; Lee, H S; Lee, J S; Lee, S W; Leo, S; Leone, S; Lewis, J D; Limosani, A; Lin, C-J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, C; Liu, H; Liu, Q; Liu, T; Lockwitz, S; Loginov, A; Lucchesi, D; Lueck, J; Lujan, P; Lukens, P; Lungu, G; Lys, J; Lysak, R; Madrak, R; Maeshima, K; Maestro, P; Malik, S; Manca, G; Manousakis-Katsikakis, A; Margaroli, F; Marino, C; Martínez, M; Mastrandrea, P; Matera, K; Mattson, M E; Mazzacane, A; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Mesropian, C; Miao, T; Mietlicki, D; Mitra, A; Miyake, H; Moed, S; Moggi, N; Mondragon, M N; Moon, C S; Moore, R; Morello, M J; Morlock, J; Movilla 
Fernandez, P; Mukherjee, A; Muller, Th; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Naganoma, J; Nakano, I; Napier, A; Nett, J; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Noh, S Y; Norniella, O; Oakes, L; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Orava, R; Ortolan, L; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Paramonov, A A; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Pianori, E; Pilot, J; Pitts, K; Plager, C; Pondrom, L; Poprocki, S; Potamianos, K; Prokoshin, F; Pranko, A; Ptohos, F; Punzi, G; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Renton, P; Rescigno, M; Riddick, T; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rodriguez, T; Rogers, E; Rolli, S; Roser, R; Ruffini, F; Ruiz, A; Russ, J; Rusu, V; Safonov, A; Sakumoto, W K; Sakurai, Y; Santi, L; Sato, K; Saveliev, V; Savoy-Navarro, A; Schlabach, P; Schmidt, A; Schmidt, E E; Schwarz, T; Scodellaro, L; Scribano, A; Scuri, F; Seidel, S; Seiya, Y; Semenov, A; Sforza, F; Shalhout, S Z; Shears, T; Shepard, P F; Shimojima, M; Shochet, M; Shreyber-Tecker, I; Simonenko, A; Sinervo, P; Sliwa, K; Smith, J R; Snider, F D; Soha, A; Sorin, V; Song, H; Squillacioti, P; Stancari, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Strycker, G L; Sudo, Y; Sukhanov, A; Suslov, I; Takemasa, K; Takeuchi, Y; Tang, J; Tecchio, M; Teng, P K; Thom, J; Thome, J; Thompson, G A; Thomson, E; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Totaro, P; Trovato, M; Ukegawa, F; Uozumi, S; Varganov, A; Vázquez, F; Velev, G; Vellidis, C; Vidal, M; Vila, I; Vilar, R; Vizán, J; Vogel, M; Volpi, G; Wagner, P; Wagner, R L; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Wester, W C; Whiteson, D; Wicklund, A B; Wicklund, E; Wilbur, S; Wick, F; Williams, H H; Wilson, J S; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, H; Wright, T; Wu, X; Wu, Z; Yamamoto, K; Yamato, D; Yang, T; Yang, U K; Yang, Y C; Yao, W-M; Yeh, G P; Yi, K; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanetti, A; Zeng, Y; Zhou, C; Zucchelli, S

    2012-09-14

    We report a measurement of the difference (ΔA(CP)) between time-integrated CP-violating asymmetries in D(0)→K(+)K(-) and D(0)→π(+)π(-) decays reconstructed in the full data set of proton-antiproton collisions collected by the Collider Detector at Fermilab, corresponding to 9.7 fb(-1) of integrated luminosity. The strong decay D(*+)→D(0)π(+) is used to identify the charm meson at production as D(0) or D̄(0). We measure ΔA(CP)=[-0.62±0.21(stat)±0.10(syst)]%, which differs from zero by 2.7 Gaussian standard deviations. This result supports similar evidence for CP violation in charm-quark decays obtained in proton-proton collisions.
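
    A minimal sketch of the quoted significance: adding the statistical and systematic uncertainties in quadrature and expressing the deviation from zero in Gaussian standard deviations reproduces the 2.7σ figure.

    ```python
    # Minimal sketch: significance of Delta A_CP = (-0.62 +/- 0.21(stat) +/- 0.10(syst))%.
    import math

    delta_acp, stat, syst = -0.62, 0.21, 0.10        # percent
    significance = abs(delta_acp) / math.hypot(stat, syst)   # quadrature sum of uncertainties
    print(f"{significance:.1f} Gaussian standard deviations")  # ~2.7
    ```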

  5. Gravitational Effects on Closed-Cellular-Foam Microstructure

    NASA Technical Reports Server (NTRS)

    Noever, David A.; Cronise, Raymond J.; Wessling, Francis C.; McMannus, Samuel P.; Mathews, John; Patel, Darayas

    1996-01-01

    Polyurethane foam has been produced in low gravity for the first time. The cause and distribution of different void or pore sizes are elucidated from direct comparison of unit-gravity and low-gravity samples. Low gravity is found to increase the pore roundness by 17% and reduce the void size by 50%. The standard deviation for pores becomes narrower (a more homogeneous foam is produced) in low gravity. Both a Gaussian and a Weibull model fail to describe the statistical distribution of void areas, and hence the governing dynamics do not combine small voids in either a uniform or a dependent fashion to make larger voids. Instead, the void areas follow an exponential law, which effectively randomizes the production of void sizes in a nondependent fashion consistent more with single nucleation than with multiple or combining events.

  6. A study of atmospheric dispersion of radionuclides at a coastal site using a modified Gaussian model and a mesoscale sea breeze model

    NASA Astrophysics Data System (ADS)

    Venkatesan, R.; Mathiyarasu, R.; Somayaji, K. M.

    Ground level concentration and sky-shine dose due to radioactive emissions from a nuclear power plant at a coastal site have been estimated using the standard Gaussian Plume Model (GPM) and the modified GPM suggested by Misra (Atmospheric Environment 14 (1980) 397), which incorporates the fumigation effect under sea breeze conditions. The difference in results between these two models is analysed in order to understand their significance and the errors that would occur if the proper choice were not made. The radioactive sky-shine dose from 41Ar, emitted from a 100 m stack of the nuclear plant, is continuously recorded by environmental gamma dose monitors and the data are used to validate the modified GPM. It is observed that the dose values increase by a factor of about 2 relative to the standard GPM estimates, up to a downwind distance of 6 km during sea breeze hours. In order to examine the dispersion of radioactive effluents in the mesoscale range, a sea breeze model coupled with a particle dispersion model is used. The deposited activity, thyroid dose and sky-shine radioactive dose are simulated for a range of 30 km. In this range, the plume is found to deviate from the straight-line trajectory otherwise assumed in the GPM. A secondary maximum in the concentration and the sky-shine dose is also observed in the model results. These results are quite significant for realistically estimating the area affected under any unlikely event of an accidental release of radioactivity.

  7. Age-standardisation when target setting and auditing performance of Down syndrome screening programmes.

    PubMed

    Cuckle, Howard; Aitken, David; Goodburn, Sandra; Senior, Brian; Spencer, Kevin; Standing, Sue

    2004-11-01

    To describe and illustrate a method of setting Down syndrome screening targets and auditing performance that allows for differences in the maternal age distribution. A reference population was determined from a Gaussian model of maternal age. Target detection and false-positive rates were determined by standard statistical modelling techniques, except that the reference population rather than an observed population was used. Second-trimester marker parameters were obtained for Down syndrome from a large meta-analysis, and for unaffected pregnancies from the combined results of more than 600,000 screens in five centres. Audited detection and false-positive rates were the weighted average of the rates in five broad age groups corrected for viability bias. Weights were based on the age distributions in the reference population. Maternal age was found to approximate reasonably well to a Gaussian distribution with mean 27 years and standard deviation 5.5 years. Depending on marker combination, the target detection rates were 59 to 64% and false-positive rate 4.2 to 5.4% for a 1 in 250 term cut-off; 65 to 68% and 6.1 to 7.3% for 1 in 270 at mid-trimester. Among the five centres, the audited detection rate ranged from 7% below target to 10% above target, with audited false-positive rates better than the target by 0.3 to 1.5%. Age-standardisation should help to improve screening quality by allowing for intrinsic differences between programmes, so that valid comparisons can be made. Copyright 2004 John Wiley & Sons, Ltd.
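
    A minimal sketch of the age-standardisation step (the age-group bounds and rates below are illustrative, not the paper's): weight age-group-specific rates by a reference maternal-age population modelled as Gaussian with mean 27 years and SD 5.5 years.

    ```python
    # Minimal sketch: age-standardised detection and false-positive rates as weighted
    # averages over broad age groups, with weights from a Gaussian reference population.
    import numpy as np
    from scipy.stats import norm

    edges = np.array([15, 25, 30, 35, 40, 50])          # five broad age groups (assumed bounds)
    ref_weights = np.diff(norm.cdf(edges, loc=27.0, scale=5.5))
    ref_weights /= ref_weights.sum()                    # reference-population weights

    # Observed (audited) rates per age group for one centre -- illustrative values only.
    detection_rate = np.array([0.55, 0.58, 0.63, 0.75, 0.88])
    false_positive = np.array([0.030, 0.040, 0.055, 0.095, 0.200])

    print(f"age-standardised detection rate     : {ref_weights @ detection_rate:.1%}")
    print(f"age-standardised false-positive rate: {ref_weights @ false_positive:.1%}")
    ```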

  8. Capacity of PPM on Gaussian and Webb Channels

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Dolinar, S.; Pollara, F.; Hamkins, J.

    2000-01-01

    This paper computes and compares the capacities of M-ary PPM on various idealized channels that approximate the optical communication channel: (1) the standard additive white Gaussian noise (AWGN) channel; (2) a more general AWGN channel (AWGN2) allowing different variances in signal and noise slots; (3) a Webb-distributed channel (Webb2); (4) a Webb+Gaussian channel, modeling Gaussian thermal noise added to Webb-distributed channel outputs.

  9. Time-resolved measurements of statistics for a Nd:YAG laser.

    PubMed

    Hubschmid, W; Bombach, R; Gerber, T

    1994-08-20

    Time-resolved measurements of the fluctuating intensity of a multimode frequency-doubled Nd:YAG laser have been performed. For various operating conditions the enhancement factors in nonlinear optical processes that use a fluctuating instead of a single-mode laser have been determined up to the sixth order. In the case of reduced flash-lamp excitation and a switched-off laser amplifier, the intensity fluctuations agree with the normalized Gaussian model for the fluctuations of the fundamental frequency, whereas strong deviations are found under usual operating conditions. In the latter case the frequency-doubled light has enhancement factors not far from the values expected for Gaussian statistics.

  10. Non-Gaussianity in multi-sound-speed disformally coupled inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Bruck, Carsten van; Longden, Chris; Koivisto, Tomi, E-mail: C.vandeBruck@sheffield.ac.uk, E-mail: tomi.koivisto@nordita.org, E-mail: cjlongden1@sheffield.ac.uk

    Most, if not all, scalar-tensor theories are equivalent to General Relativity with a disformally coupled matter sector. In extra-dimensional theories such a coupling can be understood as a result of induction of the metric on a brane that matter is confined to. This article presents a first look at the non-Gaussianities in disformally coupled inflation, a simple two-field model that features a novel kinetic interaction. Cases with both canonical and Dirac-Born-Infeld (DBI) kinetic terms are taken into account, the latter motivated by the possible extra-dimensional origin of the disformality. The computations are carried out for the equilateral configuration in the slow-roll regime, wherein it is found that the non-Gaussianity is typically rather small and negative. This is despite the fact that the new kinetic interaction causes the perturbation modes to propagate with different sound speeds, which may both significantly deviate from unity during inflation.

  11. Non-Gaussian power grid frequency fluctuations characterized by Lévy-stable laws and superstatistics

    NASA Astrophysics Data System (ADS)

    Schäfer, Benjamin; Beck, Christian; Aihara, Kazuyuki; Witthaut, Dirk; Timme, Marc

    2018-02-01

    Multiple types of fluctuations impact the collective dynamics of power grids and thus challenge their robust operation. Fluctuations result from processes as different as dynamically changing demands, energy trading and an increasing share of renewable power feed-in. Here we analyse principles underlying the dynamics and statistics of power grid frequency fluctuations. Considering frequency time series for a range of power grids, including grids in North America, Japan and Europe, we find a strong deviation from Gaussianity best described by Lévy-stable and q-Gaussian distributions. We present a coarse framework to analytically characterize the impact of arbitrary noise distributions, as well as a superstatistical approach that systematically interprets heavy tails and skewed distributions. We identify energy trading as a substantial contribution to today's frequency fluctuations and effective damping of the grid as a controlling factor enabling reduction of fluctuation risks, with enhanced effects for small power grids.

  12. Mode entanglement of Gaussian fermionic states

    NASA Astrophysics Data System (ADS)

    Spee, C.; Schwaiger, K.; Giedke, G.; Kraus, B.

    2018-04-01

    We investigate the entanglement of n-mode n-partite Gaussian fermionic states (GFS). First, we identify a reasonable definition of separability for GFS and derive a standard form for mixed states, to which any state can be mapped via Gaussian local unitaries (GLU). As the standard form is unique, two GFS are equivalent under GLU if and only if their standard forms coincide. Then, we investigate the important class of local operations assisted by classical communication (LOCC). These are central in entanglement theory as they allow one to partially order the entanglement contained in states. We show, however, that there are no nontrivial Gaussian LOCC (GLOCC) among pure n-partite (fully entangled) states. That is, any such GLOCC transformation can also be accomplished via GLU. To obtain further insight into the entanglement properties of such GFS, we investigate the richer class of Gaussian stochastic local operations assisted by classical communication (SLOCC). We characterize Gaussian SLOCC classes of pure n-mode n-partite states and derive them explicitly for few-mode states. Furthermore, we consider certain fermionic LOCC and show how to identify the maximally entangled set of pure n-mode n-partite GFS, i.e., the minimal set of states having the property that any other state can be obtained from one state inside this set via fermionic LOCC. We generalize these findings also to the pure m-mode n-partite (for m > n) case.

  13. Realistic sampling of anisotropic correlogram parameters for conditional simulation of daily rainfields

    NASA Astrophysics Data System (ADS)

    Gyasi-Agyei, Yeboah

    2018-01-01

    This paper has established a link between the spatial structure of radar rainfall, which more robustly describes the spatial structure, and gauge rainfall for improved daily rainfield simulation conditioned on the limited gauged data for regions with or without radar records. A two-dimensional anisotropic exponential function that has parameters of major and minor axes lengths, and direction, is used to describe the correlogram (spatial structure) of daily rainfall in the Gaussian domain. The link is a copula-based joint distribution of the radar-derived correlogram parameters that uses the gauge-derived correlogram parameters and maximum daily temperature as covariates of the Box-Cox power exponential margins and Gumbel copula. While the gauge-derived, radar-derived and the copula-derived correlogram parameters reproduced the mean estimates similarly using leave-one-out cross-validation of ordinary kriging, the gauge-derived parameters yielded higher standard deviation (SD) of the Gaussian quantile which reflects uncertainty in over 90% of cases. However, the distribution of the SD generated by the radar-derived and the copula-derived parameters could not be distinguished. For the validation case, the percentage of cases of higher SD by the gauge-derived parameter sets decreased to 81.2% and 86.6% for the non-calibration and the calibration periods, respectively. It has been observed that 1% reduction in the Gaussian quantile SD can cause over 39% reduction in the SD of the median rainfall estimate, actual reduction being dependent on the distribution of rainfall of the day. Hence the main advantage of using the most correct radar correlogram parameters is to reduce the uncertainty associated with conditional simulations that rely on SD through kriging.

  14. Generalized Ince Gaussian beams

    NASA Astrophysics Data System (ADS)

    Bandres, Miguel A.; Gutiérrez-Vega, Julio C.

    2006-08-01

    In this work we present a detailed analysis of the three families of generalized Gaussian beams, which are the generalized Hermite, Laguerre, and Ince Gaussian beams. The generalized Gaussian beams are not the solution of a Hermitian operator at an arbitrary z plane. We derive the adjoint operator and the adjoint eigenfunctions. Each family of generalized Gaussian beams forms a complete biorthonormal set with their adjoint eigenfunctions; therefore, any paraxial field can be described as a superposition of a generalized family with the appropriate weighting and phase factors. Each family of generalized Gaussian beams includes the standard and elegant corresponding families as particular cases when the parameters of the generalized families are chosen properly. The generalized Hermite Gaussian and Laguerre Gaussian beams correspond to limiting cases of the generalized Ince Gaussian beams when the ellipticity parameter of the latter tends to infinity or to zero, respectively. The expansion formulas among the three generalized families and their Fourier transforms are also presented.

  15. ENSO's non-stationary and non-Gaussian character: the role of climate shifts

    NASA Astrophysics Data System (ADS)

    Boucharel, J.; Dewitte, B.; Garel, B.; Du Penhoat, Y.

    2009-07-01

    El Niño Southern Oscillation (ENSO) is the dominant mode of climate variability in the Pacific, having socio-economic impacts on surrounding regions. ENSO exhibits significant modulation on decadal to inter-decadal time scales which is related to changes in its characteristics (onset, amplitude, frequency, propagation, and predictability). Some of these characteristics tend to be overlooked in ENSO studies, such as its asymmetry (the number and amplitude of warm and cold events are not equal) and the deviation of its statistics from those of the Gaussian distribution. These properties could be related to the ability of the current generation of coupled models to predict ENSO and its modulation. Here, ENSO's non-Gaussian nature and asymmetry are diagnosed from in situ data and a variety of models (from intermediate complexity models to full-physics coupled general circulation models (CGCMs)) using robust statistical tools initially designed for financial mathematics studies. In particular, α-stable laws are used as theoretical background material to measure (and quantify) the non-Gaussian character of ENSO time series and to estimate the skill of "naïve" statistical models in producing deviation from Gaussian laws and asymmetry. The former are based on non-stationary processes dominated by abrupt changes in mean state and empirical variance. It is shown that the α-stable character of ENSO may result from the presence of climate shifts in the time series. Also, cool (warm) periods are associated with ENSO statistics having a stronger (weaker) tendency towards Gaussianity and lower (greater) asymmetry. This supports the hypothesis of ENSO being rectified by changes in mean state through nonlinear processes. The relationship between changes in mean state and nonlinearity (skewness) is further investigated both in the Zebiak and Cane (1987) model and in the models of the Intergovernmental Panel on Climate Change (IPCC). Whereas there is a clear relationship in all models between ENSO asymmetry (as measured by skewness or nonlinear advection) and changes in mean state, they exhibit a variety of behaviour with regard to α-stability. This suggests that the dynamics associated with climate shifts and the occurrence of extreme events involve higher-order statistical moments that cannot be accounted for solely by nonlinear advection.

  16. Study on the propagation properties of laser in aerosol based on Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Leng, Kun; Wu, Wenyuan; Zhang, Xi; Gong, Yanchun; Yang, Yuntao

    2018-02-01

    When a laser beam propagates in the atmosphere, aerosol scattering and absorption continuously attenuate the laser energy, affecting the effectiveness of the laser. Based on the Monte Carlo method, the dependence of the spatial photon energy distribution of a 10.6 μm laser in marine, sand-type, water-soluble and soot aerosols on the propagation distance, visibility and divergence angle was studied. The results show that for the 10.6 μm laser, the attenuation of the number of photons arriving at the receiving plane is largest for the sand-type aerosol and smallest for the water-soluble aerosol; as the propagation distance increases, the number of photons arriving at the receiving plane decreases; as the visibility increases, the number of photons arriving at the receiving plane increases rapidly and then stabilizes; in the above cases, the photon energy distribution does not deviate from the Gaussian distribution; as the divergence angle increases, the number of photons arriving at the receiving plane is almost unchanged, but the photon energy distribution gradually deviates from the Gaussian distribution.

  17. Resistance Training Increases the Variability of Strength Test Scores

    DTIC Science & Technology

    2009-06-08

    standard deviations for pretest and posttest strength measurements. This information was recorded for every strength test used in a total of 377 samples...significant if the posttest standard deviation consistently was larger than the pretest standard deviation. This condition could be satisfied even if...the difference in the standard deviations was small. For example, the posttest standard deviation might be 1% larger than the pretest standard

  18. Estimate of standard deviation for a log-transformed variable using arithmetic means and standard deviations.

    PubMed

    Quan, Hui; Zhang, Ji

    2003-09-15

    Analyses of study variables are frequently based on log transformations. To calculate the power for detecting the between-treatment difference in the log scale, we need an estimate of the standard deviation of the log-transformed variable. However, in many situations a literature search only provides the arithmetic means and the corresponding standard deviations. Without individual log-transformed data to directly calculate the sample standard deviation, we need alternative methods to estimate it. This paper presents methods for estimating and constructing confidence intervals for the standard deviation of a log-transformed variable given the mean and standard deviation of the untransformed variable. It also presents methods for estimating the standard deviation of change from baseline in the log scale given the means and standard deviations of the untransformed baseline value, on-treatment value and change from baseline. Simulations and examples are provided to assess the performance of these estimates. Copyright 2003 John Wiley & Sons, Ltd.
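
    A minimal sketch of the basic estimate, assuming the untransformed variable is approximately log-normal: the standard deviation on the natural-log scale follows from the arithmetic mean and standard deviation through the coefficient of variation.

    ```python
    # Minimal sketch: SD of ln(X) from the published arithmetic mean and SD of X,
    # assuming X is approximately log-normal, via sigma_log^2 = ln(1 + CV^2).
    import math

    def log_scale_sd(arith_mean, arith_sd):
        """SD of ln(X) when X ~ log-normal with the given arithmetic mean and SD."""
        cv2 = (arith_sd / arith_mean) ** 2            # squared coefficient of variation
        return math.sqrt(math.log(1.0 + cv2))

    # Example: a literature value reported as mean 12.0 with SD 6.0 (CV = 0.5).
    print(f"estimated SD of ln(X): {log_scale_sd(12.0, 6.0):.3f}")
    ```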

  19. Numerical investigation of the effect of net charge injection on the electric field deviation in a TE CO2 laser

    NASA Astrophysics Data System (ADS)

    Jahanianl, Nahid; Aram, Majid; Morshedian, Nader; Mehramiz, Ahmad

    2018-03-01

    In this report, the distribution of and deviation in the electric field were investigated in the active medium of a TE CO2 laser. The variation in the electric field is due to the injection of net electron and proton charges acting as a plasma generator. The charged-particle beam density is assumed to be Gaussian. The electric potential and electric field distribution were simulated by solving Poisson’s equation using the SOR numerical method. The minimum deviation of the electric field obtained was about 2.2% and 6% for the electron and proton beams, respectively, for a charged-particle beam density of 10⁶ cm⁻³. This result was obtained for a system geometry ensuring a mean free path of the particle beam of 15 mm. It was also found that the field deviation increases for a mean free path smaller than this value or larger than 25 mm. Moreover, the electric field deviation decreases when the electron beam density exceeds 10⁶ cm⁻³.
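
    A minimal sketch of an SOR Poisson solve for a Gaussian charge density (a 2-D toy geometry with grounded boundaries and illustrative amplitudes, not the paper's laser-cavity configuration):

    ```python
    # Minimal sketch: successive over-relaxation (SOR) for Poisson's equation with a
    # Gaussian source term, followed by the electric field E = -grad(phi).
    import numpy as np

    n, h, omega, sweeps = 61, 1.0e-3, 1.9, 500      # grid points, spacing (m), SOR factor, sweeps
    x = (np.arange(n) - n // 2) * h
    X, Y = np.meshgrid(x, x, indexing="ij")

    # Gaussian charge density (amplitude and width are illustrative), given as rho/epsilon_0.
    rho_over_eps0 = 1.0e-2 * np.exp(-(X**2 + Y**2) / (2 * (5 * h) ** 2))
    phi = np.zeros((n, n))                          # potential; phi = 0 on the boundary

    for _ in range(sweeps):                         # fixed number of sweeps (no convergence test)
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                gs = 0.25 * (phi[i + 1, j] + phi[i - 1, j] + phi[i, j + 1]
                             + phi[i, j - 1] + h**2 * rho_over_eps0[i, j])
                phi[i, j] += omega * (gs - phi[i, j])

    Ex, Ey = np.gradient(-phi, h)                   # E = -grad(phi)
    E = np.hypot(Ex, Ey)
    core = E[n // 4: 3 * n // 4, n // 4: 3 * n // 4]
    print(f"relative spread of |E| in the core region: {core.std() / core.mean():.1%}")
    ```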

  20. Reliability evaluation of high-performance, low-power FinFET standard cells based on mixed RBB/FBB technique

    NASA Astrophysics Data System (ADS)

    Wang, Tian; Cui, Xiaoxin; Ni, Yewen; Liao, Kai; Liao, Nan; Yu, Dunshan; Cui, Xiaole

    2017-04-01

    With shrinking transistor feature size, the fin-type field-effect transistor (FinFET) has become the most promising option in low-power circuit design due to its superior capability to suppress leakage. To support a VLSI digital system flow based on logic synthesis, we have designed an optimized high-performance low-power FinFET standard cell library based on employing the mixed FBB/RBB technique in the existing stacked structure of each cell. This paper presents the reliability evaluation of the optimized cells under process and operating environment variations based on Monte Carlo analysis. The variations are modelled with Gaussian distributions of the device parameters, and 10,000 sweeps are conducted in the simulation to obtain the statistical properties of the worst-case delay and input-dependent leakage for each cell. For comparison, a set of non-optimal cells that adopt the same topology without employing the mixed biasing technique is also generated. Experimental results show that the optimized cells achieve standard deviation reductions of up to 39.1% and 30.7% in worst-case delay and input-dependent leakage, respectively, while the shrinking of the normalized deviation in worst-case delay and input-dependent leakage can be up to 98.37% and 24.13%, respectively, which demonstrates that our optimized cells are less sensitive to variability and exhibit greater reliability. Project supported by the National Natural Science Foundation of China (No. 61306040), the State Key Development Program for Basic Research of China (No. 2015CB057201), the Beijing Natural Science Foundation (No. 4152020), and the Natural Science Foundation of Guangdong Province, China (No. 2015A030313147).

  1. On the variability of the Priestley-Taylor coefficient over water bodies

    NASA Astrophysics Data System (ADS)

    Assouline, Shmuel; Li, Dan; Tyler, Scott; Tanny, Josef; Cohen, Shabtai; Bou-Zeid, Elie; Parlange, Marc; Katul, Gabriel G.

    2016-01-01

    Deviations in the Priestley-Taylor (PT) coefficient αPT from its accepted 1.26 value are analyzed over large lakes, reservoirs, and wetlands where stomatal or soil controls are minimal or absent. The data sets feature wide variations in water body sizes and climatic conditions. Neither surface temperature nor sensible heat flux variations alone, which proved successful in characterizing αPT variations over some crops, explain measured deviations in αPT over water. It is shown that the relative transport efficiency of turbulent heat and water vapor is key to explaining variations in αPT over water surfaces, thereby offering a new perspective over the concept of minimal advection or entrainment introduced by PT. Methods that allow the determination of αPT based on low-frequency sampling (i.e., 0.1 Hz) are then developed and tested, which are usable with standard meteorological sensors that filter some but not all turbulent fluctuations. Using approximations to the Gram determinant inequality, the relative transport efficiency is derived as a function of the correlation coefficient between temperature and water vapor concentration fluctuations (RTq). The proposed approach reasonably explains the measured deviations from the conventional αPT = 1.26 value even when RTq is determined from air temperature and water vapor concentration time series that are Gaussian-filtered and subsampled to a cutoff frequency of 0.1 Hz. Because over water bodies, RTq deviations from unity are often associated with advection and/or entrainment, linkages between αPT and RTq offer both a diagnostic approach to assess their significance and a prognostic approach to correct the 1.26 value when using routine meteorological measurements of temperature and humidity.

  2. Rightfulness of Summation Cut-Offs in the Albedo Problem with Gaussian Fluctuations of the Density of Scatterers

    NASA Astrophysics Data System (ADS)

    Selim, M. M.; Bezák, V.

    2003-06-01

    The one-dimensional version of the radiative transfer problem (i.e. the so-called rod model) is analysed with a Gaussian random extinction function ε(x). Then the optical length X = ∫_0^L ε(x) dx is a Gaussian random variable. The transmission and reflection coefficients, T(X) and R(X), are taken as infinite series. When these series (and also the series representing T²(X), R²(X), R(X)T(X), etc.) are averaged, term by term, according to the Gaussian statistics, the series become divergent after averaging. As was shown in a former paper by the authors (in Acta Physica Slovaca (2003)), a rectification can be managed when a 'modified' Gaussian probability density function is used, equal to zero for X < 0 and proportional to the standard Gaussian probability density for X > 0. In the present paper, the authors put forward an alternative, showing that if the r.m.s. deviation of X is sufficiently small in comparison with the mean value X̄, the standard Gaussian averaging is well functional provided that the summation in the series representing the variable T^(m-j)(X)R^j(X) (m = 1,2,..., j = 1,...,m) is truncated at a well-chosen finite term. The authors exemplify their analysis by some numerical calculations.

  3. Two Empirical Models for Land-falling Hurricane Gust Factors

    NASA Technical Reports Server (NTRS)

    Merceret, Franics J.

    2008-01-01

    Gaussian and lognormal models for gust factors as a function of height and mean windspeed in land-falling hurricanes are presented. The models were empirically derived using data from 2004 hurricanes Frances and Jeanne and independently verified using data from 2005 hurricane Wilma. The data were collected from three wind towers at Kennedy Space Center and Cape Canaveral Air Force Station with instrumentation at multiple levels from 12 to 500 feet above ground level. An additional 200-foot tower was available for the verification. Mean wind speeds from 15 to 60 knots were included in the data. The models provide formulas for the mean and standard deviation of the gust factor given the mean windspeed and height above ground. These statistics may then be used to assess the probability of exceeding a specified peak wind threshold of operational significance given a specified mean wind speed.
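
    A minimal sketch of how such a model is used operationally (the gust-factor mean and standard deviation below are assumed values, not the fitted coefficients): given the mean wind and the modelled gust-factor statistics, estimate the probability that the peak wind exceeds a threshold under Gaussian and lognormal assumptions.

    ```python
    # Minimal sketch: probability of exceeding a peak-wind threshold from the mean wind
    # and the gust-factor mean/SD, for Gaussian and lognormal gust-factor models.
    import math
    from scipy.stats import norm, lognorm

    mean_wind_kt = 40.0           # knots
    gf_mean, gf_sd = 1.45, 0.15   # gust-factor mean and SD from the model (assumed values)
    threshold_kt = 65.0           # operational peak-wind threshold

    gf_needed = threshold_kt / mean_wind_kt          # gust factor required to exceed threshold

    p_gauss = norm.sf(gf_needed, loc=gf_mean, scale=gf_sd)

    # Lognormal with the same mean and SD of the gust factor.
    sigma_ln = math.sqrt(math.log(1.0 + (gf_sd / gf_mean) ** 2))
    mu_ln = math.log(gf_mean) - 0.5 * sigma_ln**2
    p_lognorm = lognorm.sf(gf_needed, s=sigma_ln, scale=math.exp(mu_ln))

    print(f"P(peak > {threshold_kt:.0f} kt | mean {mean_wind_kt:.0f} kt):")
    print(f"  Gaussian model : {p_gauss:.3f}")
    print(f"  lognormal model: {p_lognorm:.3f}")
    ```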

  4. Large scale structure from the Higgs fields of the supersymmetric standard model

    NASA Astrophysics Data System (ADS)

    Bastero-Gil, M.; di Clemente, V.; King, S. F.

    2003-05-01

    We propose an alternative implementation of the curvaton mechanism for generating the curvature perturbations which does not rely on a late decaying scalar decoupled from inflation dynamics. In our mechanism the supersymmetric Higgs scalars are coupled to the inflaton in a hybrid inflation model, and this allows the conversion of the isocurvature perturbations of the Higgs fields to the observed curvature perturbations responsible for large scale structure to take place during reheating. We discuss an explicit model which realizes this mechanism in which the μ term in the Higgs superpotential is generated after inflation by the vacuum expectation value of a singlet field. The main prediction of the model is that the spectral index should deviate significantly from unity, |n-1|˜0.1. We also expect relic isocurvature perturbations in neutralinos and baryons, but no significant departures from Gaussianity and no observable effects of gravity waves in the CMB spectrum.

  5. Black hole demographics from the M•-σ relation

    NASA Astrophysics Data System (ADS)

    Merritt, David; Ferrarese, Laura

    2001-01-01

    We analyse a sample of 32 galaxies for which a dynamical estimate of the mass of the hot stellar component, Mbulge, is available. For each of these galaxies, we calculate the mass of the central black hole, M•, using the tight empirical correlation between M• and bulge stellar velocity dispersion. The frequency function N[log(M•/Mbulge)] is reasonably well described as a Gaussian with mean ~-2.90 and standard deviation ~0.45; the implied mean ratio of black hole mass to bulge mass is a factor of ~5 smaller than generally quoted in the literature. We present marginal evidence for a lower average black hole mass fraction in more massive galaxies. The total mass density in black holes in the local Universe is estimated to be ~5×10^5 Msolar Mpc^-3, consistent with that inferred from high-redshift (z~2) active galactic nuclei.

  6. High-order noise analysis for low dose iterative image reconstruction methods: ASIR, IRIS, and MBAI

    NASA Astrophysics Data System (ADS)

    Do, Synho; Singh, Sarabjeet; Kalra, Mannudeep K.; Karl, W. Clem; Brady, Thomas J.; Pien, Homer

    2011-03-01

    Iterative reconstruction techniques (IRTs) have been shown to suppress noise significantly in low-dose CT imaging. However, medical doctors hesitate to accept this new technology because the visual impression of IRT images differs from that of full-dose filtered back-projection (FBP) images. The most common noise measurements, such as the mean and standard deviation of a homogeneous region in the image, do not provide sufficient characterization of the noise statistics when the probability density function becomes non-Gaussian. In this study, we measure L-moments of the intensity values of images acquired at 10% of normal dose and reconstructed by the IRT methods of two state-of-the-art clinical scanners (i.e., GE HDCT and Siemens DSCT Flash), keeping the dosage level identical for both. The high- and low-dose scans (i.e., 10% of high dose) were acquired from each scanner and L-moments of noise patches were calculated for the comparison.
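
    Sample L-moments can be estimated from probability-weighted moments of the sorted intensity values; a minimal sketch of a generic estimator for the first three L-moments (not the study's pipeline), applied to a made-up heavy-tailed noise patch:

        import numpy as np

        def sample_l_moments(x):
            """First three sample L-moments (l1, l2, l3) via probability-weighted moments."""
            x = np.sort(np.asarray(x, dtype=float))
            n = x.size
            i = np.arange(1, n + 1)
            b0 = x.mean()
            b1 = np.sum((i - 1) / (n - 1) * x) / n
            b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
            l1 = b0                        # location (mean)
            l2 = 2.0 * b1 - b0             # L-scale
            l3 = 6.0 * b2 - 6.0 * b1 + b0  # l3 / l2 gives the L-skewness
            return l1, l2, l3

        # Hypothetical non-Gaussian noise patch where mean/standard deviation alone mislead
        rng = np.random.default_rng(0)
        patch = rng.standard_t(df=3, size=1000)
        print(sample_l_moments(patch))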

  7. Contributions of Optical and Non-Optical Blur to Variation in Visual Acuity

    PubMed Central

    McAnany, J. Jason; Shahidi, Mahnaz; Applegate, Raymond A.; Zelkha, Ruth; Alexander, Kenneth R.

    2011-01-01

    Purpose To determine the relative contributions of optical and non-optical sources of intrinsic blur to variations in visual acuity (VA) among normally sighted subjects. Methods Best-corrected VA of sixteen normally sighted subjects was measured using briefly presented (59 ms) tumbling E optotypes that were either unblurred or blurred through convolution with Gaussian functions of different widths. A standard model of intrinsic blur was used to estimate each subject’s equivalent intrinsic blur (σint) and VA for the unblurred tumbling E (MAR0). For 14 subjects, a radially averaged optical point spread function due to higher-order aberrations was derived by Shack-Hartmann aberrometry and fit with a Gaussian function. The standard deviation of the best-fit Gaussian function defined optical blur (σopt). An index of non-optical blur (η) was defined as: 1-σopt/σint. A control experiment was conducted on 5 subjects to evaluate the effect of stimulus duration on MAR0 and σint. Results Log MAR0 for the briefly presented E was correlated significantly with log σint (r = 0.95, p < 0.01), consistent with previous work. However, log MAR0 was not correlated significantly with log σopt (r = 0.46, p = 0.11). For subjects with log MAR0 equivalent to approximately 20/20 or better, log MAR0 was independent of log η, whereas for subjects with larger log MAR0 values, log MAR0 was proportional to log η. The control experiment showed a statistically significant effect of stimulus duration on log MAR0 (p < 0.01) but a non-significant effect on σint (p = 0.13). Conclusions The relative contributions of optical and non-optical blur to VA varied among the subjects, and were related to the subject’s VA. Evaluating optical and non-optical blur may be useful for predicting changes in VA following procedures that improve the optics of the eye in patients with both optical and non-optical sources of VA loss. PMID:21460756

  8. 7 CFR 400.204 - Notification of deviation from standards.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 6 2010-01-01 2010-01-01 false Notification of deviation from standards. 400.204... Contract-Standards for Approval § 400.204 Notification of deviation from standards. A Contractor shall advise the Corporation immediately if the Contractor deviates from the requirements of these standards...

  9. SU-E-T-586: Optimal Determination of Tolerance Level for Radiation Dose Delivery Verification in An in Vivo Dosimetry System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Y; Souri, S; Gill, G

    Purpose: To statistically determine the optimal tolerance level in the verification of delivered dose compared to the planned dose in an in vivo dosimetry system in radiotherapy. Methods: The LANDAUER MicroSTARii dosimetry system with screened nanoDots (optically stimulated luminescence dosimeters) was used for in vivo dose measurements. Ideally, the measured dose should match the planned dose, with deviations falling within a normal distribution. Any deviation from the normal distribution may be deemed a mismatch and therefore a potential sign of dose misadministration. Randomly mis-positioned nanoDots can yield a continuum background distribution. The percentage difference of the measured dose to its corresponding planned dose (ΔD) can be used to analyze combined data sets for different patients. A model of a Gaussian plus a flat function was used to fit the ΔD distribution. Results: A total of 434 nanoDot measurements for breast cancer patients were collected over a period of three months. The fit yields a Gaussian mean of 2.9% and a standard deviation (SD) of 5.3%. The observed shift of the mean from zero is attributed to the machine output bias and the calibration of the dosimetry system. A pass interval of −2SD to +2SD was applied and a mismatch background was estimated to be 4.8%. With such a tolerance level, one can expect that 99.99% of patients should pass the verification and at most 0.011% might have a potential dose misadministration that may not be detected after three repeated measurements. After implementation, a number of newly started breast cancer patients were monitored and the measured pass rate was consistent with the model prediction. Conclusion: It is feasible to implement an optimal tolerance level that maintains a low limit of potential dose misadministration while keeping a relatively high pass rate in radiotherapy delivery verification.
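
    A minimal sketch of the Gaussian-plus-flat fit described here, using synthetic ΔD values (percent difference between measured and planned dose); the sample sizes, ranges, and starting guesses are hypothetical:

        import numpy as np
        from scipy.optimize import curve_fit

        def gauss_plus_flat(x, amp, mu, sigma, background):
            """Gaussian peak on top of a flat (uniform) background."""
            return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + background

        rng = np.random.default_rng(1)
        delta_d = np.concatenate([rng.normal(2.9, 5.3, 400),       # well-positioned dosimeters
                                  rng.uniform(-30.0, 30.0, 34)])   # mis-positioned background

        counts, edges = np.histogram(delta_d, bins=40, range=(-30, 30))
        centers = 0.5 * (edges[:-1] + edges[1:])

        p0 = [counts.max(), 0.0, 5.0, 1.0]   # starting guesses: amp, mu, sigma, background
        popt, _ = curve_fit(gauss_plus_flat, centers, counts, p0=p0)
        amp, mu, sigma, background = popt
        print(f"mean = {mu:.1f}%, SD = {sigma:.1f}%, flat background = {background:.1f} counts/bin")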

  10. Statistical characterization of high-to-medium frequency mesoscale gravity waves by lidar-measured vertical winds and temperatures in the MLT

    NASA Astrophysics Data System (ADS)

    Lu, Xian; Chu, Xinzhao; Li, Haoyu; Chen, Cao; Smith, John A.; Vadas, Sharon L.

    2017-09-01

    We present the first statistical study of gravity waves with periods of 0.3-2.5 h that are persistent and dominant in the vertical winds measured with the University of Colorado STAR Na Doppler lidar in Boulder, CO (40.1°N, 105.2°W). The probability density functions of the wave amplitudes in temperature and vertical wind, ratios of these two amplitudes, phase differences between them, and vertical wavelengths are derived directly from the observations. The intrinsic period and horizontal wavelength of each wave are inferred from its vertical wavelength, amplitude ratio, and a designated eddy viscosity by applying the gravity wave polarization and dispersion relations. The amplitude ratios are positively correlated with the ground-based periods with a coefficient of 0.76. The phase differences between the vertical winds and temperatures (φW - φT) follow a Gaussian distribution of 84.2±26.7°, whose standard deviation is much larger than the ~3.3° predicted for non-dissipative waves. The deviations of the observed phase differences from their predicted values for non-dissipative waves may indicate wave dissipation. The shorter-vertical-wavelength waves tend to have larger phase difference deviations, implying that the dissipative effects are more significant for shorter waves. The majority of these waves have vertical wavelengths ranging from 5 to 40 km with a mean and standard deviation of 18.6 and 7.2 km, respectively. For waves with similar periods, multiple peaks in the vertical wavelengths are identified frequently and the ones peaking in the vertical wind are statistically longer than those peaking in the temperature. The horizontal wavelengths range mostly from 50 to 500 km with a mean and median of 180 and 125 km, respectively. Therefore, these waves are mesoscale waves with high-to-medium frequencies. Since they have recently become resolvable in high-resolution general circulation models (GCMs), this statistical study provides an important and timely reference for them.

  11. Spectral Characteristics of the He i D3 Line in a Quiescent Prominence Observed by THEMIS

    NASA Astrophysics Data System (ADS)

    Koza, Július; Rybák, Ján; Gömöry, Peter; Kozák, Matúš; López Ariste, Arturo

    2017-08-01

    We analyze the observations of a quiescent prominence acquired by the Téléscope Heliographique pour l'Étude du Magnetisme et des Instabilités Solaires (THEMIS) in the He i 5876 Å (He i D3) multiplet aiming to measure the spectral characteristics of the He i D3 profiles and to find for them an adequate fitting model. The component characteristics of the He i D3 Stokes I profiles are measured by the fitting system by approximating them with a double Gaussian. This model yields an He i D3 component peak intensity ratio of 5.5±0.4, which differs from the value of 8 expected in the optically thin limit. Most of the measured Doppler velocities lie in the interval ± 5 km s-1, with a standard deviation of ± 1.7 km s-1 around the peak value of 0.4 km s-1. The wide distribution of the full-width at half maximum has two maxima at 0.25 Å and 0.30 Å for the He i D3 blue component and two maxima at 0.22 Å and 0.31 Å for the red component. The width ratio of the components is 1.04±0.18. We show that the double-Gaussian model systematically underestimates the blue wing intensities. To solve this problem, we invoke a two-temperature multi-Gaussian model, consisting of two double-Gaussians, which provides a better representation of He i D3 that is free of the wing intensity deficit. This model suggests temperatures of 11.5 kK and 91 kK, respectively, for the cool and the hot component of the target prominence. The cool and hot components of a typical He i D3 profile have component peak intensity ratios of 6.6 and 8, implying a prominence geometrical width of 17 Mm and an optical thickness of 0.3 for the cool component, while the optical thickness of the hot component is negligible. These prominence parameters seem to be realistic, suggesting the physical adequacy of the multi-Gaussian model with important implications for interpreting He i D3 spectropolarimetry by current inversion codes.

  12. The Standard Deviation of Launch Vehicle Environments

    NASA Technical Reports Server (NTRS)

    Yunis, Isam

    2005-01-01

    Statistical analysis is used in the development of the launch vehicle environments of acoustics, vibrations, and shock. The standard deviation of these environments is critical to accurate statistical extrema. However, often very little data exists to define the standard deviation and it is better to use a typical standard deviation than one derived from a few measurements. This paper uses Space Shuttle and expendable launch vehicle flight data to define a typical standard deviation for acoustics and vibrations. The results suggest that 3dB is a conservative and reasonable standard deviation for the source environment and the payload environment.

  13. An apodized Kepler periodogram for separating planetary and stellar activity signals

    PubMed Central

    Gregory, Philip C.

    2016-01-01

    A new apodized Keplerian (AK) model is proposed for the analysis of precision radial velocity (RV) data to model both planetary and stellar activity (SA) induced RV signals. A symmetrical Gaussian apodization function with unknown width and centre can distinguish planetary signals from SA signals on the basis of the span of the apodization window. The general model for m AK signals includes a linear regression term between RV and the SA diagnostic log (R′hk), as well as an extra Gaussian noise term with unknown standard deviation. The model parameters are explored using a Bayesian fusion Markov chain Monte Carlo code. A differential version of the generalized Lomb–Scargle periodogram that employs a control diagnostic provides an additional way of distinguishing SA signals and helps guide the choice of new periods. Results are reported for a recent international RV blind challenge which included multiple state-of-the-art simulated data sets supported by a variety of SA diagnostics. In the current implementation, the AK method achieved a reduction in SA noise by a factor of approximately 6. Final parameter estimates for the planetary candidates are derived from fits that include AK signals to model the SA components and simple Keplerians to model the planetary candidates. Preliminary results are also reported for AK models augmented by a moving average component that allows for correlations in the residuals. PMID:27346979

  14. Distribution of Schmidt-like eigenvalues for Gaussian ensembles of the random matrix theory

    NASA Astrophysics Data System (ADS)

    Pato, Mauricio P.; Oshanin, Gleb

    2013-03-01

    We study the probability distribution function P_n^(β)(w) of the Schmidt-like random variable w = x_1^2/(∑_{j=1}^n x_j^2/n), where x_j (j = 1, 2, …, n) are unordered eigenvalues of a given n × n β-Gaussian random matrix, β being the Dyson symmetry index. This variable, by definition, can be considered as a measure of how any individual (randomly chosen) eigenvalue deviates from the arithmetic mean value of all eigenvalues of a given random matrix, and its distribution is calculated with respect to the ensemble of such β-Gaussian random matrices. We show that in the asymptotic limit n → ∞ and for arbitrary β the distribution P_n^(β)(w) converges to the Marčenko-Pastur form, i.e. P_n^(β)(w) ∼ √((4 - w)/w) for w ∈ [0, 4] and equals zero outside of the support, despite the fact that formally w is defined on the interval [0, n]. Furthermore, for Gaussian unitary ensembles (β = 2) we present exact explicit expressions for P_n^(β=2)(w) which are valid for arbitrary n and analyse their behaviour.
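
    Because w is a ratio of eigenvalue scales, the overall normalization of the matrix cancels, so the limiting form is easy to probe numerically; a rough Monte Carlo sketch for the unitary case (β = 2) with arbitrarily chosen matrix size and sample count, whose histogram can be compared against (1/2π)√((4 − w)/w):

        import numpy as np

        rng = np.random.default_rng(42)

        def gue_eigenvalues(n):
            """Eigenvalues of an n x n GUE (beta = 2) matrix; the overall scale cancels in w."""
            a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
            return np.linalg.eigvalsh((a + a.conj().T) / 2.0)

        n, n_samples = 50, 2000
        w_values = np.empty(n_samples)
        for k in range(n_samples):
            x = gue_eigenvalues(n)
            j = rng.integers(n)                       # a randomly chosen (unordered) eigenvalue
            w_values[k] = n * x[j] ** 2 / np.sum(x ** 2)

        print("mean of w:", w_values.mean())                   # exchangeability gives E[w] = 1
        print("fraction with w > 4:", np.mean(w_values > 4))   # small for large n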

  15. BOOK REVIEW: The Cosmic Microwave Background The Cosmic Microwave Background

    NASA Astrophysics Data System (ADS)

    Coles, Peter

    2009-08-01

    With the successful launch of the European Space Agency's Planck satellite earlier this year the cosmic microwave background (CMB) is once again the centre of attention for cosmologists around the globe. Since its accidental discovery in 1964 by Arno Penzias and Robert Wilson, this relic of the Big Bang has been subjected to intense scrutiny by generation after generation of experiments and has gradually yielded up answers to the deepest questions about the origin of our Universe. Most recently, the Wilkinson Microwave Anisotropy Probe (WMAP) has made a full-sky analysis of the pattern of temperature and polarization variations that helped establish a new standard cosmological model, confirmed the existence of dark matter and dark energy, and provided strong evidence that there was an epoch of primordial inflation. Ruth Durrer's book reflects the importance of the CMB for future developments in this field. Aimed at graduate students and established researchers, it consists of a basic introduction to cosmology and the theory of primordial perturbations followed by a detailed explanation of how these manifest themselves as measurable variations in the present-day radiation field. It then focuses on the statistical methods needed to obtain accurate estimates of the parameters of the standard cosmological model, and finishes with a discussion of the effect of gravitational lensing on the CMB and on the evolution of its spectrum. The book apparently grew out of various lecture notes on CMB anisotropies for graduate courses given by the author. Its level and scope are well matched to the needs of such an audience and the presentation is clear and well-organized. I am sure that this book will be a useful reference for more senior scientists too. If I have a criticism, it is not about what is in the book but what is omitted. In my view, one of the most exciting possibilities for future CMB missions, including Planck, is the possibility that they might discover physics beyond that which the current standard model can describe. 'Thinking outside the box' has become a cliché, but it is what graduate students should be encouraged to do. For example, the standard cosmological model entails the assumption, motivated by the simplest theories of inflation, that the primordial density fluctuations are described by Gaussian statistics. The detection of any deviations from Gaussian behaviour in the radiation field would therefore offer us an exciting window into the detailed physics of inflation or other departures from the standard model. Although primordial non-Gaussianity is an extremely active subject of contemporary cosmological research, it is barely mentioned in this book. This is a regrettable omission in an otherwise commendable volume.

  16. Robust radio interferometric calibration using the t-distribution

    NASA Astrophysics Data System (ADS)

    Kazemi, S.; Yatawatta, S.

    2013-10-01

    A major stage of radio interferometric data processing is calibration or the estimation of systematic errors in the data and the correction for such errors. A stochastic error (noise) model is assumed, and in most cases, this underlying model is assumed to be Gaussian. However, outliers in the data due to interference or due to errors in the sky model would have adverse effects on processing based on a Gaussian noise model. Most of the shortcomings of calibration such as the loss in flux or coherence, and the appearance of spurious sources, could be attributed to the deviations of the underlying noise model. In this paper, we propose to improve the robustness of calibration by using a noise model based on Student's t-distribution. Student's t-noise is a special case of Gaussian noise when the variance is unknown. Unlike Gaussian-noise-model-based calibration, traditional least-squares minimization would not directly extend to a case when we have a Student's t-noise model. Therefore, we use a variant of the expectation-maximization algorithm, called the expectation-conditional maximization either algorithm, when we have a Student's t-noise model and use the Levenberg-Marquardt algorithm in the maximization step. We give simulation results to show the robustness of the proposed calibration method as opposed to traditional Gaussian-noise-model-based calibration, especially in preserving the flux of weaker sources that are not included in the calibration model.
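
    A toy sketch of the underlying idea, not the radio-interferometric calibration code itself: under a Student's t noise model the expectation step downweights discrepant samples with weights (ν + 1)/(ν + r²/σ²), and the maximization step is a weighted least-squares update. Here it is shown for a simple linear model with a few injected outliers:

        import numpy as np

        def t_robust_linear_fit(X, y, nu=4.0, n_iter=50):
            """EM-style (iteratively reweighted) fit of y ~ X @ beta with Student's t noise."""
            n, _ = X.shape
            beta = np.linalg.lstsq(X, y, rcond=None)[0]     # Gaussian (least-squares) start
            sigma2 = np.mean((y - X @ beta) ** 2)
            for _ in range(n_iter):
                r = y - X @ beta
                w = (nu + 1.0) / (nu + r ** 2 / sigma2)     # E-step: downweight outliers
                Xw = X * w[:, None]
                beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)  # M-step: weighted least squares
                sigma2 = np.sum(w * r ** 2) / n
            return beta

        rng = np.random.default_rng(7)
        X = np.column_stack([np.ones(200), rng.uniform(-1, 1, 200)])
        y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=200)
        y[:5] += 10.0                                        # outliers (e.g. unmodelled sources)
        print("robust estimate:", t_robust_linear_fit(X, y))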

  17. Numerical modeling of macrodispersion in heterogeneous media: a comparison of multi-Gaussian and non-multi-Gaussian models

    NASA Astrophysics Data System (ADS)

    Wen, Xian-Huan; Gómez-Hernández, J. Jaime

    1998-03-01

    The macrodispersion of an inert solute in a 2-D heterogeneous porous media is estimated numerically in a series of fields of varying heterogeneity. Four different random function (RF) models are used to model log-transmissivity (ln T) spatial variability, and for each of these models, ln T variance is varied from 0.1 to 2.0. The four RF models share the same univariate Gaussian histogram and the same isotropic covariance, but differ from one another in terms of the spatial connectivity patterns at extreme transmissivity values. More specifically, model A is a multivariate Gaussian model for which, by definition, extreme values (both high and low) are spatially uncorrelated. The other three models are non-multi-Gaussian: model B with high connectivity of high extreme values, model C with high connectivity of low extreme values, and model D with high connectivities of both high and low extreme values. Residence time distributions (RTDs) and macrodispersivities (longitudinal and transverse) are computed on ln T fields corresponding to the different RF models, for two different flow directions and at several scales. They are compared with each other, as well as with predicted values based on first-order analytical results. Numerically derived RTDs and macrodispersivities for the multi-Gaussian model are in good agreement with analytically derived values using first-order theories for log-transmissivity variance up to 2.0. The results from the non-multi-Gaussian models differ from each other and deviate largely from the multi-Gaussian results even when ln T variance is small. RTDs in non-multi-Gaussian realizations with high connectivity at high extreme values display earlier breakthrough than in multi-Gaussian realizations, whereas later breakthrough and longer tails are observed for RTDs from non-multi-Gaussian realizations with high connectivity at low extreme values. Longitudinal macrodispersivities in the non-multi-Gaussian realizations are, in general, larger than in the multi-Gaussian ones, while transverse macrodispersivities in the non-multi-Gaussian realizations can be larger or smaller than in the multi-Gaussian ones depending on the type of connectivity at extreme values. Comparing the numerical results for different flow directions, it is confirmed that macrodispersivities in multi-Gaussian realizations with isotropic spatial correlation are not flow direction-dependent. Macrodispersivities in the non-multi-Gaussian realizations, however, are flow direction-dependent although the covariance of ln T is isotropic (the same for all four models). It is important to account for high connectivities at extreme transmissivity values, a likely situation in some geological formations. Some of the discrepancies between first-order-based analytical results and field-scale tracer test data may be due to the existence of highly connected paths of extreme conductivity values.

  18. 7 CFR 400.174 - Notification of deviation from financial standards.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 6 2010-01-01 2010-01-01 false Notification of deviation from financial standards... Agreement-Standards for Approval; Regulations for the 1997 and Subsequent Reinsurance Years § 400.174 Notification of deviation from financial standards. An insurer must immediately advise FCIC if it deviates from...

  19. De-blending deep Herschel surveys: A multi-wavelength approach

    NASA Astrophysics Data System (ADS)

    Pearson, W. J.; Wang, L.; van der Tak, F. F. S.; Hurley, P. D.; Burgarella, D.; Oliver, S. J.

    2017-07-01

    Aims: Cosmological surveys in the far-infrared are known to suffer from confusion. The Bayesian de-blending tool, XID+, currently provides one of the best ways to de-confuse deep Herschel SPIRE images, using a flat flux density prior. This work is to demonstrate that existing multi-wavelength data sets can be exploited to improve XID+ by providing an informed prior, resulting in more accurate and precise extracted flux densities. Methods: Photometric data for galaxies in the COSMOS field were used to constrain spectral energy distributions (SEDs) using the fitting tool CIGALE. These SEDs were used to create Gaussian prior estimates in the SPIRE bands for XID+. The multi-wavelength photometry and the extracted SPIRE flux densities were run through CIGALE again to allow us to compare the performance of the two priors. Inferred ALMA flux densities (FinferALMA), at 870 μm and 1250 μm, from the best fitting SEDs from the second CIGALE run were compared with measured ALMA flux densities (FmeasALMA) as an independent performance validation. Similar validations were conducted with the SED modelling and fitting tool MAGPHYS and modified black-body functions to test for model dependency. Results: We demonstrate a clear improvement in agreement between the flux densities extracted with XID+ and existing data at other wavelengths when using the new informed Gaussian prior over the original uninformed prior. The residuals between FmeasALMA and FinferALMA were calculated. For the Gaussian priors these residuals, expressed as a multiple of the ALMA error (σ), have a smaller standard deviation, 7.95σ for the Gaussian prior compared to 12.21σ for the flat prior; reduced mean, 1.83σ compared to 3.44σ; and have reduced skew to positive values, 7.97 compared to 11.50. These results were determined to not be significantly model dependent. This results in statistically more reliable SPIRE flux densities and hence statistically more reliable infrared luminosity estimates. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.

  20. Degrees of Freedom for Allan Deviation Estimates of Multiple Clocks

    DTIC Science & Technology

    2016-04-01

    Allan deviation will be represented by σ and standard deviation will be represented by δ. In practice, when the Allan deviation of a ... the Allan deviation of standard noise types. Once the number of degrees of freedom is known, an approximate confidence interval can be assigned by ... measurement errors from paired difference data. We extend this approach by using the Allan deviation to estimate the error in a frequency standard
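
    For reference, a common estimator of the non-overlapping Allan deviation of fractional-frequency data is σ_y(τ) = sqrt( Σ (ȳ_{k+1} − ȳ_k)² / (2(M − 1)) ), where the ȳ_k are M consecutive averages over intervals of length τ; a minimal sketch (generic, not code from the report):

        import numpy as np

        def allan_deviation(y, m):
            """Non-overlapping Allan deviation of fractional-frequency data y
            at averaging factor m (tau = m * tau0)."""
            y = np.asarray(y, dtype=float)
            n_blocks = y.size // m
            y_bar = y[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)  # block averages
            return np.sqrt(0.5 * np.mean(np.diff(y_bar) ** 2))

        # White frequency noise: the Allan deviation should fall off as 1/sqrt(tau)
        rng = np.random.default_rng(3)
        y = rng.normal(0.0, 1e-12, 100_000)
        for m in (1, 10, 100):
            print(m, allan_deviation(y, m))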

  1. The formation of cosmic structure in a texture-seeded cold dark matter cosmogony

    NASA Technical Reports Server (NTRS)

    Gooding, Andrew K.; Park, Changbom; Spergel, David N.; Turok, Neil; Gott, Richard, III

    1992-01-01

    The growth of density fluctuations induced by global texture in an Omega = 1 cold dark matter (CDM) cosmogony is calculated. The resulting power spectra are in good agreement with each other, with more power on large scales than in the standard inflation plus CDM model. Calculation of related statistics (two-point correlation functions, mass variances, cosmic Mach number) indicates that the texture plus CDM model compares more favorably than standard CDM with observations of large-scale structure. Texture produces coherent velocity fields on large scales, as observed. Excessive small-scale velocity dispersions, and voids less empty than those observed may be remedied by including baryonic physics. The topology of the cosmic structure agrees well with observation. The non-Gaussian texture induced density fluctuations lead to earlier nonlinear object formation than in Gaussian models and may also be more compatible with recent evidence that the galaxy density field is non-Gaussian on large scales. On smaller scales the density field is strongly non-Gaussian, but this appears to be primarily due to nonlinear gravitational clustering. The velocity field on smaller scales is surprisingly Gaussian.

  2. High-precision method of binocular camera calibration with a distortion model.

    PubMed

    Li, Weimin; Shan, Siyu; Liu, Hui

    2017-03-10

    A high-precision camera calibration method for binocular stereo vision system based on a multi-view template and alternative bundle adjustment is presented in this paper. The proposed method could be achieved by taking several photos on a specially designed calibration template that has diverse encoded points in different orientations. In this paper, the method utilized the existing algorithm used for monocular camera calibration to obtain the initialization, which involves a camera model, including radial lens distortion and tangential distortion. We created a reference coordinate system based on the left camera coordinate to optimize the intrinsic parameters of left camera through alternative bundle adjustment to obtain optimal values. Then, optimal intrinsic parameters of the right camera can be obtained through alternative bundle adjustment when we create a reference coordinate system based on the right camera coordinate. We also used all intrinsic parameters that were acquired to optimize extrinsic parameters. Thus, the optimal lens distortion parameters and intrinsic and extrinsic parameters were obtained. Synthetic and real data were used to test the method. The simulation results demonstrate that the maximum mean absolute relative calibration errors are about 3.5e-6 and 1.2e-6 for the focal length and the principal point, respectively, under zero-mean Gaussian noise with 0.05 pixels standard deviation. The real result shows that the reprojection error of our model is about 0.045 pixels with the relative standard deviation of 1.0e-6 over the intrinsic parameters. The proposed method is convenient, cost-efficient, highly precise, and simple to carry out.

  3. Differential Si ring resonators for label-free biosensing

    NASA Astrophysics Data System (ADS)

    Taniguchi, Tomoya; Yokoyama, Shuhei; Amemiya, Yoshiteru; Ikeda, Takeshi; Kuroda, Akio; Yokoyama, Shin

    2016-04-01

    Differential Si ring optical resonator sensors have been fabricated. Their detection sensitivity was 10^-3-10^-2% for sucrose solution, which corresponds to a sensitivity of ~1.0 ng/ml for prostate-specific antigen (PSA), which is satisfactory for practical use. In differential sensing, the input light is incident on two rings, and one of the outputs is connected to a π phase shifter; the two outputs are then merged again. With differential detection, not only is the common-mode noise canceled, resulting in high sensitivity, but the temperature stability is also much improved. A fluid channel is fabricated so that the liquid to be detected flows to the detection ring and the reference liquid flows to the reference ring. We have proposed a method of obtaining a constant sensitivity for the integrated sensors even though the resonance wavelengths of the two rings of the differential sensor are slightly different. It was found that a region exists with a linear relationship between the differential output and the difference in the resonance wavelengths of the two rings. By intentionally differentiating the resonance wavelengths in this linear region, the sensors have a constant sensitivity. Many differential sensors with different ring spacings have been fabricated and the output scattering characteristics were statistically evaluated. As a result, a standard deviation of the resonance wavelength of σ = 8 × 10^-3 nm was obtained for a ring spacing of 31 µm. From the width of the linear region and the standard deviation, it was estimated from the Gaussian distribution of the resonance wavelength that 93.8% of the devices have the same sensitivity.
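
    The quoted yield follows from integrating the Gaussian spread of resonance-wavelength differences over the linear region; a one-line check with the reported σ and a hypothetical half-width for the linear region (the paper's actual window width is not restated here):

        from math import erf, sqrt

        sigma = 8e-3        # nm, standard deviation of the resonance-wavelength difference
        half_width = 15e-3  # nm, assumed half-width of the linear region (placeholder value)

        # Fraction of devices whose wavelength difference falls within +/- half_width
        fraction = erf(half_width / (sigma * sqrt(2.0)))
        print(f"{100 * fraction:.1f}% of devices within the linear region")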

  4. A general relativistic rotating evolutionary universe—Part II

    NASA Astrophysics Data System (ADS)

    Berman, Marcelo Samuel

    2008-06-01

    As a sequel to Berman (Astrophys. Space Sci., 2008b), we show that the rotation of the Universe can be dealt with by means of the generalised Gaussian metrics defined in this paper. Robertson-Walker's metric has been employed with proper time in its standard applications; the generalised Gaussian metric implies the use of a non-constant temporal metric coefficient, modifying Robertson-Walker's standard form. Experimental predictions are made.

  5. Identifying outliers of non-Gaussian groundwater state data based on ensemble estimation for long-term trends

    NASA Astrophysics Data System (ADS)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kueyoung; Choung, Sungwook; Chung, Il Moon

    2017-05-01

    A hydrogeological dataset often includes substantial deviations that need to be inspected. In the present study, three outlier identification methods - the three-sigma rule (3σ), the interquartile range (IQR), and the median absolute deviation (MAD) - that take advantage of the ensemble regression method are proposed, taking into account the non-Gaussian characteristics of groundwater data. For validation purposes, the performance of the methods is compared using simulated and actual groundwater data under a few hypothetical conditions. In the validations using simulated data, all of the proposed methods reasonably identify outliers at a 5% outlier level, whereas only the IQR method performs well for identifying outliers at a 30% outlier level. When applying the methods to real groundwater data, the outlier identification performance of the IQR method is found to be superior to that of the other two methods. However, the IQR method shows a limitation in that it identifies excessive false outliers, which may be overcome by its joint application with other methods (for example, the 3σ rule and MAD methods). The proposed methods can also be applied as potential tools for the detection of future anomalies by model training based on currently available data.
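
    A minimal sketch of the three identification rules applied to residuals about an ensemble-regressed trend; the thresholds follow the usual conventions and the data are hypothetical:

        import numpy as np

        def outliers_3sigma(r):
            """Three-sigma rule."""
            return np.abs(r - r.mean()) > 3.0 * r.std(ddof=1)

        def outliers_iqr(r, k=1.5):
            """Interquartile-range rule (Tukey fences)."""
            q1, q3 = np.percentile(r, [25, 75])
            return (r < q1 - k * (q3 - q1)) | (r > q3 + k * (q3 - q1))

        def outliers_mad(r, k=3.5):
            """Median-absolute-deviation rule (robust z-score)."""
            med = np.median(r)
            mad = np.median(np.abs(r - med))
            return np.abs(r - med) > k * 1.4826 * mad   # 1.4826 makes MAD consistent with sigma

        # Hypothetical residuals of groundwater levels about a long-term trend
        rng = np.random.default_rng(5)
        r = rng.normal(0.0, 0.2, 500)
        r[::100] += 1.5                                 # injected anomalies
        for rule in (outliers_3sigma, outliers_iqr, outliers_mad):
            print(rule.__name__, int(rule(r).sum()))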

  6. Testing for the Gaussian nature of cosmological density perturbations through the three-point temperature correlation function

    NASA Technical Reports Server (NTRS)

    Luo, Xiaochun; Schramm, David N.

    1993-01-01

    One of the crucial aspects of density perturbations that are produced by the standard inflation scenario is that they are Gaussian, whereas seeds produced by topological defects tend to be non-Gaussian. The three-point correlation function of the temperature anisotropy of the cosmic microwave background radiation (CBR) provides a sensitive test of this aspect of the primordial density field. In this paper, this function is calculated in the general context of various allowed non-Gaussian models. It is shown that the Cosmic Background Explorer and the forthcoming South Pole and balloon CBR anisotropy data may be able to provide a crucial test of the Gaussian nature of the perturbations.

  7. 1 CFR 21.14 - Deviations from standard organization of the Code of Federal Regulations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 1 General Provisions 1 2010-01-01 2010-01-01 false Deviations from standard organization of the... CODIFICATION General Numbering § 21.14 Deviations from standard organization of the Code of Federal Regulations. (a) Any deviation from standard Code of Federal Regulations designations must be approved in advance...

  8. Improving particle filters in rainfall-runoff models: application of the resample-move step and development of the ensemble Gaussian particle filter

    NASA Astrophysics Data System (ADS)

    Plaza Guingla, D. A.; Pauwels, V. R.; De Lannoy, G. J.; Matgen, P.; Giustarini, L.; De Keyser, R.

    2012-12-01

    The objective of this work is to analyze the improvement in the performance of the particle filter by including a resample-move step or by using a modified Gaussian particle filter. Specifically, the standard particle filter structure is altered by the inclusion of the Markov chain Monte Carlo move step. The second choice adopted in this study uses the moments of an ensemble Kalman filter analysis to define the importance density function within the Gaussian particle filter structure. Both variants of the standard particle filter are used in the assimilation of densely sampled discharge records into a conceptual rainfall-runoff model. In order to quantify the obtained improvement, discharge root mean square errors are compared for different particle filters, as well as for the ensemble Kalman filter. First, a synthetic experiment is carried out. The results indicate that the performance of the standard particle filter can be improved by the inclusion of the resample-move step, but its effectiveness is limited to situations with limited particle impoverishment. The results also show that the modified Gaussian particle filter outperforms the rest of the filters. Second, a real experiment is carried out in order to validate the findings from the synthetic experiment. The addition of the resample-move step does not show a considerable improvement due to performance limitations in the standard particle filter with real data. On the other hand, when an optimal importance density function is used in the Gaussian particle filter, the results show a considerably improved performance of the particle filter.
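
    A toy sketch of the resample-move mechanism for a single assimilation step with a scalar state and Gaussian prior and likelihood (so the exact posterior is known); this illustrates the mechanism only and is not the rainfall-runoff implementation:

        import numpy as np

        rng = np.random.default_rng(11)

        # Toy problem: prior x ~ N(0, 1), observation y = x + N(0, 0.5^2)
        y_obs, obs_sd = 1.2, 0.5
        log_lik = lambda x: -0.5 * ((y_obs - x) / obs_sd) ** 2
        log_post = lambda x: -0.5 * x ** 2 + log_lik(x)

        # 1. Importance weighting of prior samples (standard particle filter update)
        n = 2000
        particles = rng.normal(0.0, 1.0, n)
        w = np.exp(log_lik(particles) - log_lik(particles).max())
        w /= w.sum()

        # 2. Systematic resampling
        positions = (rng.random() + np.arange(n)) / n
        particles = particles[np.searchsorted(np.cumsum(w), positions)]

        # 3. Move step: one Metropolis random-walk update targeting the posterior,
        #    which rejuvenates the duplicates created by resampling
        proposal = particles + 0.3 * rng.normal(size=n)
        accept = np.log(rng.random(n)) < log_post(proposal) - log_post(particles)
        particles = np.where(accept, proposal, particles)

        exact_var = 1.0 / (1.0 + 1.0 / obs_sd ** 2)
        print("particle mean:", particles.mean(), "exact:", exact_var * y_obs / obs_sd ** 2)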

  9. GaussianCpG: a Gaussian model for detection of CpG island in human genome sequences.

    PubMed

    Yu, Ning; Guo, Xuan; Zelikovsky, Alexander; Pan, Yi

    2017-05-24

    As crucial markers for identifying biological elements and processes in mammalian genomes, CpG islands (CGIs) play important roles in DNA methylation, gene regulation, epigenetic inheritance, gene mutation, chromosome inactivation and nucleosome retention. The generally accepted criteria for a CGI are: (a) a %G+C content ≥ 50%, (b) a ratio of observed to expected CpG content ≥ 0.6, and (c) a length greater than 200 nucleotides. Most existing computational methods for the prediction of CpG islands are built on these rules. However, many experimentally verified CpG islands deviate from these artificial criteria. Experiments indicate that in many cases %G+C is < 50%, CpGobs/CpGexp varies, and the length of a CGI ranges from eight nucleotides to a few thousand nucleotides. This implies that CGI detection is not simply a statistical task and that some rules probably remain unrevealed. A novel Gaussian model, GaussianCpG, is developed for the detection of CpG islands in the human genome. We analyze the energy distribution over the genomic primary structure for each CpG site and adopt parameters from the statistics of the human genome. The evaluation results show that the new model can predict CpG islands efficiently by balancing both sensitivity and specificity over known human CGI data sets. Compared with other models, GaussianCpG achieves better performance in CGI detection. Our Gaussian model aims to simplify the complex interaction between nucleotides. The model is computed not by a linear statistical method but by Gaussian energy distribution and accumulation. The parameters of the Gaussian function are not arbitrarily designated but deliberately chosen by optimizing the biological statistics. Using pseudopotential analysis on CpG islands, the novel model is validated on both real and artificial data sets.
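
    The rule-based baseline that GaussianCpG is compared against can be written down directly; a minimal sketch of that conventional check (not the Gaussian model itself):

        def is_cpg_island(seq):
            """Classic criteria: GC content >= 50%, observed/expected CpG >= 0.6, length > 200 nt."""
            seq = seq.upper()
            n = len(seq)
            if n <= 200:
                return False
            c, g = seq.count("C"), seq.count("G")
            cpg = seq.count("CG")
            gc_content = (c + g) / n
            obs_exp = (cpg * n) / (c * g) if c * g > 0 else 0.0
            return gc_content >= 0.5 and obs_exp >= 0.6

        # Hypothetical fragments
        print(is_cpg_island("CG" * 150))   # True: CpG-rich 300-nt fragment
        print(is_cpg_island("AT" * 150))   # False: no CpG content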

  10. Temporal variability of spectro-temporal receptive fields in the anesthetized auditory cortex.

    PubMed

    Meyer, Arne F; Diepenbrock, Jan-Philipp; Ohl, Frank W; Anemüller, Jörn

    2014-01-01

    Temporal variability of neuronal response characteristics during sensory stimulation is a ubiquitous phenomenon that may reflect processes such as stimulus-driven adaptation, top-down modulation or spontaneous fluctuations. It poses a challenge to functional characterization methods such as the receptive field, since these often assume stationarity. We propose a novel method for estimation of sensory neurons' receptive fields that extends the classic static linear receptive field model to the time-varying case. Here, the long-term estimate of the static receptive field serves as the mean of a probabilistic prior distribution from which the short-term temporally localized receptive field may deviate stochastically with time-varying standard deviation. The derived corresponding generalized linear model permits robust characterization of temporal variability in receptive field structure also for highly non-Gaussian stimulus ensembles. We computed and analyzed short-term auditory spectro-temporal receptive field (STRF) estimates with characteristic temporal resolution 5-30 s based on model simulations and responses from in total 60 single-unit recordings in anesthetized Mongolian gerbil auditory midbrain and cortex. Stimulation was performed with short (100 ms) overlapping frequency-modulated tones. Results demonstrate identification of time-varying STRFs, with obtained predictive model likelihoods exceeding those from baseline static STRF estimation. Quantitative characterization of STRF variability reveals a higher degree thereof in auditory cortex compared to midbrain. Cluster analysis indicates that significant deviations from the long-term static STRF are brief, but reliably estimated. We hypothesize that the observed variability more likely reflects spontaneous or state-dependent internal fluctuations that interact with stimulus-induced processing, rather than experimental or stimulus design.

  11. SU-E-T-617: Plan Quality Estimation of Intensity-Modulated Radiotherapy Cases for Lung Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koo, J; Yoon, M; Chung, W

    Purpose: To estimate the planning quality of intensity-modulated radiotherapy in lung cancer cases and to provide preliminary data for the development of a planning quality assurance algorithm. Methods: 42 IMRT plans previously used in cases of solitary lung cancer were collected. Organs in or near the thoracic cavity, such as the lung (ipsilateral, contralateral), heart, liver, esophagus, cord and bronchus, were considered as organs at risk (OARs) in this study. The coverage index (CVI), conformity index (CI), homogeneity index (HI), volume, and irregularity (standard deviation of center-surface distance) were used to compare PTV dose characteristics. The effective uniform dose (EUD), V10Gy, and V20Gy of the OARs were used to compare OAR dose characteristics. Results: Average CVI, CI, and HI values were 0.9, 0.8, and 0.1, respectively. CVI and CI had narrow Gaussian distribution curves without a singular value, but one case had a relatively high HI (0.25) because of the location and irregular shape of its PTV (irregularity of 18.5 versus an average of 12.5). EUDs tended to decrease as OAR-PTV distance increased and OAR-PTV overlap volume decreased. Conclusion: This work indicates the potential for significant plan quality deviation among similar lung cancer cases. Considering that this study was from a single department, differences in the treatment results for a given patient would be much more pronounced if multiple departments (and therefore more planners) were involved. Therefore, further examination of QA protocols is needed to reduce deviations in radiation treatment planning.

  12. Upgraded FAA Airfield Capacity Model. Volume 1. Supplemental User’s Guide

    DTIC Science & Technology

    1981-02-01

    SIGMAR (F4.0) cc 1-4 - standard deviation, in seconds, of arrival runway occupancy time (R.O.T.). SIGMAA (F4.0) cc 5-8 - standard deviation, in seconds... SIGMAC - The standard deviation of the time from departure clearance to start of roll. SIGMAR - The standard deviation of the arrival runway

  13. A Visual Model for the Variance and Standard Deviation

    ERIC Educational Resources Information Center

    Orris, J. B.

    2011-01-01

    This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays show how the standard deviation is the size of the average square.
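
    A minimal numerical companion to that picture, with made-up scores and the population form of the variance: each squared deviation is the area of a square, the variance is the area of the average square, and the standard deviation is that square's side length.

        import numpy as np

        scores = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
        deviations = scores - scores.mean()
        areas = deviations ** 2        # each squared deviation is a square's area
        variance = areas.mean()        # area of the "average square"
        std_dev = np.sqrt(variance)    # side length of the average square
        print(variance, std_dev)       # 4.0 2.0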

  14. Variability of pesticide detections and concentrations in field replicate water samples collected for the National Water-Quality Assessment Program, 1992-97

    USGS Publications Warehouse

    Martin, Jeffrey D.

    2002-01-01

    Correlation analysis indicates that for most pesticides and concentrations, pooled estimates of relative standard deviation rather than pooled estimates of standard deviation should be used to estimate variability because pooled estimates of relative standard deviation are less affected by heteroscedasticity. The median pooled relative standard deviation was calculated for all pesticides to summarize the typical variability for pesticide data collected for the NAWQA Program. The median pooled relative standard deviation was 15 percent at concentrations less than 0.01 micrograms per liter (µg/L), 13 percent at concentrations near 0.01 µg/L, 12 percent at concentrations near 0.1 µg/L, 7.9 percent at concentrations near 1 µg/L, and 2.7 percent at concentrations greater than 5 µg/L. Pooled estimates of standard deviation or relative standard deviation presented in this report are larger than estimates based on averages, medians, smooths, or regression of the individual measurements of standard deviation or relative standard deviation from field replicates. Pooled estimates, however, are the preferred method for characterizing variability because they provide unbiased estimates of the variability of the population. Assessments of variability based on standard deviation (rather than variance) underestimate the true variability of the population. Because pooled estimates of variability are larger than estimates based on other approaches, users of estimates of variability must be cognizant of the approach used to obtain the estimate and must use caution in the comparison of estimates based on different approaches.
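
    For field replicate pairs, the within-pair standard deviation is |x1 − x2|/√2, and pooling averages the variances (or relative variances), not the standard deviations; a minimal sketch with hypothetical concentrations, not the NAWQA data or code:

        import numpy as np

        def pooled_std_and_rsd(pairs):
            """Pooled standard deviation and pooled relative standard deviation
            from replicate pairs, given as an array of shape (n_pairs, 2)."""
            pairs = np.asarray(pairs, dtype=float)
            means = pairs.mean(axis=1)
            variances = 0.5 * (pairs[:, 0] - pairs[:, 1]) ** 2   # one degree of freedom per pair
            pooled_std = np.sqrt(variances.mean())
            pooled_rsd = np.sqrt(np.mean(variances / means ** 2))
            return pooled_std, pooled_rsd

        # Hypothetical replicate concentrations in micrograms per liter
        pairs = [(0.010, 0.012), (0.105, 0.095), (1.02, 0.98), (5.3, 5.1)]
        sd, rsd = pooled_std_and_rsd(pairs)
        print(f"pooled SD = {sd:.4f} ug/L, pooled RSD = {100 * rsd:.1f}%")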

  15. Basic life support: evaluation of learning using simulation and immediate feedback devices.

    PubMed

    Tobase, Lucia; Peres, Heloisa Helena Ciqueto; Tomazini, Edenir Aparecida Sartorelli; Teodoro, Simone Valentim; Ramos, Meire Bruna; Polastri, Thatiane Facholi

    2017-10-30

    to evaluate students' learning in an online course on basic life support with immediate feedback devices, during a simulation of care during cardiorespiratory arrest. a quasi-experimental study, using a before-and-after design. An online course on basic life support was developed and administered to participants, as an educational intervention. Theoretical learning was evaluated by means of a pre- and post-test and, to verify the practice, simulation with immediate feedback devices was used. there were 62 participants, 87% female, 90% in the first and second year of college, with a mean age of 21.47 (standard deviation 2.39). With a 95% confidence level, the mean scores in the pre-test were 6.4 (standard deviation 1.61), and 9.3 in the post-test (standard deviation 0.82, p <0.001); in practice, 9.1 (standard deviation 0.95) with performance equivalent to basic cardiopulmonary resuscitation, according to the feedback device; 43.7 (standard deviation 26.86) mean duration of the compression cycle by second of 20.5 (standard deviation 9.47); number of compressions 167.2 (standard deviation 57.06); depth of compressions of 48.1 millimeter (standard deviation 10.49); volume of ventilation 742.7 (standard deviation 301.12); flow fraction percentage of 40.3 (standard deviation 10.03). the online course contributed to learning of basic life support. In view of the need for technological innovations in teaching and systematization of cardiopulmonary resuscitation, simulation and feedback devices are resources that favor learning and performance awareness in performing the maneuvers.

  16. Comment on "Universal relation between skewness and kurtosis in complex dynamics"

    NASA Astrophysics Data System (ADS)

    Celikoglu, Ahmet; Tirnakli, Ugur

    2015-12-01

    In a recent paper [M. Cristelli, A. Zaccaria, and L. Pietronero, Phys. Rev. E 85, 066108 (2012), 10.1103/PhysRevE.85.066108], the authors analyzed the relation between skewness and kurtosis for complex dynamical systems, and they identified two power-law regimes of non-Gaussianity, one of which scales with an exponent of 2 and the other with 4/3. They concluded that the observed relation is a universal fact in complex dynamical systems. In this Comment, we test the proposed universal relation between skewness and kurtosis with a large number of synthetic data, and we show that in fact it is not a universal relation and originates only from the small number of data points in the datasets considered. The proposed relation is tested using a family of non-Gaussian distributions known as q-Gaussians. We show that this relation disappears for sufficiently large datasets provided that the fourth moment of the distribution is finite. We find that the kurtosis saturates to a single value, which is of course different from the Gaussian case (K = 3), as the number of data points is increased, and this indicates that the kurtosis will converge to a finite single value if all moments of the distribution up to the fourth are finite. The converged kurtosis value for finite-fourth-moment distributions and the number of data points needed to reach this value depend on the deviation of the original distribution from the Gaussian case.

  17. MR-Consistent Simultaneous Reconstruction of Attenuation and Activity for Non-TOF PET/MR

    NASA Astrophysics Data System (ADS)

    Heußer, Thorsten; Rank, Christopher M.; Freitag, Martin T.; Dimitrakopoulou-Strauss, Antonia; Schlemmer, Heinz-Peter; Beyer, Thomas; Kachelrieß, Marc

    2016-10-01

    Attenuation correction (AC) is required for accurate quantification of the reconstructed activity distribution in positron emission tomography (PET). For simultaneous PET/magnetic resonance (MR), however, AC is challenging, since the MR images do not provide direct information on the attenuating properties of the underlying tissue. Standard MR-based AC does not account for the presence of bone and thus leads to an underestimation of the activity distribution. To improve quantification for non-time-of-flight PET/MR, we propose an algorithm which simultaneously reconstructs activity and attenuation distribution from the PET emission data using available MR images as anatomical prior information. The MR information is used to derive voxel-dependent expectations on the attenuation coefficients. The expectations are modeled using Gaussian-like probability functions. An iterative reconstruction scheme incorporating the prior information on the attenuation coefficients is used to update attenuation and activity distribution in an alternating manner. We tested and evaluated the proposed algorithm for simulated 3D PET data of the head and the pelvis region. Activity deviations were below 5% in soft tissue and lesions compared to the ground truth whereas standard MR-based AC resulted in activity underestimation values of up to 12%.

  18. Bias correction in the realized stochastic volatility model for daily volatility on the Tokyo Stock Exchange

    NASA Astrophysics Data System (ADS)

    Takaishi, Tetsuya

    2018-06-01

    The realized stochastic volatility model has been introduced to estimate more accurate volatility by using both daily returns and realized volatility. The main advantage of the model is that no special bias-correction factor for the realized volatility is required a priori. Instead, the model introduces a bias-correction parameter responsible for the bias hidden in realized volatility. We empirically investigate the bias-correction parameter for realized volatilities calculated at various sampling frequencies for six stocks on the Tokyo Stock Exchange, and then show that the dynamic behavior of the bias-correction parameter as a function of sampling frequency is qualitatively similar to that of the Hansen-Lunde bias-correction factor although their values are substantially different. Under the stochastic diffusion assumption of the return dynamics, we investigate the accuracy of estimated volatilities by examining the standardized returns. We find that while the moments of the standardized returns from low-frequency realized volatilities are consistent with the expectation from the Gaussian variables, the deviation from the expectation becomes considerably large at high frequencies. This indicates that the realized stochastic volatility model itself cannot completely remove bias at high frequencies.

  19. GAUSSIAN 76: An ab initio Molecular Orbital Program

    DOE R&D Accomplishments Database

    Binkley, J. S.; Whiteside, R.; Hariharan, P. C.; Seeger, R.; Hehre, W. J.; Lathan, W. A.; Newton, M. D.; Ditchfield, R.; Pople, J. A.

    1978-01-01

    Gaussian 76 is a general-purpose computer program for ab initio Hartree-Fock molecular orbital calculations. It can handle basis sets involving s, p and d-type Gaussian functions. Certain standard sets (STO-3G, 4-31G, 6-31G*, etc.) are stored internally for easy use. Closed shell (RHF) or unrestricted open shell (UHF) wave functions can be obtained. Facilities are provided for geometry optimization to potential minima and for limited potential surface scans.

  20. An empirical description of the dispersion of 5th and 95th percentiles in worldwide anthropometric data applied to estimating accommodation with unknown correlation values.

    PubMed

    Albin, Thomas J; Vink, Peter

    2015-01-01

    Anthropometric data are assumed to have a Gaussian (Normal) distribution, but if non-Gaussian, accommodation estimates are affected. When data are limited, users may choose to combine anthropometric elements by Combining Percentiles (CP) (adding or subtracting), despite known adverse effects. This study examined whether global anthropometric data are Gaussian distributed. It compared the Median Correlation Method (MCM) of combining anthropometric elements with unknown correlations to CP to determine if MCM provides better estimates of percentile values and accommodation. Percentile values of 604 male and female anthropometric data drawn from seven countries worldwide were expressed as standard scores. The standard scores were tested to determine if they were consistent with a Gaussian distribution. Empirical multipliers for determining percentile values were developed.In a test case, five anthropometric elements descriptive of seating were combined in addition and subtraction models. Percentile values were estimated for each model by CP, MCM with Gaussian distributed data, or MCM with empirically distributed data. The 5th and 95th percentile values of a dataset of global anthropometric data are shown to be asymmetrically distributed. MCM with empirical multipliers gave more accurate estimates of 5th and 95th percentiles values. Anthropometric data are not Gaussian distributed. The MCM method is more accurate than adding or subtracting percentiles.

  1. Gaussian and Lognormal Models of Hurricane Gust Factors

    NASA Technical Reports Server (NTRS)

    Merceret, Frank

    2009-01-01

    A document describes a tool that predicts the likelihood of land-falling tropical storms and hurricanes exceeding specified peak speeds, given the mean wind speed at various heights of up to 500 feet (150 meters) above ground level. Empirical models to calculate the mean and standard deviation of the gust factor as a function of height and mean wind speed were developed in Excel based on data from previous hurricanes. Separate models were developed for Gaussian and offset lognormal distributions of the gust factor. Rather than forecasting a single, specific peak wind speed, this tool provides a probability of exceeding a specified value. This probability is provided as a function of height, allowing it to be applied at a height appropriate for tall structures. The user inputs the mean wind speed, height, and operational threshold. The tool produces the probability from each model that the given threshold will be exceeded. This application does have its limits: the models were tested only in tropical storm conditions associated with the periphery of hurricanes. Winds of similar speed produced by non-tropical systems may have different turbulence dynamics and stability, which may change those winds' statistical characteristics. These models were developed along the Central Florida seacoast, and their results may not accurately extrapolate to inland areas, or even to coastal sites that are different from those used to build the models. Although this tool cannot be generalized for use in different environments, its methodology could be applied to those locations to develop a similar tool tuned to local conditions.

  2. Uncertainty estimation of predictions of peptides' chromatographic retention times in shotgun proteomics.

    PubMed

    Maboudi Afkham, Heydar; Qiu, Xuanbin; The, Matthew; Käll, Lukas

    2017-02-15

Liquid chromatography is frequently used as a means to reduce the complexity of peptide mixtures in shotgun proteomics. For such systems, the time when a peptide is released from a chromatography column and registered in the mass spectrometer is referred to as the peptide's retention time. Using heuristics or machine learning techniques, previous studies have demonstrated that it is possible to predict the retention time of a peptide from its amino acid sequence. In this paper, we apply Gaussian Process Regression to the feature representation of a previously described predictor, Elude. Using this framework, we demonstrate that it is possible to estimate the uncertainty of the prediction made by the model, and we show how this uncertainty relates to the actual error of the prediction. In our experiments, we observe a strong correlation between the estimated uncertainty provided by Gaussian Process Regression and the actual prediction error. This relation provides us with new means for assessment of the predictions. We demonstrate how a subset of the peptides can be selected with lower prediction error compared to the whole set. We also demonstrate how such predicted standard deviations can be used for designing adaptive windowing strategies. Contact: lukas.kall@scilifelab.se. Our software and the data used in our experiments are publicly available and can be downloaded from https://github.com/statisticalbiotechnology/GPTime. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
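
    The core ingredient, a per-prediction standard deviation from Gaussian Process Regression, can be sketched with generic scikit-learn code (this is not the GPTime implementation; the features and targets below are synthetic stand-ins for peptide data):

    ```python
    # Per-prediction uncertainty with Gaussian Process Regression, and selection of
    # a "confident" subset with lower prediction error.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))              # stand-in peptide feature vectors
    y = X @ np.array([2.0, -1.0, 0.5, 0.0, 1.5]) + rng.normal(scale=0.3, size=200)

    kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

    # return_std=True gives a predictive standard deviation for every peptide,
    # which can be thresholded to keep only the most confident predictions.
    y_pred, y_std = gpr.predict(X, return_std=True)
    confident = y_std < np.percentile(y_std, 50)
    print("median |error|, all:      ", np.median(np.abs(y_pred - y)))
    print("median |error|, confident:", np.median(np.abs(y_pred[confident] - y[confident])))
    ```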

  3. Analysis of fluid flow and solute transport through a single fracture with variable apertures intersecting a canister: Comparison between fractal and Gaussian fractures

    NASA Astrophysics Data System (ADS)

    Liu, L.; Neretnieks, I.

    Canisters with spent nuclear fuel will be deposited in fractured crystalline rock in the Swedish concept for a final repository. The fractures intersect the canister holes at different angles and they have variable apertures and therefore locally varying flowrates. Our previous model with fractures with a constant aperture and a 90° intersection angle is now extended to arbitrary intersection angles and stochastically variable apertures. It is shown that the previous basic model can be simply amended to account for these effects. More importantly, it has been found that the distributions of the volumetric and the equivalent flow rates are all close to the Normal for both fractal and Gaussian fractures, with the mean of the distribution of the volumetric flow rate being determined solely by the hydraulic aperture, and that of the equivalent flow rate being determined by the mechanical aperture. Moreover, the standard deviation of the volumetric flow rates of the many realizations increases with increasing roughness and spatial correlation length of the aperture field, and so does that of the equivalent flow rates. Thus, two simple statistical relations can be developed to describe the stochastic properties of fluid flow and solute transport through a single fracture with spatially variable apertures. This obviates, then, the need to simulate each fracture that intersects a canister in great detail, and allows the use of complex fractures also in very large fracture network models used in performance assessment.

  4. Line-edge roughness performance targets for EUV lithography

    NASA Astrophysics Data System (ADS)

    Brunner, Timothy A.; Chen, Xuemei; Gabor, Allen; Higgins, Craig; Sun, Lei; Mack, Chris A.

    2017-03-01

Our paper uses stochastic simulations to explore how EUV pattern roughness can cause device failure through rare events, so-called "black swans". We examine the impact of stochastic noise on the yield of simple wiring patterns with 36 nm pitch, corresponding to 7 nm node logic, using a local Critical Dimension (CD)-based fail criterion. Contact hole failures are examined in a similar way. For our nominal EUV process, local CD uniformity variation and local Pattern Placement Error variation were observed, but no pattern failures were seen in the modest (few thousand) number of features simulated. We degraded the image quality by incorporating Moving Standard Deviation (MSD) blurring to degrade the Image Log-Slope (ILS), and were able to find conditions where pattern failures were observed. We determined the Line Width Roughness (LWR) value as a function of the ILS. By use of an artificial "step function" image degraded by various MSD blur, we were able to extend the LWR vs ILS curve into regimes that might be available for future EUV imagery. As we decreased the image quality, we observed LWR grow and also began to see pattern failures. For high image quality, we saw CD distributions that were symmetrical and close to Gaussian in shape. Lower image quality caused CD distributions that were asymmetric, with "fat tails" on the low-CD side (under-exposed) which were associated with pattern failures. Similar non-Gaussian CD distributions were associated with image conditions that caused missing contact holes, i.e. CD = 0.

  5. Characterization of the inhomogeneous barrier distribution in a Pt/(100)β-Ga2O3 Schottky diode via its temperature-dependent electrical properties

    NASA Astrophysics Data System (ADS)

    Jian, Guangzhong; He, Qiming; Mu, Wenxiang; Fu, Bo; Dong, Hang; Qin, Yuan; Zhang, Ying; Xue, Huiwen; Long, Shibing; Jia, Zhitai; Lv, Hangbing; Liu, Qi; Tao, Xutang; Liu, Ming

    2018-01-01

β-Ga2O3 is an ultra-wide bandgap semiconductor with applications in power electronic devices. Revealing the transport characteristics of β-Ga2O3 devices at various temperatures is important for improving device performance and reliability. In this study, we fabricated a Pt/β-Ga2O3 Schottky barrier diode with good performance characteristics, such as a low ON-resistance, high forward current, and a large rectification ratio. Its temperature-dependent current-voltage and capacitance-voltage characteristics were measured at various temperatures. The characteristic diode parameters were derived using thermionic emission theory. The ideality factor n was found to decrease from 2.57 to 1.16 while the zero-bias barrier height Φb0 increased from 0.47 V to 1.00 V when the temperature was increased from 125 K to 350 K. This was explained by the Gaussian distribution of barrier height inhomogeneity. The mean barrier height Φ̄b0 = 1.27 V and zero-bias standard deviation σ0 = 0.13 V were obtained. A modified Richardson plot gave a Richardson constant A* of 36.02 A·cm⁻²·K⁻², which is close to the theoretical value of 41.11 A·cm⁻²·K⁻². The differences between the barrier heights determined using the capacitance-voltage and current-voltage curves were also in line with the Gaussian distribution of barrier height inhomogeneity.
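
    In the Gaussian barrier-inhomogeneity picture, the apparent zero-bias barrier height seen by thermionic emission is Φ_ap(T) = Φ̄b0 − qσ0²/(2kT). A short numerical check using the mean and standard deviation quoted in the abstract (the temperature grid is arbitrary):

    ```python
    # Apparent barrier height versus temperature for a Gaussian distribution of
    # barrier heights: phi_ap(T) = phi_mean - sigma0^2 / (2 * (kT/q)).
    k_over_q = 8.617e-5        # Boltzmann constant, eV/K (i.e. V/K here)
    phi_mean = 1.27            # mean barrier height (V), from the abstract
    sigma0 = 0.13              # zero-bias standard deviation (V), from the abstract

    for T in (125, 200, 275, 350):
        phi_ap = phi_mean - sigma0**2 / (2.0 * k_over_q * T)
        print(f"T = {T:3d} K  ->  apparent barrier height ~ {phi_ap:.2f} V")
    ```

    The values at 125 K and 350 K come out near 0.49 V and 0.99 V, consistent with the measured range of zero-bias barrier heights reported above.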

  6. SU-E-T-146: Effects of Uncertainties of Radiation Sensitivity of Biological Modelling for Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oita, M; Department of Life System, Institute of Technology and Science, Graduate School, The Tokushima University; Uto, Y

Purpose: The aim of this study was to evaluate the distribution of uncertainty in cell survival after irradiation and to assess the usefulness of a stochastic biological model based on a Gaussian distribution. Methods: For single-cell experiments, exponentially growing cells were harvested from standard cell culture dishes by trypsinization and suspended in test tubes containing 1 ml of MEM (2×10^6 cells/ml). The hypoxic cultures were treated with 95% N2-5% CO2 gas for 30 minutes. In vitro radiosensitization was also measured in EMT6/KU single cells with a radiosensitizer added under hypoxic conditions. X-ray irradiation was carried out with an X-ray unit (Hitachi, model MBR-1505R3) using a 0.5 mm Al/1.0 mm Cu filter at 150 kV and 4 Gy/min. In the in vitro assay, cells on the dish were irradiated with 1 Gy to 24 Gy. After irradiation, colony formation assays were performed. Variations of the biological parameters were investigated for standard cell culture (n=16), hypoxic cell culture (n=45), and hypoxic cell culture with radiosensitizers (n=21). The data were obtained on separate schedules to take into account the variation of radiation sensitivity over the cell cycle. Results: For standard cell culture, hypoxic cell culture, and hypoxic cell culture with radiosensitizers, the median and standard deviation of the alpha/beta ratio were 37.1±73.4 Gy, 9.8±23.7 Gy, and 20.7±21.9 Gy, respectively. The average and standard deviation of D50 were 2.5±2.5 Gy, 6.1±2.2 Gy, and 3.6±1.3 Gy, respectively. Conclusion: In this study, we applied these parameter uncertainties to the biological model. The variation of the alpha values, beta values, and D50, as well as the cell culture conditions, may strongly affect the probability of cell death. Further research is in progress toward precise prediction of cell death and of tumor control probability for treatment planning.

  7. Non-Gaussian probabilistic MEG source localisation based on kernel density estimation☆

    PubMed Central

    Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny

    2014-01-01

    There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702
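
    The key ingredient, a multivariate kernel density estimate of a possibly non-Gaussian source distribution, can be sketched generically (synthetic heavy-tailed data, not MEG measurements, and not the authors' beamforming code):

    ```python
    # Joint kernel density estimate of two correlated, heavy-tailed "source"
    # amplitudes, as a minimal illustration of the pdf-estimation step.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(1)
    sources = np.vstack([rng.standard_t(df=3, size=5000),
                         rng.standard_t(df=3, size=5000)])
    sources[1] += 0.6 * sources[0]          # introduce correlation between sources

    kde = gaussian_kde(sources)             # estimate of the joint pdf
    grid = np.vstack([np.linspace(-5, 5, 101), np.zeros(101)])
    pdf_slice = kde(grid)                   # pdf along source 1 with source 2 fixed at 0

    print("estimated pdf at the origin:", kde([[0.0], [0.0]])[0])
    print("peak of the pdf slice:      ", pdf_slice.max())
    ```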

  8. Three-dimensional computation of laser cavity eigenmodes by the use of finite element analysis (FEA)

    NASA Astrophysics Data System (ADS)

    Altmann, Konrad; Pflaum, Christoph; Seider, David

    2004-06-01

    A new method for computing eigenmodes of a laser resonator by the use of finite element analysis (FEA) is presented. For this purpose, the scalar wave equation [Δ + k2]E(x,y,z) = 0 is transformed into a solvable 3D eigenvalue problem by separating out the propagation factor exp(-ikz) from the phasor amplitude E(x,y,z) of the time-harmonic electrical field. For standing wave resonators, the beam inside the cavity is represented by a two-wave ansatz. For cavities with parabolic optical elements the new approach has successfully been verified by the use of the Gaussian mode algorithm. For a DPSSL with a thermally lensing crystal inside the cavity the expected deviation between Gaussian approximation and numerical solution could be demonstrated clearly.

  9. Large-deviation properties of Brownian motion with dry friction.

    PubMed

    Chen, Yaming; Just, Wolfram

    2014-10-01

    We investigate piecewise-linear stochastic models with regard to the probability distribution of functionals of the stochastic processes, a question that occurs frequently in large deviation theory. The functionals that we are looking into in detail are related to the time a stochastic process spends at a phase space point or in a phase space region, as well as to the motion with inertia. For a Langevin equation with discontinuous drift, we extend the so-called backward Fokker-Planck technique for non-negative support functionals to arbitrary support functionals, to derive explicit expressions for the moments of the functional. Explicit solutions for the moments and for the distribution of the so-called local time, the occupation time, and the displacement are derived for the Brownian motion with dry friction, including quantitative measures to characterize deviation from Gaussian behavior in the asymptotic long time limit.

  10. Probability distributions of linear statistics in chaotic cavities and associated phase transitions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vivo, Pierpaolo; Majumdar, Satya N.; Bohigas, Oriol

    2010-03-01

We establish large deviation formulas for linear statistics on the N transmission eigenvalues (T_i) of a chaotic cavity, in the framework of random matrix theory. Given any linear statistic of interest A = Σ_{i=1}^{N} a(T_i), the probability distribution P_A(A,N) of A generically satisfies the large deviation formula lim_{N→∞} [-2 log P_A(Nx,N) / (β N²)] = Ψ_A(x), where Ψ_A(x) is a rate function that we compute explicitly in many cases (conductance, shot noise, and moments) and β corresponds to different symmetry classes. Using these large deviation expressions, it is possible to recover easily known results and to produce new formulas, such as a closed-form expression for v(n) = lim_{N→∞} var(T_n) (where T_n = Σ_i T_i^n) for arbitrary integer n. The universal limit v* = lim_{n→∞} v(n) = 1/(2πβ) is also computed exactly. The distributions display a central Gaussian region flanked on both sides by non-Gaussian tails. At the junction of the two regimes, weakly nonanalytical points appear, a direct consequence of phase transitions in an associated Coulomb gas problem. Numerical checks are also provided, which are in full agreement with our asymptotic results in both real and Laplace space even for moderately small N. Part of the results have been announced by Vivo et al. [Phys. Rev. Lett. 101, 216809 (2008)].

  11. Financial market dynamics: superdiffusive or not?

    NASA Astrophysics Data System (ADS)

    Devi, Sandhya

    2017-08-01

The behavior of stock market returns over a period of 1-60 d has been investigated for S&P 500 and Nasdaq within the framework of nonextensive Tsallis statistics. Even for such long terms, the distributions of the returns are non-Gaussian. They have fat tails indicating that the stock returns do not follow a random walk model. In this work, a good fit to a Tsallis q-Gaussian distribution is obtained for the distributions of all the returns using the method of Maximum Likelihood Estimate. For all the regions of data considered, the values of the scaling parameter q, estimated from 1 d returns, lie in the range 1.4-1.65. The estimated inverse mean square deviations (beta) show a power law behavior in time with exponent values between -0.91 and -1.1, indicating normal to mildly subdiffusive behavior. Quite often, the dynamics of market return distributions is modelled by a Fokker-Planck (FP) equation either with a linear drift and a nonlinear diffusion term or with just a nonlinear diffusion term. Both of these cases support a q-Gaussian distribution as a solution. The distributions obtained from current estimated parameters are compared with the solutions of the FP equations. For negligible drift term, the inverse mean square deviations (betaFP) from the FP model follow a power law with exponent values between -1.25 and -1.48, indicating superdiffusion. When the drift term is non-negligible, the corresponding betaFP do not follow a power law and become stationary after certain characteristic times that depend on the values of the drift parameter and q. Neither of these behaviors is supported by the results of the empirical fit.

  12. Mutual information of optical communication in phase-conjugating Gaussian channels

    NASA Astrophysics Data System (ADS)

    Schäfermeier, Clemens; Andersen, Ulrik L.

    2018-03-01

In practical communication channels, the code words typically consist of Gaussian states and the measurement strategy is often a Gaussian detector such as homodyning or heterodyning. We investigate the communication performance of a phase-conjugated alphabet with joint Gaussian detection in a phase-insensitive amplifying channel. We find that a communication scheme consisting of a phase-conjugated alphabet of coherent states and a joint detection strategy significantly outperforms a standard coherent-state strategy based on individual detection. Moreover, we show that the performance can be further enhanced by using entanglement and that the performance is completely independent of the gain of the phase-insensitively amplifying channel.

  13. Nanomechanical characterization of heterogeneous and hierarchical biomaterials and tissues using nanoindentation: the role of finite mixture models.

    PubMed

    Zadpoor, Amir A

    2015-03-01

    Mechanical characterization of biological tissues and biomaterials at the nano-scale is often performed using nanoindentation experiments. The different constituents of the characterized materials will then appear in the histogram that shows the probability of measuring a certain range of mechanical properties. An objective technique is needed to separate the probability distributions that are mixed together in such a histogram. In this paper, finite mixture models (FMMs) are proposed as a tool capable of performing such types of analysis. Finite Gaussian mixture models assume that the measured probability distribution is a weighted combination of a finite number of Gaussian distributions with separate mean and standard deviation values. Dedicated optimization algorithms are available for fitting such a weighted mixture model to experimental data. Moreover, certain objective criteria are available to determine the optimum number of Gaussian distributions. In this paper, FMMs are used for interpreting the probability distribution functions representing the distributions of the elastic moduli of osteoarthritic human cartilage and co-polymeric microspheres. As for cartilage experiments, FMMs indicate that at least three mixture components are needed for describing the measured histogram. While the mechanical properties of the softer mixture components, often assumed to be associated with Glycosaminoglycans, were found to be more or less constant regardless of whether two or three mixture components were used, those of the second mixture component (i.e. collagen network) considerably changed depending on the number of mixture components. Regarding the co-polymeric microspheres, the optimum number of mixture components estimated by the FMM theory, i.e. 3, nicely matches the number of co-polymeric components used in the structure of the polymer. The computer programs used for the presented analyses are made freely available online for other researchers to use. Copyright © 2014 Elsevier B.V. All rights reserved.
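
    The mixture-model step can be sketched with standard tools: fit Gaussian mixtures with several candidate component counts and select the count by an objective criterion such as BIC (the indentation moduli below are synthetic, not the paper's cartilage or microsphere data):

    ```python
    # Finite Gaussian mixture fit to a histogram of (synthetic) elastic moduli,
    # with the number of components chosen by the Bayesian information criterion.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(2)
    moduli = np.concatenate([rng.normal(0.1, 0.03, 400),    # "soft" constituent (synthetic)
                             rng.normal(0.5, 0.10, 200)])   # "stiff" constituent (synthetic)
    X = moduli.reshape(-1, 1)

    fits = {k: GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(1, 6)}
    best_k = min(fits, key=lambda k: fits[k].bic(X))
    best = fits[best_k]

    print("components selected by BIC:", best_k)
    print("component means:", best.means_.ravel())
    print("component weights:", best.weights_)
    ```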

  14. Levels of naturally occurring gamma radiation measured in British homes and their prediction in particular residences.

    PubMed

    Kendall, G M; Wakeford, R; Athanson, M; Vincent, T J; Carter, E J; McColl, N P; Little, M P

    2016-03-01

    Gamma radiation from natural sources (including directly ionising cosmic rays) is an important component of background radiation. In the present paper, indoor measurements of naturally occurring gamma rays that were undertaken as part of the UK Childhood Cancer Study are summarised, and it is shown that these are broadly compatible with an earlier UK National Survey. The distribution of indoor gamma-ray dose rates in Great Britain is approximately normal with mean 96 nGy/h and standard deviation 23 nGy/h. Directly ionising cosmic rays contribute about one-third of the total. The expanded dataset allows a more detailed description than previously of indoor gamma-ray exposures and in particular their geographical variation. Various strategies for predicting indoor natural background gamma-ray dose rates were explored. In the first of these, a geostatistical model was fitted, which assumes an underlying geologically determined spatial variation, superimposed on which is a Gaussian stochastic process with Matérn correlation structure that models the observed tendency of dose rates in neighbouring houses to correlate. In the second approach, a number of dose-rate interpolation measures were first derived, based on averages over geologically or administratively defined areas or using distance-weighted averages of measurements at nearest-neighbour points. Linear regression was then used to derive an optimal linear combination of these interpolation measures. The predictive performances of the two models were compared via cross-validation, using a randomly selected 70 % of the data to fit the models and the remaining 30 % to test them. The mean square error (MSE) of the linear-regression model was lower than that of the Gaussian-Matérn model (MSE 378 and 411, respectively). The predictive performance of the two candidate models was also evaluated via simulation; the OLS model performs significantly better than the Gaussian-Matérn model.
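
    A schematic version of the 70/30 model comparison, using ordinary least squares against a Gaussian-process model with a Matérn kernel; the dose-rate data and covariates here are simulated placeholders, and this is not the paper's geostatistical code:

    ```python
    # Compare predictive MSE of a linear-regression model and a Matern-kernel GP
    # on a random 70/30 train/test split of simulated indoor dose rates (nGy/h).
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern, WhiteKernel
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(3)
    X = rng.uniform(size=(600, 3))                       # stand-ins for interpolation measures
    y = 96 + 40 * X[:, 0] - 25 * X[:, 1] + rng.normal(scale=15, size=600)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=0)

    ols = LinearRegression().fit(X_tr, y_tr)
    gp = GaussianProcessRegressor(kernel=Matern(nu=1.5) + WhiteKernel(),
                                  normalize_y=True).fit(X_tr, y_tr)

    print("OLS MSE:   ", mean_squared_error(y_te, ols.predict(X_te)))
    print("Matern MSE:", mean_squared_error(y_te, gp.predict(X_te)))
    ```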

  15. A topological analysis of large-scale structure, studied using the CMASS sample of SDSS-III

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parihar, Prachi; Gott, J. Richard III; Vogeley, Michael S.

    2014-12-01

We study the three-dimensional genus topology of large-scale structure using the northern region of the CMASS Data Release 10 (DR10) sample of the SDSS-III Baryon Oscillation Spectroscopic Survey. We select galaxies with redshift 0.452 < z < 0.625 and with a stellar mass M_stellar > 10^11.56 M_☉. We study the topology at two smoothing lengths: R_G = 21 h^-1 Mpc and R_G = 34 h^-1 Mpc. The genus topology studied at the R_G = 21 h^-1 Mpc scale results in the highest genus amplitude observed to date. The CMASS sample yields a genus curve that is characteristic of one produced by Gaussian random phase initial conditions. The data thus support the standard model of inflation where random quantum fluctuations in the early universe produced Gaussian random phase initial conditions. Modest deviations in the observed genus from random phase are as expected from shot noise effects and the nonlinear evolution of structure. We suggest the use of a fitting formula motivated by perturbation theory to characterize the shift and asymmetries in the observed genus curve with a single parameter. We construct 54 mock SDSS CMASS surveys along the past light cone from the Horizon Run 3 (HR3) N-body simulations, where gravitationally bound dark matter subhalos are identified as the sites of galaxy formation. We study the genus topology of the HR3 mock surveys with the same geometry and sampling density as the observational sample and find the observed genus topology to be consistent with ΛCDM as simulated by the HR3 mock samples. We conclude that the topology of the large-scale structure in the SDSS CMASS sample is consistent with cosmological models having primordial Gaussian density fluctuations growing in accordance with general relativity to form galaxies in massive dark matter halos.

  16. Temporal self-splitting of optical pulses

    NASA Astrophysics Data System (ADS)

    Ding, Chaoliang; Koivurova, Matias; Turunen, Jari; Pan, Liuzhan

    2018-05-01

    We present mathematical models for temporally and spectrally partially coherent pulse trains with Laguerre-Gaussian and Hermite-Gaussian Schell-model statistics as extensions of the standard Gaussian Schell model for pulse trains. We derive propagation formulas of both classes of pulsed fields in linearly dispersive media and in temporal optical systems. It is found that, in general, both types of fields exhibit time-domain self-splitting upon propagation. The Laguerre-Gaussian model leads to multiply peaked pulses, while the Hermite-Gaussian model leads to doubly peaked pulses, in the temporal far field (in dispersive media) or at the Fourier plane of a temporal system. In both model fields the character of the self-splitting phenomenon depends both on the degree of temporal and spectral coherence and on the power spectrum of the field.

  17. How do we assign punishment? The impact of minimal and maximal standards on the evaluation of deviants.

    PubMed

    Kessler, Thomas; Neumann, Jörg; Mummendey, Amélie; Berthold, Anne; Schubert, Thomas; Waldzus, Sven

    2010-09-01

    To explain the determinants of negative behavior toward deviants (e.g., punishment), this article examines how people evaluate others on the basis of two types of standards: minimal and maximal. Minimal standards focus on an absolute cutoff point for appropriate behavior; accordingly, the evaluation of others varies dichotomously between acceptable or unacceptable. Maximal standards focus on the degree of deviation from that standard; accordingly, the evaluation of others varies gradually from positive to less positive. This framework leads to the prediction that violation of minimal standards should elicit punishment regardless of the degree of deviation, whereas punishment in response to violations of maximal standards should depend on the degree of deviation. Four studies assessed or manipulated the type of standard and degree of deviation displayed by a target. Results consistently showed the expected interaction between type of standard (minimal and maximal) and degree of deviation on punishment behavior.

  18. Automatic Differentiation in Quantum Chemistry with Applications to Fully Variational Hartree-Fock.

    PubMed

    Tamayo-Mendoza, Teresa; Kreisbeck, Christoph; Lindh, Roland; Aspuru-Guzik, Alán

    2018-05-23

Automatic differentiation (AD) is a powerful tool that allows calculating derivatives of implemented algorithms with respect to all of their parameters up to machine precision, without the need to explicitly add any additional functions. Thus, AD has great potential in quantum chemistry, where gradients are omnipresent but also difficult to obtain, and researchers typically spend a considerable amount of time finding suitable analytical forms when implementing derivatives. Here, we demonstrate that AD can be used to compute gradients with respect to any parameter throughout a complete quantum chemistry method. We present DiffiQult, a Hartree-Fock implementation, entirely differentiated with the use of AD tools. DiffiQult is a software package written in plain Python with minimal deviation from standard code which illustrates the capability of AD to save human effort and time in implementations of exact gradients in quantum chemistry. We leverage the obtained gradients to optimize the parameters of one-particle basis sets in the context of the floating Gaussian framework.
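
    A toy illustration of the idea (not the DiffiQult code, and assuming the JAX library is available): automatically differentiate a variational energy with respect to a Gaussian basis exponent and optimize it by gradient descent. For a single s-type Gaussian trial function on the hydrogen atom, E(α) = 3α/2 − 2√(2α/π) in atomic units.

    ```python
    # Gradient-based optimization of a single Gaussian exponent using automatic
    # differentiation (jax.grad); the analytic optimum is alpha = 8/(9*pi).
    import jax
    import jax.numpy as jnp

    def energy(alpha):
        # Variational energy of H with a single s-type Gaussian, atomic units.
        return 1.5 * alpha - 2.0 * jnp.sqrt(2.0 * alpha / jnp.pi)

    grad_energy = jax.grad(energy)

    alpha = 1.0
    for _ in range(200):                       # plain gradient descent
        alpha = alpha - 0.1 * grad_energy(alpha)

    print(f"optimized exponent: {float(alpha):.4f} (analytic optimum ~ 0.2829)")
    print(f"energy: {float(energy(alpha)):.4f} Hartree (exact ground state -0.5)")
    ```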

  19. Numerical Investigation of the Microscopic Heat Current Inside a Nanofluid System Based on Molecular Dynamics Simulation and Wavelet Analysis.

    PubMed

    Jia, Tao; Gao, Di

    2018-04-03

Molecular dynamics simulation is employed to investigate the microscopic heat current inside an argon-copper nanofluid. Wavelet analysis of the microscopic heat current inside the nanofluid system is conducted. The signal of the microscopic heat current is decomposed into two parts: the approximation part and the detail part. The approximation part is associated with the low-frequency content of the signal, and the detail part is associated with the high-frequency content. The probability distributions of both the high-frequency and the low-frequency parts demonstrate Gaussian-like characteristics. Curves were fitted to the probability distribution of the microscopic heat current, and their parameters, including the mean value and the standard deviation, change dramatically between the cases before and after adding copper nanoparticles to the argon base fluid.
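
    A minimal sketch of this decomposition, assuming the PyWavelets package and a synthetic stand-in for the heat-current signal: split the signal into approximation and detail parts with a one-level discrete wavelet transform and fit a Gaussian to each part.

    ```python
    # Single-level DWT of a synthetic signal, then Gaussian (mean, std) fits to the
    # low-frequency (approximation) and high-frequency (detail) coefficients.
    import numpy as np
    import pywt
    from scipy.stats import norm

    rng = np.random.default_rng(4)
    signal = rng.normal(size=4096) + 0.2 * np.sin(np.linspace(0, 40 * np.pi, 4096))

    approx, detail = pywt.dwt(signal, "db4")   # low- and high-frequency parts

    for name, part in (("approximation", approx), ("detail", detail)):
        mu, sd = norm.fit(part)
        print(f"{name:13s}: mean = {mu:+.3f}, std = {sd:.3f}")
    ```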

  20. Observation of the Baryonic Flavor-Changing Neutral Current Decay Λ b 0 → Λµ +µ -

    DOE PAGES

    Aaltonen, T.

    2011-11-08

The authors report the first observation of the baryonic flavor-changing neutral current decay Λb0 → Λµ+µ- with 24 signal events and a statistical significance of 5.8 Gaussian standard deviations. This measurement uses a pp̄ collision data sample corresponding to 6.8 fb^-1 at √s = 1.96 TeV collected by the CDF II detector at the Tevatron collider. The total and differential branching ratios for Λb0 → Λµ+µ- are measured. They find B(Λb0 → Λµ+µ-) = [1.73 ± 0.42(stat) ± 0.55(syst)] × 10^-6. They also report the first measurement of the differential branching ratio of Bs0 → φµ+µ- using 49 signal events. In addition, they report branching ratios for B+ → K+µ+µ-, B0 → K0µ+µ-, and B → K*(892)µ+µ- decays.

  1. Dispersion of Heat Flux Sensors Manufactured in Silicon Technology.

    PubMed

    Ziouche, Katir; Lejeune, Pascale; Bougrioua, Zahia; Leclercq, Didier

    2016-06-09

In this paper, we focus on the dispersion performances related to the manufacturing process of heat flux sensors realized in CMOS (complementary metal-oxide-semiconductor) compatible 3-in technology. In particular, we have studied the performance dispersion of our sensors and linked it to the dispersion of the physical characteristics of the materials used. This information is mandatory to ensure low-cost manufacturing and especially to reduce production rejects during the fabrication process. The results obtained show that the measured sensitivity of the sensors is in the range 3.15 to 6.56 μV/(W/m²), associated with measured resistances ranging from 485 to 675 kΩ. The dispersions follow a Gaussian-type distribution, with more than 90% of the sensors falling around the average sensitivity S̄e = 4.5 µV/(W/m²) and average electrical resistance R̄ = 573.5 kΩ, within an interval of the average plus or minus twice the relative standard deviation.

  2. Intraventricular Flow Velocity Vector Visualization Based on the Continuity Equation and Measurements of Vorticity and Wall Shear Stress

    NASA Astrophysics Data System (ADS)

    Itatani, Keiichi; Okada, Takashi; Uejima, Tokuhisa; Tanaka, Tomohiko; Ono, Minoru; Miyaji, Kagami; Takenaka, Katsu

    2013-07-01

We have developed a system to estimate velocity vector fields inside the cardiac ventricle by echocardiography and to evaluate several flow dynamical parameters to assess the pathophysiology of cardiovascular diseases. A two-dimensional continuity equation was applied to color Doppler data using speckle tracking data as boundary conditions, and the velocity component perpendicular to the echo beam line was obtained. We determined the optimal smoothing method for the color Doppler data; a Gaussian filter with an 8-pixel standard deviation provided vorticity without nonphysiological stripe-shaped noise. We also determined the weight function at the bilateral boundaries given by the speckle tracking data of the ventricle or vascular wall motion; a weight function linear in the distance from the boundary provided accurate flow velocities not only inside the vortex flow but also in near-wall regions, based on validation against a digital phantom of a pipe flow model.
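
    A minimal sketch of the smoothing step described above, applied to a synthetic stand-in for the Doppler velocity map (not the authors' system):

    ```python
    # 2-D Gaussian smoothing with an 8-pixel standard deviation, followed by simple
    # finite-difference derivatives (one ingredient of a vorticity calculation).
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(5)
    doppler = rng.normal(size=(256, 256))            # synthetic Doppler velocity map

    smoothed = gaussian_filter(doppler, sigma=8)     # 8-pixel standard deviation

    dv_dy, dv_dx = np.gradient(smoothed)
    print("std before/after smoothing:", doppler.std(), smoothed.std())
    print("std of x-derivative of the smoothed field:", dv_dx.std())
    ```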

  3. Statistical analysis of Hasegawa-Wakatani turbulence

    NASA Astrophysics Data System (ADS)

    Anderson, Johan; Hnat, Bogdan

    2017-06-01

    Resistive drift wave turbulence is a multipurpose paradigm that can be used to understand transport at the edge of fusion devices. The Hasegawa-Wakatani model captures the essential physics of drift turbulence while retaining the simplicity needed to gain a qualitative understanding of this process. We provide a theoretical interpretation of numerically generated probability density functions (PDFs) of intermittent events in Hasegawa-Wakatani turbulence with enforced equipartition of energy in large scale zonal flows, and small scale drift turbulence. We find that for a wide range of adiabatic index values, the stochastic component representing the small scale turbulent eddies of the flow, obtained from the autoregressive integrated moving average model, exhibits super-diffusive statistics, consistent with intermittent transport. The PDFs of large events (above one standard deviation) are well approximated by the Laplace distribution, while small events often exhibit a Gaussian character. Furthermore, there exists a strong influence of zonal flows, for example, via shearing and then viscous dissipation maintaining a sub-diffusive character of the fluxes.

  4. Scaling properties and universality of first-passage-time probabilities in financial markets

    NASA Astrophysics Data System (ADS)

    Perelló, Josep; Gutiérrez-Roig, Mario; Masoliver, Jaume

    2011-12-01

Financial markets provide an ideal frame for the study of crossing or first-passage time events of non-Gaussian correlated dynamics, mainly because large data sets are available. Tick-by-tick data of six futures markets are herein considered, resulting in fat-tailed first-passage time probabilities. The scaling of the return with its standard deviation collapses the probabilities of all markets examined, and also for different time horizons, into single curves, suggesting that first-passage statistics is market independent (at least for high-frequency data). On the other hand, a very closely related quantity, the survival probability, shows, away from the center and tails of the distribution, a hyperbolic t^(-1/2) decay typical of a Markovian dynamics, despite the existence of memory in markets. Modifications of the Weibull and Student distributions are good candidates for the phenomenological description of first-passage time properties under certain regimes. The scaling strategies shown may be useful for risk control and algorithmic trading.

  5. A study of atmospheric diffusion from the LANDSAT imagery. [pollution transport over the ocean

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Viswanadham, Y.; Torsani, J. A.

    1981-01-01

LANDSAT multispectral scanner data of the smoke plumes which originated in eastern Cabo Frio, Brazil and crossed over into the Atlantic Ocean are analyzed to illustrate how high resolution LANDSAT imagery can aid meteorologists in evaluating specific air pollution events. The eleven LANDSAT images selected are for different months and years. The results show that diffusion is governed primarily by water and air temperature differences. With colder water, low level air is very stable and the vertical diffusion is minimal; but water warmer than the air induces vigorous diffusion. The applicability of three empirical methods for determining the horizontal eddy diffusivity coefficient in the Gaussian plume formula was evaluated against the standard deviation of the crosswind distribution of material in the plume estimated from the LANDSAT imagery. The vertical diffusion coefficient in stable conditions is estimated using Weinstock's formulation. These results form a data base for use in the development and validation of mesoscale atmospheric diffusion models.
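
    For reference, the standard Gaussian plume formula (with ground reflection) whose dispersion coefficients σy and σz are discussed above; the emission rate, wind speed, effective stack height and sigmas in the sketch are placeholders:

    ```python
    # Ground-reflected Gaussian plume concentration at crosswind distance y and
    # height z, for emission rate Q, wind speed u and effective stack height H.
    import numpy as np

    def gaussian_plume(y, z, Q, u, sigma_y, sigma_z, H):
        lateral = np.exp(-y**2 / (2 * sigma_y**2))
        vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2)) +
                    np.exp(-(z + H)**2 / (2 * sigma_z**2)))   # image source term
        return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

    # Hypothetical values: 1 kg/s source, 5 m/s wind, sigmas at some downwind distance.
    print(gaussian_plume(y=50.0, z=2.0, Q=1.0, u=5.0, sigma_y=80.0, sigma_z=30.0, H=50.0))
    ```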

  6. Giant current fluctuations in an overheated single-electron transistor

    NASA Astrophysics Data System (ADS)

    Laakso, M. A.; Heikkilä, T. T.; Nazarov, Yuli V.

    2010-11-01

    Interplay of cotunneling and single-electron tunneling in a thermally isolated single-electron transistor leads to peculiar overheating effects. In particular, there is an interesting crossover interval where the competition between cotunneling and single-electron tunneling changes to the dominance of the latter. In this interval, the current exhibits anomalous sensitivity to the effective electron temperature of the transistor island and its fluctuations. We present a detailed study of the current and temperature fluctuations at this interesting point. The methods implemented allow for a complete characterization of the distribution of the fluctuating quantities, well beyond the Gaussian approximation. We reveal and explore the parameter range where, for sufficiently small transistor islands, the current fluctuations become gigantic. In this regime, the optimal value of the current, its expectation value, and its standard deviation differ from each other by parametrically large factors. This situation is unique for transport in nanostructures and for electron transport in general. The origin of this spectacular effect is the exponential sensitivity of the current to the fluctuating effective temperature.

  7. The impact of physical activity and sex differences on intraindividual variability in inhibitory performance in older adults.

    PubMed

    Fagot, Delphine; Chicherio, Christian; Albinet, Cédric T; André, Nathalie; Audiffren, Michel

    2017-09-04

It is well known that processing speed and executive functions decline with advancing age. However, physical activity (PA) has a positive impact on cognitive performance in aging, specifically for inhibition. Less is known concerning intraindividual variability (iiV) in reaction times. This study aims to investigate the influence of PA and sex differences on iiV in inhibitory performance during aging. Healthy adults were divided into active and sedentary groups according to PA level. To analyse iiV in reaction times, the individual mean, standard deviation and the ex-Gaussian parameters were considered. An interaction between activity level and sex was revealed, with sedentary females being slower and more variable than sedentary males. No sex differences were found in the active groups. These results indicate that the negative impact of sedentariness on cognitive performance in older age is stronger for females. The present findings underline the need to consider sex differences in active aging approaches.
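
    The ex-Gaussian parameters (mu, sigma, tau) can be fitted with scipy's exponnorm distribution, as a generic sketch (the reaction-time sample below is simulated, not the study's data):

    ```python
    # Fit ex-Gaussian parameters to a set of reaction times; scipy parameterizes the
    # distribution as exponnorm(K, loc, scale) with mu = loc, sigma = scale, tau = K*scale.
    import numpy as np
    from scipy.stats import exponnorm

    rng = np.random.default_rng(6)
    # Simulated reaction times (s): Gaussian component plus exponential tail.
    rts = rng.normal(0.45, 0.05, 500) + rng.exponential(0.12, 500)

    K, loc, scale = exponnorm.fit(rts)
    mu, sigma, tau = loc, scale, K * scale
    print(f"mu = {mu:.3f} s, sigma = {sigma:.3f} s, tau = {tau:.3f} s")
    print(f"mean RT = {rts.mean():.3f} s, SD = {rts.std(ddof=1):.3f} s")
    ```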

  8. Observation of the Ξ(b)(0) baryon.

    PubMed

    Aaltonen, T; Álvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Apollinari, G; Appel, J A; Apresyan, A; Arisawa, T; Artikov, A; Asaadi, J; Ashmanskas, W; Auerbach, B; Aurisano, A; Azfar, F; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Barria, P; Bartos, P; Bauce, M; Bauer, G; Bedeschi, F; Beecher, D; Behari, S; Bellettini, G; Bellinger, J; Benjamin, D; Beretvas, A; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Bland, K R; Blumenfeld, B; Bocci, A; Bodek, A; Bortoletto, D; Boudreau, J; Boveia, A; Brigliadori, L; Brisuda, A; Bromberg, C; Brucken, E; Bucciantonio, M; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Calancha, C; Camarda, S; Campanelli, M; Campbell, M; Canelli, F; Carls, B; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavaliere, V; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Chung, W H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A; Clarke, C; Compostella, G; Convery, M E; Conway, J; Corbo, M; Cordelli, M; Cox, C A; Cox, D J; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Dagenhart, D; d'Ascenzo, N; Datta, M; de Barbaro, P; De Cecco, S; De Lorenzo, G; Dell'Orso, M; Deluca, C; Demortier, L; Deng, J; Deninno, M; Devoto, F; d'Errico, M; Di Canto, A; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Dorigo, M; Dorigo, T; Ebina, K; Elagin, A; Eppig, A; Erbacher, R; Errede, D; Errede, S; Ershaidat, N; Eusebi, R; Fang, H C; Farrington, S; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Frank, M J; Franklin, M; Freeman, J C; Funakoshi, Y; Furic, I; Gallinaro, M; Galyardt, J; Garcia, J E; Garfinkel, A F; Garosi, P; Gerberich, H; Gerchtein, E; Giagu, S; Giakoumopoulou, V; Giannetti, P; Gibson, K; Ginsburg, C M; Giokaris, N; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldin, D; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Grinstein, S; Grosso-Pilcher, C; Group, R C; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, S R; Halkiadakis, E; Hamaguchi, A; Han, J Y; Happacher, F; Hara, K; Hare, D; Hare, M; Harr, R F; Hatakeyama, K; Hays, C; Heck, M; Heinrich, J; Herndon, M; Hewamanage, S; Hidas, D; Hocker, A; Hopkins, W; Horn, D; Hou, S; Hughes, R E; Hurwitz, M; Husemann, U; Hussain, N; Hussein, M; Huston, J; Introzzi, G; Iori, M; Ivanov, A; James, E; Jang, D; Jayatilaka, B; Jeon, E J; Jha, M K; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Junk, T R; Kamon, T; Karchin, P E; Kasmi, A; Kato, Y; Ketchum, W; Keung, J; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, H W; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirby, M; Klimenko, S; Kondo, K; Kong, D J; Konigsberg, J; Kotwal, A V; Kreps, M; Kroll, J; Krop, D; Krumnack, N; Kruse, M; Krutelyov, V; Kuhr, T; Kurata, M; Kwang, S; Laasanen, A T; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; LeCompte, T; Lee, E; Lee, H S; Lee, J S; Lee, S W; Leo, S; Leone, S; Lewis, J D; Limosani, A; Lin, C-J; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, C; Liu, Q; Liu, T; Lockwitz, S; Loginov, A; Lucchesi, D; Lueck, J; Lujan, P; Lukens, P; Lungu, G; Lys, J; Lysak, R; Madrak, R; Maeshima, K; Makhoul, K; Malik, S; Manca, G; Manousakis-Katsikakis, A; Margaroli, F; Marino, C; Martínez, M; 
Martínez-Ballarín, R; Mastrandrea, P; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzione, A; Mesropian, C; Miao, T; Mietlicki, D; Mitra, A; Miyake, H; Moed, S; Moggi, N; Mondragon, M N; Moon, C S; Moore, R; Morello, M J; Morlock, J; Movilla Fernandez, P; Mukherjee, A; Muller, Th; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Naganoma, J; Nakano, I; Napier, A; Nett, J; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norniella, O; Nurse, E; Oakes, L; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Orava, R; Ortolan, L; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Paramonov, A A; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Pianori, E; Pilot, J; Pitts, K; Plager, C; Pondrom, L; Potamianos, K; Poukhov, O; Prokoshin, F; Pronko, A; Ptohos, F; Pueschel, E; Punzi, G; Pursley, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Renton, P; Rescigno, M; Riddick, T; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rodriguez, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rubbo, F; Ruffini, F; Ruiz, A; Russ, J; Rusu, V; Safonov, A; Sakumoto, W K; Sakurai, Y; Santi, L; Sartori, L; Sato, K; Saveliev, V; Savoy-Navarro, A; Schlabach, P; Schmidt, A; Schmidt, E E; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sforza, F; Sfyrla, A; Shalhout, S Z; Shears, T; Shepard, P F; Shimojima, M; Shiraishi, S; Shochet, M; Shreyber, I; Simonenko, A; Sinervo, P; Sissakian, A; Sliwa, K; Smith, J R; Snider, F D; Soha, A; Somalwar, S; Sorin, V; Squillacioti, P; Stancari, M; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Strycker, G L; Sudo, Y; Sukhanov, A; Suslov, I; Takemasa, K; Takeuchi, Y; Tang, J; Tecchio, M; Teng, P K; Thom, J; Thome, J; Thompson, G A; Thomson, E; Ttito-Guzmán, P; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Totaro, P; Trovato, M; Tu, Y; Ukegawa, F; Uozumi, S; Varganov, A; Vázquez, F; Velev, G; Vellidis, C; Vidal, M; Vila, I; Vilar, R; Vizán, J; Vogel, M; Volpi, G; Wagner, P; Wagner, R L; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Wilbur, S; Wick, F; Williams, H H; Wilson, J S; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, H; Wright, T; Wu, X; Wu, Z; Yamamoto, K; Yamaoka, J; Yang, T; Yang, U K; Yang, Y C; Yao, W-M; Yeh, G P; Yi, K; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanetti, A; Zeng, Y; Zucchelli, S

    2011-09-02

The observation of the bottom, strange baryon Ξ(b)(0) through the decay chain Ξ(b)(0)→Ξ(c)(+)π-, where Ξ(c)(+)→Ξ- π+ π+, Ξ-→Λπ-, and Λ→pπ-, is reported using data corresponding to an integrated luminosity of 4.2 fb^-1 from pp̄ collisions at √s = 1.96 TeV recorded with the Collider Detector at Fermilab. A signal of 25.3(-5.4)(+5.6) candidates is observed whose probability of arising from a background fluctuation is 3.6×10^-12, corresponding to 6.8 Gaussian standard deviations. The Ξ(b)(0) mass is measured to be 5787.8±5.0(stat)±1.3(syst) MeV/c². In addition, the Ξ(b)- baryon is observed through the process Ξ(b)-→Ξ(c)(0)π-, where Ξ(c)(0)→Ξ- π+, Ξ-→Λπ-, and Λ→pπ-.

  9. Automatic Differentiation in Quantum Chemistry with Applications to Fully Variational Hartree–Fock

    PubMed Central

    2018-01-01

    Automatic differentiation (AD) is a powerful tool that allows calculating derivatives of implemented algorithms with respect to all of their parameters up to machine precision, without the need to explicitly add any additional functions. Thus, AD has great potential in quantum chemistry, where gradients are omnipresent but also difficult to obtain, and researchers typically spend a considerable amount of time finding suitable analytical forms when implementing derivatives. Here, we demonstrate that AD can be used to compute gradients with respect to any parameter throughout a complete quantum chemistry method. We present DiffiQult, a Hartree–Fock implementation, entirely differentiated with the use of AD tools. DiffiQult is a software package written in plain Python with minimal deviation from standard code which illustrates the capability of AD to save human effort and time in implementations of exact gradients in quantum chemistry. We leverage the obtained gradients to optimize the parameters of one-particle basis sets in the context of the floating Gaussian framework.

  10. Estimation of errors in the inverse modeling of accidental release of atmospheric pollutant: Application to the reconstruction of the cesium-137 and iodine-131 source terms from the Fukushima Daiichi power plant

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Bocquet, Marc; Saunier, Olivier; Mathieu, Anne

    2012-03-01

A major difficulty when inverting the source term of an atmospheric tracer dispersion problem is the estimation of the prior errors: those of the atmospheric transport model, those ascribed to the representativity of the measurements, those that are instrumental, and those attached to the prior knowledge on the variables one seeks to retrieve. In the case of an accidental release of pollutant, the reconstructed source is sensitive to these assumptions. This sensitivity makes the quality of the retrieval dependent on the methods used to model and estimate the prior errors of the inverse modeling scheme. We propose to use an estimation method for the errors' amplitude based on the maximum likelihood principle. Under semi-Gaussian assumptions, it takes into account, without approximation, the positivity assumption on the source. We apply the method to the estimation of the Fukushima Daiichi source term using activity concentrations in the air. The results are compared to an L-curve estimation technique and to Desroziers's scheme. The total reconstructed activities significantly depend on the chosen method. Because of the poor observability of the Fukushima Daiichi emissions, these methods provide lower bounds for cesium-137 and iodine-131 reconstructed activities. These lower bound estimates, 1.2 × 10^16 Bq for cesium-137, with an estimated standard deviation range of 15%-20%, and 1.9-3.8 × 10^17 Bq for iodine-131, with an estimated standard deviation range of 5%-10%, are of the same order of magnitude as those provided by the Japanese Nuclear and Industrial Safety Agency and about 5 to 10 times less than the Chernobyl atmospheric releases.

  11. Cosmology with gamma-ray bursts. II. Cosmography challenges and cosmological scenarios for the accelerated Universe

    NASA Astrophysics Data System (ADS)

    Demianski, Marek; Piedipalumbo, Ester; Sawant, Disha; Amati, Lorenzo

    2017-02-01

Context. Explaining the accelerated expansion of the Universe is one of the fundamental challenges in physics today. Cosmography provides information about the evolution of the universe derived from measured distances, assuming only that the space-time geometry is described by the Friedmann-Lemaître-Robertson-Walker metric, and adopting an approach that effectively uses only Taylor expansions of basic observables. Aims: We perform a high-redshift analysis to constrain the cosmographic expansion up to the fifth order. It is based on the Union2 type Ia supernovae data set, the gamma-ray burst Hubble diagram, a data set of 28 independent measurements of the Hubble parameter, baryon acoustic oscillation measurements from galaxy clustering and the Lyman-α forest in the SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS), and some Gaussian priors on h and ΩM. Methods: We performed a statistical analysis and explored the probability distributions of the cosmographic parameters. By building up their regions of confidence, we maximized our likelihood function using the Markov chain Monte Carlo method. Results: Our high-redshift analysis confirms that the expansion of the Universe currently accelerates; the estimation of the jerk parameter indicates a possible deviation from the standard ΛCDM cosmological model. Moreover, we investigate implications of our results for the reconstruction of the dark energy equation of state (EOS) by comparing the standard technique of cosmography with an alternative approach based on generalized Padé approximations of the same observables. Because these expansions converge better, it is possible to improve the constraints on the cosmographic parameters and also on the dark energy EOS. Conclusions: The estimation of the jerk and the DE parameters indicates at 1σ a possible deviation from the ΛCDM cosmological model.

  12. Discrete Element Method Modeling of Bedload Transport: Towards a physics-based link between bed surface variability and particle entrainment statistics

    NASA Astrophysics Data System (ADS)

    Ghasemi, A.; Borhani, S.; Viparelli, E.; Hill, K. M.

    2017-12-01

The Exner equation provides a formal mathematical link between sediment transport and bed morphology. It is typically represented in a discrete formulation where there is a sharp geometric interface between the bedload layer and the bed, below which no particles are entrained. For highly temporally and spatially resolved models, this is strictly correct, but typically it is applied in such a way that spatial and temporal fluctuations in the bed surface (bedforms and otherwise) are not captured. This limits the extent to which the exchange between particles in transport and the sediment bed is properly represented, which is particularly problematic for mixed grain size distributions that exhibit segregation. Nearly two decades ago, Parker (2000) provided a framework for a solution to this dilemma in the form of a probabilistic Exner equation, partially experimentally validated by Wong et al. (2007). We present a computational study designed to develop a physics-based framework for understanding the interplay between physical parameters of the bed and flow and parameters in the Parker (2000) probabilistic formulation. To do so we use Discrete Element Method simulations to relate local time-varying parameters to long-term macroscopic parameters. These include relating the local grain size distribution and particle entrainment and deposition rates to the long-term average bed shear stress and the standard deviation of bed height variations. While relatively simple, these simulations reproduce long-accepted empirically determined transport behaviors such as the Meyer-Peter and Muller (1948) relationship. We also find that these simulations reproduce statistical relationships proposed by Wong et al. (2007), such as a Gaussian distribution of bed heights whose standard deviation increases with increasing bed shear stress. We demonstrate how the ensuing probabilistic formulations provide insight into the transport and deposition of both narrow and wide grain size distributions.

  13. SWIFT-BAT HARD X-RAY SKY MONITORING UNVEILS THE ORBITAL PERIOD OF THE HMXB IGR J18219–1347

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    La Parola, V.; Cusumano, G.; Segreto, A.

    2013-09-20

IGR J18219–1347 is a hard X-ray source discovered by INTEGRAL in 2010. We have analyzed the X-ray emission of this source exploiting the Burst Alert Telescope (BAT) survey data up to 2012 March and the X-Ray Telescope (XRT) data, which also include an observing campaign performed in early 2012. The source is detected at a significance level of ∼13 standard deviations in the 88-month BAT survey data, and shows strong variability along the survey monitoring, going from high intensity to quiescent states. A timing analysis on the BAT data revealed an intensity modulation with a period of P0 = 72.44 ± 0.3 days. The significance of this modulation is about seven standard deviations in Gaussian statistics. We interpret it as the orbital period of the binary system. The light curve folded at P0 shows a sharp peak covering ∼30% of the period, superimposed on a flat level roughly consistent with zero. In the soft X-rays the source is detected only in 5 out of 12 XRT observations, with the highest recorded count rate corresponding to a phase close to the BAT folded light-curve peak. The long orbital period and the evidence that the source emits only during a small fraction of the orbit suggest that the IGR J18219–1347 binary system hosts a Be star. The broadband XRT+BAT spectrum is well modeled with a flat absorbed power law with a high-energy exponential cutoff at ∼11 keV.

  14. The impact of inter-fraction dose variations on biological equivalent dose (BED): the concept of equivalent constant dose.

    PubMed

    Zavgorodni, S

    2004-12-07

    Inter-fraction dose fluctuations, which appear as a result of setup errors, organ motion and treatment machine output variations, may influence the radiobiological effect of the treatment even when the total delivered physical dose remains constant. The effect of these inter-fraction dose fluctuations on the biological effective dose (BED) has been investigated. Analytical expressions for the BED accounting for the dose fluctuations have been derived. The concept of biological effective constant dose (BECD) has been introduced. The equivalent constant dose (ECD), representing the constant physical dose that provides the same cell survival fraction as the fluctuating dose, has also been introduced. The dose fluctuations with Gaussian as well as exponential probability density functions were investigated. The values of BECD and ECD calculated analytically were compared with those derived from Monte Carlo modelling. The agreement between Monte Carlo modelled and analytical values was excellent (within 1%) for a range of dose standard deviations (0-100% of the dose) and the number of fractions (2 to 37) used in the comparison. The ECDs have also been calculated for conventional radiotherapy fields. The analytical expression for the BECD shows that BECD increases linearly with the variance of the dose. The effect is relatively small, and in the flat regions of the field it results in less than 1% increase of ECD. In the penumbra region of the 6 MV single radiotherapy beam the ECD exceeded the physical dose by up to 35%, when the standard deviation of combined patient setup/organ motion uncertainty was 5 mm. Equivalently, the ECD field was approximately 2 mm wider than the physical dose field. The difference between ECD and the physical dose is greater for normal tissues than for tumours.
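
    A Monte Carlo sketch of the equivalent constant dose (ECD) idea under a linear-quadratic survival model; the alpha, beta, dose, fraction number and 10% per-fraction fluctuation used here are illustrative, not the paper's values:

    ```python
    # Compare mean survival under Gaussian per-fraction dose fluctuations with the
    # constant dose that gives the same survival, then report the ECD.
    import numpy as np

    alpha, beta = 0.3, 0.03          # Gy^-1, Gy^-2 (illustrative LQ parameters)
    d, n, rel_sd = 2.0, 30, 0.10     # 2 Gy x 30 fractions, 10% per-fraction fluctuation

    rng = np.random.default_rng(7)
    doses = rng.normal(d, rel_sd * d, size=(100_000, n))   # fluctuating dose realizations
    mean_sf = np.exp(-(alpha * doses + beta * doses**2).sum(axis=1)).mean()

    # Solve n*(alpha*dc + beta*dc^2) = -ln(mean_sf) for the constant fraction dose dc.
    effect = -np.log(mean_sf) / n
    dc = (-alpha + np.sqrt(alpha**2 + 4 * beta * effect)) / (2 * beta)
    print(f"physical dose {n*d:.1f} Gy, equivalent constant dose {n*dc:.2f} Gy")
    ```

    Consistent with the analytical result quoted above, the ECD exceeds the physical dose by an amount that grows with the variance of the dose.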

  15. Evaluation of seasonal and spatial variations of lumped water balance model sensitivity to precipitation data errors

    NASA Astrophysics Data System (ADS)

    Xu, Chong-yu; Tunemar, Liselotte; Chen, Yongqin David; Singh, V. P.

    2006-06-01

    Sensitivity of hydrological models to input data errors has been reported in the literature for particular models on a single or a few catchments. A more important issue, namely how a model's response to input data errors changes as catchment conditions change, has not been addressed previously. This study investigates the seasonal and spatial effects of precipitation data errors on the performance of conceptual hydrological models. For this study, a monthly conceptual water balance model, NOPEX-6, was applied to 26 catchments in the Mälaren basin in Central Sweden. Both systematic and random errors were considered. For the systematic errors, 5-15% of mean monthly precipitation values were added to the original precipitation to form the corrupted input scenarios. Random values were generated by Monte Carlo simulation and were assumed to be (1) independent between months, and (2) distributed according to a Gaussian law of zero mean and constant standard deviation, taken as 5, 10, 15, 20, and 25% of the mean monthly standard deviation of precipitation. The results show that the response of the model parameters and model performance depends, among other factors, on the type of error, its magnitude, the physical characteristics of the catchment, and the season of the year. In particular, the model appears less sensitive to the random error than to the systematic error. The catchments with smaller values of runoff coefficients were more influenced by input data errors than were the catchments with higher values. Dry months were more sensitive to precipitation errors than were wet months. Recalibration of the model with erroneous data compensated in part for the data errors by altering the model parameters.
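
    The two corruption scenarios described above can be generated as in the sketch below; `precip` is a hypothetical monthly series, and the percentages follow the ranges quoted in the abstract rather than the study's exact experimental design.

        import numpy as np

        rng = np.random.default_rng(42)
        precip = rng.gamma(shape=2.0, scale=30.0, size=120)   # hypothetical monthly precipitation, mm

        # systematic error: add a fixed fraction (here 10%) of the mean monthly precipitation
        systematic = precip + 0.10 * precip.mean()

        # random error: independent Gaussian noise with zero mean and a standard deviation
        # set to a fraction (here 15%) of the monthly standard deviation of precipitation
        random_err = precip + rng.normal(0.0, 0.15 * precip.std(), size=precip.size)
        random_err = np.clip(random_err, 0.0, None)           # precipitation cannot be negative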

  16. Automated EEG sleep staging in the term-age baby using a generative modelling approach.

    PubMed

    Pillay, Kirubin; Dereymaeker, Anneleen; Jansen, Katrien; Naulaers, Gunnar; Van Huffel, Sabine; De Vos, Maarten

    2018-06-01

    We develop a method for automated four-state sleep classification of preterm and term-born babies at term-age of 38-40 weeks postmenstrual age (the age since the last menstrual cycle of the mother) using multichannel electroencephalogram (EEG) recordings. At this critical age, EEG differentiates from broader quiet sleep (QS) and active sleep (AS) stages to four, more complex states, and the quality and timing of this differentiation is indicative of the level of brain development. However, existing methods for automated sleep classification remain focussed only on QS and AS sleep classification. EEG features were calculated from 16 EEG recordings, in 30 s epochs, and personalized feature scaling was used to correct for some of the inter-recording variability by standardizing each recording's feature data using its mean and standard deviation. Hidden Markov models (HMMs) and Gaussian mixture models (GMMs) were trained, with the HMM incorporating knowledge of the sleep state transition probabilities. Performance of the GMM and HMM (with and without scaling) was compared, and Cohen's kappa agreement calculated between the estimates and clinicians' visual labels. For four-state classification, the HMM proved superior to the GMM. With the inclusion of personalized feature scaling, mean kappa (±standard deviation) was 0.62 (±0.16) compared to the GMM value of 0.55 (±0.15). Without feature scaling, kappas for the HMM and GMM dropped to 0.56 (±0.18) and 0.51 (±0.15), respectively. This is the first study to present a successful method for the automated staging of four states in term-age sleep using multichannel EEG. Results suggested a benefit in incorporating transition information using an HMM, and in correcting for inter-recording variability through personalized feature scaling. The timing and quality of these states are indicative of developmental delays in both preterm and term-born babies that may lead to learning problems by school age.
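
    A minimal sketch of the personalized feature scaling followed by a four-component Gaussian mixture is shown below; the feature matrices are hypothetical, scikit-learn is used for brevity rather than the authors' code, and the HMM used in the paper would additionally learn state-transition probabilities on top of this emission model.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def personalize(features):
            """Standardize one recording's epoch-by-feature matrix with its own mean and SD."""
            return (features - features.mean(axis=0)) / features.std(axis=0)

        # hypothetical recordings, each an (n_epochs, n_features) array of 30 s EEG features
        recordings = [np.random.rand(900, 12) for _ in range(16)]
        scaled = np.vstack([personalize(r) for r in recordings])

        gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0).fit(scaled)
        states = gmm.predict(personalize(recordings[0]))      # per-epoch sleep-state labels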

  17. Automated EEG sleep staging in the term-age baby using a generative modelling approach

    NASA Astrophysics Data System (ADS)

    Pillay, Kirubin; Dereymaeker, Anneleen; Jansen, Katrien; Naulaers, Gunnar; Van Huffel, Sabine; De Vos, Maarten

    2018-06-01

    Objective. We develop a method for automated four-state sleep classification of preterm and term-born babies at term-age of 38-40 weeks postmenstrual age (the age since the last menstrual cycle of the mother) using multichannel electroencephalogram (EEG) recordings. At this critical age, EEG differentiates from broader quiet sleep (QS) and active sleep (AS) stages to four, more complex states, and the quality and timing of this differentiation is indicative of the level of brain development. However, existing methods for automated sleep classification remain focussed only on QS and AS sleep classification. Approach. EEG features were calculated from 16 EEG recordings, in 30 s epochs, and personalized feature scaling was used to correct for some of the inter-recording variability by standardizing each recording’s feature data using its mean and standard deviation. Hidden Markov models (HMMs) and Gaussian mixture models (GMMs) were trained, with the HMM incorporating knowledge of the sleep state transition probabilities. Performance of the GMM and HMM (with and without scaling) was compared, and Cohen’s kappa agreement calculated between the estimates and clinicians’ visual labels. Main results. For four-state classification, the HMM proved superior to the GMM. With the inclusion of personalized feature scaling, mean kappa (±standard deviation) was 0.62 (±0.16) compared to the GMM value of 0.55 (±0.15). Without feature scaling, kappas for the HMM and GMM dropped to 0.56 (±0.18) and 0.51 (±0.15), respectively. Significance. This is the first study to present a successful method for the automated staging of four states in term-age sleep using multichannel EEG. Results suggested a benefit in incorporating transition information using an HMM, and in correcting for inter-recording variability through personalized feature scaling. The timing and quality of these states are indicative of developmental delays in both preterm and term-born babies that may lead to learning problems by school age.

  18. Deterministic Mean-Field Ensemble Kalman Filtering

    DOE PAGES

    Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul

    2016-05-03

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d
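
    For contrast with the density-based deterministic approximation studied in the paper, the sketch below shows one analysis step of the standard perturbed-observation (stochastic) EnKF that the convergence result refers to; the linear observation operator H, the noise covariance R, and the toy prior ensemble are all illustrative assumptions.

        import numpy as np

        def enkf_analysis(ensemble, y, H, R, rng=np.random.default_rng(0)):
            """One stochastic EnKF update: ensemble is (N, d), y is the observation vector."""
            N = ensemble.shape[0]
            anomalies = ensemble - ensemble.mean(axis=0)
            C = anomalies.T @ anomalies / (N - 1)                 # sample covariance
            K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)          # Kalman gain
            perturbed = y + rng.multivariate_normal(np.zeros(len(y)), R, size=N)
            return ensemble + (perturbed - ensemble @ H.T) @ K.T

        # toy example: d = 2 state dimensions, scalar observation of the first component
        H = np.array([[1.0, 0.0]])
        R = np.array([[0.25]])
        prior = np.random.default_rng(1).normal(size=(100, 2))
        posterior = enkf_analysis(prior, y=np.array([0.5]), H=H, R=R)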

  19. Deterministic Mean-Field Ensemble Kalman Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d

  20. Quantitative 3D Ultrashort Time-to-Echo (UTE) MRI and Micro-CT (μCT) Evaluation of the Temporomandibular Joint (TMJ) Condylar Morphology

    PubMed Central

    Geiger, Daniel; Bae, Won C.; Statum, Sheronda; Du, Jiang; Chung, Christine B.

    2014-01-01

    Objective Temporomandibular dysfunction involves osteoarthritis of the TMJ, including degeneration and morphologic changes of the mandibular condyle. The purpose of this study was to determine the accuracy of novel 3D-UTE MRI versus micro-CT (μCT) for quantitative evaluation of mandibular condyle morphology. Material & Methods Nine TMJ condyle specimens were harvested from cadavers (2M, 3F; Age 85 ± 10 yrs., mean±SD). 3D-UTE MRI (TR=50 ms, TE=0.05 ms, 104 μm isotropic-voxel) was performed using a 3-T MR scanner, and μCT (18 μm isotropic-voxel) was performed. MR datasets were spatially registered with the μCT dataset. Two observers segmented bony contours of the condyles. Fibrocartilage was segmented on the MR dataset. Using a custom program, bone and fibrocartilage surface coordinates, Gaussian curvature, volume of segmented regions and fibrocartilage thickness were determined for quantitative evaluation of joint morphology. Agreement between techniques (MRI vs. μCT) and observers (MRI vs. MRI) for Gaussian curvature, mean curvature and segmented volume of the bone was determined using intraclass correlation coefficient (ICC) analyses. Results Between MRI and μCT, the average deviation of surface coordinates was 0.19±0.15 mm, slightly higher than the spatial resolution of MRI. The average deviation of the Gaussian curvature and volume of segmented regions, from MRI to μCT, was 5.7±6.5% and 6.6±6.2%, respectively. ICC coefficients (MRI vs. μCT) for Gaussian curvature, mean curvature and segmented volumes were 0.892, 0.893 and 0.972, respectively. Between observers (MRI vs. MRI), the ICC coefficients were 0.998, 0.999 and 0.997, respectively. Fibrocartilage thickness was 0.55±0.11 mm, as previously described in the literature for grossly normal TMJ samples. Conclusion 3D-UTE MR quantitative evaluation of TMJ condyle morphology ex vivo, including surface, curvature and segmented volume, shows high correlation with μCT and between observers. In addition, UTE MRI allows quantitative evaluation of the fibrocartilaginous condylar component. PMID:24092237

  1. Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 1: January

    NASA Astrophysics Data System (ADS)

    Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.

    1989-07-01

    The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of January. Included are global analyses of: (1) Mean temperature standard deviation; (2) Mean geopotential height standard deviation; (3) Mean density standard deviation; (4) Height and vector standard deviation (all for 13 levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point standard deviation for the 13 levels; and (6) Jet stream at levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.

  2. Comparing Standard Deviation Effects across Contexts

    ERIC Educational Resources Information Center

    Ost, Ben; Gangopadhyaya, Anuj; Schiman, Jeffrey C.

    2017-01-01

    Studies using tests scores as the dependent variable often report point estimates in student standard deviation units. We note that a standard deviation is not a standard unit of measurement since the distribution of test scores can vary across contexts. As such, researchers should be cautious when interpreting differences in the numerical size of…

  3. Non-Gaussian bias: insights from discrete density peaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desjacques, Vincent; Riotto, Antonio; Gong, Jinn-Ouk, E-mail: Vincent.Desjacques@unige.ch, E-mail: jinn-ouk.gong@apctp.org, E-mail: Antonio.Riotto@unige.ch

    2013-09-01

    Corrections induced by primordial non-Gaussianity to the linear halo bias can be computed from a peak-background split or the widespread local bias model. However, numerical simulations clearly support the prediction of the former, in which the non-Gaussian amplitude is proportional to the linear halo bias. To understand better the reasons behind the failure of standard Lagrangian local bias, in which the halo overdensity is a function of the local mass overdensity only, we explore the effect of a primordial bispectrum on the 2-point correlation of discrete density peaks. We show that the effective local bias expansion to peak clustering vastly simplifies the calculation. We generalize this approach to excursion set peaks and demonstrate that the resulting non-Gaussian amplitude, which is a weighted sum of quadratic bias factors, precisely agrees with the peak-background split expectation, which is a logarithmic derivative of the halo mass function with respect to the normalisation amplitude. We point out that statistics of thresholded regions can be computed using the same formalism. Our results suggest that halo clustering statistics can be modelled consistently (in the sense that the Gaussian and non-Gaussian bias factors agree with peak-background split expectations) from a Lagrangian bias relation only if the latter is specified as a set of constraints imposed on the linear density field. This is clearly not the case of standard Lagrangian local bias. Therefore, one is led to consider additional variables beyond the local mass overdensity.

  4. Topology of microwave background fluctuations - Theory

    NASA Technical Reports Server (NTRS)

    Gott, J. Richard, III; Park, Changbom; Bies, William E.; Bennett, David P.; Juszkiewicz, Roman

    1990-01-01

    Topological measures are used to characterize the microwave background temperature fluctuations produced by 'standard' scenarios (Gaussian) and by cosmic strings (non-Gaussian). Three topological quantities, the total area of the excursion regions, the total length, and the total curvature (genus) of the isotemperature contours, are studied for simulated Gaussian microwave background anisotropy maps and then compared with those of the non-Gaussian anisotropy pattern produced by cosmic strings. In general, the temperature gradient field shows the non-Gaussian behavior of the string map more distinctively than the temperature field for all topology measures. The total contour length and the genus are found to be more sensitive to the existence of a stringy pattern than the usual temperature histogram. Situations in which instrumental noise is superposed on the map are considered in order to find the critical signal-to-noise ratio for which strings can be detected.

  5. Blood pressure variability in man: its relation to high blood pressure, age and baroreflex sensitivity.

    PubMed

    Mancia, G; Ferrari, A; Gregorini, L; Parati, G; Pomidossi, G; Bertinieri, G; Grassi, G; Zanchetti, A

    1980-12-01

    1. Intra-arterial blood pressure and heart rate were recorded for 24 h in ambulant hospitalized patients of variable age who had normal blood pressure or essential hypertension. Mean 24 h values, standard deviations and variation coefficients were obtained as the averages of values separately analysed for 48 consecutive half-hour periods. 2. In older subjects standard deviation and variation coefficient for mean arterial pressure were greater than in younger subjects with similar pressure values, whereas standard deviation and variation coefficient for heart rate were smaller. 3. In hypertensive subjects standard deviation for mean arterial pressure was greater than in normotensive subjects of similar ages, but this was not the case for variation coefficient, which was slightly smaller in the former than in the latter group. Normotensive and hypertensive subjects showed no difference in standard deviation and variation coefficient for heart rate. 4. In both normotensive and hypertensive subjects standard deviation and even more so variation coefficient were slightly or not related to arterial baroreflex sensitivity as measured by various methods (phenylephrine, neck suction etc.). 5. It is concluded that blood pressure variability increases and heart rate variability decreases with age, but that changes in variability are not so obvious in hypertension. Also, differences in variability among subjects are only marginally explained by differences in baroreflex function.
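
    The variability indices used above reduce to a short computation: split the 24 h record into 48 half-hour periods, compute each period's standard deviation and variation coefficient, and average them. The pressure trace below is a hypothetical stand-in for the intra-arterial recording.

        import numpy as np

        rng = np.random.default_rng(0)
        map_trace = 95 + 8 * rng.standard_normal(48 * 1800)   # hypothetical 1 Hz mean arterial pressure, 24 h

        periods = map_trace.reshape(48, -1)                   # 48 consecutive half-hour periods
        sd = periods.std(axis=1)
        cv = 100.0 * sd / periods.mean(axis=1)                # variation coefficient, percent

        print(periods.mean(), sd.mean(), cv.mean())           # 24 h mean, mean SD, mean CV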

  6. Perturbation theory for BAO reconstructed fields: One-loop results in the real-space matter density field

    NASA Astrophysics Data System (ADS)

    Hikage, Chiaki; Koyama, Kazuya; Heavens, Alan

    2017-08-01

    We compute the power spectrum at one-loop order in standard perturbation theory for the matter density field to which a standard Lagrangian baryonic acoustic oscillation (BAO) reconstruction technique is applied. The BAO reconstruction method corrects the bulk motion associated with the gravitational evolution using the inverse Zel'dovich approximation (ZA) for the smoothed density field. We find that the overall amplitude of one-loop contributions in the matter power spectrum substantially decreases after reconstruction. The reconstructed power spectrum thereby approaches the initial linear spectrum when the smoothed density field is close enough to linear, i.e., the smoothing scale Rs ≳ 10 h⁻¹ Mpc. On smaller Rs, however, the deviation from the linear spectrum becomes significant on large scales (k ≲ Rs⁻¹) due to the nonlinearity in the smoothed density field, and the reconstruction is inaccurate. Compared with N-body simulations, we show that the reconstructed power spectrum at one-loop order agrees with simulations better than the unreconstructed power spectrum. We also calculate the tree-level bispectrum in standard perturbation theory to investigate non-Gaussianity in the reconstructed matter density field. We show that the amplitude of the bispectrum significantly decreases for small k after reconstruction and that the tree-level bispectrum agrees well with N-body results in the weakly nonlinear regime.

  7. Quantitative analysis of Ni2+/Ni3+ in Li[NixMnyCoz]O2 cathode materials: Non-linear least-squares fitting of XPS spectra

    NASA Astrophysics Data System (ADS)

    Fu, Zewei; Hu, Juntao; Hu, Wenlong; Yang, Shiyu; Luo, Yunfeng

    2018-05-01

    Quantitative analysis of Ni2+/Ni3+ using X-ray photoelectron spectroscopy (XPS) is important for evaluating the crystal structure and electrochemical performance of lithium-nickel-cobalt-manganese oxide (Li[NixMnyCoz]O2, NMC). However, quantitative analysis based on Gaussian/Lorentzian (G/L) peak fitting suffers from the challenges of reproducibility and effectiveness. In this study, the Ni2+ and Ni3+ standard samples and a series of NMC samples with different Ni doping levels were synthesized. The Ni2+/Ni3+ ratios in NMC were quantitatively analyzed by non-linear least-squares fitting (NLLSF). Two Ni 2p overall spectra of synthesized Li[Ni0.33Mn0.33Co0.33]O2 (NMC111) and bulk LiNiO2 were used as the Ni2+ and Ni3+ reference standards. Compared to G/L peak fitting, the fitting parameters required no adjustment, meaning that the spectral fitting process was free from operator dependence and the reproducibility was improved. Comparison of residual standard deviation (STD) showed that the fitting quality of NLLSF was superior to that of G/L peak fitting. Overall, these findings confirmed the reproducibility and effectiveness of the NLLSF method in XPS quantitative analysis of the Ni2+/Ni3+ ratio in Li[NixMnyCoz]O2 cathode materials.
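
    The core of the NLLSF idea, fitting a measured Ni 2p spectrum as a non-negative combination of the two reference standards, can be sketched with SciPy's non-negative least squares. The spectra below are synthetic stand-ins on a common binding-energy grid after background subtraction, not the published reference data, and the final ratio assumes area-normalized references.

        import numpy as np
        from scipy.optimize import nnls

        # hypothetical background-subtracted Ni 2p spectra on a common binding-energy grid
        energy = np.linspace(850.0, 885.0, 350)
        ref_ni2 = np.exp(-0.5 * ((energy - 854.5) / 1.2) ** 2)   # stand-in for the NMC111 (Ni2+) standard
        ref_ni3 = np.exp(-0.5 * ((energy - 856.0) / 1.4) ** 2)   # stand-in for the LiNiO2 (Ni3+) standard
        measured = 0.4 * ref_ni2 + 0.6 * ref_ni3 + np.random.default_rng(0).normal(0, 0.01, energy.size)

        A = np.column_stack([ref_ni2, ref_ni3])
        coeffs, residual = nnls(A, measured)                     # non-negative weights of the two standards
        ni2_fraction = coeffs[0] / coeffs.sum()                  # assumes area-normalized references
        print(ni2_fraction, residual)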

  8. Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 7: July

    NASA Astrophysics Data System (ADS)

    Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.

    1989-07-01

    The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analysis produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of July. Included are global analyses of: (1) Mean temperature/standard deviation; (2) Mean geopotential height/standard deviation; (3) Mean density/standard deviation; (4) Height and vector standard deviation (all at 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point standard deviation at levels 1000 through 30 mb; and (6) Jet stream at levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.

  9. Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 10: October

    NASA Astrophysics Data System (ADS)

    Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.

    1989-07-01

    The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analysis produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of October. Included are global analyses of: (1) Mean temperature/standard deviation; (2) Mean geopotential height/standard deviation; (3) Mean density/standard deviation; (4) Height and vector standard deviation (all at 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point/standard deviation at levels 1000 through 30 mb; and (6) Jet stream at levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.

  10. Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 3: March

    NASA Astrophysics Data System (ADS)

    Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.

    1989-11-01

    The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analysis produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of March. Included are global analyses of: (1) Mean Temperature Standard Deviation; (2) Mean Geopotential Height Standard Deviation; (3) Mean Density Standard Deviation; (4) Height and Vector Standard Deviation (all for 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean Dew Point Standard Deviation for levels 1000 through 30 mb; and (6) Jet stream for levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.

  11. Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 2: February

    NASA Astrophysics Data System (ADS)

    Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.

    1989-09-01

    The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of February. Included are global analyses of: (1) Mean temperature standard deviation; (2) Mean geopotential height standard deviation; (3) Mean density standard deviation; (4) Height and vector standard deviation (all for 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point standard deviation for the 13 levels; and (6) Jet stream for levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.

  12. Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 4: April

    NASA Astrophysics Data System (ADS)

    Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.

    1989-07-01

    The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of April. Included are global analyses of: (1) Mean temperature standard deviation; (2) Mean geopotential height standard deviation; (3) Mean density standard deviation; (4) Height and vector standard deviation (all for 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point standard deviation for the 13 levels; and (6) Jet stream for levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.

  13. Precision analysis for standard deviation measurements of immobile single fluorescent molecule images.

    PubMed

    DeSantis, Michael C; DeCenzo, Shawn H; Li, Je-Luen; Wang, Y M

    2010-03-29

    Standard deviation measurements of intensity profiles of stationary single fluorescent molecules are useful for studying axial localization, molecular orientation, and a fluorescence imaging system's spatial resolution. Here we report on the analysis of the precision of standard deviation measurements of intensity profiles of single fluorescent molecules imaged using an EMCCD camera. We have developed an analytical expression for the standard deviation measurement error of a single image, which is a function of the total number of detected photons, the background photon noise, and the camera pixel size. The theoretical results agree well with the experimental, simulation, and numerical integration results. Using this expression, we show that single-molecule standard deviation measurements offer nanometer precision for a large range of experimental parameters.

  14. A correction scheme for a simplified analytical random walk model algorithm of proton dose calculation in distal Bragg peak regions

    NASA Astrophysics Data System (ADS)

    Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.

    2016-10-01

    The lateral homogeneity assumption is used in most analytical algorithms for proton dose, such as the pencil-beam algorithms and our simplified analytical random walk model. To improve the dose calculation in the distal fall-off region in heterogeneous media, we analyzed primary proton fluence near heterogeneous media and propose to calculate the lateral fluence with voxel-specific Gaussian distributions. The lateral fluence from a beamlet is no longer expressed by a single Gaussian for all the lateral voxels, but by a specific Gaussian for each lateral voxel. The voxel-specific Gaussian for the beamlet of interest is calculated by re-initializing the fluence deviation on an effective surface where the proton energies of the beamlet of interest and the beamlet passing the voxel are the same. The dose improvement from the correction scheme was demonstrated by the dose distributions in two sets of heterogeneous phantoms consisting of cortical bone, lung, and water and by evaluating distributions in example patients with a head-and-neck tumor and metal spinal implants. The dose distributions from Monte Carlo simulations were used as the reference. The correction scheme effectively improved the dose calculation accuracy in the distal fall-off region and increased the gamma test pass rate. The extra computation for the correction was about 20% of that for the original algorithm but is dependent upon patient geometry.

  15. Fitted Hanbury-Brown Twiss radii versus space-time variances in flow-dominated models

    NASA Astrophysics Data System (ADS)

    Frodermann, Evan; Heinz, Ulrich; Lisa, Michael Annan

    2006-04-01

    The inability of otherwise successful dynamical models to reproduce the Hanbury-Brown Twiss (HBT) radii extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the RHIC HBT Puzzle. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source that can be directly computed from the emission function without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models, some of which exhibit significant deviations from simple Gaussian behavior. By Fourier transforming the emission function, we compute the two-particle correlation function, and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and the measured HBT radii remain, we show that a more apples-to-apples comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data.

  16. Effect of Coulomb friction on orientational correlation and velocity distribution functions in a sheared dilute granular gas.

    PubMed

    Gayen, Bishakhdatta; Alam, Meheboob

    2011-08-01

    From particle simulations of a sheared frictional granular gas, we show that the Coulomb friction can have dramatic effects on orientational correlation as well as on both the translational and angular velocity distribution functions even in the Boltzmann (dilute) limit. The dependence of orientational correlation on friction coefficient (μ) is found to be nonmonotonic, and the Coulomb friction plays a dual role of enhancing or diminishing the orientational correlation, depending on the value of the tangential restitution coefficient (which characterizes the roughness of particles). From the sticking limit (i.e., with no sliding contact) of rough particles, decreasing the Coulomb friction is found to reduce the density and spatial velocity correlations which, together with diminished orientational correlation for small enough μ, are responsible for the transition from non-gaussian to gaussian distribution functions in the double limit of small friction (μ→0) and nearly elastic particles (e→1). This double limit in fact corresponds to perfectly smooth particles, and hence the maxwellian (gaussian) is indeed a solution of the Boltzmann equation for a frictional granular gas in the limit of elastic collisions and zero Coulomb friction at any roughness. The high-velocity tails of both distribution functions seem to follow stretched exponentials even in the presence of Coulomb friction, and the related velocity exponents deviate strongly from a gaussian with increasing friction.

  17. Fitted Hanbury-Brown-Twiss radii versus space-time variances in flow-dominated models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frodermann, Evan; Heinz, Ulrich; Lisa, Michael Annan

    2006-04-15

    The inability of otherwise successful dynamical models to reproduce the Hanbury-Brown-Twiss (HBT) radii extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the RHIC HBT Puzzle. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source that can be directly computed from the emission function without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models, some of which exhibit significant deviations from simple Gaussian behavior. By Fourier transforming the emission function, we compute the two-particle correlation function, and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and the measured HBT radii remain, we show that a more apples-to-apples comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data.

  18. A Gaussian Process Based Multi-Person Interaction Model

    NASA Astrophysics Data System (ADS)

    Klinger, T.; Rottensteiner, F.; Heipke, C.

    2016-06-01

    Online multi-person tracking in image sequences is commonly guided by recursive filters, whose predictive models define the expected positions of future states. When a predictive model deviates too much from the true motion of a pedestrian, which is often the case in crowded scenes due to unpredicted accelerations, the data association is prone to fail. In this paper we propose a novel predictive model on the basis of Gaussian Process Regression. The model takes into account the motion of every tracked pedestrian in the scene and the prediction is executed with respect to the velocities of all interrelated persons. As shown by the experiments, the model is capable of yielding more plausible predictions even in the presence of mutual occlusions or missing measurements. The approach is evaluated on a publicly available benchmark and outperforms other state-of-the-art trackers.
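
    As an illustration of the predictive step only (not the authors' full interaction model), a Gaussian Process regressor can map frame index to position and return a predictive mean and standard deviation for the next frame. The trajectory below is hypothetical and scikit-learn is used for brevity, with one regressor per coordinate assumed.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        t = np.arange(20, dtype=float).reshape(-1, 1)                        # frame indices of one observed track
        x = 0.5 * t.ravel() + np.random.default_rng(0).normal(0, 0.05, 20)   # hypothetical x positions, m

        kernel = RBF(length_scale=5.0) + WhiteKernel(noise_level=0.01)
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, x)

        # predictive mean and standard deviation for the next frame (repeat for the y coordinate)
        mean, std = gp.predict(np.array([[20.0]]), return_std=True)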

  19. Orbital angular momentum mode of Gaussian beam induced by atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Cheng, Mingjian; Guo, Lixin; Li, Jiangting; Yan, Xu; Dong, Kangjun

    2018-02-01

    Superposition theory of spiral harmonics is employed to numerically study the transmission properties of the orbital angular momentum (OAM) modes of a Gaussian beam induced by atmospheric turbulence. Results show that the Gaussian beam does not carry OAM at the source, but various OAM modes appear after it is affected by atmospheric turbulence. As the atmospheric turbulence strength increases, the lower-order OAM modes appear first, followed by the higher-order OAM modes. The beam spreading of Gaussian beams in the atmosphere increases with the topological charge of the OAM modes caused by atmospheric turbulence. As the topological charge increases, the mode probability density of the OAM generated by atmospheric turbulence decreases, and the peak position gradually deviates from the center of the Gaussian beam spot. Our results may be useful for improving the performance of long-distance laser digital spiral imaging systems.

  20. Distinguishing response conflict and task conflict in the Stroop task: evidence from ex-Gaussian distribution analysis.

    PubMed

    Steinhauser, Marco; Hübner, Ronald

    2009-10-01

    It has been suggested that performance in the Stroop task is influenced by response conflict as well as task conflict. The present study investigated the idea that both conflict types can be isolated by applying ex-Gaussian distribution analysis which decomposes response time into a Gaussian and an exponential component. Two experiments were conducted in which manual versions of a standard Stroop task (Experiment 1) and a separated Stroop task (Experiment 2) were performed under task-switching conditions. Effects of response congruency and stimulus bivalency were used to measure response conflict and task conflict, respectively. Ex-Gaussian analysis revealed that response conflict was mainly observed in the Gaussian component, whereas task conflict was stronger in the exponential component. Moreover, task conflict in the exponential component was selectively enhanced under task-switching conditions. The results suggest that ex-Gaussian analysis can be used as a tool to isolate different conflict types in the Stroop task. PsycINFO Database Record (c) 2009 APA, all rights reserved.
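
    The ex-Gaussian decomposition used here can be reproduced with SciPy's exponentially modified normal distribution; the reaction times below are simulated placeholders, and the fitted parameters map to the usual mu, sigma, tau notation as shown.

        import numpy as np
        from scipy.stats import exponnorm

        rng = np.random.default_rng(0)
        rt = rng.normal(450, 60, 500) + rng.exponential(120, 500)   # simulated reaction times, ms

        K, loc, scale = exponnorm.fit(rt)
        mu, sigma, tau = loc, scale, K * scale                      # Gaussian mean/SD and exponential tau
        print(mu, sigma, tau)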

  1. Topics in Gravitation and Cosmology

    NASA Astrophysics Data System (ADS)

    Bahrami Taghanaki, Sina

    This thesis is focused on two topics in which relativistic gravitational fields play an important role, namely early Universe cosmology and black hole physics. The theory of cosmic inflation has emerged as the most successful theory of the very early Universe with concrete and verifiable predictions for the properties of anisotropies of the cosmic microwave background radiation and large scale structure. Coalescences of black hole binaries have recently been detected by the Laser Interferometer Gravitational Wave Observatory (LIGO), opening a new arena for observationally testing the dynamics of gravity. In part I of this thesis we explore some modifications to the standard theory of inflation. The main predictions of single field slow-roll inflation have been largely consistent with cosmological observations. However, there remain some aspects of the theory that are not presently well understood. Among these are the somewhat interrelated issues of the choice of initial state for perturbations and the potential imprints of pre-inflationary dynamics. It is well known that a key prediction of the standard theory of inflation, namely the Gaussianity of perturbations, is a consequence of choosing a natural vacuum initial state. In chapter 3, we study the generation and detectability of non-Gaussianities in inflationary scalar perturbations that originate from more general choices of initial state. After that, in chapter 4, we study a simple but predictive model of pre-inflationary dynamics in an attempt to test the robustness of inflationary predictions. We find that significant deviations from the standard predictions are unlikely to result from models in which the inflaton field decouples from the pre-inflationary degrees of freedom prior to freeze-out of the observable modes. In part II we turn to a study of an aspect of the thermodynamics of black holes, a subject which has led to important advances in our understanding of quantum gravity. For objects which collapse to form black holes, we examine a conjectured relationship between the objects' entropy, the collapse timescale, and the mass of the final black hole. This relationship is relevant for understanding the nature of generic quantum mechanical states of black hole interiors. In chapter 6 we construct a counter-example to a weak version of the conjectured relation.

  2. Pedagogical introduction to the entropy of entanglement for Gaussian states

    NASA Astrophysics Data System (ADS)

    Demarie, Tommaso F.

    2018-05-01

    In quantum information theory, the entropy of entanglement is a standard measure of bipartite entanglement between two partitions of a composite system. For a particular class of continuous variable quantum states, the Gaussian states, the entropy of entanglement can be expressed elegantly in terms of symplectic eigenvalues, elements that characterise a Gaussian state and depend on the correlations of the canonical variables. We give a rigorous step-by-step derivation of this result and provide physical insights, together with an example that can be useful in practice for calculations.
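
    A compact numerical counterpart of the result discussed above: the symplectic eigenvalues of a reduced covariance matrix are the moduli of the eigenvalues of iΩσ, and the entropy of entanglement follows from the standard bosonic entropy function. The single-mode reduced state of a two-mode squeezed vacuum is used as a textbook example, with the quadrature ordering (x1, p1, x2, p2, ...) and a convention in which the vacuum covariance matrix is the identity assumed.

        import numpy as np

        def symplectic_eigenvalues(sigma):
            """Moduli of the eigenvalues of i*Omega*sigma (each value appears twice; keep one copy)."""
            n = sigma.shape[0] // 2
            omega = np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))
            ev = np.abs(np.linalg.eigvals(1j * omega @ sigma))
            return np.sort(ev)[::2]                      # doubly degenerate spectrum

        def entropy(nu):
            """Von Neumann entropy contribution of one symplectic eigenvalue nu >= 1 (in nats)."""
            p, m = (nu + 1) / 2, (nu - 1) / 2
            return p * np.log(p) - (m * np.log(m) if m > 0 else 0.0)

        r = 1.0                                          # squeezing parameter
        sigma_reduced = np.cosh(2 * r) * np.eye(2)       # reduced covariance of one mode
        print(sum(entropy(nu) for nu in symplectic_eigenvalues(sigma_reduced)))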

  3. On the Use of a Mixed Gaussian/Finite-Element Basis Set for the Calculation of Rydberg States

    NASA Technical Reports Server (NTRS)

    Thuemmel, Helmar T.; Langhoff, Stephen (Technical Monitor)

    1996-01-01

    Configuration-interaction studies are reported for the Rydberg states of the helium atom using mixed Gaussian/finite-element (GTO/FE) one-particle basis sets. Standard Gaussian valence basis sets are employed, like those used extensively in quantum chemistry calculations. It is shown that the term values for high-lying Rydberg states of the helium atom can be obtained accurately (within 1 cm⁻¹), even for a small GTO set, by augmenting the n-particle space with configurations in which orthonormalized interpolation polynomials are singly occupied.

  4. Non-Gaussian Distributions Affect Identification of Expression Patterns, Functional Annotation, and Prospective Classification in Human Cancer Genomes

    PubMed Central

    Marko, Nicholas F.; Weil, Robert J.

    2012-01-01

    Introduction Gene expression data is often assumed to be normally-distributed, but this assumption has not been tested rigorously. We investigate the distribution of expression data in human cancer genomes and study the implications of deviations from the normal distribution for translational molecular oncology research. Methods We conducted a central moments analysis of five cancer genomes and performed empiric distribution fitting to examine the true distribution of expression data both on the complete-experiment and on the individual-gene levels. We used a variety of parametric and nonparametric methods to test the effects of deviations from normality on gene calling, functional annotation, and prospective molecular classification using a sixth cancer genome. Results Central moments analyses reveal statistically-significant deviations from normality in all of the analyzed cancer genomes. We observe as much as 37% variability in gene calling, 39% variability in functional annotation, and 30% variability in prospective, molecular tumor subclassification associated with this effect. Conclusions Cancer gene expression profiles are not normally-distributed, either on the complete-experiment or on the individual-gene level. Instead, they exhibit complex, heavy-tailed distributions characterized by statistically-significant skewness and kurtosis. The non-Gaussian distribution of this data affects identification of differentially-expressed genes, functional annotation, and prospective molecular classification. These effects may be reduced in some circumstances, although not completely eliminated, by using nonparametric analytics. This analysis highlights two unreliable assumptions of translational cancer gene expression analysis: that “small” departures from normality in the expression data distributions are analytically-insignificant and that “robust” gene-calling algorithms can fully compensate for these effects. PMID:23118863
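
    The central-moments analysis described above amounts to computing skewness and kurtosis per gene and testing each gene's distribution for normality; a minimal sketch with SciPy on a hypothetical expression matrix (heavy-tailed by construction) is shown below.

        import numpy as np
        from scipy import stats

        expr = np.random.default_rng(0).standard_t(df=3, size=(2000, 100))   # hypothetical genes x samples matrix

        skew = stats.skew(expr, axis=1)
        kurt = stats.kurtosis(expr, axis=1)               # excess kurtosis; 0 for a Gaussian
        _, pvals = stats.normaltest(expr, axis=1)         # D'Agostino-Pearson test per gene

        frac_non_gaussian = np.mean(pvals < 0.05)
        print(skew.mean(), kurt.mean(), frac_non_gaussian)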

  5. Exploring Students' Conceptions of the Standard Deviation

    ERIC Educational Resources Information Center

    delMas, Robert; Liu, Yan

    2005-01-01

    This study investigated introductory statistics students' conceptual understanding of the standard deviation. A computer environment was designed to promote students' ability to coordinate characteristics of variation of values about the mean with the size of the standard deviation as a measure of that variation. Twelve students participated in an…

  6. 7 CFR 801.4 - Tolerances for dockage testers.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...

  7. 7 CFR 801.4 - Tolerances for dockage testers.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...

  8. 7 CFR 801.4 - Tolerances for dockage testers.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...

  9. 7 CFR 801.4 - Tolerances for dockage testers.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...

  10. Statistics as Unbiased Estimators: Exploring the Teaching of Standard Deviation

    ERIC Educational Resources Information Center

    Wasserman, Nicholas H.; Casey, Stephanie; Champion, Joe; Huey, Maryann

    2017-01-01

    This manuscript presents findings from a study about the knowledge for and planned teaching of standard deviation. We investigate how understanding variance as an unbiased (inferential) estimator--not just a descriptive statistic for the variation (spread) in data--is related to teachers' instruction regarding standard deviation, particularly…

  11. 7 CFR 801.4 - Tolerances for dockage testers.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...

  12. 7 CFR 801.6 - Tolerances for moisture meters.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... moisture, mean deviation from National standard moisture meter using Hard Red Winter wheat Mid ±0.05 percent moisture, mean deviation from National standard moisture meter using Hard Red Winter wheat High ±0.05 percent moisture, mean deviation from National standard moisture meter using Hard Red Winter wheat...

  13. Conditional and unconditional Gaussian quantum dynamics

    NASA Astrophysics Data System (ADS)

    Genoni, Marco G.; Lami, Ludovico; Serafini, Alessio

    2016-07-01

    This article focuses on the general theory of open quantum systems in the Gaussian regime and explores a number of diverse ramifications and consequences of the theory. We shall first introduce the Gaussian framework in its full generality, including a classification of Gaussian (also known as 'general-dyne') quantum measurements. In doing so, we will give a compact proof for the parametrisation of the most general Gaussian completely positive map, which we believe to be missing in the existing literature. We will then move on to consider the linear coupling with a white noise bath, and derive the diffusion equations that describe the evolution of Gaussian states under such circumstances. Starting from these equations, we outline a constructive method to derive general master equations that apply outside the Gaussian regime. Next, we include the general-dyne monitoring of the environmental degrees of freedom and recover the Riccati equation for the conditional evolution of Gaussian states. Our derivation relies exclusively on the standard quantum mechanical update of the system state, through the evaluation of Gaussian overlaps. The parametrisation of the conditional dynamics we obtain is novel and, at variance with existing alternatives, directly ties in to physical detection schemes. We conclude our study with two examples of conditional dynamics that can be dealt with conveniently through our formalism, demonstrating how monitoring can suppress the noise in optical parametric processes as well as stabilise systems subject to diffusive scattering.

  14. An anisotropic diffusion method for denoising dynamic susceptibility contrast-enhanced magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Murase, Kenya; Yamazaki, Youichi; Shinohara, Masaaki; Kawakami, Kazunori; Kikuchi, Keiichi; Miki, Hitoshi; Mochizuki, Teruhito; Ikezoe, Junpei

    2001-10-01

    The purpose of this study was to present an application of a novel denoising technique for improving the accuracy of cerebral blood flow (CBF) images generated from dynamic susceptibility contrast-enhanced magnetic resonance imaging (DSC-MRI). The method presented in this study was based on anisotropic diffusion (AD). The usefulness of this method was firstly investigated using computer simulations. We applied this method to patient data acquired using a 1.5 T MR system. After a bolus injection of Gd-DTPA, we obtained 40-50 dynamic images with a 1.32-2.08 s time resolution in 4-6 slices. The dynamic images were processed using the AD method, and then the CBF images were generated using pixel-by-pixel deconvolution analysis. For comparison, the CBF images were also generated with or without processing the dynamic images using a median or Gaussian filter. In simulation studies, the standard deviation of the CBF values obtained after processing by the AD method was smaller than that of the CBF values obtained without any processing, while the mean value agreed well with the true CBF value. Although the median and Gaussian filters also reduced image noise, the mean CBF values were considerably underestimated compared with the true values. Clinical studies also suggested that the AD method was capable of reducing the image noise while preserving the quantitative accuracy of CBF images. In conclusion, the AD method appears useful for denoising DSC-MRI, which will make the CBF images generated from DSC-MRI more reliable.
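
    The abstract does not specify the exact AD scheme, so the sketch below uses the classic Perona-Malik formulation as a stand-in: it smooths within homogeneous regions while preserving strong edges, which is the behaviour relied on for the dynamic frames. The frame and the parameter values are illustrative.

        import numpy as np

        def anisotropic_diffusion(img, n_iter=15, kappa=30.0, gamma=0.2):
            """Perona-Malik diffusion: smooth within regions, preserve strong edges."""
            out = img.astype(float).copy()
            for _ in range(n_iter):
                dN = np.roll(out, -1, axis=0) - out
                dS = np.roll(out, 1, axis=0) - out
                dE = np.roll(out, -1, axis=1) - out
                dW = np.roll(out, 1, axis=1) - out
                # exponential conduction coefficient: small across strong gradients (edges)
                cN, cS = np.exp(-(dN / kappa) ** 2), np.exp(-(dS / kappa) ** 2)
                cE, cW = np.exp(-(dE / kappa) ** 2), np.exp(-(dW / kappa) ** 2)
                out += gamma * (cN * dN + cS * dS + cE * dE + cW * dW)
            return out

        frame = np.random.default_rng(0).normal(100, 10, (128, 128))   # hypothetical dynamic MR frame
        denoised = anisotropic_diffusion(frame)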

  15. A Model of Self-Monitoring Blood Glucose Measurement Error.

    PubMed

    Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio

    2017-07-01

    A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, like testing in silico insulin therapies. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity are unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of SMBG error PDF. The blood glucose range is divided into zones where error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum-likelihood to experimental data. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant SD absolute error; zone 2 with constant SD relative error. Goodness-of-fit tests confirmed that identified PDF models are valid and superior to Gaussian models used so far in the literature. The proposed methodology allows to derive realistic models of SMBG error PDF. These models can be used in several investigations of present interest in the scientific community, for example, to perform in silico clinical trials to compare SMBG-based with nonadjunctive CGM-based insulin treatments.
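
    A minimal sketch of the zone-wise maximum-likelihood fit: split the errors into two glucose zones (absolute error below a threshold, relative error above it) and fit a skew-normal PDF in each. The threshold and the simulated measurements are placeholders, not the values identified in the paper, and the outlier (exponential) component is omitted.

        import numpy as np
        from scipy.stats import skewnorm

        rng = np.random.default_rng(0)
        reference = rng.uniform(40, 400, 2000)                    # hypothetical reference glucose, mg/dL
        smbg = reference + rng.normal(2, 8, 2000) * (1 + reference / 400)

        threshold = 75.0                                          # placeholder zone boundary, mg/dL
        low, high = reference <= threshold, reference > threshold
        zone1 = smbg[low] - reference[low]                        # absolute error in zone 1
        zone2 = (smbg[high] - reference[high]) / reference[high]  # relative error in zone 2

        params1 = skewnorm.fit(zone1)                             # (shape, loc, scale) by maximum likelihood
        params2 = skewnorm.fit(zone2)
        print(params1, params2)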

  16. Modeling the Test-Retest Statistics of a Localization Experiment in the Full Horizontal Plane.

    PubMed

    Morsnowski, André; Maune, Steffen

    2016-10-01

    Two approaches to modelling the test-retest statistics of a localization experiment, one based on a Gaussian distribution and one on surrogate data, are introduced. Their efficiency is investigated using different measures describing directional hearing ability. A localization experiment in the full horizontal plane is a challenging task for hearing-impaired patients. In clinical routine, we use this experiment to evaluate the progress of our cochlear implant (CI) recipients. Listening and time effort limit the reproducibility. The localization experiment consists of a circle of 12 loudspeakers placed in an anechoic room, a "camera silens". In darkness, HSM sentences are presented at 65 dB pseudo-randomly from all 12 directions with five repetitions. This experiment is modeled by a set of Gaussian distributions with different standard deviations added to a perfect estimator, as well as by surrogate data. Five repetitions per direction are used to produce surrogate data distributions for the sensation directions. To investigate the statistics, we retrospectively use the data of 33 CI patients with 92 pairs of test-retest measurements from the same day. The first model does not take inversions into account (i.e., permutations of the direction from back to front and vice versa are not considered), although they are common for hearing-impaired persons, particularly in the rear hemisphere. The second model considers these inversions but does not work with all measures. The introduced models successfully describe the test-retest statistics of directional hearing. However, since they perform differently on the investigated measures, no general recommendation can be provided. The presented test-retest statistics enable pair test comparisons for localization experiments.

  17. MAGNETIC FIELD STRENGTH FLUCTUATIONS IN THE HELIOSHEATH: VOYAGER 1 OBSERVATIONS DURING 2009

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burlaga, L. F.; Ness, N. F., E-mail: lburlagahsp@verizon.net, E-mail: nfnudel@yahoo.com

    2012-01-01

    We analyze the "microscale fluctuations" of the magnetic field strength B on a scale of several hours observed by Voyager 1 (V1) in the heliosheath during 2009. The microscale fluctuations of B range from coherent to stochastic structures. The amplitude of microscale fluctuations of B during 1 day is measured by the standard deviation (SD) of 48 s averages of B. The distribution of the daily values of SD is lognormal. SD(t) from day of year (DOY) 1 to 331, 2009, is very intermittent. SD(t) has a 1/f or 'pink noise' spectrum on scales from 1 to 100 days, and it has a broad multifractal spectrum f(α) with 0.57 ≤ α ≤ 1.39. The time series of increments SD(t + τ) - SD(t) has a pink noise spectrum with α' = 0.88 ± 0.14 on scales from 1 to 100 days. The increments have a Tsallis (q-Gaussian) distribution on scales from 1 to 165 days, with an average q = 1.75 ± 0.12. The skewness S and kurtosis K have Gaussian and lognormal distributions, respectively. The largest spikes in K(t) and S(t) are often associated with a change in B across a data gap and with identifiable physical structures. The 'turbulence' observed by V1 during 2009 was weakly compressible on average but still very intermittent, highly variable, and highly compressible at times. The turbulence observed just behind the termination shock by Voyager 2 was twice as strong. These observations place strong constraints on any model of 'turbulence' in the heliosheath.

  18. Age-dependent biochemical quantities: an approach for calculating reference intervals.

    PubMed

    Bjerner, J

    2007-01-01

    A parametric method is often preferred when calculating reference intervals for biochemical quantities, as non-parametric methods are less efficient and require more observations/study subjects. Parametric methods are complicated, however, because of three commonly encountered features. First, biochemical quantities seldom display a Gaussian distribution, and there must either be a transformation procedure to obtain such a distribution or a more complex distribution has to be used. Second, biochemical quantities are often dependent on a continuous covariate, exemplified by rising serum concentrations of MUC1 (episialin, CA15.3) with increasing age. Third, outliers often exert substantial influence on parametric estimations and therefore need to be excluded before calculations are made. The International Federation of Clinical Chemistry (IFCC) currently recommends that confidence intervals be calculated for the reference centiles obtained. However, common statistical packages allowing for the adjustment of a continuous covariate do not make this calculation. In the method described in the current study, Tukey's fence is used to eliminate outliers and two-stage transformations (modulus-exponential-normal) in order to render Gaussian distributions. Fractional polynomials are employed to model functions for mean and standard deviations dependent on a covariate, and the model is selected by maximum likelihood. Confidence intervals are calculated for the fitted centiles by combining parameter estimation and sampling uncertainties. Finally, the elimination of outliers was made dependent on covariates by reiteration. Though a good knowledge of statistical theory is needed when performing the analysis, the current method is rewarding because the results are of practical use in patient care.
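
    The outlier-elimination step mentioned above is a one-liner once the quartiles are known; below is a sketch of Tukey's fence with the conventional 1.5 x IQR multiplier (the paper's exact multiplier and reiteration over the covariate are not reproduced here), applied to hypothetical marker values.

        import numpy as np

        def tukey_fence(values, k=1.5):
            """Keep observations inside [Q1 - k*IQR, Q3 + k*IQR]."""
            q1, q3 = np.percentile(values, [25, 75])
            iqr = q3 - q1
            return values[(values >= q1 - k * iqr) & (values <= q3 + k * iqr)]

        ca153 = np.random.default_rng(0).lognormal(mean=2.8, sigma=0.4, size=500)   # hypothetical MUC1 (CA15.3) values
        cleaned = tukey_fence(ca153)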

  19. Accurate Drift Time Determination by Traveling Wave Ion Mobility Spectrometry: The Concept of the Diffusion Calibration.

    PubMed

    Kune, Christopher; Far, Johann; De Pauw, Edwin

    2016-12-06

    Ion mobility spectrometry (IMS) is a gas phase separation technique, which relies on differences in collision cross section (CCS) of ions. Ionic clouds of unresolved conformers overlap if the CCS difference is below the instrumental resolution expressed as CCS/ΔCCS. The experimental arrival time distribution (ATD) peak is then a superimposition of the various contributions weighted by their relative intensities. This paper introduces a strategy for accurate drift time determination of poorly resolved or unresolved conformers using traveling wave ion mobility spectrometry (TWIMS). This method implements, through a calibration procedure, the link between the peak full width at half-maximum (fwhm) and the drift time of model compounds for a wide range of settings for wave heights and velocities. We modified a Gaussian equation, which achieves the deconvolution of ATD peaks where the fwhm is fixed according to our calibration procedure. The new fitting Gaussian equation only depends on two parameters: the apex of the peak (A) and the mean drift time value (μ). The standard deviation parameter (correlated to fwhm) becomes a function of the drift time. This correlation function between μ and fwhm is obtained using the TWIMS calibration procedure, which determines the maximum instrumental ion beam diffusion under limited and controlled space charge effect using ionic compounds that are detected as single conformers in the gas phase. This deconvolution process has been used to highlight the presence of poorly resolved conformers of crown ether complexes and peptides, leading to more accurate CCS determinations in better agreement with quantum chemistry predictions.
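
    The constrained fit described above can be sketched as follows: only the apex A and the mean drift time μ are free, while the Gaussian width is tied to μ through a width calibration. The linear fwhm(μ) calibration, the synthetic two-conformer peak, and the noise level are hypothetical stand-ins, not the published calibration.

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical calibration: fwhm as a linear function of drift time (ms).
        def fwhm_cal(mu):
            return 0.05 * mu + 0.10

        def constrained_gaussian(t, A, mu):
            """Gaussian whose width is not free but set by the fwhm calibration."""
            sigma = fwhm_cal(mu) / (2.0 * np.sqrt(2.0 * np.log(2.0)))
            return A * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

        # Synthetic ATD peak: two poorly resolved conformers plus noise.
        rng = np.random.default_rng(2)
        t = np.linspace(3.0, 7.0, 400)
        truth = constrained_gaussian(t, 1.0, 4.8) + constrained_gaussian(t, 0.6, 5.2)
        y = truth + rng.normal(0.0, 0.02, t.size)

        # Deconvolve as a sum of two constrained Gaussians: four free parameters.
        def two_peaks(t, A1, mu1, A2, mu2):
            return constrained_gaussian(t, A1, mu1) + constrained_gaussian(t, A2, mu2)

        popt, _ = curve_fit(two_peaks, t, y, p0=[1.0, 4.6, 0.5, 5.4])
        print("fitted apexes and drift times:", np.round(popt, 3))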

  20. Evidence for higher reaction time variability for children with ADHD on a range of cognitive tasks including reward and event rate manipulations

    PubMed Central

    Epstein, Jeffery N.; Langberg, Joshua M.; Rosen, Paul J.; Graham, Amanda; Narad, Megan E.; Antonini, Tanya N.; Brinkman, William B.; Froehlich, Tanya; Simon, John O.; Altaye, Mekibib

    2012-01-01

    Objective: The purpose of the research study was to examine the manifestation of variability in reaction times (RT) in children with Attention Deficit Hyperactivity Disorder (ADHD) and to examine whether RT variability presented differently across a variety of neuropsychological tasks, was present across the two most common ADHD subtypes, and was affected by reward and event rate (ER) manipulations. Method: Children with ADHD-Combined Type (n=51), ADHD-Predominantly Inattentive Type (n=53) and 47 controls completed five neuropsychological tasks (Choice Discrimination Task, Child Attentional Network Task, Go/No-Go Task, Stop Signal Task, and N-back Task), each allowing trial-by-trial assessment of reaction times. Multiple indicators of RT variability, including RT standard deviation, coefficient of variation and ex-Gaussian tau, were used. Results: Children with ADHD demonstrated greater RT variability than controls across all five tasks as measured by the ex-Gaussian indicator tau. There were minimal differences in RT variability across the ADHD subtypes. Children with ADHD also had poorer task accuracy than controls across all tasks except the Choice Discrimination Task. Although ER and reward manipulations did affect children's RT variability and task accuracy, these manipulations largely did not differentially affect children with ADHD compared to controls. RT variability and task accuracy were highly correlated across tasks. Removing variance attributable to RT variability from task accuracy did not appreciably affect between-group differences in task accuracy. Conclusions: High RT variability is a ubiquitous and robust phenomenon in children with ADHD. PMID:21463041
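
    A brief sketch of the three variability indicators named above (RT standard deviation, coefficient of variation, and ex-Gaussian tau) for one simulated set of trial-level reaction times; the ex-Gaussian parameters and trial count are assumptions, and scipy's exponentially modified normal is used as the ex-Gaussian.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)

        # Simulated trial-level reaction times (seconds) for one child:
        # ex-Gaussian = Normal(mu, sigma) + Exponential(tau).
        mu, sigma, tau = 0.45, 0.05, 0.20
        rt = stats.exponnorm.rvs(K=tau / sigma, loc=mu, scale=sigma, size=500,
                                 random_state=rng)

        # Conventional variability indicators.
        rt_sd = rt.std(ddof=1)
        rt_cv = rt_sd / rt.mean()

        # ex-Gaussian fit; tau = K * scale in scipy's parameterization.
        K_hat, loc_hat, scale_hat = stats.exponnorm.fit(rt)
        tau_hat = K_hat * scale_hat

        print(f"SD={rt_sd:.3f} s, CV={rt_cv:.3f}, ex-Gaussian tau={tau_hat:.3f} s")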

  1. X-ray diffraction analysis of residual stresses in textured ZnO thin films

    NASA Astrophysics Data System (ADS)

    Dobročka, E.; Novák, P.; Búc, D.; Harmatha, L.; Murín, J.

    2017-02-01

    Residual stresses are commonly generated in thin films during the deposition process and can influence the film properties. Among a number of techniques developed for stress analysis, X-ray diffraction methods, especially the grazing incidence set-up, are of special importance due to their capability to analyze the stresses in very thin layers as well as to investigate the depth variation of the stresses. In this contribution a method combining multiple {hkl} and multiple χ modes of X-ray diffraction stress analysis in grazing incidence set-up is used for the measurement of residual stress in strongly textured ZnO thin films. The method improves the precision of the stress evaluation in textured samples. Because the measurements are performed at very low incidence angles, the effect of refraction of X-rays on the measured stress is analyzed in detail for the general case of non-coplanar geometry. It is shown that this effect cannot be neglected if the angle of incidence approaches the critical angle. The X-ray stress factors are calculated for hexagonal fiber-textured ZnO for the Reuss model of grain-interaction and the effect of texture on the stress factors is analyzed. The texture in the layer is modelled by a Gaussian distribution function. Numerical results indicate that in the process of stress evaluation the Reuss model can be replaced by the much simpler crystallite group method if the standard deviation of the Gaussian describing the texture is less than 6°. The results can be adapted for fiber-textured films of various hexagonal materials.

  2. Statistical analysis of Geopotential Height (GH) timeseries based on Tsallis non-extensive statistical mechanics

    NASA Astrophysics Data System (ADS)

    Karakatsanis, L. P.; Iliopoulos, A. C.; Pavlos, E. G.; Pavlos, G. P.

    2018-02-01

    In this paper, we perform statistical analysis of time series deriving from Earth's climate. The time series concern Geopotential Height (GH) and correspond to temporal and spatial components of the global distribution of monthly average values during the period 1948-2012. The analysis is based on Tsallis non-extensive statistical mechanics and in particular on the estimation of Tsallis' q-triplet, namely {qstat, qsens, qrel}, the reconstructed phase space, and the estimation of the correlation dimension and the Hurst exponent of rescaled range analysis (R/S). The deviation of the Tsallis q-triplet from unity indicates a non-Gaussian (Tsallis q-Gaussian) non-extensive character with heavy-tailed probability density functions (PDFs), multifractal behavior and long-range dependence for all time series considered. Noticeable differences in the q-triplet estimates were also found between time series from distinct spatial or temporal regions. Moreover, the reconstructed phase space revealed a lower-dimensional fractal set in the GH dynamical phase space (strong self-organization), and the estimation of the Hurst exponent indicated multifractality, non-Gaussianity and persistence. The analysis provides significant information for identifying and characterizing the dynamical characteristics of the Earth's climate.

  3. Real-Time Noise Reduction for Mossbauer Spectroscopy through Online Implementation of a Modified Kalman Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abrecht, David G.; Schwantes, Jon M.; Kukkadapu, Ravi K.

    2015-02-01

    Spectrum-processing software that incorporates a Gaussian smoothing kernel within the statistics of first-order Kalman filtration has been developed to provide cross-channel spectral noise reduction for increased real-time signal-to-noise ratios for Mossbauer spectroscopy. The filter was optimized for the breadth of the Gaussian using the Mossbauer spectrum of natural iron foil, and comparisons between the peak broadening, signal-to-noise ratios, and shifts in the calculated hyperfine parameters are presented. The results of optimization give a maximum improvement in the signal-to-noise ratio of 51.1% over the unfiltered spectrum at a Gaussian breadth of 27 channels, or 2.5% of the total spectrum width. The full-width half-maximum of the spectrum peaks showed an increase of 19.6% at this optimum point, indicating a relatively weak increase in the peak broadening relative to the signal enhancement, leading to an overall increase in the observable signal. Calculations of the hyperfine parameters showed no statistically significant deviations were introduced from the application of the filter, confirming the utility of this filter for spectroscopy applications.
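
    A sketch of the cross-channel Gaussian smoothing idea (not the Kalman-filter implementation described above) on a synthetic absorption spectrum; the spectrum, kernel width, and signal-to-noise definition below are assumptions.

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        rng = np.random.default_rng(4)

        # Synthetic 1024-channel spectrum: flat background with absorption dips
        # plus Poisson counting noise (a stand-in for a sextet Mossbauer spectrum).
        channels = np.arange(1024)
        dips = [150, 350, 480, 544, 674, 874]
        signal = np.full(1024, 1.0e4)
        for c in dips:
            signal -= 1.5e3 * np.exp(-0.5 * ((channels - c) / 6.0) ** 2)
        counts = rng.poisson(signal).astype(float)

        # Cross-channel Gaussian smoothing; the kernel breadth is an assumption.
        smoothed = gaussian_filter1d(counts, sigma=10.0)

        def snr(spec):
            # Crude signal-to-noise: dip depth over off-resonance scatter.
            baseline = spec[900:1000]
            return (baseline.mean() - spec.min()) / baseline.std(ddof=1)

        print(f"SNR raw={snr(counts):.1f}, smoothed={snr(smoothed):.1f}")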

  4. How Many Separable Sources? Model Selection In Independent Components Analysis

    PubMed Central

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988

  5. Multi-scale radiomic analysis of sub-cortical regions in MRI related to autism, gender and age

    NASA Astrophysics Data System (ADS)

    Chaddad, Ahmad; Desrosiers, Christian; Toews, Matthew

    2017-03-01

    We propose using multi-scale image textures to investigate links between neuroanatomical regions and clinical variables in MRI. Texture features are derived at multiple scales of resolution based on the Laplacian-of-Gaussian (LoG) filter. Three quantifier functions (Average, Standard Deviation and Entropy) are used to summarize texture statistics within standard, automatically segmented neuroanatomical regions. Significance tests are performed to identify regional texture differences between ASD vs. TDC and male vs. female groups, as well as correlations with age (corrected p < 0.05). The open-access brain imaging data exchange (ABIDE) brain MRI dataset is used to evaluate texture features derived from 31 brain regions from 1112 subjects including 573 typically developing control (TDC, 99 females, 474 males) and 539 Autism spectrum disorder (ASD, 65 female and 474 male) subjects. Statistically significant texture differences between ASD vs. TDC groups are identified asymmetrically in the right hippocampus, left choroid-plexus and corpus callosum (CC), and symmetrically in the cerebellar white matter. Sex-related texture differences in TDC subjects are found primarily in the left amygdala, left cerebellar white matter, and brain stem. Correlations between age and texture in TDC subjects are found in the thalamus-proper, caudate and pallidum, most exhibiting bilateral symmetry.
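
    The three quantifier functions applied to multi-scale LoG responses can be sketched as below; the image, region mask, scales, and histogram binning for the entropy are assumptions rather than the study's processing pipeline.

        import numpy as np
        from scipy.ndimage import gaussian_laplace

        rng = np.random.default_rng(5)
        image = rng.normal(100.0, 10.0, size=(128, 128))   # stand-in MRI slice
        mask = np.zeros_like(image, dtype=bool)
        mask[40:80, 40:80] = True                          # stand-in segmented region

        def texture_features(img, region, sigmas=(1.0, 2.0, 4.0), n_bins=32):
            """Average, standard deviation and entropy of LoG responses at several scales."""
            feats = {}
            for s in sigmas:
                response = gaussian_laplace(img, sigma=s)[region]
                hist, _ = np.histogram(response, bins=n_bins)
                p = hist / hist.sum()
                entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
                feats[s] = (response.mean(), response.std(ddof=1), entropy)
            return feats

        for s, (avg, sd, ent) in texture_features(image, mask).items():
            print(f"sigma={s}: average={avg:.3f}, SD={sd:.3f}, entropy={ent:.2f} bits")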

  6. Visualizing the Sample Standard Deviation

    ERIC Educational Resources Information Center

    Sarkar, Jyotirmoy; Rashid, Mamunur

    2017-01-01

    The standard deviation (SD) of a random sample is defined as the square-root of the sample variance, which is the "mean" squared deviation of the sample observations from the sample mean. Here, we interpret the sample SD as the square-root of twice the mean square of all pairwise half deviations between any two sample observations. This…
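
    A quick numerical check of the interpretation described above: the sample SD equals the square root of twice the mean square of all pairwise half deviations between any two observations; the sample itself is arbitrary.

        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(6)
        x = rng.normal(size=20)          # any sample will do

        # Usual definition: square root of the mean squared deviation from the
        # sample mean (with the n-1 divisor).
        sd_usual = x.std(ddof=1)

        # Pairwise interpretation: square root of twice the mean square of all
        # pairwise half deviations (x_i - x_j)/2 over the n(n-1)/2 unordered pairs.
        half_devs = np.array([(a - b) / 2.0 for a, b in combinations(x, 2)])
        sd_pairwise = np.sqrt(2.0 * np.mean(half_devs ** 2))

        print(sd_usual, sd_pairwise)     # identical up to floating-point error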

  7. Robust Linear Models for Cis-eQTL Analysis.

    PubMed

    Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C

    2015-01-01

    Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as results from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results compared to conventional linear models, particularly in respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as that generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
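
    A minimal sketch contrasting ordinary least squares with a Huber-type robust linear model for a single hypothetical gene-SNP pair with heavy-tailed noise; the simulated dosages, effect size, and noise distribution are assumptions and do not reproduce the study's simulation design.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(7)
        n = 400

        # Allelic dosage (0, 1, 2 copies of the minor allele) and a small true effect.
        dosage = rng.binomial(2, 0.3, size=n)
        expression = 0.15 * dosage + rng.standard_t(df=3, size=n)   # heavy-tailed noise

        X = sm.add_constant(dosage)

        ols = sm.OLS(expression, X).fit()
        rlm = sm.RLM(expression, X, M=sm.robust.norms.HuberT()).fit()

        print(f"OLS:   beta={ols.params[1]:.3f}  p={ols.pvalues[1]:.4f}")
        print(f"Huber: beta={rlm.params[1]:.3f}  p={rlm.pvalues[1]:.4f}")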

  8. Fresnel diffraction by spherical obstacles

    NASA Technical Reports Server (NTRS)

    Hovenac, Edward A.

    1989-01-01

    Lommel functions were used to solve the Fresnel-Kirchhoff diffraction integral for the case of a spherical obstacle. Comparisons were made between Fresnel diffraction theory and Mie scattering theory. Fresnel theory is then compared to experimental data. Experiment and theory typically deviated from one another by less than 10 percent. A unique experimental setup using mercury spheres suspended in a viscous fluid significantly reduced optical noise. The major source of error was due to the Gaussian-shaped laser beam.

  9. Variation in the standard deviation of the lure rating distribution: Implications for estimates of recollection probability.

    PubMed

    Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin

    2017-10-01

    In word recognition, semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters, the Dual Process Signal Detection (DPSD) model could only accommodate the results by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection. With an additional parameter for the probability of false (lure) recollection, the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately, the size of a given lure confidence/evidence standard deviation relative to other possible lure confidence/evidence standard deviations is often unspecified by context. Hence the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.

  10. Synchronous characterization of semiconductor microcavity laser beam.

    PubMed

    Wang, T; Lippi, G L

    2015-06-01

    We report on a high-resolution double-channel imaging method used to synchronously map the intensity and optical-frequency distribution of a laser beam in the plane orthogonal to the propagation direction. The synchronous measurement allows us to show that the laser frequency has an inhomogeneous distribution below threshold, but that it becomes homogeneous across the fundamental Gaussian mode above threshold. Deviations of the beam's tails from the Gaussian shape, however, are accompanied by sizeable fluctuations in the laser wavelength, possibly deriving from manufacturing details and from the influence of spontaneous emission in the very low intensity wings. In addition to the synchronous spatial characterization, a temporal analysis at any given point in the beam cross section is carried out. Using this method, the beam homogeneity and spatial shape, energy density, energy center, and the defect-related spectrum can also be extracted from these high-resolution pictures.

  11. Directionality volatility in electroencephalogram time series

    NASA Astrophysics Data System (ADS)

    Mansor, Mahayaudin M.; Green, David A.; Metcalfe, Andrew V.

    2016-06-01

    We compare time series of electroencephalograms (EEGs) from healthy volunteers with EEGs from subjects diagnosed with epilepsy. The EEG time series from the healthy group are recorded during awake state with their eyes open and eyes closed, and the records from subjects with epilepsy are taken from three different recording regions of pre-surgical diagnosis: hippocampal, epileptogenic and seizure zone. The comparisons for these 5 categories are in terms of deviations from linear time series models with constant variance Gaussian white noise error inputs. One feature investigated is directionality, and how this can be modelled by either non-linear threshold autoregressive models or non-Gaussian errors. A second feature is volatility, which is modelled by Generalized AutoRegressive Conditional Heteroskedasticity (GARCH) processes. Other features include the proportion of variability accounted for by time series models, and the skewness and the kurtosis of the residuals. The results suggest these comparisons may have diagnostic potential for epilepsy and provide early warning of seizures.

  12. Photogrammetry: An available surface characterization tool for solar concentrators. Part 2: Assessment of surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shortis, M.; Johnston, G.

    1997-11-01

    In a previous paper, the results of photogrammetric measurements of a number of paraboloidal reflecting surfaces were presented. These results showed that photogrammetry can provide three-dimensional surface characterizations of such solar concentrators. The present paper describes the assessment of the quality of these surfaces as a derivation of the photogrammetrically produced surface coordinates. Statistical analysis of the z-coordinate distribution of errors indicates that these generally conform to a univariate Gaussian distribution, while the numerical assessment of the surface normal vectors on these surfaces indicates that the surface normal deviations appear to follow an approximately bivariate Gaussian distribution. Ray tracing of the measured surfaces to predict the expected flux distribution at the focal point of the 400 m² dish shows a close correlation with the videographically measured flux distribution at the focal point of the dish.

  13. IBS for non-gaussian distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fedotov, A.; Sidorin, A.O.; Smirnov, A.V.

    In many situations the distribution can deviate significantly from Gaussian, which requires an accurate treatment of IBS. Our original interest in this problem was motivated by the need to have an accurate description of beam evolution due to IBS while the distribution is strongly affected by the external electron cooling force. A variety of models with various degrees of approximation were developed and implemented in BETACOOL in the past to address this topic. A more complete treatment based on the friction coefficient and full 3-D diffusion tensor was introduced in BETACOOL at the end of 2007 under the name 'local IBS model'. Such a model allows the calculation of IBS for an arbitrary beam distribution. The numerical benchmarking of this local IBS algorithm and its comparison with other models was reported before. In this paper, after briefly describing the model and its limitations, we present its comparison with available experimental data.

  14. Fast evaluation of solid harmonic Gaussian integrals for local resolution-of-the-identity methods and range-separated hybrid functionals.

    PubMed

    Golze, Dorothea; Benedikter, Niels; Iannuzzi, Marcella; Wilhelm, Jan; Hutter, Jürg

    2017-01-21

    An integral scheme for the efficient evaluation of two-center integrals over contracted solid harmonic Gaussian functions is presented. Integral expressions are derived for local operators that depend on the position vector of one of the two Gaussian centers. These expressions are then used to derive the formula for three-index overlap integrals where two of the three Gaussians are located at the same center. The efficient evaluation of the latter is essential for local resolution-of-the-identity techniques that employ an overlap metric. We compare the performance of our integral scheme to the widely used Cartesian Gaussian-based method of Obara and Saika (OS). Non-local interaction potentials such as standard Coulomb, modified Coulomb, and Gaussian-type operators, which occur in range-separated hybrid functionals, are also included in the performance tests. The speed-up with respect to the OS scheme is up to three orders of magnitude for both integrals and their derivatives. In particular, our method is increasingly efficient for large angular momenta and highly contracted basis sets.

  15. Fast evaluation of solid harmonic Gaussian integrals for local resolution-of-the-identity methods and range-separated hybrid functionals

    NASA Astrophysics Data System (ADS)

    Golze, Dorothea; Benedikter, Niels; Iannuzzi, Marcella; Wilhelm, Jan; Hutter, Jürg

    2017-01-01

    An integral scheme for the efficient evaluation of two-center integrals over contracted solid harmonic Gaussian functions is presented. Integral expressions are derived for local operators that depend on the position vector of one of the two Gaussian centers. These expressions are then used to derive the formula for three-index overlap integrals where two of the three Gaussians are located at the same center. The efficient evaluation of the latter is essential for local resolution-of-the-identity techniques that employ an overlap metric. We compare the performance of our integral scheme to the widely used Cartesian Gaussian-based method of Obara and Saika (OS). Non-local interaction potentials such as standard Coulomb, modified Coulomb, and Gaussian-type operators, which occur in range-separated hybrid functionals, are also included in the performance tests. The speed-up with respect to the OS scheme is up to three orders of magnitude for both integrals and their derivatives. In particular, our method is increasingly efficient for large angular momenta and highly contracted basis sets.

  16. Down-Looking Interferometer Study II, Volume I,

    DTIC Science & Technology

    1980-03-01

    ... (standard deviation of ΔN)/(standard deviation of ...) (3), where the "reference spectrum" is an estimate of the actual spectrum ... According to Eq. (2), Z is the standard deviation of the observed contrast spectral radiance ΔN divided by the effective rms system ...

  17. 40 CFR 61.207 - Radium-226 sampling and measurement procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... B, Method 114. (3) Calculate the mean, x̄1, and the standard deviation, s1, of the n1 radium-226... owner or operator of a phosphogypsum stack shall report the mean, standard deviation, 95th percentile..., Method 114. (4) Recalculate the mean and standard deviation of the entire set of n2 radium-226...

  18. Verification of unfold error estimates in the unfold operator code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fehl, D.L.; Biggs, F.

    Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums. © 1997 American Institute of Physics.
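
    The Monte Carlo side of the comparison can be sketched generically (this is not the UFO code): perturb the data with 5% Gaussian deviates, repeat a simple least-squares unfold, and take the spread of the unfolded results as the uncertainty estimate. The response matrix and spectrum below are toy assumptions.

        import numpy as np

        rng = np.random.default_rng(8)

        # Toy forward problem: data = R @ spectrum, with an over-determined,
        # well-conditioned response matrix standing in for the detector responses.
        n_channels, n_bins = 12, 5
        R = rng.uniform(0.0, 1.0, size=(n_channels, n_bins))
        true_spectrum = np.array([1.0, 3.0, 5.0, 3.0, 1.0])
        data = R @ true_spectrum

        def unfold(d):
            # Simple least-squares unfold standing in for the real algorithm.
            return np.linalg.lstsq(R, d, rcond=None)[0]

        # Monte Carlo: 100 data sets with 5% (1-sigma) Gaussian imprecision.
        n_trials = 100
        results = np.array([unfold(data * (1.0 + 0.05 * rng.standard_normal(n_channels)))
                            for _ in range(n_trials)])

        mc_uncertainty = results.std(axis=0, ddof=1)
        print("Monte Carlo unfold uncertainty per bin:", np.round(mc_uncertainty, 3))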

  19. An efficient method to determine double Gaussian fluence parameters in the eclipse™ proton pencil beam model.

    PubMed

    Shen, Jiajian; Liu, Wei; Stoker, Joshua; Ding, Xiaoning; Anand, Aman; Hu, Yanle; Herman, Michael G; Bues, Martin

    2016-12-01

    To find an efficient method to configure the proton fluence for a commercial proton pencil beam scanning (PBS) treatment planning system (TPS). An in-water dose kernel was developed to mimic the dose kernel of the pencil beam convolution superposition algorithm, which is part of the commercial proton beam therapy planning software, eclipse™ (Varian Medical Systems, Palo Alto, CA). The field size factor (FSF) was calculated based on the spot profile reconstructed by the in-house dose kernel. The workflow of using FSFs to find the desirable proton fluence is presented. The in-house derived spot profile and FSF were validated by a direct comparison with those calculated by the eclipse TPS. The validation included 420 comparisons of the FSFs from 14 proton energies, various field sizes from 2 to 20 cm and various depths from 20% to 80% of proton range. The relative in-water lateral profiles between the in-house calculation and the eclipse TPS agree very well even at the level of 10⁻⁴. The FSFs between the in-house calculation and the eclipse TPS also agree well. The maximum deviation is within 0.5%, and the standard deviation is less than 0.1%. The authors' method significantly reduced the time to find the desirable proton fluences of the clinical energies. The method is extensively validated and can be applied to any proton centers using PBS and the eclipse TPS.

  20. Flexner 2.0-Longitudinal Study of Student Participation in a Campus-Wide General Pathology Course for Graduate Students at The University of Arizona.

    PubMed

    Briehl, Margaret M; Nelson, Mark A; Krupinski, Elizabeth A; Erps, Kristine A; Holcomb, Michael J; Weinstein, John B; Weinstein, Ronald S

    2016-01-01

    Faculty members from the Department of Pathology at The University of Arizona College of Medicine-Tucson have offered a 4-credit course on enhanced general pathology for graduate students since 1996. The course is titled, "Mechanisms of Human Disease." Between 1997 and 2016, 270 graduate students completed Mechanisms of Human Disease. The students came from 21 programs of study. Analysis of Variance, using course grade as the dependent and degree, program, gender, and year (1997-2016) as independent variables, indicated that there was no significant difference in final grade (F = 0.112; P = .8856) as a function of degree (doctorate: mean = 89.60, standard deviation = 5.75; master's: mean = 89.34, standard deviation = 6.00; certificate program: mean = 88.64, standard deviation = 8.25), specific type of degree program (F = 2.066, P = .1316; life sciences: mean = 89.95, standard deviation = 6.40; pharmaceutical sciences: mean = 90.71, standard deviation = 4.57; physical sciences: mean = 87.79, standard deviation = 5.17), or as a function of gender (F = 2.96, P = .0865; males: mean = 88.09, standard deviation = 8.36; females: mean = 89.58, standard deviation = 5.82). Students in the physical and life sciences performed equally well. Mechanisms of Human Disease is a popular course that provides students enrolled in a variety of graduate programs with a medical school-based course on mechanisms of diseases. The addition of 2 new medically oriented Master of Science degree programs has nearly tripled enrollment. This graduate level course also potentially expands the interdisciplinary diversity of participants in our interprofessional education and collaborative practice exercises.

  1. Flexner 2.0—Longitudinal Study of Student Participation in a Campus-Wide General Pathology Course for Graduate Students at The University of Arizona

    PubMed Central

    Briehl, Margaret M.; Nelson, Mark A.; Krupinski, Elizabeth A.; Erps, Kristine A.; Holcomb, Michael J.; Weinstein, John B.

    2016-01-01

    Faculty members from the Department of Pathology at The University of Arizona College of Medicine-Tucson have offered a 4-credit course on enhanced general pathology for graduate students since 1996. The course is titled, “Mechanisms of Human Disease.” Between 1997 and 2016, 270 graduate students completed Mechanisms of Human Disease. The students came from 21 programs of study. Analysis of Variance, using course grade as the dependent and degree, program, gender, and year (1997-2016) as independent variables, indicated that there was no significant difference in final grade (F = 0.112; P = .8856) as a function of degree (doctorate: mean = 89.60, standard deviation = 5.75; master’s: mean = 89.34, standard deviation = 6.00; certificate program: mean = 88.64, standard deviation = 8.25), specific type of degree program (F = 2.066, P = .1316; life sciences: mean = 89.95, standard deviation = 6.40; pharmaceutical sciences: mean = 90.71, standard deviation = 4.57; physical sciences: mean = 87.79, standard deviation = 5.17), or as a function of gender (F = 2.96, P = .0865; males: mean = 88.09, standard deviation = 8.36; females: mean = 89.58, standard deviation = 5.82). Students in the physical and life sciences performed equally well. Mechanisms of Human Disease is a popular course that provides students enrolled in a variety of graduate programs with a medical school-based course on mechanisms of diseases. The addition of 2 new medically oriented Master of Science degree programs has nearly tripled enrollment. This graduate level course also potentially expands the interdisciplinary diversity of participants in our interprofessional education and collaborative practice exercises. PMID:28725783

  2. Renyi entropy measures of heart rate Gaussianity.

    PubMed

    Lake, Douglas E

    2006-01-01

    Sample entropy and approximate entropy are measures that have been successfully utilized to study the deterministic dynamics of heart rate (HR). A complementary stochastic point of view and a heuristic argument using the Central Limit Theorem suggests that the Gaussianity of HR is a complementary measure of the physiological complexity of the underlying signal transduction processes. Renyi entropy (or q-entropy) is a widely used measure of Gaussianity in many applications. Particularly important members of this family are differential (or Shannon) entropy (q = 1) and quadratic entropy (q = 2). We introduce the concepts of differential and conditional Renyi entropy rate and, in conjunction with Burg's theorem, develop a measure of the Gaussianity of a linear random process. Robust algorithms for estimating these quantities are presented along with estimates of their standard errors.

  3. Dynamics of a Landau-Zener non-dissipative system with fluctuating energy levels

    NASA Astrophysics Data System (ADS)

    Fai, L. C.; Diffo, J. T.; Ateuafack, M. E.; Tchoffo, M.; Fouokeng, G. C.

    2014-12-01

    This paper considers a Landau-Zener (two-level) system influenced by a three-dimensional Gaussian and non-Gaussian coloured noise and finds a general form of the time-dependent diabatic quantum bit (qubit) flip transition probabilities in the fast, intermediate and slow noise limits. The qubit flip probability is observed to mimic (for low-frequency noise) that of the standard LZ problem. The qubit flip probability is also observed to be a measure of the quantum coherence of states. The transition probability is observed to be tailored by non-Gaussian low-frequency noise and otherwise by Gaussian low-frequency coloured noise. Intermediate and fast noise limits are observed to alter the memory of the system in time and are found to improve and control quantum information processing.

  4. Steering of Frequency Standards by the Use of Linear Quadratic Gaussian Control Theory

    NASA Technical Reports Server (NTRS)

    Koppang, Paul; Leland, Robert

    1996-01-01

    Linear quadratic Gaussian control is a technique that uses Kalman filtering to estimate a state vector used for input into a control calculation. A control correction is calculated by minimizing a quadratic cost function that is dependent on both the state vector and the control amount. Different penalties, chosen by the designer, are assessed by the controller as the state vector and control amount vary from given optimal values. With this feature controllers can be designed to force the phase and frequency differences between two standards to zero either more or less aggressively depending on the application. Data will be used to show how using different parameters in the cost function analysis affects the steering and the stability of the frequency standards.

  5. Characterizing CDOM Spectral Variability Across Diverse Regions and Spectral Ranges

    NASA Astrophysics Data System (ADS)

    Grunert, Brice K.; Mouw, Colleen B.; Ciochetto, Audrey B.

    2018-01-01

    Satellite remote sensing of colored dissolved organic matter (CDOM) has focused on CDOM absorption (aCDOM) at a reference wavelength, as its magnitude provides insight into the underwater light field and large-scale biogeochemical processes. CDOM spectral slope, SCDOM, has been treated as a constant or semiconstant parameter in satellite retrievals of aCDOM despite significant regional and temporal variabilities. SCDOM and other optical metrics provide insights into CDOM composition, processing, food web dynamics, and carbon cycling. To date, much of this work relies on fluorescence techniques or aCDOM in spectral ranges unavailable to current and planned satellite sensors (e.g., <300 nm). In preparation for anticipated future hyperspectral satellite missions, we take the first step here of exploring global variability in SCDOM and fit deviations in the aCDOM spectra using the recently proposed Gaussian decomposition method. From this, we investigate if global variability in retrieved SCDOM and Gaussian components is significant and regionally distinct. We iteratively decreased the spectral range considered and analyzed the number, location, and magnitude of fitted Gaussian components to understand if a reduced spectral range impacts information obtained within a common spectral window. We compared the fitted slope from the Gaussian decomposition method to absorption-based indices that indicate CDOM composition to determine the ability of satellite-derived slope to inform the analysis and modeling of large-scale biogeochemical processes. Finally, we present implications of the observed variability for remote sensing of CDOM characteristics via SCDOM.

  6. Analysis of multidimensional difference-of-Gaussians filters in terms of directly observable parameters.

    PubMed

    Cope, Davis; Blakeslee, Barbara; McCourt, Mark E

    2013-05-01

    The difference-of-Gaussians (DOG) filter is a widely used model for the receptive field of neurons in the retina and lateral geniculate nucleus (LGN) and is a potential model in general for responses modulated by an excitatory center with an inhibitory surrounding region. A DOG filter is defined by three standard parameters: the center and surround sigmas (which define the variance of the radially symmetric Gaussians) and the balance (which defines the linear combination of the two Gaussians). These parameters are not directly observable and are typically determined by nonlinear parameter estimation methods applied to the frequency response function. DOG filters show both low-pass (optimal response at zero frequency) and bandpass (optimal response at a nonzero frequency) behavior. This paper reformulates the DOG filter in terms of a directly observable parameter, the zero-crossing radius, and two new (but not directly observable) parameters. In the two-dimensional parameter space, the exact region corresponding to bandpass behavior is determined. A detailed description of the frequency response characteristics of the DOG filter is obtained. It is also found that the directly observable optimal frequency and optimal gain (the ratio of the response at optimal frequency to the response at zero frequency) provide an alternate coordinate system for the bandpass region. Altogether, the DOG filter and its three standard implicit parameters can be determined by three directly observable values. The two-dimensional bandpass region is a potential tool for the analysis of populations of DOG filters (for example, populations of neurons in the retina or LGN), because the clustering of points in this parameter space may indicate an underlying organizational principle. This paper concentrates on circular Gaussians, but the results generalize to multidimensional radially symmetric Gaussians and are given as an appendix.

  7. An unbiased risk estimator for image denoising in the presence of mixed poisson-gaussian noise.

    PubMed

    Le Montagner, Yoann; Angelini, Elsa D; Olivo-Marin, Jean-Christophe

    2014-03-01

    The behavior and performance of denoising algorithms are governed by one or several parameters, whose optimal settings depend on the content of the processed image and the characteristics of the noise, and are generally designed to minimize the mean squared error (MSE) between the denoised image returned by the algorithm and a virtual ground truth. In this paper, we introduce a new Poisson-Gaussian unbiased risk estimator (PG-URE) of the MSE applicable to a mixed Poisson-Gaussian noise model that unifies the widely used Gaussian and Poisson noise models in fluorescence bioimaging applications. We propose a stochastic methodology to evaluate this estimator in the case when little is known about the internal machinery of the considered denoising algorithm, and we analyze both theoretically and empirically the characteristics of the PG-URE estimator. Finally, we evaluate the PG-URE-driven parametrization for three standard denoising algorithms, with and without variance stabilizing transforms, and different characteristics of the Poisson-Gaussian noise mixture.

  8. Characterizing optical properties and spatial heterogeneity of human ovarian tissue using spatial frequency domain imaging

    NASA Astrophysics Data System (ADS)

    Nandy, Sreyankar; Mostafa, Atahar; Kumavor, Patrick D.; Sanders, Melinda; Brewer, Molly; Zhu, Quing

    2016-10-01

    A spatial frequency domain imaging (SFDI) system was developed for characterizing ex vivo human ovarian tissue using wide-field absorption and scattering properties and their spatial heterogeneities. Based on the observed differences between absorption and scattering images of different ovarian tissue groups, six parameters were quantitatively extracted. These are the mean absorption and scattering, spatial heterogeneities of both absorption and scattering maps measured by a standard deviation, and a fitting error of a Gaussian model fitted to normalized mean Radon transform of the absorption and scattering maps. A logistic regression model was used for classification of malignant and normal ovarian tissues. A sensitivity of 95%, specificity of 100%, and area under the curve of 0.98 were obtained using six parameters extracted from the SFDI images. The preliminary results demonstrate the diagnostic potential of the SFDI method for quantitative characterization of wide-field optical properties and the spatial distribution heterogeneity of human ovarian tissue. SFDI could be an extremely robust and valuable tool for evaluation of the ovary and detection of neoplastic changes of ovarian cancer.

  9. Measurement of Vibrations in Two Tower-Typed Assistant Personal Robot Implementations with and without a Passive Suspension System

    PubMed Central

    Moreno, Javier; Clotet, Eduard; Tresanchez, Marcel; Martínez, Dani; Casanovas, Jordi; Palacín, Jordi

    2017-01-01

    This paper presents the vibration pattern measurement of two tower-typed holonomic mobile robot prototypes: one based on a rigid mechanical structure, and the other including a passive suspension system. Specific to the tower-typed mobile robots is that the vibrations that originate in the lower part of the structure are transmitted and amplified to the higher areas of the tower, causing an unpleasant visual effect and mechanical stress. This paper assesses the use of a suspension system aimed at minimizing the generation and propagation of vibrations in the upper part of the tower-typed holonomic robots. The two robots analyzed were equipped with onboard accelerometers to register the acceleration over the X, Y, and Z axes in different locations and at different velocities. In all the experiments, the amplitude of the vibrations showed a typical Gaussian pattern which has been modeled with the value of the standard deviation. The results have shown that the measured vibrations in the head of the mobile robots, including a passive suspension system, were reduced by a factor of 16. PMID:28505108

  10. Rank Diversity of Languages: Generic Behavior in Computational Linguistics

    PubMed Central

    Cocho, Germinal; Flores, Jorge; Gershenson, Carlos; Pineda, Carlos; Sánchez, Sergio

    2015-01-01

    Statistical studies of languages have focused on the rank-frequency distribution of words. Instead, we introduce here a measure of how word ranks change in time and call this distribution rank diversity. We calculate this diversity for books published in six European languages since 1800, and find that it follows a universal lognormal distribution. Based on the mean and standard deviation associated with the lognormal distribution, we define three different word regimes of languages: “heads” consist of words which almost do not change their rank in time, “bodies” are words of general use, while “tails” are comprised by context-specific words and vary their rank considerably in time. The heads and bodies reflect the size of language cores identified by linguists for basic communication. We propose a Gaussian random walk model which reproduces the rank variation of words in time and thus the diversity. Rank diversity of words can be understood as the result of random variations in rank, where the size of the variation depends on the rank itself. We find that the core size is similar for all languages studied. PMID:25849150

  11. MreB is important for cell shape but not for chromosome segregation of the filamentous cyanobacterium Anabaena sp. PCC 7120.

    PubMed

    Hu, Bin; Yang, Guohua; Zhao, Weixing; Zhang, Yingjiao; Zhao, Jindong

    2007-03-01

    MreB is a bacterial actin that plays important roles in determination of cell shape and chromosome partitioning in Escherichia coli and Caulobacter crescentus. In this study, the mreB from the filamentous cyanobacterium Anabaena sp. PCC 7120 was inactivated. Although the mreB null mutant showed a drastic change in cell shape, its growth rate, cell division and the filament length were unaltered. Thus, MreB in Anabaena maintains cell shape but is not required for chromosome partitioning. The wild type and the mutant had eight and 10 copies of chromosomes per cell respectively. We demonstrated that DNA content in two daughter cells after cell division in both strains was not always identical. The ratios of DNA content in two daughter cells had a Gaussian distribution with a standard deviation much larger than a value expected if the DNA content in two daughter cells were identical, suggesting that chromosome partitioning is a random process. The multiple copies of chromosomes in cyanobacteria are likely required for chromosome random partitioning in cell division.

  12. Rank diversity of languages: generic behavior in computational linguistics.

    PubMed

    Cocho, Germinal; Flores, Jorge; Gershenson, Carlos; Pineda, Carlos; Sánchez, Sergio

    2015-01-01

    Statistical studies of languages have focused on the rank-frequency distribution of words. Instead, we introduce here a measure of how word ranks change in time and call this distribution rank diversity. We calculate this diversity for books published in six European languages since 1800, and find that it follows a universal lognormal distribution. Based on the mean and standard deviation associated with the lognormal distribution, we define three different word regimes of languages: "heads" consist of words which almost do not change their rank in time, "bodies" are words of general use, while "tails" are comprised by context-specific words and vary their rank considerably in time. The heads and bodies reflect the size of language cores identified by linguists for basic communication. We propose a Gaussian random walk model which reproduces the rank variation of words in time and thus the diversity. Rank diversity of words can be understood as the result of random variations in rank, where the size of the variation depends on the rank itself. We find that the core size is similar for all languages studied.

  13. On Orbital Elements of Extrasolar Planetary Candidates and Spectroscopic Binaries

    NASA Technical Reports Server (NTRS)

    Stepinski, T. F.; Black, D. C.

    2001-01-01

    We estimate probability densities of orbital elements, periods, and eccentricities, for the population of extrasolar planetary candidates (EPC) and, separately, for the population of spectroscopic binaries (SB) with solar-type primaries. We construct empirical cumulative distribution functions (CDFs) in order to infer probability distribution functions (PDFs) for orbital periods and eccentricities. We also derive a joint probability density for period-eccentricity pairs in each population. Comparison of the respective distributions reveals that in all cases the EPC and SB populations are, in the context of orbital elements, indistinguishable from each other to a high degree of statistical significance. Probability densities of orbital periods in both populations have a P⁻¹ functional form, whereas the PDFs of eccentricities can best be characterized as a Gaussian with a mean of about 0.35 and standard deviation of about 0.2, turning into a flat distribution at small values of eccentricity. These remarkable similarities between EPC and SB must be taken into account by theories aimed at explaining the origin of extrasolar planetary candidates, and constitute an important clue as to their ultimate nature.

  14. Near-surface wind speed statistical distribution: comparison between ECMWF System 4 and ERA-Interim

    NASA Astrophysics Data System (ADS)

    Marcos, Raül; Gonzalez-Reviriego, Nube; Torralba, Verónica; Cortesi, Nicola; Young, Doo; Doblas-Reyes, Francisco J.

    2017-04-01

    In the framework of seasonal forecast verification, knowing whether the characteristics of the climatological wind speed distribution, simulated by the forecasting systems, are similar to the observed ones is essential to guide the subsequent process of bias adjustment. To bring some light to this topic, this work assesses the properties of the statistical distributions of 10 m wind speed from both the ERA-Interim reanalysis and seasonal forecasts of ECMWF System 4. The 10 m wind speed distribution has been characterized in terms of the four main moments of the probability distribution (mean, standard deviation, skewness and kurtosis) together with the coefficient of variation and the Shapiro-Wilk goodness-of-fit test, allowing the identification of regions with higher wind variability and non-Gaussian behaviour at monthly time-scales. Also, the comparison of the predicted and observed 10 m wind speed distributions has been carried out considering both inter-annual and intra-seasonal variability. Such a comparison is important in both the climate research and climate services communities because it provides useful climate information for decision-making processes and wind industry applications.
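
    A sketch of the distributional characterization described above, applied to a synthetic monthly wind speed series: the four moments, the coefficient of variation, and the Shapiro-Wilk test. The Weibull-like series is an assumption standing in for reanalysis or forecast output.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(9)

        # Synthetic monthly-mean 10 m wind speed series (m/s); Weibull-like,
        # i.e. deliberately non-Gaussian.
        wind = stats.weibull_min.rvs(c=2.0, scale=6.0, size=360, random_state=rng)

        summary = {
            "mean": wind.mean(),
            "std": wind.std(ddof=1),
            "skewness": stats.skew(wind),
            "kurtosis": stats.kurtosis(wind),        # excess kurtosis
            "coef. of variation": stats.variation(wind),
        }
        w_stat, p_value = stats.shapiro(wind)

        for name, value in summary.items():
            print(f"{name:>20s}: {value:6.3f}")
        print(f"Shapiro-Wilk p-value: {p_value:.4f}  (small p -> non-Gaussian)")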

  15. Shot-Noise Limited Single-Molecule FRET Histograms: Comparison between Theory and Experiments†

    PubMed Central

    Nir, Eyal; Michalet, Xavier; Hamadani, Kambiz M.; Laurence, Ted A.; Neuhauser, Daniel; Kovchegov, Yevgeniy; Weiss, Shimon

    2011-01-01

    We describe a simple approach and present a straightforward numerical algorithm to compute the best fit shot-noise limited proximity ratio histogram (PRH) in single-molecule fluorescence resonant energy transfer diffusion experiments. The key ingredient is the use of the experimental burst size distribution, as obtained after burst search through the photon data streams. We show how the use of an alternated laser excitation scheme and a correspondingly optimized burst search algorithm eliminates several potential artifacts affecting the calculation of the best fit shot-noise limited PRH. This algorithm is tested extensively on simulations and simple experimental systems. We find that dsDNA data exhibit a wider PRH than expected from shot noise only, and we hypothetically account for it by assuming a small Gaussian distribution of distances with an average standard deviation of 1.6 Å. Finally, we briefly mention the results of a future publication and illustrate them with a simple two-state model system (DNA hairpin), for which the kinetic transition rates between the open and closed conformations are extracted. PMID:17078646

  16. Limits of detection and decision. Part 4

    NASA Astrophysics Data System (ADS)

    Voigtman, E.

    2008-02-01

    Probability density functions (PDFs) have been derived for a number of commonly used limit of detection definitions, including several variants of the Relative Standard Deviation of the Background-Background Equivalent Concentration (RSDB-BEC) method, for a simple linear chemical measurement system (CMS) having homoscedastic, Gaussian measurement noise and using ordinary least squares (OLS) processing. All of these detection limit definitions serve as both decision and detection limits, thereby implicitly resulting in 50% rates of Type 2 errors. It has been demonstrated that these are closely related to Currie decision limits, if the coverage factor, k, is properly defined, and that all of the PDFs are scaled reciprocals of noncentral t variates. All of the detection limits have well-defined upper and lower limits, thereby resulting in finite moments and confidence limits, and the problem of estimating the noncentrality parameter has been addressed. As in Parts 1-3, extensive Monte Carlo simulations were performed and all the simulation results were found to be in excellent agreement with the derived theoretical expressions. Specific recommendations for harmonization of detection limit methodology have also been made.

  17. The CERN Large Hadron Collider as a tool to study high-energy density matter.

    PubMed

    Tahir, N A; Kain, V; Schmidt, R; Shutov, A; Lomonosov, I V; Gryaznov, V; Piriz, A R; Temporal, M; Hoffmann, D H H; Fortov, V E

    2005-04-08

    The Large Hadron Collider (LHC) at CERN will generate two extremely powerful 7 TeV proton beams. Each beam will consist of 2808 bunches with an intensity per bunch of 1.15×10¹¹ protons, so that the total number of protons in one beam will be about 3×10¹⁴ and the total energy will be 362 MJ. Each bunch will have a duration of 0.5 ns and two successive bunches will be separated by 25 ns, while the power distribution in the radial direction will be Gaussian with a standard deviation σ = 0.2 mm. The total duration of the beam will be about 89 μs. Using a 2D hydrodynamic code, we have carried out numerical simulations of the thermodynamic and hydrodynamic response of a solid copper target that is irradiated with one of the LHC beams. These calculations show that only the first few hundred proton bunches will deposit a high specific energy of 400 kJ/g that will induce exotic states of high energy density in matter.

  18. A Note on Standard Deviation and Standard Error

    ERIC Educational Resources Information Center

    Hassani, Hossein; Ghodsi, Mansoureh; Howell, Gareth

    2010-01-01

    Many students confuse the standard deviation and standard error of the mean and are unsure which, if either, to use in presenting data. In this article, we endeavour to address these questions and cover some related ambiguities about these quantities.
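
    A short numerical illustration of the distinction: the standard deviation describes the spread of individual observations, while the standard error of the mean (SD divided by the square root of n) describes the uncertainty of the sample mean and shrinks as n grows; the simulated samples are arbitrary.

        import numpy as np

        rng = np.random.default_rng(10)

        for n in (10, 100, 1000):
            sample = rng.normal(loc=50.0, scale=8.0, size=n)
            sd = sample.std(ddof=1)          # spread of individual observations
            sem = sd / np.sqrt(n)            # uncertainty of the sample mean
            print(f"n={n:5d}: SD={sd:5.2f}  SE of mean={sem:5.2f}")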

  19. Fisher information as a generalized measure of coherence in classical and quantum optics.

    PubMed

    Luis, Alfredo

    2012-10-22

    We show that metrological resolution in the detection of small phase shifts provides a suitable generalization of the degrees of coherence and polarization. Resolution is estimated via Fisher information. Besides the standard two-beam Gaussian case, this approach also provides good results for multiple field components and non-Gaussian statistics. This works equally well in quantum and classical optics.

  20. fNL‑gNL mixing in the matter density field at higher orders

    NASA Astrophysics Data System (ADS)

    Gressel, Hedda A.; Bruni, Marco

    2018-06-01

    In this paper we examine how primordial non-Gaussianity contributes to nonlinear perturbative orders in the expansion of the density field at large scales in the matter dominated era. General Relativity is an intrinsically nonlinear theory, establishing a nonlinear relation between the metric and the density field. Representing the metric perturbations with the curvature perturbation ζ, it is known that nonlinearity produces effective non-Gaussian terms in the nonlinear perturbations of the matter density field δ, even if the primordial ζ is Gaussian. Here we generalise these results to the case of a non-Gaussian primordial ζ. Using a standard parametrization of primordial non-Gaussianity in ζ in terms of fNL, gNL, hNL, …, we show how at higher order (from third order and higher) nonlinearity also produces a mixing of these contributions to the density field at large scales, e.g. both fNL and gNL contribute to the third order in δ. This is the main result of this paper. Our analysis is based on the synergy between a gradient expansion (aka long-wavelength approximation) and standard perturbation theory at higher order. In essence, mathematically the equations for the gradient expansion are equivalent to those of first order perturbation theory, thus first-order results convert into gradient expansion results and, vice versa, the gradient expansion can be used to derive results in perturbation theory at higher order and large scales.

  1. Statistical studies of animal response data from USF toxicity screening test method

    NASA Technical Reports Server (NTRS)

    Hilado, C. J.; Machado, A. M.

    1978-01-01

    Statistical examination of animal response data obtained using Procedure B of the USF toxicity screening test method indicates that the data deviate only slightly from a normal or Gaussian distribution. This slight departure from normality is not expected to invalidate conclusions based on theoretical statistics. Comparison of times to staggering, convulsions, collapse, and death as endpoints shows that time to death appears to be the most reliable endpoint because it offers the lowest probability of missed observations and premature judgements.

  2. Analytical quality goals derived from the total deviation from patients' homeostatic set points, with a margin for analytical errors.

    PubMed

    Bolann, B J; Asberg, A

    2004-01-01

    The deviation of test results from patients' homeostatic set points in steady-state conditions may complicate interpretation of the results and the comparison of results with clinical decision limits. In this study, the total deviation from the homeostatic set point is defined as the maximum absolute deviation for 95% of measurements, and we present analytical quality requirements that prevent analytical error from increasing this deviation to more than about 12% above the value caused by biology alone. These quality requirements are: 1) the stable systematic error should be approximately 0; and 2) a systematic error that will be detected by the control program with 90% probability should not be larger than half the value of the combined analytical and intra-individual standard deviation. As a result, when the most common control rules are used, the analytical standard deviation may be up to 0.15 times the intra-individual standard deviation. Analytical improvements beyond these requirements have little impact on the interpretability of measurement results.

  3. How Many Subjects are Needed for a Visual Field Normative Database? A Comparison of Ground Truth and Bootstrapped Statistics.

    PubMed

    Phu, Jack; Bui, Bang V; Kalloniatis, Michael; Khuu, Sieu K

    2018-03-01

    The number of subjects needed to establish the normative limits for visual field (VF) testing is not known. Using bootstrap resampling, we determined whether the ground truth mean, distribution limits, and standard deviation (SD) could be approximated using different set size (x) levels, in order to provide guidance for the number of healthy subjects required to obtain robust VF normative data. We analyzed the 500 Humphrey Field Analyzer (HFA) SITA-Standard results of 116 healthy subjects and 100 HFA full threshold results of 100 psychophysically experienced healthy subjects. These VFs were resampled (bootstrapped) to determine mean sensitivity, distribution limits (5th and 95th percentiles), and SD for different 'x' and numbers of resamples. We also used the VF results of 122 glaucoma patients to determine the performance of ground truth and bootstrapped results in identifying and quantifying VF defects. An x of 150 (for SITA-Standard) and 60 (for full threshold) produced bootstrapped descriptive statistics that were no longer different from the original distribution limits and SD. Removing outliers produced similar results. Differences between original and bootstrapped limits in detecting glaucomatous defects were minimized at x = 250. Ground truth statistics of VF sensitivities could be approximated using set sizes that are significantly smaller than the original cohort. Outlier removal facilitates the use of Gaussian statistics and does not significantly affect the distribution limits. We provide guidance for choosing the cohort size for different levels of error when performing normative comparisons with glaucoma patients.
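    The bootstrap procedure described above can be sketched in a few lines. This is a generic illustration with simulated sensitivities, not the authors' code; `set_size` plays the role of x and the percentiles correspond to the reported distribution limits.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    cohort = rng.normal(loc=30.0, scale=2.0, size=500)  # simulated VF sensitivities (dB)

    def bootstrap_stats(data, set_size, n_resamples=1000, rng=rng):
        """Resample `set_size` values with replacement and summarise each resample."""
        stats = []
        for _ in range(n_resamples):
            sample = rng.choice(data, size=set_size, replace=True)
            stats.append((sample.mean(),
                          np.percentile(sample, 5),
                          np.percentile(sample, 95),
                          sample.std(ddof=1)))
        return np.array(stats).mean(axis=0)  # average mean, 5th, 95th percentile, SD

    for x in (30, 60, 150):
        mean, p5, p95, sd = bootstrap_stats(cohort, x)
        print(f"x={x:4d}: mean={mean:.2f}, 5th={p5:.2f}, 95th={p95:.2f}, SD={sd:.2f}")
    ```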

  4. 14 CFR Appendix C to Part 91 - Operations in the North Atlantic (NAT) Minimum Navigation Performance Specifications (MNPS) Airspace

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... defined in section 1 of this appendix is as follows: (a) The standard deviation of lateral track errors shall be less than 6.3 NM (11.7 Km). Standard deviation is a statistical measure of data about a mean... standard deviation about the mean encompasses approximately 68 percent of the data and plus or minus 2...

  5. Blur-resistant Perimetric Stimuli

    PubMed Central

    Horner, Douglas G.; Dul, Mitchell W.; Swanson, William H.; Liu, Tiffany; Tran, Irene

    2013-01-01

    Purpose To develop perimetric stimuli which are resistant to the effects of peripheral defocus. Methods One eye each was tested on subjects free of eye disease. Experiment 1 assessed spatial frequency, testing 12 subjects at eccentricities from 2° to 7°, using blur levels from 0 D to 3 D for two (Gabor) stimuli (spatial standard deviation (SD) = 0.5°, spatial frequencies of 0.5 and 1.0 cpd). Experiment 2 assessed stimulus size, testing 12 subjects at eccentricities from 4° to 7°, using blur levels 0 D to 6 D, for two Gaussians with SDs of 0.5° and 0.25° and a 0.5 cpd Gabor with SD of 0.5°. Experiment 3 tested 13 subjects at eccentricities from fixation to 27°, using blur levels 0 D to 6 D, for Gabor stimuli at 56 locations; the spatial frequency ranged from 0.14 to 0.50 cpd with location, and SD was scaled accordingly. Results In experiment 1, blur by 3 D caused a small decline in log contrast sensitivity (CS) for the 0.5 cpd stimulus (mean ± SE = −0.09 ± 0.08 log unit) and a larger (t = 7.7, p <0.0001) decline for the 1.0 cpd stimulus (0.37 ± 0.13 log unit). In experiment 2, blur by 6 D caused minimal decline for the larger Gaussian, by −0.17 ± 0.16 log unit, and larger (t >4.5, p < 0.001) declines for the smaller Gaussian (−0.33 ± 0.16 log unit) and the Gabor (−0.36 ± 0.18 log unit). In experiment 3, blur by 6 D caused declines by 0.27 ± 0.05 log unit for eccentricities from 0° to 10°, by 0.20 ± 0.04 log unit for eccentricities from 10° to 20° and 0.13 ± 0.03 log unit for eccentricities from 20°–27°. Conclusions Experiments 1 & 2 allowed us to design stimuli for Experiment 3 that were resistant to effects of peripheral defocus. PMID:23584488

  6. Repeatable source, site, and path effects on the standard deviation for empirical ground-motion prediction models

    USGS Publications Warehouse

    Lin, P.-S.; Chiou, B.; Abrahamson, N.; Walling, M.; Lee, C.-T.; Cheng, C.-T.

    2011-01-01

    In this study, we quantify the reduction in the standard deviation for empirical ground-motion prediction models by removing the ergodic assumption. We partition the modeling error (residual) into five components, three of which represent the repeatable source-location-specific, site-specific, and path-specific deviations from the population mean. A variance estimation procedure for these error components is developed for use with a set of recordings from earthquakes not heavily clustered in space. With most source locations and propagation paths sampled only once, we opt to exploit the spatial correlation of residuals to estimate the variances associated with the path-specific and the source-location-specific deviations. The estimation procedure is applied to ground-motion amplitudes from 64 shallow earthquakes in Taiwan recorded at 285 sites with at least 10 recordings per site. The estimated variance components are used to quantify the reduction in aleatory variability that can be used in hazard analysis for a single site and for a single path. For peak ground acceleration and spectral accelerations at periods of 0.1, 0.3, 0.5, 1.0, and 3.0 s, we find that the single-site standard deviations are 9%-14% smaller than the total standard deviation, whereas the single-path standard deviations are 39%-47% smaller.

  7. On Gaussian feedback capacity

    NASA Technical Reports Server (NTRS)

    Dembo, Amir

    1989-01-01

    Pinsker and Ebert (1970) proved that in channels with additive Gaussian noise, feedback at most doubles the capacity. Cover and Pombra (1989) proved that feedback at most adds half a bit per transmission. Following their approach, the author proves that in the limit as signal power approaches either zero (very low SNR) or infinity (very high SNR), feedback does not increase the finite block-length capacity (which for nonstationary Gaussian channels replaces the standard notion of capacity that may not exist). Tighter upper bounds on the capacity are obtained in the process. Specializing these results to stationary channels, the author recovers some of the bounds recently obtained by Ozarow.
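    The two classical bounds cited here are easy to state numerically for the simple memoryless Gaussian channel: feedback capacity is at most twice the feedforward capacity (Pinsker and Ebert) and at most half a bit larger per transmission (Cover and Pombra). A small sketch, assuming the standard AWGN formula C = ½ log2(1 + SNR) and ignoring the nonstationary finite block-length subtleties treated in the paper:

    ```python
    import numpy as np

    def gaussian_capacity(snr):
        """Feedforward capacity of an AWGN channel, in bits per transmission."""
        return 0.5 * np.log2(1.0 + snr)

    for snr in (0.01, 0.1, 1.0, 10.0, 100.0):
        c = gaussian_capacity(snr)
        upper = min(2.0 * c, c + 0.5)   # tighter of the two classical feedback bounds
        print(f"SNR={snr:7.2f}  C={c:.4f}  feedback capacity <= {upper:.4f}")
    # At low SNR the additive gap of the bound shrinks to zero, and at high SNR the
    # half-bit gap becomes negligible relative to C, consistent with the statement
    # that feedback does not help in those limits.
    ```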

  8. A better norm-referenced grading using the standard deviation criterion.

    PubMed

    Chan, Wing-shing

    2014-01-01

    The commonly used norm-referenced grading assigns grades to rank-ordered students in fixed percentiles. It has the disadvantage of ignoring the actual distance of scores among students. A simple norm-referenced grading via the standard deviation is suggested for routine educational grading. The number of standard deviations of a student's score from the class mean was used as the common yardstick to measure achievement level. The cumulative probability of a normal distribution was referenced to help decide the number of students included within a grade. Results of the foremost 12 students from a medical examination were used to illustrate this grading method. Grading by standard deviation seemed to produce better cutoffs in allocating an appropriate grade to students more according to their differential achievements, and had less chance of creating arbitrary cutoffs between two similarly scored students, than grading by fixed percentile. Grading by standard deviation has more advantages and is more flexible than grading by fixed percentile for norm-referenced grading.
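    The grading rule described here assigns a grade from the number of standard deviations a score lies from the class mean, with normal cumulative probabilities guiding how many students fall in each band. A minimal sketch; the scores and the cutoff values are illustrative assumptions, not the author's:

    ```python
    import numpy as np

    scores = np.array([92, 88, 85, 84, 82, 80, 79, 77, 75, 72, 68, 60], dtype=float)
    z = (scores - scores.mean()) / scores.std(ddof=1)   # SDs from the class mean

    def grade(zscore):
        # Illustrative cutoffs in SD units; a real scheme would choose these from
        # the desired cumulative normal probabilities for each grade.
        if zscore >= 1.0:
            return "A"
        if zscore >= 0.0:
            return "B"
        if zscore >= -1.0:
            return "C"
        return "D"

    for s, zi in zip(scores, z):
        print(f"score={s:5.1f}  z={zi:+.2f}  grade={grade(zi)}")
    ```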

  9. Personal Background Preparation Survey for early identification of nursing students at risk for attrition.

    PubMed

    Johnson, Craig W; Johnson, Ronald; Kim, Mira; McKee, John C

    2009-11-01

    During 2004 and 2005 orientations, all 187 and 188 new matriculates, respectively, in two southwestern U.S. nursing schools completed Personal Background and Preparation Surveys (PBPS) in the first predictive validity study of a diagnostic and prescriptive instrument for averting adverse academic status events (AASE) among nursing or health science professional students. One standard deviation increases in PBPS risks (p < 0.05) multiplied odds of first-year or second-year AASE by approximately 150%, controlling for school affiliation and underrepresented minority student (URMS) status. AASE odds one standard deviation above mean were 216% to 250% those one standard deviation below mean. Odds of first-year or second-year AASE for URMS one standard deviation above the 2004 PBPS mean were 587% those for non-URMS one standard deviation below mean. The PBPS consistently and significantly facilitated early identification of nursing students at risk for AASE, enabling proactive targeting of interventions for risk amelioration and AASE or attrition prevention. Copyright 2009, SLACK Incorporated.

  10. Demonstration of the Gore Module for Passive Ground Water Sampling

    DTIC Science & Technology

    2014-06-01

    … replicate samples had a relative standard deviation (RSD) that was 20% or less. For the remaining analytes (PCE, cDCE, and chloroform), at least 70 …

  11. Impact of baseline systolic blood pressure on visit-to-visit blood pressure variability: the Kailuan study.

    PubMed

    Wang, Anxin; Li, Zhifang; Yang, Yuling; Chen, Guojuan; Wang, Chunxue; Wu, Yuntao; Ruan, Chunyu; Liu, Yan; Wang, Yilong; Wu, Shouling

    2016-01-01

    To investigate the relationship between baseline systolic blood pressure (SBP) and visit-to-visit blood pressure variability in a general population. This is a prospective longitudinal cohort study on cardiovascular risk factors and cardiovascular or cerebrovascular events. Study participants attended a face-to-face interview every 2 years. Blood pressure variability was defined using the standard deviation and coefficient of variation of all SBP values at baseline and follow-up visits. The coefficient of variation is the ratio of the standard deviation to the mean SBP. We used multivariate linear regression models to test the relationships between SBP and standard deviation, and between SBP and coefficient of variation. Approximately 43,360 participants (mean age: 48.2±11.5 years) were selected. In multivariate analysis, after adjustment for potential confounders, baseline SBPs <120 mmHg were inversely related to standard deviation (P<0.001) and coefficient of variation (P<0.001). In contrast, baseline SBPs ≥140 mmHg were significantly positively associated with standard deviation (P<0.001) and coefficient of variation (P<0.001). Baseline SBPs of 120-140 mmHg were associated with the lowest standard deviation and coefficient of variation. The associations between baseline SBP and standard deviation, and between SBP and coefficient of variation during follow-ups showed a U-shaped curve. Both lower and higher baseline SBPs were associated with increased blood pressure variability. To control blood pressure variability, a good target SBP range for a general population might be 120-139 mmHg.
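    Visit-to-visit variability as defined here is simply the standard deviation of a subject's SBP readings and its ratio to that subject's mean. A minimal per-subject sketch with hypothetical readings:

    ```python
    import numpy as np

    # Hypothetical SBP readings (mmHg) for one subject across follow-up visits
    sbp = np.array([128.0, 134.0, 125.0, 140.0, 131.0])

    sd = sbp.std(ddof=1)    # visit-to-visit variability (standard deviation)
    cv = sd / sbp.mean()    # coefficient of variation: SD relative to the mean

    print(f"mean SBP = {sbp.mean():.1f} mmHg, SD = {sd:.1f} mmHg, CV = {cv:.3f}")
    ```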

  12. A non-Gaussian option pricing model based on Kaniadakis exponential deformation

    NASA Astrophysics Data System (ADS)

    Moretto, Enrico; Pasquali, Sara; Trivellato, Barbara

    2017-09-01

    A way to make financial models effective is to let them represent the so-called "fat tails", i.e., extreme changes in stock prices that are regarded as almost impossible under the standard Gaussian distribution. In this article, the Kaniadakis deformation of the usual exponential function is used to define a random noise source in the dynamics of price processes capable of capturing such real market phenomena.
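    For reference, the Kaniadakis (κ-)deformed exponential underlying this class of models is commonly written exp_κ(x) = (√(1 + κ²x²) + κx)^(1/κ), which reduces to the ordinary exponential as κ → 0 and develops power-law (fat) tails for κ ≠ 0. A small sketch of that function only, not of the authors' pricing model:

    ```python
    import numpy as np

    def kaniadakis_exp(x, kappa):
        """Kaniadakis deformed exponential; reduces to exp(x) as kappa -> 0."""
        x = np.asarray(x, dtype=float)
        if abs(kappa) < 1e-12:
            return np.exp(x)
        return (np.sqrt(1.0 + kappa**2 * x**2) + kappa * x) ** (1.0 / kappa)

    x = np.linspace(-5, 5, 5)
    print(kaniadakis_exp(x, 0.0))   # ordinary exponential
    print(kaniadakis_exp(x, 0.3))   # noticeably heavier tails for kappa != 0
    ```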

  13. Flexner 3.0-Democratization of Medical Knowledge for the 21st Century: Teaching Medical Science Using K-12 General Pathology as a Gateway Course.

    PubMed

    Weinstein, Ronald S; Krupinski, Elizabeth A; Weinstein, John B; Graham, Anna R; Barker, Gail P; Erps, Kristine A; Holtrust, Angelette L; Holcomb, Michael J

    2016-01-01

    A medical school general pathology course has been reformatted into a K-12 general pathology course. This new course has been implemented at a series of 7 to 12 grade levels and the student outcomes compared. Typically, topics covered mirrored those in a medical school general pathology course serving as an introduction to the mechanisms of diseases. Assessment of student performance was based on their score on a multiple-choice final examination modeled after an examination given to medical students. Two Tucson area schools, in a charter school network, participated in the study. Statistical analysis of examination performances showed that there were no significant differences as a function of school ( F = 0.258, P = .6128), with students at school A having an average test score of 87.03 (standard deviation = 8.99) and school B 86.00 (standard deviation = 8.18; F = 0.258, P = .6128). Analysis of variance was also conducted on the test scores as a function of gender and class grade. There were no significant differences as a function of gender ( F = 0.608, P = .4382), with females having an average score of 87.18 (standard deviation = 7.24) and males 85.61 (standard deviation = 9.85). There were also no significant differences as a function of grade level ( F = 0.627, P = .6003), with 7th graders having an average of 85.10 (standard deviation = 8.90), 8th graders 86.00 (standard deviation = 9.95), 9th graders 89.67 (standard deviation = 5.52), and 12th graders 86.90 (standard deviation = 7.52). The results demonstrated that middle and upper school students performed equally well in K-12 general pathology. Student course evaluations showed that the course met the student's expectations. One class voted K-12 general pathology their "elective course-of-the-year."

  14. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range.

    PubMed

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-12-19

    In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spread sheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different situations.
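    To illustrate the kind of conversion discussed here, the sketch below implements widely quoted approximations of this type: the mean from the (min, median, max) triple and the SD from the range or IQR using a sample-size-dependent normal-quantile scaling. These follow the general approach of this literature; the exact estimators recommended by Wan et al. (and the accompanying Excel sheet) should be taken from the paper itself.

    ```python
    from scipy.stats import norm

    def mean_from_min_med_max(a, m, b):
        """Approximate sample mean from the minimum, median and maximum."""
        return (a + 2.0 * m + b) / 4.0

    def sd_from_range(a, b, n):
        """Approximate SD from the range, scaled by a normal-quantile factor."""
        return (b - a) / (2.0 * norm.ppf((n - 0.375) / (n + 0.25)))

    def sd_from_iqr(q1, q3, n):
        """Approximate SD from the interquartile range."""
        return (q3 - q1) / (2.0 * norm.ppf((0.75 * n - 0.125) / (n + 0.25)))

    # Hypothetical trial summary: n=50, min=10, q1=18, median=22, q3=27, max=40
    print(mean_from_min_med_max(10, 22, 40))
    print(sd_from_range(10, 40, 50), sd_from_iqr(18, 27, 50))
    ```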

  15. Flexner 3.0—Democratization of Medical Knowledge for the 21st Century

    PubMed Central

    Krupinski, Elizabeth A.; Weinstein, John B.; Graham, Anna R.; Barker, Gail P.; Erps, Kristine A.; Holtrust, Angelette L.; Holcomb, Michael J.

    2016-01-01

    A medical school general pathology course has been reformatted into a K-12 general pathology course. This new course has been implemented at a series of 7 to 12 grade levels and the student outcomes compared. Typically, topics covered mirrored those in a medical school general pathology course serving as an introduction to the mechanisms of diseases. Assessment of student performance was based on their score on a multiple-choice final examination modeled after an examination given to medical students. Two Tucson area schools, in a charter school network, participated in the study. Statistical analysis of examination performances showed that there were no significant differences as a function of school (F = 0.258, P = .6128), with students at school A having an average test score of 87.03 (standard deviation = 8.99) and school B 86.00 (standard deviation = 8.18; F = 0.258, P = .6128). Analysis of variance was also conducted on the test scores as a function of gender and class grade. There were no significant differences as a function of gender (F = 0.608, P = .4382), with females having an average score of 87.18 (standard deviation = 7.24) and males 85.61 (standard deviation = 9.85). There were also no significant differences as a function of grade level (F = 0.627, P = .6003), with 7th graders having an average of 85.10 (standard deviation = 8.90), 8th graders 86.00 (standard deviation = 9.95), 9th graders 89.67 (standard deviation = 5.52), and 12th graders 86.90 (standard deviation = 7.52). The results demonstrated that middle and upper school students performed equally well in K-12 general pathology. Student course evaluations showed that the course met the student’s expectations. One class voted K-12 general pathology their “elective course-of-the-year.” PMID:28725762

  16. Forecasts of non-Gaussian parameter spaces using Box-Cox transformations

    NASA Astrophysics Data System (ADS)

    Joachimi, B.; Taylor, A. N.

    2011-09-01

    Forecasts of statistical constraints on model parameters using the Fisher matrix abound in many fields of astrophysics. The Fisher matrix formalism involves the assumption of Gaussianity in parameter space and hence fails to predict complex features of posterior probability distributions. Combining the standard Fisher matrix with Box-Cox transformations, we propose a novel method that accurately predicts arbitrary posterior shapes. The Box-Cox transformations are applied to parameter space to render it approximately multivariate Gaussian, performing the Fisher matrix calculation on the transformed parameters. We demonstrate that, after the Box-Cox parameters have been determined from an initial likelihood evaluation, the method correctly predicts changes in the posterior when varying various parameters of the experimental setup and the data analysis, with marginally higher computational cost than a standard Fisher matrix calculation. We apply the Box-Cox-Fisher formalism to forecast cosmological parameter constraints by future weak gravitational lensing surveys. The characteristic non-linear degeneracy between the matter density parameter and the normalization of matter density fluctuations is reproduced for several cases, and the capability of breaking this degeneracy with weak-lensing three-point statistics is investigated. Possible applications of Box-Cox transformations of posterior distributions are discussed, including the prospects for performing statistical data analysis steps in the transformed Gaussianized parameter space.
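    The core operation, Gaussianizing a skewed parameter (or posterior sample) with a Box-Cox transform y = (x^λ − 1)/λ before applying Gaussian, Fisher-matrix style machinery, can be sketched generically as follows; this is an illustration, not the authors' pipeline, and it assumes the quantity is positive (or has been shifted to be).

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    samples = rng.lognormal(mean=0.0, sigma=0.6, size=5000)  # skewed "posterior" samples

    # Fit the Box-Cox parameter lambda that makes the samples most Gaussian
    transformed, lam = stats.boxcox(samples)

    print(f"fitted lambda = {lam:.3f}")
    print(f"skewness before = {stats.skew(samples):.3f}, "
          f"after = {stats.skew(transformed):.3f}")
    # A Gaussian (Fisher-matrix) approximation is then applied in the transformed
    # space and mapped back through the inverse transform x = (lam*y + 1)**(1/lam).
    ```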

  17. Optimum threshold selection method of centroid computation for Gaussian spot

    NASA Astrophysics Data System (ADS)

    Li, Xuxu; Li, Xinyang; Wang, Caixia

    2015-10-01

    Centroid computation of a Gaussian spot is often conducted to get the exact position of a target or to measure wave-front slopes in the fields of target tracking and wave-front sensing. Center of Gravity (CoG) is the most traditional method of centroid computation, known for its low algorithmic complexity. However, both electronic noise from the detector and photonic noise from the environment reduce its accuracy. In order to improve the accuracy, thresholding is unavoidable before centroid computation, and an optimum threshold needs to be selected. In this paper, a model of the Gaussian spot is established to analyze the performance of the optimum threshold under different Signal-to-Noise Ratio (SNR) conditions. Two optimum threshold selection methods are introduced: TmCoG (using m% of the maximum intensity of the spot as the threshold) and TkCoG (using μn + κσn as the threshold, where μn and σn are the mean and standard deviation of the background noise). First, their impact on the detection error under various SNR conditions is simulated to determine how to choose the value of k or m. Then, a comparison between them is made. According to the simulation results, TmCoG is superior to TkCoG in the accuracy of the selected threshold, and its detection error is also lower.
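    The two thresholding rules compared here differ only in how the cutoff is chosen before the same centre-of-gravity computation. A minimal sketch of TmCoG and TkCoG on a synthetic noisy Gaussian spot (illustrative, not the authors' implementation):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    y, x = np.mgrid[0:64, 0:64]
    spot = 100.0 * np.exp(-((x - 30.3)**2 + (y - 25.7)**2) / (2 * 3.0**2))
    img = spot + rng.normal(5.0, 2.0, spot.shape)   # background noise: mu_n=5, sigma_n=2

    def cog(image, threshold):
        """Centre of gravity of the thresholded image (threshold subtracted, clipped)."""
        w = np.clip(image - threshold, 0, None)
        yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
        return (w * xx).sum() / w.sum(), (w * yy).sum() / w.sum()

    t_m = 0.2 * img.max()     # TmCoG: m% of the maximum intensity (here m = 20)
    t_k = 5.0 + 3.0 * 2.0     # TkCoG: mu_n + k*sigma_n (here k = 3)

    print("TmCoG centroid:", cog(img, t_m))
    print("TkCoG centroid:", cog(img, t_k))
    ```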

  18. High Precision Edge Detection Algorithm for Mechanical Parts

    NASA Astrophysics Data System (ADS)

    Duan, Zhenyun; Wang, Ning; Fu, Jingshun; Zhao, Wenhui; Duan, Boqiang; Zhao, Jungui

    2018-04-01

    High precision and high efficiency measurement is becoming an imperative requirement for many mechanical parts. In this study, a subpixel-level edge detection algorithm based on the Gaussian integral model is proposed. For this purpose, the Gaussian integral model of the step edge along the normal section line of the backlight image is constructed, combining the point spread function and the single step model. The gray values of discrete points on the normal section line of the pixel edge are then calculated by surface interpolation, and the coordinate and gray information affected by noise are fitted in accordance with the Gaussian integral model. A precise subpixel edge location is therefore determined by searching for the mean point. Finally, a gear tooth was measured with an M&M3525 gear measurement center to verify the proposed algorithm. The theoretical analysis and experimental results show that the local edge fluctuation is reduced effectively by the proposed method in comparison with existing subpixel edge detection algorithms, and the subpixel edge location accuracy and computation speed are improved. The maximum error of the gear tooth profile total deviation is 1.9 μm compared with the measurement result from the gear measurement center, indicating that the method is reliable enough for high-precision measurement.
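    A step edge blurred by a Gaussian point spread function integrates to an error-function profile, and the subpixel edge position is the mean of that profile. The sketch below fits such a model to a 1D intensity profile; it illustrates the idea of Gaussian-integral edge localisation rather than reproducing the authors' algorithm.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erf

    def edge_model(x, base, amp, mu, sigma):
        """Step edge convolved with a Gaussian PSF (Gaussian integral profile)."""
        return base + 0.5 * amp * (1.0 + erf((x - mu) / (sigma * np.sqrt(2.0))))

    rng = np.random.default_rng(3)
    x = np.arange(0, 30, 1.0)                       # pixel positions along the normal
    true = edge_model(x, 20.0, 200.0, 14.37, 1.8)   # true edge at 14.37 px
    profile = true + rng.normal(0.0, 3.0, x.size)   # add sensor noise

    p0 = [profile.min(), np.ptp(profile), 15.0, 2.0]
    popt, _ = curve_fit(edge_model, x, profile, p0=p0)
    print(f"estimated subpixel edge position: {popt[2]:.3f} px")
    ```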

  19. EMG prediction from Motor Cortical Recordings via a Non-Negative Point Process Filter

    PubMed Central

    Nazarpour, Kianoush; Ethier, Christian; Paninski, Liam; Rebesco, James M.; Miall, R. Chris; Miller, Lee E.

    2012-01-01

    A constrained point process filtering mechanism for prediction of electromyogram (EMG) signals from multi-channel neural spike recordings is proposed here. Filters from the Kalman family are inherently sub-optimal in dealing with non-Gaussian observations, or a state evolution that deviates from the Gaussianity assumption. To address these limitations, we modeled the non-Gaussian neural spike train observations by using a generalized linear model (GLM) that encapsulates covariates of neural activity, including the neurons’ own spiking history, concurrent ensemble activity, and extrinsic covariates (EMG signals). In order to predict the envelopes of EMGs, we reformulated the Kalman filter (KF) in an optimization framework and utilized a non-negativity constraint. This structure characterizes the non-linear correspondence between neural activity and EMG signals reasonably well. The EMGs were recorded from twelve forearm and hand muscles of a behaving monkey during a grip-force task. For the case of limited training data, the constrained point process filter improved the prediction accuracy when compared to a conventional Wiener cascade filter (a linear causal filter followed by a static non-linearity) for different bin sizes and delays between input spikes and EMG output. For longer training data sets, results of the proposed filter and that of the Wiener cascade filter were comparable. PMID:21659018

  20. Barrier height inhomogeneity in electrical transport characteristics of InGaN/GaN heterostructure interfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roul, Basanta; Central Research Laboratory, Bharat Electronics, Bangalore 560013; Mukundan, Shruti

    2015-03-15

    We have grown InGaN/GaN heterostructures using plasma-assisted molecular beam epitaxy and studied the temperature-dependent electrical transport characteristics. The barrier height (φ_b) and the ideality factor (η) estimated using the thermionic emission model were found to be temperature dependent. The conventional Richardson plot of ln(J_s/T²) versus 1/kT showed two temperature regions (region I: 400–500 K and region II: 200–350 K) and provides Richardson constants (A*) which are much lower than the theoretical value for GaN. The observed variation in the barrier height and the presence of two temperature regions were attributed to spatial barrier inhomogeneities at the heterojunction interface and were explained by assuming a double Gaussian distribution of barrier heights with mean barrier height values of 1.61 and 1.21 eV and standard deviation (σ_s²) of 0.044 and 0.022 V, respectively. The modified Richardson plot of ln(J_s/T²) − (q²σ_s²/2k²T²) versus 1/kT for the two temperature regions gave mean barrier height values of 1.61 eV and 1.22 eV with Richardson constant (A*) values of 25.5 A cm⁻² K⁻² and 43.9 A cm⁻² K⁻², respectively, which are very close to the theoretical value. The observed barrier height inhomogeneities were interpreted on the basis of the existence of a double Gaussian distribution of barrier heights at the interface.

  1. Directionality of Individual Cone Photoreceptors in the Parafoveal Region

    PubMed Central

    Morris, Hugh J.; Blanco, Leonardo; Codona, Johanan L.; Li, Simone; Choi, Stacey S.; Doble, Nathan

    2015-01-01

    The pointing direction of cone photoreceptors can be inferred from the Stiles-Crawford Effect of the First Kind (SCE-I) measurement. Healthy retinas have tightly packed cones with an SCE-I function peak either centered in the pupil or with a slight nasal bias. Various retinal pathologies can change the profile of the SCE-I function, implying that the arrangement or the light capturing properties of the cone photoreceptors are affected. Measuring the SCE-I may reveal early signs of photoreceptor change before actual cell apoptosis occurs. In vivo retinal imaging with adaptive optics (AO) was used to measure the pointing direction of individual cones at eight retinal locations in four control human subjects. Retinal images were acquired by translating an aperture in the light delivery arm through 19 different locations across a subject’s entrance pupil. Angular tuning properties of individual cones were calculated by fitting a Gaussian to the reflected intensity profile of each cone projected onto the pupil. Results were compared to those from an accepted psychophysical SCE-I measurement technique. The maximal difference in cone directionality of an ensemble of cones, ρ̄, between the major and minor axes of the Gaussian fit was 0.05 versus 0.29 mm⁻² in one subject. All four subjects were found to have a mean nasal bias of 0.81 mm with a standard deviation of ±0.30 mm in the peak position at all retinal locations, with the mean ρ̄ value decreasing by 23% with increasing retinal eccentricity. Results show that cones in the parafoveal region converge towards the center of the pupillary aperture, confirming the anterior pointing alignment hypothesis. PMID:26494187

  2. A general method for the definition of margin recipes depending on the treatment technique applied in helical tomotherapy prostate plans.

    PubMed

    Sevillano, David; Mínguez, Cristina; Sánchez, Alicia; Sánchez-Reyes, Alberto

    2016-01-01

    To obtain specific margin recipes that take into account the dosimetric characteristics of the treatment plans used in a single institution. We obtained dose-population histograms (DPHs) of 20 helical tomotherapy treatment plans for prostate cancer by simulating the effects of different systematic errors (Σ) and random errors (σ) on these plans. We obtained dosimetric margins and margin reductions due to random errors (random margins) by fitting the theoretical results of coverages for Gaussian distributions with coverages of the planned D99% obtained from the DPHs. The dosimetric margins obtained for helical tomotherapy prostate treatments were 3.3 mm, 3 mm, and 1 mm in the lateral (Lat), anterior-posterior (AP), and superior-inferior (SI) directions. Random margins showed parabolic dependencies, yielding expressions of 0.16σ², 0.13σ², and 0.15σ² for the Lat, AP, and SI directions, respectively. When focusing on values up to σ = 5 mm, random margins could be fitted considering Gaussian penumbras with standard deviations (σp) equal to 4.5 mm Lat, 6 mm AP, and 5.5 mm SI. Despite complex dose distributions in helical tomotherapy treatment plans, we were able to simplify the behaviour of our plans against treatment errors to single values of dosimetric and random margins for each direction. These margins allowed us to develop specific margin recipes for the respective treatment technique. The method is general and could be used for any treatment technique provided that DPHs can be obtained. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  3. Unsupervised Gaussian Mixture-Model With Expectation Maximization for Detecting Glaucomatous Progression in Standard Automated Perimetry Visual Fields.

    PubMed

    Yousefi, Siamak; Balasubramanian, Madhusudhanan; Goldbaum, Michael H; Medeiros, Felipe A; Zangwill, Linda M; Weinreb, Robert N; Liebmann, Jeffrey M; Girkin, Christopher A; Bowd, Christopher

    2016-05-01

    To validate Gaussian mixture-model with expectation maximization (GEM) and variational Bayesian independent component analysis mixture-models (VIM) for detecting glaucomatous progression along visual field (VF) defect patterns (GEM-progression of patterns (POP) and VIM-POP). To compare GEM-POP and VIM-POP with other methods. GEM and VIM models separated cross-sectional abnormal VFs from 859 eyes and normal VFs from 1117 eyes into abnormal and normal clusters. Clusters were decomposed into independent axes. The confidence limit (CL) of stability was established for each axis with a set of 84 stable eyes. Sensitivity for detecting progression was assessed in a sample of 83 eyes with known progressive glaucomatous optic neuropathy (PGON). Eyes were classified as progressed if any defect pattern progressed beyond the CL of stability. Performance of GEM-POP and VIM-POP was compared to point-wise linear regression (PLR), permutation analysis of PLR (PoPLR), and linear regression (LR) of mean deviation (MD), and visual field index (VFI). Sensitivity and specificity for detecting glaucomatous VFs were 89.9% and 93.8%, respectively, for GEM and 93.0% and 97.0%, respectively, for VIM. Receiver operating characteristic (ROC) curve areas for classifying progressed eyes were 0.82 for VIM-POP, 0.86 for GEM-POP, 0.81 for PoPLR, 0.69 for LR of MD, and 0.76 for LR of VFI. GEM-POP was significantly more sensitive to PGON than PoPLR and linear regression of MD and VFI in our sample, while providing localized progression information. Detection of glaucomatous progression can be improved by assessing longitudinal changes in localized patterns of glaucomatous defect identified by unsupervised machine learning.
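    As an illustration of the unsupervised step, the sketch below fits a two-component Gaussian mixture with expectation maximization to feature vectors (random stand-ins for VF measurements); it shows the mechanics of GEM-style clustering, not the validated GEM-POP pipeline.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(4)
    # Stand-in data: 200 "normal" and 200 "abnormal" 5-dimensional VF feature vectors
    normal = rng.normal(0.0, 1.0, size=(200, 5))
    abnormal = rng.normal(-2.0, 1.5, size=(200, 5))
    X = np.vstack([normal, abnormal])

    gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
    labels = gmm.fit_predict(X)          # EM fit, then hard cluster assignment

    print("cluster sizes:", np.bincount(labels))
    print("cluster means (first feature):", gmm.means_[:, 0])
    ```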

  4. FTIR study of silicon carbide amorphization by heavy ion irradiations

    NASA Astrophysics Data System (ADS)

    Costantini, Jean-Marc; Miro, Sandrine; Pluchery, Olivier

    2017-03-01

    We have measured at room temperature (RT) the Fourier-transform infra-red (FTIR) absorption spectra of ion-irradiated thin epitaxial films of cubic silicon carbide (3C-SiC) with 1.1 µm thickness on a 500 µm thick (1 0 0) silicon wafer substrate. Irradiations were carried out at RT with 2.3 MeV 28Si+ ions and 3.0 MeV 84Kr+ ions for various fluences in order to induce amorphization of the SiC film. Ion projected ranges were adjusted to be slightly larger than the film thickness so that the whole SiC layers were homogeneously damaged. FTIR spectra of virgin and irradiated samples were recorded for various incidence angles from normal incidence to Brewster’s angle. We show that the amorphization process in ion-irradiated 3C-SiC films can be monitored non-destructively by FTIR absorption spectroscopy without any major interference of the substrate. The compared evolutions of TO and LO peaks upon ion irradiation yield valuable information on the damage process. Complementary test experiments were also performed on virgin silicon nitride (Si3N4) self-standing films for similar conditions. Asymmetrical shapes were found for TO peaks of SiC, whereas Gaussian profiles are found for LO peaks. Skewed Gaussian profiles, with a standard deviation depending on wave number, were used to fit asymmetrical peaks for both materials. A new methodology for following the amorphization process is proposed on the basis of the evolution of fitted IR absorption peak parameters with ion fluence. Results are discussed with respect to Rutherford backscattering spectrometry channeling and Raman spectroscopy analysis.

  5. Multiscale approach to contour fitting for MR images

    NASA Astrophysics Data System (ADS)

    Rueckert, Daniel; Burger, Peter

    1996-04-01

    We present a new multiscale contour fitting process which combines information about the image and the contour of the object at different levels of scale. The algorithm is based on energy minimizing deformable models but avoids some of the problems associated with these models. The segmentation algorithm starts by constructing a linear scale-space of an image through convolution of the original image with a Gaussian kernel at different levels of scale, where the scale corresponds to the standard deviation of the Gaussian kernel. At high levels of scale large scale features of the objects are preserved while small scale features, like object details as well as noise, are suppressed. In order to maximize the accuracy of the segmentation, the contour of the object of interest is then tracked in scale-space from coarse to fine scales. We propose a hybrid multi-temperature simulated annealing optimization to minimize the energy of the deformable model. At high levels of scale the SA optimization is started at high temperatures, enabling the SA optimization to find a global optimal solution. At lower levels of scale the SA optimization is started at lower temperatures (at the lowest level the temperature is close to 0). This enforces a more deterministic behavior of the SA optimization at lower scales and leads to an increasingly local optimization as high energy barriers cannot be crossed. The performance and robustness of the algorithm have been tested on spin-echo MR images of the cardiovascular system. The task was to segment the ascending and descending aorta in 15 datasets of different individuals in order to measure regional aortic compliance. The results show that the algorithm is able to provide more accurate segmentation results than the classic contour fitting process and is at the same time very robust to noise and initialization.
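    The linear scale-space used by the algorithm is simply the image convolved with Gaussian kernels of increasing standard deviation, with the contour then tracked from coarse to fine levels. A minimal sketch of the scale-space construction only (the coarse-to-fine tracking and the simulated annealing optimization are omitted):

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(5)
    image = rng.random((128, 128))          # stand-in for an MR slice

    scales = [8.0, 4.0, 2.0, 1.0]           # coarse-to-fine standard deviations (pixels)
    scale_space = [gaussian_filter(image, sigma=s) for s in scales]

    for s, smoothed in zip(scales, scale_space):
        print(f"sigma={s:4.1f}  intensity SD={smoothed.std():.4f}")
    # Large sigma suppresses noise and small-scale detail (smaller intensity SD);
    # the contour found at a coarse level initialises the fit at the next finer level.
    ```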

  6. A method for modeling laterally asymmetric proton beamlets resulting from collimation

    PubMed Central

    Gelover, Edgar; Wang, Dongxu; Hill, Patrick M.; Flynn, Ryan T.; Gao, Mingcheng; Laub, Steve; Pankuch, Mark; Hyer, Daniel E.

    2015-01-01

    Purpose: To introduce a method to model the 3D dose distribution of laterally asymmetric proton beamlets resulting from collimation. The model enables rapid beamlet calculation for spot scanning (SS) delivery using a novel penumbra-reducing dynamic collimation system (DCS) with two pairs of trimmers oriented perpendicular to each other. Methods: Trimmed beamlet dose distributions in water were simulated with MCNPX and the collimating effects noted in the simulations were validated by experimental measurement. The simulated beamlets were modeled analytically using integral depth dose curves along with an asymmetric Gaussian function to represent fluence in the beam’s eye view (BEV). The BEV parameters consisted of Gaussian standard deviations (sigmas) along each primary axis (σx1,σx2,σy1,σy2) together with the spatial location of the maximum dose (μx,μy). Percent depth dose variation with trimmer position was accounted for with a depth-dependent correction function. Beamlet growth with depth was accounted for by combining the in-air divergence with Hong’s fit of the Highland approximation along each axis in the BEV. Results: The beamlet model showed excellent agreement with the Monte Carlo simulation data used as a benchmark. The overall passing rate for a 3D gamma test with 3%/3 mm passing criteria was 96.1% between the analytical model and Monte Carlo data in an example treatment plan. Conclusions: The analytical model is capable of accurately representing individual asymmetric beamlets resulting from use of the DCS. This method enables integration of the DCS into a treatment planning system to perform dose computation in patient datasets. The method could be generalized for use with any SS collimation system in which blades, leaves, or trimmers are used to laterally sharpen beamlets. PMID:25735287
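    The beam's-eye-view fluence model described here is an asymmetric Gaussian: a different standard deviation on each side of the peak along each axis. A minimal sketch of such a function, using the parameter names from the abstract (the numerical values are arbitrary, and the depth-dependent corrections are omitted):

    ```python
    import numpy as np

    def asymmetric_gaussian_fluence(x, y, mu_x, mu_y, sx1, sx2, sy1, sy2):
        """2D Gaussian with side-dependent sigmas along x and y (BEV fluence model)."""
        sigma_x = np.where(x < mu_x, sx1, sx2)
        sigma_y = np.where(y < mu_y, sy1, sy2)
        return np.exp(-0.5 * ((x - mu_x) / sigma_x) ** 2
                      - 0.5 * ((y - mu_y) / sigma_y) ** 2)

    x, y = np.meshgrid(np.linspace(-10, 10, 201), np.linspace(-10, 10, 201))
    # A trimmer on the +x side sharpens that penumbra: sx2 < sx1
    f = asymmetric_gaussian_fluence(x, y, mu_x=1.0, mu_y=0.0,
                                    sx1=3.0, sx2=1.5, sy1=3.0, sy2=3.0)
    print(f.max(), f.shape)
    ```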

  7. Avoiding drift related to linear analysis update with Lagrangian coordinate models

    NASA Astrophysics Data System (ADS)

    Wang, Yiguo; Counillon, Francois; Bertino, Laurent

    2015-04-01

    When applying data assimilation to Lagrangian coordinate models, it is profitable to also correct the model grid (position, volume). In an isopycnal ocean coordinate model, such information is provided by the layer thickness, which can be massless but must remain positive (a truncated Gaussian distribution). A linear Gaussian analysis does not ensure positivity for such a variable. Existing methods have been proposed to handle this issue - e.g. post-processing, anamorphosis or resampling - but none ensures conservation of the mean, which is imperative in climate applications. Here, a framework is introduced to test a new method, which proceeds as follows. First, layers for which the analysis yields negative values are iteratively grouped with neighboring layers, resulting in a probability density function with a larger mean and a smaller standard deviation, which prevents the appearance of negative values. Second, analysis increments of the grouped layer are uniformly distributed, which prevents massless layers from becoming filled and vice versa. The new method is proved fully conservative with e.g. OI or 3DVAR, but a small drift remains with ensemble-based methods (e.g. EnKF, DEnKF, …) during the update of the ensemble anomaly. However, the resulting drift with the latter is small (an order of magnitude smaller than with post-processing) and the increase in computational cost is moderate. The new method is demonstrated with a realistic application in the Norwegian Climate Prediction Model (NorCPM), which provides climate predictions by assimilating sea surface temperature with the Ensemble Kalman Filter in a fully coupled Earth System model (NorESM) with an isopycnal ocean model (MICOM). Over the 25-year analysis period, the new method does not impair the predictive skill of the system but corrects the artificial steric drift introduced by data assimilation, and provides estimates in good agreement with IPCC AR5.

  8. Increased intra-individual reaction time variability in cocaine-dependent subjects: role of cocaine-related cues.

    PubMed

    Liu, Shijing; Lane, Scott D; Schmitz, Joy M; Green, Charles E; Cunningham, Kathryn A; Moeller, F Gerard

    2012-02-01

    Neuroimaging data suggest that impaired performance on response inhibition and information processing tests in cocaine-dependent subjects is related to prefrontal and frontal cortical dysfunction and that dysfunction in these brain areas may underlie some aspects of cocaine addiction. In subjects with attention-deficit hyperactivity disorder and other psychiatric disorders, the Intra-Individual Reaction Time Variability (IIRTV) has been associated with frontal cortical dysfunction. In the present study, we evaluated IIRTV parameters in cocaine-dependent subjects vs. controls using a cocaine Stroop task. Fifty control and 123 cocaine-dependent subjects compiled from three studies completed a cocaine Stroop task. Standard deviation (SD) and coefficient of variation (CV) for reaction times (RT) were calculated for both trials with neutral and trials with cocaine-related words. The parameters mu, sigma, and tau were calculated using an ex-Gaussian analysis employed to characterize variability in RTs. The ex-Gaussian analysis divides the RTs into normal (mu, sigma) and exponential (tau) components. Using robust regression analysis, cocaine-dependent subjects showed greater SD, CV and Tau on trials with cocaine-related words compared to controls (p<0.05). However, in trials with neutral words, there was no evidence of group differences in any IIRTV parameters (p>0.05). The Wilcoxon matched-pairs signed-rank test showed that for cocaine-dependent subjects, both SD and tau were larger in trials with cocaine-related words than in trials with neutral words (p<0.05). The observation that only cocaine-related words increased IIRTV in cocaine-dependent subjects suggests that cocaine-related stimuli might disrupt information processing subserved by prefrontal and frontal cortical circuits. Copyright © 2011 Elsevier Ltd. All rights reserved.
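    An ex-Gaussian RT distribution is the sum of a normal component (mu, sigma) and an exponential component (tau). One convenient way to work with it is scipy's exponentially modified normal, whose shape parameter is K = tau/sigma; the sketch below generates ex-Gaussian RTs with hypothetical parameters and checks the resulting variability measures (illustrative only, not the study's fitting code).

    ```python
    import numpy as np
    from scipy.stats import exponnorm

    mu, sigma, tau = 450.0, 40.0, 120.0       # ms; hypothetical ex-Gaussian parameters

    rt = exponnorm.rvs(K=tau / sigma, loc=mu, scale=sigma, size=5000, random_state=42)

    print(f"mean RT  ~ {rt.mean():.1f} ms (theory: mu + tau = {mu + tau:.1f})")
    print(f"SD of RT ~ {rt.std(ddof=1):.1f} ms (theory: sqrt(sigma^2 + tau^2) = "
          f"{np.hypot(sigma, tau):.1f})")
    print(f"CV       ~ {rt.std(ddof=1) / rt.mean():.3f}")
    # A larger tau inflates both the SD and the exponential tail, the pattern the
    # study reports for trials with cocaine-related words.
    ```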

  9. The effects of velocities and lensing on moments of the Hubble diagram

    NASA Astrophysics Data System (ADS)

    Macaulay, E.; Davis, T. M.; Scovacricchi, D.; Bacon, D.; Collett, T.; Nichol, R. C.

    2017-05-01

    We consider the dispersion on the supernova distance-redshift relation due to peculiar velocities and gravitational lensing, and the sensitivity of these effects to the amplitude of the matter power spectrum. We use the Method-of-the-Moments (MeMo) lensing likelihood developed by Quartin et al., which accounts for the characteristic non-Gaussian distribution caused by lensing magnification with measurements of the first four central moments of the distribution of magnitudes. We build on the MeMo likelihood by including the effects of peculiar velocities directly into the model for the moments. In order to measure the moments from sparse numbers of supernovae, we take a new approach using kernel density estimation to estimate the underlying probability density function of the magnitude residuals. We also describe a bootstrap re-sampling approach to estimate the data covariance matrix. We then apply the method to the joint light-curve analysis (JLA) supernova catalogue. When we impose only that the intrinsic dispersion in magnitudes is independent of redshift, we find σ8 = 0.44 (+0.63, −0.44) at the one standard deviation level, although we note that in tests on simulations, this model tends to overestimate the magnitude of the intrinsic dispersion, and underestimate σ8. We note that the degeneracy between intrinsic dispersion and the effects of σ8 is more pronounced when lensing and velocity effects are considered simultaneously, due to a cancellation of redshift dependence when both effects are included. Keeping the model of the intrinsic dispersion fixed as a Gaussian distribution of width 0.14 mag, we find σ8 = 1.07 (+0.50, −0.76).
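    The moments-plus-KDE step can be sketched generically: estimate the density of the Hubble-diagram magnitude residuals with a Gaussian kernel and compute the low-order central moments that enter the MeMo likelihood. This is an illustration with synthetic residuals, not the JLA analysis.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde, moment, skew, kurtosis

    rng = np.random.default_rng(7)
    # Synthetic magnitude residuals: intrinsic scatter plus a lensing-like skewed tail
    residuals = rng.normal(0.0, 0.14, 300) + 0.03 * rng.standard_exponential(300)

    kde = gaussian_kde(residuals)            # smooth estimate of the residual PDF
    print(f"KDE density at 0 mag: {kde(0.0)[0]:.2f}")

    central = [residuals.mean()] + [moment(residuals, k) for k in (2, 3, 4)]
    print("mean and 2nd-4th central moments:", [f"{m:.2e}" for m in central])
    print(f"skewness = {skew(residuals):.3f}, excess kurtosis = {kurtosis(residuals):.3f}")
    ```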

  10. Material removal characteristics of orthogonal velocity polishing tool for efficient fabrication of CVD SiC mirror surfaces

    NASA Astrophysics Data System (ADS)

    Seo, Hyunju; Han, Jeong-Yeol; Kim, Sug-Whan; Seong, Sehyun; Yoon, Siyoung; Lee, Kyungmook; Lee, Haengbok

    2015-09-01

    Today, CVD SiC mirrors are readily available in the market. However, it is well known to the community that the key surface fabrication processes and, in particular, the material removal characteristics of the CVD SiC mirror surface vary sensitively depending on the shop floor polishing and figuring variables. We investigated the material removal characteristics of CVD SiC mirror surfaces using a new and patented polishing tool called the orthogonal velocity tool (OVT) that employs two orthogonal velocity fields generated simultaneously during polishing and figuring machine runs. We built an in-house OVT machine whose operating principle allows for the generation of pseudo-Gaussian shapes of material removal from the target surface. The shapes are very similar to the tool influence functions (TIFs) of other polishing machines such as the IRP series polishing machines from Zeeko. Using two flat CVD SiC mirrors of 150 mm in diameter, we ran trial material removal experiments over machine run parameter ranges from 12.901 to 25.867 psi in pressure, 0.086 m/sec to 0.147 m/sec in tool linear velocity, and 5 to 15 sec in dwell time. An in-house developed data analysis program was used to obtain a number of Gaussian-shaped TIFs, and the resulting material removal coefficient varies from 3.35 to 9.46 µm/(psi·hour·m/sec) with a mean value of 5.90 ± 1.26 (standard deviation). We report the technical details of the new OVT machine, of the data analysis program, and of the experiments and the results, together with the implications for the future development of the OVT machine and process for large CVD SiC mirror surfaces.

  11. Clarifying the Hubble constant tension with a Bayesian hierarchical model of the local distance ladder

    NASA Astrophysics Data System (ADS)

    Feeney, Stephen M.; Mortlock, Daniel J.; Dalmasso, Niccolò

    2018-05-01

    Estimates of the Hubble constant, H0, from the local distance ladder and from the cosmic microwave background (CMB) are discrepant at the ˜3σ level, indicating a potential issue with the standard Λ cold dark matter (ΛCDM) cosmology. A probabilistic (i.e. Bayesian) interpretation of this tension requires a model comparison calculation, which in turn depends strongly on the tails of the H0 likelihoods. Evaluating the tails of the local H0 likelihood requires the use of non-Gaussian distributions to faithfully represent anchor likelihoods and outliers, and simultaneous fitting of the complete distance-ladder data set to ensure correct uncertainty propagation. We have hence developed a Bayesian hierarchical model of the full distance ladder that does not rely on Gaussian distributions and allows outliers to be modelled without arbitrary data cuts. Marginalizing over the full ˜3000-parameter joint posterior distribution, we find H0 = (72.72 ± 1.67) km s-1 Mpc-1 when applied to the outlier-cleaned Riess et al. data, and (73.15 ± 1.78) km s-1 Mpc-1 with supernova outliers reintroduced (the pre-cut Cepheid data set is not available). Using our precise evaluation of the tails of the H0 likelihood, we apply Bayesian model comparison to assess the evidence for deviation from ΛCDM given the distance-ladder and CMB data. The odds against ΛCDM are at worst ˜10:1 when considering the Planck 2015 XIII data, regardless of outlier treatment, considerably less dramatic than naïvely implied by the 2.8σ discrepancy. These odds become ˜60:1 when an approximation to the more-discrepant Planck Intermediate XLVI likelihood is included.

  12. Estimation of the neural drive to the muscle from surface electromyograms

    NASA Astrophysics Data System (ADS)

    Hofmann, David

    Muscle force is highly correlated with the standard deviation of the surface electromyogram (sEMG) produced by the active muscle. Correctly estimating this quantity for non-stationary sEMG and understanding its relation to neural drive and muscle force is of paramount importance. The individual constituents of the sEMG are called motor unit action potentials, whose biphasic waveforms can interfere (termed amplitude cancellation), potentially affecting the standard deviation (Keenan et al. 2005). However, when certain conditions are met, the Campbell-Hardy theorem suggests that amplitude cancellation does not affect the standard deviation. By simulating the sEMG, we verify the applicability of this theorem to myoelectric signals and investigate deviations from its conditions to obtain a more realistic setting. We find no difference in estimated standard deviation with and without interference, standing in stark contrast to previous results (Keenan et al. 2008, Farina et al. 2010). Furthermore, since the theorem provides us with the functional relationship between standard deviation and neural drive, we conclude that complex methods based on high-density electrode arrays and blind source separation might not bear substantial advantages for neural drive estimation (Farina and Holobar 2016). Funded by NIH Grant Number 1 R01 EB022872 and NSF Grant Number 1208126.
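    The Campbell-Hardy result referred to here says that for a shot-noise process (a Poisson train of motor-unit action potentials filtered by a waveform h), the variance is rate × ∫h(t)² dt regardless of how the biphasic waveforms overlap and cancel. A small simulation sketch of that relationship (a generic illustration with made-up waveform and rate, not the authors' model):

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    fs, duration, rate = 2000.0, 20.0, 300.0         # Hz, s, total MUAP rate (1/s)
    n = int(fs * duration)

    t = np.arange(0, 0.02, 1 / fs)                   # 20 ms biphasic MUAP waveform
    h = np.sin(2 * np.pi * 100 * t) * np.exp(-t / 0.005)

    spikes = rng.poisson(rate / fs, size=n).astype(float)   # Poisson impulse train
    semg = np.convolve(spikes, h)[:n]                       # interference sEMG signal

    sd_measured = semg.std()
    sd_campbell = np.sqrt(rate * np.sum(h**2) / fs)         # rate * integral of h^2
    print(f"measured SD = {sd_measured:.4f},  Campbell prediction = {sd_campbell:.4f}")
    ```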

  13. Fractal Analysis of Brain Blood Oxygenation Level Dependent (BOLD) Signals from Children with Mild Traumatic Brain Injury (mTBI).

    PubMed

    Dona, Olga; Noseworthy, Michael D; DeMatteo, Carol; Connolly, John F

    2017-01-01

    Conventional imaging techniques are unable to detect abnormalities in the brain following mild traumatic brain injury (mTBI). Yet patients with mTBI typically show delayed response on neuropsychological evaluation. Because fractal geometry represents complexity, we explored its utility in measuring temporal fluctuations of the brain resting state blood oxygen level dependent (rs-BOLD) signal. We hypothesized that there could be a detectable difference in rs-BOLD signal complexity between healthy subjects and mTBI patients, based on previous studies that associated reduction in signal complexity with disease. Fifteen subjects (13.4 ± 2.3 y/o) and 56 age-matched (13.5 ± 2.34 y/o) healthy controls were scanned using a GE Discovery MR750 3T MRI and 32-channel RF-coil. Axial FSPGR-3D images were used to prescribe rs-BOLD (TE/TR = 35/2000 ms), acquired over 6 minutes. Motion correction was performed and anatomical and functional images were aligned and spatially warped to the N27 standard atlas. Fractal analysis, performed on grey matter, was done by estimating the Hurst exponent using de-trended fluctuation analysis and signal summation conversion methods. Voxel-wise fractal dimension (FD) was calculated for every subject in the control group to generate mean and standard deviation maps for regional Z-score analysis. Voxel-wise validation of FD normality across controls was confirmed, and non-Gaussian voxels (3.05% over the brain) were eliminated from subsequent analysis. For each mTBI patient, regions where Z-score values were at least 2 standard deviations away from the mean (i.e. where |Z| > 2.0) were identified. In individual patients the frequently affected regions were the amygdala (p = 0.02), vermis (p = 0.03), caudate head (p = 0.04), hippocampus (p = 0.03), and hypothalamus (p = 0.04), all previously reported as dysfunctional after mTBI, but based on group analysis. It is well known that the brain is best modeled as a complex system. Therefore a measure of complexity using rs-BOLD signal FD could provide an additional method to grade and monitor mTBI. Furthermore, this approach can be personalized, thus providing a unique patient-specific assessment.
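    Detrended fluctuation analysis, one of the two estimators mentioned above, computes the RMS fluctuation of the integrated, window-wise detrended signal as a function of window size; the slope in log-log space is the Hurst-like scaling exponent from which a fractal dimension can be derived. A compact generic sketch (not the study's pipeline):

    ```python
    import numpy as np

    def dfa_exponent(signal, window_sizes):
        """Detrended fluctuation analysis scaling exponent of a 1D time series."""
        profile = np.cumsum(signal - np.mean(signal))    # integrated series
        fluctuations = []
        for w in window_sizes:
            n_windows = len(profile) // w
            rms = []
            for i in range(n_windows):
                seg = profile[i * w:(i + 1) * w]
                x = np.arange(w)
                trend = np.polyval(np.polyfit(x, seg, 1), x)   # linear detrend
                rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
            fluctuations.append(np.mean(rms))
        slope, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
        return slope

    rng = np.random.default_rng(9)
    bold = rng.normal(size=180)                  # stand-in rs-BOLD time course
    alpha = dfa_exponent(bold, window_sizes=[4, 8, 16, 32, 64])
    print(f"DFA exponent ~ {alpha:.2f} (white noise gives ~0.5)")
    # For fractional Gaussian noise the exponent corresponds to the Hurst exponent H,
    # and the fractal dimension of such a 1D signal is commonly taken as 2 - H.
    ```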

  14. Comparison of a novel fixation device with standard suturing methods for spinal cord stimulators.

    PubMed

    Bowman, Richard G; Caraway, David; Bentley, Ishmael

    2013-01-01

    Spinal cord stimulation is a well-established treatment for chronic neuropathic pain of the trunk or limbs. Currently, the standard method of fixation is to affix the leads of the neuromodulation device to soft tissue, fascia or ligament, through the use of manually tied general suture. A novel semiautomated device is proposed that may be advantageous to the current standard. Comparison testing in an excised caprine spine and simulated bench top model was performed. Three tests were performed: 1) perpendicular pull from fascia of caprine spine; 2) axial pull from fascia of caprine spine; and 3) axial pull from Mylar film. Six samples of each configuration were tested for each scenario. Standard 2-0 Ethibond was compared with a novel semiautomated device (Anulex fiXate). Upon completion of testing, statistical analysis was performed for each scenario. For perpendicular pull in the caprine spine, the failure load for standard suture was 8.95 lbs with a standard deviation of 1.39 whereas for fiXate the load was 15.93 lbs with a standard deviation of 2.09. For axial pull in the caprine spine, the failure load for standard suture was 6.79 lbs with a standard deviation of 1.55 whereas for fiXate the load was 12.31 lbs with a standard deviation of 4.26. For axial pull in Mylar film, the failure load for standard suture was 10.87 lbs with a standard deviation of 1.56 whereas for fiXate the load was 19.54 lbs with a standard deviation of 2.24. These data suggest a novel semiautomated device offers a method of fixation that may be utilized in lieu of standard suturing methods as a means of securing neuromodulation devices. Data suggest the novel semiautomated device in fact may provide a more secure fixation than standard suturing methods. © 2012 International Neuromodulation Society.

  15. φq-field theory for portfolio optimization: “fat tails” and nonlinear correlations

    NASA Astrophysics Data System (ADS)

    Sornette, D.; Simonetti, P.; Andersen, J. V.

    2000-08-01

    Physics and finance are both fundamentally based on the theory of random walks (and their generalizations to higher dimensions) and on the collective behavior of large numbers of correlated variables. The archetype exemplifying this situation in finance is the portfolio optimization problem, in which one desires to diversify on a set of possibly dependent assets to optimize the return and minimize the risks. The standard mean-variance solution introduced by Markowitz and its subsequent developments is basically a mean-field Gaussian solution. It has severe limitations for practical applications due to the strongly non-Gaussian structure of distributions and the nonlinear dependence between assets. Here, we present in detail a general analytical characterization of the distribution of returns for a portfolio constituted of assets whose returns are described by an arbitrary joint multivariate distribution. To this end, we introduce a non-linear transformation that maps the returns onto Gaussian variables whose covariance matrix provides a new measure of dependence between the non-normal returns, generalizing the covariance matrix into a nonlinear covariance matrix. This nonlinear covariance matrix is chiseled to the specific fat tail structure of the underlying marginal distributions, thus ensuring stability and good conditioning. The portfolio distribution is then obtained as the solution of a mapping to a so-called φq field theory in particle physics, of which we offer an extensive treatment using Feynman diagrammatic techniques and large deviation theory, and which we illustrate in detail for multivariate Weibull distributions. The interaction (non-mean field) structure in this field theory is a direct consequence of the non-Gaussian nature of the distribution of asset price returns. We find that minimizing the portfolio variance (i.e. the relatively “small” risks) may often increase the large risks, as measured by higher normalized cumulants. Extensive empirical tests are presented on the foreign exchange market that validate the theory satisfactorily. For “fat tail” distributions, we show that an adequate prediction of the risks of a portfolio relies much more on the correct description of the tail structure than on their correlations. For the case of asymmetric return distributions, our theory allows us to generalize the return-risk efficient frontier concept to incorporate the dimensions of large risks embedded in the tail of the asset distributions. We demonstrate that it is often possible to increase the portfolio return while decreasing the large risks as quantified by the fourth and higher-order cumulants. Exact theoretical formulas are validated by empirical tests.
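    One standard way to realise a "nonlinear transformation mapping returns onto Gaussian variables" is a rank-based (copula-style) Gaussianization, after which an ordinary covariance matrix of the transformed variables plays the role of a nonlinear covariance. The sketch below is a schematic illustration under that interpretation, with synthetic fat-tailed returns; it is not the paper's φq-field-theory machinery.

    ```python
    import numpy as np
    from scipy.stats import norm, rankdata

    rng = np.random.default_rng(10)
    # Synthetic fat-tailed returns for 3 correlated assets (Student-t style marginals)
    z = rng.multivariate_normal(np.zeros(3),
                                [[1, .5, .2], [.5, 1, .3], [.2, .3, 1]], 2000)
    returns = z * np.sqrt(3.0 / rng.chisquare(3.0, size=(2000, 1)))   # heavy tails

    def gaussianize(column):
        """Map a marginal to standard normal via its empirical CDF (rank transform)."""
        u = rankdata(column) / (len(column) + 1.0)
        return norm.ppf(u)

    g = np.column_stack([gaussianize(returns[:, i]) for i in range(returns.shape[1])])
    nonlinear_cov = np.cov(g, rowvar=False)   # covariance of the Gaussianized variables
    print(np.round(nonlinear_cov, 2))
    ```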

  16. Mechanics-based statistics of failure risk of quasibrittle structures and size effect on safety factors.

    PubMed

    Bazant, Zdenĕk P; Pang, Sze-Dai

    2006-06-20

    In mechanical design as well as protection from various natural hazards, one must ensure an extremely low failure probability such as 10^-6. How to achieve that goal is adequately understood only for the limiting cases of brittle or ductile structures. Here we present a theory to do that for the transitional class of quasibrittle structures, having brittle constituents and characterized by nonnegligible size of material inhomogeneities. We show that the probability distribution of strength of the representative volume element of material is governed by the Maxwell-Boltzmann distribution of atomic energies and the stress dependence of activation energy barriers; that it is statistically modeled by a hierarchy of series and parallel couplings; and that it consists of a broad Gaussian core having a grafted far-left power-law tail with zero threshold and amplitude depending on temperature and load duration. With increasing structure size, the Gaussian core shrinks and the Weibull tail expands according to the weakest-link model for a finite chain of representative volume elements. The model captures experimentally observed deviations of the strength distribution from the Weibull distribution and of the mean strength scaling law from a power law. These deviations can be exploited for verification and calibration. The proposed theory will increase the safety of concrete structures, composite parts of aircraft or ships, microelectronic components, microelectromechanical systems, prosthetic devices, etc. It also will improve protection against hazards such as landslides, avalanches, ice breaks, and rock or soil failures.
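
    The size effect described here follows from the weakest-link composition of the one-RVE strength distribution. The sketch below, with a made-up grafting point, Weibull exponent, and Gaussian core parameters (none of them the paper's calibrated values), shows how the median strength of a chain of RVEs drops as the number of links grows.

    ```python
    import numpy as np
    from scipy.stats import norm

    def rve_cdf(sigma, mu=1.0, sd=0.15, sigma_gr=0.7, m=24):
        """Illustrative one-RVE strength cdf: Weibull-type power-law tail grafted
        onto a Gaussian core at sigma_gr (all parameters are assumptions)."""
        sigma = np.asarray(sigma, dtype=float)
        core = norm.cdf(sigma, mu, sd)
        # scale the power-law tail so the cdf is continuous at the grafting point
        tail = norm.cdf(sigma_gr, mu, sd) * (sigma / sigma_gr) ** m
        return np.where(sigma < sigma_gr, tail, core)

    def structure_cdf(sigma, n_rve):
        """Weakest-link chain of n_rve representative volume elements."""
        return 1.0 - (1.0 - rve_cdf(sigma)) ** n_rve

    s = np.linspace(0.3, 1.5, 200)
    for n in (1, 10, 1000):
        # median strength drops and the Weibull tail dominates as size grows
        median = s[np.argmin(np.abs(structure_cdf(s, n) - 0.5))]
        print(n, round(float(median), 3))
    ```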

  17. Fission yield calculation using toy model based on Monte Carlo simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jubaidah, E-mail: jubaidah@student.itb.ac.id; Physics Department, Faculty of Mathematics and Natural Science – State University of Medan. Jl. Willem Iskandar Pasar V Medan Estate – North Sumatera, Indonesia 20221; Kurniadi, Rizal, E-mail: rijalk@fi.itb.ac.id

    2015-09-30

    Toy model is a new approximation for predicting fission yield distributions. The toy model treats the nucleus as an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate the properties of a real nucleus. In this research, the toy nucleons are only influenced by a central force. A heavy toy nucleus induced by a toy nucleon will split into two fragments; these two fission fragments are called the fission yield. In this research, energy entanglement is neglected. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other. There are five Gaussian parameters used in this research: the scission point of the two curves (Rc), the mean of the left curve (μL) and the mean of the right curve (μR), and the deviation of the left curve (σL) and the deviation of the right curve (σR). The fission yield distribution is analyzed with a Monte Carlo simulation. The result shows that variation in σ or μ can significantly shift the average frequency of asymmetric fission yields and also varies the range of the fission yield probability distribution. In addition, variation in the iteration coefficient only changes the frequency of fission yields. The Monte Carlo simulation for fission yield calculation using the toy model successfully indicates the same tendency as experimental results, where the average light fission yield is in the range of 90

  18. Fission yield calculation using toy model based on Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Jubaidah, Kurniadi, Rizal

    2015-09-01

    Toy model is a new approximation for predicting fission yield distributions. The toy model treats the nucleus as an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate the properties of a real nucleus. In this research, the toy nucleons are only influenced by a central force. A heavy toy nucleus induced by a toy nucleon will split into two fragments; these two fission fragments are called the fission yield. In this research, energy entanglement is neglected. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other. There are five Gaussian parameters used in this research: the scission point of the two curves (Rc), the mean of the left curve (μL) and the mean of the right curve (μR), and the deviation of the left curve (σL) and the deviation of the right curve (σR). The fission yield distribution is analyzed with a Monte Carlo simulation. The result shows that variation in σ or μ can significantly shift the average frequency of asymmetric fission yields and also varies the range of the fission yield probability distribution. In addition, variation in the iteration coefficient only changes the frequency of fission yields. The Monte Carlo simulation for fission yield calculation using the toy model successfully indicates the same tendency as experimental results, where the average light fission yield is in the range of 90
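
    A minimal Monte Carlo sketch of the two-intersecting-Gaussians picture used in the toy model described above: sample fragment masses from a "left" and a "right" Gaussian and histogram them. The means, deviations, and event count below are illustrative placeholders, not the paper's parameters, and the scission-point parameter Rc is not modeled.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # illustrative parameters only (not the paper's values)
    mu_L, sigma_L = 95.0, 6.0     # mean / deviation of the light-fragment Gaussian
    mu_R, sigma_R = 140.0, 6.0    # mean / deviation of the heavy-fragment Gaussian
    n_events = 100_000

    # each fission event contributes one light and one heavy fragment
    light = rng.normal(mu_L, sigma_L, n_events)
    heavy = rng.normal(mu_R, sigma_R, n_events)
    masses = np.concatenate([light, heavy])

    counts, edges = np.histogram(masses, bins=np.arange(60, 181))
    peak_light = edges[np.argmax(counts[:60])]   # peak below A ~ 120
    print("light-fragment yield peaks near A =", peak_light)
    ```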

  19. Computer Programs for the Semantic Differential: Further Modifications.

    ERIC Educational Resources Information Center

    Lawson, Edwin D.; And Others

    The original nine programs for semantic differential analysis have been condensed into three programs which have been further refined and augmented. They yield: (1) means, standard deviations, and standard errors for each subscale on each concept; (2) Evaluation, Potency, and Activity (EPA) means, standard deviations, and standard errors; (3)…

  20. Analytical probabilistic modeling of RBE-weighted dose for ion therapy.

    PubMed

    Wieser, H P; Hennig, P; Wahl, N; Bangert, M

    2017-11-10

    Particle therapy is especially prone to uncertainties. This issue is usually addressed with uncertainty quantification and minimization techniques based on scenario sampling. For proton therapy, however, it was recently shown that it is also possible to use closed-form computations based on analytical probabilistic modeling (APM) for this purpose. APM yields unique features compared to sampling-based approaches, motivating further research in this context. This paper demonstrates the application of APM for intensity-modulated carbon ion therapy to quantify the influence of setup and range uncertainties on the RBE-weighted dose. In particular, we derive analytical forms for the nonlinear computations of the expectation value and variance of the RBE-weighted dose by propagating linearly correlated Gaussian input uncertainties through a pencil beam dose calculation algorithm. Both exact and approximation formulas are presented for the expectation value and variance of the RBE-weighted dose and are subsequently studied in-depth for a one-dimensional carbon ion spread-out Bragg peak. With V and B being the number of voxels and pencil beams, respectively, the proposed approximations induce only a marginal loss of accuracy while lowering the computational complexity from order O(V × B^2) to O(V × B) for the expectation value and from O(V × B^4) to O(V × B^2) for the variance of the RBE-weighted dose. Moreover, we evaluated the approximated calculation of the expectation value and standard deviation of the RBE-weighted dose in combination with a probabilistic effect-based optimization on three patient cases considering carbon ions as radiation modality against sampled references. The resulting global γ-pass rates (2 mm,2%) are > 99.15% for the expectation value and > 94.95% for the standard deviation of the RBE-weighted dose, respectively. We applied the derived analytical model to carbon ion treatment planning, although the concept is in general applicable to other ion species considering a variable RBE.

  1. Lunar terrain mapping and relative-roughness analysis

    USGS Publications Warehouse

    Rowan, Lawrence C.; McCauley, John F.; Holm, Esther A.

    1971-01-01

    Terrain maps of the equatorial zone (long 70° E.-70° W. and lat 10° N-10° S.) were prepared at scales of 1:2,000,000 and 1:1,000,000 to classify lunar terrain with respect to roughness and to provide a basis for selecting sites for Surveyor and Apollo landings as well as for Ranger and Lunar Orbiter photographs. The techniques that were developed as a result of this effort can be applied to future planetary exploration. By using the best available earth-based observational data and photographs, 1:1,000,000-scale U.S. Geological Survey lunar geologic maps, and U.S. Air Force Aeronautical Chart and Information Center LAC charts, lunar terrain was described by qualitative and quantitative methods and divided into four fundamental classes: maria, terrae, craters, and linear features. Some 35 subdivisions were defined and mapped throughout the equatorial zone, and, in addition, most of the map units were illustrated by photographs. The terrain types were analyzed quantitatively to characterize and order their relative-roughness characteristics. Approximately 150,000 east-west slope measurements made by a photometric technique (photoclinometry) in 51 sample areas indicate that algebraic slope-frequency distributions are Gaussian, and so arithmetic means and standard deviations accurately describe the distribution functions. The algebraic slope-component frequency distributions are particularly useful for rapidly determining relative roughness of terrain. The statistical parameters that best describe relative roughness are the absolute arithmetic mean, the algebraic standard deviation, and the percentage of slope reversal. Statistically derived relative-relief parameters are desirable supplementary measures of relative roughness in the terrae. Extrapolation of relative roughness for the maria was demonstrated using Ranger VII slope-component data and regional maria slope data, as well as the data reported here. It appears that, for some morphologically homogeneous mare areas, relative roughness can be extrapolated to the large scales from measurements at small scales.

  2. PTV margin determination in conformal SRT of intracranial lesions

    PubMed Central

    Parker, Brent C.; Shiu, Almon S.; Maor, Moshe H.; Lang, Frederick F.; Liu, H. Helen; White, R. Allen; Antolak, John A.

    2002-01-01

    The planning target volume (PTV) includes the clinical target volume (CTV) to be irradiated and a margin to account for uncertainties in the treatment process. Uncertainties in miniature multileaf collimator (mMLC) leaf positioning, CT scanner spatial localization, CT‐MRI image fusion spatial localization, and Gill‐Thomas‐Cosman (GTC) relocatable head frame repositioning were quantified for the purpose of determining a minimum PTV margin that still delivers a satisfactory CTV dose. The measured uncertainties were then incorporated into a simple Monte Carlo calculation for evaluation of various margin and fraction combinations. Satisfactory CTV dosimetric criteria were selected to be a minimum CTV dose of 95% of the PTV dose and at least 95% of the CTV receiving 100% of the PTV dose. The measured uncertainties were assumed to be Gaussian distributions. Systematic errors were added linearly and random errors were added in quadrature assuming no correlation to arrive at the total combined error. The Monte Carlo simulation written for this work examined the distribution of cumulative dose volume histograms for a large patient population using various margin and fraction combinations to determine the smallest margin required to meet the established criteria. The program examined 5 and 30 fraction treatments, since those are the only fractionation schemes currently used at our institution. The fractionation schemes were evaluated using no margin, a margin of just the systematic component of the total uncertainty, and a margin of the systematic component plus one standard deviation of the total uncertainty. It was concluded that (i) a margin of the systematic error plus one standard deviation of the total uncertainty is the smallest PTV margin necessary to achieve the established CTV dose criteria, and (ii) it is necessary to determine the uncertainties introduced by the specific equipment and procedures used at each institution since the uncertainties may vary among locations. PACS number(s): 87.53.Kn, 87.53.Ly PMID:12132939
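
    The error-combination rule quoted in the abstract (systematic components added linearly, random components in quadrature, all assumed Gaussian) can be written out in a few lines. The numbers below are placeholders, and the final line shows one reading of the "systematic plus one standard deviation of the total uncertainty" margin recipe.

    ```python
    import numpy as np

    # Illustrative 1-sigma uncertainties in mm (placeholders, not the paper's measurements)
    systematic = np.array([0.5, 0.7, 0.8])   # e.g. mMLC, CT localization, image fusion
    random_sd  = np.array([0.6, 0.4, 1.0])   # e.g. frame repositioning and other random terms

    total_systematic = systematic.sum()                 # systematic errors add linearly
    total_random_sd = np.sqrt((random_sd ** 2).sum())   # random errors add in quadrature

    # One reading of the margin recipe described in the abstract:
    # PTV margin = systematic component + 1 SD of the random (total) uncertainty
    margin = total_systematic + total_random_sd
    print(f"systematic = {total_systematic:.2f} mm, random SD = {total_random_sd:.2f} mm, "
          f"margin ~ {margin:.2f} mm")
    ```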

  3. Analytical probabilistic modeling of RBE-weighted dose for ion therapy

    NASA Astrophysics Data System (ADS)

    Wieser, H. P.; Hennig, P.; Wahl, N.; Bangert, M.

    2017-12-01

    Particle therapy is especially prone to uncertainties. This issue is usually addressed with uncertainty quantification and minimization techniques based on scenario sampling. For proton therapy, however, it was recently shown that it is also possible to use closed-form computations based on analytical probabilistic modeling (APM) for this purpose. APM yields unique features compared to sampling-based approaches, motivating further research in this context. This paper demonstrates the application of APM for intensity-modulated carbon ion therapy to quantify the influence of setup and range uncertainties on the RBE-weighted dose. In particular, we derive analytical forms for the nonlinear computations of the expectation value and variance of the RBE-weighted dose by propagating linearly correlated Gaussian input uncertainties through a pencil beam dose calculation algorithm. Both exact and approximation formulas are presented for the expectation value and variance of the RBE-weighted dose and are subsequently studied in-depth for a one-dimensional carbon ion spread-out Bragg peak. With V and B being the number of voxels and pencil beams, respectively, the proposed approximations induce only a marginal loss of accuracy while lowering the computational complexity from order O(V × B^2) to O(V × B) for the expectation value and from O(V × B^4) to O(V × B^2) for the variance of the RBE-weighted dose. Moreover, we evaluated the approximated calculation of the expectation value and standard deviation of the RBE-weighted dose in combination with a probabilistic effect-based optimization on three patient cases considering carbon ions as radiation modality against sampled references. The resulting global γ-pass rates (2 mm,2%) are > 99.15% for the expectation value and > 94.95% for the standard deviation of the RBE-weighted dose, respectively. We applied the derived analytical model to carbon ion treatment planning, although the concept is in general applicable to other ion species considering a variable RBE.
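
    The closed-form machinery rests on the fact that a Gaussian input uncertainty propagated through a dose model parameterized by Gaussian components keeps the required integrals analytic. The sketch below is a generic one-dimensional illustration of that idea with made-up weights, means, and widths (it does not reproduce the paper's carbon-ion RBE model), checking the analytic expectation against Monte Carlo sampling.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)

    # Illustrative depth-dose parameterization: weighted sum of Gaussian components
    weights = np.array([0.2, 0.5, 0.3])
    means   = np.array([40.0, 60.0, 75.0])   # mm
    widths  = np.array([8.0, 5.0, 3.0])      # mm

    def dose(z):
        z = np.atleast_1d(z)[:, None]
        return (weights * norm.pdf(z, means, widths)).sum(axis=1)

    # Gaussian uncertainty on the radiological depth
    mu_z, sigma_z = 60.0, 4.0

    # Closed form: E[d(Z)] = sum_k w_k * N(mu_z; mu_k, widths_k^2 + sigma_z^2)
    expected_analytic = (weights * norm.pdf(mu_z, means, np.sqrt(widths**2 + sigma_z**2))).sum()

    # Monte Carlo check
    samples = rng.normal(mu_z, sigma_z, 200_000)
    expected_mc = dose(samples).mean()
    print(expected_analytic, expected_mc)   # the two agree to sampling error
    ```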

  4. Perception of force and stiffness in the presence of low-frequency haptic noise

    PubMed Central

    Gurari, Netta; Okamura, Allison M.; Kuchenbecker, Katherine J.

    2017-01-01

    Objective This work lays the foundation for future research on quantitative modeling of human stiffness perception. Our goal was to develop a method by which a human’s ability to perceive suprathreshold haptic force stimuli and haptic stiffness stimuli can be affected by adding haptic noise. Methods Five human participants performed a same-different task with a one-degree-of-freedom force-feedback device. Participants used the right index finger to actively interact with variations of force (∼5 and ∼8 N) and stiffness (∼290 N/m) stimuli that included one of four scaled amounts of haptically rendered noise (None, Low, Medium, High). The haptic noise was zero-mean Gaussian white noise that was low-pass filtered with a 2 Hz cut-off frequency; the resulting low-frequency signal was added to the force rendered while the participant interacted with the force and stiffness stimuli. Results We found that the precision with which participants could identify the magnitude of both the force and stiffness stimuli was affected by the magnitude of the low-frequency haptically rendered noise added to the haptic stimulus, as well as the magnitude of the haptic stimulus itself. The Weber fraction strongly correlated with the standard deviation of the low-frequency haptic noise with a Pearson product-moment correlation coefficient of ρ > 0.83. The mean standard deviation of the low-frequency haptic noise in the haptic stimuli ranged from 0.184 N to 1.111 N across the four haptically rendered noise levels, and the corresponding mean Weber fractions spanned between 0.042 and 0.101. Conclusions The human ability to perceive both suprathreshold haptic force and stiffness stimuli degrades in the presence of added low-frequency haptic noise. Future work can use the reported methods to investigate how force perception and stiffness perception may relate, with possible applications in haptic watermarking and in the assessment of the functionality of peripheral pathways in individuals with haptic impairments. PMID:28575068

  5. Herschel Survey of Galactic OH+, H2O+, and H3O+: Probing the Molecular Hydrogen Fraction and Cosmic-Ray Ionization Rate

    NASA Astrophysics Data System (ADS)

    Indriolo, Nick; Neufeld, D. A.; Gerin, M.; Schilke, P.; Benz, A. O.; Winkel, B.; Menten, K. M.; Chambers, E. T.; Black, John H.; Bruderer, S.; Falgarone, E.; Godard, B.; Goicoechea, J. R.; Gupta, H.; Lis, D. C.; Ossenkopf, V.; Persson, C. M.; Sonnentrucker, P.; van der Tak, F. F. S.; van Dishoeck, E. F.; Wolfire, Mark G.; Wyrowski, F.

    2015-02-01

    In diffuse interstellar clouds the chemistry that leads to the formation of the oxygen-bearing ions OH+, H2O+, and H3O+ begins with the ionization of atomic hydrogen by cosmic rays, and continues through subsequent hydrogen abstraction reactions involving H2. Given these reaction pathways, the observed abundances of these molecules are useful in constraining both the total cosmic-ray ionization rate of atomic hydrogen (ζH) and the molecular hydrogen fraction (f_H2). We present observations targeting transitions of OH+, H2O+, and H3O+ made with the Herschel Space Observatory along 20 Galactic sight lines toward bright submillimeter continuum sources. Both OH+ and H2O+ are detected in absorption in multiple velocity components along every sight line, but H3O+ is only detected along 7 sight lines. From the molecular abundances we compute f_H2 in multiple distinct components along each line of sight, and find a Gaussian distribution with mean and standard deviation 0.042 ± 0.018. This confirms previous findings that OH+ and H2O+ primarily reside in gas with low H2 fractions. We also infer ζH throughout our sample, and find a lognormal distribution with mean log(ζH) = -15.75 (ζH = 1.78 × 10^-16 s^-1) and standard deviation 0.29 for gas within the Galactic disk, but outside of the Galactic center. This is in good agreement with the mean and distribution of cosmic-ray ionization rates previously inferred from H3+ observations. Ionization rates in the Galactic center tend to be 10-100 times larger than found in the Galactic disk, also in accord with prior studies. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
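
    The distribution summaries quoted above amount to a Gaussian fit (mean and standard deviation) for the molecular hydrogen fractions and a lognormal fit (mean and standard deviation of log10 ζH) for the ionization rates. The sketch below reproduces that bookkeeping on synthetic placeholder samples.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Placeholder samples standing in for per-component measurements
    f_h2 = rng.normal(0.042, 0.018, 100)          # molecular hydrogen fractions
    zeta_h = 10 ** rng.normal(-15.75, 0.29, 100)  # cosmic-ray ionization rates, s^-1

    # Gaussian summary of f_H2: sample mean and standard deviation
    print("f_H2: mean = %.3f, sd = %.3f" % (f_h2.mean(), f_h2.std(ddof=1)))

    # Lognormal summary of zeta_H: fit a Gaussian to log10(zeta_H)
    log_zeta = np.log10(zeta_h)
    print("log10(zeta_H): mean = %.2f, sd = %.2f -> zeta_H ~ %.2e s^-1"
          % (log_zeta.mean(), log_zeta.std(ddof=1), 10 ** log_zeta.mean()))
    ```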

  6. Consequences of environmental and biological variances for range margins: a spatially explicit theoretical model

    NASA Astrophysics Data System (ADS)

    Malanson, G. P.; DeRose, R. J.; Bekker, M. F.

    2016-12-01

    The consequences of increasing climatic variance, while including variability among individuals and populations, are explored for the range margins of species with a spatially explicit simulation. The model has a single environmental gradient and a single species, and is then extended to two species. Species response to the environment is a Gaussian function with a peak of 1.0 at their peak fitness on the gradient. The variance in the environment is taken from the total variance in the tree ring series of 399 individuals of Pinus edulis in FIA plots in the western USA. The variability is increased by a multiplier of the standard deviation for various doubling times. The variance of individuals in the simulation is drawn from these same series. Inheritance of individual variability is based on the geographic locations of the individuals. The variance for P. edulis is recomputed as time-dependent conditional standard deviations using the GARCH procedure. Establishment and mortality are simulated in a Monte Carlo process with individual variance. Variance for P. edulis does not show a consistent pattern of heteroscedasticity. An obvious result is that increasing variance has deleterious effects on species persistence because extreme events that result in extinctions cannot be balanced by positive anomalies, but even less extreme negative events cannot be balanced by positive anomalies because of biological and spatial constraints. In the two-species model the superior competitor is more affected by increasing climatic variance because its response function is steeper at the point of intersection with the other species, and so the uncompensated effects of negative anomalies are greater for it. These theoretical results can guide the anticipated need to mitigate the effects of increasing climatic variability on P. edulis range margins. The trailing edge, here subject to increasing drought stress with increasing temperatures, will be more affected by negative anomalies.

  7. Determining a one-tailed upper limit for future sample relative reproducibility standard deviations.

    PubMed

    McClure, Foster D; Lee, Jung K

    2006-01-01

    A formula was developed to determine a one-tailed 100p% upper limit for future sample percent relative reproducibility standard deviations (RSD_R,% = 100 s_R / ȳ), where s_R is the sample reproducibility standard deviation, defined as the square root of the sum of the sample repeatability variance (s_r^2) and the sample laboratory-to-laboratory variance (s_L^2), i.e., s_R = sqrt(s_r^2 + s_L^2), and ȳ is the sample mean. The future RSD_R,% is expected to arise from a population of potential RSD_R,% values whose true mean is ζ_R,% = 100 σ_R / μ, where σ_R and μ are the population reproducibility standard deviation and mean, respectively.
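
    A sketch of the quantities entering the formula: s_r and s_L estimated from a lab-by-replicate table with a one-way ANOVA decomposition, then combined into s_R and RSD_R,%. The collaborative-study data below are made up, and the upper-limit formula itself (which depends on the paper's distributional results) is not reproduced.

    ```python
    import numpy as np

    # Made-up collaborative-study data: rows = laboratories, columns = replicates
    data = np.array([
        [10.1, 10.4],
        [ 9.7,  9.9],
        [10.8, 10.6],
        [10.2, 10.5],
        [ 9.9, 10.0],
    ])
    p, n = data.shape                      # number of labs, replicates per lab

    grand_mean = data.mean()
    lab_means = data.mean(axis=1)

    ms_within = ((data - lab_means[:, None]) ** 2).sum() / (p * (n - 1))
    ms_between = n * ((lab_means - grand_mean) ** 2).sum() / (p - 1)

    s_r2 = ms_within                                   # repeatability variance
    s_L2 = max((ms_between - ms_within) / n, 0.0)      # lab-to-lab variance
    s_R = np.sqrt(s_r2 + s_L2)                         # reproducibility SD

    rsd_r_percent = 100 * s_R / grand_mean
    print(f"s_r = {np.sqrt(s_r2):.3f}, s_L = {np.sqrt(s_L2):.3f}, "
          f"s_R = {s_R:.3f}, RSD_R% = {rsd_r_percent:.2f}")
    ```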

  8. Disappearance of Anisotropic Intermittency in Large-amplitude MHD Turbulence and Its Comparison with Small-amplitude MHD Turbulence

    NASA Astrophysics Data System (ADS)

    Yang, Liping; Zhang, Lei; He, Jiansen; Tu, Chuanyi; Li, Shengtai; Wang, Xin; Wang, Linghua

    2018-03-01

    Multi-order structure functions in the solar wind are reported to display a monofractal scaling when sampled parallel to the local magnetic field and a multifractal scaling when measured perpendicularly. Whether and to what extent will the scaling anisotropy be weakened by the enhancement of turbulence amplitude relative to the background magnetic strength? In this study, based on two runs of the magnetohydrodynamic (MHD) turbulence simulation with different relative levels of turbulence amplitude, we investigate and compare the scaling of multi-order magnetic structure functions and magnetic probability distribution functions (PDFs) as well as their dependence on the direction of the local field. The numerical results show that for the case of large-amplitude MHD turbulence, the multi-order structure functions display a multifractal scaling at all angles to the local magnetic field, with PDFs deviating significantly from the Gaussian distribution and a flatness larger than 3 at all angles. In contrast, for the case of small-amplitude MHD turbulence, the multi-order structure functions and PDFs have different features in the quasi-parallel and quasi-perpendicular directions: a monofractal scaling and Gaussian-like distribution in the former, and a conversion of a monofractal scaling and Gaussian-like distribution into a multifractal scaling and non-Gaussian tail distribution in the latter. These results hint that when intermittencies are abundant and intense, the multifractal scaling in the structure functions can appear even if it is in the quasi-parallel direction; otherwise, the monofractal scaling in the structure functions remains even if it is in the quasi-perpendicular direction.
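
    A minimal sketch of multi-order structure functions S_q(l) = <|δB(l)|^q> and the flatness S_4/S_2^2 for a synthetic one-dimensional signal. Real solar wind or MHD analyses condition the increments on the angle to the local magnetic field, which is not attempted here; for Gaussian increments the flatness stays near 3.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    # Synthetic 1-D "magnetic field" signal: Brownian-like cumulative sum of noise
    b = np.cumsum(rng.standard_normal(2 ** 14))

    lags = np.array([1, 2, 4, 8, 16, 32, 64, 128])
    orders = np.arange(1, 7)

    def structure_functions(signal, lags, orders):
        """S_q(l) = < |signal(x + l) - signal(x)|^q > for each lag l and order q."""
        out = np.empty((len(lags), len(orders)))
        for i, lag in enumerate(lags):
            inc = np.abs(signal[lag:] - signal[:-lag])
            out[i] = [np.mean(inc ** q) for q in orders]
        return out

    S = structure_functions(b, lags, orders)
    flatness = S[:, 3] / S[:, 1] ** 2   # S_4 / S_2^2; ~3 for Gaussian increments
    print(np.round(flatness, 2))
    ```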

  9. The influence of non-Gaussian distribution functions on the time-dependent perpendicular transport of energetic particles

    NASA Astrophysics Data System (ADS)

    Lasuik, J.; Shalchi, A.

    2018-06-01

    In the current paper we explore the influence of the assumed particle statistics on the transport of energetic particles across a mean magnetic field. In previous work the assumption of a Gaussian distribution function was standard, although there have been known cases for which the transport is non-Gaussian. In the present work we combine a kappa distribution with the ordinary differential equation provided by the so-called unified non-linear transport theory. We then compute running perpendicular diffusion coefficients for different values of κ and different turbulence configurations. We show that changing the parameter κ slightly increases or decreases the perpendicular diffusion coefficient depending on the considered turbulence configuration. Since these changes are small, we conclude that the assumed statistics are of minor significance in particle transport theory. The results obtained in the current paper support the use of a Gaussian distribution function, as is usually done in particle transport theory.

  10. Comparison of non-Gaussian and Gaussian diffusion models of diffusion weighted imaging of rectal cancer at 3.0 T MRI.

    PubMed

    Zhang, Guangwen; Wang, Shuangshuang; Wen, Didi; Zhang, Jing; Wei, Xiaocheng; Ma, Wanling; Zhao, Weiwei; Wang, Mian; Wu, Guosheng; Zhang, Jinsong

    2016-12-09

    Water molecular diffusion in in vivo tissue is much more complicated than a simple Gaussian process. We aimed to compare non-Gaussian diffusion models of diffusion-weighted imaging (DWI), including intra-voxel incoherent motion (IVIM) and the stretched-exponential model (SEM), with the Gaussian diffusion model at 3.0 T MRI in patients with rectal cancer, and to determine the optimal model for investigating the water diffusion properties and characterization of rectal carcinoma. Fifty-nine consecutive patients with pathologically confirmed rectal adenocarcinoma underwent DWI with 16 b-values on a 3.0 T MRI system. DWI signals were fitted to the mono-exponential and non-Gaussian diffusion models (IVIM-mono, IVIM-bi and SEM) on the primary tumor and adjacent normal rectal tissue. Parameters of the standard apparent diffusion coefficient (ADC), slow- and fast-ADC, fraction of fast ADC (f), α value and distributed diffusion coefficient (DDC) were generated and compared between the tumor and normal tissues. The SEM exhibited the best fit to the actual DWI signal in rectal cancer and the normal rectal wall (R^2 = 0.998 and 0.999, respectively). The DDC achieved a relatively high area under the curve (AUC = 0.980) in differentiating tumor from normal rectal wall. Non-Gaussian diffusion models could assess tissue properties more accurately than the ADC derived from the Gaussian diffusion model. SEM may be used as a potential optimal model for characterization of rectal cancer.
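
    A sketch of the model-fitting step for two of the models named above: the mono-exponential (Gaussian) ADC model and the stretched-exponential model with parameters DDC and α, fitted with scipy's curve_fit. The b-values and signal are synthetic, and the IVIM bi-exponential fit is omitted for brevity.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Synthetic multi-b-value DWI signal (b in s/mm^2, diffusivities in mm^2/s)
    b = np.array([0, 25, 50, 75, 100, 150, 200, 400, 600, 800, 1000, 1200,
                  1500, 2000, 2500, 3000], dtype=float)
    rng = np.random.default_rng(5)
    true_ddc, true_alpha = 1.0e-3, 0.75
    signal = np.exp(-(b * true_ddc) ** true_alpha) + rng.normal(0, 0.01, b.size)

    def mono_exp(b, adc):
        # Gaussian-diffusion model: S/S0 = exp(-b * ADC)
        return np.exp(-b * adc)

    def stretched_exp(b, ddc, alpha):
        # SEM: S/S0 = exp(-(b * DDC)^alpha), 0 < alpha <= 1
        return np.exp(-(b * ddc) ** alpha)

    adc, _ = curve_fit(mono_exp, b, signal, p0=[1e-3])
    (ddc, alpha), _ = curve_fit(stretched_exp, b, signal, p0=[1e-3, 0.8],
                                bounds=([1e-5, 0.1], [1e-2, 1.0]))
    print(f"ADC = {adc[0]:.2e}, DDC = {ddc:.2e}, alpha = {alpha:.2f}")
    ```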

  11. Simulation and analysis of scalable non-Gaussian statistically anisotropic random functions

    NASA Astrophysics Data System (ADS)

    Riva, Monica; Panzeri, Marco; Guadagnini, Alberto; Neuman, Shlomo P.

    2015-12-01

    Many earth and environmental (as well as other) variables, Y, and their spatial or temporal increments, ΔY, exhibit non-Gaussian statistical scaling. Previously we were able to capture some key aspects of such scaling by treating Y or ΔY as standard sub-Gaussian random functions. We were however unable to reconcile two seemingly contradictory observations, namely that whereas sample frequency distributions of Y (or its logarithm) exhibit relatively mild non-Gaussian peaks and tails, those of ΔY display peaks that grow sharper and tails that become heavier with decreasing separation distance or lag. Recently we overcame this difficulty by developing a new generalized sub-Gaussian model which captures both behaviors in a unified and consistent manner, exploring it on synthetically generated random functions in one dimension (Riva et al., 2015). Here we extend our generalized sub-Gaussian model to multiple dimensions, present an algorithm to generate corresponding random realizations of statistically isotropic or anisotropic sub-Gaussian functions and illustrate it in two dimensions. We demonstrate the accuracy of our algorithm by comparing ensemble statistics of Y and ΔY (such as, mean, variance, variogram and probability density function) with those of Monte Carlo generated realizations. We end by exploring the feasibility of estimating all relevant parameters of our model by analyzing jointly spatial moments of Y and ΔY obtained from a single realization of Y.

  12. SU-E-J-188: Theoretical Estimation of Margin Necessary for Markerless Motion Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patel, R; Block, A; Harkenrider, M

    2015-06-15

    Purpose: To estimate the margin necessary to adequately cover the target using markerless motion tracking (MMT) of lung lesions, given the uncertainty in tracking and the size of the target. Methods: Simulations were developed in Matlab to determine the effect of tumor size and tracking uncertainty on the margin necessary to achieve adequate coverage of the target. For simplicity, the lung tumor was approximated by a circle on a 2D radiograph. The tumor was varied in size from a diameter of 0.1 − 30 mm in increments of 0.1 mm. From our previous studies using dual energy markerless motion tracking, we estimated tracking uncertainties in x and y to have a standard deviation of 2 mm. A Gaussian was used to simulate the deviation between the tracked location and true target location. For each tumor size, 100,000 deviations were randomly generated, and the margin necessary to achieve at least 95% coverage 95% of the time was recorded. Additional simulations were run for varying uncertainties to demonstrate the effect of the tracking accuracy on the margin size. Results: The simulations showed an inverse relationship between tumor size and the margin necessary to achieve 95% coverage 95% of the time using the MMT technique. The margin decreased exponentially with target size. An increase in tracking accuracy expectedly showed a decrease in margin size as well. Conclusion: In our clinic a 5 mm expansion of the internal target volume (ITV) is used to define the planning target volume (PTV). These simulations show that for tracking accuracies in x and y better than 2 mm, the margin required is less than 5 mm. This simple simulation can provide physicians with a guideline estimate of the margin necessary for clinical use of MMT, based on the accuracy of their tracking and the size of the tumor.
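
    A sketch in the spirit of the described simulation: a circular target, Gaussian tracking errors with 2 mm standard deviation in x and y, and a search for the smallest margin giving at least 95% geometric coverage in at least 95% of trials. The coverage metric (fraction of the target disk inside the margin-expanded aperture) and the trial counts are assumptions, not details taken from the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    def smallest_margin(radius, sd=2.0, n_trials=2000, n_pts=2000):
        """Smallest margin (0.1 mm steps) such that an aperture of radius
        (radius + margin), centred at the Gaussian-perturbed tracked position,
        covers >= 95% of the circular target area in >= 95% of trials."""
        # uniform sample points inside the target disk (shared across trials)
        r = radius * np.sqrt(rng.random(n_pts))
        theta = 2 * np.pi * rng.random(n_pts)
        px, py = r * np.cos(theta), r * np.sin(theta)
        # Gaussian tracking errors in x and y for each simulated trial
        dx = rng.normal(0, sd, n_trials)[:, None]
        dy = rng.normal(0, sd, n_trials)[:, None]
        dist2 = (px - dx) ** 2 + (py - dy) ** 2        # (n_trials, n_pts)
        for margin in np.arange(0.0, 15.05, 0.1):
            coverage = (dist2 <= (radius + margin) ** 2).mean(axis=1)
            if (coverage >= 0.95).mean() >= 0.95:
                return margin
        return np.nan

    for diameter in (5.0, 10.0, 20.0):                 # tumour diameter in mm
        print(diameter, "mm ->", round(smallest_margin(diameter / 2), 1), "mm margin")
    ```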

  13. Analysis of the observed and intrinsic durations of Swift/BAT gamma-ray bursts

    NASA Astrophysics Data System (ADS)

    Tarnopolski, Mariusz

    2016-07-01

    The duration distributions of 947 GRBs observed by Swift/BAT, as well as of its subsample of 347 events with measured redshift (which allows the durations to be examined in both the observer and rest frames), are examined. Using a maximum log-likelihood method, mixtures of two and three standard Gaussians are fitted to each sample, and the more adequate model is chosen based on the difference in the log-likelihoods, the Akaike information criterion, and the Bayesian information criterion. It is found that a two-Gaussian mixture is a better description than a three-Gaussian one, and that the presumed intermediate-duration class is unlikely to be present in the Swift duration data.
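
    The model-selection step can be sketched with scikit-learn: fit two- and three-component Gaussian mixtures to log10 durations and compare AIC/BIC. The durations below are synthetic placeholders, not the Swift/BAT sample.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(2)
    # Synthetic log10(T90) sample standing in for observed durations:
    # a "short" and a "long" population with placeholder parameters
    log_t90 = np.concatenate([rng.normal(-0.3, 0.5, 80),
                              rng.normal(1.6, 0.45, 420)]).reshape(-1, 1)

    for k in (2, 3):
        gmm = GaussianMixture(n_components=k, n_init=10, random_state=0).fit(log_t90)
        print(f"k={k}: logL={gmm.score(log_t90) * len(log_t90):.1f}, "
              f"AIC={gmm.aic(log_t90):.1f}, BIC={gmm.bic(log_t90):.1f}")
    # The model with the lower AIC/BIC (here typically two components) is preferred.
    ```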

  14. Sparse covariance estimation in heterogeneous samples*

    PubMed Central

    Rodríguez, Abel; Lenkoski, Alex; Dobra, Adrian

    2015-01-01

    Standard Gaussian graphical models implicitly assume that the conditional independence among variables is common to all observations in the sample. However, in practice, observations are usually collected from heterogeneous populations where such an assumption is not satisfied, leading in turn to nonlinear relationships among variables. To address such situations we explore mixtures of Gaussian graphical models; in particular, we consider both infinite mixtures and infinite hidden Markov models where the emission distributions correspond to Gaussian graphical models. Such models allow us to divide a heterogeneous population into homogenous groups, with each cluster having its own conditional independence structure. As an illustration, we study the trends in foreign exchange rate fluctuations in the pre-Euro era. PMID:26925189

  15. High power infrared super-Gaussian beams: generation, propagation, and application

    NASA Astrophysics Data System (ADS)

    du Preez, Neil C.; Forbes, Andrew; Botha, Lourens R.

    2008-10-01

    In this paper we present the design of a CO2 laser resonator that produces a super-Gaussian laser beam as its stable transverse mode. The resonator makes use of an intra-cavity diffractive mirror and a flat output coupler, generating the desired intensity profile at the output coupler with a flat wavefront. We consider the modal build-up in such a resonator and show that this resonator mode has the ability to extract more energy from the cavity than a standard single-mode cavity beam (e.g., a Gaussian-mode cavity). We demonstrate the design experimentally on a high average power TEA CO2 laser for paint stripping applications.
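
    For reference, a super-Gaussian intensity profile is commonly written as I(r) = I0 exp[-2 (r/w)^(2N)], reducing to the ordinary Gaussian for N = 1 and approaching a flat-topped profile with sharp edges for large N. A short sketch with arbitrary width and order:

    ```python
    import numpy as np

    def super_gaussian(r, w=5.0, order=6, i0=1.0):
        """Flat-topped super-Gaussian intensity profile; order = 1 is the ordinary Gaussian."""
        return i0 * np.exp(-2.0 * (np.abs(r) / w) ** (2 * order))

    r = np.linspace(0, 10, 6)                         # radial positions (arbitrary units)
    print(np.round(super_gaussian(r, order=1), 3))    # Gaussian: gradual roll-off
    print(np.round(super_gaussian(r, order=6), 3))    # super-Gaussian: flat top, sharp edge
    ```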

  16. Optical pathology of human brain metastasis of lung cancer using combined resonance Raman and spatial frequency spectroscopies

    NASA Astrophysics Data System (ADS)

    Zhou, Yan; Liu, Cheng-hui; Pu, Yang; Cheng, Gangge; Zhou, Lixin; Chen, Jun; Zhu, Ke; Alfano, Robert R.

    2016-03-01

    Raman spectroscopy has become widely used for the diagnosis of breast, lung and brain cancers. This report introduces a new approach based on spatial frequency spectrum analysis of the underlying tissue structure at different stages of brain tumor. A combined spatial frequency spectroscopy (SFS) and resonance Raman (RR) spectroscopic method is used to discriminate human brain metastases of lung cancer from normal tissue for the first time. A total of thirty-one label-free micrographic images of normal and metastatic brain cancer tissues, obtained from a confocal micro-Raman spectroscopic system synchronously with the RR spectra of the corresponding samples, were collected from identical tissue sites. The difference in the randomness of tissue structure between micrograph images of metastatic brain tumor tissue and normal tissue can be recognized by analyzing the spatial frequency. By fitting the distribution of the spatial frequency spectra of human brain tissues with a Gaussian function, the standard deviation, σ, can be obtained, which was used to generate a criterion to differentiate human brain cancerous tissues from normal ones using a Support Vector Machine (SVM) classifier. This SFS-SVM analysis of micrograph images gives good results, with 85% sensitivity and 75% specificity in comparison with gold-standard reports of pathology and immunology. The dual-modal advantages of SFS combined with RR spectroscopy may open a new way in neuropathology applications.

  17. Bayesian sensitivity analysis of bifurcating nonlinear models

    NASA Astrophysics Data System (ADS)

    Becker, W.; Worden, K.; Rowson, J.

    2013-01-01

    Sensitivity analysis allows one to investigate how changes in input parameters to a system affect the output. When computational expense is a concern, metamodels such as Gaussian processes can offer considerable computational savings over Monte Carlo methods, albeit at the expense of introducing a data modelling problem. In particular, Gaussian processes assume a smooth, non-bifurcating response surface. This work highlights a recent extension to Gaussian processes which uses a decision tree to partition the input space into homogeneous regions, and then fits separate Gaussian processes to each region. In this way, bifurcations can be modelled at region boundaries and different regions can have different covariance properties. To test this method, both the treed and standard methods were applied to the bifurcating response of a Duffing oscillator and a bifurcating FE model of a heart valve. It was found that the treed Gaussian process provides a practical way of performing uncertainty and sensitivity analysis on large, potentially-bifurcating models, which cannot be dealt with by using a single GP, although an open problem remains how to manage bifurcation boundaries that are not parallel to coordinate axes.

  18. Deterministic matrices matching the compressed sensing phase transitions of Gaussian random matrices

    PubMed Central

    Monajemi, Hatef; Jafarpour, Sina; Gavish, Matan; Donoho, David L.; Ambikasaran, Sivaram; Bacallado, Sergio; Bharadia, Dinesh; Chen, Yuxin; Choi, Young; Chowdhury, Mainak; Chowdhury, Soham; Damle, Anil; Fithian, Will; Goetz, Georges; Grosenick, Logan; Gross, Sam; Hills, Gage; Hornstein, Michael; Lakkam, Milinda; Lee, Jason; Li, Jian; Liu, Linxi; Sing-Long, Carlos; Marx, Mike; Mittal, Akshay; Monajemi, Hatef; No, Albert; Omrani, Reza; Pekelis, Leonid; Qin, Junjie; Raines, Kevin; Ryu, Ernest; Saxe, Andrew; Shi, Dai; Siilats, Keith; Strauss, David; Tang, Gary; Wang, Chaojun; Zhou, Zoey; Zhu, Zhen

    2013-01-01

    In compressed sensing, one takes n samples of an N-dimensional vector x0 using an n × N matrix A, obtaining undersampled measurements y = Ax0. For random matrices with independent standard Gaussian entries, it is known that, when x0 is k-sparse, there is a precisely determined phase transition: for a certain region in the (undersampling, sparsity) phase diagram, convex optimization typically finds the sparsest solution, whereas outside that region, it typically fails. It has been shown empirically that the same property—with the same phase transition location—holds for a wide range of non-Gaussian random matrix ensembles. We report extensive experiments showing that the Gaussian phase transition also describes numerous deterministic matrices, including Spikes and Sines, Spikes and Noiselets, Paley Frames, Delsarte-Goethals Frames, Chirp Sensing Matrices, and Grassmannian Frames. Namely, for each of these deterministic matrices in turn, for a typical k-sparse object, we observe that convex optimization is successful over a region of the phase diagram that coincides with the region known for Gaussian random matrices. Our experiments considered coefficients constrained to four different sets, and the results establish our finding for each of the four associated phase transitions. PMID:23277588

  19. Partial-Wave Representations of Laser Beams for Use in Light-Scattering Calculations

    NASA Technical Reports Server (NTRS)

    Gouesbet, Gerard; Lock, James A.; Grehan, Gerard

    1995-01-01

    In the framework of generalized Lorenz-Mie theory, laser beams are described by sets of beam-shape coefficients. The modified localized approximation to evaluate these coefficients for a focused Gaussian beam is presented. A new description of Gaussian beams, called standard beams, is introduced. A comparison is made between the values of the beam-shape coefficients in the framework of the localized approximation and the beam-shape coefficients of standard beams. This comparison leads to new insights concerning the electromagnetic description of laser beams. The relevance of our discussion is enhanced by a demonstration that the localized approximation provides a very satisfactory description of top-hat beams as well.

  20. Complexities of follicle deviation during selection of a dominant follicle in Bos taurus heifers.

    PubMed

    Ginther, O J; Baldrighi, J M; Siddiqui, M A R; Araujo, E R

    2016-11-01

    Follicle deviation during a follicular wave is a continuation in growth rate of the dominant follicle (F1) and decreased growth rate of the largest subordinate follicle (F2). The reliability of using an F1 of 8.5 mm to represent the beginning of expected deviation for experimental purposes during waves 1 and 2 (n = 26 per wave) was studied daily in heifers. Each wave was subgrouped as follows: standard subgroup (F1 larger than F2 for 2 days preceding deviation and F2 > 7.0 mm on the day of deviation), undersized subgroup (F2 did not attain 7.0 mm by the day of deviation), and switched subgroup (F2 larger than F1 at least once on the 2 days before or on the day of deviation). For each wave, mean differences in diameter between F1 and F2 changed abruptly at expected deviation in the standard subgroup but began 1 day before expected deviation in the undersized and switched subgroups. Concentrations of FSH in the wave-stimulating FSH surge and an increase in LH centered on expected deviation did not differ among subgroups. Results for each wave indicated that (1) expected deviation (F1, 8.5 mm) was a reliable representation of actual deviation in the standard subgroup but not in the undersized and switched subgroups; (2) concentrations of the gonadotropins normalized to expected deviation were similar among the three subgroups, indicating that the day of deviation was related to diameter of F1 and not F2; and (3) defining an expected day of deviation for experimental use should consider both diameter of F1 and the characteristics of deviation. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. 40 CFR 90.708 - Cumulative Sum (CumSum) procedure.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... is 5.0×σ, and is a function of the standard deviation, σ. σ = the sample standard deviation and is... individual engine. FEL = Family Emission Limit (the standard if no FEL). F = 0.25×σ. (2) After each test pursuant...

  2. Optical tomography by means of regularized MLEM

    NASA Astrophysics Data System (ADS)

    Majer, Charles L.; Urbanek, Tina; Peter, Jörg

    2015-09-01

    To solve the inverse problem involved in fluorescence mediated tomography, a regularized maximum likelihood expectation maximization (MLEM) reconstruction strategy is proposed. This technique has recently been applied to reconstruct galaxy clusters in astronomy and is adopted here. The MLEM algorithm is implemented as a Richardson-Lucy (RL) scheme and includes entropic regularization and a floating default prior. Hence, the strategy is very robust against measurement noise and also avoids converging into noise patterns. Normalized Gaussian filtering with fixed standard deviation is applied for the floating default kernel. The reconstruction strategy is investigated using the XFM-2 homogeneous mouse phantom (Caliper LifeSciences Inc., Hopkinton, MA) with known optical properties. Prior to optical imaging, X-ray CT tomographic data of the phantom were acquired to provide structural context. Phantom inclusions were fitted with various fluorochrome inclusions (Cy5.5), for which optical data at 60 projections over 360 degrees were acquired, respectively. Fluorochrome excitation was accomplished by scanning laser point illumination in transmission mode (laser opposite to camera). Following data acquisition, a 3D triangulated mesh is derived from the reconstructed CT data, which is then matched with the various optical projection images through 2D linear interpolation, correlation and Fourier transformation in order to assess translational and rotational deviations between the optical and CT imaging systems. Preliminary results indicate that the proposed regularized MLEM algorithm, when driven with a constant initial condition, yields reconstructed images that tend to be smoother in comparison to classical MLEM without regularization. Once the floating default prior is included, this bias is significantly reduced.
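
    A minimal sketch of a Richardson-Lucy/MLEM iteration with a "floating default" pull toward a Gaussian-smoothed version of the current estimate. The way the prior enters here (a geometric blend with assumed strength beta) is a simplification of the entropic scheme described above, and the system matrix is a toy 1-D blur.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    rng = np.random.default_rng(4)

    # Toy 1-D forward model: Gaussian blur as the system matrix A
    n = 64
    A = np.array([np.exp(-0.5 * ((np.arange(n) - i) / 2.0) ** 2) for i in range(n)])
    A /= A.sum(axis=1, keepdims=True)

    truth = np.zeros(n); truth[20] = 5.0; truth[40:45] = 2.0
    y = rng.poisson(A @ truth + 0.05)                  # noisy measurement

    x = np.ones(n)                                     # flat (constant) initial condition
    sens = A.sum(axis=0)                               # A^T 1
    for it in range(200):
        ratio = y / np.clip(A @ x, 1e-12, None)
        x *= (A.T @ ratio) / sens                      # Richardson-Lucy / MLEM update
        # "floating default": pull the estimate toward its own Gaussian-smoothed
        # version with fixed sigma; a simplified stand-in for the entropic prior
        default = gaussian_filter1d(x, sigma=2.0)
        beta = 0.1                                     # regularization strength (assumed)
        x = x ** (1 - beta) * default ** beta

    print(np.round(x[18:23], 2), np.round(x[39:46], 2))
    ```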

  3. Acoustic Correlates of Compensatory Adjustments to the Glottic and Supraglottic Structures in Patients with Unilateral Vocal Fold Paralysis

    PubMed Central

    2015-01-01

    The goal of this study was to analyse perceptually and acoustically the voices of patients with Unilateral Vocal Fold Paralysis (UVFP) and compare them to the voices of normal subjects. These voices were analysed perceptually with the GRBAS scale and acoustically using the following parameters: mean fundamental frequency (F0), standard deviation of F0, jitter (ppq5), shimmer (apq11), mean harmonics-to-noise ratio (HNR), mean first (F1) and second (F2) formant frequencies, and standard deviation of the F1 and F2 frequencies. Statistically significant differences were found in all of the perceptual parameters. Also the jitter, shimmer, HNR, standard deviation of F0, and standard deviation of the frequency of F2 were statistically different between groups, for both genders. In the male data, differences were also found in the F1 and F2 frequency values and in the standard deviation of the frequency of F1. This study allowed the documentation of the alterations resulting from UVFP and addressed the exploration of parameters with limited information for this pathology. PMID:26557690

  4. Dynamics of the standard deviations of three wind velocity components from the data of acoustic sounding

    NASA Astrophysics Data System (ADS)

    Krasnenko, N. P.; Kapegesheva, O. F.; Shamanaeva, L. G.

    2017-11-01

    Spatiotemporal dynamics of the standard deviations of three wind velocity components measured with a mini-sodar in the atmospheric boundary layer is analyzed. During the day on September 16 and at night on September 12, values of the standard deviation ranged from 0.5 to 4 m/s for the x- and y-components and from 0.2 to 1.2 m/s for the z-component. An analysis of the vertical profiles of the standard deviations of the three wind velocity components for a 6-day measurement period has shown that the increase of σx and σy with altitude is well described by a power-law dependence with an exponent changing from 0.22 to 1.3 depending on the time of day, while σz depends linearly on the altitude. The approximation constants have been found and their errors have been estimated. The established physical regularities and the approximation constants allow the spatiotemporal dynamics of the standard deviations of the three wind velocity components in the atmospheric boundary layer to be described and can be recommended for application in ABL models.
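
    The reported altitude dependences amount to a power-law fit σ(z) = a·z^p for the horizontal components (a straight line in log-log space) and a linear fit for the vertical component. A sketch on synthetic profiles:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    z = np.arange(50.0, 500.0, 25.0)                                    # altitude, m
    sigma_x = 0.9 * (z / 100.0) ** 0.6 * rng.normal(1, 0.05, z.size)    # synthetic profile
    sigma_z = 0.2 + 0.002 * z + rng.normal(0, 0.02, z.size)

    # power law sigma = a * z**p  <=>  log(sigma) = log(a) + p * log(z)
    p, log_a = np.polyfit(np.log(z), np.log(sigma_x), 1)
    print(f"horizontal: sigma_x ~ {np.exp(log_a):.3f} * z^{p:.2f}")

    slope, intercept = np.polyfit(z, sigma_z, 1)
    print(f"vertical:   sigma_z ~ {intercept:.3f} + {slope:.4f} * z")
    ```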

  5. A proof for Rhiel's range estimator of the coefficient of variation for skewed distributions.

    PubMed

    Rhiel, G Steven

    2007-02-01

    This research study provides a proof that the coefficient of variation (CV(high-low)) calculated from the highest and lowest values in a set of data is applicable to specific skewed distributions with varying means and standard deviations. Earlier, Rhiel provided values for d(n), the standardized mean range, and a(n), an adjustment for bias in the range estimator of μ. These values are used in estimating the coefficient of variation from the range for skewed distributions. The d(n) and a(n) values were specified for specific skewed distributions with a fixed mean and standard deviation. In this proof it is shown that the d(n) and a(n) values are applicable for the specific skewed distributions when the mean and standard deviation can take on differing values. This will give the researcher confidence in using this statistic for skewed distributions regardless of the mean and standard deviation.

  6. Reconstructing the dark sector interaction with LISA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Rong-Gen; Yang, Tao; Tamanini, Nicola, E-mail: cairg@itp.ac.cn, E-mail: nicola.tamanini@cea.fr, E-mail: yangtao@itp.ac.cn

    We perform a forecast analysis of the ability of the LISA space-based interferometer to reconstruct the dark sector interaction using gravitational wave standard sirens at high redshift. We employ Gaussian process methods to reconstruct the distance-redshift relation in a model-independent way. We adopt simulated catalogues of standard sirens given by merging massive black hole binaries visible by LISA, with an electromagnetic counterpart detectable by future telescopes. The catalogues are based on three different astrophysical scenarios for the evolution of massive black hole mergers based on the semi-analytic model of E. Barausse, Mon. Not. Roy. Astron. Soc. 423 (2012) 2533. We first use these standard siren datasets to assess the potential of LISA in reconstructing a possible interaction between vacuum dark energy and dark matter. Then we combine the LISA cosmological data with supernovae data simulated for the Dark Energy Survey. We consider two scenarios distinguished by the time duration of the LISA mission: 5 and 10 years. Using only LISA standard siren data, the dark sector interaction can be well reconstructed from redshift z ∼ 1 to z ∼ 3 (for a 5 year mission) and z ∼ 1 up to z ∼ 5 (for a 10 year mission), though the reconstruction is inefficient at lower redshift. When combined with the DES datasets, the interaction is well reconstructed in the whole redshift region from z ∼ 0 to z ∼ 3 (5 yr) and z ∼ 0 to z ∼ 5 (10 yr), respectively. Massive black hole binary standard sirens can thus be used to constrain the dark sector interaction at redshift ranges not reachable by usual supernovae datasets, which probe only the z ≲ 1.5 range. Gravitational wave standard sirens will not only constitute a complementary and alternative way, with respect to familiar electromagnetic observations, to probe the cosmic expansion, but will also provide new tests to constrain possible deviations from the standard ΛCDM dynamics, especially at high redshift.
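
    The model-independent reconstruction step can be sketched with Gaussian-process regression on a mock distance-redshift catalogue. The cosmology, catalogue size, and 5% distance errors below are placeholders, and the subsequent extraction of a dark-sector interaction is not attempted.

    ```python
    import numpy as np
    from scipy.integrate import quad
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(8)

    # Mock flat-LambdaCDM luminosity distances (placeholder cosmology)
    H0, om = 70.0, 0.3                       # km/s/Mpc, matter density
    c = 299792.458                           # km/s
    def d_lum(z):
        integral, _ = quad(lambda x: 1.0 / np.sqrt(om * (1 + x) ** 3 + 1 - om), 0.0, z)
        return (1 + z) * c / H0 * integral   # Mpc

    z_obs = np.sort(rng.uniform(0.1, 5.0, 25))           # mock standard-siren redshifts
    d_obs = np.array([d_lum(z) for z in z_obs])
    d_obs *= rng.normal(1.0, 0.05, z_obs.size)           # ~5% distance errors (assumed)

    kernel = 1.0 * RBF(length_scale=2.0) + WhiteKernel(noise_level=1e-2)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(z_obs[:, None], d_obs)

    z_grid = np.linspace(0.1, 5.0, 10)
    mean, sd = gp.predict(z_grid[:, None], return_std=True)
    print(np.round(mean, 0))                 # reconstructed d_L(z) and its 1-sigma band
    print(np.round(sd, 0))
    ```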

  7. Approximate message passing for nonconvex sparse regularization with stability and asymptotic analysis

    NASA Astrophysics Data System (ADS)

    Sakata, Ayaka; Xu, Yingying

    2018-03-01

    We analyse a linear regression problem with nonconvex regularization called smoothly clipped absolute deviation (SCAD) under an overcomplete Gaussian basis for Gaussian random data. We propose an approximate message passing (AMP) algorithm considering nonconvex regularization, namely SCAD-AMP, and analytically show that the stability condition corresponds to the de Almeida-Thouless condition in the spin glass literature. Through asymptotic analysis, we show the correspondence between the density evolution of SCAD-AMP and the replica symmetric (RS) solution. Numerical experiments confirm that for a sufficiently large system size, SCAD-AMP achieves the optimal performance predicted by the replica method. Through replica analysis, a phase transition between the replica symmetric and replica symmetry breaking (RSB) regions is found in the parameter space of SCAD. The appearance of the RS region for a nonconvex penalty is a significant advantage, as it indicates a region where the landscape of the optimization problem is smooth. Furthermore, we analytically show that the statistical representation performance of the SCAD penalty is better than that of ℓ1-based regularization.

  8. Propagation of a Gaussian-beam wave in general anisotropic turbulence

    NASA Astrophysics Data System (ADS)

    Andrews, L. C.; Phillips, R. L.; Crabbs, R.

    2014-10-01

    Mathematical models for a Gaussian-beam wave propagating through anisotropic non-Kolmogorov turbulence have been developed in the past by several researchers. In previous publications, the anisotropic spatial power spectrum model was based on the assumption that propagation was in the z direction with circular symmetry maintained in the orthogonal xy-plane throughout the path. In the present analysis, however, the anisotropic spectrum model is no longer based on a single anisotropy parameter—instead, two such parameters are introduced in the orthogonal xy-plane so that circular symmetry in this plane is no longer required. In addition, deviations from the 11/3 power-law behavior in the spectrum model are allowed by assuming power-law index variations 3 < α < 4. In the current study we develop theoretical models for beam spot size, spatial coherence, and scintillation index that are valid in weak irradiance fluctuation regimes as well as in deep turbulence, or strong irradiance fluctuation regimes. These new results are compared with those derived from the more specialized anisotropic spectrum used in previous analyses.

  9. Performance metrics for the assessment of satellite data products: an ocean color case study

    PubMed Central

    Seegers, Bridget N.; Stumpf, Richard P.; Schaeffer, Blake A.; Loftin, Keith A.; Werdell, P. Jeremy

    2018-01-01

    Performance assessment of ocean color satellite data has generally relied on statistical metrics chosen for their common usage and the rationale for selecting certain metrics is infrequently explained. Commonly reported statistics based on mean squared errors, such as the coefficient of determination (r2), root mean square error, and regression slopes, are most appropriate for Gaussian distributions without outliers and, therefore, are often not ideal for ocean color algorithm performance assessment, which is often limited by sample availability. In contrast, metrics based on simple deviations, such as bias and mean absolute error, as well as pair-wise comparisons, often provide more robust and straightforward quantities for evaluating ocean color algorithms with non-Gaussian distributions and outliers. This study uses a SeaWiFS chlorophyll-a validation data set to demonstrate a framework for satellite data product assessment and recommends a multi-metric and user-dependent approach that can be applied within science, modeling, and resource management communities. PMID:29609296
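
    The simple-deviation metrics recommended above are often computed in log10 space for chlorophyll-a, giving multiplicative bias and mean absolute error; r² is included for contrast. The matchup data below are placeholders, and the exact metric definitions used in the paper may differ in detail.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # Placeholder matchup data: in-situ vs satellite chlorophyll-a (mg m^-3)
    chl_insitu = 10 ** rng.uniform(-1.5, 1.0, 200)
    chl_sat = chl_insitu * 10 ** rng.normal(0.05, 0.2, 200)   # multiplicative errors + bias

    x, y = np.log10(chl_insitu), np.log10(chl_sat)

    bias = 10 ** np.mean(y - x)              # multiplicative bias
    mae  = 10 ** np.mean(np.abs(y - x))      # multiplicative mean absolute error
    r2   = np.corrcoef(x, y)[0, 1] ** 2      # squared-error-based metric, for contrast

    print(f"bias = {bias:.2f}x, MAE = {mae:.2f}x, r^2 = {r2:.2f}")
    ```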

  10. Random errors in interferometry with the least-squares method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Qi

    2011-01-20

    This investigation analyzes random errors in interferometric surface profilers using the least-squares method when random noise is present. Two types of random noise are considered here: intensity noise and position noise. Two formulas have been derived for estimating the standard deviations of the surface height measurements: one is for estimating the standard deviation when only intensity noise is present, and the other is for estimating the standard deviation when only position noise is present. Measurements on simulated noisy interferometric data have been performed, and standard deviations of the simulated measurements have been compared with those theoretically derived. The relationships have also been discussed between random error and the wavelength of the light source and between random error and the amplitude of the interference fringe.

  11. High-Order Local Pooling and Encoding Gaussians Over a Dictionary of Gaussians.

    PubMed

    Li, Peihua; Zeng, Hui; Wang, Qilong; Shiu, Simon C K; Zhang, Lei

    2017-07-01

    Local pooling (LP) in configuration (feature) space proposed by Boureau et al. explicitly restricts similar features to be aggregated, which can preserve as much discriminative information as possible. At the time it appeared, this method combined with sparse coding achieved competitive classification results with only a small dictionary. However, its performance lags far behind the state-of-the-art results as only the zero-order information is exploited. Inspired by the success of high-order statistical information in existing advanced feature coding or pooling methods, we make an attempt to address the limitation of LP. To this end, we present a novel method called high-order LP (HO-LP) to leverage the information higher than the zero-order one. Our idea is intuitively simple: we compute the first- and second-order statistics per configuration bin and model them as a Gaussian. Accordingly, we employ a collection of Gaussians as visual words to represent the universal probability distribution of features from all classes. Our problem is naturally formulated as encoding Gaussians over a dictionary of Gaussians as visual words. This problem, however, is challenging since the space of Gaussians is not a Euclidean space but forms a Riemannian manifold. We address this challenge by mapping Gaussians into the Euclidean space, which enables us to perform coding with common Euclidean operations rather than complex and often expensive Riemannian operations. Our HO-LP preserves the advantages of the original LP: pooling only similar features and using a small dictionary. Meanwhile, it achieves very promising performance on standard benchmarks, with either conventional, hand-engineered features or deep learning-based features.

  12. Multipartite entanglement in three-mode Gaussian states of continuous-variable systems: Quantification, sharing structure, and decoherence

    NASA Astrophysics Data System (ADS)

    Adesso, Gerardo; Serafini, Alessio; Illuminati, Fabrizio

    2006-03-01

    We present a complete analysis of the multipartite entanglement of three-mode Gaussian states of continuous-variable systems. We derive standard forms which characterize the covariance matrix of pure and mixed three-mode Gaussian states up to local unitary operations, showing that the local entropies of pure Gaussian states are bound to fulfill a relationship which is stricter than the general Araki-Lieb inequality. Quantum correlations can be quantified by a proper convex roof extension of the squared logarithmic negativity, the continuous-variable tangle, or contangle. We review and elucidate in detail the proof that in multimode Gaussian states the contangle satisfies a monogamy inequality constraint [G. Adesso and F. Illuminati, New J. Phys. 8, 15 (2006)]. The residual contangle, emerging from the monogamy inequality, is an entanglement monotone under Gaussian local operations and classical communications and defines a measure of genuine tripartite entanglement. We determine the analytical expression of the residual contangle for arbitrary pure three-mode Gaussian states and study in detail the distribution of quantum correlations in such states. This analysis shows that pure, symmetric states allow for a promiscuous entanglement sharing, having both maximum tripartite entanglement and maximum couplewise entanglement between any pair of modes. We thus name these states GHZ/W states of continuous-variable systems because they are simultaneously the continuous-variable counterparts of both the GHZ and the W states of three qubits. We finally consider the effect of decoherence on three-mode Gaussian states, studying the decay of the residual contangle. The GHZ/W states are shown to be maximally robust against losses and thermal noise.

  13. N2/O2/H2 Dual-Pump Cars: Validation Experiments

    NASA Technical Reports Server (NTRS)

    OByrne, S.; Danehy, P. M.; Cutler, A. D.

    2003-01-01

    The dual-pump coherent anti-Stokes Raman spectroscopy (CARS) method is used to measure temperature and the relative species densities of N2, O2 and H2 in two experiments. Average values and root-mean-square (RMS) deviations are determined. Mean temperature measurements in a furnace containing air between 300 and 1800 K agreed with thermocouple measurements within 26 K on average, while mean mole fractions agreed to within 1.6% of the expected value. The temperature measurement standard deviation averaged 64 K while the standard deviation of the species mole fractions averaged 7.8% for O2 and 3.8% for N2, based on 200 single-shot measurements. Preliminary measurements have also been performed in a flat-flame burner for fuel-lean and fuel-rich flames. Temperature standard deviations of 77 K were measured, and the ratios of H2 to N2 and O2 to N2 had standard deviations from the mean value of 12.3% and 10% of the measured ratio, respectively.

  14. Numerical solutions for patterns statistics on Markov chains.

    PubMed

    Nuel, Gregory

    2006-01-01

    We propose here a review of the methods available to compute pattern statistics on text generated by a Markov source. Theoretical, but also numerical aspects are detailed for a wide range of techniques (exact, Gaussian, large deviations, binomial and compound Poisson). The SPatt package (Statistics for Pattern, free software available at http://stat.genopole.cnrs.fr/spatt) implementing all these methods is then used to compare all these approaches in terms of computational time and reliability in the most complete pattern statistics benchmark available at the present time.

  15. Comparative study of navigated versus freehand osteochondral graft transplantation of the knee.

    PubMed

    Koulalis, Dimitrios; Di Benedetto, Paolo; Citak, Mustafa; O'Loughlin, Padhraig; Pearle, Andrew D; Kendoff, Daniel O

    2009-04-01

    Osteochondral lesions are a common sports-related injury for which osteochondral grafting, including mosaicplasty, is an established treatment. Computer navigation has been gaining popularity in orthopaedic surgery to improve accuracy and precision. Navigation improves angle and depth matching during harvest and placement of osteochondral grafts compared with conventional freehand open technique. Controlled laboratory study. Three cadaveric knees were used. Reference markers were attached to the femur, tibia, and donor/recipient site guides. Fifteen osteochondral grafts were harvested and inserted into recipient sites with computer navigation, and 15 similar grafts were inserted freehand. The angles of graft removal and placement as well as surface congruity (graft depth) were calculated for each surgical group. The mean harvesting angle at the donor site using navigation was 4 degrees (standard deviation, 2.3 degrees; range, 1-9 degrees) versus 12 degrees (standard deviation, 5.5 degrees; range, 5-24 degrees) using freehand technique (P < .0001). The recipient plug removal angle using the navigated technique was 3.3 degrees (standard deviation, 2.1 degrees; range, 0-9 degrees) versus 10.7 degrees (standard deviation, 4.9 degrees; range, 2-17 degrees) in freehand (P < .0001). The mean navigated recipient plug placement angle was 3.6 degrees (standard deviation, 2.0 degrees; range, 1-9 degrees) versus 10.6 degrees (standard deviation, 4.4 degrees; range, 3-17 degrees) with freehand technique (P = .0001). The mean height of plug protrusion under navigation was 0.3 mm (standard deviation, 0.2 mm; range, 0-0.6 mm) versus 0.5 mm (standard deviation, 0.3 mm; range, 0.2-1.1 mm) using a freehand technique (P = .0034). Significantly greater accuracy and precision were observed in harvesting and placement of the osteochondral grafts in the navigated procedures. Clinical studies are needed to establish a benefit in vivo. Improvement in the osteochondral harvest and placement is desirable to optimize clinical outcomes. Navigation shows great potential to improve both harvest and placement precision and accuracy, thus optimizing ultimate surface congruity.

  16. Heavy-lifting of gauge theories by cosmic inflation

    NASA Astrophysics Data System (ADS)

    Kumar, Soubhik; Sundrum, Raman

    2018-05-01

    Future measurements of primordial non-Gaussianity can reveal cosmologically produced particles with masses of order the inflationary Hubble scale and their interactions with the inflaton, giving us crucial insights into the structure of fundamental physics at extremely high energies. We study gauge-Higgs theories that may be accessible in this regime, carefully imposing the constraints of gauge symmetry and its (partial) Higgsing. We distinguish two types of Higgs mechanisms: (i) a standard one in which the Higgs scale is constant before and after inflation, where the particles observable in non-Gaussianities are far heavier than can be accessed by laboratory experiments, perhaps associated with gauge unification, and (ii) a "heavy-lifting" mechanism in which couplings to curvature can result in Higgs scales of order the Hubble scale during inflation while reducing to far lower scales in the current era, where they may now be accessible to collider and other laboratory experiments. In the heavy-lifting option, renormalization-group running of terrestrial measurements yields predictions for cosmological non-Gaussianities. If the heavy-lifted gauge theory suffers from a hierarchy problem, as does the Standard Model, confirming such predictions would demonstrate a striking violation of the Naturalness Principle. While observing gauge-Higgs sectors in non-Gaussianities will be challenging given the constraints of cosmic variance, we show that it may be possible with reasonable precision given favorable couplings to the inflationary dynamics.

  17. A novel approach to assess the treatment response using Gaussian random field in PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Mengdie; Guo, Ning; Hu, Guangshu

    2016-02-15

    Purpose: The assessment of early therapeutic response to anticancer therapy is vital for treatment planning and patient management in the clinic. With the development of personalized treatment plans, assessing the early treatment response, especially before any anatomically apparent changes occur, has become an urgent clinical need. Positron emission tomography (PET) imaging serves an important role in clinical oncology for tumor detection, staging, and therapy response assessment. Many studies on therapy response involve interpretation of differences between two PET images, usually in terms of standardized uptake values (SUVs). However, the quantitative accuracy of this measurement is limited. This work proposes a statistically robust approach for therapy response assessment based on Gaussian random fields (GRF) to provide a statistically more meaningful scale for evaluating therapy effects. Methods: The authors propose a new criterion for therapeutic assessment by incorporating image noise into the traditional SUV method. An analytical method based on approximate expressions of the Fisher information matrix was applied to model the variance of individual pixels in reconstructed images. A zero-mean, unit-variance GRF under the null hypothesis (no response to therapy) was obtained by normalizing each pixel of the post-therapy image with the mean and standard deviation of the pretherapy image. The performance of the proposed method was evaluated by Monte Carlo simulation, where XCAT phantoms (128 x 128 pixels) with lesions of various diameters (2-6 mm), multiple tumor-to-background contrasts (3-10), and different changes in intensity (6.25%-30%) were used. The receiver operating characteristic curves and the corresponding areas under the curve were computed for both the proposed method and the traditional methods whose figure of merit is the percentage change of SUVs. A formula for estimating the false positive rate (FPR) was developed for the proposed therapy response assessment, utilizing a local-averaging method based on the random field. The accuracy of the estimation was validated in terms of Euclidean distance and correlation coefficient. Results: It is shown that the performance of therapy response assessment is significantly improved by the introduction of variance, with a higher area under the curve (97.3%) than SUVmean (91.4%) and SUVmax (82.0%). In addition, the FPR estimation serves as a good prediction of the specificity of the proposed method, consistent with the simulation outcome with a correlation coefficient of ∼1. Conclusions: In this work, the authors developed a method to evaluate therapy response from PET images, which were modeled as a Gaussian random field. The digital phantom simulations demonstrated that the proposed method achieved a large reduction in statistical variability by incorporating knowledge of the variance of the original Gaussian random field. The proposed method has the potential to enable prediction of early treatment response and shows promise for application to clinical practice. In future work, the authors will report on the robustness of the estimation theory for clinical therapy response evaluation, which pertains to binary discrimination tasks at a fixed location in the image, such as detection of small and weak lesions.
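
    A simplified Python sketch of the normalization step described above, assuming voxel-wise estimates of the pretherapy mean and standard deviation are available (e.g., from a Fisher-information-based variance model); the reconstruction, phantom, and ROC machinery of the study are not reproduced:

        import numpy as np

        def response_zmap(post_img, pre_mean, pre_std, z_thresh=3.0):
            """Normalize the post-therapy image by the pretherapy mean and standard deviation,
            giving an approximately zero-mean, unit-variance field under the null hypothesis of
            no response, then flag voxels whose |z| exceeds a threshold."""
            z = (post_img - pre_mean) / np.maximum(pre_std, 1e-12)
            return z, np.abs(z) > z_thresh

        # Toy example: a 128 x 128 phantom with a small region of reduced uptake.
        rng = np.random.default_rng(1)
        pre_mean = np.full((128, 128), 10.0)
        pre_std = np.full((128, 128), 1.0)
        post = pre_mean + rng.normal(0.0, 1.0, (128, 128))
        post[60:66, 60:66] -= 4.0                       # simulated treatment response
        zmap, flagged = response_zmap(post, pre_mean, pre_std)
        print("voxels flagged:", int(flagged.sum()))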

  18. Matrix Summaries Improve Research Reports: Secondary Analyses Using Published Literature

    ERIC Educational Resources Information Center

    Zientek, Linda Reichwein; Thompson, Bruce

    2009-01-01

    Correlation matrices and standard deviations are the building blocks of many of the commonly conducted analyses in published research, and AERA and APA reporting standards recommend their inclusion when reporting research results. The authors argue that the inclusion of correlation/covariance matrices, standard deviations, and means can enhance…

  19. 30 CFR 74.8 - Measurement, accuracy, and reliability requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... concentration, as defined by the relative standard deviation of the distribution of measurements. The relative standard deviation shall be less than 0.1275 without bias for both full-shift measurements of 8 hours or... Standards, Regulations, and Variances, 1100 Wilson Boulevard, Room 2350, Arlington, Virginia 22209-3939...

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Wenfang; Du, Jinjin; Wen, Ruijuan

    We have investigated the transmission spectra of a Fabry-Perot interferometer (FPI) with squeezed vacuum state injection and non-Gaussian detection, including photon number resolving detection and parity detection. In order to show the suitability of the system, parallel studies were made of the performance of two other light sources: coherent state of light and Fock state of light either with classical mean intensity detection or with non-Gaussian detection. This shows that by using the squeezed vacuum state and non-Gaussian detection simultaneously, the resolution of the FPI can go far beyond the cavity standard bandwidth limit based on the current techniques. The sensitivity of the scheme has also been explored and it shows that the minimum detectable sensitivity is better than that of the other schemes.

  1. Infinite von Mises-Fisher Mixture Modeling of Whole Brain fMRI Data.

    PubMed

    Røge, Rasmus E; Madsen, Kristoffer H; Schmidt, Mikkel N; Mørup, Morten

    2017-10-01

    Cluster analysis of functional magnetic resonance imaging (fMRI) data is often performed using gaussian mixture models, but when the time series are standardized such that the data reside on a hypersphere, this modeling assumption is questionable. The consequences of ignoring the underlying spherical manifold are rarely analyzed, in part due to the computational challenges imposed by directional statistics. In this letter, we discuss a Bayesian von Mises-Fisher (vMF) mixture model for data on the unit hypersphere and present an efficient inference procedure based on collapsed Markov chain Monte Carlo sampling. Comparing the vMF and gaussian mixture models on synthetic data, we demonstrate that the vMF model has a slight advantage inferring the true underlying clustering when compared to gaussian-based models on data generated from both a mixture of vMFs and a mixture of gaussians subsequently normalized. Thus, when performing model selection, the two models are not in agreement. Analyzing multisubject whole brain resting-state fMRI data from healthy adult subjects, we find that the vMF mixture model is considerably more reliable than the gaussian mixture model when comparing solutions across models trained on different groups of subjects, and again we find that the two models disagree on the optimal number of components. The analysis indicates that the fMRI data support more than a thousand clusters, and we confirm this is not a result of overfitting by demonstrating better prediction on data from held-out subjects. Our results highlight the utility of using directional statistics to model standardized fMRI data and demonstrate that whole brain segmentation of fMRI data requires a very large number of functional units in order to adequately account for the discernible statistical patterns in the data.
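
    A small Python sketch of the preprocessing that motivates the directional model above: each standardized time series is projected onto the unit hypersphere, after which a Gaussian likelihood is arguably mismatched to the data geometry (illustrative only; the collapsed sampler and vMF likelihood of the letter are not reproduced):

        import numpy as np

        def to_hypersphere(time_series):
            """Standardize each row (one voxel's time series) and scale it to unit norm,
            so every observation lies on the unit hypersphere."""
            x = time_series - time_series.mean(axis=1, keepdims=True)
            x = x / x.std(axis=1, keepdims=True)
            return x / np.linalg.norm(x, axis=1, keepdims=True)

        rng = np.random.default_rng(7)
        voxels = rng.normal(size=(1000, 240))     # toy data: 1000 voxels, 240 time points
        sphered = to_hypersphere(voxels)
        print(np.allclose(np.linalg.norm(sphered, axis=1), 1.0))   # True: unit-norm rows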

  2. The effects of auditory stimulation with music on heart rate variability in healthy women.

    PubMed

    Roque, Adriano L; Valenti, Vitor E; Guida, Heraldo L; Campos, Mônica F; Knap, André; Vanderlei, Luiz Carlos M; Ferreira, Lucas L; Ferreira, Celso; Abreu, Luiz Carlos de

    2013-07-01

    There are no data in the literature with regard to the acute effects of different styles of music on the geometric indices of heart rate variability. In this study, we evaluated the acute effects of relaxant baroque and excitatory heavy metal music on the geometric indices of heart rate variability in women. We conducted this study in 21 healthy women ranging in age from 18 to 35 years. We excluded persons with previous experience with musical instruments and persons who had an affinity for the song styles. We evaluated two groups: Group 1 (n = 21), who were exposed to relaxant classical baroque musical and excitatory heavy metal auditory stimulation; and Group 2 (n = 19), who were exposed to both styles of music and white noise auditory stimulation. Using earphones, the volunteers were exposed to baroque or heavy metal music for five minutes. After the first music exposure to baroque or heavy metal music, they remained at rest for five minutes; subsequently, they were re-exposed to the opposite music (70-80 dB). A different group of women were exposed to the same music styles plus white noise auditory stimulation (90 dB). The sequence of the songs was randomized for each individual. We analyzed the following indices: triangular index, triangular interpolation of RR intervals and Poincaré plot (standard deviation of instantaneous beat-by-beat variability, standard deviation of the long-term RR interval, standard deviation of instantaneous beat-by-beat variability and standard deviation of the long-term RR interval ratio), low frequency, high frequency, low frequency/high frequency ratio, standard deviation of all the normal RR intervals, root-mean square of differences between the adjacent normal RR intervals and the percentage of adjacent RR intervals with a difference of duration greater than 50 ms. Heart rate variability was recorded at rest for 10 minutes. The triangular index and the standard deviation of the long-term RR interval indices were reduced during exposure to both music styles in the first group and tended to decrease in the second group whereas the white noise exposure decreased the high frequency index. We observed no changes regarding the triangular interpolation of RR intervals, standard deviation of instantaneous beat-by-beat variability and standard deviation of instantaneous beat-by-beat variability/standard deviation in the long-term RR interval ratio. We suggest that relaxant baroque and excitatory heavy metal music slightly decrease global heart rate variability because of the equivalent sound level.

  3. The effects of auditory stimulation with music on heart rate variability in healthy women

    PubMed Central

    Roque, Adriano L.; Valenti, Vitor E.; Guida, Heraldo L.; Campos, Mônica F.; Knap, André; Vanderlei, Luiz Carlos M.; Ferreira, Lucas L.; Ferreira, Celso; de Abreu, Luiz Carlos

    2013-01-01

    OBJECTIVES: There are no data in the literature with regard to the acute effects of different styles of music on the geometric indices of heart rate variability. In this study, we evaluated the acute effects of relaxant baroque and excitatory heavy metal music on the geometric indices of heart rate variability in women. METHODS: We conducted this study in 21 healthy women ranging in age from 18 to 35 years. We excluded persons with previous experience with musical instruments and persons who had an affinity for the song styles. We evaluated two groups: Group 1 (n = 21), who were exposed to relaxant classical baroque musical and excitatory heavy metal auditory stimulation; and Group 2 (n = 19), who were exposed to both styles of music and white noise auditory stimulation. Using earphones, the volunteers were exposed to baroque or heavy metal music for five minutes. After the first music exposure to baroque or heavy metal music, they remained at rest for five minutes; subsequently, they were re-exposed to the opposite music (70-80 dB). A different group of women were exposed to the same music styles plus white noise auditory stimulation (90 dB). The sequence of the songs was randomized for each individual. We analyzed the following indices: triangular index, triangular interpolation of RR intervals and Poincaré plot (standard deviation of instantaneous beat-by-beat variability, standard deviation of the long-term RR interval, standard deviation of instantaneous beat-by-beat variability and standard deviation of the long-term RR interval ratio), low frequency, high frequency, low frequency/high frequency ratio, standard deviation of all the normal RR intervals, root-mean square of differences between the adjacent normal RR intervals and the percentage of adjacent RR intervals with a difference of duration greater than 50 ms. Heart rate variability was recorded at rest for 10 minutes. RESULTS: The triangular index and the standard deviation of the long-term RR interval indices were reduced during exposure to both music styles in the first group and tended to decrease in the second group whereas the white noise exposure decreased the high frequency index. We observed no changes regarding the triangular interpolation of RR intervals, standard deviation of instantaneous beat-by-beat variability and standard deviation of instantaneous beat-by-beat variability/standard deviation in the long-term RR interval ratio. CONCLUSION: We suggest that relaxant baroque and excitatory heavy metal music slightly decrease global heart rate variability because of the equivalent sound level. PMID:23917660

  4. USL/DBMS NASA/PC R and D project C programming standards

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Moreau, Dennis R.

    1984-01-01

    A set of programming standards intended to promote reliability, readability, and portability of C programs written for PC research and development projects is established. These standards must be adhered to except where reasons for deviation are clearly identified and approved by the PC team. Any approved deviation from these standards must also be clearly documented in the pertinent source code.

  5. Standard deviation index for stimulated Brillouin scattering suppression with different homogeneities.

    PubMed

    Ran, Yang; Su, Rongtao; Ma, Pengfei; Wang, Xiaolin; Zhou, Pu; Si, Lei

    2016-05-10

    We present a new quantitative index of standard deviation to measure the homogeneity of spectral lines in a fiber amplifier system so as to find the relation between the stimulated Brillouin scattering (SBS) threshold and the homogeneity of the corresponding spectral lines. A theoretical model is built and a simulation framework has been established to estimate the SBS threshold when input spectra with different homogeneities are set. In our experiment, by setting the phase modulation voltage to a constant value and the modulation frequency to different values, spectral lines with different homogeneities can be obtained. The experimental results show that the SBS threshold is negatively correlated with the standard deviation of the modulated spectrum, which is in good agreement with the theoretical results. When the phase modulation voltage is confined to 10 V and the modulation frequency is set to 80 MHz, the standard deviation of the modulated spectrum equals 0.0051, which is the lowest value in our experiment. Thus, at this point, the highest SBS threshold is achieved. This standard deviation can be a good quantitative index for evaluating the power scaling potential of a fiber amplifier system and serves as a design guideline for better SBS suppression.
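
    A rough Python sketch of one way such a standard-deviation index could be computed for a sinusoidally phase-modulated line, whose sidebands carry powers proportional to J_n(beta)^2 by the Jacobi-Anger expansion; the paper's exact definition, normalization, and experimental spectra are assumptions not reproduced here:

        import numpy as np
        from scipy.special import jv

        def spectral_std_index(beta, n_max=20):
            """Standard deviation of the normalized sideband powers of a sinusoidally
            phase-modulated field with modulation depth beta."""
            orders = np.arange(-n_max, n_max + 1)
            power = jv(orders, beta) ** 2
            power = power / power.sum()          # normalize the total power to one
            return float(power.std())

        # A flatter (more homogeneous) spectrum gives a smaller index, which the record above
        # associates with a higher SBS threshold.
        for beta in (0.5, 2.0, 5.0):
            print(beta, spectral_std_index(beta))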

  6. Non-linear matter power spectrum covariance matrix errors and cosmological parameter uncertainties

    NASA Astrophysics Data System (ADS)

    Blot, L.; Corasaniti, P. S.; Amendola, L.; Kitching, T. D.

    2016-06-01

    The covariance of the matter power spectrum is a key element of the analysis of galaxy clustering data. Independent realizations of observational measurements can be used to sample the covariance; nevertheless, statistical sampling errors will propagate into the cosmological parameter inference, potentially limiting the capabilities of the upcoming generation of galaxy surveys. The impact of these errors as a function of the number of realizations has been previously evaluated for Gaussian distributed data. However, non-linearities in the late-time clustering of matter cause departures from Gaussian statistics. Here, we address the impact of non-Gaussian errors on the sample covariance and precision matrix errors using a large ensemble of N-body simulations. In the range of modes where finite volume effects are negligible (0.1 ≲ k [h Mpc-1] ≲ 1.2), we find deviations of the variance of the sample covariance with respect to Gaussian predictions above ∼10 per cent at k > 0.3 h Mpc-1. Over the entire range these reduce to about ∼5 per cent for the precision matrix. Finally, we perform a Fisher analysis to estimate the effect of covariance errors on the cosmological parameter constraints. In particular, assuming Euclid-like survey characteristics we find that a number of independent realizations larger than 5000 is necessary to reduce the contribution of sampling errors to the cosmological parameter uncertainties to the subpercent level. We also show that restricting the analysis to large scales k ≲ 0.2 h Mpc-1 results in a considerable loss in constraining power, while using the linear covariance to include smaller scales leads to an underestimation of the errors on the cosmological parameters.
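
    As an illustration of how sampling noise enters the precision matrix, here is a small Python sketch using the widely quoted Hartlap debiasing factor (an assumption for this sketch, not necessarily the estimator adopted in the paper), with Gaussian realizations standing in for the N-body measurements:

        import numpy as np

        rng = np.random.default_rng(2)
        n_bins, n_real = 20, 100                        # power-spectrum bins, realizations
        true_cov = np.diag(np.linspace(1.0, 2.0, n_bins))
        data = rng.multivariate_normal(np.zeros(n_bins), true_cov, size=n_real)

        sample_cov = np.cov(data, rowvar=False)         # sample covariance from the realizations
        hartlap = (n_real - n_bins - 2) / (n_real - 1)  # debiasing factor for the inverse
        precision = hartlap * np.linalg.inv(sample_cov)

        true_prec_diag = 1.0 / np.diag(true_cov)
        frac_err = np.abs(np.diag(precision) - true_prec_diag) / true_prec_diag
        print("mean fractional error on the precision diagonal: %.3f" % frac_err.mean())
        # Increasing n_real (e.g., toward the >5000 realizations quoted above) drives this down.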

  7. Comparing the structure of an emerging market with a mature one under global perturbation

    NASA Astrophysics Data System (ADS)

    Namaki, A.; Jafari, G. R.; Raei, R.

    2011-09-01

    In this paper we investigate the Tehran stock exchange (TSE) and Dow Jones Industrial Average (DJIA) in terms of perturbed correlation matrices. To perturb a stock market, there are two methods, namely local and global perturbation. In the local method, we replace a correlation coefficient of the cross-correlation matrix with one calculated from two Gaussian-distributed time series, whereas in the global method, we reconstruct the correlation matrix after replacing the original return series with Gaussian-distributed time series. The local perturbation serves only as a technical check. We analyze these markets through two statistical approaches, random matrix theory (RMT) and the correlation coefficient distribution. By using RMT, we find that the largest eigenvalue corresponds to an influence that is common to all stocks, and this eigenvalue has a peak during financial shocks. We find there are a few correlated stocks that account for the essential robustness of the stock market, but we see that by replacing these return time series with Gaussian-distributed time series, the mean values of correlation coefficients, the largest eigenvalues of the stock markets and the fraction of eigenvalues that deviate from the RMT prediction fall sharply in both markets. By comparing these two markets, we can see that the DJIA is more sensitive to global perturbations. These findings are crucial for risk management and portfolio selection.
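
    A minimal Python sketch of the global-perturbation test described above, with synthetic correlated returns standing in for the TSE/DJIA data: the correlation matrix is rebuilt from independent Gaussian series and its largest eigenvalue is compared with the original one and with the Marchenko-Pastur upper edge predicted by random matrix theory.

        import numpy as np

        rng = np.random.default_rng(3)
        n_stocks, n_days = 50, 1000
        market = rng.normal(size=n_days)                           # common "market mode"
        returns = 0.4 * market[:, None] + rng.normal(size=(n_days, n_stocks))

        def largest_eigenvalue(r):
            return float(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)).max())

        q = n_days / n_stocks
        lambda_plus = (1 + 1 / np.sqrt(q)) ** 2                    # Marchenko-Pastur upper edge

        gaussian_returns = rng.normal(size=(n_days, n_stocks))     # global perturbation
        print("largest eigenvalue, original :", largest_eigenvalue(returns))
        print("largest eigenvalue, perturbed:", largest_eigenvalue(gaussian_returns))
        print("RMT upper bound lambda_plus  :", lambda_plus)
        # The market-mode eigenvalue collapses toward the RMT bound once the real return
        # series are replaced by independent Gaussian series.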

  8. Correlations in magnitude series to assess nonlinearities: Application to multifractal models and heartbeat fluctuations.

    PubMed

    Bernaola-Galván, Pedro A; Gómez-Extremera, Manuel; Romance, A Ramón; Carpena, Pedro

    2017-09-01

    The correlation properties of the magnitude of a time series are associated with nonlinear and multifractal properties and have been applied in a great variety of fields. Here we have obtained the analytical expression of the autocorrelation of the magnitude series (C_{|x|}) of a linear Gaussian noise as a function of its autocorrelation (C_{x}). For both models and natural signals, the deviation of C_{|x|} from its expectation in linear Gaussian noises can be used as an index of nonlinearity that can be applied to relatively short records and does not require the presence of scaling in the time series under study. In a model of an artificial Gaussian multifractal signal we use this approach to analyze the relation between nonlinearity and multifractality and show that the former implies the latter but the reverse is not true. We also apply this approach to analyze experimental data: heart-beat records during rest and moderate exercise. For each individual subject, we observe higher nonlinearities during rest. This behavior is also observed on average for the analyzed set of 10 semiprofessional soccer players. This result agrees with the fact that other measures of complexity are dramatically reduced during exercise and can shed light on its relationship with the withdrawal of parasympathetic tone and/or the activation of sympathetic activity during physical activity.
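
    A short Python check of the idea, using the standard bivariate-normal result for the magnitude autocorrelation of a zero-mean, unit-variance Gaussian process with autocorrelation rho, C_{|x|}(rho) = [(2/pi)(sqrt(1-rho^2) + rho*arcsin(rho)) - 2/pi] / (1 - 2/pi); whether this is exactly the expression derived in the paper is an assumption, and the deviation of the empirical C_{|x|} from it plays the role of the nonlinearity index described above:

        import numpy as np

        def magnitude_autocorr_gaussian(rho):
            """Closed-form autocorrelation of |x| for a linear Gaussian process whose
            autocorrelation at the chosen lag is rho (standard bivariate-normal result)."""
            rho = np.asarray(rho, dtype=float)
            num = (2 / np.pi) * (np.sqrt(1 - rho ** 2) + rho * np.arcsin(rho)) - 2 / np.pi
            return num / (1 - 2 / np.pi)

        def empirical_autocorr(x, lag):
            x = (x - x.mean()) / x.std()
            return float(np.mean(x[:-lag] * x[lag:]))

        # AR(1) Gaussian noise: the empirical C_|x| should track the closed form; a clear
        # excess would indicate nonlinearity in the sense used above.
        rng = np.random.default_rng(4)
        phi, n = 0.7, 200000
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + rng.normal()
        lag = 3
        rho = empirical_autocorr(x, lag)
        print("C_x            :", rho)
        print("C_|x| empirical:", empirical_autocorr(np.abs(x), lag))
        print("C_|x| Gaussian :", magnitude_autocorr_gaussian(rho))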

  9. BINGO: a code for the efficient computation of the scalar bi-spectrum

    NASA Astrophysics Data System (ADS)

    Hazra, Dhiraj Kumar; Sriramkumar, L.; Martin, Jérôme

    2013-05-01

    We present a new and accurate Fortran code, the BI-spectra and Non-Gaussianity Operator (BINGO), for the efficient numerical computation of the scalar bi-spectrum and the non-Gaussianity parameter fNL in single field inflationary models involving the canonical scalar field. The code can calculate all the different contributions to the bi-spectrum and the parameter fNL for an arbitrary triangular configuration of the wavevectors. Focusing firstly on the equilateral limit, we illustrate the accuracy of BINGO by comparing the results from the code with the spectral dependence of the bi-spectrum expected in power law inflation. Then, considering an arbitrary triangular configuration, we contrast the numerical results with the analytical expression available in the slow roll limit, for, say, the case of the conventional quadratic potential. Considering a non-trivial scenario involving deviations from slow roll, we compare the results from the code with the analytical results that have recently been obtained in the case of the Starobinsky model in the equilateral limit. As an immediate application, we utilize BINGO to examine the power of the non-Gaussianity parameter fNL to discriminate between various inflationary models that admit departures from slow roll and lead to similar features in the scalar power spectrum. We close with a summary and discussion on the implications of the results we obtain.

  10. Correlations in magnitude series to assess nonlinearities: Application to multifractal models and heartbeat fluctuations

    NASA Astrophysics Data System (ADS)

    Bernaola-Galván, Pedro A.; Gómez-Extremera, Manuel; Romance, A. Ramón; Carpena, Pedro

    2017-09-01

    The correlation properties of the magnitude of a time series are associated with nonlinear and multifractal properties and have been applied in a great variety of fields. Here we have obtained the analytical expression of the autocorrelation of the magnitude series (C|x|) of a linear Gaussian noise as a function of its autocorrelation (Cx). For both models and natural signals, the deviation of C|x| from its expectation in linear Gaussian noises can be used as an index of nonlinearity that can be applied to relatively short records and does not require the presence of scaling in the time series under study. In a model of an artificial Gaussian multifractal signal we use this approach to analyze the relation between nonlinearity and multifractality and show that the former implies the latter but the reverse is not true. We also apply this approach to analyze experimental data: heart-beat records during rest and moderate exercise. For each individual subject, we observe higher nonlinearities during rest. This behavior is also observed on average for the analyzed set of 10 semiprofessional soccer players. This result agrees with the fact that other measures of complexity are dramatically reduced during exercise and can shed light on its relationship with the withdrawal of parasympathetic tone and/or the activation of sympathetic activity during physical activity.

  11. figure1.nc

    EPA Pesticide Factsheets

    NetCDF file of the SREF standard deviation of wind speed and direction that was used to inject variability in the FDDA input. The variable U_NDG_OLD contains the standard deviation of wind speed (m/s); the variable V_NDG_OLD contains the standard deviation of wind direction (deg). This dataset is associated with the following publication: Gilliam, R., C. Hogrefe, J. Godowitch, S. Napelenok, R. Mathur, and S.T. Rao. Impact of inherent meteorology uncertainty on air quality model predictions. JOURNAL OF GEOPHYSICAL RESEARCH-ATMOSPHERES. American Geophysical Union, Washington, DC, USA, 120(23): 12,259-12,280, (2015).
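
    A small Python sketch (assuming the netCDF4 package and local access to figure1.nc) showing how the two variables named in the description could be read:

        from netCDF4 import Dataset

        # U_NDG_OLD: standard deviation of wind speed (m/s)
        # V_NDG_OLD: standard deviation of wind direction (deg)
        with Dataset("figure1.nc") as nc:
            sd_speed = nc.variables["U_NDG_OLD"][:]
            sd_direction = nc.variables["V_NDG_OLD"][:]
            print("wind-speed std (m/s)    :", sd_speed.shape, float(sd_speed.mean()))
            print("wind-direction std (deg):", sd_direction.shape, float(sd_direction.mean()))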

  12. Goos-Hänchen and Imbert-Fedorov shifts for astigmatic Gaussian beams

    NASA Astrophysics Data System (ADS)

    Ornigotti, Marco; Aiello, Andrea

    2015-06-01

    In this work we investigate the role of the beam astigmatism in the Goos-Hänchen and Imbert-Fedorov shift. As a case study, we consider a Gaussian beam focused by an astigmatic lens and we calculate explicitly the corrections to the standard formulas for beam shifts due to the astigmatism induced by the lens. Our results show that the different focusing in the longitudinal and transverse direction introduced by an astigmatic lens may enhance the angular part of the shift.

  13. Incorporating Skew into RMS Surface Roughness Probability Distribution

    NASA Technical Reports Server (NTRS)

    Stahl, Mark T.; Stahl, H. Philip.

    2013-01-01

    The standard treatment of RMS surface roughness data is the application of a Gaussian probability distribution. This handling of surface roughness ignores the skew present in the surface and overestimates the most probable RMS of the surface, the mode. Using experimental data, we confirm that the Gaussian distribution overestimates the mode and that application of an asymmetric distribution provides a better fit. Implementing the proposed asymmetric distribution into the optical manufacturing process would reduce the polishing time required to meet surface roughness specifications.
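
    A hedged Python sketch of the comparison, fitting both a Gaussian and a skew-normal distribution (chosen here as one convenient asymmetric form; the record does not name the distribution used) to synthetic RMS roughness values and comparing the implied modes:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        # Synthetic right-skewed RMS roughness values (nm) standing in for measurements.
        roughness = rng.lognormal(mean=np.log(2.0), sigma=0.35, size=500)

        mu, sigma = stats.norm.fit(roughness)            # symmetric Gaussian fit (mode = mean)
        a, loc, scale = stats.skewnorm.fit(roughness)    # asymmetric fit

        grid = np.linspace(roughness.min(), roughness.max(), 2000)
        mode_skew = grid[np.argmax(stats.skewnorm.pdf(grid, a, loc, scale))]
        print("Gaussian mode (= mean):", mu)
        print("skew-normal mode      :", mode_skew)
        # For right-skewed data the Gaussian fit places the mode higher than the asymmetric
        # fit, i.e., it overestimates the most probable RMS roughness.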

  14. One-electron propagation in Fermi, Pasta, Ulam disordered chains with Gaussian acoustic pulse pumping

    NASA Astrophysics Data System (ADS)

    Silva, L. D. Da; Dos Santos, J. L. L.; Ranciaro Neto, A.; Sales, M. O.; de Moura, F. A. B. F.

    In this work, we consider a single electron moving on a Fermi, Pasta, Ulam disordered chain under the effect of electron-phonon interaction and Gaussian acoustic pulse pumping. We describe the electronic dynamics using the quantum mechanical formalism and the nonlinear atomic vibrations using standard classical physics. Solving the numerical equations for the coupled quantum/classical behavior of this system, we study the electronic propagation properties. Our calculations suggest that the acoustic pumping, associated with the electron-lattice interaction, promotes sub-diffusive electronic dynamics.

  15. Demonstration of coherent-state discrimination using a displacement-controlled photon-number-resolving detector.

    PubMed

    Wittmann, Christoffer; Andersen, Ulrik L; Takeoka, Masahiro; Sych, Denis; Leuchs, Gerd

    2010-03-12

    We experimentally demonstrate a new measurement scheme for the discrimination of two coherent states. The measurement scheme is based on a displacement operation followed by a photon-number-resolving detector, and we show that it outperforms the standard homodyne detector which we, in addition, prove to be optimal within all Gaussian operations including conditional dynamics. We also show that the non-Gaussian detector is superior to the homodyne detector in a continuous variable quantum key distribution scheme.

  16. Multipartite entanglement in three-mode Gaussian states of continuous-variable systems: Quantification, sharing structure, and decoherence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adesso, Gerardo; Centre for Quantum Computation, DAMTP, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA; Serafini, Alessio

    2006-03-15

    We present a complete analysis of the multipartite entanglement of three-mode Gaussian states of continuous-variable systems. We derive standard forms which characterize the covariance matrix of pure and mixed three-mode Gaussian states up to local unitary operations, showing that the local entropies of pure Gaussian states are bound to fulfill a relationship which is stricter than the general Araki-Lieb inequality. Quantum correlations can be quantified by a proper convex roof extension of the squared logarithmic negativity, the continuous-variable tangle, or contangle. We review and elucidate in detail the proof that in multimode Gaussian states the contangle satisfies a monogamy inequality constraint [G. Adesso and F. Illuminati, New J. Phys. 8, 15 (2006)]. The residual contangle, emerging from the monogamy inequality, is an entanglement monotone under Gaussian local operations and classical communications and defines a measure of genuine tripartite entanglement. We determine the analytical expression of the residual contangle for arbitrary pure three-mode Gaussian states and study in detail the distribution of quantum correlations in such states. This analysis shows that pure, symmetric states allow for a promiscuous entanglement sharing, having both maximum tripartite entanglement and maximum couplewise entanglement between any pair of modes. We thus name these states GHZ/W states of continuous-variable systems because they are simultaneously the continuous-variable counterparts of both the GHZ and the W states of three qubits. We finally consider the effect of decoherence on three-mode Gaussian states, studying the decay of the residual contangle. The GHZ/W states are shown to be maximally robust against losses and thermal noise.

  17. Intermittent Anisotropic Turbulence Detected by THEMIS in the Magnetosheath

    NASA Astrophysics Data System (ADS)

    Macek, W. M.; Wawrzaszek, A.; Kucharuk, B.; Sibeck, D. G.

    2017-12-01

    Following our previous study of Time History of Events and Macroscale Interactions during Substorms (THEMIS) data, we consider intermittent turbulence in the magnetosheath under various conditions of the magnetized plasma behind the Earth's bow shock and now also near the magnetopause. Namely, we look at the fluctuations of the components of the Elsässer variables in the plane perpendicular to the scale-dependent background magnetic fields and along the local average ambient magnetic fields. We have shown that Alfvén fluctuations often exhibit strong anisotropic non-gyrotropic turbulent intermittent behavior resulting in substantial deviations of the probability density functions from a normal Gaussian distribution with a large kurtosis. In particular, for very high Alfvénic Mach numbers and high plasma beta, we have clear anisotropy with non-Gaussian statistics in the transverse directions. However, along the magnetic field, the kurtosis is small and the plasma is close to equilibrium. On the other hand, intermittency becomes weaker for moderate Alfvén Mach numbers and lower values of the plasma parameter beta. It also seems that the degree of intermittency of turbulence for the outgoing fluctuations propagating relative to the ambient magnetic field is usually similar to that of the ingoing fluctuations, which is in agreement with approximate equipartition of energy between these oppositely propagating Alfvén waves. We believe that the different characteristics of this intermittent anisotropic turbulent behavior in various regions of space and astrophysical plasmas can help identify nonlinear structures responsible for deviations of the plasma from equilibrium.
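
    An illustrative Python sketch of the intermittency diagnostic implied above, computing the kurtosis of scale-dependent increments of a fluctuation component (synthetic signals here; the actual THEMIS Elsässer-variable processing is not reproduced):

        import numpy as np
        from scipy.stats import kurtosis

        rng = np.random.default_rng(6)
        n = 100000
        gaussian_signal = rng.normal(size=n)
        # Crude intermittent surrogate: Gaussian noise with sparse, strong bursts.
        intermittent_signal = gaussian_signal.copy()
        burst_idx = rng.choice(n, size=n // 200, replace=False)
        intermittent_signal[burst_idx] *= 8.0

        def increment_kurtosis(x, lag):
            """Kurtosis of increments at a given lag; the Gaussian reference value is 3."""
            dx = x[lag:] - x[:-lag]
            return float(kurtosis(dx, fisher=False))

        for lag in (1, 10, 100):
            print(lag, increment_kurtosis(gaussian_signal, lag),
                  increment_kurtosis(intermittent_signal, lag))
        # Kurtosis well above 3, especially at small lags, signals the non-Gaussian,
        # intermittent behavior reported for the transverse components above.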

  18. Newton to Einstein — dust to dust

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kopp, Michael; Uhlemann, Cora; Haugg, Thomas, E-mail: michael.kopp@physik.lmu.de, E-mail: cora.uhlemann@physik.lmu.de, E-mail: thomas.haugg@physik.lmu.de

    We investigate the relation between the standard Newtonian equations for a pressureless fluid (dust) and the Einstein equations in a double expansion in small scales and small metric perturbations. We find that parts of the Einstein equations can be rewritten as a closed system of two coupled differential equations for the scalar and transverse vector metric perturbations in Poisson gauge. It is then shown that this system is equivalent to the Newtonian system of continuity and Euler equations. Brustein and Riotto (2011) conjectured the equivalence of these systems in the special case where vector perturbations were neglected. We show that this approach does not lead to the Euler equation but to a physically different one with large deviations already in the 1-loop power spectrum. We show that it is also possible to consistently set to zero the vector perturbations which strongly constrains the allowed initial conditions, in particular excluding Gaussian ones such that inclusion of vector perturbations is inevitable in the cosmological context. In addition we derive nonlinear equations for the gravitational slip and tensor perturbations, thereby extending Newtonian gravity of a dust fluid to account for nonlinear light propagation effects and dust-induced gravitational waves.

  19. Search for new phenomena in high-mass final states with a photon and a jet from pp collisions at √{s} = 13 TeV with the ATLAS detector

    NASA Astrophysics Data System (ADS)

    Aaboud, M.; Aad, G.; Abbott, B.; Abdinov, O.; Abeloos, B.; Abidi, S. H.; AbouZeid, O. S.; Abraham, N. L.; Abramowicz, H.; Abreu, H.; Abreu, R.; Abulaiti, Y.; Acharya, B. S.; Adachi, S.; Adamczyk, L.; Adelman, J.; Adersberger, M.; Adye, T.; Affolder, A. A.; Afik, Y.; Agatonovic-Jovin, T.; Agheorghiesei, C.; Aguilar-Saavedra, J. A.; Ahlen, S. P.; Ahmadov, F.; Aielli, G.; Akatsuka, S.; Akerstedt, H.; Åkesson, T. P. A.; Akilli, E.; Akimov, A. V.; Alberghi, G. L.; Albert, J.; Albicocco, P.; Alconada Verzini, M. J.; Alderweireldt, S. C.; Aleksa, M.; Aleksandrov, I. N.; Alexa, C.; Alexander, G.; Alexopoulos, T.; Alhroob, M.; Ali, B.; Aliev, M.; Alimonti, G.; Alison, J.; Alkire, S. P.; Allbrooke, B. M. M.; Allen, B. W.; Allport, P. P.; Aloisio, A.; Alonso, A.; Alonso, F.; Alpigiani, C.; Alshehri, A. A.; Alstaty, M. I.; Alvarez Gonzalez, B.; Álvarez Piqueras, D.; Alviggi, M. G.; Amadio, B. T.; Amaral Coutinho, Y.; Amelung, C.; Amidei, D.; Amor Dos Santos, S. P.; Amoroso, S.; Amundsen, G.; Anastopoulos, C.; Ancu, L. S.; Andari, N.; Andeen, T.; Anders, C. F.; Anders, J. K.; Anderson, K. J.; Andreazza, A.; Andrei, V.; Angelidakis, S.; Angelozzi, I.; Angerami, A.; Anisenkov, A. V.; Anjos, N.; Annovi, A.; Antel, C.; Antonelli, M.; Antonov, A.; Antrim, D. J.; Anulli, F.; Aoki, M.; Aperio Bella, L.; Arabidze, G.; Arai, Y.; Araque, J. P.; Araujo Ferraz, V.; Arce, A. T. H.; Ardell, R. E.; Arduh, F. A.; Arguin, J.-F.; Argyropoulos, S.; Arik, M.; Armbruster, A. J.; Armitage, L. J.; Arnaez, O.; Arnold, H.; Arratia, M.; Arslan, O.; Artamonov, A.; Artoni, G.; Artz, S.; Asai, S.; Asbah, N.; Ashkenazi, A.; Asquith, L.; Assamagan, K.; Astalos, R.; Atkinson, M.; Atlay, N. B.; Augsten, K.; Avolio, G.; Axen, B.; Ayoub, M. K.; Azuelos, G.; Baas, A. E.; Baca, M. J.; Bachacou, H.; Bachas, K.; Backes, M.; Bagnaia, P.; Bahmani, M.; Bahrasemani, H.; Baines, J. T.; Bajic, M.; Baker, O. K.; Bakker, P. J.; Baldin, E. M.; Balek, P.; Balli, F.; Balunas, W. K.; Banas, E.; Bandyopadhyay, A.; Banerjee, Sw.; Bannoura, A. A. E.; Barak, L.; Barberio, E. L.; Barberis, D.; Barbero, M.; Barillari, T.; Barisits, M.-S.; Barkeloo, J. T.; Barklow, T.; Barlow, N.; Barnes, S. L.; Barnett, B. M.; Barnett, R. M.; Barnovska-Blenessy, Z.; Baroncelli, A.; Barone, G.; Barr, A. J.; Barranco Navarro, L.; Barreiro, F.; Barreiro Guimarães da Costa, J.; Bartoldus, R.; Barton, A. E.; Bartos, P.; Basalaev, A.; Bassalat, A.; Bates, R. L.; Batista, S. J.; Batley, J. R.; Battaglia, M.; Bauce, M.; Bauer, F.; Bawa, H. S.; Beacham, J. B.; Beattie, M. D.; Beau, T.; Beauchemin, P. H.; Bechtle, P.; Beck, H. P.; Beck, H. C.; Becker, K.; Becker, M.; Becot, C.; Beddall, A. J.; Beddall, A.; Bednyakov, V. A.; Bedognetti, M.; Bee, C. P.; Beermann, T. A.; Begalli, M.; Begel, M.; Behr, J. K.; Bell, A. S.; Bella, G.; Bellagamba, L.; Bellerive, A.; Bellomo, M.; Belotskiy, K.; Beltramello, O.; Belyaev, N. L.; Benary, O.; Benchekroun, D.; Bender, M.; Benekos, N.; Benhammou, Y.; Benhar Noccioli, E.; Benitez, J.; Benjamin, D. P.; Benoit, M.; Bensinger, J. R.; Bentvelsen, S.; Beresford, L.; Beretta, M.; Berge, D.; Bergeaas Kuutmann, E.; Berger, N.; Beringer, J.; Berlendis, S.; Bernard, N. R.; Bernardi, G.; Bernius, C.; Bernlochner, F. U.; Berry, T.; Berta, P.; Bertella, C.; Bertoli, G.; Bertram, I. A.; Bertsche, C.; Bertsche, D.; Besjes, G. J.; Bessidskaia Bylund, O.; Bessner, M.; Besson, N.; Bethani, A.; Bethke, S.; Bevan, A. J.; Beyer, J.; Bianchi, R. M.; Biebel, O.; Biedermann, D.; Bielski, R.; Bierwagen, K.; Biesuz, N. V.; Biglietti, M.; Billoud, T. R. 
V.; Bilokon, H.; Bindi, M.; Bingul, A.; Bini, C.; Biondi, S.; Bisanz, T.; Bittrich, C.; Bjergaard, D. M.; Black, J. E.; Black, K. M.; Blair, R. E.; Blazek, T.; Bloch, I.; Blocker, C.; Blue, A.; Blum, W.; Blumenschein, U.; Blunier, S.; Bobbink, G. J.; Bobrovnikov, V. S.; Bocchetta, S. S.; Bocci, A.; Bock, C.; Boehler, M.; Boerner, D.; Bogavac, D.; Bogdanchikov, A. G.; Bohm, C.; Boisvert, V.; Bokan, P.; Bold, T.; Boldyrev, A. S.; Bolz, A. E.; Bomben, M.; Bona, M.; Boonekamp, M.; Borisov, A.; Borissov, G.; Bortfeldt, J.; Bortoletto, D.; Bortolotto, V.; Boscherini, D.; Bosman, M.; Bossio Sola, J. D.; Boudreau, J.; Bouffard, J.; Bouhova-Thacker, E. V.; Boumediene, D.; Bourdarios, C.; Boutle, S. K.; Boveia, A.; Boyd, J.; Boyko, I. R.; Bozson, A. J.; Bracinik, J.; Brandt, A.; Brandt, G.; Brandt, O.; Braren, F.; Bratzler, U.; Brau, B.; Brau, J. E.; Breaden Madden, W. D.; Brendlinger, K.; Brennan, A. J.; Brenner, L.; Brenner, R.; Bressler, S.; Briglin, D. L.; Bristow, T. M.; Britton, D.; Britzger, D.; Brochu, F. M.; Brock, I.; Brock, R.; Brooijmans, G.; Brooks, T.; Brooks, W. K.; Brosamer, J.; Brost, E.; Broughton, J. H.; Bruckman de Renstrom, P. A.; Bruncko, D.; Bruni, A.; Bruni, G.; Bruni, L. S.; Bruno, S.; Brunt, BH; Bruschi, M.; Bruscino, N.; Bryant, P.; Bryngemark, L.; Buanes, T.; Buat, Q.; Buchholz, P.; Buckley, A. G.; Budagov, I. A.; Buehrer, F.; Bugge, M. K.; Bulekov, O.; Bullock, D.; Burch, T. J.; Burdin, S.; Burgard, C. D.; Burger, A. M.; Burghgrave, B.; Burka, K.; Burke, S.; Burmeister, I.; Burr, J. T. P.; Busato, E.; Büscher, D.; Büscher, V.; Bussey, P.; Butler, J. M.; Buttar, C. M.; Butterworth, J. M.; Butti, P.; Buttinger, W.; Buzatu, A.; Buzykaev, A. R.; Cabrera Urbán, S.; Caforio, D.; Cai, H.; Cairo, V. M.; Cakir, O.; Calace, N.; Calafiura, P.; Calandri, A.; Calderini, G.; Calfayan, P.; Callea, G.; Caloba, L. P.; Calvente Lopez, S.; Calvet, D.; Calvet, S.; Calvet, T. P.; Camacho Toro, R.; Camarda, S.; Camarri, P.; Cameron, D.; Caminal Armadans, R.; Camincher, C.; Campana, S.; Campanelli, M.; Camplani, A.; Campoverde, A.; Canale, V.; Cano Bret, M.; Cantero, J.; Cao, T.; Capeans Garrido, M. D. M.; Caprini, I.; Caprini, M.; Capua, M.; Carbone, R. M.; Cardarelli, R.; Cardillo, F.; Carli, I.; Carli, T.; Carlino, G.; Carlson, B. T.; Carminati, L.; Carney, R. M. D.; Caron, S.; Carquin, E.; Carrá, S.; Carrillo-Montoya, G. D.; Casadei, D.; Casado, M. P.; Casolino, M.; Casper, D. W.; Castelijn, R.; Castillo Gimenez, V.; Castro, N. F.; Catinaccio, A.; Catmore, J. R.; Cattai, A.; Caudron, J.; Cavaliere, V.; Cavallaro, E.; Cavalli, D.; Cavalli-Sforza, M.; Cavasinni, V.; Celebi, E.; Ceradini, F.; Cerda Alberich, L.; Cerqueira, A. S.; Cerri, A.; Cerrito, L.; Cerutti, F.; Cervelli, A.; Cetin, S. A.; Chafaq, A.; Chakraborty, D.; Chan, S. K.; Chan, W. S.; Chan, Y. L.; Chang, P.; Chapman, J. D.; Charlton, D. G.; Chau, C. C.; Chavez Barajas, C. A.; Che, S.; Cheatham, S.; Chegwidden, A.; Chekanov, S.; Chekulaev, S. V.; Chelkov, G. A.; Chelstowska, M. A.; Chen, C.; Chen, C.; Chen, H.; Chen, J.; Chen, S.; Chen, S.; Chen, X.; Chen, Y.; Cheng, H. C.; Cheng, H. J.; Cheplakov, A.; Cheremushkina, E.; Cherkaoui El Moursli, R.; Cheu, E.; Cheung, K.; Chevalier, L.; Chiarella, V.; Chiarelli, G.; Chiodini, G.; Chisholm, A. S.; Chitan, A.; Chiu, Y. H.; Chizhov, M. V.; Choi, K.; Chomont, A. R.; Chouridou, S.; Chow, Y. S.; Christodoulou, V.; Chu, M. C.; Chudoba, J.; Chuinard, A. J.; Chwastowski, J. J.; Chytka, L.; Ciftci, A. K.; Cinca, D.; Cindro, V.; Cioara, I. A.; Ciocio, A.; Cirotto, F.; Citron, Z. 
H.; Citterio, M.; Ciubancan, M.; Clark, A.; Clark, B. L.; Clark, M. R.; Clark, P. J.; Clarke, R. N.; Clement, C.; Coadou, Y.; Cobal, M.; Coccaro, A.; Cochran, J.; Colasurdo, L.; Cole, B.; Colijn, A. P.; Collot, J.; Colombo, T.; Conde Muiño, P.; Coniavitis, E.; Connell, S. H.; Connelly, I. A.; Constantinescu, S.; Conti, G.; Conventi, F.; Cooke, M.; Cooper-Sarkar, A. M.; Cormier, F.; Cormier, K. J. R.; Corradi, M.; Corriveau, F.; Cortes-Gonzalez, A.; Costa, G.; Costa, M. J.; Costanzo, D.; Cottin, G.; Cowan, G.; Cox, B. E.; Cranmer, K.; Crawley, S. J.; Creager, R. A.; Cree, G.; Crépé-Renaudin, S.; Crescioli, F.; Cribbs, W. A.; Cristinziani, M.; Croft, V.; Crosetti, G.; Cueto, A.; Cuhadar Donszelmann, T.; Cukierman, A. R.; Cummings, J.; Curatolo, M.; Cúth, J.; Czekierda, S.; Czodrowski, P.; D'amen, G.; D'Auria, S.; D'eramo, L.; D'Onofrio, M.; Da Cunha Sargedas De Sousa, M. J.; Da Via, C.; Dabrowski, W.; Dado, T.; Dai, T.; Dale, O.; Dallaire, F.; Dallapiccola, C.; Dam, M.; Dandoy, J. R.; Daneri, M. F.; Dang, N. P.; Daniells, A. C.; Dann, N. S.; Danninger, M.; Dano Hoffmann, M.; Dao, V.; Darbo, G.; Darmora, S.; Dassoulas, J.; Dattagupta, A.; Daubney, T.; Davey, W.; David, C.; Davidek, T.; Davis, D. R.; Davison, P.; Dawe, E.; Dawson, I.; De, K.; de Asmundis, R.; De Benedetti, A.; De Castro, S.; De Cecco, S.; De Groot, N.; de Jong, P.; De la Torre, H.; De Lorenzi, F.; De Maria, A.; De Pedis, D.; De Salvo, A.; De Sanctis, U.; De Santo, A.; De Vasconcelos Corga, K.; De Vivie De Regie, J. B.; Debbe, R.; Debenedetti, C.; Dedovich, D. V.; Dehghanian, N.; Deigaard, I.; Del Gaudio, M.; Del Peso, J.; Delgove, D.; Deliot, F.; Delitzsch, C. M.; Dell'Acqua, A.; Dell'Asta, L.; Dell'Orso, M.; Della Pietra, M.; della Volpe, D.; Delmastro, M.; Delporte, C.; Delsart, P. A.; DeMarco, D. A.; Demers, S.; Demichev, M.; Demilly, A.; Denisov, S. P.; Denysiuk, D.; Derendarz, D.; Derkaoui, J. E.; Derue, F.; Dervan, P.; Desch, K.; Deterre, C.; Dette, K.; Devesa, M. R.; Deviveiros, P. O.; Dewhurst, A.; Dhaliwal, S.; Di Bello, F. A.; Di Ciaccio, A.; Di Ciaccio, L.; Di Clemente, W. K.; Di Donato, C.; Di Girolamo, A.; Di Girolamo, B.; Di Micco, B.; Di Nardo, R.; Di Petrillo, K. F.; Di Simone, A.; Di Sipio, R.; Di Valentino, D.; Diaconu, C.; Diamond, M.; Dias, F. A.; Diaz, M. A.; Diehl, E. B.; Dietrich, J.; Díez Cornell, S.; Dimitrievska, A.; Dingfelder, J.; Dita, P.; Dita, S.; Dittus, F.; Djama, F.; Djobava, T.; Djuvsland, J. I.; do Vale, M. A. B.; Dobos, D.; Dobre, M.; Dodsworth, D.; Doglioni, C.; Dolejsi, J.; Dolezal, Z.; Donadelli, M.; Donati, S.; Dondero, P.; Donini, J.; Dopke, J.; Doria, A.; Dova, M. T.; Doyle, A. T.; Drechsler, E.; Dris, M.; Du, Y.; Duarte-Campderros, J.; Dubreuil, A.; Duchovni, E.; Duckeck, G.; Ducourthial, A.; Ducu, O. A.; Duda, D.; Dudarev, A.; Dudder, A. Chr.; Duffield, E. M.; Duflot, L.; Dührssen, M.; Dulsen, C.; Dumancic, M.; Dumitriu, A. E.; Duncan, A. K.; Dunford, M.; Duperrin, A.; Duran Yildiz, H.; Düren, M.; Durglishvili, A.; Duschinger, D.; Dutta, B.; Duvnjak, D.; Dyndal, M.; Dziedzic, B. S.; Eckardt, C.; Ecker, K. M.; Edgar, R. C.; Eifert, T.; Eigen, G.; Einsweiler, K.; Ekelof, T.; El Kacimi, M.; El Kosseifi, R.; Ellajosyula, V.; Ellert, M.; Elles, S.; Ellinghaus, F.; Elliot, A. A.; Ellis, N.; Elmsheuser, J.; Elsing, M.; Emeliyanov, D.; Enari, Y.; Endner, O. C.; Ennis, J. S.; Epland, M. B.; Erdmann, J.; Ereditato, A.; Ernst, M.; Errede, S.; Escalier, M.; Escobar, C.; Esposito, B.; Estrada Pastor, O.; Etienvre, A. 
I.; Etzion, E.; Evans, H.; Ezhilov, A.; Ezzi, M.; Fabbri, F.; Fabbri, L.; Fabiani, V.; Facini, G.; Fakhrutdinov, R. M.; Falciano, S.; Falla, R. J.; Faltova, J.; Fang, Y.; Fanti, M.; Farbin, A.; Farilla, A.; Farina, C.; Farina, E. M.; Farooque, T.; Farrell, S.; Farrington, S. M.; Farthouat, P.; Fassi, F.; Fassnacht, P.; Fassouliotis, D.; Giannelli, M. Faucci; Favareto, A.; Fawcett, W. J.; Fayard, L.; Fedin, O. L.; Fedorko, W.; Feigl, S.; Feligioni, L.; Feng, C.; Feng, E. J.; Fenton, M. J.; Fenyuk, A. B.; Feremenga, L.; Fernandez Martinez, P.; Fernandez Perez, S.; Ferrando, J.; Ferrari, A.; Ferrari, P.; Ferrari, R.; Ferreira de Lima, D. E.; Ferrer, A.; Ferrere, D.; Ferretti, C.; Fiedler, F.; Filipčič, A.; Filipuzzi, M.; Filthaut, F.; Fincke-Keeler, M.; Finelli, K. D.; Fiolhais, M. C. N.; Fiorini, L.; Fischer, A.; Fischer, C.; Fischer, J.; Fisher, W. C.; Flaschel, N.; Fleck, I.; Fleischmann, P.; Fletcher, R. R. M.; Flick, T.; Flierl, B. M.; Flores Castillo, L. R.; Flowerdew, M. J.; Forcolin, G. T.; Formica, A.; Förster, F. A.; Forti, A.; Foster, A. G.; Fournier, D.; Fox, H.; Fracchia, S.; Francavilla, P.; Franchini, M.; Franchino, S.; Francis, D.; Franconi, L.; Franklin, M.; Frate, M.; Fraternali, M.; Freeborn, D.; Fressard-Batraneanu, S. M.; Freund, B.; Froidevaux, D.; Frost, J. A.; Fukunaga, C.; Fusayasu, T.; Fuster, J.; Gabizon, O.; Gabrielli, A.; Gabrielli, A.; Gach, G. P.; Gadatsch, S.; Gadomski, S.; Gagliardi, G.; Gagnon, L. G.; Galea, C.; Galhardo, B.; Gallas, E. J.; Gallop, B. J.; Gallus, P.; Galster, G.; Gan, K. K.; Ganguly, S.; Gao, Y.; Gao, Y. S.; Garay Walls, F. M.; García, C.; García Navarro, J. E.; García Pascual, J. A.; Garcia-Sciveres, M.; Gardner, R. W.; Garelli, N.; Garonne, V.; Gascon Bravo, A.; Gasnikova, K.; Gatti, C.; Gaudiello, A.; Gaudio, G.; Gavrilenko, I. L.; Gay, C.; Gaycken, G.; Gazis, E. N.; Gee, C. N. P.; Geisen, J.; Geisen, M.; Geisler, M. P.; Gellerstedt, K.; Gemme, C.; Genest, M. H.; Geng, C.; Gentile, S.; Gentsos, C.; George, S.; Gerbaudo, D.; Geßner, G.; Ghasemi, S.; Ghneimat, M.; Giacobbe, B.; Giagu, S.; Giangiacomi, N.; Giannetti, P.; Gibson, S. M.; Gignac, M.; Gilchriese, M.; Gillberg, D.; Gilles, G.; Gingrich, D. M.; Giordani, M. P.; Giorgi, F. M.; Giraud, P. F.; Giromini, P.; Giugliarelli, G.; Giugni, D.; Giuli, F.; Giuliani, C.; Giulini, M.; Gjelsten, B. K.; Gkaitatzis, S.; Gkialas, I.; Gkougkousis, E. L.; Gkountoumis, P.; Gladilin, L. K.; Glasman, C.; Glatzer, J.; Glaysher, P. C. F.; Glazov, A.; Goblirsch-Kolb, M.; Godlewski, J.; Goldfarb, S.; Golling, T.; Golubkov, D.; Gomes, A.; Gonçalo, R.; Goncalves Gama, R.; Goncalves Pinto Firmino Da Costa, J.; Gonella, G.; Gonella, L.; Gongadze, A.; González de la Hoz, S.; Gonzalez-Sevilla, S.; Goossens, L.; Gorbounov, P. A.; Gordon, H. A.; Gorelov, I.; Gorini, B.; Gorini, E.; Gorišek, A.; Goshaw, A. T.; Gössling, C.; Gostkin, M. I.; Gottardo, C. A.; Goudet, C. R.; Goujdami, D.; Goussiou, A. G.; Govender, N.; Gozani, E.; Grabowska-Bold, I.; Gradin, P. O. J.; Gramling, J.; Gramstad, E.; Grancagnolo, S.; Gratchev, V.; Gravila, P. M.; Gray, C.; Gray, H. M.; Greenwood, Z. D.; Grefe, C.; Gregersen, K.; Gregor, I. M.; Grenier, P.; Grevtsov, K.; Griffiths, J.; Grillo, A. A.; Grimm, K.; Grinstein, S.; Gris, Ph.; Grivaz, J.-F.; Groh, S.; Gross, E.; Grosse-Knetter, J.; Grossi, G. C.; Grout, Z. 
J.; Grummer, A.; Guan, L.; Guan, W.; Guenther, J.; Guescini, F.; Guest, D.; Gueta, O.; Gui, B.; Guido, E.; Guillemin, T.; Guindon, S.; Gul, U.; Gumpert, C.; Guo, J.; Guo, W.; Guo, Y.; Gupta, R.; Gupta, S.; Gurbuz, S.; Gustavino, G.; Gutelman, B. J.; Gutierrez, P.; Gutierrez Ortiz, N. G.; Gutschow, C.; Guyot, C.; Guzik, M. P.; Gwenlan, C.; Gwilliam, C. B.; Haas, A.; Haber, C.; Hadavand, H. K.; Haddad, N.; Hadef, A.; Hageböck, S.; Hagihara, M.; Hakobyan, H.; Haleem, M.; Haley, J.; Halladjian, G.; Hallewell, G. D.; Hamacher, K.; Hamal, P.; Hamano, K.; Hamilton, A.; Hamity, G. N.; Hamnett, P. G.; Han, L.; Han, S.; Hanagaki, K.; Hanawa, K.; Hance, M.; Haney, B.; Hanke, P.; Hansen, J. B.; Hansen, J. D.; Hansen, M. C.; Hansen, P. H.; Hara, K.; Hard, A. S.; Harenberg, T.; Hariri, F.; Harkusha, S.; Harrison, P. F.; Hartmann, N. M.; Hasegawa, Y.; Hasib, A.; Hassani, S.; Haug, S.; Hauser, R.; Hauswald, L.; Havener, L. B.; Havranek, M.; Hawkes, C. M.; Hawkings, R. J.; Hayakawa, D.; Hayden, D.; Hays, C. P.; Hays, J. M.; Hayward, H. S.; Haywood, S. J.; Head, S. J.; Heck, T.; Hedberg, V.; Heelan, L.; Heer, S.; Heidegger, K. K.; Heim, S.; Heim, T.; Heinemann, B.; Heinrich, J. J.; Heinrich, L.; Heinz, C.; Hejbal, J.; Helary, L.; Held, A.; Hellman, S.; Helsens, C.; Henderson, R. C. W.; Heng, Y.; Henkelmann, S.; Henriques Correia, A. M.; Henrot-Versille, S.; Herbert, G. H.; Herde, H.; Herget, V.; Hernández Jiménez, Y.; Herr, H.; Herten, G.; Hertenberger, R.; Hervas, L.; Herwig, T. C.; Hesketh, G. G.; Hessey, N. P.; Hetherly, J. W.; Higashino, S.; Higón-Rodriguez, E.; Hildebrand, K.; Hill, E.; Hill, J. C.; Hiller, K. H.; Hillier, S. J.; Hils, M.; Hinchliffe, I.; Hirose, M.; Hirschbuehl, D.; Hiti, B.; Hladik, O.; Hoad, X.; Hobbs, J.; Hod, N.; Hodgkinson, M. C.; Hodgson, P.; Hoecker, A.; Hoeferkamp, M. R.; Hoenig, F.; Hohn, D.; Holmes, T. R.; Homann, M.; Honda, S.; Honda, T.; Hong, T. M.; Hooberman, B. H.; Hopkins, W. H.; Horii, Y.; Horton, A. J.; Hostachy, J.-Y.; Hostiuc, A.; Hou, S.; Hoummada, A.; Howarth, J.; Hoya, J.; Hrabovsky, M.; Hrdinka, J.; Hristova, I.; Hrivnac, J.; Hryn'ova, T.; Hrynevich, A.; Hsu, P. J.; Hsu, S.-C.; Hu, Q.; Hu, S.; Huang, Y.; Hubacek, Z.; Hubaut, F.; Huegging, F.; Huffman, T. B.; Hughes, E. W.; Hughes, G.; Huhtinen, M.; Hunter, R. F. H.; Huo, P.; Huseynov, N.; Huston, J.; Huth, J.; Hyneman, R.; Iacobucci, G.; Iakovidis, G.; Ibragimov, I.; Iconomidou-Fayard, L.; Idrissi, Z.; Iengo, P.; Igonkina, O.; Iizawa, T.; Ikegami, Y.; Ikeno, M.; Ilchenko, Y.; Iliadis, D.; Ilic, N.; Iltzsche, F.; Introzzi, G.; Ioannou, P.; Iodice, M.; Iordanidou, K.; Ippolito, V.; Isacson, M. F.; Ishijima, N.; Ishino, M.; Ishitsuka, M.; Issever, C.; Istin, S.; Ito, F.; Iturbe Ponce, J. M.; Iuppa, R.; Iwasaki, H.; Izen, J. M.; Izzo, V.; Jabbar, S.; Jackson, P.; Jacobs, R. M.; Jain, V.; Jakobi, K. B.; Jakobs, K.; Jakobsen, S.; Jakoubek, T.; Jamin, D. O.; Jana, D. K.; Jansky, R.; Janssen, J.; Janus, M.; Janus, P. A.; Jarlskog, G.; Javadov, N.; Javůrek, T.; Javurkova, M.; Jeanneau, F.; Jeanty, L.; Jejelava, J.; Jelinskas, A.; Jenni, P.; Jeske, C.; Jézéquel, S.; Ji, H.; Jia, J.; Jiang, H.; Jiang, Y.; Jiang, Z.; Jiggins, S.; Jimenez Pena, J.; Jin, S.; Jinaru, A.; Jinnouchi, O.; Jivan, H.; Johansson, P.; Johns, K. A.; Johnson, C. A.; Johnson, W. J.; Jon-And, K.; Jones, R. W. L.; Jones, S. D.; Jones, S.; Jones, T. J.; Jongmanns, J.; Jorge, P. M.; Jovicevic, J.; Ju, X.; Juste Rozas, A.; Köhler, M. K.; Kaczmarska, A.; Kado, M.; Kagan, H.; Kagan, M.; Kahn, S. J.; Kaji, T.; Kajomovitz, E.; Kalderon, C. 
W.; Kaluza, A.; Kama, S.; Kamenshchikov, A.; Kanaya, N.; Kanjir, L.; Kantserov, V. A.; Kanzaki, J.; Kaplan, B.; Kaplan, L. S.; Kar, D.; Karakostas, K.; Karastathis, N.; Kareem, M. J.; Karentzos, E.; Karpov, S. N.; Karpova, Z. M.; Karthik, K.; Kartvelishvili, V.; Karyukhin, A. N.; Kasahara, K.; Kashif, L.; Kass, R. D.; Kastanas, A.; Kataoka, Y.; Kato, C.; Katre, A.; Katzy, J.; Kawade, K.; Kawagoe, K.; Kawamoto, T.; Kawamura, G.; Kay, E. F.; Kazanin, V. F.; Keeler, R.; Kehoe, R.; Keller, J. S.; Kellermann, E.; Kempster, J. J.; Kendrick, J.; Keoshkerian, H.; Kepka, O.; Kerševan, B. P.; Kersten, S.; Keyes, R. A.; Khader, M.; Khalil-zada, F.; Khanov, A.; Kharlamov, A. G.; Kharlamova, T.; Khodinov, A.; Khoo, T. J.; Khovanskiy, V.; Khramov, E.; Khubua, J.; Kido, S.; Kilby, C. R.; Kim, H. Y.; Kim, S. H.; Kim, Y. K.; Kimura, N.; Kind, O. M.; King, B. T.; Kirchmeier, D.; Kirk, J.; Kiryunin, A. E.; Kishimoto, T.; Kisielewska, D.; Kitali, V.; Kivernyk, O.; Kladiva, E.; Klapdor-Kleingrothaus, T.; Klein, M. H.; Klein, M.; Klein, U.; Kleinknecht, K.; Klimek, P.; Klimentov, A.; Klingenberg, R.; Klingl, T.; Klioutchnikova, T.; Kluge, E.-E.; Kluit, P.; Kluth, S.; Kneringer, E.; Knoops, E. B. F. G.; Knue, A.; Kobayashi, A.; Kobayashi, D.; Kobayashi, T.; Kobel, M.; Kocian, M.; Kodys, P.; Koffas, T.; Koffeman, E.; Köhler, N. M.; Koi, T.; Kolb, M.; Koletsou, I.; Komar, A. A.; Kondo, T.; Kondrashova, N.; Köneke, K.; König, A. C.; Kono, T.; Konoplich, R.; Konstantinidis, N.; Kopeliansky, R.; Koperny, S.; Kopp, A. K.; Korcyl, K.; Kordas, K.; Korn, A.; Korol, A. A.; Korolkov, I.; Korolkova, E. V.; Kortner, O.; Kortner, S.; Kosek, T.; Kostyukhin, V. V.; Kotwal, A.; Koulouris, A.; Kourkoumeli-Charalampidi, A.; Kourkoumelis, C.; Kourlitis, E.; Kouskoura, V.; Kowalewska, A. B.; Kowalewski, R.; Kowalski, T. Z.; Kozakai, C.; Kozanecki, W.; Kozhin, A. S.; Kramarenko, V. A.; Kramberger, G.; Krasnopevtsev, D.; Krasny, M. W.; Krasznahorkay, A.; Krauss, D.; Kremer, J. A.; Kretzschmar, J.; Kreutzfeldt, K.; Krieger, P.; Krizka, K.; Kroeninger, K.; Kroha, H.; Kroll, J.; Kroll, J.; Kroseberg, J.; Krstic, J.; Kruchonak, U.; Krüger, H.; Krumnack, N.; Kruse, M. C.; Kubota, T.; Kucuk, H.; Kuday, S.; Kuechler, J. T.; Kuehn, S.; Kugel, A.; Kuger, F.; Kuhl, T.; Kukhtin, V.; Kukla, R.; Kulchitsky, Y.; Kuleshov, S.; Kulinich, Y. P.; Kuna, M.; Kunigo, T.; Kupco, A.; Kupfer, T.; Kuprash, O.; Kurashige, H.; Kurchaninov, L. L.; Kurochkin, Y. A.; Kurth, M. G.; Kuwertz, E. S.; Kuze, M.; Kvita, J.; Kwan, T.; Kyriazopoulos, D.; La Rosa, A.; La Rosa Navarro, J. L.; La Rotonda, L.; Ruffa, F. La; Lacasta, C.; Lacava, F.; Lacey, J.; Lack, D. P. J.; Lacker, H.; Lacour, D.; Ladygin, E.; Lafaye, R.; Laforge, B.; Lagouri, T.; Lai, S.; Lammers, S.; Lampl, W.; Lançon, E.; Landgraf, U.; Landon, M. P. J.; Lanfermann, M. C.; Lang, V. S.; Lange, J. C.; Langenberg, R. J.; Lankford, A. J.; Lanni, F.; Lantzsch, K.; Lanza, A.; Lapertosa, A.; Laplace, S.; Laporte, J. F.; Lari, T.; Lasagni Manghi, F.; Lassnig, M.; Lau, T. S.; Laurelli, P.; Lavrijsen, W.; Law, A. T.; Laycock, P.; Lazovich, T.; Lazzaroni, M.; Le, B.; Le Dortz, O.; Le Guirriec, E.; Le Quilleuc, E. P.; LeBlanc, M.; LeCompte, T.; Ledroit-Guillon, F.; Lee, C. A.; Lee, G. R.; Lee, S. C.; Lee, L.; Lefebvre, B.; Lefebvre, G.; Lefebvre, M.; Legger, F.; Leggett, C.; Lehmann Miotto, G.; Lei, X.; Leight, W. A.; Leite, M. A. L.; Leitner, R.; Lellouch, D.; Lemmer, B.; Leney, K. J. C.; Lenz, T.; Lenzi, B.; Leone, R.; Leone, S.; Leonidopoulos, C.; Lerner, G.; Leroy, C.; Les, R.; Lesage, A. A. J.; Lester, C. 
G.; Levchenko, M.; Levêque, J.; Levin, D.; Levinson, L. J.; Levy, M.; Lewis, D.; Li, B.; Li, Changqiao; Li, H.; Li, L.; Li, Q.; Li, Q.; Li, S.; Li, X.; Li, Y.; Liang, Z.; Liberti, B.; Liblong, A.; Lie, K.; Liebal, J.; Liebig, W.; Limosani, A.; Lin, K.; Lin, S. C.; Lin, T. H.; Linck, R. A.; Lindquist, B. E.; Lionti, A. E.; Lipeles, E.; Lipniacka, A.; Lisovyi, M.; Liss, T. M.; Lister, A.; Litke, A. M.; Liu, B.; Liu, H.; Liu, H.; Liu, J. K. K.; Liu, J.; Liu, J. B.; Liu, K.; Liu, L.; Liu, M.; Liu, Y. L.; Liu, Y.; Livan, M.; Lleres, A.; Llorente Merino, J.; Lloyd, S. L.; Lo, C. Y.; Lo Sterzo, F.; Lobodzinska, E. M.; Loch, P.; Loebinger, F. K.; Loesle, A.; Loew, K. M.; Lohse, T.; Lohwasser, K.; Lokajicek, M.; Long, B. A.; Long, J. D.; Long, R. E.; Longo, L.; Looper, K. A.; Lopez, J. A.; Lopez Paz, I.; Lopez Solis, A.; Lorenz, J.; Lorenzo Martinez, N.; Losada, M.; Lösel, P. J.; Lou, X.; Lounis, A.; Love, J.; Love, P. A.; Lu, H.; Lu, N.; Lu, Y. J.; Lubatti, H. J.; Luci, C.; Lucotte, A.; Luedtke, C.; Luehring, F.; Lukas, W.; Luminari, L.; Lundberg, O.; Lund-Jensen, B.; Lutz, M. S.; Luzi, P. M.; Lynn, D.; Lysak, R.; Lytken, E.; Lyu, F.; Lyubushkin, V.; Ma, H.; Ma, L. L.; Ma, Y.; Maccarrone, G.; Macchiolo, A.; Macdonald, C. M.; Maček, B.; Machado Miguens, J.; Madaffari, D.; Madar, R.; Mader, W. F.; Madsen, A.; Maeda, J.; Maeland, S.; Maeno, T.; Maevskiy, A. S.; Magerl, V.; Maiani, C.; Maidantchik, C.; Maier, T.; Maio, A.; Majersky, O.; Majewski, S.; Makida, Y.; Makovec, N.; Malaescu, B.; Malecki, Pa.; Maleev, V. P.; Malek, F.; Mallik, U.; Malon, D.; Malone, C.; Maltezos, S.; Malyukov, S.; Mamuzic, J.; Mancini, G.; Mandić, I.; Maneira, J.; Manhaes de Andrade Filho, L.; Manjarres Ramos, J.; Mankinen, K. H.; Mann, A.; Manousos, A.; Mansoulie, B.; Mansour, J. D.; Mantifel, R.; Mantoani, M.; Manzoni, S.; Mapelli, L.; Marceca, G.; March, L.; Marchese, L.; Marchiori, G.; Marcisovsky, M.; Marin Tobon, C. A.; Marjanovic, M.; Marley, D. E.; Marroquim, F.; Marsden, S. P.; Marshall, Z.; Martensson, M. U. F.; Marti-Garcia, S.; Martin, C. B.; Martin, T. A.; Martin, V. J.; Martin dit Latour, B.; Martinez, M.; Martinez Outschoorn, V. I.; Martin-Haugh, S.; Martoiu, V. S.; Martyniuk, A. C.; Marzin, A.; Masetti, L.; Mashimo, T.; Mashinistov, R.; Masik, J.; Maslennikov, A. L.; Mason, L. H.; Massa, L.; Mastrandrea, P.; Mastroberardino, A.; Masubuchi, T.; Mättig, P.; Maurer, J.; Maxfield, S. J.; Maximov, D. A.; Mazini, R.; Maznas, I.; Mazza, S. M.; Mc Fadden, N. C.; Mc Goldrick, G.; Mc Kee, S. P.; McCarn, A.; McCarthy, R. L.; McCarthy, T. G.; McClymont, L. I.; McDonald, E. F.; Mcfayden, J. A.; Mchedlidze, G.; McMahon, S. J.; McNamara, P. C.; McNicol, C. J.; McPherson, R. A.; Meehan, S.; Megy, T. J.; Mehlhase, S.; Mehta, A.; Meideck, T.; Meier, K.; Meirose, B.; Melini, D.; Mellado Garcia, B. R.; Mellenthin, J. D.; Melo, M.; Meloni, F.; Melzer, A.; Menary, S. B.; Meng, L.; Meng, X. T.; Mengarelli, A.; Menke, S.; Meoni, E.; Mergelmeyer, S.; Merlassino, C.; Mermod, P.; Merola, L.; Meroni, C.; Merritt, F. S.; Messina, A.; Metcalfe, J.; Mete, A. S.; Meyer, C.; Meyer, J.-P.; Meyer, J.; Meyer Zu Theenhausen, H.; Miano, F.; Middleton, R. P.; Miglioranzi, S.; Mijović, L.; Mikenberg, G.; Mikestikova, M.; Mikuž, M.; Milesi, M.; Milic, A.; Millar, D. A.; Miller, D. W.; Mills, C.; Milov, A.; Milstead, D. A.; Minaenko, A. A.; Minami, Y.; Minashvili, I. A.; Mincer, A. I.; Mindur, B.; Mineev, M.; Minegishi, Y.; Ming, Y.; Mir, L. M.; Mirto, A.; Mistry, K. P.; Mitani, T.; Mitrevski, J.; Mitsou, V. A.; Miucci, A.; Miyagawa, P. 
S.; Mizukami, A.; Mjörnmark, J. U.; Mkrtchyan, T.; Mlynarikova, M.; Moa, T.; Mochizuki, K.; Mogg, P.; Mohapatra, S.; Molander, S.; Moles-Valls, R.; Mondragon, M. C.; Mönig, K.; Monk, J.; Monnier, E.; Montalbano, A.; Montejo Berlingen, J.; Monticelli, F.; Monzani, S.; Moore, R. W.; Morange, N.; Moreno, D.; Moreno Llácer, M.; Morettini, P.; Morgenstern, S.; Mori, D.; Mori, T.; Morii, M.; Morinaga, M.; Morisbak, V.; Morley, A. K.; Mornacchi, G.; Morris, J. D.; Morvaj, L.; Moschovakos, P.; Mosidze, M.; Moss, H. J.; Moss, J.; Motohashi, K.; Mount, R.; Mountricha, E.; Moyse, E. J. W.; Muanza, S.; Mueller, F.; Mueller, J.; Mueller, R. S. P.; Muenstermann, D.; Mullen, P.; Mullier, G. A.; Munoz Sanchez, F. J.; Murray, W. J.; Musheghyan, H.; Muškinja, M.; Myagkov, A. G.; Myska, M.; Nachman, B. P.; Nackenhorst, O.; Nagai, K.; Nagai, R.; Nagano, K.; Nagasaka, Y.; Nagata, K.; Nagel, M.; Nagy, E.; Nairz, A. M.; Nakahama, Y.; Nakamura, K.; Nakamura, T.; Nakano, I.; Naranjo Garcia, R. F.; Narayan, R.; Narrias Villar, D. I.; Naryshkin, I.; Naumann, T.; Navarro, G.; Nayyar, R.; Neal, H. A.; Nechaeva, P. Yu.; Neep, T. J.; Negri, A.; Negrini, M.; Nektarijevic, S.; Nellist, C.; Nelson, A.; Nelson, M. E.; Nemecek, S.; Nemethy, P.; Nessi, M.; Neubauer, M. S.; Neumann, M.; Newman, P. R.; Ng, T. Y.; Nguyen Manh, T.; Nickerson, R. B.; Nicolaidou, R.; Nielsen, J.; Nikiforou, N.; Nikolaenko, V.; Nikolic-Audit, I.; Nikolopoulos, K.; Nilsen, J. K.; Nilsson, P.; Ninomiya, Y.; Nisati, A.; Nishu, N.; Nisius, R.; Nitsche, I.; Nitta, T.; Nobe, T.; Noguchi, Y.; Nomachi, M.; Nomidis, I.; Nomura, M. A.; Nooney, T.; Nordberg, M.; Norjoharuddeen, N.; Novgorodova, O.; Nozaki, M.; Nozka, L.; Ntekas, K.; Nurse, E.; Nuti, F.; O'connor, K.; O'Neil, D. C.; O'Rourke, A. A.; O'Shea, V.; Oakham, F. G.; Oberlack, H.; Obermann, T.; Ocariz, J.; Ochi, A.; Ochoa, I.; Ochoa-Ricoux, J. P.; Oda, S.; Odaka, S.; Oh, A.; Oh, S. H.; Ohm, C. C.; Ohman, H.; Oide, H.; Okawa, H.; Okumura, Y.; Okuyama, T.; Olariu, A.; Oleiro Seabra, L. F.; Olivares Pino, S. A.; Oliveira Damazio, D.; Olszewski, A.; Olszowska, J.; Onofre, A.; Onogi, K.; Onyisi, P. U. E.; Oppen, H.; Oreglia, M. J.; Oren, Y.; Orestano, D.; Orlando, N.; Orr, R. S.; Osculati, B.; Ospanov, R.; Otero y Garzon, G.; Otono, H.; Ouchrif, M.; Ould-Saada, F.; Ouraou, A.; Oussoren, K. P.; Ouyang, Q.; Owen, M.; Owen, R. E.; Ozcan, V. E.; Ozturk, N.; Pachal, K.; Pacheco Pages, A.; Pacheco Rodriguez, L.; Padilla Aranda, C.; Pagan Griso, S.; Paganini, M.; Paige, F.; Palacino, G.; Palazzo, S.; Palestini, S.; Palka, M.; Pallin, D.; Panagiotopoulou, E. St.; Panagoulias, I.; Pandini, C. E.; Panduro Vazquez, J. G.; Pani, P.; Panitkin, S.; Pantea, D.; Paolozzi, L.; Papadopoulou, Th. D.; Papageorgiou, K.; Paramonov, A.; Paredes Hernandez, D.; Parker, A. J.; Parker, M. A.; Parker, K. A.; Parodi, F.; Parsons, J. A.; Parzefall, U.; Pascuzzi, V. R.; Pasner, J. M.; Pasqualucci, E.; Passaggio, S.; Pastore, Fr.; Pataraia, S.; Pater, J. R.; Pauly, T.; Pearson, B.; Pedraza Lopez, S.; Pedro, R.; Peleganchuk, S. V.; Penc, O.; Peng, C.; Peng, H.; Penwell, J.; Peralva, B. S.; Perego, M. M.; Perepelitsa, D. V.; Peri, F.; Perini, L.; Pernegger, H.; Perrella, S.; Peschke, R.; Peshekhonov, V. D.; Peters, K.; Peters, R. F. Y.; Petersen, B. A.; Petersen, T. C.; Petit, E.; Petridis, A.; Petridou, C.; Petroff, P.; Petrolo, E.; Petrov, M.; Petrucci, F.; Pettersson, N. E.; Peyaud, A.; Pezoa, R.; Phillips, F. H.; Phillips, P. W.; Piacquadio, G.; Pianori, E.; Picazio, A.; Piccaro, E.; Pickering, M. A.; Piegaia, R.; Pilcher, J. 
E.; Pilkington, A. D.; Pinamonti, M.; Pinfold, J. L.; Pirumov, H.; Pitt, M.; Plazak, L.; Pleier, M.-A.; Pleskot, V.; Plotnikova, E.; Pluth, D.; Podberezko, P.; Poettgen, R.; Poggi, R.; Poggioli, L.; Pogrebnyak, I.; Pohl, D.; Pokharel, I.; Polesello, G.; Poley, A.; Policicchio, A.; Polifka, R.; Polini, A.; Pollard, C. S.; Polychronakos, V.; Pommès, K.; Ponomarenko, D.; Pontecorvo, L.; Popeneciu, G. A.; Portillo Quintero, D. M.; Pospisil, S.; Potamianos, K.; Potrap, I. N.; Potter, C. J.; Potti, H.; Poulsen, T.; Poveda, J.; Pozo Astigarraga, M. E.; Pralavorio, P.; Pranko, A.; Prell, S.; Price, D.; Primavera, M.; Prince, S.; Proklova, N.; Prokofiev, K.; Prokoshin, F.; Protopopescu, S.; Proudfoot, J.; Przybycien, M.; Puri, A.; Puzo, P.; Qian, J.; Qin, G.; Qin, Y.; Quadt, A.; Queitsch-Maitland, M.; Quilty, D.; Raddum, S.; Radeka, V.; Radescu, V.; Radhakrishnan, S. K.; Radloff, P.; Rados, P.; Ragusa, F.; Rahal, G.; Raine, J. A.; Rajagopalan, S.; Rangel-Smith, C.; Rashid, T.; Raspopov, S.; Ratti, M. G.; Rauch, D. M.; Rauscher, F.; Rave, S.; Ravinovich, I.; Rawling, J. H.; Raymond, M.; Read, A. L.; Readioff, N. P.; Reale, M.; Rebuzzi, D. M.; Redelbach, A.; Redlinger, G.; Reece, R.; Reed, R. G.; Reeves, K.; Rehnisch, L.; Reichert, J.; Reiss, A.; Rembser, C.; Ren, H.; Rescigno, M.; Resconi, S.; Resseguie, E. D.; Rettie, S.; Reynolds, E.; Rezanova, O. L.; Reznicek, P.; Rezvani, R.; Richter, R.; Richter, S.; Richter-Was, E.; Ricken, O.; Ridel, M.; Rieck, P.; Riegel, C. J.; Rieger, J.; Rifki, O.; Rijssenbeek, M.; Rimoldi, A.; Rimoldi, M.; Rinaldi, L.; Ripellino, G.; Ristić, B.; Ritsch, E.; Riu, I.; Rizatdinova, F.; Rizvi, E.; Rizzi, C.; Roberts, R. T.; Robertson, S. H.; Robichaud-Veronneau, A.; Robinson, D.; Robinson, J. E. M.; Robson, A.; Rocco, E.; Roda, C.; Rodina, Y.; Rodriguez Bosca, S.; Rodriguez Perez, A.; Rodriguez Rodriguez, D.; Roe, S.; Rogan, C. S.; Røhne, O.; Roloff, J.; Romaniouk, A.; Romano, M.; Romano Saez, S. M.; Romero Adam, E.; Rompotis, N.; Ronzani, M.; Roos, L.; Rosati, S.; Rosbach, K.; Rose, P.; Rosien, N.-A.; Rossi, E.; Rossi, L. P.; Rosten, J. H. N.; Rosten, R.; Rotaru, M.; Rothberg, J.; Rousseau, D.; Rozanov, A.; Rozen, Y.; Ruan, X.; Rubbo, F.; Rühr, F.; Ruiz-Martinez, A.; Rurikova, Z.; Rusakovich, N. A.; Russell, H. L.; Rutherfoord, J. P.; Ruthmann, N.; Ryabov, Y. F.; Rybar, M.; Rybkin, G.; Ryu, S.; Ryzhov, A.; Rzehorz, G. F.; Saavedra, A. F.; Sabato, G.; Sacerdoti, S.; Sadrozinski, H. F.-W.; Sadykov, R.; Safai Tehrani, F.; Saha, P.; Sahinsoy, M.; Saimpert, M.; Saito, M.; Saito, T.; Sakamoto, H.; Sakurai, Y.; Salamanna, G.; Salazar Loyola, J. E.; Salek, D.; Sales De Bruin, P. H.; Salihagic, D.; Salnikov, A.; Salt, J.; Salvatore, D.; Salvatore, F.; Salvucci, A.; Salzburger, A.; Sammel, D.; Sampsonidis, D.; Sampsonidou, D.; Sánchez, J.; Sanchez Martinez, V.; Sanchez Pineda, A.; Sandaker, H.; Sandbach, R. L.; Sander, C. O.; Sandhoff, M.; Sandoval, C.; Sankey, D. P. C.; Sannino, M.; Sano, Y.; Sansoni, A.; Santoni, C.; Santos, H.; Santoyo Castillo, I.; Sapronov, A.; Saraiva, J. G.; Sarrazin, B.; Sasaki, O.; Sato, K.; Sauvan, E.; Savage, G.; Savard, P.; Savic, N.; Sawyer, C.; Sawyer, L.; Saxon, J.; Sbarra, C.; Sbrizzi, A.; Scanlon, T.; Scannicchio, D. A.; Schaarschmidt, J.; Schacht, P.; Schachtner, B. M.; Schaefer, D.; Schaefer, L.; Schaefer, R.; Schaeffer, J.; Schaepe, S.; Schaetzel, S.; Schäfer, U.; Schaffer, A. C.; Schaile, D.; Schamberger, R. D.; Schegelsky, V. A.; Scheirich, D.; Schernau, M.; Schiavi, C.; Schier, S.; Schildgen, L. 
K.; Schillo, C.; Schioppa, M.; Schlenker, S.; Schmidt-Sommerfeld, K. R.; Schmieden, K.; Schmitt, C.; Schmitt, S.; Schmitz, S.; Schnoor, U.; Schoeffel, L.; Schoening, A.; Schoenrock, B. D.; Schopf, E.; Schott, M.; Schouwenberg, J. F. P.; Schovancova, J.; Schramm, S.; Schuh, N.; Schulte, A.; Schultens, M. J.; Schultz-Coulon, H.-C.; Schulz, H.; Schumacher, M.; Schumm, B. A.; Schune, Ph.; Schwartzman, A.; Schwarz, T. A.; Schweiger, H.; Schwemling, Ph.; Schwienhorst, R.; Schwindling, J.; Sciandra, A.; Sciolla, G.; Scornajenghi, M.; Scuri, F.; Scutti, F.; Searcy, J.; Seema, P.; Seidel, S. C.; Seiden, A.; Seixas, J. M.; Sekhniaidze, G.; Sekhon, K.; Sekula, S. J.; Semprini-Cesari, N.; Senkin, S.; Serfon, C.; Serin, L.; Serkin, L.; Sessa, M.; Seuster, R.; Severini, H.; Sfiligoj, T.; Sforza, F.; Sfyrla, A.; Shabalina, E.; Shaikh, N. W.; Shan, L. Y.; Shang, R.; Shank, J. T.; Shapiro, M.; Shatalov, P. B.; Shaw, K.; Shaw, S. M.; Shcherbakova, A.; Shehu, C. Y.; Shen, Y.; Sherafati, N.; Sherwood, P.; Shi, L.; Shimizu, S.; Shimmin, C. O.; Shimojima, M.; Shipsey, I. P. J.; Shirabe, S.; Shiyakova, M.; Shlomi, J.; Shmeleva, A.; Shoaleh Saadi, D.; Shochet, M. J.; Shojaii, S.; Shope, D. R.; Shrestha, S.; Shulga, E.; Shupe, M. A.; Sicho, P.; Sickles, A. M.; Sidebo, P. E.; Sideras Haddad, E.; Sidiropoulou, O.; Sidoti, A.; Siegert, F.; Sijacki, Dj.; Silva, J.; Silverstein, S. B.; Simak, V.; Simic, Lj.; Simion, S.; Simioni, E.; Simmons, B.; Simon, M.; Sinervo, P.; Sinev, N. B.; Sioli, M.; Siragusa, G.; Siral, I.; Sivoklokov, S. Yu.; Sjölin, J.; Skinner, M. B.; Skubic, P.; Slater, M.; Slavicek, T.; Slawinska, M.; Sliwa, K.; Slovak, R.; Smakhtin, V.; Smart, B. H.; Smiesko, J.; Smirnov, N.; Smirnov, S. Yu.; Smirnov, Y.; Smirnova, L. N.; Smirnova, O.; Smith, J. W.; Smith, M. N. K.; Smith, R. W.; Smizanska, M.; Smolek, K.; Snesarev, A. A.; Snyder, I. M.; Snyder, S.; Sobie, R.; Socher, F.; Soffer, A.; Søgaard, A.; Soh, D. A.; Sokhrannyi, G.; Solans Sanchez, C. A.; Solar, M.; Soldatov, E. Yu.; Soldevila, U.; Solodkov, A. A.; Soloshenko, A.; Solovyanov, O. V.; Solovyev, V.; Sommer, P.; Son, H.; Sopczak, A.; Sosa, D.; Sotiropoulou, C. L.; Sottocornola, S.; Soualah, R.; Soukharev, A. M.; South, D.; Sowden, B. C.; Spagnolo, S.; Spalla, M.; Spangenberg, M.; Spanò, F.; Sperlich, D.; Spettel, F.; Spieker, T. M.; Spighi, R.; Spigo, G.; Spiller, L. A.; Spousta, M.; St. Denis, R. D.; Stabile, A.; Stamen, R.; Stamm, S.; Stanecka, E.; Stanek, R. W.; Stanescu, C.; Stanitzki, M. M.; Stapf, B. S.; Stapnes, S.; Starchenko, E. A.; Stark, G. H.; Stark, J.; Stark, S. H.; Staroba, P.; Starovoitov, P.; Stärz, S.; Staszewski, R.; Stegler, M.; Steinberg, P.; Stelzer, B.; Stelzer, H. J.; Stelzer-Chilton, O.; Stenzel, H.; Stewart, G. A.; Stockton, M. C.; Stoebe, M.; Stoicea, G.; Stolte, P.; Stonjek, S.; Stradling, A. R.; Straessner, A.; Stramaglia, M. E.; Strandberg, J.; Strandberg, S.; Strauss, M.; Strizenec, P.; Ströhmer, R.; Strom, D. M.; Stroynowski, R.; Strubig, A.; Stucci, S. A.; Stugu, B.; Styles, N. A.; Su, D.; Su, J.; Suchek, S.; Sugaya, Y.; Suk, M.; Sulin, V. V.; Sultan, DMS; Sultansoy, S.; Sumida, T.; Sun, S.; Sun, X.; Suruliz, K.; Suster, C. J. E.; Sutton, M. R.; Suzuki, S.; Svatos, M.; Swiatlowski, M.; Swift, S. P.; Sykora, I.; Sykora, T.; Ta, D.; Tackmann, K.; Taenzer, J.; Taffard, A.; Tafirout, R.; Tahirovic, E.; Taiblum, N.; Takai, H.; Takashima, R.; Takasugi, E. H.; Takeda, K.; Takeshita, T.; Takubo, Y.; Talby, M.; Talyshev, A. A.; Tanaka, J.; Tanaka, M.; Tanaka, R.; Tanaka, S.; Tanioka, R.; Tannenwald, B. 
B.; Tapia Araya, S.; Tapprogge, S.; Tarem, S.; Tartarelli, G. F.; Tas, P.; Tasevsky, M.; Tashiro, T.; Tassi, E.; Tavares Delgado, A.; Tayalati, Y.; Taylor, A. C.; Taylor, A. J.; Taylor, G. N.; Taylor, P. T. E.; Taylor, W.; Teixeira-Dias, P.; Temple, D.; Ten Kate, H.; Teng, P. K.; Teoh, J. J.; Tepel, F.; Terada, S.; Terashi, K.; Terron, J.; Terzo, S.; Testa, M.; Teuscher, R. J.; Thais, S. J.; Theveneaux-Pelzer, T.; Thiele, F.; Thomas, J. P.; Thomas-Wilsker, J.; Thompson, P. D.; Thompson, A. S.; Thomsen, L. A.; Thomson, E.; Tian, Y.; Tibbetts, M. J.; Ticse Torres, R. E.; Tikhomirov, V. O.; Tikhonov, Yu. A.; Timoshenko, S.; Tipton, P.; Tisserant, S.; Todome, K.; Todorova-Nova, S.; Todt, S.; Tojo, J.; Tokár, S.; Tokushuku, K.; Tolley, E.; Tomlinson, L.; Tomoto, M.; Tompkins, L.; Toms, K.; Tong, B.; Tornambe, P.; Torrence, E.; Torres, H.; Torró Pastor, E.; Toth, J.; Touchard, F.; Tovey, D. R.; Treado, C. J.; Trefzger, T.; Tresoldi, F.; Tricoli, A.; Trigger, I. M.; Trincaz-Duvoid, S.; Tripiana, M. F.; Trischuk, W.; Trocmé, B.; Trofymov, A.; Troncon, C.; Trottier-McDonald, M.; Trovatelli, M.; Truong, L.; Trzebinski, M.; Trzupek, A.; Tsang, K. W.; Tseng, J. C.-L.; Tsiareshka, P. V.; Tsipolitis, G.; Tsirintanis, N.; Tsiskaridze, S.; Tsiskaridze, V.; Tskhadadze, E. G.; Tsukerman, I. I.; Tsulaia, V.; Tsuno, S.; Tsybychev, D.; Tu, Y.; Tudorache, A.; Tudorache, V.; Tulbure, T. T.; Tuna, A. N.; Turchikhin, S.; Turgeman, D.; Turk Cakir, I.; Turra, R.; Tuts, P. M.; Ucchielli, G.; Ueda, I.; Ughetto, M.; Ukegawa, F.; Unal, G.; Undrus, A.; Unel, G.; Ungaro, F. C.; Unno, Y.; Unverdorben, C.; Urban, J.; Urquijo, P.; Urrejola, P.; Usai, G.; Usui, J.; Vacavant, L.; Vacek, V.; Vachon, B.; Vadla, K. O. H.; Vaidya, A.; Valderanis, C.; Valdes Santurio, E.; Valente, M.; Valentinetti, S.; Valero, A.; Valéry, L.; Valkar, S.; Vallier, A.; Valls Ferrer, J. A.; Van Den Wollenberg, W.; van der Graaf, H.; van Gemmeren, P.; Van Nieuwkoop, J.; van Vulpen, I.; van Woerden, M. C.; Vanadia, M.; Vandelli, W.; Vaniachine, A.; Vankov, P.; Vardanyan, G.; Vari, R.; Varnes, E. W.; Varni, C.; Varol, T.; Varouchas, D.; Vartapetian, A.; Varvell, K. E.; Vasquez, J. G.; Vasquez, G. A.; Vazeille, F.; Vazquez Furelos, D.; Vazquez Schroeder, T.; Veatch, J.; Veeraraghavan, V.; Veloce, L. M.; Veloso, F.; Veneziano, S.; Ventura, A.; Venturi, M.; Venturi, N.; Venturini, A.; Vercesi, V.; Verducci, M.; Verkerke, W.; Vermeulen, A. T.; Vermeulen, J. C.; Vetterli, M. C.; Viaux Maira, N.; Viazlo, O.; Vichou, I.; Vickey, T.; Vickey Boeriu, O. E.; Viehhauser, G. H. A.; Viel, S.; Vigani, L.; Villa, M.; Villaplana Perez, M.; Vilucchi, E.; Vincter, M. G.; Vinogradov, V. B.; Vishwakarma, A.; Vittori, C.; Vivarelli, I.; Vlachos, S.; Vogel, M.; Vokac, P.; Volpi, G.; von der Schmitt, H.; von Toerne, E.; Vorobel, V.; Vorobev, K.; Vos, M.; Voss, R.; Vossebeld, J. H.; Vranjes, N.; Vranjes Milosavljevic, M.; Vrba, V.; Vreeswijk, M.; Vuillermet, R.; Vukotic, I.; Wagner, P.; Wagner, W.; Wagner-Kuhr, J.; Wahlberg, H.; Wahrmund, S.; Walder, J.; Walker, R.; Walkowiak, W.; Wallangen, V.; Wang, C.; Wang, C.; Wang, F.; Wang, H.; Wang, H.; Wang, J.; Wang, J.; Wang, Q.; Wang, R.-J.; Wang, R.; Wang, S. M.; Wang, T.; Wang, W.; Wang, W.; Wang, Z.; Wanotayaroj, C.; Warburton, A.; Ward, C. P.; Wardrope, D. R.; Washbrook, A.; Watkins, P. M.; Watson, A. T.; Watson, M. F.; Watts, G.; Watts, S.; Waugh, B. M.; Webb, A. F.; Webb, S.; Weber, M. S.; Weber, S. M.; Weber, S. W.; Weber, S. A.; Webster, J. S.; Weidberg, A. 
R.; Weinert, B.; Weingarten, J.; Weirich, M.; Weiser, C.; Weits, H.; Wells, P. S.; Wenaus, T.; Wengler, T.; Wenig, S.; Wermes, N.; Werner, M. D.; Werner, P.; Wessels, M.; Weston, T. D.; Whalen, K.; Whallon, N. L.; Wharton, A. M.; White, A. S.; White, A.; White, M. J.; White, R.; Whiteson, D.; Whitmore, B. W.; Wickens, F. J.; Wiedenmann, W.; Wielers, M.; Wiglesworth, C.; Wiik-Fuchs, L. A. M.; Wildauer, A.; Wilk, F.; Wilkens, H. G.; Williams, H. H.; Williams, S.; Willis, C.; Willocq, S.; Wilson, J. A.; Wingerter-Seez, I.; Winkels, E.; Winklmeier, F.; Winston, O. J.; Winter, B. T.; Wittgen, M.; Wobisch, M.; Wolf, T. M. H.; Wolff, R.; Wolter, M. W.; Wolters, H.; Wong, V. W. S.; Woods, N. L.; Worm, S. D.; Wosiek, B. K.; Wotschack, J.; Wozniak, K. W.; Wu, M.; Wu, S. L.; Wu, X.; Wu, Y.; Wyatt, T. R.; Wynne, B. M.; Xella, S.; Xi, Z.; Xia, L.; Xu, D.; Xu, L.; Xu, T.; Yabsley, B.; Yacoob, S.; Yamaguchi, D.; Yamaguchi, Y.; Yamamoto, A.; Yamamoto, S.; Yamanaka, T.; Yamane, F.; Yamatani, M.; Yamazaki, Y.; Yan, Z.; Yang, H.; Yang, H.; Yang, Y.; Yang, Z.; Yao, W.-M.; Yap, Y. C.; Yasu, Y.; Yatsenko, E.; Yau Wong, K. H.; Ye, J.; Ye, S.; Yeletskikh, I.; Yigitbasi, E.; Yildirim, E.; Yorita, K.; Yoshihara, K.; Young, C.; Young, C. J. S.; Yu, J.; Yu, J.; Yuen, S. P. Y.; Yusuff, I.; Zabinski, B.; Zacharis, G.; Zaidan, R.; Zaitsev, A. M.; Zakharchuk, N.; Zalieckas, J.; Zaman, A.; Zambito, S.; Zanzi, D.; Zeitnitz, C.; Zemaityte, G.; Zemla, A.; Zeng, J. C.; Zeng, Q.; Zenin, O.; Ženiš, T.; Zerwas, D.; Zhang, D.; Zhang, D.; Zhang, F.; Zhang, G.; Zhang, H.; Zhang, J.; Zhang, L.; Zhang, L.; Zhang, M.; Zhang, P.; Zhang, R.; Zhang, R.; Zhang, X.; Zhang, Y.; Zhang, Z.; Zhao, X.; Zhao, Y.; Zhao, Z.; Zhemchugov, A.; Zhou, B.; Zhou, C.; Zhou, L.; Zhou, M.; Zhou, M.; Zhou, N.; Zhu, C. G.; Zhu, H.; Zhu, J.; Zhu, Y.; Zhuang, X.; Zhukov, K.; Zibell, A.; Zieminska, D.; Zimine, N. I.; Zimmermann, C.; Zimmermann, S.; Zinonos, Z.; Zinser, M.; Ziolkowski, M.; Živković, L.; Zobernig, G.; Zoccoli, A.; Zou, R.; zur Nedden, M.; Zwalinski, L.

    2018-02-01

    A search is performed for new phenomena in events having a photon with high transverse momentum and a jet collected in 36.7 fb⁻¹ of proton-proton collisions at a centre-of-mass energy of √s = 13 TeV recorded with the ATLAS detector at the Large Hadron Collider. The invariant mass distribution of the leading photon and jet is examined to look for the resonant production of new particles or the presence of new high-mass states beyond the Standard Model. No significant deviation from the background-only hypothesis is observed and cross-section limits for generic Gaussian-shaped resonances are extracted. Excited quarks hypothesized in quark compositeness models and high-mass states predicted in quantum black hole models with extra dimensions are also examined in the analysis. The observed data exclude, at 95% confidence level, the mass range below 5.3 TeV for excited quarks and 7.1 TeV (4.4 TeV) for quantum black holes in the Arkani-Hamed-Dimopoulos-Dvali (Randall-Sundrum) model with six (one) extra dimensions.
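
    The limits above are quoted for generic Gaussian-shaped resonances. As a rough illustration of what such a signal template means (this is not the ATLAS statistical machinery, and every number below is made up), a Gaussian bump of chosen mass and relative width can be superposed on a smoothly falling photon+jet invariant-mass spectrum:

    ```python
    # Hypothetical shapes only: a "generic Gaussian-shaped resonance" is a
    # Gaussian bump added to a smoothly falling invariant-mass background.
    import numpy as np

    def background(m, p0=1e6, p1=5.0):
        """Placeholder smoothly falling background vs. invariant mass m [TeV]."""
        return p0 * m ** (-p1)

    def gaussian_signal(m, n_sig, m_g, rel_width):
        """Gaussian resonance with yield n_sig, mean m_g, width rel_width * m_g."""
        sigma = rel_width * m_g
        return n_sig * np.exp(-0.5 * ((m - m_g) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    m = np.linspace(1.0, 8.0, 500)                     # invariant-mass grid in TeV
    expected = background(m) + gaussian_signal(m, n_sig=50.0, m_g=5.0, rel_width=0.07)
    ```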

  20. Characterization and Simulation of Gunfire with Wavelets

    DOE PAGES

    Smallwood, David O.

    1999-01-01

    Gunfire is used as an example to show how the wavelet transform can be used to characterize and simulate nonstationary random events when an ensemble of events is available. The structural response to nearby firing of a high-firing rate gun has been characterized in several ways as a nonstationary random process. The current paper will explore a method to describe the nonstationary random process using a wavelet transform. The gunfire record is broken up into a sequence of transient waveforms each representing the response to the firing of a single round. A wavelet transform is performed on each of these records. The gunfire is simulated by generating realizations of records of a single-round firing by computing an inverse wavelet transform from Gaussian random coefficients with the same mean and standard deviation as those estimated from the previously analyzed gunfire record. The individual records are assembled into a realization of many rounds firing. A second-order correction of the probability density function is accomplished with a zero memory nonlinear function. The method is straightforward, easy to implement, and produces a simulated record much like the measured gunfire record.
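
    A minimal sketch of that simulation step, assuming a single-round transient x has already been isolated and that PyWavelets is available (the wavelet family, decomposition level, and assembly strategy are illustrative choices, not the paper's):

    ```python
    # Synthesize a new single-round record by replacing each wavelet subband
    # with Gaussian noise having that subband's estimated mean and standard
    # deviation, then inverting the transform.
    import numpy as np
    import pywt

    def simulate_round(x, wavelet="db4", level=5, rng=None):
        rng = rng or np.random.default_rng()
        coeffs = pywt.wavedec(x, wavelet, level=level)
        synthetic = [rng.normal(c.mean(), c.std(), size=c.shape) for c in coeffs]
        return pywt.waverec(synthetic, wavelet)

    # A many-round realization is then assembled by concatenating (or
    # overlap-adding) simulate_round(x) outputs at the gun's firing rate.
    ```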

  1. Q(n) species distribution in K2O.2SiO2 glass by 29Si magic angle flipping NMR.

    PubMed

    Davis, Michael C; Kaseman, Derrick C; Parvani, Sahar M; Sanders, Kevin J; Grandinetti, Philip J; Massiot, Dominique; Florian, Pierre

    2010-05-06

    Two-dimensional magic angle flipping (MAF) was employed to measure the Q(n) distribution in a 29Si-enriched potassium disilicate glass (K2O·2SiO2). Relative concentrations of [Q(4)] = 7.2 ± 0.3%, [Q(3)] = 82.9 ± 0.1%, and [Q(2)] = 9.8 ± 0.6% were obtained. Using the thermodynamic model for Q(n) species disproportionation, these relative concentrations yield an equilibrium constant k3 = 0.0103 ± 0.0008, indicating, as expected, that the Q(n) species distribution is close to binary in the potassium disilicate glass. A Gaussian distribution of isotropic chemical shifts was observed for each Q(n) species with mean values of -82.74 ± 0.03, -91.32 ± 0.01, and -101.67 ± 0.02 ppm and standard deviations of 3.27 ± 0.03, 4.19 ± 0.01, and 5.09 ± 0.03 ppm for Q(2), Q(3), and Q(4), respectively. Additionally, nuclear shielding anisotropy values of ζ = -85.0 ± 1.3 ppm, η = 0.48 ± 0.02 for Q(2) and ζ = -74.9 ± 0.2 ppm, η = 0.03 ± 0.01 for Q(3) were observed in the potassium disilicate glass.
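
    As a consistency check, the quoted equilibrium constant follows directly from the reported fractions under the usual disproportionation reaction 2 Q(3) ⇌ Q(2) + Q(4) (values taken from the abstract; this reproduces, not re-derives, the published number):

    ```python
    # k3 = [Q(2)][Q(4)] / [Q(3)]^2 with the reported relative concentrations.
    q2, q3, q4 = 0.098, 0.829, 0.072
    k3 = q2 * q4 / q3 ** 2
    print(f"k3 = {k3:.4f}")   # ~0.0103, matching the value quoted above
    ```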

  2. Deep searches for broadband extended gravitational-wave emission bursts by heterogeneous computing

    NASA Astrophysics Data System (ADS)

    van Putten, Maurice H. P. M.

    2017-09-01

    We present a heterogeneous search algorithm for broadband extended gravitational-wave emission, expected from gamma-ray bursts and energetic core-collapse supernovae. It searches the (f, ḟ)-plane for long-duration bursts by inner engines slowly exhausting their energy reservoir by matched filtering on a graphics processor unit (GPU) over a template bank of millions of 1 s duration chirps. Parseval's theorem is used to predict the standard deviation σ of the filter output, taking advantage of the near-Gaussian noise in the LIGO S6 data over 350-2000 Hz. Tails exceeding a multiple of σ are communicated back to a central processing unit. This algorithm attains about 65% efficiency overall, normalized to the fast Fourier transform. At about one million correlations per second over data segments of 16 s duration (N = 2^{16} samples), better than real-time analysis is achieved on a cluster of about a dozen GPUs. We demonstrate its application to the capture of high-frequency hardware LIGO injections. This algorithm serves as a starting point for deep all-sky searches in both archive data and real-time analysis in current observational runs.
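
    A toy version of the per-template operation, assuming already-whitened, unit-variance Gaussian noise (the chirp, sampling rate, and 5σ cut below are placeholders; the real search sweeps millions of templates on a GPU):

    ```python
    # FFT-based matched filter of one 1 s chirp against a 16 s data segment,
    # with the filter-output standard deviation predicted via Parseval's
    # theorem instead of being measured empirically.
    import numpy as np

    fs, seg_dur = 4096, 16                              # Hz, seconds (illustrative)
    rng = np.random.default_rng(0)
    data = rng.normal(0.0, 1.0, fs * seg_dur)           # stand-in for whitened strain
    t = np.arange(fs) / fs                              # one-second template grid
    template = np.sin(2 * np.pi * (400 * t + 50 * t ** 2))   # hypothetical chirp

    n = len(data)
    corr = np.fft.irfft(np.fft.rfft(data) * np.conj(np.fft.rfft(template, n)), n)

    # Parseval: for unit-variance white noise, Var(corr) = sum(template^2).
    sigma = np.sqrt(np.sum(template ** 2))
    candidates = np.flatnonzero(np.abs(corr) > 5 * sigma)    # tails returned to the CPU
    ```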

  3. Search for new phenomena in high-mass final states with a photon and a jet from pp collisions at √s = 13 TeV with the ATLAS detector

    DOE PAGES

    Aaboud, M.; Aad, G.; Abbott, B.; ...

    2018-02-03

    A search is performed for new phenomena in events having a photon with high transverse momentum and a jet collected in 36.7 fb⁻¹ of proton–proton collisions at a centre-of-mass energy of √s = 13 TeV recorded with the ATLAS detector at the Large Hadron Collider. The invariant mass distribution of the leading photon and jet is examined to look for the resonant production of new particles or the presence of new high-mass states beyond the Standard Model. No significant deviation from the background-only hypothesis is observed and cross-section limits for generic Gaussian-shaped resonances are extracted. Excited quarks hypothesized in quark compositeness models and high-mass states predicted in quantum black hole models with extra dimensions are also examined in the analysis. The observed data exclude, at 95% confidence level, the mass range below 5.3 TeV for excited quarks and 7.1 TeV (4.4 TeV) for quantum black holes in the Arkani-Hamed–Dimopoulos–Dvali (Randall–Sundrum) model with six (one) extra dimensions.

  4. Communication Limits Due to Photon-Detector Jitter

    NASA Technical Reports Server (NTRS)

    Moision, Bruce E.; Farr, William H.

    2008-01-01

    A theoretical and experimental study was conducted of the limit imposed by photon-detector jitter on the capacity of a pulse-position-modulated optical communication system in which the receiver operates in a photon-counting (weak-signal) regime. Photon-detector jitter is a random delay between impingement of a photon and generation of an electrical pulse by the detector. In the study, jitter statistics were computed from jitter measurements made on several photon detectors. The probability density of jitter was mathematically modeled by use of a weighted sum of Gaussian functions. Parameters of the model were adjusted to fit histograms representing the measured-jitter statistics. Likelihoods of assigning detector-output pulses to correct pulse time slots in the presence of jitter were derived and used to compute channel capacities and corresponding losses due to jitter. It was found that the loss, expressed as the ratio between the signal power needed to achieve a specified capacity in the presence of jitter and that needed to obtain the same capacity in the absence of jitter, is well approximated as a quadratic function of the standard deviation of the jitter in units of pulse-time-slot duration.
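
    One way to realize the "weighted sum of Gaussians" jitter model is an ordinary Gaussian-mixture fit to the measured delay samples. The sketch below uses synthetic delays and an arbitrary two-component mixture, with scikit-learn assumed available (the detector measurements themselves are not reproduced here):

    ```python
    # Fit a Gaussian mixture to photon-detector jitter samples and read off
    # the component weights, means, and standard deviations.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    jitter_ps = np.concatenate([rng.normal(0, 40, 8000),      # prompt core (made up)
                                rng.normal(120, 90, 2000)])   # hypothetical slow tail

    gmm = GaussianMixture(n_components=2, random_state=0).fit(jitter_ps.reshape(-1, 1))
    weights = gmm.weights_
    means = gmm.means_.ravel()
    sigmas = np.sqrt(gmm.covariances_.ravel())
    ```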

  5. Vortex reconnection rate, and loop birth rate, for a random wavefield

    NASA Astrophysics Data System (ADS)

    Hannay, J. H.

    2017-04-01

    A time dependent, complex scalar wavefield in three dimensions contains curved zero lines, wave ‘vortices’, that move around. From time to time pairs of these lines contact each other and ‘reconnect’ in a well studied manner, and at other times tiny loops of new line appear from nowhere (births) and grow, or the reverse, existing loops shrink and disappear (deaths). These three types are known to be the only generic events. Here the average rate of their occurrences per unit volume is calculated exactly for a Gaussian random wavefield that has isotropic, stationary statistics, arising from a superposition of an infinity of plane waves in different directions. A simplifying ‘axis fixing’ technique is introduced to achieve this. The resulting formulas are proportional to the standard deviation of angular frequencies, and depend in a simple way on the second and fourth moments of the power spectrum of the plane waves. Reconnections turn out to be more common than births and deaths combined. As an expository preliminary, the case of two dimensions, where the vortices are points, is studied and the average rate of pair creation (and likewise destruction) per unit area is calculated.
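
    The quoted rate formulas need only a few spectral ingredients. The sketch below evaluates the second and fourth moments of a purely illustrative isotropic power spectrum and the standard deviation of the angular frequencies for a simple non-dispersive relation ω = ck, an assumption made here for concreteness rather than anything taken from the paper:

    ```python
    # Second and fourth moments of a placeholder power spectrum P(k), plus the
    # standard deviation of angular frequencies assuming omega = c * k.
    import numpy as np

    k = np.linspace(0.01, 10.0, 2000)            # wavenumber grid
    power = np.exp(-((k - 3.0) ** 2) / 2.0)      # hypothetical isotropic P(k)

    def moment(n):
        return np.sum(k ** n * power) / np.sum(power)

    k2, k4 = moment(2), moment(4)                # the moments entering the rate formulas
    c = 1.0
    sigma_omega = c * np.sqrt(moment(2) - moment(1) ** 2)   # std of angular frequency
    ```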

  6. Combinatorial approach toward high-throughput analysis of direct methanol fuel cells.

    PubMed

    Jiang, Rongzhong; Rong, Charles; Chu, Deryn

    2005-01-01

    A 40-member array of direct methanol fuel cells (with stationary fuel and convective air supplies) was generated by electrically connecting the fuel cells in series. High-throughput analysis of these fuel cells was realized by fast screening of voltages between the two terminals of a fuel cell at constant current discharge. A large number of voltage-current curves (200) were obtained by screening the voltages through multiple small-current steps. A Gaussian distribution was used to statistically analyze this large set of experimental data. The standard deviation (σ) of the voltages of these fuel cells increased linearly with discharge current. The voltage-current curves at various fuel concentrations were simulated with an empirical equation of voltage versus current and a linear equation of σ versus current. The simulated voltage-current curves fitted the experimental data well. With increasing methanol concentration from 0.5 to 4.0 M, the Tafel slope of the voltage-current curves (at σ = 0.0) changed from 28 to 91 mV·dec⁻¹, the cell resistance from 2.91 to 0.18 Ω, and the power output from 3 to 18 mW·cm⁻².
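
    A sketch of the statistical treatment described above: at each discharge current the 40 cell voltages are summarized by a mean and a standard deviation, and σ is then modeled as a linear function of current (all numbers below are synthetic placeholders, not measured data):

    ```python
    # Mean and standard deviation of a 40-cell voltage array at each current
    # step, followed by a linear fit of sigma versus current.
    import numpy as np

    rng = np.random.default_rng(7)
    currents = np.linspace(5, 100, 20)           # discharge current steps (arbitrary units)
    voltages = np.array([0.5 - 0.002 * i + rng.normal(0, 0.001 + 2e-4 * i, 40)
                         for i in currents])     # shape (20, 40): hypothetical readings

    mean_v = voltages.mean(axis=1)
    sigma_v = voltages.std(axis=1, ddof=1)
    slope, intercept = np.polyfit(currents, sigma_v, 1)   # sigma grows ~linearly with current
    ```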

  7. Detail-enhanced multimodality medical image fusion based on gradient minimization smoothing filter and shearing filter.

    PubMed

    Liu, Xingbin; Mei, Wenbo; Du, Huiqian

    2018-02-13

    In this paper, a detail-enhanced multimodality medical image fusion algorithm is proposed by using the proposed multi-scale joint decomposition framework (MJDF) and shearing filter (SF). The MJDF, constructed with a gradient minimization smoothing filter (GMSF) and a Gaussian low-pass filter (GLF), is used to decompose source images into low-pass layers, edge layers, and detail layers at multiple scales. In order to highlight the detail information in the fused image, the edge layer and the detail layer in each scale are combined with weights into a detail-enhanced layer. Because a directional filter is effective in capturing salient information, SF is applied to the detail-enhanced layer to extract geometrical features and obtain directional coefficients. A visual saliency map-based fusion rule is designed for fusing the low-pass layers, and the sum of standard deviation is used as the activity level measurement for directional coefficient fusion. The final fusion result is obtained by synthesizing the fused low-pass layers and directional coefficients. Experimental results show that the proposed method, with shift-invariance, directional selectivity, and a detail-enhanced property, is efficient in preserving and enhancing detail information of multimodality medical images. Graphical abstract: detailed implementation of the proposed medical image fusion algorithm.

  8. GOCI image enhancement using an MTF compensation technique for coastal water applications.

    PubMed

    Oh, Eunsong; Choi, Jong-Kuk

    2014-11-03

    The Geostationary Ocean Color Imager (GOCI) is the first optical sensor in geostationary orbit for monitoring the ocean environment around the Korean Peninsula. This paper discusses on-orbit modulation transfer function (MTF) estimation with the pulse-source method and its compensation results for the GOCI. Additionally, by analyzing the relationship between the MTF compensation effect and the accuracy of the secondary ocean product, we confirmed the optimal MTF compensation parameter for enhancing image quality without variation in the accuracy. In this study, MTF assessment was performed using a natural target because the GOCI system has a spatial resolution of 500 m. For MTF compensation with the Wiener filter, we fitted a point spread function with a Gaussian curve controlled by a standard deviation value (σ). After a parametric analysis for finding the optimal degradation model, the σ value of 0.4 was determined to be an optimal indicator. Finally, the MTF value was enhanced from 0.1645 to 0.2152 without degradation of the accuracy of the ocean color product. Enhanced GOCI images by MTF compensation are expected to recognize small-scale ocean products in coastal areas with sharpened geometric performance.
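
    A sketch of the compensation step under the stated model, assuming a Gaussian point spread function with σ = 0.4 pixel and a simple Fourier-domain Wiener filter (the noise-to-signal constant k below is a tuning assumption, not a GOCI calibration value):

    ```python
    # Build a Gaussian PSF and apply a Wiener-type inverse filter in the
    # Fourier domain to sharpen an image.
    import numpy as np

    def gaussian_psf(shape, sigma=0.4):
        y, x = np.indices(shape)
        cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
        psf = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma ** 2))
        return psf / psf.sum()

    def wiener_restore(image, sigma=0.4, k=1e-2):
        psf = np.fft.ifftshift(gaussian_psf(image.shape, sigma))
        H = np.fft.fft2(psf)
        W = np.conj(H) / (np.abs(H) ** 2 + k)          # Wiener transfer function
        return np.real(np.fft.ifft2(np.fft.fft2(image) * W))
    ```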

  9. Effects of Sampling and Spatio/Temporal Granularity in Traffic Monitoring on Anomaly Detectability

    NASA Astrophysics Data System (ADS)

    Ishibashi, Keisuke; Kawahara, Ryoichi; Mori, Tatsuya; Kondoh, Tsuyoshi; Asano, Shoichiro

    We quantitatively evaluate how sampling and spatio/temporal granularity in traffic monitoring affect the detectability of anomalous traffic. Those parameters also affect the monitoring burden, so network operators face a trade-off between the monitoring burden and detectability and need to know the optimal parameter values. We derive equations to calculate the false positive ratio and false negative ratio for given values of the sampling rate, granularity, statistics of normal traffic, and volume of anomalies to be detected. Specifically, assuming that the normal traffic has a Gaussian distribution, which is parameterized by its mean and standard deviation, we analyze how sampling and monitoring granularity change these distribution parameters. This analysis is based on observation of the backbone traffic, which exhibits spatially uncorrelated and temporally long-range dependence. Then we derive the equations for detectability. With those equations, we can answer the practical questions that arise in actual network operations: what sampling rate to set to find a given volume of anomaly, or, if that sampling rate is too high for actual operation, what granularity is optimal to find the anomaly for a given lower limit of the sampling rate.
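
    An illustrative version of the trade-off (a simple threshold detector, not the paper's exact equations): normal traffic is modeled as N(μ, σ), an anomaly adds a volume v to the measured count, and an interval is flagged when the count exceeds μ + cσ.

    ```python
    # False positive and false negative ratios for a Gaussian threshold
    # detector; coarser granularity or lower sampling rescales mu and sigma
    # and therefore shifts both ratios.
    from scipy.stats import norm

    def fpr_fnr(mu, sigma, v, c):
        thresh = mu + c * sigma
        fpr = norm.sf(thresh, loc=mu, scale=sigma)         # normal traffic exceeds threshold
        fnr = norm.cdf(thresh, loc=mu + v, scale=sigma)    # anomalous traffic stays below it
        return fpr, fnr

    print(fpr_fnr(mu=1000.0, sigma=50.0, v=200.0, c=3.0))  # made-up traffic statistics
    ```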

  10. Sub-pixel localisation of passive micro-coil fiducial markers in interventional MRI.

    PubMed

    Rea, Marc; McRobbie, Donald; Elhawary, Haytham; Tse, Zion T H; Lamperth, Michael; Young, Ian

    2009-04-01

    Electromechanical devices enable increased accuracy in surgical procedures, and the recent development of MRI-compatible mechatronics permits the use of MRI for real-time image guidance. Integrated imaging of resonant micro-coil fiducials provides an accurate method of tracking devices in a scanner with increased flexibility compared to gradient tracking. Here we report on the ability of ten different image-processing algorithms to track micro-coil fiducials with sub-pixel accuracy. Five algorithms: maximum pixel, barycentric weighting, linear interpolation, quadratic fitting and Gaussian fitting were applied both directly to the pixel intensity matrix and to the cross-correlation matrix obtained by 2D convolution with a reference image. Using images of a 3 mm fiducial marker and a pixel size of 1.1 mm, intensity linear interpolation, which calculates the position of the fiducial centre by interpolating the pixel data to find the fiducial edges, was found to give the best performance for minimal computing power; a maximum error of 0.22 mm was observed in fiducial localisation for displacements up to 40 mm. The inherent standard deviation of fiducial localisation was 0.04 mm. This work enables greater accuracy to be achieved in passive fiducial tracking.
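
    As an example of one of the estimators compared above, a barycentric (intensity-weighted centroid) estimate of the fiducial centre can be computed from a small region of interest around the brightest pixel; this sketch is a generic implementation, not the authors' code:

    ```python
    # Sub-pixel fiducial localisation by barycentric weighting within a small
    # window centred on the maximum-intensity pixel.
    import numpy as np

    def barycentric_centre(img, half=3):
        peak = np.unravel_index(np.argmax(img), img.shape)
        r0, c0 = max(peak[0] - half, 0), max(peak[1] - half, 0)
        roi = img[r0:peak[0] + half + 1, c0:peak[1] + half + 1].astype(float)
        rows, cols = np.indices(roi.shape)
        total = roi.sum()
        return r0 + (rows * roi).sum() / total, c0 + (cols * roi).sum() / total
    ```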

  11. Multimode waveguide speckle patterns for compressive sensing.

    PubMed

    Valley, George C; Sefler, George A; Justin Shaw, T

    2016-06-01

    Compressive sensing (CS) of sparse gigahertz-band RF signals using microwave photonics may achieve better performance with smaller size, weight, and power than electronic CS or conventional Nyquist-rate sampling. The critical element in a CS system is the device that produces the CS measurement matrix (MM). We show that passive speckle patterns in multimode waveguides potentially provide excellent MMs for CS. We measure and calculate the MM for a multimode fiber and perform simulations using this MM in a CS system. We show that the speckle MM exhibits the sharp phase transition and coherence properties needed for CS and that these properties are similar to those of a sub-Gaussian MM with the same mean and standard deviation. We calculate the MM for a multimode planar waveguide and find dimensions of the planar guide that give a speckle MM with a performance similar to that of the multimode fiber. The CS simulations show that all measured and calculated speckle MMs exhibit a robust performance with equal amplitude signals that are sparse in time, in frequency, and in wavelets (Haar wavelet transform). The planar waveguide results indicate a path to a microwave photonic integrated circuit for measuring sparse gigahertz-band RF signals using CS.
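
    A sketch of the kind of coherence comparison described above: the mutual coherence (largest normalized inner product between distinct columns) of a measurement matrix versus that of a Gaussian matrix with the same mean and standard deviation. The "speckle" matrix below is a random placeholder standing in for a measured one:

    ```python
    # Mutual coherence of a measurement matrix, compared against a Gaussian
    # matrix matched in mean and standard deviation.
    import numpy as np

    def mutual_coherence(A):
        G = A / np.linalg.norm(A, axis=0, keepdims=True)   # unit-norm columns
        gram = np.abs(G.T @ G)
        np.fill_diagonal(gram, 0.0)
        return gram.max()

    rng = np.random.default_rng(3)
    speckle_mm = rng.random((64, 256))                     # placeholder for a measured speckle MM
    speckle_mm -= speckle_mm.mean(axis=0)                  # remove the per-column DC offset
    gauss_mm = rng.normal(speckle_mm.mean(), speckle_mm.std(), speckle_mm.shape)

    print(mutual_coherence(speckle_mm), mutual_coherence(gauss_mm))
    ```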

  12. Nonlinear optical anisotropy and molecular orientational distribution in poly(p-phenylene benzobisthiazole) Langmuir-Blodgett films

    NASA Astrophysics Data System (ADS)

    Wang, Liming; Wada, Tatsuo; Yuba, Tomoyuki; Kakimoto, Masaaki; Imai, Yoshio; Sasabe, Hiroyuki

    1996-06-01

    The orientational distribution and packing of polymer chains were investigated in poly(p-phenylene benzobisthiazole) (PBT) Langmuir-Blodgett (LB) films by nonresonant third-harmonic generation measurement at a wavelength of 1907 nm. The tensor components of the third-harmonic susceptibility of the PBT LB film deposited at a surface pressure of 50 mN/m were determined to be χ(3)_XXXX = (16.6±2.5)×10⁻¹² and χ(3)_YYYY = (2.0±0.3)×10⁻¹². The large nonlinear optical anisotropy can be explained as a result of highly oriented packing of the polymer chains induced by a flow orientation. A Gaussian distribution function with a standard deviation of σ = 0.40 gives a practical description of the orientational distribution of the PBT polymer chains. A maximum χ(3) value of (26.8±4.4)×10⁻¹² esu is predicted assuming a perfect alignment of polymer chains. The χ(3)_XXXX value increased by a factor of 2 as the surface pressure was raised from 30 to 50 mN/m, mainly due to the increased packing density of the polymer chains, while the orientational degree did not change.

  13. Observation of D⁰-D¯⁰ mixing using the CDF II detector.

    PubMed

    Aaltonen, T; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Apollinari, G; Appel, J A; Arisawa, T; Artikov, A; Asaadi, J; Ashmanskas, W; Auerbach, B; Aurisano, A; Azfar, F; Badgett, W; Bae, T; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Barria, P; Bartos, P; Bauce, M; Bedeschi, F; Behari, S; Bellettini, G; Bellinger, J; Benjamin, D; Beretvas, A; Bhatti, A; Bland, K R; Blumenfeld, B; Bocci, A; Bodek, A; Bortoletto, D; Boudreau, J; Boveia, A; Brigliadori, L; Bromberg, C; Brucken, E; Budagov, J; Budd, H S; Burkett, K; Busetto, G; Bussey, P; Butti, P; Buzatu, A; Calamba, A; Camarda, S; Campanelli, M; Canelli, F; Carls, B; Carlsmith, D; Carosi, R; Carrillo, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavaliere, V; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Cho, K; Chokheli, D; Clark, A; Clarke, C; Convery, M E; Conway, J; Corbo, M; Cordelli, M; Cox, C A; Cox, D J; Cremonesi, M; Cruz, D; Cuevas, J; Culbertson, R; d'Ascenzo, N; Datta, M; de Barbaro, P; Demortier, L; Deninno, M; D'Errico, M; Devoto, F; Di Canto, A; Di Ruzza, B; Dittmann, J R; Donati, S; D'Onofrio, M; Dorigo, M; Driutti, A; Ebina, K; Edgar, R; Elagin, A; Erbacher, R; Errede, S; Esham, B; Farrington, S; Fernández Ramos, J P; Field, R; Flanagan, G; Forrest, R; Franklin, M; Freeman, J C; Frisch, H; Funakoshi, Y; Galloni, C; Garfinkel, A F; Garosi, P; Gerberich, H; Gerchtein, E; Giagu, S; Giakoumopoulou, V; Gibson, K; Ginsburg, C M; Giokaris, N; Giromini, P; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldin, D; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González López, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gramellini, E; Grinstein, S; Grosso-Pilcher, C; Group, R C; Guimaraes da Costa, J; Hahn, S R; Han, J Y; Happacher, F; Hara, K; Hare, M; Harr, R F; Harrington-Taber, T; Hatakeyama, K; Hays, C; Heinrich, J; Herndon, M; Hocker, A; Hong, Z; Hopkins, W; Hou, S; Hughes, R E; Husemann, U; Hussein, M; Huston, J; Introzzi, G; Iori, M; Ivanov, A; James, E; Jang, D; Jayatilaka, B; Jeon, E J; Jindariani, S; Jones, M; Joo, K K; Jun, S Y; Junk, T R; Kambeitz, M; Kamon, T; Karchin, P E; Kasmi, A; Kato, Y; Ketchum, W; Keung, J; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S H; Kim, S B; Kim, Y J; Kim, Y K; Kimura, N; Kirby, M; Knoepfel, K; Kondo, K; Kong, D J; Konigsberg, J; Kotwal, A V; Kreps, M; Kroll, J; Kruse, M; Kuhr, T; Kulkarni, N; Kurata, M; Laasanen, A T; Lammel, S; Lancaster, M; Lannon, K; Latino, G; Lee, H S; Lee, J S; Leo, S; Leone, S; Lewis, J D; Limosani, A; Lipeles, E; Lister, A; Liu, H; Liu, Q; Liu, T; Lockwitz, S; Loginov, A; Lucchesi, D; Lucà, A; Lueck, J; Lujan, P; Lukens, P; Lungu, G; Lys, J; Lysak, R; Madrak, R; Maestro, P; Malik, S; Manca, G; Manousakis-Katsikakis, A; Marchese, L; Margaroli, F; Marino, P; Martínez, M; Matera, K; Mattson, M E; Mazzacane, A; Mazzanti, P; McNulty, R; Mehta, A; Mehtala, P; Mesropian, C; Miao, T; Mietlicki, D; Mitra, A; Miyake, H; Moed, S; Moggi, N; Moon, C S; Moore, R; Morello, M J; Mukherjee, A; Muller, Th; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Naganoma, J; Nakano, I; Napier, A; Nett, J; Neu, C; Nigmanov, T; Nodulman, L; Noh, S Y; Norniella, O; Oakes, L; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Orava, R; Ortolan, L; Pagliarone, C; Palencia, E; Palni, P; Papadimitriou, V; Parker, W; Pauletta, G; Paulini, M; Paus, C; Phillips, T J; Piacentino, G; Pianori, E; Pilot, J; Pitts, K; Plager, C; Pondrom, L; Poprocki, S; Potamianos, K; Pranko, A; Prokoshin, F; Ptohos, F; 
Punzi, G; Ranjan, N; Redondo Fernández, I; Renton, P; Rescigno, M; Rimondi, F; Ristori, L; Robson, A; Rodriguez, T; Rolli, S; Ronzani, M; Roser, R; Rosner, J L; Ruffini, F; Ruiz, A; Russ, J; Rusu, V; Sakumoto, W K; Sakurai, Y; Santi, L; Sato, K; Saveliev, V; Savoy-Navarro, A; Schlabach, P; Schmidt, E E; Schwarz, T; Scodellaro, L; Scuri, F; Seidel, S; Seiya, Y; Semenov, A; Sforza, F; Shalhout, S Z; Shears, T; Shepard, P F; Shimojima, M; Shochet, M; Shreyber-Tecker, I; Simonenko, A; Sliwa, K; Smith, J R; Snider, F D; Song, H; Sorin, V; St Denis, R; Stancari, M; Stentz, D; Strologas, J; Sudo, Y; Sukhanov, A; Suslov, I; Takemasa, K; Takeuchi, Y; Tang, J; Tecchio, M; Teng, P K; Thom, J; Thomson, E; Thukral, V; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Totaro, P; Trovato, M; Ukegawa, F; Uozumi, S; Vázquez, F; Velev, G; Vellidis, C; Vernieri, C; Vidal, M; Vilar, R; Vizán, J; Vogel, M; Volpi, G; Wagner, P; Wallny, R; Wang, S M; Waters, D; Wester, W C; Whiteson, D; Wicklund, A B; Wilbur, S; Williams, H H; Wilson, J S; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, H; Wright, T; Wu, X; Wu, Z; Yamamoto, K; Yamato, D; Yang, T; Yang, U K; Yang, Y C; Yao, W-M; Yeh, G P; Yi, K; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Zanetti, A M; Zeng, Y; Zhou, C; Zucchelli, S

    2013-12-06

    We measure the time dependence of the ratio of decay rates for D⁰→K⁺π⁻ to the Cabibbo-favored decay D⁰→K⁻π⁺. The charge conjugate decays are included. A signal of 3.3×10⁴ D*⁺→π⁺D⁰, D⁰→K⁺π⁻ decays is obtained with D⁰ proper decay times between 0.75 and 10 mean D⁰ lifetimes. The data were recorded with the CDF II detector at the Fermilab Tevatron and correspond to an integrated luminosity of 9.6 fb⁻¹ for pp¯ collisions at √s = 1.96 TeV. Assuming CP conservation, we search for D⁰-D¯⁰ mixing and measure the mixing parameters to be R_D = (3.51±0.35)×10⁻³, y' = (4.3±4.3)×10⁻³, and x'² = (0.08±0.18)×10⁻³. We report Bayesian probability intervals in the x'²-y' plane and find that the significance of excluding the no-mixing hypothesis is equivalent to 6.1 Gaussian standard deviations, providing the second observation of D⁰-D¯⁰ mixing from a single experiment.

  14. Medical Image Fusion Based on Feature Extraction and Sparse Representation

    PubMed Central

    Wei, Gao; Zongxi, Song

    2017-01-01

    As a novel multiscale geometric analysis tool, sparse representation has shown many advantages over conventional image representation methods. However, the standard sparse representation does not take intrinsic structure and its time complexity into consideration. In this paper, a new fusion mechanism for multimodal medical images based on sparse representation and decision map is proposed to deal with these problems simultaneously. Three decision maps are designed, including a structure information map (SM) and an energy information map (EM) as well as a structure and energy map (SEM), to make the results preserve more energy and edge information. SM contains the local structure feature captured by the Laplacian of a Gaussian (LOG), and EM contains the energy and energy distribution feature detected by the mean square deviation. The decision map is added to the normal sparse representation based method to improve the speed of the algorithm. The proposed approach also improves the quality of the fused results by enhancing the contrast and preserving more structure and energy information from the source images. The experimental results of 36 groups of CT/MR, MR-T1/MR-T2, and CT/PET images demonstrate that the method based on SR and SEM outperforms five state-of-the-art methods. PMID:28321246
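
    A sketch of the two decision-map ingredients named above, using standard SciPy filters (the window size and LoG scale are illustrative, not the paper's settings):

    ```python
    # Structure map via the Laplacian of Gaussian and energy map via the
    # local mean square deviation in a sliding window.
    import numpy as np
    from scipy.ndimage import gaussian_laplace, uniform_filter

    def structure_map(img, sigma=1.0):
        return np.abs(gaussian_laplace(img.astype(float), sigma))

    def energy_map(img, size=7):
        img = img.astype(float)
        mean = uniform_filter(img, size)
        mean_sq = uniform_filter(img ** 2, size)
        return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
    ```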

  15. Observation of the Ξ_b^0 Baryon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aaltonen, T.; /Helsinki Inst. of Phys.; Alvarez Gonzalez, B.

    The observation of the bottom, strange baryon Ξ_b^0 through the decay chain Ξ_b^0 → Ξ_c^+ π^-, where Ξ_c^+ → Ξ^- π^+ π^+, Ξ^- → Λ π^-, and Λ → p π^-, is reported using data corresponding to an integrated luminosity of 4.2 fb⁻¹ from pp̄ collisions at √s = 1.96 TeV recorded with the Collider Detector at Fermilab. A signal of 25.3 (+5.6/-5.4) candidates is observed whose probability of arising from a background fluctuation is 3.6 × 10⁻¹², corresponding to 6.8 Gaussian standard deviations. The Ξ_b^0 mass is measured to be 5787.8 ± 5.0(stat) ± 1.3(syst) MeV/c². In addition, the Ξ_b^- is observed through the process Ξ_b^- → Ξ_c^0 π^-, where Ξ_c^0 → Ξ^- π^+, Ξ^- → Λ π^-, and Λ → p π^-.
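
    The quoted significance is just the background-fluctuation probability re-expressed as a normal quantile; assuming the usual one-sided convention, the conversion reproduces the stated value to within rounding:

    ```python
    # Convert a background-fluctuation p-value into "Gaussian standard
    # deviations" and back (one-sided convention assumed).
    from scipy.stats import norm

    def sigma_from_pvalue(p):
        return norm.isf(p)

    def pvalue_from_sigma(n_sigma):
        return norm.sf(n_sigma)

    print(sigma_from_pvalue(3.6e-12))   # ~6.8-6.9, cf. the 6.8 sigma quoted above
    ```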

  16. Observation of the Ξ_b^0 Baryon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aaltonen, T.; Brucken, E.; Devoto, F.

    The observation of the bottom, strange baryon Ξ_b^0 through the decay chain Ξ_b^0 → Ξ_c^+ π^-, where Ξ_c^+ → Ξ^- π^+ π^+, Ξ^- → Λ π^-, and Λ → p π^-, is reported by using data corresponding to an integrated luminosity of 4.2 fb⁻¹ from pp̄ collisions at √s = 1.96 TeV recorded with the Collider Detector at Fermilab. A signal of 25.3 (+5.6/-5.4) candidates is observed whose probability of arising from a background fluctuation is 3.6 × 10⁻¹², corresponding to 6.8 Gaussian standard deviations. The Ξ_b^0 mass is measured to be 5787.8 ± 5.0(stat) ± 1.3(syst) MeV/c². In addition, the Ξ_b^- baryon is observed through the process Ξ_b^- → Ξ_c^0 π^-, where Ξ_c^0 → Ξ^- π^+, Ξ^- → Λ π^-, and Λ → p π^-.

  17. Formation of thermochemical laser-induced periodic surface structures on Ti films by a femtosecond IR Gaussian beam: regimes, limiting factors, and optical properties

    NASA Astrophysics Data System (ADS)

    Dostovalov, A. V.; Korolkov, V. P.; Babin, S. A.

    2017-01-01

    The formation of thermochemical laser-induced periodic surface structures (TLIPSS) on 400-nm Ti films deposited onto a glass substrate is investigated under irradiation by a femtosecond laser with a wavelength of 1026 nm, pulse duration of 232 fs, repetition rate of 200 kHz, and different spot sizes of 4-21 μm. The optimal fluence for TLIPSS formation decreases monotonically with increasing spot diameter over this range. It is found that the standard deviation of the TLIPSS period depends significantly on the beam size and reaches approximately 2% when the beam diameter is in the range of 10-21 μm. In addition to TLIPSS formation with the main period slightly smaller than the laser wavelength, an effect of TLIPSS spatial frequency doubling is detected. The optical properties of the TLIPSS (reflection spectrum and diffraction efficiency at different incident angles and polarizations) are investigated and compared with theoretical ones to give a basis for the development of an optical inspection method. The refractive index and absorption coefficient of the oxidized ridges of the TLIPSS are theoretically estimated by simulation of the experimental reflection spectrum in the zeroth diffraction order.

  18. Search for new phenomena in high-mass final states with a photon and a jet from pp collisions at √s = 13 TeV with the ATLAS detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aaboud, M.; Aad, G.; Abbott, B.

    A search is performed for new phenomena in events having a photon with high transverse momentum and a jet collected in 36.7 fb⁻¹ of proton–proton collisions at a centre-of-mass energy of √s = 13 TeV recorded with the ATLAS detector at the Large Hadron Collider. The invariant mass distribution of the leading photon and jet is examined to look for the resonant production of new particles or the presence of new high-mass states beyond the Standard Model. No significant deviation from the background-only hypothesis is observed and cross-section limits for generic Gaussian-shaped resonances are extracted. Excited quarks hypothesized in quark compositeness models and high-mass states predicted in quantum black hole models with extra dimensions are also examined in the analysis. The observed data exclude, at 95% confidence level, the mass range below 5.3 TeV for excited quarks and 7.1 TeV (4.4 TeV) for quantum black holes in the Arkani-Hamed–Dimopoulos–Dvali (Randall–Sundrum) model with six (one) extra dimensions.

  19. 75 FR 67093 - Iceberg Water Deviating From Identity Standard; Temporary Permit for Market Testing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-01

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration [Docket No. FDA-2010-P-0517] Iceberg Water Deviating From Identity Standard; Temporary Permit for Market Testing AGENCY: Food and Drug... from the requirements of the standards of identity issued under section 401 of the Federal Food, Drug...

  20. 78 FR 2273 - Canned Tuna Deviating From Identity Standard; Temporary Permit for Market Testing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-10

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration [Docket No. FDA-2012-P-1189] Canned Tuna Deviating From Identity Standard; Temporary Permit for Market Testing AGENCY: Food and Drug... interstate shipment of experimental packs of food varying from the requirements of standards of identity...

  1. Upgraded FAA Airfield Capacity Model. Volume 2. Technical Description of Revisions

    DTIC Science & Technology

    1981-02-01

    [Extraction fragment from the report's figure and symbol-definition pages; recoverable items: "FIGURE 3-1 TIME AXIS DIAGRAM OF SINGLE RUNWAY OPERATIONS"; t_k, the time at which departure k is released; SIGMAR, the standard deviation of the arrival runway occupancy time (a companion symbol, garbled here, denotes the standard deviation of the interarrival time); SINGLE, a program subroutine whose definition is truncated in the source.]

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Chin-Cheng, E-mail: chen.ccc@gmail.com; Chang, Chang; Mah, Dennis

    Purpose: The spot characteristics for proton pencil beam scanning (PBS) were measured and analyzed over a 16 month period, which included one major site configuration update and six cyclotron interventions. The results provide a reference to establish the quality assurance (QA) frequency and tolerance for proton pencil beam scanning. Methods: A simple treatment plan was generated to produce an asymmetric 9-spot pattern distributed throughout a field of 16 × 18 cm for each of 18 proton energies (100.0–226.0 MeV). The delivered fluence distribution in air was measured using a phosphor screen based CCD camera at three planes perpendicular to the beam line axis (x-ray imaging isocenter and up/down stream 15.0 cm). The measured fluence distributions for each energy were analyzed using in-house programs which calculated the spot sizes and positional deviations of the Gaussian shaped spots. Results: Compared to the spot characteristic data installed into the treatment planning system, the 16-month averaged deviations of the measured spot sizes at the isocenter plane were 2.30% and 1.38% in the IEC gantry x and y directions, respectively. The maximum deviation was 12.87% while the minimum deviation was 0.003%, both at the upstream plane. After the collinearity of the proton and x-ray imaging system isocenters was optimized, the positional deviations of the spots were all within 1.5 mm for all three planes. During the site configuration update, spot positions were found to deviate by 6 mm until the tuning parameters file was properly restored. Conclusions: For this beam delivery system, it is recommended to perform a spot size and position check at least monthly and any time after a database update or cyclotron intervention occurs. A spot size deviation tolerance of <15% can be easily met with this delivery system. Deviations of spot positions were <2 mm at any plane up/down stream 15 cm from the isocenter.

  3. Technical Note: Spot characteristic stability for proton pencil beam scanning.

    PubMed

    Chen, Chin-Cheng; Chang, Chang; Moyers, Michael F; Gao, Mingcheng; Mah, Dennis

    2016-02-01

    The spot characteristics for proton pencil beam scanning (PBS) were measured and analyzed over a 16 month period, which included one major site configuration update and six cyclotron interventions. The results provide a reference to establish the quality assurance (QA) frequency and tolerance for proton pencil beam scanning. A simple treatment plan was generated to produce an asymmetric 9-spot pattern distributed throughout a field of 16 × 18 cm for each of 18 proton energies (100.0-226.0 MeV). The delivered fluence distribution in air was measured using a phosphor screen based CCD camera at three planes perpendicular to the beam line axis (x-ray imaging isocenter and up/down stream 15.0 cm). The measured fluence distributions for each energy were analyzed using in-house programs which calculated the spot sizes and positional deviations of the Gaussian shaped spots. Compared to the spot characteristic data installed into the treatment planning system, the 16-month averaged deviations of the measured spot sizes at the isocenter plane were 2.30% and 1.38% in the IEC gantry x and y directions, respectively. The maximum deviation was 12.87% while the minimum deviation was 0.003%, both at the upstream plane. After the collinearity of the proton and x-ray imaging system isocenters was optimized, the positional deviations of the spots were all within 1.5 mm for all three planes. During the site configuration update, spot positions were found to deviate by 6 mm until the tuning parameters file was properly restored. For this beam delivery system, it is recommended to perform a spot size and position check at least monthly and any time after a database update or cyclotron intervention occurs. A spot size deviation tolerance of <15% can be easily met with this delivery system. Deviations of spot positions were <2 mm at any plane up/down stream 15 cm from the isocenter.

  4. Methods of editing cloud and atmospheric layer affected pixels from satellite data

    NASA Technical Reports Server (NTRS)

    Nixon, P. R.; Wiegand, C. L.; Richardson, A. J.; Johnson, M. P. (Principal Investigator)

    1982-01-01

    Subvisible cirrus clouds (SCi) were easily distinguished in mid-infrared (MIR) TIROS-N daytime data from south Texas and northeast Mexico. The MIR (3.55-3.93 micrometer) pixel digital count means of the SCi affected areas were more than 3.5 standard deviations on the cold side of the scene means. (These standard deviations were made free of the effects of unusual instrument error by factoring out the Ch 3 MIR noise on the basis of detailed examination of noisy and noise-free pixels). SCi affected areas in the IR Ch 4 (10.5-11.5 micrometer) appeared cooler than the general scene, but were not as prominent as in Ch 3, being less than 2 standard deviations from the scene mean. Ch 3 and 4 standard deviations and coefficients of variation are not reliable indicators, by themselves, of the presence of SCi because land features can have similar statistical properties.
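
    The detection rule described here amounts to flagging pixels whose digital counts lie several scene standard deviations from the scene mean. A minimal sketch of that thresholding follows; the synthetic counts, the two-sided test, and the 3.5 SD cutoff direction are illustrative assumptions rather than the study's exact procedure.

```python
# Minimal sketch (synthetic counts, assumed two-sided threshold): flag pixels
# whose digital counts deviate from the scene mean by more than k scene SDs.
import numpy as np

def flag_outlier_pixels(counts, k=3.5):
    """Boolean mask of pixels more than k scene standard deviations from the mean."""
    mean, sd = counts.mean(), counts.std()
    return np.abs(counts - mean) > k * sd

rng = np.random.default_rng(1)
scene = rng.normal(150.0, 4.0, size=(256, 256))   # stand-in for Ch 3 digital counts
scene[40:60, 40:60] -= 20.0                       # a cold, SCi-like patch
mask = flag_outlier_pixels(scene)
print("flagged fraction of scene:", round(mask.mean(), 4))
```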

  5. A Taxonomy of Delivery and Documentation Deviations During Delivery of High-Fidelity Simulations.

    PubMed

    McIvor, William R; Banerjee, Arna; Boulet, John R; Bekhuis, Tanja; Tseytlin, Eugene; Torsher, Laurence; DeMaria, Samuel; Rask, John P; Shotwell, Matthew S; Burden, Amanda; Cooper, Jeffrey B; Gaba, David M; Levine, Adam; Park, Christine; Sinz, Elizabeth; Steadman, Randolph H; Weinger, Matthew B

    2017-02-01

    We developed a taxonomy of simulation delivery and documentation deviations noted during a multicenter, high-fidelity simulation trial that was conducted to assess practicing physicians' performance. Eight simulation centers sought to implement standardized scenarios over 2 years. Rules, guidelines, and detailed scenario scripts were established to facilitate reproducible scenario delivery; however, pilot trials revealed deviations from those rubrics. A taxonomy with hierarchically arranged terms that define a lack of standardization of simulation scenario delivery was then created to aid educators and researchers in assessing and describing their ability to reproducibly conduct simulations. Thirty-six types of delivery or documentation deviations were identified from the scenario scripts and study rules. Using a Delphi technique and open card sorting, simulation experts formulated a taxonomy of high-fidelity simulation execution and documentation deviations. The taxonomy was iteratively refined and then tested by 2 investigators not involved with its development. The taxonomy has 2 main classes, simulation center deviation and participant deviation, which are further subdivided into as many as 6 subclasses. Inter-rater classification agreement using the taxonomy was 74% or greater for each of the 7 levels of its hierarchy. Cohen kappa calculations confirmed substantial agreement beyond that expected by chance. All deviations were classified within the taxonomy. This is a useful taxonomy that standardizes terms for simulation delivery and documentation deviations, facilitates quality assurance in scenario delivery, and enables quantification of the impact of deviations upon simulation-based performance assessment.
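
    Inter-rater agreement beyond chance is summarized here with Cohen kappa. A minimal sketch of that calculation for two raters is given below; the example labels reuse the taxonomy's two main classes, but the ratings themselves are invented for illustration.

```python
# Minimal sketch: Cohen's kappa for two raters classifying the same deviations.
import numpy as np

def cohens_kappa(rater_a, rater_b):
    labels = sorted(set(rater_a) | set(rater_b))
    idx = {lab: i for i, lab in enumerate(labels)}
    n = len(rater_a)
    table = np.zeros((len(labels), len(labels)))
    for a, b in zip(rater_a, rater_b):
        table[idx[a], idx[b]] += 1          # confusion table of paired ratings
    p_obs = np.trace(table) / n             # observed agreement
    p_exp = (table.sum(axis=1) @ table.sum(axis=0)) / n ** 2  # chance agreement
    return (p_obs - p_exp) / (1.0 - p_exp)

a = ["center", "participant", "center", "center", "participant", "center"]
b = ["center", "participant", "participant", "center", "participant", "center"]
print(f"kappa = {cohens_kappa(a, b):.2f}")
```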

  6. Simulation-based estimation of mean and standard deviation for meta-analysis via Approximate Bayesian Computation (ABC).

    PubMed

    Kwon, Deukwoo; Reis, Isildinha M

    2015-08-12

    When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC performs well. However, the Wan et al. method is best for estimating standard deviation under normal distribution. In the estimation of the mean, our ABC method is best regardless of assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics such as the posterior mean and 95 % credible interval when Bayesian analysis has been employed.
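
    The essence of rejection ABC in this setting is: draw candidate (mean, SD) pairs from a prior, simulate a study of the reported size, and keep the candidates whose simulated summary statistics fall close to the reported ones. The sketch below illustrates the idea under a normal data model; the priors, tolerance, and the hypothetical reported summaries are assumptions and not the authors' implementation.

```python
# Minimal rejection-ABC sketch (priors and tolerance are assumptions): estimate
# mean and SD of a study that reports only median, minimum, maximum, and n.
import numpy as np

rng = np.random.default_rng(42)
reported = {"median": 17.0, "min": 4.0, "max": 31.0, "n": 50}   # hypothetical study

target = np.array([reported["median"], reported["min"], reported["max"]])
scale = np.abs(target) + 1.0          # crude per-summary scaling

accepted = []
for _ in range(100_000):
    mu = rng.uniform(0.0, 40.0)       # vague prior on the mean (assumption)
    sigma = rng.uniform(0.1, 20.0)    # vague prior on the SD (assumption)
    sim = rng.normal(mu, sigma, reported["n"])
    summ = np.array([np.median(sim), sim.min(), sim.max()])
    if np.sqrt(np.sum(((summ - target) / scale) ** 2)) < 0.3:   # tolerance (assumption)
        accepted.append((mu, sigma))

if accepted:
    post = np.array(accepted)
    print("posterior mean of (mu, sigma):", np.round(post.mean(axis=0), 2))
else:
    print("no acceptances; loosen the tolerance")
```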

  7. Effects of vibration on occupant driving performance under simulated driving conditions.

    PubMed

    Azizan, Amzar; Fard, M; Azari, Michael F; Jazar, Reza

    2017-04-01

    Although much research has been devoted to characterizing the effects of whole-body vibration on seated occupants' comfort, drowsiness induced by vibration has received less attention to date, and few validated measurement methods are available to quantify whole-body vibration-induced drowsiness. Here, the effects of vibration on drowsiness were investigated. Twenty male volunteers were recruited for this experiment. Drowsiness was measured in a driving simulator before and after a 30-min exposure to vibration. Gaussian random vibration with a 1-15 Hz frequency bandwidth was used for excitation. During the driving session, volunteers were required to obey the speed limit of 100 kph and maintain a steady position in the left-hand lane. Lane position deviation, steering angle variability, and speed deviation were recorded and analysed. In addition, volunteers rated their subjective drowsiness with Karolinska Sleepiness Scale (KSS) scores every 5 min. Following 30 min of exposure to vibration, significant increases in lane deviation, steering angle variability, and KSS scores were observed in all volunteers, suggesting an adverse effect of vibration on human alertness. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. On Teaching about the Coefficient of Variation in Introductory Statistics Courses

    ERIC Educational Resources Information Center

    Trafimow, David

    2014-01-01

    The standard deviation is related to the mean by virtue of the coefficient of variation. Teachers of statistics courses can make use of that fact to make the standard deviation more comprehensible for statistics students.
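
    Concretely, CV = s / x̄, so the standard deviation can be read as a fraction of the mean. A one-line illustration:

```python
# The coefficient of variation links the standard deviation to the mean: CV = s / mean.
import numpy as np

x = np.array([12.0, 15.0, 9.0, 14.0, 10.0])
mean, s = x.mean(), x.std(ddof=1)
print(f"mean = {mean:.2f}, s = {s:.2f}, CV = {s / mean:.2%}")  # s relative to the mean
```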

  9. Parametric vs. non-parametric statistics of low resolution electromagnetic tomography (LORETA).

    PubMed

    Thatcher, R W; North, D; Biver, C

    2005-01-01

    This study compared the relative statistical sensitivity of non-parametric and parametric statistics of 3-dimensional current sources as estimated by the EEG inverse solution Low Resolution Electromagnetic Tomography (LORETA). One would expect approximately 5% false positives (classification of a normal as abnormal) at the P < .025 level of probability (two tailed test) and approximately 1% false positives at the P < .005 level. EEG digital samples (2 second intervals sampled 128 Hz, 1 to 2 minutes eyes closed) from 43 normal adult subjects were imported into the Key Institute's LORETA program. We then used the Key Institute's cross-spectrum and the Key Institute's LORETA output files (*.lor) as the 2,394 gray matter pixel representation of 3-dimensional currents at different frequencies. The mean and standard deviation *.lor files were computed for each of the 2,394 gray matter pixels for each of the 43 subjects. Tests of Gaussianity and different transforms were computed in order to best approximate a normal distribution for each frequency and gray matter pixel. The relative sensitivity of parametric vs. non-parametric statistics was compared using a "leave-one-out" cross validation method in which individual normal subjects were withdrawn and then statistically classified as being either normal or abnormal based on the remaining subjects. Log10 transforms approximated Gaussian distribution in the range of 95% to 99% accuracy. Parametric Z score tests at P < .05 cross-validation demonstrated an average misclassification rate of approximately 4.25%, and the range over the 2,394 gray matter pixels was 27.66% to 0.11%. At P < .01 parametric Z score cross-validation false positives were 0.26% and ranged from 6.65% to 0% false positives. The non-parametric Key Institute's t-max statistic at P < .05 had an average misclassification error rate of 7.64% and ranged from 43.37% to 0.04% false positives. The nonparametric t-max at P < .01 had an average misclassification rate of 6.67% and ranged from 41.34% to 0% false positives of the 2,394 gray matter pixels for any cross-validated normal subject. In conclusion, adequate approximation to Gaussian distribution and high cross-validation accuracy can be achieved by the Key Institute's LORETA programs by using a log10 transform and parametric statistics, and parametric normative comparisons had lower false positive rates than the non-parametric tests.
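
    The parametric side of this comparison reduces to log10-transforming the pixel values and computing leave-one-out Z scores of each subject against the mean and standard deviation of the remaining subjects. The sketch below reproduces only that skeleton with synthetic data; the lognormal generator, array shapes, and the plain 1.96 threshold are assumptions, not the Key Institute's software.

```python
# Minimal sketch (synthetic data, assumed threshold): leave-one-out parametric
# Z scores on log10-transformed pixel values across subjects.
import numpy as np

rng = np.random.default_rng(7)
data = rng.lognormal(mean=1.0, sigma=0.4, size=(43, 2394))   # subjects x gray-matter pixels
logged = np.log10(data)                                      # transform toward Gaussianity

false_pos = 0.0
for i in range(logged.shape[0]):
    rest = np.delete(logged, i, axis=0)                      # leave subject i out
    z = (logged[i] - rest.mean(axis=0)) / rest.std(axis=0, ddof=1)
    false_pos += np.mean(np.abs(z) > 1.96)                   # two-tailed P < .05

print("average misclassification rate:", round(false_pos / logged.shape[0], 4))
```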

  10. Inference of multi-Gaussian property fields by probabilistic inversion of crosshole ground penetrating radar data using an improved dimensionality reduction

    NASA Astrophysics Data System (ADS)

    Hunziker, Jürg; Laloy, Eric; Linde, Niklas

    2016-04-01

    Deterministic inversion procedures can often explain field data, but they only deliver one final subsurface model that depends on the initial model and regularization constraints. This leads to poor insights about the uncertainties associated with the inferred model properties. In contrast, probabilistic inversions can provide an ensemble of model realizations that accurately span the range of possible models that honor the available calibration data and prior information allowing a quantitative description of model uncertainties. We reconsider the problem of inferring the dielectric permittivity (directly related to radar velocity) structure of the subsurface by inversion of first-arrival travel times from crosshole ground penetrating radar (GPR) measurements. We rely on the DREAM_(ZS) algorithm that is a state-of-the-art Markov chain Monte Carlo (MCMC) algorithm. Such algorithms need several orders of magnitude more forward simulations than deterministic algorithms and often become infeasible in high parameter dimensions. To enable high-resolution imaging with MCMC, we use a recently proposed dimensionality reduction approach that allows reproducing 2D multi-Gaussian fields with far fewer parameters than a classical grid discretization. We consider herein a dimensionality reduction from 5000 to 257 unknowns. The first 250 parameters correspond to a spectral representation of random and uncorrelated spatial fluctuations while the remaining seven geostatistical parameters are (1) the standard deviation of the data error, (2) the mean and (3) the variance of the relative electric permittivity, (4) the integral scale along the major axis of anisotropy, (5) the anisotropy angle, (6) the ratio of the integral scale along the minor axis of anisotropy to the integral scale along the major axis of anisotropy and (7) the shape parameter of the Matérn function. The latter essentially defines the type of covariance function (e.g., exponential, Whittle, Gaussian). We present an improved formulation of the dimensionality reduction, and numerically show how it reduces artifacts in the generated models and provides better posterior estimation of the subsurface geostatistical structure. We next show that the results of the method compare very favorably against previous deterministic and stochastic inversion results obtained at the South Oyster Bacterial Transport Site in Virginia, USA. The long-term goal of this work is to enable MCMC-based full waveform inversion of crosshole GPR data.
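
    The shape parameter mentioned above controls the form of the Matérn covariance. As a small aide, the sketch below evaluates one common parameterization of the Matérn function (an assumption; conventions vary and this is not the paper's code), showing how the shape parameter ν moves between an exponential covariance (ν = 0.5) and smoother, more Gaussian-like behavior.

```python
# One common Matern parameterization (assumed convention): nu = 0.5 gives an
# exponential covariance; larger nu gives smoother, more Gaussian-like fields.
import numpy as np
from scipy.special import gamma, kv

def matern(h, sigma2=1.0, ell=1.0, nu=0.5):
    h = np.atleast_1d(np.asarray(h, dtype=float))
    c = np.full_like(h, sigma2)                 # covariance at zero lag
    nz = h > 0
    arg = np.sqrt(2.0 * nu) * h[nz] / ell
    c[nz] = sigma2 * (2.0 ** (1.0 - nu) / gamma(nu)) * arg ** nu * kv(nu, arg)
    return c

lags = np.linspace(0.0, 5.0, 6)
print("nu=0.5 :", np.round(matern(lags, nu=0.5), 3))   # exponential covariance
print("nu=2.5 :", np.round(matern(lags, nu=2.5), 3))   # smoother covariance
```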

  11. Gaussian Process Regression for Uncertainty Estimation on Ecosystem Data

    NASA Astrophysics Data System (ADS)

    Menzer, O.; Moffat, A.; Lasslop, G.; Reichstein, M.

    2011-12-01

    The flow of carbon between terrestrial ecosystems and the atmosphere is mainly driven by nonlinear, complex and time-lagged processes. Understanding the associated ecosystem responses and climatic feedbacks is a key challenge regarding climate change questions such as increasing atmospheric CO2 levels. Usually, the underlying relationships are implemented in models as prescribed functions which interlink numerous meteorological, radiative and gas exchange variables. In contrast, supervised Machine Learning algorithms, such as Artificial Neural Networks or Gaussian Processes, allow for an insight into the relationships directly from a data perspective. Micrometeorological, high resolution measurements at flux towers of the FLUXNET observational network are an essential tool for obtaining quantifications of the ecosystem variables, as they continuously record e.g. CO2 exchange, solar radiation and air temperature. In order to facilitate the investigation of the interactions and feedbacks between these variables, several challenging data properties need to be taken into account: noisy, multidimensional and incomplete (Moffat, Accepted). The task of estimating uncertainties in such micrometeorological measurements can be addressed by Gaussian Processes (GPs), a modern nonparametric method for nonlinear regression. The GP approach has recently been shown to be a powerful modeling tool, regardless of the input dimensionality, the degree of nonlinearity and the noise level (Rasmussen and Williams, 2006). Heteroscedastic Gaussian Processes (HGPs) are a specialized GP method for data with a varying, inhomogeneous noise variance (Goldberg et al., 1998; Kersting et al., 2007), as usually observed in CO2 flux measurements (Richardson et al., 2006). Here, we showed by an evaluation of the HGP performance in several artificial experiments and a comparison to existing nonlinear regression methods, that their outstanding ability is to capture measurement noise levels, concurrently providing reasonable data fits under relatively few assumptions. On the basis of incomplete, half-hourly measured ecosystem data, a HGP was trained to model NEP (Net Ecosystem Production), only with the drivers PPFD (Photosynthetic Photon Flux Density) and Air Temperature. Time information was added to account for the autocorrelation in the flux measurements. Provided with a gap-filled, meteorological time series, NEP and the corresponding random error estimates can then be predicted empirically at high temporal resolution. We report uncertainties in annual sums of CO2 exchange at two flux tower sites in Hainich, Germany and Hesse, France. Similar noise patterns, but different magnitudes between sites were detected, with annual random error estimates of +/- 14.1 gCm^-2yr^-1 and +/- 23.5 gCm^-2yr^-1, respectively, for the year 2001. Existing models calculate uncertainties by evaluating the standard deviation of the model residuals. A comparison to the methods of Reichstein et al. (2005) and Lasslop et al. (2008) showed confidence both in the predictive uncertainties and the annual sums modeled with the HGP approach.
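
    To make the regression idea concrete, here is a minimal Gaussian process regression sketch with an RBF kernel and a single, homoscedastic noise level, implemented directly in numpy. The heteroscedastic extension discussed above is deliberately not implemented, and the data, kernel hyperparameters, and noise level are illustrative assumptions.

```python
# Minimal GP regression sketch (homoscedastic noise, hand-set hyperparameters;
# not the heteroscedastic GP of the abstract).
import numpy as np

def rbf(xa, xb, length=1.5, amp=1.0):
    d2 = (xa[:, None] - xb[None, :]) ** 2
    return amp ** 2 * np.exp(-0.5 * d2 / length ** 2)

rng = np.random.default_rng(3)
x_train = np.sort(rng.uniform(0.0, 10.0, 40))
y_train = np.sin(x_train) + rng.normal(0.0, 0.2, x_train.size)   # noisy observations
x_test = np.linspace(0.0, 10.0, 200)

noise = 0.2
K = rbf(x_train, x_train) + noise ** 2 * np.eye(x_train.size)
Ks = rbf(x_test, x_train)
Kss = rbf(x_test, x_test)

mean = Ks @ np.linalg.solve(K, y_train)                # predictive mean
cov = Kss - Ks @ np.linalg.solve(K, Ks.T)              # predictive covariance
sd = np.sqrt(np.clip(np.diag(cov), 0.0, None))         # predictive standard deviation
print("max predictive SD:", round(sd.max(), 3))
```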

  12. A SIMPLE METHOD FOR EVALUATING DATA FROM AN INTERLABORATORY STUDY

    EPA Science Inventory

    Large-scale laboratory-and method-performance studies involving more than about 30 laboratories may be evaluated by calculating the HORRAT ratio for each test sample (HORRAT=[experimentally found among-laboratories relative standard deviation] divided by [relative standard deviat...
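
    The truncated definition above refers to the ratio of the experimentally found among-laboratories relative standard deviation to a predicted relative standard deviation. A minimal sketch follows, assuming the commonly used Horwitz prediction PRSD_R(%) ≈ 2·C^(-0.1505) with C the concentration as a mass fraction; that choice of denominator is an assumption about the usual convention, not a detail given in this record.

```python
# Minimal HORRAT sketch, assuming the Horwitz-predicted reproducibility RSD,
# PRSD_R(%) ~= 2 * C**(-0.1505), with C the analyte concentration as a mass fraction.
def horrat(found_rsd_percent: float, concentration_mass_fraction: float) -> float:
    predicted_rsd_percent = 2.0 * concentration_mass_fraction ** (-0.1505)
    return found_rsd_percent / predicted_rsd_percent

# Example: among-laboratories RSD of 6% found at an analyte level of 1 mg/kg (C = 1e-6).
print(f"HORRAT = {horrat(6.0, 1e-6):.2f}")
```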

  13. Histogram-based quantitative evaluation of endobronchial ultrasonography images of peripheral pulmonary lesion.

    PubMed

    Morikawa, Kei; Kurimoto, Noriaki; Inoue, Takeo; Mineshita, Masamichi; Miyazawa, Teruomi

    2015-01-01

    Endobronchial ultrasonography using a guide sheath (EBUS-GS) is an increasingly common bronchoscopic technique, but currently, no methods have been established to quantitatively evaluate EBUS images of peripheral pulmonary lesions. The purpose of this study was to evaluate whether histogram data collected from EBUS-GS images can contribute to the diagnosis of lung cancer. Histogram-based analyses focusing on the brightness of EBUS images were conducted retrospectively in 60 patients (38 with lung cancer; 22 with inflammatory disease) who had clear EBUS images. For each patient, a 400-pixel region of interest was selected, typically located at a 3- to 5-mm radius from the probe, from EBUS images recorded during bronchoscopy. Histogram height, width, height/width ratio, standard deviation, kurtosis and skewness were investigated as diagnostic indicators. Median histogram height, width, height/width ratio and standard deviation were significantly different between lung cancer and benign lesions (all p < 0.01). With a cutoff value for standard deviation of 10.5, lung cancer could be diagnosed with an accuracy of 81.7%. Other characteristics investigated were inferior when compared to histogram standard deviation. Histogram standard deviation appears to be the most useful characteristic for diagnosing lung cancer using EBUS images. © 2015 S. Karger AG, Basel.
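
    The descriptors used here are ordinary histogram statistics of the region-of-interest brightness values. A minimal sketch with a synthetic 400-pixel ROI is shown below; the pixel values are invented, and the abstract does not state which side of the 10.5 SD cutoff indicates malignancy, so the code only reports whether the cutoff is exceeded.

```python
# Minimal sketch (synthetic 400-pixel ROI): histogram descriptors of EBUS
# brightness values and the reported standard-deviation cutoff of 10.5.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(5)
roi = rng.normal(120.0, 13.0, 400)        # stand-in for ROI brightness values

sd = roi.std(ddof=1)
descriptors = {
    "std": round(sd, 2),
    "skewness": round(skew(roi), 2),
    "kurtosis": round(kurtosis(roi), 2),  # excess kurtosis
}
print(descriptors)
print("ROI SD exceeds the 10.5 cutoff:", sd > 10.5)
```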

  14. Role of the standard deviation in the estimation of benchmark doses with continuous data.

    PubMed

    Gaylor, David W; Slikker, William

    2004-12-01

    For continuous data, risk is defined here as the proportion of animals with values above a large percentile, e.g., the 99th percentile or below the 1st percentile, for the distribution of values among control animals. It is known that reducing the standard deviation of measurements through improved experimental techniques will result in less stringent (higher) doses for the lower confidence limit on the benchmark dose that is estimated to produce a specified risk of animals with abnormal levels for a biological effect. Thus, a somewhat larger (less stringent) lower confidence limit is obtained that may be used as a point of departure for low-dose risk assessment. It is shown in this article that it is important for the benchmark dose to be based primarily on the standard deviation among animals, s(a), apart from the standard deviation of measurement errors, s(m), within animals. If the benchmark dose is incorrectly based on the overall standard deviation among average values for animals, which includes measurement error variation, the benchmark dose will be overestimated and the risk will be underestimated. The bias increases as s(m) increases relative to s(a). The bias is relatively small if s(m) is less than one-third of s(a), a condition achieved in most experimental designs.
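
    The direction of the bias can be illustrated numerically. Assuming normal distributions and risk defined as the proportion of animals above the control 99th percentile, the mean shift needed to reach a given extra risk scales with the standard deviation used, so substituting the inflated total SD (including measurement error) for the among-animal SD s_a yields a larger, overestimated benchmark dose. The sketch below is only that illustration, under assumed SD values.

```python
# Numeric illustration (assumed normal model and SDs): the mean shift needed so
# that 10% of animals exceed the control 99th percentile scales with the SD used,
# so an SD inflated by measurement error overestimates the benchmark dose.
from scipy.stats import norm

s_a, s_m = 1.0, 0.8                       # among-animal and measurement-error SDs (assumed)
s_total = (s_a ** 2 + s_m ** 2) ** 0.5

z99, z90 = norm.ppf(0.99), norm.ppf(0.90)
shift_correct = s_a * (z99 - z90)         # based on among-animal variation only
shift_biased = s_total * (z99 - z90)      # based on the inflated overall SD
print(f"required shift with s_a    : {shift_correct:.3f}")
print(f"required shift with s_total: {shift_biased:.3f}  (benchmark dose overestimated)")
```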

  15. Statistical models for estimating daily streamflow in Michigan

    USGS Publications Warehouse

    Holtschlag, D.J.; Salehi, Habib

    1992-01-01

    Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviations of lead-l ARIMA and TFN forecast errors were generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors was computed for intervals of length 1 to 40 days. The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l < 32 days. In addition, the composite estimates ensure a gradual transition between periods of estimated and measured flows. Model performance among stations of differing model error magnitudes was compared by computing ratios of the mean standard deviation of the length-l composite errors to the standard deviation of OLSR errors. The mean error ratio for the set of 25 selected stations was less than 1 for intervals l < 32 days. Considering the frequency characteristics of the length of intervals of estimated record in Michigan, the effective mean error ratio for intervals < 30 days was 0.52. Thus, for intervals of estimation of 1 month or less, the error of the composite estimate is substantially lower than the error of the OLSR estimate.

  16. MCTDH on-the-fly: Efficient grid-based quantum dynamics without pre-computed potential energy surfaces

    NASA Astrophysics Data System (ADS)

    Richings, Gareth W.; Habershon, Scott

    2018-04-01

    We present significant algorithmic improvements to a recently proposed direct quantum dynamics method, based upon combining well established grid-based quantum dynamics approaches and expansions of the potential energy operator in terms of a weighted sum of Gaussian functions. Specifically, using a sum of low-dimensional Gaussian functions to represent the potential energy surface (PES), combined with a secondary fitting of the PES using singular value decomposition, we show how standard grid-based quantum dynamics methods can be dramatically accelerated without loss of accuracy. This is demonstrated by on-the-fly simulations (using both standard grid-based methods and multi-configuration time-dependent Hartree) of both proton transfer on the electronic ground state of salicylaldimine and the non-adiabatic dynamics of pyrazine.

  17. The JCMT Transient Survey: Stochastic and Secular Variability of Protostars and Disks In the Submillimeter Region Observed over 18 Months

    NASA Astrophysics Data System (ADS)

    Johnstone, Doug; Herczeg, Gregory J.; Mairs, Steve; Hatchell, Jennifer; Bower, Geoffrey C.; Kirk, Helen; Lane, James; Bell, Graham S.; Graves, Sarah; Aikawa, Yuri; Chen, Huei-Ru Vivien; Chen, Wen-Ping; Kang, Miju; Kang, Sung-Ju; Lee, Jeong-Eun; Morata, Oscar; Pon, Andy; Scicluna, Peter; Scholz, Aleks; Takahashi, Satoko; Yoo, Hyunju; The JCMT Transient Team

    2018-02-01

    We analyze results from the first 18 months of monthly submillimeter monitoring of eight star-forming regions in the JCMT Transient Survey. In our search for stochastic variability in 1643 bright peaks, only the previously identified source, EC 53, shows behavior well above the expected measurement uncertainty. Another four sources (two disks and two protostars) show moderately enhanced standard deviations in brightness, as expected for stochastic variables. For the two protostars, this apparent variability is the result of single epochs that are much brighter than the mean. In our search for secular brightness variations that are linear in time, we measure the fractional brightness change per year for 150 bright peaks, 50 of which are protostellar. The ensemble distribution of slopes is well fit by a normal distribution with σ ∼ 0.023. Most sources are not rapidly brightening or fading at submillimeter wavelengths. Comparison against time-randomized realizations shows that the width of the distribution is dominated by the uncertainty in the individual brightness measurements of the sources. A toy model for secular variability reveals that an underlying Gaussian distribution of linear fractional brightness change σ = 0.005 would be unobservable in the present sample, whereas an underlying distribution with σ = 0.02 is ruled out. Five protostellar sources, 10% of the protostellar sample, are found to have robust secular measures deviating from a constant flux. The sensitivity to secular brightness variations will improve significantly with a sample over a longer time duration, with an improvement by a factor of two expected by the conclusion of our 36-month survey.
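
    The slope-distribution argument can be reproduced in miniature: fit a linear slope to each monthly light curve and compare the spread of fitted slopes with the underlying secular spread. The sketch below does this with synthetic light curves; the 18 epochs and the underlying σ = 0.005 follow the abstract, while the per-epoch measurement noise of 2% is an illustrative assumption.

```python
# Toy sketch (synthetic light curves): fractional brightness change per year from
# a linear fit to each source; the fitted-slope spread exceeds the underlying
# secular spread when measurement noise dominates.
import numpy as np

rng = np.random.default_rng(8)
n_sources, n_epochs = 150, 18
t_years = np.linspace(0.0, 1.5, n_epochs)                 # 18 roughly monthly epochs

true_slopes = rng.normal(0.0, 0.005, n_sources)           # underlying secular change
flux = 1.0 + np.outer(true_slopes, t_years)
flux += rng.normal(0.0, 0.02, flux.shape)                 # assumed 2% measurement noise

fitted = np.array([np.polyfit(t_years, f, 1)[0] for f in flux])
print("underlying slope SD:", 0.005)
print("fitted slope SD    :", round(fitted.std(ddof=1), 4))  # inflated by measurement noise
```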

  18. Statistical characterization of the nonlinear noise in 2.8 Tbit/s PDM-16QAM CO-OFDM system.

    PubMed

    Wang, Zhe; Qiao, Yaojun; Xu, Yanfei; Ji, Yuefeng

    2013-07-29

    We show for the first time, through comprehensive simulations of both uncompensated transmission (UT) and dispersion-managed transmission (DMT) systems, that the statistical distribution of the nonlinear interference (NLI) within the polarization-multiplexed 16-state quadrature amplitude modulation (PM-16QAM) Coherent Optical OFDM (CO-OFDM) system deviates from a Gaussian distribution in the absence of amplified spontaneous emission (ASE) noise. We also observe that the variance of the NLI noise appears to depend in a simple linear way on both the launch power and the logarithm of the transmission distance.

  19. Tirilazad mesylate protects stored erythrocytes against osmotic fragility.

    PubMed

    Epps, D E; Knechtel, T J; Bacznskyj, O; Decker, D; Guido, D M; Buxser, S E; Mathews, W R; Buffenbarger, S L; Lutzke, B S; McCall, J M

    1994-12-01

    The hypoosmotic lysis curve of freshly collected human erythrocytes is consistent with a single Gaussian error function with a mean of 46.5 +/- 0.25 mM NaCl and a standard deviation of 5.0 +/- 0.4 mM NaCl. After extended storage of RBCs under standard blood bank conditions the lysis curve conforms to the sum of two error functions instead of a possible shift in the mean and a broadening of a single error function. Thus, two distinct sub-populations with different fragilities are present instead of a single, broadly distributed population. One population is identical to the freshly collected erythrocytes, whereas the other population consists of osmotically fragile cells. The rate of generation of the new, osmotically fragile, population of cells was used to probe the hypothesis that lipid peroxidation is responsible for the induction of membrane fragility. If it is so, then the antioxidant, tirilazad mesylate (U-74,006f), should protect against this degradation of stored erythrocytes. We found that tirilazad mesylate, at 17 microM (1.5 mol% with respect to membrane lecithin), retards significantly the formation of the osmotically fragile RBCs. Concomitantly, the concentration of free hemoglobin which accumulates during storage is markedly reduced by the drug. Since the presence of the drug also decreases the amount of F2-isoprostanes formed during the storage period, an antioxidant mechanism must be operative. These results demonstrate that tirilazad mesylate significantly decreases the number of fragile erythrocytes formed during storage in the blood bank.
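
    The two-population result corresponds to fitting the fraction of lysed cells versus NaCl concentration with a sum of two Gaussian error-function components. The sketch below shows one such fit on synthetic data; the parameterization, starting values, and bounds are assumptions rather than the authors' procedure.

```python
# Minimal sketch (synthetic data, assumed parameterization): fit the fraction of
# lysed cells vs NaCl concentration with the sum of two Gaussian error functions.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def two_population_lysis(c, frac1, m1, s1, m2, s2):
    # Lysis increases as NaCl decreases, hence the (m - c) orientation.
    return frac1 * norm.cdf((m1 - c) / s1) + (1.0 - frac1) * norm.cdf((m2 - c) / s2)

rng = np.random.default_rng(11)
conc = np.linspace(20.0, 80.0, 25)                                 # mM NaCl
truth = two_population_lysis(conc, 0.7, 46.5, 5.0, 60.0, 6.0)
observed = np.clip(truth + rng.normal(0.0, 0.02, conc.size), 0.0, 1.0)

p0 = (0.5, 45.0, 5.0, 60.0, 5.0)
popt, _ = curve_fit(two_population_lysis, conc, observed, p0=p0,
                    bounds=([0, 20, 1, 20, 1], [1, 80, 20, 80, 20]))
print("fraction in fresh-like population:", round(popt[0], 2))
print("population means (mM NaCl):", round(popt[1], 1), round(popt[3], 1))
```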

  20. QUENCH: A software package for the determination of quenching curves in Liquid Scintillation counting.

    PubMed

    Cassette, Philippe

    2016-03-01

    In Liquid Scintillation Counting (LSC), the scintillating source is part of the measurement system, and its detection efficiency varies with the scintillator used, the vial, and the volume and chemistry of the sample. The detection efficiency is generally determined using a quenching curve, which describes, for a specific radionuclide, the relationship between a quenching index given by the counter and the detection efficiency. A set of quenched LS standard sources is prepared by adding a quenching agent, and the quenching index and detection efficiency are determined for each source. A simple formula is then fitted to the experimental points to define the quenching curve function. The paper describes a software package specifically devoted to the determination of quenching curves with uncertainties. The experimental measurements are described by their quenching index and detection efficiency, with uncertainties on both quantities. Random Gaussian fluctuations of these experimental measurements are sampled, and a polynomial or logarithmic function is fitted to each fluctuation by χ² minimization. This Monte Carlo procedure is repeated many times, and finally the arithmetic mean and the experimental standard deviation of each parameter are calculated, together with the covariances between these parameters. Using these parameters, the detection efficiency corresponding to an arbitrary quenching index within the measured range can be calculated. The associated uncertainty is calculated with the law of propagation of variances, including the covariance terms. Copyright © 2015 Elsevier Ltd. All rights reserved.
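
    The Monte Carlo propagation described above can be sketched in a few lines: resample Gaussian fluctuations of the (quenching index, efficiency) points, refit a simple function each time, then use the mean parameters and their covariance to evaluate the efficiency and its uncertainty at an arbitrary quenching index. The values, polynomial degree, and uncertainty magnitudes below are synthetic illustrations, not the QUENCH package itself.

```python
# Minimal sketch (synthetic quenched standards; not the QUENCH package): Monte
# Carlo propagation of point uncertainties through a polynomial quenching-curve fit.
import numpy as np

rng = np.random.default_rng(2024)
q = np.array([350.0, 400.0, 450.0, 500.0, 550.0])      # quenching index (synthetic)
u_q = np.full_like(q, 5.0)                             # assumed index uncertainties
eff = np.array([0.62, 0.70, 0.78, 0.84, 0.88])         # detection efficiency (synthetic)
u_eff = np.full_like(eff, 0.01)                        # assumed efficiency uncertainties

n_trials, degree = 5000, 2
params = np.empty((n_trials, degree + 1))
for i in range(n_trials):
    q_i = rng.normal(q, u_q)                           # Gaussian fluctuation of each point
    e_i = rng.normal(eff, u_eff)
    params[i] = np.polyfit(q_i, e_i, degree)           # least-squares refit

p_mean = params.mean(axis=0)
p_cov = np.cov(params, rowvar=False)

# Efficiency (with uncertainty) at an arbitrary quenching index within the range:
q0 = 480.0
jac = np.array([q0 ** 2, q0, 1.0])                     # d(eff)/d(params) for a quadratic
eff0 = np.polyval(p_mean, q0)
u_eff0 = np.sqrt(jac @ p_cov @ jac)                    # propagation with covariances
print(f"efficiency at Q = {q0}: {eff0:.3f} +/- {u_eff0:.3f}")
```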
