Sample records for widely distributed deviation

  1. On the Linear Relation between the Mean and the Standard Deviation of a Response Time Distribution

    ERIC Educational Resources Information Center

    Wagenmakers, Eric-Jan; Brown, Scott

    2007-01-01

    Although it is generally accepted that the spread of a response time (RT) distribution increases with the mean, the precise nature of this relation remains relatively unexplored. The authors show that in several descriptive RT distributions, the standard deviation increases linearly with the mean. Results from a wide range of tasks from different…

  2. Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods

    ERIC Educational Resources Information Center

    Zhong, Xiaoling; Yuan, Ke-Hai

    2011-01-01

    In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…

  3. On the linear relation between the mean and the standard deviation of a response time distribution.

    PubMed

    Wagenmakers, Eric-Jan; Brown, Scott

    2007-07-01

    Although it is generally accepted that the spread of a response time (RT) distribution increases with the mean, the precise nature of this relation remains relatively unexplored. The authors show that in several descriptive RT distributions, the standard deviation increases linearly with the mean. Results from a wide range of tasks from different experimental paradigms support a linear relation between RT mean and RT standard deviation. Both R. Ratcliff's (1978) diffusion model and G. D. Logan's (1988) instance theory of automatization provide explanations for this linear relation. The authors identify and discuss 3 specific boundary conditions for the linear law to hold. The law constrains RT models and supports the use of the coefficient of variation to (a) compare variability while controlling for differences in baseline speed of processing and (b) assess whether changes in performance with practice are due to quantitative speedup or qualitative reorganization. Copyright 2007 APA.
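
    The coefficient of variation (CV = SD/mean) highlighted in this abstract is easy to illustrate. Below is a minimal sketch with invented RT numbers: a family of distributions whose SD grows linearly with the mean (here, gamma distributions with a fixed shape parameter) keeps a constant CV across conditions, which is exactly what the linear law permits.

    ```python
    import numpy as np

    # Hypothetical RT samples (ms) from an "easy" and a "hard" condition.
    rng = np.random.default_rng(0)
    easy = rng.gamma(shape=25.0, scale=16.0, size=5000)   # mean ~400 ms
    hard = rng.gamma(shape=25.0, scale=24.0, size=5000)   # mean ~600 ms

    for name, rt in (("easy", easy), ("hard", hard)):
        cv = rt.std(ddof=1) / rt.mean()
        print(f"{name}: mean={rt.mean():.0f} ms, sd={rt.std(ddof=1):.0f} ms, CV={cv:.3f}")
    # A gamma family with fixed shape has SD proportional to the mean, so the
    # CV stays constant across conditions -- the pattern the linear law predicts.
    ```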

  4. Robust LOD scores for variance component-based linkage analysis.

    PubMed

    Blangero, J; Williams, J T; Almasy, L

    2000-01-01

    The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.

  5. Local and Widely Distributed EEG Activity in Schizophrenia With Prevalence of Negative Symptoms.

    PubMed

    Grin-Yatsenko, Vera A; Ponomarev, Valery A; Pronina, Marina V; Poliakov, Yury I; Plotnikova, Irina V; Kropotov, Juri D

    2017-09-01

    We evaluated EEG frequency abnormalities in resting state (eyes closed and eyes open) EEG in a group of chronic schizophrenia patients as compared with healthy subjects. The study included 3 methods of analysis of deviation of EEG characteristics: genuine EEG, current source density (CSD), and group independent component (gIC). All 3 methods have shown that the EEG in schizophrenia patients is characterized by enhanced low-frequency (delta and theta) and high-frequency (beta) activity in comparison with the control group. However, the spatial pattern of differences was dependent on the type of method used. Comparative analysis has shown that increased EEG power in schizophrenia patients apparently concerns both widely spatially distributed components and local components of signal. Furthermore, the observed differences in the delta and theta range can be described mainly by the local components, and those in the beta range mostly by spatially widely distributed ones. The possible nature of the widely distributed activity is discussed.

  6. LD-SPatt: large deviations statistics for patterns on Markov chains.

    PubMed

    Nuel, G

    2004-01-01

    Statistics on Markov chains are widely used for the study of patterns in biological sequences, and several approaches are available for computing them. Approaches based on the central limit theorem (CLT), which yield Gaussian approximations, are among the most popular. Unfortunately, finding a pattern of interest requires dealing with tail events of the distribution, exactly where the CLT approximation is poorest. In this paper, we propose a new approach based on large deviations theory to assess pattern statistics. We first recall theoretical results for empirical mean (level 1) and empirical distribution (level 2) large deviations on Markov chains. We then present applications of these results, focusing on numerical issues. LD-SPatt is the name of the GPL software implementing these algorithms. We compare this approach to several existing ones in terms of complexity and reliability and show that the large deviations approximations are more reliable than the Gaussian approximations, both in absolute values and in terms of ranking, and are at least as reliable as compound Poisson approximations. We finally discuss some further possible improvements and applications of this new method.
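
    The level-1 large-deviation machinery the abstract invokes can be sketched generically. The following is a minimal illustration, not the LD-SPatt implementation: for an additive functional of an ergodic Markov chain, the scaled cumulant generating function is the log spectral radius of a tilted transition matrix, and the rate function is its Legendre transform. The chain, the function f, and all numbers below are invented.

    ```python
    import numpy as np

    def ld_rate_function(P, f, xs, thetas=np.linspace(-5, 5, 2001)):
        """Level-1 large-deviation rate function for an additive functional
        of an ergodic Markov chain, via the Gartner-Ellis theorem.

        Lambda(theta) = log spectral-radius of the tilted matrix
        P_theta[i, j] = P[i, j] * exp(theta * f(j)); then
        I(x) = sup_theta { theta * x - Lambda(theta) } (Legendre transform).
        """
        lam = np.array([
            np.log(np.abs(np.linalg.eigvals(P * np.exp(th * f)[None, :])).max())
            for th in thetas])
        return np.array([np.max(thetas * x - lam) for x in xs])

    # Two-state chain; f is the indicator of state 1 ("pattern occurrence").
    P = np.array([[0.9, 0.1], [0.5, 0.5]])
    f = np.array([0.0, 1.0])
    print(ld_rate_function(P, f, xs=[0.05, 1 / 6, 0.4]))  # ~0 at the mean, 1/6
    ```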

  7. A wide-band fiber optic frequency distribution system employing thermally controlled phase compensation

    NASA Technical Reports Server (NTRS)

    Johnson, Dean; Calhoun, Malcolm; Sydnor, Richard; Lutes, George

    1993-01-01

    An active wide-band fiber optic frequency distribution system employing a thermally controlled phase compensator to stabilize phase variations induced by environmental temperature changes is described. The distribution system utilizes bidirectional dual wavelength transmission to provide optical feedback of induced phase variations of 100 MHz signals propagating along the distribution cable. The phase compensation considered differs from earlier narrow-band phase compensation designs in that it uses a thermally controlled fiber delay coil rather than a VCO or phase modulation to compensate for induced phase variations. Two advantages of the wide-band system over earlier designs are (1) it provides phase compensation for all transmitted frequencies, and (2) the compensation is applied after the optical interface rather than electronically ahead of it as in earlier schemes. Experimental results on the first prototype show that the thermal stabilizer reduces phase variations and Allan deviation by a factor of forty over an equivalent uncompensated fiber optic distribution system.

  8. Deviations from Rayleigh statistics in ultrasonic speckle.

    PubMed

    Tuthill, T A; Sperry, R H; Parker, K J

    1988-04-01

    The statistics of speckle patterns in ultrasound images have potential for tissue characterization. In "fully developed speckle" from many random scatterers, the amplitude is widely recognized as possessing a Rayleigh distribution. This study examines how scattering populations and signal processing can produce non-Rayleigh distributions. The first order speckle statistics are shown to depend on random scatterer density and the amplitude and spacing of added periodic scatterers. Envelope detection, amplifier compression, and signal bandwidth are also shown to cause distinct changes in the signal distribution.

  9. Automatic estimation of dynamics of ionospheric disturbances with 1–15 minute lifetimes as derived from ISTP SB RAS fast chirp-ionosonde data

    NASA Astrophysics Data System (ADS)

    Berngardt, Oleg; Bubnova, Tatyana; Podlesnyi, Aleksey

    2018-03-01

    We propose and test a method of analyzing ionograms of vertical ionospheric sounding, which is based on detecting deviations of the shape of an ionogram from its regular (averaged) shape. We interpret these deviations in terms of reflection from the electron density irregularities at heights corresponding to the effective height. We examine the irregularities thus discovered within the framework of a model of a localized uniformly moving irregularity, and determine their characteristic parameters: effective heights and observed vertical velocities. We analyze selected experimental data for three seasons (spring, winter, autumn) obtained near Irkutsk with a fast chirp ionosonde of ISTP SB RAS in 2013-2015. The analysis of six days of observations conducted in these seasons has shown two characteristic distributions in the observed vertical drift of the irregularities: a wide velocity distribution with a mean near 0 m/s and a standard deviation of ∼250 m/s, and a narrow distribution with a mean near -160 m/s. The analysis has demonstrated the effectiveness of the proposed algorithm for the automatic analysis of vertical sounding data with high repetition rate.

  10. A Conway-Maxwell-Poisson (CMP) model to address data dispersion on positron emission tomography.

    PubMed

    Santarelli, Maria Filomena; Della Latta, Daniele; Scipioni, Michele; Positano, Vincenzo; Landini, Luigi

    2016-10-01

    Positron emission tomography (PET) in medicine exploits the properties of positron-emitting unstable nuclei. The pairs of γ-rays emitted after annihilation are revealed by coincidence detectors and stored as projections in a sinogram. It is well known that radioactive decay follows a Poisson distribution; however, deviation from Poisson statistics occurs on PET projection data prior to reconstruction due to physical effects, measurement errors, correction of deadtime, scatter, and random coincidences. A model that describes the statistical behavior of measured and corrected PET data can aid in understanding the statistical nature of the data: it is a prerequisite to develop efficient reconstruction and processing methods and to reduce noise. The deviation from Poisson statistics in PET data could be described by the Conway-Maxwell-Poisson (CMP) distribution model, which is characterized by the centring parameter λ and the dispersion parameter ν, the latter quantifying the deviation from a Poisson distribution model. In particular, the parameter ν allows quantifying over-dispersion (ν<1) or under-dispersion (ν>1) of data. A simple and efficient method for estimating the λ and ν parameters is introduced and assessed using Monte Carlo simulation for a wide range of activity values. The application of the method to simulated and experimental PET phantom data demonstrated that the CMP distribution parameters could detect deviation from the Poisson distribution both in raw and corrected PET data. It may be usefully implemented in image reconstruction algorithms and quantitative PET data analysis, especially in low counting emission data, as in dynamic PET data, where the method demonstrated the best accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved.
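
    The role of the dispersion parameter ν can be made concrete by evaluating the CMP pmf directly, P(X = x) ∝ λ^x/(x!)^ν. The sketch below (not the estimation method proposed in the paper) normalizes the pmf by truncating the series and shows that ν < 1 yields variance above the mean while ν > 1 yields variance below it; λ and the truncation point are arbitrary choices.

    ```python
    import numpy as np
    from math import lgamma, log

    def cmp_pmf(lam, nu, xmax=500):
        """CMP pmf on 0..xmax, normalized by truncating the infinite series."""
        x = np.arange(xmax + 1)
        logp = x * log(lam) - nu * np.array([lgamma(k + 1.0) for k in x])
        logp -= logp.max()                     # guard against overflow
        p = np.exp(logp)
        return x, p / p.sum()

    for nu in (0.5, 1.0, 2.0):
        x, p = cmp_pmf(lam=10.0, nu=nu)
        mean = (x * p).sum()
        var = ((x - mean) ** 2 * p).sum()
        print(f"nu={nu}: mean={mean:6.2f}  var/mean={var / mean:.2f}")
    # nu < 1: var/mean > 1 (over-dispersed); nu > 1: var/mean < 1 (under-dispersed).
    ```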

  11. MUSiC - Model-independent search for deviations from Standard Model predictions in CMS

    NASA Astrophysics Data System (ADS)

    Pieta, Holger

    2010-02-01

    We present an approach for a model independent search in CMS. Systematically scanning the data for deviations from the standard model Monte Carlo expectations, such an analysis can help to understand the detector and tune event generators. By minimizing theoretical bias, the analysis is furthermore sensitive to a wide range of models for new physics, including the countless models not yet thought of. After sorting the events into classes defined by their particle content (leptons, photons, jets and missing transverse energy), a minimally prejudiced scan is performed on a number of distributions. Advanced statistical methods are used to determine the significance of the deviating regions, rigorously taking systematic uncertainties into account. A number of benchmark scenarios, including common models of new physics and possible detector effects, have been used to gauge the power of such a method.

  12. Probability Distribution Estimated From the Minimum, Maximum, and Most Likely Values: Applied to Turbine Inlet Temperature Uncertainty

    NASA Technical Reports Server (NTRS)

    Holland, Frederic A., Jr.

    2004-01-01

    Modern engineering design practices are tending more toward the treatment of design parameters as random variables as opposed to fixed, or deterministic, values. The probabilistic design approach attempts to account for the uncertainty in design parameters by representing them as a distribution of values rather than as a single value. The motivations for this effort include preventing excessive overdesign as well as assessing and assuring reliability, both of which are important for aerospace applications. However, the determination of the probability distribution is a fundamental problem in reliability analysis. A random variable is often defined by the parameters of the theoretical distribution function that gives the best fit to experimental data. In many cases the distribution must be assumed from very limited information or data. Often the types of information that are available or reasonably estimated are the minimum, maximum, and most likely values of the design parameter. For these situations the beta distribution model is very convenient because the parameters that define the distribution can be easily determined from these three pieces of information. Widely used in the field of operations research, the beta model is very flexible and is also useful for estimating the mean and standard deviation of a random variable given only the aforementioned three values. However, an assumption is required to determine the four parameters of the beta distribution from only these three pieces of information (some of the more common distributions, like the normal, lognormal, gamma, and Weibull distributions, have two or three parameters). The conventional method assumes that the standard deviation is a certain fraction of the range. The beta parameters are then determined by solving a set of equations simultaneously. A new method developed in-house at the NASA Glenn Research Center assumes a value for one of the beta shape parameters based on an analogy with the normal distribution (ref.1). This new approach allows for a very simple and direct algebraic solution without restricting the standard deviation. The beta parameters obtained by the new method are comparable to the conventional method (and identical when the distribution is symmetrical). However, the proposed method generally produces a less peaked distribution with a slightly larger standard deviation (up to 7 percent) than the conventional method in cases where the distribution is asymmetric or skewed. The beta distribution model has now been implemented into the Fast Probability Integration (FPI) module used in the NESSUS computer code for probabilistic analyses of structures (ref. 2).
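
    For orientation, the conventional three-point fit that the abstract contrasts with can be sketched as follows. This is the textbook PERT variant, which assumes the standard deviation is one sixth of the range; it is not necessarily the NASA Glenn method described above, and the temperature numbers are invented.

    ```python
    # A minimal sketch of the classic PERT-beta fit from (min, most likely, max).
    def pert_beta(a, m, b):
        """Return beta shape parameters on [a, b] for mode m, plus mean and SD."""
        alpha = 1.0 + 4.0 * (m - a) / (b - a)
        beta = 1.0 + 4.0 * (b - m) / (b - a)
        mean = (a + 4.0 * m + b) / 6.0
        sd = (b - a) / 6.0                 # the conventional range/6 assumption
        return alpha, beta, mean, sd

    # Hypothetical turbine inlet temperature estimates (K):
    print(pert_beta(a=1400.0, m=1480.0, b=1600.0))
    ```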

  13. On the Distribution of Protein Refractive Index Increments

    PubMed Central

    Zhao, Huaying; Brown, Patrick H.; Schuck, Peter

    2011-01-01

    The protein refractive index increment, dn/dc, is an important parameter underlying the concentration determination and the biophysical characterization of proteins and protein complexes in many techniques. In this study, we examine the widely used assumption that most proteins have dn/dc values in a very narrow range, and reappraise the prediction of dn/dc of unmodified proteins based on their amino acid composition. Applying this approach in large scale to the entire set of known and predicted human proteins, we obtain, for the first time, to our knowledge, an estimate of the full distribution of protein dn/dc values. The distribution is close to Gaussian with a mean of 0.190 ml/g (for unmodified proteins at 589 nm) and a standard deviation of 0.003 ml/g. However, small proteins <10 kDa exhibit a larger spread, and almost 3000 proteins have values deviating by more than two standard deviations from the mean. Due to the widespread availability of protein sequences and the potential for outliers, the compositional prediction should be convenient and provide greater accuracy than an average consensus value for all proteins. We discuss how this approach should be particularly valuable for certain protein classes where a high dn/dc is coincidental to structural features, or may be functionally relevant such as in proteins of the eye. PMID:21539801

  14. On the distribution of protein refractive index increments.

    PubMed

    Zhao, Huaying; Brown, Patrick H; Schuck, Peter

    2011-05-04

    The protein refractive index increment, dn/dc, is an important parameter underlying the concentration determination and the biophysical characterization of proteins and protein complexes in many techniques. In this study, we examine the widely used assumption that most proteins have dn/dc values in a very narrow range, and reappraise the prediction of dn/dc of unmodified proteins based on their amino acid composition. Applying this approach in large scale to the entire set of known and predicted human proteins, we obtain, for the first time, to our knowledge, an estimate of the full distribution of protein dn/dc values. The distribution is close to Gaussian with a mean of 0.190 ml/g (for unmodified proteins at 589 nm) and a standard deviation of 0.003 ml/g. However, small proteins <10 kDa exhibit a larger spread, and almost 3000 proteins have values deviating by more than two standard deviations from the mean. Due to the widespread availability of protein sequences and the potential for outliers, the compositional prediction should be convenient and provide greater accuracy than an average consensus value for all proteins. We discuss how this approach should be particularly valuable for certain protein classes where a high dn/dc is coincidental to structural features, or may be functionally relevant such as in proteins of the eye. Copyright © 2011 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  15. Checking distributional assumptions for pharmacokinetic summary statistics based on simulations with compartmental models.

    PubMed

    Shen, Meiyu; Russek-Cohen, Estelle; Slud, Eric V

    2016-08-12

    Bioequivalence (BE) studies are an essential part of the evaluation of generic drugs. The most common in vivo BE study design is the two-period two-treatment crossover design. AUC (area under the concentration-time curve) and Cmax (maximum concentration) are obtained from the observed concentration-time profiles for each subject from each treatment under each sequence. In the BE evaluation of pharmacokinetic crossover studies, the normality of the univariate response variable, e.g. log(AUC) or log(Cmax), is often assumed in the literature without much evidence. Therefore, we investigate the distributional assumption of the normality of response variables, log(AUC) and log(Cmax), by simulating concentration-time profiles from two-stage pharmacokinetic models (commonly used in pharmacokinetic research) for a wide range of pharmacokinetic parameters and measurement error structures. Our simulations show that, under reasonable distributional assumptions on the pharmacokinetic parameters, log(AUC) has heavy tails and log(Cmax) is skewed. Sensitivity analyses are conducted to investigate how the distribution of the standardized log(AUC) (or the standardized log(Cmax)) for a large number of simulated subjects deviates from normality if distributions of errors in the pharmacokinetic model for plasma concentrations deviate from normality and if the plasma concentration can be described by different compartmental models.
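
    The simulation idea generalizes readily. A minimal sketch, under assumptions of our own (a one-compartment model with first-order absorption, lognormal between-subject parameters, and multiplicative lognormal measurement error, rather than the paper's exact two-stage models), draws concentration-time profiles and inspects the skewness and excess kurtosis of log(AUC) and log(Cmax):

    ```python
    import numpy as np
    from scipy import stats
    from scipy.integrate import trapezoid

    rng = np.random.default_rng(1)
    n = 2000
    t = np.linspace(0.1, 48.0, 200)            # sampling times (h)

    # Stage 1: lognormal between-subject PK parameters (invented values).
    ka = rng.lognormal(np.log(1.0), 0.3, n)    # absorption rate (1/h)
    ke = rng.lognormal(np.log(0.1), 0.3, n)    # elimination rate (1/h)
    V = rng.lognormal(np.log(30.0), 0.2, n)    # volume of distribution (L)
    dose = 100.0

    # Stage 2: one-compartment concentrations with multiplicative error.
    C = (dose * ka / (V * (ka - ke)))[:, None] * (
        np.exp(-np.outer(ke, t)) - np.exp(-np.outer(ka, t)))
    C *= rng.lognormal(0.0, 0.1, C.shape)

    log_auc = np.log(trapezoid(C, t, axis=1))
    log_cmax = np.log(C.max(axis=1))
    for name, v in (("log(AUC)", log_auc), ("log(Cmax)", log_cmax)):
        print(f"{name}: skew={stats.skew(v):.2f}, ex.kurt={stats.kurtosis(v):.2f}")
    ```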

  16. Biological modulation of the earth's atmosphere

    NASA Technical Reports Server (NTRS)

    Margulis, L.; Lovelock, J. E.

    1974-01-01

    Review of the evidence that the earth's atmosphere is regulated by life on the surface so that the probability of growth of the entire biosphere is maximized. Acidity, gas composition including oxygen level, and ambient temperature are enormously important determinants for the distribution of life. The earth's atmosphere deviates greatly from that of the other terrestrial planets in particular with respect to acidity, composition, redox potential and temperature history as predicted from solar luminosity. These deviations from predicted steady state conditions have apparently persisted over millions of years. These anomalies may be evidence for a complex planet-wide homeostasis that is the product of natural selection. Possible homeostatic mechanisms that may be further investigated by both theoretical and experimental methods are suggested.

  17. Clear and Measurable Signature of Modified Gravity in the Galaxy Velocity Field

    NASA Astrophysics Data System (ADS)

    Hellwing, Wojciech A.; Barreira, Alexandre; Frenk, Carlos S.; Li, Baojiu; Cole, Shaun

    2014-06-01

    The velocity field of dark matter and galaxies reflects the continued action of gravity throughout cosmic history. We show that the low-order moments of the pairwise velocity distribution v12 are a powerful diagnostic of the laws of gravity on cosmological scales. In particular, the projected line-of-sight galaxy pairwise velocity dispersion σ12(r) is very sensitive to the presence of modified gravity. Using a set of high-resolution N-body simulations, we compute the pairwise velocity distribution and its projected line-of-sight dispersion for a class of modified gravity theories: the chameleon f(R) gravity and Galileon gravity (cubic and quartic). The velocities of dark matter halos with a wide range of masses would exhibit deviations from general relativity at the (5-10)σ level. We examine strategies for detecting these deviations in galaxy redshift and peculiar velocity surveys. If detected, this signature would be a "smoking gun" for modified gravity.

  18. [Rare earth elements contents and distribution characteristics in nasopharyngeal carcinoma tissue].

    PubMed

    Zhang, Xiangmin; Lan, Xiaolin; Zhang, Lingzhen; Xiao, Fufu; Zhong, Zhaoming; Ye, Guilin; Li, Zong; Li, Shaojin

    2016-03-01

    To investigate the rare earth element (REE) contents and distribution characteristics in nasopharyngeal carcinoma (NPC) tissue in the Gannan region. Thirty NPC patients from the Gannan region were included in this study. REE contents were measured by inductively coupled plasma tandem mass spectrometry (ICP-MS/MS) in the 30 patients, and the REE contents and distribution were analyzed. For most elements, the average standard deviation of REE content in cancerous and normal tissues was the smallest. Light REE content was higher than medium REE content, and medium REE content was higher than heavy REE content. REE contents in nasopharyngeal carcinoma varied markedly: the absolute values of Nd, Ce, Pr, Gd and other light rare earth elements varied widely, the degree of change in Yb, Tb, Ho and other heavy rare earth elements also varied widely, and negative Eu and Ce anomalies were present (δEu = 0.3855, δCe = 0.5234). The distribution of REE contents in NPC patients is consistent with the odd-even (parity) distribution. With increasing atomic number, the content declines in a wave-like pattern. The distribution patterns show a depletion of heavy REEs and an enrichment of light REEs, with negative Eu and Ce anomalies.

  19. Relationship of Hotspots to the Distribution of Surficial Surf-Zone Sediments along the Outer Banks of North Carolina

    NASA Astrophysics Data System (ADS)

    Schupp, C. A.; McNinch, J. E.; List, J. H.; Farris, A. S.

    2002-12-01

    The formation and behavior of hotspots, or sections of the beach that exhibit markedly higher shoreline change rates than adjacent regions, are poorly understood. Several hotspots have been identified on the Outer Banks, a developed barrier island in North Carolina. To better understand hotspot dynamics and the potential relationship to the geologic framework in which they occur, the surf zone between Duck and Bodie Island was surveyed in June 2002 as part of a research effort supported by the U.S. Geological Survey and U.S. Army Corps of Engineers. Swath bathymetry, sidescan sonar, and chirp seismic were used to characterize a region 40 km long and 1 km wide. Hotspot locations were pinpointed using standard deviation values for shoreline position as determined by monthly SWASH buggy surveys of the mean high water contour between October 1999 and September 2002. Observational data and sidescan images were mapped to delineate regions of surficial sediment distributions, and regions of interest were ground-truthed via grab samples or visual inspection. General kilometer-scale correlation between acoustic backscatter and high shoreline standard deviation is evident. Acoustic returns are uniform in a region of Duck where standard deviation is low, but backscatter is patchy around the Kitty Hawk hotspot, where standard deviation is higher. Based on ground-truthing of an area further north, these patches are believed to be an older ravinement surface of fine sediment. More detailed analyses of the correlation between acoustic data, standard deviation, and hotspot locations will be presented. Future work will include integration of seismic, bathymetric, and sidescan data to better understand the links between sub-bottom geology, temporal changes in surficial sediments, surf-zone sediment budgets, and short-term changes in shoreline position and morphology.

  20. Bubbles Are Departures from Equilibrium Housing Markets: Evidence from Singapore and Taiwan

    PubMed Central

    Chou, Chung-I; Li, Sai-Ping; Tee, Shang You; Cheong, Siew Ann

    2016-01-01

    Housing prices in many Asian cities have grown rapidly since the mid-2000s, leading to many reports of bubbles. However, such reports remain controversial, as there is no widely accepted definition of a housing bubble. Previous studies have focused on indices, or assumed that home prices are lognormally distributed. Recently, Ohnishi et al. showed that the tail end of the home price distribution (Japan/Tokyo) becomes fatter during years where bubbles are suspected, but stopped short of using this feature as a rigorous definition of a housing bubble. In this study, we look at housing transactions for Singapore (1995 to 2014) and Taiwan (2012 to 2014), and find strong evidence that the equilibrium home price distribution is a decaying exponential crossing over to a power law, after accounting for different housing types. We found positive deviations from the equilibrium distributions in Singapore condominiums and Zhu Zhai Da Lou in the Greater Taipei Area. These positive deviations are dragon kings, which thus provide us with an unambiguous and quantitative definition of housing bubbles. Also, the spatio-temporal dynamics show that the bubble in Singapore is driven by price pulses in two investment districts. This finding provides valuable insight for policymakers on the implementation and evaluation of cooling measures. PMID:27812187

  1. Bubbles Are Departures from Equilibrium Housing Markets: Evidence from Singapore and Taiwan.

    PubMed

    Tay, Darrell Jiajie; Chou, Chung-I; Li, Sai-Ping; Tee, Shang You; Cheong, Siew Ann

    2016-01-01

    Housing prices in many Asian cities have grown rapidly since the mid-2000s, leading to many reports of bubbles. However, such reports remain controversial, as there is no widely accepted definition of a housing bubble. Previous studies have focused on indices, or assumed that home prices are lognormally distributed. Recently, Ohnishi et al. showed that the tail end of the home price distribution (Japan/Tokyo) becomes fatter during years where bubbles are suspected, but stopped short of using this feature as a rigorous definition of a housing bubble. In this study, we look at housing transactions for Singapore (1995 to 2014) and Taiwan (2012 to 2014), and find strong evidence that the equilibrium home price distribution is a decaying exponential crossing over to a power law, after accounting for different housing types. We found positive deviations from the equilibrium distributions in Singapore condominiums and Zhu Zhai Da Lou in the Greater Taipei Area. These positive deviations are dragon kings, which thus provide us with an unambiguous and quantitative definition of housing bubbles. Also, the spatio-temporal dynamics show that the bubble in Singapore is driven by price pulses in two investment districts. This finding provides valuable insight for policymakers on the implementation and evaluation of cooling measures.

  2. Distributed acoustic sensing: how to make the best out of the Rayleigh-backscattered energy?

    NASA Astrophysics Data System (ADS)

    Eyal, A.; Gabai, H.; Shpatz, I.

    2017-04-01

    Coherent fading noise (also known as speckle noise) affects the SNR and sensitivity of Distributed Acoustic Sensing (DAS) systems and makes them random processes of position and time. As in speckle noise, the statistical distribution of DAS SNR is particularly wide and its standard deviation (STD) roughly equals its mean (σ_SNR/⟨SNR⟩ ≈ 0.89). Trading resolution for SNR may improve the mean SNR but not necessarily narrow its distribution. Here a new approach to achieve both SNR improvement (by sacrificing resolution) and narrowing of the distribution is introduced. The method is based on acquiring high resolution complex backscatter profiles of the sensing fiber, using them to compute complex power profiles of the fiber which retain phase variation information and filtering of the power profiles. The approach is tested via a computer simulation and demonstrates distribution narrowing up to σ_SNR/⟨SNR⟩ < 0.2.

  3. Probability Distribution of Turbulent Kinetic Energy Dissipation Rate in Ocean: Observations and Approximations

    NASA Astrophysics Data System (ADS)

    Lozovatsky, I.; Fernando, H. J. S.; Planella-Morato, J.; Liu, Zhiyu; Lee, J.-H.; Jinadasa, S. U. P.

    2017-10-01

    The probability distribution of turbulent kinetic energy dissipation rate in stratified ocean usually deviates from the classic lognormal distribution that has been formulated for and often observed in unstratified homogeneous layers of atmospheric and oceanic turbulence. Our measurements of vertical profiles of micro-scale shear, collected in the East China Sea, northern Bay of Bengal, to the south and east of Sri Lanka, and in the Gulf Stream region, show that the probability distributions of the dissipation rate ε̃_r in the pycnoclines (r ≈ 1.4 m is the averaging scale) can be successfully modeled by the Burr (type XII) probability distribution. In weakly stratified boundary layers, lognormal distribution of ε̃_r is preferable, although the Burr is an acceptable alternative. The skewness Sk_ε and the kurtosis K_ε of the dissipation rate appear to be well correlated in a wide range of Sk_ε and K_ε variability.
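
    Fitting a Burr (type XII) distribution to dissipation-like data is straightforward with standard tools. A minimal sketch follows, using a nondimensionalized synthetic heavy-tailed sample in place of real microstructure data; it fits both candidate models and compares them by AIC.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    # Synthetic stand-in for (nondimensionalized) dissipation-rate samples;
    # the real data come from micro-scale shear profiles, not this draw.
    eps = rng.lognormal(mean=0.0, sigma=1.5, size=3000)

    burr_params = stats.burr12.fit(eps, floc=0)        # Burr type XII
    logn_params = stats.lognorm.fit(eps, floc=0)

    def aic(dist, params, x):
        return 2 * len(params) - 2 * dist.logpdf(x, *params).sum()

    print("Burr XII  AIC:", aic(stats.burr12, burr_params, eps))
    print("lognormal AIC:", aic(stats.lognorm, logn_params, eps))
    ```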

  4. Explaining mortality rate plateaus

    PubMed Central

    Weitz, Joshua S.; Fraser, Hunter B.

    2001-01-01

    We propose a stochastic model of aging to explain deviations from exponential growth in mortality rates commonly observed in empirical studies. Mortality rate plateaus are explained as a generic consequence of considering death in terms of first passage times for processes undergoing a random walk with drift. Simulations of populations with age-dependent distributions of viabilities agree with a wide array of experimental results. The influence of cohort size is well accounted for by the stochastic nature of the model. PMID:11752476
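
    The first-passage mechanism is simple to reproduce. A minimal sketch, with invented drift, noise, and initial-vitality numbers, simulates a cohort whose vitality performs a random walk with downward drift and records death on the first passage below zero; the resulting hazard rises at first and then flattens into a plateau.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n, tmax = 200_000, 400
    vitality = rng.normal(loc=20.0, scale=4.0, size=n)  # heterogeneous start
    alive = np.ones(n, dtype=bool)
    deaths = np.zeros(tmax, dtype=int)

    for t in range(tmax):
        # Random walk with downward drift; death = first passage below zero.
        vitality[alive] += rng.normal(loc=-0.1, scale=1.0, size=alive.sum())
        died = alive & (vitality <= 0.0)
        deaths[t] = died.sum()
        alive &= ~died

    at_risk = n - np.concatenate(([0], np.cumsum(deaths)[:-1]))
    hazard = deaths / np.maximum(at_risk, 1)
    # Early on the hazard grows roughly exponentially (Gompertz-like);
    # at old ages it flattens into a plateau, as the abstract describes.
    print(hazard[50], hazard[150], hazard[350])
    ```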

  5. Group identification in Indonesian stock market

    NASA Astrophysics Data System (ADS)

    Nurriyadi Suparno, Ervano; Jo, Sung Kyun; Lim, Kyuseong; Purqon, Acep; Kim, Soo Yong

    2016-08-01

    The Indonesian stock market is an interesting case, especially because it represents a developing economy. We investigate its dynamics and structure by using Random Matrix Theory (RMT). Here, we analyze the cross-correlation of the fluctuations of the daily closing price of stocks from the Indonesian Stock Exchange (IDX) between January 1, 2007, and October 28, 2014. The bulk of the eigenvalue distribution of the correlation matrix conforms to random-matrix predictions and is treated as noise, which is filtered out using the random matrix as a control; the deviating eigenvalues carry the genuine correlation structure of the data. From the deviating eigenvalues and the corresponding eigenvectors, we identify the intrinsic normal modes of the system and interpret their meaning based on qualitative and quantitative approaches. The results show that the largest eigenvector represents the market-wide effect which has a predominantly common influence toward all stocks. The other eigenvectors represent highly correlated groups within the system. Furthermore, identification of the largest components of the eigenvectors shows the sector or background of the correlated groups. Interestingly, the result shows that there are mainly two clusters within IDX: natural-resource and non-natural-resource companies. We then decompose the correlation matrix to investigate the contribution of the correlated groups to the total correlation, and we find that IDX is still driven mainly by the market-wide effect.
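
    The RMT filtering step can be sketched compactly. Assuming standardized returns and the Marchenko-Pastur band λ± = (1 ± √(N/T))² for a purely random correlation matrix, eigenmodes falling outside the band are kept as signal; the code below is a generic illustration, not the authors' pipeline.

    ```python
    import numpy as np

    def deviating_eigenmodes(returns):
        """Split correlation-matrix eigenmodes into RMT noise and signal.

        returns: (T, N) array of standardized daily log-returns.
        Modes count as 'signal' if their eigenvalue falls outside the
        Marchenko-Pastur band expected for purely random correlations.
        """
        T, N = returns.shape
        q = N / T
        lam_max = (1.0 + np.sqrt(q)) ** 2
        lam_min = (1.0 - np.sqrt(q)) ** 2
        C = np.corrcoef(returns, rowvar=False)
        evals, evecs = np.linalg.eigh(C)
        signal = (evals > lam_max) | (evals < lam_min)
        return evals[signal], evecs[:, signal]

    rng = np.random.default_rng(4)
    fake = rng.standard_normal((2000, 100))      # pure-noise control
    print(len(deviating_eigenmodes(fake)[0]))    # expect ~0 deviating modes
    ```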

  6. Observation of low energy protons in the geomagnetic tail at lunar distances. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Hardy, D. A.

    1974-01-01

    Three suprathermal ion detectors stationed on the moon were used to detect a region of plasma flowing antisunward along the ordered field lines of the geomagnetic tail, exterior to the plasma sheet. The particle flow displays an integral flux, bulk velocity, temperature, and number density distinctly different from those of the other particle regimes traversed by the moon. No consistent deviation in the field was found to correspond with the occurrence of the events, which have an angular distribution extending between 50 and 100 deg and a spatial distribution over a wide region in both the Y sub sm and Z sub sm directions. The duration of observable particles varies widely between tail passages, with an apparent correlation between the number of hours of observation and the Kp index averaged over these times. It is proposed that these particles may have entered through the cusp region.

  7. Fisher information and Cramér-Rao lower bound for experimental design in parallel imaging.

    PubMed

    Bouhrara, Mustapha; Spencer, Richard G

    2018-06-01

    The Cramér-Rao lower bound (CRLB) is widely used in the design of magnetic resonance (MR) experiments for parameter estimation. Previous work has considered only Gaussian or Rician noise distributions in this calculation. However, the noise distribution for multi-coil acquisitions, such as in parallel imaging, obeys the noncentral χ-distribution under many circumstances. The purpose of this paper is to present the CRLB calculation for parameter estimation from multi-coil acquisitions. We perform explicit calculations of Fisher matrix elements and the associated CRLB for noise distributions following the noncentral χ-distribution. The special case of diffusion kurtosis is examined as an important example. For comparison with analytic results, Monte Carlo (MC) simulations were conducted to evaluate experimental minimum standard deviations (SDs) in the estimation of diffusion kurtosis model parameters. Results were obtained for a range of signal-to-noise ratios (SNRs), and for both the conventional case of Gaussian noise distribution and noncentral χ-distribution with different numbers of coils, m. At low-to-moderate SNR, the noncentral χ-distribution deviates substantially from the Gaussian distribution. Our results indicate that this departure is more pronounced for larger values of m. As expected, the minimum SDs (i.e., CRLB) in derived diffusion kurtosis model parameters assuming a noncentral χ-distribution provided a closer match to the MC simulations as compared to the Gaussian results. Estimates of minimum variance for parameter estimation and experimental design provided by the CRLB must account for the noncentral χ-distribution of noise in multi-coil acquisitions, especially in the low-to-moderate SNR regime. Magn Reson Med 79:3249-3255, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
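
    For reference, the CRLB calculation in the familiar Gaussian-noise case can be sketched numerically; the paper's point is precisely that the noncentral χ case replaces this simple Fisher matrix with expressions involving Bessel functions. The model below is a standard diffusion-kurtosis signal with invented b-values and noise level.

    ```python
    import numpy as np

    def crlb_gaussian(t, theta, model, sigma, h=1e-6):
        """CRLB under Gaussian noise: F = J^T J / sigma^2, with J the
        Jacobian of the signal model (central finite differences)."""
        theta = np.asarray(theta, float)
        grads = []
        for i in range(theta.size):
            dp = theta.copy(); dp[i] += h
            dm = theta.copy(); dm[i] -= h
            grads.append((model(t, dp) - model(t, dm)) / (2 * h))
        J = np.stack(grads, axis=1)                  # (n_points, n_params)
        fisher = J.T @ J / sigma**2
        return np.sqrt(np.diag(np.linalg.inv(fisher)))   # minimum SDs

    # Diffusion-kurtosis signal S(b) = S0 * exp(-b*D + (b*D)^2 * K / 6),
    # with assumed parameters [S0, D, K] and b-values (ms/um^2).
    model = lambda b, th: th[0] * np.exp(-b * th[1] + (b * th[1]) ** 2 * th[2] / 6.0)
    b = np.linspace(0.0, 2.5, 8)
    print(crlb_gaussian(b, theta=[1.0, 1.0, 1.0], model=model, sigma=0.02))
    ```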

  8. Estimation of distributional parameters for censored trace level water quality data: 1. Estimation techniques

    USGS Publications Warehouse

    Gilliom, Robert J.; Helsel, Dennis R.

    1986-01-01

    A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.
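
    A minimal sketch of the log-probability regression method follows, after the description above. The plotting positions (Blom's) and the toy numbers are assumptions of this sketch; published implementations differ in such details.

    ```python
    import numpy as np
    from scipy.stats import norm

    def log_probability_regression(uncensored, detection_limit, n_censored):
        """Fill in censored values by the log-probability regression method.

        Regresses log(concentration) on normal z-scores of plotting positions
        for the uncensored observations, then extrapolates the fitted lognormal
        over the zero-to-censoring-level portion to replace censored values.
        """
        n = n_censored + len(uncensored)
        ranks_unc = np.arange(n_censored + 1, n + 1)        # uncensored ranks
        z_unc = norm.ppf((ranks_unc - 0.375) / (n + 0.25))  # Blom positions
        slope, intercept = np.polyfit(z_unc, np.log(np.sort(uncensored)), 1)
        ranks_cen = np.arange(1, n_censored + 1)
        z_cen = norm.ppf((ranks_cen - 0.375) / (n + 0.25))
        filled = np.exp(intercept + slope * z_cen)
        filled = np.minimum(filled, detection_limit)        # stay below the DL
        full = np.concatenate([filled, np.sort(uncensored)])
        return full.mean(), full.std(ddof=1)

    obs = np.array([2.1, 2.8, 3.5, 4.2, 5.9, 8.3, 12.0])    # above DL = 2.0
    print(log_probability_regression(obs, detection_limit=2.0, n_censored=5))
    ```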

  9. Estimation of distributional parameters for censored trace level water quality data. 1. Estimation Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilliom, R.J.; Helsel, D.R.

    1986-02-01

    A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.

  10. Estimation of distributional parameters for censored trace-level water-quality data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilliom, R.J.; Helsel, D.R.

    1984-01-01

    A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water-sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best-performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least-squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification. 6 figs., 6 tabs.

  11. On the ability of consumer electronics microphones for environmental noise monitoring.

    PubMed

    Van Renterghem, Timothy; Thomas, Pieter; Dominguez, Frederico; Dauwe, Samuel; Touhafi, Abdellah; Dhoedt, Bart; Botteldooren, Dick

    2011-03-01

    The massive production of microphones for consumer electronics, and the shift from dedicated processing hardware to PC-based systems, opens the way to build affordable, extensive noise measurement networks. Applications include, e.g., noise limit and urban soundscape monitoring, and validation of calculated noise maps. Microphones are the critical components of such a network. Therefore, in a first step, some basic characteristics of 8 microphones, distributed over a wide range of price classes, were measured in a standardized way in an anechoic chamber. In a next step, a thorough evaluation was made of the ability of these microphones to be used for environmental noise monitoring. This was done during a continuous, half-year outdoor experiment characterized by a wide variety of meteorological conditions. While some microphones failed during the course of this test, it was shown that it is possible to identify cheap microphones that correlate highly with the reference microphone during the full test period. When the deviations are expressed in total A-weighted (road traffic) noise levels, values of less than 1 dBA are obtained, over and above the deviation amongst the reference microphones themselves.

  12. Regeneration mechanisms in Syllidae (Annelida)

    PubMed Central

    Ribeiro, Rannyele P.

    2018-01-01

    Syllidae is one of the most species‐rich groups within Annelida, with a wide variety of reproductive modes and different regenerative processes. Syllids have a striking ability to regenerate their body anteriorly and posteriorly, which in many species is redeployed during sexual (schizogamy) and asexual (fission) reproduction. This review summarizes the available data on regeneration in syllids, covering descriptions of regenerative mechanisms in different species as well as regeneration in relation to reproductive modes. Our survey shows that posterior regeneration is widely distributed in syllids, whereas anterior regeneration is limited in most of the species, except in those reproducing by fission. The latter reproductive mode is well known for a few species belonging to Autolytinae, Eusyllinae, and Syllinae. Patterns of fission areas have been studied in these animals. Deviations from the regular regeneration pattern, or aberrant forms such as bifurcated animals or individuals with multiple heads, have been reported for several species. Some of these aberrations show a deviation from bilateral symmetry and the antero‐posterior axis, which, interestingly, can also be observed in the regular branching body pattern of some species of syllids. PMID:29721325

  13. A short note on the maximal point-biserial correlation under non-normality.

    PubMed

    Cheng, Ying; Liu, Haiyan

    2016-11-01

    The aim of this paper is to derive the maximal point-biserial correlation under non-normality. Several widely used non-normal distributions are considered, namely the uniform distribution, t-distribution, exponential distribution, and a mixture of two normal distributions. Results show that the maximal point-biserial correlation, depending on the non-normal continuous variable underlying the binary manifest variable, may not be a function of p (the probability that the dichotomous variable takes the value 1), can be symmetric or non-symmetric around p = .5, and may still lie in the range from -1.0 to 1.0. Therefore researchers should exercise caution when they interpret their sample point-biserial correlation coefficients based on popular beliefs that the maximal point-biserial correlation is always smaller than 1, and that the size of the correlation is always further restricted as p deviates from .5. © 2016 The British Psychological Society.
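
    The maximal point-biserial correlation is attained when the binary variable is obtained by splitting the underlying continuous variable at its p-quantile, so it can be explored by direct simulation. A minimal sketch with a few of the distributions the paper considers (sample sizes and distributions are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    samples = {
        "normal": rng.standard_normal(200_000),
        "uniform": rng.uniform(size=200_000),
        "exponential": rng.exponential(size=200_000),
    }
    for p in (0.1, 0.5):
        for name, x in samples.items():
            # The maximal point-biserial arises when the binary variable is
            # the continuous variable itself, dichotomized at its p-quantile.
            y = (x > np.quantile(x, 1.0 - p)).astype(float)
            r = np.corrcoef(x, y)[0, 1]
            print(f"p={p:.1f}  {name:12s} max r_pb ~ {r:.3f}")
    # The normal case at p=.5 reproduces the familiar bound of about .798.
    ```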

  14. A quantitative trait locus mixture model that avoids spurious LOD score peaks.

    PubMed Central

    Feenstra, Bjarke; Skovgaard, Ib M

    2004-01-01

    In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented. PMID:15238544

  15. A quantitative trait locus mixture model that avoids spurious LOD score peaks.

    PubMed

    Feenstra, Bjarke; Skovgaard, Ib M

    2004-06-01

    In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented.

  16. Deviation from Power Law Behavior in Landslide Phenomenon

    NASA Astrophysics Data System (ADS)

    Li, L.; Lan, H.; Wu, Y.

    2013-12-01

    Power law distribution of magnitude is widely observed in many natural hazards (e.g., earthquakes, floods, tornadoes, and forest fires). Landslides are unique in that their size distribution is characterized by a power law decrease with a rollover at the small size end. Yet the emergence of the rollover, i.e., the deviation from power law behavior for small landslides, remains a mystery. In this contribution, we group the forces applied to a landslide body into two categories: (1) forces proportional to the volume of the failure mass (gravity and friction), and (2) forces proportional to the area of the failure surface (cohesion). Failure occurs when the forces proportional to volume exceed the forces proportional to surface area. As such, given a certain mechanical configuration, the ratio of failure volume to failure surface area must exceed a corresponding threshold to produce a failure. If all landslides shared a uniform shape, so that the volume to surface area ratio increased regularly with landslide volume, a sharp cutoff of the landslide volume distribution at the small size end would result. In realistic landslide phenomena, however, where landslide shapes and mechanical configurations are heterogeneous, no simple cutoff exists. The stochasticity of landslide shape introduces a probability distribution of the volume to surface area ratio as a function of landslide volume, from which the probability that the ratio exceeds the threshold can be estimated for each value of landslide volume. An experiment based on empirical data showed that this probability can make the power law distribution of landslide volume roll off at the small size end. We therefore propose that the constraint on the failure volume to failure surface area ratio, together with the heterogeneity of landslide geometry and mechanical configuration, accounts for the deviation from power law behavior in landslide phenomena. The accompanying figure shows that a rollover at the small size end of the landslide size distribution is produced when the probability that V/S (the ratio of failure volume to failure surface area) exceeds the mechanical threshold is applied to the power law distribution of landslide volume.
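
    The proposed mechanism can be reproduced with a toy simulation: draw power-law volumes, let V/S scale as V^(1/3) for a fixed shape with lognormal jitter standing in for shape heterogeneity, and keep only candidates whose ratio exceeds a threshold. All parameter values below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    alpha = 2.4                                # assumed power-law exponent
    # Pareto-distributed candidate failure volumes, V >= 1 (arbitrary units).
    V = (1.0 - rng.uniform(size=2_000_000)) ** (-1.0 / (alpha - 1.0))

    # For a fixed shape V/S scales as V**(1/3); lognormal jitter stands in
    # for the heterogeneity of landslide shape and mechanical configuration.
    ratio = V ** (1.0 / 3.0) * rng.lognormal(mean=0.0, sigma=0.4, size=V.size)
    failed = V[ratio > 2.0]                    # failure requires V/S > threshold

    hist, edges = np.histogram(np.log10(failed), bins=60)
    print(hist[:8])   # counts rise to a gentle peak, then power-law decay
    ```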

  17. Zipf’s Law and the Frequency of Kazak Phonemes in Word Formation

    NASA Astrophysics Data System (ADS)

    Xin, Ruiqing; Li, Yonghong; Yu, Hongzhi

    2018-03-01

    Zipf’s Law is the basis of the principle of Least Effort and is widely applicable across natural domains. The frequency of occurrence of each phoneme in all Kazak words was counted to test the applicability of Zipf’s law to Kazak. Owing to the limited sample size, some deviation is unavoidable, but the overall results indicate that the occurrence frequency and the reciprocal rank of each phoneme in Kazak word formation are in line with Zipf’s distribution.
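
    A rank-frequency check of this kind takes only a few lines. The sketch below uses an invented symbol corpus in place of the Kazak phoneme counts; under Zipf's law the log-log rank-frequency slope should sit near -1.

    ```python
    import numpy as np
    from collections import Counter

    def zipf_slope(symbols):
        """Log-log rank-frequency slope; Zipf's law predicts a slope near -1."""
        counts = np.array(sorted(Counter(symbols).values(), reverse=True))
        ranks = np.arange(1, counts.size + 1)
        slope, _intercept = np.polyfit(np.log(ranks), np.log(counts), 1)
        return slope

    # Toy corpus: 40 invented symbols standing in for Kazak phonemes, drawn
    # with probabilities proportional to 1/rank so the law holds by design.
    rng = np.random.default_rng(7)
    w = 1.0 / np.arange(1, 41)
    corpus = rng.choice(40, p=w / w.sum(), size=100_000)
    print(zipf_slope(corpus))                  # close to -1
    ```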

  18. A proof for Rhiel's range estimator of the coefficient of variation for skewed distributions.

    PubMed

    Rhiel, G Steven

    2007-02-01

    This study proves that the coefficient of variation (CV(high-low)) calculated from the highest and lowest values in a set of data is applicable to specific skewed distributions with varying means and standard deviations. Earlier, Rhiel provided values for d(n), the standardized mean range, and a(n), an adjustment for bias in the range estimator of μ. These values are used in estimating the coefficient of variation from the range for skewed distributions. The d(n) and a(n) values were specified for specific skewed distributions with a fixed mean and standard deviation. This proof shows that the d(n) and a(n) values are applicable to the specified skewed distributions when the mean and standard deviation take on differing values. This gives the researcher confidence in using this statistic for skewed distributions regardless of the mean and standard deviation.

  19. A fortran program for Monte Carlo simulation of oil-field discovery sequences

    USGS Publications Warehouse

    Bohling, Geoffrey C.; Davis, J.C.

    1993-01-01

    We have developed a program for performing Monte Carlo simulation of oil-field discovery histories. A synthetic parent population of fields is generated as a finite sample from a distribution of specified form. The discovery sequence then is simulated by sampling without replacement from this parent population in accordance with a probabilistic discovery process model. The program computes a chi-squared deviation between synthetic and actual discovery sequences as a function of the parameters of the discovery process model, the number of fields in the parent population, and the distributional parameters of the parent population. The program employs the three-parameter log gamma model for the distribution of field sizes and employs a two-parameter discovery process model, allowing the simulation of a wide range of scenarios. © 1993.
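
    The discovery-process simulation can be sketched generically. Below, discovery probability is taken proportional to field size raised to an exponent β (β = 1 gives strict size-biased sampling, β = 0 random drilling); this stands in for, but need not match, the paper's two-parameter discovery model, and the lognormal parent is a stand-in for its three-parameter log gamma.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def simulate_discovery(parent_sizes, n_wells, beta=1.0):
        """Sample fields without replacement with P(discovery) ~ size**beta."""
        remaining = parent_sizes.copy()
        discovered = []
        for _ in range(n_wells):
            w = remaining ** beta
            i = rng.choice(remaining.size, p=w / w.sum())
            discovered.append(remaining[i])
            remaining = np.delete(remaining, i)
        return np.array(discovered)

    parent = rng.lognormal(mean=2.0, sigma=1.2, size=500)   # synthetic parent
    seq = simulate_discovery(parent, n_wells=100)
    # Size-biased sampling finds big fields early -- the classic pattern that
    # the chi-squared comparison against the actual sequence exploits.
    print(seq[:10].mean(), seq[-10:].mean())
    ```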

  20. Statistical properties of relative weight distributions of four salmonid species and their sampling implications

    USGS Publications Warehouse

    Hyatt, M.W.; Hubert, W.A.

    2001-01-01

    We assessed relative weight (Wr) distributions among 291 samples of stock-to-quality-length brook trout Salvelinus fontinalis, brown trout Salmo trutta, rainbow trout Oncorhynchus mykiss, and cutthroat trout O. clarki from lentic and lotic habitats. Statistics describing Wr sample distributions varied slightly among species and habitat types. The average sample was leptokurtic and slightly skewed to the right with a standard deviation of about 10, but the shapes of Wr distributions varied widely among samples. Twenty-two percent of the samples had nonnormal distributions, suggesting the need to evaluate sample distributions before applying statistical tests to determine whether assumptions are met. In general, our findings indicate that samples of about 100 stock-to-quality-length fish are needed to obtain confidence interval widths of four Wr units around the mean. Power analysis revealed that samples of about 50 stock-to-quality-length fish are needed to detect a 2% change in mean Wr at a relatively high level of power (beta = 0.01, alpha = 0.05).
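
    The sample-size figure can be checked with the usual normal-approximation arithmetic: for a confidence interval of total width w around the mean, n ≈ (2·z·SD/w)². With the typical Wr sample SD of about 10 reported above:

    ```python
    from scipy import stats

    sd, width = 10.0, 4.0            # typical Wr sample SD; target CI width
    z = stats.norm.ppf(0.975)        # two-sided 95%
    n = (2.0 * z * sd / width) ** 2  # width = 2*z*sd/sqrt(n), solved for n
    print(round(n))                  # ~96, consistent with "about 100 fish"
    ```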

  1. Approximations to the distribution of a test statistic in covariance structure analysis: A comprehensive study.

    PubMed

    Wu, Hao

    2018-05-01

    In structural equation modelling (SEM), a robust adjustment to the test statistic or to its reference distribution is needed when its null distribution deviates from a χ² distribution, which usually arises when data do not follow a multivariate normal distribution. Unfortunately, existing studies on this issue typically focus on only a few methods and neglect the majority of alternative methods in statistics. Existing simulation studies typically consider only non-normal distributions of data that either satisfy asymptotic robustness or lead to an asymptotic scaled χ² distribution. In this work we conduct a comprehensive study that involves both typical methods in SEM and less well-known methods from the statistics literature. We also propose the use of several novel non-normal data distributions that are qualitatively different from the non-normal distributions widely used in existing studies. We found that several under-studied methods give the best performance under specific conditions, but the Satorra-Bentler method remains the most viable method for most situations. © 2017 The British Psychological Society.

  2. Effect of Examiner Experience and Technique on the Alternate Cover Test

    PubMed Central

    Anderson, Heather A.; Manny, Ruth E.; Cotter, Susan A.; Mitchell, G. Lynn; Irani, Jasmine A.

    2013-01-01

    Purpose To compare the repeatability of the alternate cover test between experienced and inexperienced examiners and the effects of dissociation time and examiner bias. Methods Two sites each had an experienced examiner train 10 subjects (inexperienced examiners) to perform short and long dissociation time alternate cover test protocols at near. Each site conducted testing sessions with an examiner triad (experienced examiner and two inexperienced examiners) who were masked to each other’s results. Each triad performed the alternate cover test on 24 patients using both dissociation protocols. In an attempt to introduce bias, each of the paired inexperienced examiners was given a different graph of phoria distribution for the general population. Analysis techniques that adjust for correlations introduced when multiple measurements are obtained on the same patient were used to investigate the effect of examiner and dissociation time on each outcome. Results The range of measured deviations spanned 27.5 prism diopters (Δ) base-in to 17.5Δ base-out. The absolute mean difference between experienced and inexperienced examiners was 2.28 ± 2.4Δ and at least 60% of differences were ≤2Δ. Larger deviations were measured with the long dissociation protocol for both experienced and inexperienced examiners (mean difference range = 1.17 to 2.14Δ, p < 0.0001). The percentage of measured small deviations (2Δ base-out to 2Δ base-in) did not differ between inexperienced examiners biased with the narrow vs. wide theoretical distributions (p = 0.41). The magnitude and direction of the deviation had no effect on the size of the differences obtained with different examiners or dissociation times. Conclusions Although inexperienced examiners differed significantly from experienced examiners, most differences were <2Δ suggesting good reliability of inexperienced examiners’ measurements. Examiner bias did not have a substantial effect on inexperienced examiner measurements; however, increased dissociation resulted in larger measured deviations for all examiners. PMID:20125058

  3. Advances in snow cover distributed modelling via ensemble simulations and assimilation of satellite data

    NASA Astrophysics Data System (ADS)

    Revuelto, J.; Dumont, M.; Tuzet, F.; Vionnet, V.; Lafaysse, M.; Lecourt, G.; Vernay, M.; Morin, S.; Cosme, E.; Six, D.; Rabatel, A.

    2017-12-01

    Snowpack models nowadays show a good capability in simulating the evolution of snow in mountain areas. However, singular deviations of the meteorological forcing and shortcomings in the modelling of snow physical processes can, when accumulated over a snow season, produce large deviations from the real snowpack state. These deviations are usually assessed with on-site observations from automatic weather stations. Nevertheless, the location of these stations can strongly influence the results of such evaluations, since local topography may have a marked influence on snowpack evolution. Although evaluations of snowpack models against automatic weather stations usually show good results, large-scale evaluations of simulation results over heterogeneous alpine terrain subject to local topographic effects are lacking. This work firstly presents a complete evaluation of the detailed snowpack model Crocus over an extended mountain area, the upper Arve catchment (western European Alps). This catchment has a wide elevation range, with a large area above 2000 m a.s.l. and/or glaciated. The evaluation compares results obtained with distributed and semi-distributed simulations (the latter currently used in operational forecasting). Daily observations of the snow-covered area from the MODIS satellite sensor, the seasonal glacier surface mass balance measured at more than 65 locations, and the glaciers' annual equilibrium-line altitudes from Landsat/SPOT/ASTER satellites were used for model evaluation. Additionally, the latest advances in producing ensemble snowpack simulations for assimilating satellite reflectance data over extended areas are presented. These advances comprise the generation of an ensemble of downscaled high-resolution meteorological forcings from meso-scale meteorological models and the application of a particle filter scheme for assimilating satellite observations. Although the results are preliminary, they show good potential for improving snowpack forecasting capabilities.
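
    For concreteness, a minimal sketch of the bootstrap particle filter step mentioned above, applied to a toy ensemble. The state variable, the observation operator h, the error magnitudes and all numbers are illustrative assumptions, not the authors' configuration:

        import numpy as np

        rng = np.random.default_rng(1)

        def particle_filter_step(particles, obs, obs_std, h):
            """Bootstrap particle filter update: weight each ensemble member by
            the observation likelihood, then resample with replacement."""
            w = np.exp(-0.5 * ((h(particles) - obs) / obs_std) ** 2)
            w /= w.sum()
            idx = rng.choice(particles.size, size=particles.size, p=w)
            return particles[idx]

        # Toy state: snow water equivalent (mm); h maps state to a
        # reflectance-like observable (purely illustrative operator)
        swe = rng.normal(300.0, 60.0, size=500)
        h = lambda x: 0.9 - 0.001 * x
        posterior = particle_filter_step(swe, obs=0.62, obs_std=0.02, h=h)
        print(swe.mean(), posterior.mean())  # ensemble pulled toward ~280 mm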

  4. Deviation from high-entropy configurations in the atomic distributions of a multi-principal-element alloy

    DOE PAGES

    Santodonato, Louis J.; Zhang, Yang; Feygenson, Mikhail; ...

    2015-01-20

    The alloy-design strategy of combining multiple elements in near-equimolar ratios has shown great potential for producing exceptional engineering materials, often known as “high-entropy alloys”. Understanding the elemental distribution, and, thus, the evolution of the configurational entropy during solidification, is undertaken in the present study using the Al1.3CoCrCuFeNi model alloy. Here we show that even when the material undergoes elemental segregation, precipitation, chemical ordering, and spinodal decomposition, a significant amount of disorder remains, due to the distributions of multiple elements in the major phases. In addition, the results suggest that the high-entropy-alloy-design strategy may be applied to a wide range of complex materials, and should not be limited to the goal of creating single-phase solid solutions.
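
    The configurational-entropy bookkeeping above follows the ideal-mixing formula S = -R Σ x_i ln x_i; a short sketch evaluating it for the Al1.3CoCrCuFeNi composition:

        import numpy as np

        R = 8.314  # gas constant, J/(mol K)

        def config_entropy(moles):
            """Ideal configurational entropy of mixing, S = -R * sum(x_i * ln x_i)."""
            x = np.asarray(moles, dtype=float)
            x = x / x.sum()
            return -R * np.sum(x * np.log(x))

        # Al1.3CoCrCuFeNi: molar ratios 1.3 : 1 : 1 : 1 : 1 : 1
        s = config_entropy([1.3, 1.0, 1.0, 1.0, 1.0, 1.0])
        print(s / R)  # ~1.79, essentially the equimolar limit ln(6) ~ 1.79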

  5. Distributional properties of relative phase in bimanual coordination.

    PubMed

    James, Eric; Layne, Charles S; Newell, Karl M

    2010-10-01

    Studies of bimanual coordination have typically estimated the stability of coordination patterns through the use of the circular standard deviation of relative phase. The interpretation of this statistic depends upon the assumption of a von Mises distribution. The present study tested this assumption by examining the distributional properties of relative phase in three bimanual coordination patterns. There were significant deviations from the von Mises distribution due to differences in the kurtosis of distributions. The kurtosis depended upon the relative phase pattern performed, with leptokurtic distributions occurring in the in-phase and antiphase patterns and platykurtic distributions occurring in the 30° pattern. Thus, the distributional assumptions needed to validly and reliably use the standard deviation are not necessarily present in relative phase data though they are qualitatively consistent with the landscape properties of the intrinsic dynamics.
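
    For reference, the circular standard deviation discussed above is derived from the mean resultant length R of the phase sample as sqrt(-2 ln R); its interpretation as the dispersion parameter is what presupposes a von Mises distribution. A minimal sketch on synthetic relative-phase data:

        import numpy as np

        rng = np.random.default_rng(2)

        def circular_stats(phase_rad):
            """Mean resultant length R and circular standard deviation
            sqrt(-2 ln R) of a sample of relative-phase angles (radians)."""
            z = np.exp(1j * np.asarray(phase_rad))
            r = np.abs(z.mean())
            return r, np.sqrt(-2.0 * np.log(r))

        # Toy in-phase data: relative phase near 0 rad with 10 degrees of jitter
        phases = rng.normal(0.0, np.deg2rad(10.0), size=1_000)
        r, circ_sd = circular_stats(phases)
        print(r, np.rad2deg(circ_sd))  # circular SD recovers roughly the 10 degrees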

  6. Feynman variance for neutrons emitted from photo-fission initiated fission chains - a systematic simulation for selected special nuclear materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soltz, R. A.; Danagoulian, A.; Sheets, S.

    Theoretical calculations indicate that the value of the Feynman variance, Y2F, for the distribution of neutrons emitted from fissionable material exhibits a strong monotonic dependence on the multiplication, M, of a quantity of special nuclear material. In 2012 we performed a series of measurements at the Passport Inc. facility using a 9-MeV bremsstrahlung CW beam of photons incident on small quantities of uranium with liquid scintillator detectors. For the set of objects studied we observed deviations from the expected monotonic dependence, and these deviations were later confirmed by MCNP simulations. In this report, we modify the theory to account for the contribution from the initial photo-fission and benchmark the new theory with a series of MCNP simulations on DU, LEU, and HEU objects spanning a wide range of masses and multiplication values.
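
    A sketch of the Feynman variance-to-mean statistic itself (Y = var/mean - 1 for neutron counts in equal time gates, zero for uncorrelated Poisson emission); the clustered toy data standing in for fission-chain correlations are an assumption, not the report's measurement model:

        import numpy as np

        rng = np.random.default_rng(3)

        def feynman_y(counts):
            """Feynman variance-to-mean statistic Y = var/mean - 1 for neutron
            counts in equal time gates; Y = 0 for Poisson (uncorrelated) data."""
            c = np.asarray(counts, dtype=float)
            return c.var() / c.mean() - 1.0

        print(feynman_y(rng.poisson(4.0, 100_000)))   # ~0, no multiplication

        # Clustered counts (a crude stand-in for fission chains) push Y above 0
        bursts = rng.poisson(4.0, 100_000) + 3 * rng.poisson(0.2, 100_000)
        print(feynman_y(bursts))                      # ~0.26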

  7. A 920-kilometer optical fiber link for frequency metrology at the 19th decimal place.

    PubMed

    Predehl, K; Grosche, G; Raupach, S M F; Droste, S; Terra, O; Alnis, J; Legero, Th; Hänsch, T W; Udem, Th; Holzwarth, R; Schnatz, H

    2012-04-27

    Optical clocks show unprecedented accuracy, surpassing that of previously available clock systems by more than one order of magnitude. Precise intercomparisons will enable a variety of experiments, including tests of fundamental quantum physics and cosmology and applications in geodesy and navigation. Well-established, satellite-based techniques for microwave dissemination are not adequate to compare optical clocks. Here, we present phase-stabilized distribution of an optical frequency over 920 kilometers of telecommunication fiber. We used two antiparallel fiber links to determine their fractional frequency instability (modified Allan deviation) to 5 × 10⁻¹⁵ in a 1-second integration time, reaching 10⁻¹⁸ in less than 1000 seconds. For long integration times τ, the deviation from the expected frequency value has been constrained to within 4 × 10⁻¹⁹. The link may serve as part of a Europe-wide optical frequency dissemination network.
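
    For illustration, a plain (non-overlapping) Allan deviation of fractional-frequency data is sketched below; note the paper quotes the modified Allan deviation, a related but distinct estimator, so this is only the general recipe:

        import numpy as np

        rng = np.random.default_rng(4)

        def adev(y, m):
            """Non-overlapping Allan deviation of fractional-frequency data y at
            an averaging time of m samples: sqrt(0.5 * <(ybar_{k+1} - ybar_k)^2>)."""
            n = len(y) // m
            ybar = y[: n * m].reshape(n, m).mean(axis=1)
            return np.sqrt(0.5 * np.mean(np.diff(ybar) ** 2))

        y = rng.normal(0.0, 5e-15, size=100_000)   # white frequency noise (toy)
        for m in (1, 10, 100, 1_000):
            print(m, adev(y, m))                   # falls roughly as 1/sqrt(m)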

  8. Undamped electrostatic plasma waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valentini, F.; Perrone, D.; Veltri, P.

    2012-09-15

    Electrostatic waves in a collision-free unmagnetized plasma of electrons with fixed ions are investigated for electron equilibrium velocity distribution functions that deviate slightly from Maxwellian. Of interest are undamped waves that are the small amplitude limit of nonlinear excitations, such as electron acoustic waves (EAWs). A deviation consisting of a small plateau, a region with zero velocity derivative over a width that is a very small fraction of the electron thermal speed, is shown to give rise to new undamped modes, which here are named corner modes. The presence of the plateau turns off Landau damping and allows oscillations with phase speeds within the plateau. These undamped waves are obtained in a wide region of the (k, ω_R) plane (ω_R being the real part of the wave frequency and k the wavenumber), away from the well-known 'thumb curve' for Langmuir waves and EAWs based on the Maxwellian. Results of nonlinear Vlasov-Poisson simulations that corroborate the existence of these modes are described. It is also shown that deviations caused by fattening the tail of the distribution shift roots off of the thumb curve toward lower k-values and chopping the tail shifts them toward higher k-values. In addition, a rule of thumb is obtained for assessing how the existence of a plateau shifts roots off of the thumb curve. Suggestions are made for interpreting experimental observations of electrostatic waves, such as recent ones in nonneutral plasmas.

  9. Comparing the Pearson and Spearman correlation coefficients across distributions and sample sizes: A tutorial using simulations and empirical data.

    PubMed

    de Winter, Joost C F; Gosling, Samuel D; Potter, Jeff

    2016-09-01

    The Pearson product–moment correlation coefficient (r_p) and the Spearman rank correlation coefficient (r_s) are widely used in psychological research. We compare r_p and r_s on 3 criteria: variability, bias with respect to the population value, and robustness to an outlier. Using simulations across low (N = 5) to high (N = 1,000) sample sizes we show that, for normally distributed variables, r_p and r_s have similar expected values but r_s is more variable, especially when the correlation is strong. However, when the variables have high kurtosis, r_p is more variable than r_s. Next, we conducted a sampling study of a psychometric dataset featuring symmetrically distributed data with light tails, and of 2 Likert-type survey datasets, 1 with light-tailed and the other with heavy-tailed distributions. Consistent with the simulations, r_p had lower variability than r_s in the psychometric dataset. In the survey datasets with heavy-tailed variables in particular, r_s had lower variability than r_p, and often corresponded more accurately to the population Pearson correlation coefficient (R_p) than r_p did. The simulations and the sampling studies showed that variability in terms of standard deviations can be reduced by about 20% by choosing r_s instead of r_p. In comparison, increasing the sample size by a factor of 2 results in a 41% reduction of the standard deviations of r_s and r_p. In conclusion, r_p is suitable for light-tailed distributions, whereas r_s is preferable when variables feature heavy-tailed distributions or when outliers are present, as is often the case in psychological research. PsycINFO Database Record (c) 2016 APA, all rights reserved
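
    A minimal simulation in the spirit of the comparison above, estimating the sampling standard deviations of r_p and r_s for bivariate normal data (the sample size, correlation and replicate count are arbitrary choices):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)

        def estimator_sds(n=50, rho=0.5, reps=2_000):
            """Sampling standard deviations of r_p and r_s under normality."""
            cov = [[1.0, rho], [rho, 1.0]]
            rp, rs = [], []
            for _ in range(reps):
                x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
                rp.append(stats.pearsonr(x, y)[0])
                rs.append(stats.spearmanr(x, y)[0])
            return np.std(rp), np.std(rs)

        print(estimator_sds())  # under normality r_s is the more variable one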

  10. Nonequilibrium kinetic boundary condition at the vapor-liquid interface of argon

    NASA Astrophysics Data System (ADS)

    Ishiyama, Tatsuya; Fujikawa, Shigeo; Kurz, Thomas; Lauterborn, Werner

    2013-10-01

    A boundary condition for the Boltzmann equation (kinetic boundary condition, KBC) at the vapor-liquid interface of argon is constructed with the help of molecular dynamics (MD) simulations. The KBC is examined at a constant liquid temperature of 85 K in a wide range of nonequilibrium states of vapor. The present investigation is an extension of a previous one by Ishiyama, Yano, and Fujikawa [Phys. Rev. Lett. 95, 084504 (2005)] and provides a more complete form of the KBC. The present KBC includes a thermal accommodation coefficient in addition to evaporation and condensation coefficients, and these coefficients are uniquely determined in MD simulations. The thermal accommodation coefficient shows an anisotropic behavior at the interface for molecular velocities normal versus tangential to the interface. It is also found that the evaporation and condensation coefficients are almost constant in a fairly wide range of nonequilibrium states. The thermal accommodation coefficient of the normal velocity component is almost unity, while that of the tangential component shows a decreasing function of the density of vapor incident on the interface, indicating that the tangential velocity distribution of molecules leaving the interface into the vapor phase may deviate from the tangential parts of the Maxwell velocity distribution at the liquid temperature. A mechanism for the deviation of the KBC from the isotropic Maxwell KBC at the liquid temperature is discussed in terms of anisotropic energy relaxation at the interface. The liquid-temperature dependence of the present KBC is also discussed.

  11. Hydration of nonelectrolytes in binary aqueous solutions

    NASA Astrophysics Data System (ADS)

    Rudakov, A. M.; Sergievskii, V. V.

    2010-10-01

    Literature data on the thermodynamic properties of binary aqueous solutions of nonelectrolytes that show negative deviations from Raoult's law due largely to the contribution of the hydration of the solute are briefly surveyed. Attention is focused on simulating the thermodynamic properties of solutions using equations of the cluster model. It is shown that the model is based on the assumption that there exists a distribution of stoichiometric hydrates over hydration numbers. In terms of the theory of ideal associated solutions, the equations for activity coefficients, osmotic coefficients, vapor pressure, and excess thermodynamic functions (volume, Gibbs energy, enthalpy, entropy) are obtained in analytical form. Basic parameters in the equations are the hydration numbers of the nonelectrolyte (the mathematical expectation of the distribution of hydrates) and the dispersions of the distribution. It is concluded that the model equations adequately describe the thermodynamic properties of a wide range of nonelectrolytes partly or completely soluble in water.

  12. GTA weld penetration and the effects of deviations in machine variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giedt, W.H.

    1987-07-01

    Analytical models for predicting the temperature distribution during GTA welding are reviewed with the purpose of developing a procedure for investigating the effects of deviations in machine parameters. The objective was to determine the accuracy required in machine settings to obtain reproducible results. This review revealed a wide range of published values (21 to 90%) for the arc heating efficiency. Low values (21 to 65%) were associated with evaluation of efficiency using constant property conduction models. Values from 75 to 90% were determined from calorimetric type measurements and are applicable for more accurate numerical solution procedures. Although numerical solutions can yield better overall weld zone predictions, calculations are lengthy and complex. In view of this and the indication that acceptable agreement with experimental measurements can be achieved with the moving-point-source solution, it was utilized to investigate the effects of deviations or errors in voltage, current, and travel speed on GTA weld penetration. Variations resulting from welding within current goals for voltage (±0.1 V), current (±3.0 A), and travel speed (±2.0%) were found to be ±2 to 4%, with voltage and current being more influential than travel speed.
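
    A sketch of the moving-point-source (Rosenthal thick-plate) solution referred to above, used to probe how small voltage and current deviations shift the near-arc temperature; the material constants, the 75% arc efficiency and the operating point are illustrative assumptions:

        import numpy as np

        def rosenthal(xi, y, z, q, v, k=25.0, alpha=5e-6, t0=300.0):
            """Moving-point-source quasi-steady temperature (K). xi is the
            coordinate along the travel direction in the moving frame (m),
            q absorbed power (W), v travel speed (m/s), k conductivity
            (W/m/K), alpha thermal diffusivity (m^2/s)."""
            r = np.sqrt(xi**2 + y**2 + z**2)
            return t0 + q / (2.0 * np.pi * k * r) * np.exp(-v * (r + xi) / (2.0 * alpha))

        # Temperature 2 mm behind the arc for nominal and deviated settings
        for label, volts, amps in (("nominal", 10.0, 150.0),
                                   ("+0.1 V", 10.1, 150.0),
                                   ("+3.0 A", 10.0, 153.0)):
            q = 0.75 * volts * amps            # 75% arc efficiency (assumed)
            t = rosenthal(-2.0e-3, 0.0, 0.0, q, v=3.0e-3)
            print(label, round(t))             # shifts of roughly 1-2%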

  13. Statistical characteristics of cloud variability. Part 1: Retrieved cloud liquid water path at three ARM sites

    NASA Astrophysics Data System (ADS)

    Huang, Dong; Campos, Edwin; Liu, Yangang

    2014-09-01

    Statistical characteristics of cloud variability are examined for their dependence on averaging scales and best representation of probability density function with the decade-long retrieval products of cloud liquid water path (LWP) from the tropical western Pacific (TWP), Southern Great Plains (SGP), and North Slope of Alaska (NSA) sites of the Department of Energy's Atmospheric Radiation Measurement Program. The statistical moments of LWP show some seasonal variation at the SGP and NSA sites but not much at the TWP site. It is found that the standard deviation, relative dispersion (the ratio of the standard deviation to the mean), and skewness all quickly increase with the averaging window size when the window size is small and become more or less flat when the window size exceeds 12 h. On average, the cloud LWP at the TWP site has the largest values of standard deviation, relative dispersion, and skewness, whereas the NSA site exhibits the least. Correlation analysis shows that there is a positive correlation between the mean LWP and the standard deviation. The skewness is found to be closely related to the relative dispersion with a correlation coefficient of 0.6. The comparison further shows that the lognormal, Weibull, and gamma distributions reasonably explain the observed relationship between skewness and relative dispersion over a wide range of scales.
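
    The windowed statistics described above are straightforward to reproduce on any LWP time series; a sketch using a synthetic record (the lognormal noise and sinusoidal modulation are stand-ins for real retrievals):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(6)

        # Synthetic LWP record (g/m^2): lognormal noise on a slowly varying base
        t = np.arange(20_000)                          # minutes, toy resolution
        lwp = (100.0 + 30.0 * np.sin(t / 1500.0)) * np.exp(rng.normal(0.0, 0.5, t.size))

        for window in (60, 360, 720, 2880):            # averaging window, minutes
            blocks = lwp[: lwp.size // window * window].reshape(-1, window)
            sd = blocks.std(axis=1).mean()
            rel_disp = (blocks.std(axis=1) / blocks.mean(axis=1)).mean()
            skew = stats.skew(blocks, axis=1).mean()
            print(window, round(sd, 1), round(rel_disp, 3), round(skew, 2))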

  14. Statistical characteristics of cloud variability. Part 1: Retrieved cloud liquid water path at three ARM sites: Observed cloud variability at ARM sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Dong; Campos, Edwin; Liu, Yangang

    2014-09-17

    Statistical characteristics of cloud variability are examined for their dependence on averaging scales and best representation of probability density function with the decade-long retrieval products of cloud liquid water path (LWP) from the tropical western Pacific (TWP), Southern Great Plains (SGP), and North Slope of Alaska (NSA) sites of the Department of Energy’s Atmospheric Radiation Measurement Program. The statistical moments of LWP show some seasonal variation at the SGP and NSA sites but not much at the TWP site. It is found that the standard deviation, relative dispersion (the ratio of the standard deviation to the mean), and skewness all quickly increase with the averaging window size when the window size is small and become more or less flat when the window size exceeds 12 h. On average, the cloud LWP at the TWP site has the largest values of standard deviation, relative dispersion, and skewness, whereas the NSA site exhibits the least. Correlation analysis shows that there is a positive correlation between the mean LWP and the standard deviation. The skewness is found to be closely related to the relative dispersion with a correlation coefficient of 0.6. The comparison further shows that the lognormal, Weibull, and gamma distributions reasonably explain the observed relationship between skewness and relative dispersion over a wide range of scales.

  15. Advancing Underwater Acoustic Communication for Autonomous Distributed Networks via Sparse Channel Sensing, Coding, and Navigation Support

    DTIC Science & Technology

    2012-09-30

    Estimation Methods for Underwater OFDM 5) Two Iterative Receivers for Distributed MIMO-OFDM with Large Doppler Deviations. 6) Asynchronous Multiuser... multi-input multi-output (MIMO) OFDM is also pursued, where it is shown that the proposed hybrid initialization enables drastically improved receiver... are investigated. 5) Two Iterative Receivers for Distributed MIMO-OFDM with Large Doppler Deviations. This work studies a distributed system with

  16. Preliminary results from the White Sands Missile Range sonic boom propagation experiment

    NASA Technical Reports Server (NTRS)

    Willshire, William L., Jr.; Devilbiss, David W.

    1992-01-01

    Sonic boom bow shock amplitude and rise time statistics from a recent sonic boom propagation experiment are presented. Distributions of bow shock overpressure and rise time measured under different atmospheric turbulence conditions for the same test aircraft are quite different. The peak overpressure distributions are skewed positively, indicating a tendency for positive deviations from the mean to be larger than negative deviations. Standard deviations of overpressure distributions measured under moderate turbulence were 40 percent larger than those measured under low turbulence. As turbulence increased, the difference between the median and the mean increased, indicating increased positive overpressure deviations. The effect of turbulence was more readily seen in the rise time distributions. Under moderate turbulence conditions, the rise time distribution means were larger by a factor of 4 and the standard deviations were larger by a factor of 3 from the low turbulence values. These distribution changes resulted in a transition from a peaked appearance of the rise time distribution for the morning to a flattened appearance for the afternoon rise time distributions. The sonic boom propagation experiment consisted of flying three types of aircraft supersonically over a ground-based microphone array with concurrent measurements of turbulence and other meteorological data. The test aircraft were a T-38, an F-15, and an F-111, and they were flown at speeds of Mach 1.2 to 1.3, 30,000 feet above a 16-element linear microphone array with an inter-element spacing of 200 ft. In two weeks of testing, 57 supersonic passes of the test aircraft were flown from early morning to late afternoon.

  17. Influence of particle size distribution on nanopowder cold compaction processes

    NASA Astrophysics Data System (ADS)

    Boltachev, G.; Volkov, N.; Lukyashin, K.; Markov, V.; Chingina, E.

    2017-06-01

    Nanopowder uniform and uniaxial cold compaction processes are simulated by a 2D granular dynamics method. In addition to the well-known contact laws, the particle interactions include dispersive attraction forces and the possibility of interparticle solid-bridge formation, which are of great importance for nanopowders. Different model systems are investigated: monosized systems with particle diameters of 10, 20 and 30 nm; bidisperse systems with different contents of small (diameter 10 nm) and large (30 nm) particles; and polydisperse systems corresponding to a log-normal size distribution law of varying width. A non-monotonic dependence of compact density on powder content is revealed in bidisperse systems. The deviations of compact density in polydisperse systems from the density of the corresponding monosized system are found to be minor, less than 1 per cent.
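
    Generating the polydisperse model systems amounts to sampling diameters from a log-normal law of chosen width; a short sketch parameterized by the median diameter and the geometric standard deviation:

        import numpy as np

        rng = np.random.default_rng(7)

        def lognormal_diameters(n, median_nm, gsd):
            """Particle diameters from a log-normal size distribution with the
            given median and geometric standard deviation (width parameter)."""
            return median_nm * np.exp(rng.normal(0.0, np.log(gsd), size=n))

        for gsd in (1.05, 1.3, 1.6):           # narrow to wide distributions
            d = lognormal_diameters(100_000, median_nm=20.0, gsd=gsd)
            print(gsd, round(d.mean(), 1), round(d.std(), 2))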

  18. The retest distribution of the visual field summary index mean deviation is close to normal.

    PubMed

    Anderson, Andrew J; Cheng, Allan C Y; Lau, Samantha; Le-Pham, Anne; Liu, Victor; Rahman, Farahnaz

    2016-09-01

    When modelling optimum strategies for how best to determine visual field progression in glaucoma, it is commonly assumed that the summary index mean deviation (MD) is normally distributed on repeated testing. Here we tested whether this assumption is correct. We obtained 42 reliable 24-2 Humphrey Field Analyzer SITA standard visual fields from one eye of each of five healthy young observers, with the first two fields excluded from analysis. Previous work has shown that although MD variability is higher in glaucoma, the shape of the MD distribution is similar to that found in normal visual fields. A Shapiro-Wilk test determined any deviation from normality. Kurtosis values for the distributions were also calculated. Data from each observer passed the Shapiro-Wilk normality test. Bootstrapped 95% confidence intervals for kurtosis encompassed the value for a normal distribution in four of five observers. When examined with quantile-quantile plots, distributions were close to normal and showed no consistent deviations across observers. The retest distribution of MD is not significantly different from normal in healthy observers, and so is likely also normally distributed - or nearly so - in those with glaucoma. Our results increase our confidence in the results of influential modelling studies where a normal distribution for MD was assumed. © 2016 The Authors Ophthalmic & Physiological Optics © 2016 The College of Optometrists.
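
    Both checks used in the study are easy to replicate; the sketch below applies a Shapiro-Wilk test and a bootstrapped 95% confidence interval for excess kurtosis to a toy set of retest MD values (simulated data, not patient fields):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(8)
        md = rng.normal(-1.0, 0.8, size=40)   # toy retest MD values, in dB

        print(stats.shapiro(md))              # large p: no evidence against normality

        # Bootstrapped 95% confidence interval for excess kurtosis (0 if normal)
        boot = [stats.kurtosis(rng.choice(md, size=md.size, replace=True))
                for _ in range(5_000)]
        print(np.percentile(boot, [2.5, 97.5]))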

  19. Discrete disorder models for many-body localization

    NASA Astrophysics Data System (ADS)

    Janarek, Jakub; Delande, Dominique; Zakrzewski, Jakub

    2018-04-01

    Using the exact diagonalization technique, we investigate the many-body localization phenomenon in the 1D Heisenberg chain, comparing several disorder models. In particular we consider a family of discrete distributions of disorder strengths and compare the results with the standard uniform distribution. Both statistical properties of energy levels and the long-time nonergodic behavior are discussed. The results for different discrete distributions are essentially identical to those obtained for the continuous distribution, provided the disorder strength is rescaled by the standard deviation of the random distribution. Only for the binary distribution are significant deviations observed.

  20. Optimal methods for fitting probability distributions to propagule retention time in studies of zoochorous dispersal.

    PubMed

    Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi

    2016-02-01

    Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. We recommend the use of cumulative probability to fit parametric probability distributions to propagule retention time, specifically using maximum likelihood for parameter estimation. Furthermore, the experimental design for an optimal characterization of unimodal propagule retention time should contemplate at least 500 recovered propagules and sampling time-intervals not larger than the time peak of propagule retrieval, except in the tail of the distribution where broader sampling time-intervals may also produce accurate fits.
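
    A sketch of the recommended approach: fit a lognormal CDF to the cumulative proportions observed at the upper bounds of the sampling intervals. Non-linear least squares is used here for parameter estimation, the simpler of the two cumulative-fitting variants evaluated by the authors (they recommend maximum likelihood):

        import numpy as np
        from scipy import optimize, stats

        rng = np.random.default_rng(9)

        # Simulate "true" retention times (h), then discretize at sampling intervals
        true = stats.lognorm(s=0.6, scale=8.0)
        times = true.rvs(500, random_state=rng)
        edges = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 12.0, 24.0, 48.0])
        counts, _ = np.histogram(times, bins=edges)

        # Fit the lognormal CDF to cumulative proportions at interval upper bounds
        cum = np.cumsum(counts) / counts.sum()
        popt, _ = optimize.curve_fit(
            lambda t, s, scale: stats.lognorm.cdf(t, s, scale=scale),
            edges[1:], cum, p0=(1.0, 5.0))
        print(popt)  # should be close to the generating values (0.6, 8.0)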

  1. A product Pearson-type VII density distribution

    NASA Astrophysics Data System (ADS)

    Nadarajah, Saralees; Kotz, Samuel

    2008-01-01

    The Pearson-type VII distributions (containing the Student's t distributions) are becoming increasingly prominent and are being considered as competitors to the normal distribution. Motivated by real examples in decision sciences, Bayesian statistics, probability theory and physics, a new Pearson-type VII distribution is introduced by taking the product of two Pearson-type VII pdfs. Various structural properties of this distribution are derived, including its cdf, moments, mean deviation about the mean, mean deviation about the median, entropy, asymptotic distribution of the extreme order statistics, maximum likelihood estimates and the Fisher information matrix. Finally, an application to a Bayesian testing problem is illustrated.

  2. Simulation-based estimation of mean and standard deviation for meta-analysis via Approximate Bayesian Computation (ABC).

    PubMed

    Kwon, Deukwoo; Reis, Isildinha M

    2015-08-12

    When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC performs well; however, the Wan et al. method is best for estimating the standard deviation under the normal distribution. In the estimation of the mean, our ABC method is best regardless of the assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics such as the posterior mean and 95% credible interval when Bayesian analysis has been employed.
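
    A minimal ABC rejection sketch of the core idea: propose (mu, sigma), simulate samples of the reported size from a normal model, and keep the proposals whose simulated median, minimum and maximum best match the reported summaries. The normal data model, the uniform priors and the distance function are simplifying assumptions; the published method is more elaborate:

        import numpy as np

        rng = np.random.default_rng(10)

        def abc_mean_sd(median, minimum, maximum, n, draws=50_000, keep=500):
            """Keep the (mu, sigma) proposals whose simulated summary statistics
            are closest to the reported median, minimum and maximum."""
            mu = rng.uniform(minimum, maximum, draws)
            sigma = rng.uniform(1e-3, maximum - minimum, draws)
            sims = rng.normal(mu[:, None], sigma[:, None], (draws, n))
            dist = (np.abs(np.median(sims, axis=1) - median)
                    + np.abs(sims.min(axis=1) - minimum)
                    + np.abs(sims.max(axis=1) - maximum))
            best = np.argsort(dist)[:keep]
            return mu[best].mean(), sigma[best].mean()

        print(abc_mean_sd(median=10.0, minimum=2.0, maximum=25.0, n=40))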

  3. Large deviations in the presence of cooperativity and slow dynamics

    NASA Astrophysics Data System (ADS)

    Whitelam, Stephen

    2018-06-01

    We study simple models of intermittency, involving switching between two states, within the dynamical large-deviation formalism. Singularities appear in the formalism when switching is cooperative or when its basic time scale diverges. In the first case the unbiased trajectory distribution undergoes a symmetry breaking, leading to a change in shape of the large-deviation rate function for a particular dynamical observable. In the second case the symmetry of the unbiased trajectory distribution remains unbroken. Comparison of these models suggests that singularities of the dynamical large-deviation formalism can signal the dynamical equivalent of an equilibrium phase transition but do not necessarily do so.

  4. Kin-Aggregations Explain Chaotic Genetic Patchiness, a Commonly Observed Genetic Pattern, in a Marine Fish.

    PubMed

    Selwyn, Jason D; Hogan, J Derek; Downey-Wall, Alan M; Gurski, Lauren M; Portnoy, David S; Heath, Daniel D

    2016-01-01

    The phenomenon of chaotic genetic patchiness is a pattern commonly seen in marine organisms, particularly those with demersal adults and pelagic larvae. This pattern is usually associated with sweepstakes recruitment and variable reproductive success. Here we investigate the biological underpinnings of this pattern in a species of marine goby Coryphopterus personatus. We find that populations of this species show tell-tale signs of chaotic genetic patchiness including: small, but significant, differences in genetic structure over short distances; a non-equilibrium or "chaotic" pattern of differentiation among locations in space; and within locus, within population deviations from the expectations of Hardy-Weinberg equilibrium (HWE). We show that despite having a pelagic larval stage, and a wide distribution across Caribbean coral reefs, this species forms groups of highly related individuals at small spatial scales (<10 metres). These spatially clustered family groups cause the observed deviations from HWE and local population differentiation, a finding that is rarely demonstrated, but could be more common than previously thought.

  5. Minding Impacting Events in a Model of Stochastic Variance

    PubMed Central

    Duarte Queirós, Sílvio M.; Curado, Evaldo M. F.; Nobre, Fernando D.

    2011-01-01

    We introduce a generalization of the well-known ARCH process, widely used for generating uncorrelated stochastic time series with long-term non-Gaussian distributions and long-lasting correlations in the (instantaneous) standard deviation exhibiting a clustering profile. Specifically, inspired by the fact that in a variety of systems impacting events are hardly forgotten, we split the process into two different regimes: a first one for regular periods, where the average volatility of the fluctuations within a certain period of time is below a certain threshold, and another one in which the local standard deviation exceeds that threshold. In the former situation we use standard rules for heteroscedastic processes, whereas in the latter case the system starts recalling past values that surpassed the threshold. Our results show that for appropriate parameter values the model is able to provide fat-tailed probability density functions and strong persistence of the instantaneous variance, characterized by large values of the Hurst exponent, which are ubiquitous features in complex systems. PMID:21483864
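
    For context, the 'regular regime' baseline referred to above is the standard ARCH process; a minimal ARCH(1) generator is sketched below (the threshold-triggered memory regime of the generalized model is not implemented here):

        import numpy as np

        rng = np.random.default_rng(11)

        def arch1(n, a0=0.2, a1=0.7):
            """Standard ARCH(1): sigma_t^2 = a0 + a1 * x_{t-1}^2,
            x_t = sigma_t * eps_t with Gaussian innovations eps_t."""
            x = np.zeros(n)
            for t in range(1, n):
                x[t] = np.sqrt(a0 + a1 * x[t - 1] ** 2) * rng.normal()
            return x

        x = arch1(50_000)
        # Fat tails: far more 3-sigma events than the ~0.0027 of a Gaussian
        print(x.std(), (np.abs(x) > 3 * x.std()).mean())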

  6. Revealing Hidden Conformational Space of LOV Protein VIVID Through Rigid Residue Scan Simulations

    NASA Astrophysics Data System (ADS)

    Zhou, Hongyu; Zoltowski, Brian D.; Tao, Peng

    2017-04-01

    The VIVID (VVD) protein is a Light-Oxygen-Voltage (LOV) domain in the circadian clock system. Upon blue-light activation, a covalent bond is formed between VVD residue Cys108 and its cofactor flavin adenine dinucleotide (FAD), prompting VVD to switch from the Dark state to the Light state with significant conformational deviation. However, the mechanism of this local-environment-initiated global protein conformational change remains elusive. We employed a recently developed computational approach, the rigid residue scan (RRS), to systematically probe the impact of the internal degrees of freedom of each amino acid residue of VVD on its overall dynamics by applying a rigid-body constraint on each residue in molecular dynamics simulations. Key residues were identified with distinctive impacts on the Dark and Light states, respectively. All the simulations display a wide distribution on a two-dimensional (2D) plot of structural root-mean-square deviations (RMSD) from either the Dark or the Light state. Clustering analysis of the 2D RMSD distribution leads to 15 representative structures with drastically different conformations of the N-terminus, which is also a key difference between the Dark and Light states of VVD. Further principal component analyses (PCA) of the RRS simulations agree with the observation of distinctive impacts from individual residues on the Dark and Light states.

  7. Phage display peptide libraries: deviations from randomness and correctives

    PubMed Central

    Ryvkin, Arie; Ashkenazy, Haim; Weiss-Ottolenghi, Yael; Piller, Chen; Pupko, Tal; Gershoni, Jonathan M

    2018-01-01

    Peptide-expressing phage display libraries are widely used for the interrogation of antibodies. Affinity selected peptides are then analyzed to discover epitope mimetics, or are subjected to computational algorithms for epitope prediction. A critical assumption for these applications is the random representation of amino acids in the initial naïve peptide library. In a previous study, we implemented next generation sequencing to evaluate a naïve library and discovered severe deviations from randomness in UAG codon over-representation as well as in high G phosphoramidite abundance causing amino acid distribution biases. In this study, we demonstrate that the UAG over-representation can be attributed to the burden imposed on the phage upon the assembly of the recombinant Protein 8 subunits. This was corrected by constructing the libraries using supE44-containing bacteria which suppress the UAG driven abortive termination. We also demonstrate that the overabundance of G stems from varying synthesis efficiency and can be corrected using compensating oligonucleotide-mixtures calibrated by mass spectroscopy. Construction of libraries implementing these correctives results in markedly improved libraries that display random distribution of amino acids, thus ensuring that enriched peptides obtained in biopanning represent a genuine selection event, a fundamental assumption for phage display applications. PMID:29420788

  8. Revealing Hidden Conformational Space of LOV Protein VIVID Through Rigid Residue Scan Simulations

    PubMed Central

    Zhou, Hongyu; Zoltowski, Brian D.; Tao, Peng

    2017-01-01

    The VIVID (VVD) protein is a Light-Oxygen-Voltage (LOV) domain in the circadian clock system. Upon blue-light activation, a covalent bond is formed between VVD residue Cys108 and its cofactor flavin adenine dinucleotide (FAD), prompting VVD to switch from the Dark state to the Light state with significant conformational deviation. However, the mechanism of this local-environment-initiated global protein conformational change remains elusive. We employed a recently developed computational approach, the rigid residue scan (RRS), to systematically probe the impact of the internal degrees of freedom of each amino acid residue of VVD on its overall dynamics by applying a rigid-body constraint on each residue in molecular dynamics simulations. Key residues were identified with distinctive impacts on the Dark and Light states, respectively. All the simulations display a wide distribution on a two-dimensional (2D) plot of structural root-mean-square deviations (RMSD) from either the Dark or the Light state. Clustering analysis of the 2D RMSD distribution leads to 15 representative structures with drastically different conformations of the N-terminus, which is also a key difference between the Dark and Light states of VVD. Further principal component analyses (PCA) of the RRS simulations agree with the observation of distinctive impacts from individual residues on the Dark and Light states. PMID:28425502

  9. Robust Confidence Interval for a Ratio of Standard Deviations

    ERIC Educational Resources Information Center

    Bonett, Douglas G.

    2006-01-01

    Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…

  10. Copy number variations and genetic admixtures in three Xinjiang ethnic minority groups

    PubMed Central

    Lou, Haiyi; Li, Shilin; Jin, Wenfei; Fu, Ruiqing; Lu, Dongsheng; Pan, Xinwei; Zhou, Huaigu; Ping, Yuan; Jin, Li; Xu, Shuhua

    2015-01-01

    Xinjiang is geographically located in central Asia, and it has played an important historical role in connecting eastern Eurasian (EEA) and western Eurasian (WEA) people. However, human population genomic studies in this region have been largely underrepresented, especially with respect to studies of copy number variations (CNVs). Here we constructed the first CNV map of the three major ethnic minority groups, the Uyghur, Kazakh and Kirgiz, using Affymetrix Genome-Wide Human SNP Array 6.0. We systematically compared the properties of CNVs we identified in the three groups with the data from representatives of EEA and WEA. The analyses indicated a typical genetic admixture pattern in all three groups with ancestries from both EEA and WEA. We also identified several CNV regions showing significant deviation of allele frequency from the expected genome-wide distribution, which might be associated with population-specific phenotypes. Our study provides the first genome-wide perspective on the CNVs of three major Xinjiang ethnic minority groups and has implications for both evolutionary and medical studies. PMID:25026903

  11. Copy number variations and genetic admixtures in three Xinjiang ethnic minority groups.

    PubMed

    Lou, Haiyi; Li, Shilin; Jin, Wenfei; Fu, Ruiqing; Lu, Dongsheng; Pan, Xinwei; Zhou, Huaigu; Ping, Yuan; Jin, Li; Xu, Shuhua

    2015-04-01

    Xinjiang is geographically located in central Asia, and it has played an important historical role in connecting eastern Eurasian (EEA) and western Eurasian (WEA) people. However, human population genomic studies in this region have been largely underrepresented, especially with respect to studies of copy number variations (CNVs). Here we constructed the first CNV map of the three major ethnic minority groups, the Uyghur, Kazakh and Kirgiz, using Affymetrix Genome-Wide Human SNP Array 6.0. We systematically compared the properties of CNVs we identified in the three groups with the data from representatives of EEA and WEA. The analyses indicated a typical genetic admixture pattern in all three groups with ancestries from both EEA and WEA. We also identified several CNV regions showing significant deviation of allele frequency from the expected genome-wide distribution, which might be associated with population-specific phenotypes. Our study provides the first genome-wide perspective on the CNVs of three major Xinjiang ethnic minority groups and has implications for both evolutionary and medical studies.

  12. Variation in the standard deviation of the lure rating distribution: Implications for estimates of recollection probability.

    PubMed

    Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin

    2017-10-01

    In word recognition, semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters, the Dual Process Signal Detection (DPSD) model could only accommodate the results by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection. With an additional parameter for the probability of false (lure) recollection, the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately, the size of a given lure confidence/evidence standard deviation relative to other possible lure confidence/evidence standard deviations is often unspecified by context. Hence the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.

  13. A lognormal distribution of the lengths of terminal twigs on self-similar branches of elm trees.

    PubMed

    Koyama, Kohei; Yamamoto, Ken; Ushio, Masayuki

    2017-01-11

    Lognormal distributions and self-similarity are characteristics associated with a wide range of biological systems. The sequential breakage model has established a link between lognormal distributions and self-similarity and has been used to explain species abundance distributions. To date, however, there has been no similar evidence in studies of multicellular organismal forms. We tested the hypothesis that the distribution of the lengths of terminal stems of Japanese elm trees (Ulmus davidiana), the end products of a self-similar branching process, approaches a lognormal distribution. We measured the length of the stem segments of three elm branches and obtained the following results: (i) each occurrence of branching caused variations or errors in the lengths of the child stems relative to their parent stems; (ii) the branches showed statistical self-similarity; the observed error distributions were similar at all scales within each branch and (iii) the multiplicative effect of these errors generated variations of the lengths of terminal twigs that were well approximated by a lognormal distribution, although some statistically significant deviations from strict lognormality were observed for one branch. Our results provide the first empirical evidence that statistical self-similarity of an organismal form generates a lognormal distribution of organ sizes. © 2017 The Author(s).
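
    The mechanism is easy to demonstrate: if every branching event multiplies the parent length by a ratio times an i.i.d. multiplicative error, the log of a terminal twig length is a sum of i.i.d. terms and is therefore approximately normal by the central limit theorem. A toy sketch with binary branching and arbitrary parameters:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(12)

        def terminal_lengths(l0=1000.0, generations=10, ratio=0.6, noise_sd=0.2):
            """Self-similar binary branching: each child length is
            parent * ratio * exp(eps), with the same error law at every scale."""
            lengths = np.array([l0])
            for _ in range(generations):
                children = np.repeat(lengths, 2)
                children = children * ratio * np.exp(rng.normal(0.0, noise_sd, children.size))
                lengths = children
            return lengths

        twigs = terminal_lengths()             # 2**10 = 1024 terminal twigs
        # Log-lengths are sums of i.i.d. terms, hence approximately normal
        print(stats.shapiro(np.log(twigs)))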

  14. An exact solution for the steady state phase distribution in an array of oscillators coupled on a hexagonal lattice

    NASA Technical Reports Server (NTRS)

    Pogorzelski, Ronald J.

    2004-01-01

    When electronic oscillators are coupled to nearest neighbors to form an array on a hexagonal lattice, the planar phase distributions desired for excitation of a phased array antenna are not steady state solutions of the governing non-linear equations describing the system. Thus the steady state phase distribution deviates from planar. It is shown to be possible to obtain an exact solution for the steady state phase distribution and thus determine the deviation from the desired planar distribution as a function of beam steering angle.

  15. An overview of distributed microgrid state estimation and control for smart grids.

    PubMed

    Rana, Md Masud; Li, Li

    2015-02-12

    Given the significant concerns regarding carbon emissions from fossil fuels, global warming and the energy crisis, renewable distributed energy resources (DERs) are going to be integrated in the smart grid. This grid can spread the intelligence of the energy distribution and control system from the central unit to long-distance remote areas, thus enabling accurate state estimation (SE) and wide-area real-time monitoring of these intermittent energy sources. In contrast to the traditional methods of SE, this paper proposes a novel accuracy-dependent Kalman filter (KF) based microgrid SE for the smart grid that uses typical communication systems. This article then proposes a discrete-time linear quadratic regulation to control the state deviations of the microgrid incorporating multiple DERs. Integrating these two approaches with application to the smart grid thus forms a novel contribution in the green energy and control research communities. Finally, the simulation results show that the proposed KF-based microgrid SE and control algorithm provides accurate SE and control compared with the existing method.

  16. Comparative dosimetry of diode and diamond detectors in electron beams for intraoperative radiation therapy.

    PubMed

    Björk, P; Knöös, T; Nilsson, P

    2000-11-01

    The aim of the present study is to examine the validity of using silicon semiconductor detectors in degraded electron beams with a broad energy spectrum and a wide angular distribution. A comparison is made with diamond detector measurements, which is the dosimeter considered to give the best results provided that dose rate effects are corrected for. Two-dimensional relative absorbed dose distributions in electron beams (6-20 MeV) for intraoperative radiation therapy (IORT) are measured in a water phantom. To quantify deviations between the detectors, a dose comparison tool that simultaneously examines the dose difference and distance to agreement (DTA) is used to evaluate the results in low- and high-dose gradient regions, respectively. Uncertainties of the experimental measurement setup (+/- 1% and +/- 0.5 mm) are taken into account by calculating a composite distribution that fails this dose-difference and DTA acceptance limit. Thus, the resulting area of disagreement should be related to differences in detector performance. The dose distributions obtained with the diode are generally in very good agreement with diamond detector measurements. The buildup region and the dose falloff region show good agreement with increasing electron energy, while the region outside the radiation field close to the water surface shows an increased difference with energy. The small discrepancies in the composite distributions are due to several factors: (a) variation of the silicon-to-water collision stopping-power ratio with electron energy, (b) a more pronounced directional dependence for diodes than for diamonds, and (c) variation of the electron fluence perturbation correction factor with depth. For all investigated treatment cones and energies, the deviation is within dose-difference and DTA acceptance criteria of +/- 3% and +/- 1 mm, respectively. Therefore, p-type silicon diodes are well suited, in the sense that they give results in close agreement with diamond detectors, for practical measurements of relative absorbed dose distributions in degraded electron beams used for IORT.

  17. Discrete Element Method Modeling of Bedload Transport: Towards a physics-based link between bed surface variability and particle entrainment statistics

    NASA Astrophysics Data System (ADS)

    Ghasemi, A.; Borhani, S.; Viparelli, E.; Hill, K. M.

    2017-12-01

    The Exner equation provides a formal mathematical link between sediment transport and bed morphology. It is typically represented in a discrete formulation where there is a sharp geometric interface between the bedload layer and the bed, below which no particles are entrained. For highly temporally and spatially resolved models, this is strictly correct, but typically this is applied in such a way that spatial and temporal fluctuations in the bed surface (bedforms and otherwise) are not captured. This limits the extent to which the exchange between particles in transport and the sediment bed is properly represented, which is particularly problematic for mixed grain size distributions that exhibit segregation. Nearly two decades ago, Parker (2000) provided a framework for a solution to this dilemma in the form of a probabilistic Exner equation, partially experimentally validated by Wong et al. (2007). We present a computational study designed to develop a physics-based framework for understanding the interplay between physical parameters of the bed and flow and parameters in the Parker (2000) probabilistic formulation. To do so we use Discrete Element Method simulations to relate local time-varying parameters to long-term macroscopic parameters. These include relating the local grain size distribution and particle entrainment and deposition rates to the long-term average bed shear stress and the standard deviation of bed height variations. While relatively simple, these simulations reproduce long-accepted empirically determined transport behaviors such as the Meyer-Peter and Muller (1948) relationship. We also find that these simulations reproduce statistical relationships proposed by Wong et al. (2007), such as a Gaussian distribution of bed heights whose standard deviation increases with increasing bed shear stress. We demonstrate how the ensuing probabilistic formulations provide insight into the transport and deposition of both narrow and wide grain size distributions.

  18. Influence of atypical retardation pattern on the peripapillary retinal nerve fibre distribution assessed by scanning laser polarimetry and optical coherence tomography.

    PubMed

    Schrems, W A; Laemmer, R; Hoesl, L M; Horn, F K; Mardin, C Y; Kruse, F E; Tornow, R P

    2011-10-01

    To investigate the influence of atypical retardation pattern (ARP) on the distribution of peripapillary retinal nerve fibre layer (RNFL) thickness measured with scanning laser polarimetry in healthy individuals and to compare these results with RNFL thickness from spectral domain optical coherence tomography (OCT) in the same subjects. 120 healthy subjects were investigated in this study. All volunteers received detailed ophthalmological examination, GDx variable corneal compensation (VCC) and Spectralis-OCT. The subjects were divided into four subgroups according to their typical scan score (TSS): very typical with TSS=100, typical with 99 ≥ TSS ≥ 91, less typical with 90 ≥ TSS ≥ 81 and atypical with TSS ≤ 80. Deviations from very typical normal values were calculated for 32 sectors for each group. There was a systematic variation of the RNFL thickness deviation around the optic nerve head in the atypical group for the GDxVCC results. The highest percentage deviation of about 96% appeared temporal with decreasing deviation towards the superior and inferior sectors, and nasal sectors exhibited a deviation of 30%. Percentage deviations from very typical RNFL values decreased with increasing TSS. No systematic variation could be found if the RNFL thickness deviation between different TSS-groups was compared with the OCT results. The ARP has a major impact on the peripapillary RNFL distribution assessed by GDx VCC; thus, the TSS should be included in the standard printout.

  19. Steady-state distributions of probability fluxes on complex networks

    NASA Astrophysics Data System (ADS)

    Chełminiak, Przemysław; Kurzyński, Michał

    2017-02-01

    We consider a simple model of the Markovian stochastic dynamics on complex networks to examine the statistical properties of the probability fluxes. The additional transition, called hereafter a gate, powered by the external constant force breaks a detailed balance in the network. We argue, using a theoretical approach and numerical simulations, that the stationary distributions of the probability fluxes emergent under such conditions converge to the Gaussian distribution. By virtue of the stationary fluctuation theorem, its standard deviation depends directly on the square root of the mean flux. In turn, the nonlinear relation between the mean flux and the external force, which provides the key result of the present study, allows us to calculate the two parameters that entirely characterize the Gaussian distribution of the probability fluxes both close to as well as far from the equilibrium state. Other effects that modify these parameters, such as the addition of shortcuts to the tree-like network, the extension and configuration of the gate, and a change in the network size, studied by means of computer simulations, are also widely discussed in terms of the rigorous theoretical predictions.

  20. The effect of systematic set-up deviations on the absorbed dose distribution for left-sided breast cancer treated with respiratory gating

    NASA Astrophysics Data System (ADS)

    Edvardsson, A.; Ceberg, S.

    2013-06-01

    The aim of this study was 1) to investigate inter-fraction set-up uncertainties for patients treated with respiratory gating for left-sided breast cancer, 2) to investigate the effect of the inter-fraction set-up deviations on the absorbed dose distribution in the target and organs at risk (OARs), and 3) to optimize the set-up correction strategy. By acquiring multiple set-up images, the systematic set-up deviation was evaluated. The effect of the systematic set-up deviation on the absorbed dose distribution was evaluated by 1) simulation in the treatment planning system and 2) measurements with a biplanar diode array. The set-up deviations could be decreased using a no-action-level correction strategy. Not using the clinically implemented adaptive maximum likelihood factor for the gating patients resulted in a better set-up. When the uncorrected set-up deviations were simulated, the average mean absorbed dose increased from 1.38 to 2.21 Gy for the heart, from 4.17 to 8.86 Gy for the left anterior descending coronary artery and from 5.80 to 7.64 Gy for the left lung. Respiratory gating can induce systematic set-up deviations which would result in an increased mean absorbed dose to the OARs if not corrected for; they should therefore be corrected for by an appropriate correction strategy.
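
    A hedged Python sketch of one common no-action-level realization (an assumption about the protocol; the study does not publish its algorithm here): the mean set-up error over the first N imaged fractions is applied as a systematic correction to all later fractions. The error magnitudes below are illustrative.

        import numpy as np

        def nal_correction(setup_errors_mm, n_first=3):
            """Subtract the mean of the first n_first measured set-up errors
            (per axis) from all subsequent fractions."""
            setup_errors_mm = np.asarray(setup_errors_mm, dtype=float)
            systematic = setup_errors_mm[:n_first].mean(axis=0)
            corrected = setup_errors_mm.copy()
            corrected[n_first:] -= systematic
            return corrected

        rng = np.random.default_rng(1)
        errors = rng.normal([2.0, -1.5, 0.5], 1.0, size=(20, 3))  # mm, 3 axes
        residual = nal_correction(errors)
        # The systematic component of the corrected fractions shrinks toward zero
        print(errors[3:].mean(axis=0), residual[3:].mean(axis=0))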

  1. A prevalence-based association test for case-control studies.

    PubMed

    Ryckman, Kelli K; Jiang, Lan; Li, Chun; Bartlett, Jacquelaine; Haines, Jonathan L; Williams, Scott M

    2008-11-01

    Genetic association is often determined in case-control studies by the differential distribution of alleles or genotypes. Recent work has demonstrated that association can also be assessed by deviations from the expected distributions of alleles or genotypes. Specifically, multiple methods motivated by the principles of Hardy-Weinberg equilibrium (HWE) have been developed. However, these methods do not take into account many of the assumptions of HWE. Therefore, we have developed a prevalence-based association test (PRAT) as an alternative method for detecting association in case-control studies. This method, also motivated by the principles of HWE, uses an estimated population allele frequency to generate expected genotype frequencies instead of using the case and control frequencies separately. Our method often has greater power, under a wide variety of genetic models, to detect association than genotypic, allelic or Cochran-Armitage trend association tests. Therefore, we propose PRAT as a powerful alternative method of testing for association.
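
    A minimal Python sketch of the Hardy-Weinberg bookkeeping that motivates tests of this kind: estimate the allele frequency from cases and controls combined, form expected genotype counts in the proportions p^2 : 2pq : q^2 for each group, and compare observed with expected. This is generic HWE arithmetic with made-up counts, not the published PRAT statistic.

        import numpy as np
        from scipy.stats import chi2

        cases = np.array([60, 90, 50])      # genotype counts (AA, Aa, aa)
        controls = np.array([45, 100, 55])

        # Population allele frequency estimated from cases and controls combined
        pooled = cases + controls
        p = (2 * pooled[0] + pooled[1]) / (2 * pooled.sum())

        for label, obs in (("cases", cases), ("controls", controls)):
            expected = np.array([p**2, 2 * p * (1 - p), (1 - p)**2]) * obs.sum()
            stat = ((obs - expected) ** 2 / expected).sum()
            # df = 1 as in the usual HWE goodness-of-fit test
            print(label, "chi2 =", round(stat, 2), "p =", round(chi2.sf(stat, 1), 4))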

  2. Investigations of CuFeS₂ semiconductor mineral from ocean rift hydrothermal vent fields by Cu NMR in a local field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matukhin, V. L.; Pogoreltsev, A. I.; Gavrilenko, A. N., E-mail: ang-2000@mail.ru

    The results of investigating natural samples of the chalcopyrite mineral CuFeS₂ from massive oceanic sulfide ores of the Mid-Atlantic ridge by ⁶³Cu nuclear magnetic resonance (⁶³Cu NMR) in a local field at room temperature are presented. The significant width of the resonance lines found in the ⁶³Cu NMR spectrum directly testifies to a wide distribution of local magnetic and electric fields in the investigated chalcopyrite samples. This distribution can be the consequence of an appreciable deviation of the structure of the investigated chalcopyrite samples from the stoichiometric one. The obtained results show that pulsed ⁶³Cu NMR can be an efficient method for studying the physical properties of deep-water polymetallic sulfides of the World Ocean.

  3. Temporal Structure of Volatility Fluctuations

    NASA Astrophysics Data System (ADS)

    Wang, Fengzhong; Yamasaki, Kazuko; Stanley, H. Eugene; Havlin, Shlomo

    Volatility fluctuations are of great importance for the study of financial markets, and the temporal structure is an essential feature of fluctuations. To explore the temporal structure, we employ a new approach based on the return interval, which is defined as the time interval between two successive volatility values that are above a given threshold. We find that the distribution of the return intervals follows a scaling law over a wide range of thresholds, and over a broad range of sampling intervals. Moreover, this scaling law is universal for stocks of different countries, for commodities, for interest rates, and for currencies. However, further and more detailed analysis of the return intervals shows some systematic deviations from the scaling law. We also demonstrate a significant memory effect in the temporal organization of the return intervals. We find that the distribution of return intervals is strongly related to the correlations in the volatility.
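
    A short Python sketch of the return-interval construction described above: record the gaps between successive volatility values exceeding a threshold q (the heavy-tailed toy series is an illustrative stand-in for market volatility):

        import numpy as np

        def return_intervals(volatility, q):
            """Times between successive exceedances of threshold q."""
            exceed_times = np.flatnonzero(np.asarray(volatility) > q)
            return np.diff(exceed_times)

        rng = np.random.default_rng(2)
        vol = np.abs(rng.standard_t(df=3, size=100_000))
        for q in (2.0, 4.0, 8.0):
            tau = return_intervals(vol, q)
            # Scaling ansatz: distributions of tau / mean(tau) should collapse across q
            print(f"q={q}: mean interval={tau.mean():.1f}, count={tau.size}")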

  4. Percentiles of the run-length distribution of the Exponentially Weighted Moving Average (EWMA) median chart

    NASA Astrophysics Data System (ADS)

    Tan, K. L.; Chong, Z. L.; Khoo, M. B. C.; Teoh, W. L.; Teh, S. Y.

    2017-09-01

    Quality control is crucial in a wide variety of fields, as it can help to satisfy customers' needs and requirements by enhancing and improving products and services to a superior quality level. The EWMA median chart was proposed as a useful alternative to the EWMA X̄ chart because the median-type chart is robust against contamination, outliers or small deviations from the normality assumption, compared to the traditional X̄-type chart. To provide a complete understanding of the run-length distribution, the percentiles of the run-length distribution should be investigated rather than depending solely on the average run length (ARL) performance measure. Interpretation depending on the ARL alone can be misleading, as the skewness and shape of the run-length distribution change with the process mean shift, varying from almost symmetric when the magnitude of the mean shift is large to highly right-skewed when the process is in-control (IC) or only slightly out-of-control (OOC). Before computing the percentiles of the run-length distribution, optimal parameters of the EWMA median chart are obtained by minimizing the OOC ARL while retaining the IC ARL at a desired value.
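
    An illustrative EWMA-of-medians recursion, Z_i = lambda * M_i + (1 - lambda) * Z_{i-1} with M_i the median of subgroup i; the smoothing constant and data here are placeholders, and the chart limits and optimal design parameters are what the paper obtains by minimizing the OOC ARL.

        import numpy as np

        def ewma_median(subgroups, lam=0.2, z0=0.0):
            """subgroups: (k, n) array of k subgroups of size n."""
            z, stats = z0, []
            for group in subgroups:
                z = lam * np.median(group) + (1.0 - lam) * z
                stats.append(z)
            return np.array(stats)

        rng = np.random.default_rng(3)
        data = rng.normal(0.0, 1.0, size=(50, 5))
        data[30:] += 1.0                       # mean shift after subgroup 30
        print(np.round(ewma_median(data)[25:35], 2))  # statistic drifts upward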

  5. Comparing Standard Deviation Effects across Contexts

    ERIC Educational Resources Information Center

    Ost, Ben; Gangopadhyaya, Anuj; Schiman, Jeffrey C.

    2017-01-01

    Studies using tests scores as the dependent variable often report point estimates in student standard deviation units. We note that a standard deviation is not a standard unit of measurement since the distribution of test scores can vary across contexts. As such, researchers should be cautious when interpreting differences in the numerical size of…

  6. Efficiency and large deviations in time-asymmetric stochastic heat engines

    DOE PAGES

    Gingrich, Todd R.; Rotskoff, Grant M.; Vaikuntanathan, Suriyanarayanan; ...

    2014-10-24

    In a stochastic heat engine driven by a cyclic non-equilibrium protocol, fluctuations in work and heat give rise to a fluctuating efficiency. Using computer simulations and tools from large deviation theory, we have examined these fluctuations in detail for a model two-state engine. We find in general that the form of efficiency probability distributions is similar to those described by Verley et al (2014 Nat. Commun. 5 4721), in particular featuring a local minimum in the long-time limit. In contrast to the time-symmetric engine protocols studied previously, however, this minimum need not occur at the value characteristic of a reversible Carnot engine. Furthermore, while the local minimum may reside at the global minimum of a large deviation rate function, it does not generally correspond to the least likely efficiency measured over finite time. Lastly, we introduce a general approximation for the finite-time efficiency distribution, P(η), based on large deviation statistics of work and heat, that remains very accurate even when P(η) deviates significantly from its large deviation form.

  7. Control Strategies for Guided Collective Motion

    DTIC Science & Technology

    2015-01-30

    [Extraction residue from the report's reference list, abstract and distribution statement; the recoverable content indicates that the work addresses and analyses deviated linear cyclic pursuit, including contributions by D. Mukherjee and D. Ghose on deviated linear cyclic pursuit and on synchronous and asynchronous heterogeneous cyclic pursuit. Distribution Code A: approved for public release, distribution unlimited.]

  8. More reliable inference for the dissimilarity index of segregation

    PubMed Central

    Allen, Rebecca; Burgess, Simon; Davidson, Russell; Windmeijer, Frank

    2015-01-01

    Summary The most widely used measure of segregation is the so‐called dissimilarity index. It is now well understood that this measure also reflects randomness in the allocation of individuals to units (i.e. it measures deviations from evenness, not deviations from randomness). This leads to potentially large values of the segregation index when unit sizes and/or minority proportions are small, even if there is no underlying systematic segregation. Our response to this is to produce adjustments to the index, based on an underlying statistical model. We specify the assignment problem in a very general way, with differences in conditional assignment probabilities underlying the resulting segregation. From this, we derive a likelihood ratio test for the presence of any systematic segregation, and bias adjustments to the dissimilarity index. We further develop the asymptotic distribution theory for testing hypotheses concerning the magnitude of the segregation index and show that the use of bootstrap methods can improve the size and power properties of test procedures considerably. We illustrate these methods by comparing dissimilarity indices across school districts in England to measure social segregation. PMID:27774035
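
    For reference, a compact Python sketch of the classical index, D = 0.5 * sum_i |a_i/A - b_i/B|, together with Monte Carlo draws under a random-allocation null showing how small unit counts inflate D even without systematic segregation; the paper's bias adjustments and likelihood-ratio test are not reproduced here.

        import numpy as np

        def dissimilarity(minority, majority):
            minority, majority = np.asarray(minority), np.asarray(majority)
            return 0.5 * np.abs(minority / minority.sum()
                                - majority / majority.sum()).sum()

        rng = np.random.default_rng(4)
        minority = rng.poisson(5, size=200)    # small unit counts
        majority = rng.poisson(50, size=200)
        d_hat = dissimilarity(minority, majority)
        null = [dissimilarity(rng.poisson(5, 200), rng.poisson(50, 200))
                for _ in range(500)]
        print(f"D = {d_hat:.3f}; random-allocation null mean = {np.mean(null):.3f}")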

  9. Back in the saddle: large-deviation statistics of the cosmic log-density field

    NASA Astrophysics Data System (ADS)

    Uhlemann, C.; Codis, S.; Pichon, C.; Bernardeau, F.; Reimberg, P.

    2016-08-01

    We present a first-principles approach to obtain analytical predictions for spherically averaged cosmic densities in the mildly non-linear regime that go well beyond what is usually achieved by standard perturbation theory. A large deviation principle allows us to compute the leading order cumulants of average densities in concentric cells. In this symmetry, the spherical collapse model leads to cumulant generating functions that are robust for finite variances and free of critical points when logarithmic density transformations are implemented. They yield in turn accurate density probability distribution functions (PDFs) from a straightforward saddle-point approximation valid for all density values. Based on this easy-to-implement modification, explicit analytic formulas for the evaluation of the one- and two-cell PDF are provided. The theoretical predictions obtained for the PDFs are accurate to a few per cent compared to the numerical integration, regardless of the density under consideration, and in excellent agreement with N-body simulations for a wide range of densities. This formalism should prove valuable for accurately probing the quasi-linear scales of low-redshift surveys for arbitrary primordial power spectra.

  10. Locating waterfowl observations on aerial surveys

    USGS Publications Warehouse

    Butler, W.I.; Hodges, J.I.; Stehn, R.A.

    1995-01-01

    We modified standard aerial survey data collection to obtain the geographic location for each waterfowl observation on surveys in Alaska during 1987-1993. Using transect navigation with GPS (global positioning system), data recording on continuously running tapes, and a computer data input program, we located observations with an average deviation along transects of 214 m. The method provided flexibility in survey design and data analysis. Although developed for geese nesting near the coast of the Yukon-Kuskokwim Delta, the methods are widely applicable and were used on other waterfowl surveys in Alaska to map distribution and relative abundance of waterfowl. Accurate location data with GIS analysis and display may improve the precision and usefulness of data from any aerial transect survey.

  11. [Characteristics of the genetic structure of parasite and host populations by the example of helminths from the moor frog Rana arvalis Nilsson].

    PubMed

    Zhigalev, O N

    2010-01-01

    The genetic structure of populations of four helminth species from the moor frog Rana arvalis, in comparison with the population-genetic structure of the host, has been studied with the gel-electrophoresis method. Compared with the host, the parasites are characterized by a more distinct deviation from the balance of genotypic frequencies and a higher level of interpopulation genetic differences. The genetic variability indices in three of the four frog helminths examined are lower than those in the host. Moreover, these indices are lower than the average indices typical of free-living invertebrates; this fact contradicts the view that these helminths are polyhostal and widely distributed.

  12. Large-visual-angle microstructure inspired from quantitative design of Morpho butterflies' lamellae deviation using the FDTD/PSO method.

    PubMed

    Wang, Wanlin; Zhang, Wang; Chen, Weixin; Gu, Jiajun; Liu, Qinglei; Deng, Tao; Zhang, Di

    2013-01-15

    The wide angular range of the treelike structure in Morpho butterfly scales was investigated by finite-difference time-domain (FDTD)/particle-swarm-optimization (PSO) analysis. Using the FDTD method, different parameters of the Morpho butterflies' treelike structure were studied and their contributions to the angular dependence were analyzed. A wide angular range was then realized by the PSO method through quantitative design of the lamellae deviation (Δy), a parameter crucial to the angular range. A field map of the wide-range reflection over a large area was computed to confirm the wide angular range. The tristimulus values and corresponding color coordinates for various viewing directions were calculated to confirm the blue color at different observation angles. The wide angular range realized by the FDTD/PSO method will assist us in understanding the scientific principles involved and also in designing artificial optical materials.
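
    A generic particle-swarm-optimization loop of the kind used above for tuning the lamellae deviation; the toy objective stands in for the FDTD-derived figure of merit, which in the study measures the angular reflection range as a function of Δy.

        import numpy as np

        def pso(f, lo, hi, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=5):
            rng = np.random.default_rng(seed)
            x = rng.uniform(lo, hi, n)            # 1-D particle positions
            v = np.zeros(n)
            pbest = x.copy()
            pbest_val = np.array([f(xi) for xi in x])
            for _ in range(iters):
                g = pbest[pbest_val.argmin()]     # global best position
                r1, r2 = rng.random(n), rng.random(n)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                vals = np.array([f(xi) for xi in x])
                better = vals < pbest_val
                pbest[better], pbest_val[better] = x[better], vals[better]
            return pbest[pbest_val.argmin()]

        # Toy objective standing in for the (negated) angular range vs. deviation
        print(pso(lambda dy: (dy - 0.35) ** 2, lo=0.0, hi=1.0))  # -> approx 0.35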

  13. Numerical investigation of the effect of net charge injection on the electric field deviation in a TE CO2 laser

    NASA Astrophysics Data System (ADS)

    Jahanianl, Nahid; Aram, Majid; Morshedian, Nader; Mehramiz, Ahmad

    2018-03-01

    In this report, the distribution of and deviation in the electric field were investigated in the active medium of a TE CO2 laser. The variation in the electric field is due to the injection of net electron and proton charges as a plasma generator. The charged-particle beam density is assumed to be Gaussian. The electric potential and electric field distribution were simulated by solving Poisson's equation using the SOR numerical method. The minimum deviation of the electric field obtained was about 2.2% for the electron beam and 6% for the proton beam, at a charged-particle beam density of 10⁶ cm⁻³. This result was obtained for a system geometry ensuring a mean free path of the beam particles of 15 mm. It was also found that the field deviation increases for a mean free path smaller than this value or larger than 25 mm. Moreover, the electric field deviation decreases when the electron beam density exceeds 10⁶ cm⁻³.
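
    A minimal 2-D successive over-relaxation (SOR) solver for Poisson's equation of the general kind used above; the grid size, zero-potential boundary and Gaussian charge density are illustrative assumptions (scaled units):

        import numpy as np

        def sor_poisson(rho, h, omega=1.8, tol=1e-5, max_iter=10_000):
            """Solve nabla^2 phi = -rho on a uniform grid, phi = 0 on the boundary."""
            phi = np.zeros_like(rho)
            for _ in range(max_iter):
                max_delta = 0.0
                for i in range(1, rho.shape[0] - 1):
                    for j in range(1, rho.shape[1] - 1):
                        gs = 0.25 * (phi[i+1, j] + phi[i-1, j] + phi[i, j+1]
                                     + phi[i, j-1] + h * h * rho[i, j])
                        new = (1 - omega) * phi[i, j] + omega * gs
                        max_delta = max(max_delta, abs(new - phi[i, j]))
                        phi[i, j] = new
                if max_delta < tol:
                    break
            return phi

        n, h = 33, 1.0 / 32
        x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
        rho = np.exp(-((x - 0.5)**2 + (y - 0.5)**2) / 0.01)  # Gaussian beam density
        phi = sor_poisson(rho, h)
        Ex, Ey = np.gradient(-phi, h)                        # E = -grad(phi)
        print(float(np.hypot(Ex, Ey).max()))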

  14. New Analysis Scheme of Flow-Acoustic Coupling for Gas Ultrasonic Flowmeter with Vortex near the Transducer.

    PubMed

    Sun, Yanzhao; Zhang, Tao; Zheng, Dandan

    2018-04-10

    Ultrasonic flowmeters with a small or medium diameter are widely used in process industries. The disturbance of acoustic propagation by the flow field, caused by a vortex near the transducer inside the sensor, as well as the mechanism and details of the flow-acoustic interaction, need further study. For that reason, a new hybrid scheme is proposed in which the theories of computational fluid dynamics (CFD), wave acoustics, and ray acoustics are used together in a new step-by-step method. The flow field with a vortex near the transducer, and its influence on sound propagation, reception, and flowmeter performance, are analyzed in depth. It was found that, firstly, the velocity and vortex intensity distributions were asymmetric over the sensor cross-section and acoustic path. Secondly, when passing through the vortex zone, the central ray trajectory was deflected significantly, and the sound pressure on the central line of the sound path also changed. Thirdly, the pressure deviation becomes larger as the flow velocity increases; the deviation was up to 17% for different velocity profiles in a range of 0.6 m/s to 53 m/s. Lastly, in comparison to the theoretical value, the relative deviation of the instrument coefficient for the velocity profile with a vortex near the transducer reached up to -17%. In addition, the rationality of the simulation was proved by experiments.

  15. New Analysis Scheme of Flow-Acoustic Coupling for Gas Ultrasonic Flowmeter with Vortex near the Transducer

    PubMed Central

    Zhang, Tao; Zheng, Dandan

    2018-01-01

    Ultrasonic flowmeters with a small or medium diameter are widely used in process industries. The disturbance of acoustic propagation by the flow field, caused by a vortex near the transducer inside the sensor, as well as the mechanism and details of the flow-acoustic interaction, need further study. For that reason, a new hybrid scheme is proposed in which the theories of computational fluid dynamics (CFD), wave acoustics, and ray acoustics are used together in a new step-by-step method. The flow field with a vortex near the transducer, and its influence on sound propagation, reception, and flowmeter performance, are analyzed in depth. It was found that, firstly, the velocity and vortex intensity distributions were asymmetric over the sensor cross-section and acoustic path. Secondly, when passing through the vortex zone, the central ray trajectory was deflected significantly, and the sound pressure on the central line of the sound path also changed. Thirdly, the pressure deviation becomes larger as the flow velocity increases; the deviation was up to 17% for different velocity profiles in a range of 0.6 m/s to 53 m/s. Lastly, in comparison to the theoretical value, the relative deviation of the instrument coefficient for the velocity profile with a vortex near the transducer reached up to −17%. In addition, the rationality of the simulation was proved by experiments. PMID:29642577

  16. A critical appraisal of the zero-multipole method: Structural, thermodynamic, dielectric, and dynamical properties of a water system.

    PubMed

    Wang, Han; Nakamura, Haruki; Fukuda, Ikuo

    2016-03-21

    We performed extensive and strict tests of the reliability of the zero-multipole (summation) method (ZMM), a method for estimating the electrostatic interactions among charged particles in a classical physical system, by investigating a set of various physical quantities. This set covers a broad range of water properties, including thermodynamic properties (pressure, excess chemical potential, constant volume/pressure heat capacity, isothermal compressibility, and thermal expansion coefficient), dielectric properties (dielectric constant and Kirkwood-G factor), dynamical properties (diffusion constant and viscosity), and the structural property (radial distribution function). We selected a bulk water system, the most important solvent, and applied the widely used TIP3P model in this test. The ZMM was found to work well in almost all cases, compared with a carefully optimized smooth particle mesh Ewald (SPME) method. In particular, at a cut-off radius of 1.2 nm, the recommended choices of ZMM parameters for the TIP3P system are α ≤ 1 nm⁻¹ for the splitting parameter and l = 2 or l = 3 for the order of the multipole moment. We discussed the origin of the deviations of the ZMM and found that they are intimately related to the deviations of the equilibrated densities between the ZMM and SPME, although the magnitude of the density deviations is very small.

  17. Tweedie convergence: a mathematical basis for Taylor's power law, 1/f noise, and multifractality.

    PubMed

    Kendal, Wayne S; Jørgensen, Bent

    2011-12-01

    Plants and animals of a given species tend to cluster within their habitats in accordance with a power function between their mean density and the variance. This relationship, Taylor's power law, has been variously explained by ecologists in terms of animal behavior, interspecies interactions, demographic effects, etc., all without consensus. Taylor's law also manifests within a wide range of other biological and physical processes, sometimes being referred to as fluctuation scaling and attributed to effects of the second law of thermodynamics. 1/f noise refers to power spectra that have an approximately inverse dependence on frequency. Like Taylor's law these spectra manifest from a wide range of biological and physical processes, without general agreement as to cause. One contemporary paradigm for 1/f noise has been based on the physics of self-organized criticality. We show here that Taylor's law (when derived from sequential data using the method of expanding bins) implies 1/f noise, and that both phenomena can be explained by a central limit-like effect that establishes the class of Tweedie exponential dispersion models as foci for this convergence. These Tweedie models are probabilistic models characterized by closure under additive and reproductive convolution as well as under scale transformation, and consequently manifest a variance to mean power function. We provide examples of Taylor's law, 1/f noise, and multifractality within the eigenvalue deviations of the Gaussian unitary and orthogonal ensembles, and show that these deviations conform to the Tweedie compound Poisson distribution. The Tweedie convergence theorem provides a unified mathematical explanation for the origin of Taylor's law and 1/f noise applicable to a wide range of biological, physical, and mathematical processes, as well as to multifractality.
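
    A sketch of the "method of expanding bins" mentioned above: aggregate a sequence into bins of growing size m, compute the mean and variance of the bin sums for each m, and fit the power law variance ~ a * mean^b on log-log axes; Poisson data, for which the exponent is 1, serve as a check.

        import numpy as np

        def taylor_exponent(x, bin_sizes=(1, 2, 4, 8, 16, 32)):
            means, variances = [], []
            for m in bin_sizes:
                usable = (len(x) // m) * m
                bins = x[:usable].reshape(-1, m).sum(axis=1)  # expanding bins
                means.append(bins.mean())
                variances.append(bins.var(ddof=1))
            slope, _ = np.polyfit(np.log(means), np.log(variances), 1)
            return slope

        rng = np.random.default_rng(6)
        print(taylor_exponent(rng.poisson(3.0, 2**16)))  # close to 1 for Poisson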

  18. Size-dependent standard deviation for growth rates: Empirical results and theoretical modeling

    NASA Astrophysics Data System (ADS)

    Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H. Eugene; Grosse, I.

    2008-05-01

    We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation σ(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation σ(R) on the average value of the wages with a scaling exponent β≈0.14 close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation σ(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of σ(R) on the average payroll with a scaling exponent β≈-0.08 . Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable.

  19. Size-dependent standard deviation for growth rates: empirical results and theoretical modeling.

    PubMed

    Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H Eugene; Grosse, I

    2008-05-01

    We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation σ(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation σ(R) on the average value of the wages with a scaling exponent β ≈ 0.14 close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation σ(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of σ(R) on the average payroll with a scaling exponent β ≈ -0.08. Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable.
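
    A minimal log-log regression of the kind used above to estimate the scaling exponent β in σ(R) ~ S^(-β), with S the average size; the synthetic data below are generated with β = 0.14 as a self-check, not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(7)
        sizes = np.logspace(2, 8, 40)                       # average sizes S
        sigma = sizes ** -0.14 * np.exp(rng.normal(0, 0.05, sizes.size))
        slope, _ = np.polyfit(np.log(sizes), np.log(sigma), 1)
        print(f"estimated beta = {-slope:.3f}")             # recovers ~0.14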

  20. Study on probability distribution of prices in electricity market: A case study of zhejiang province, china

    NASA Astrophysics Data System (ADS)

    Zhou, H.; Chen, B.; Han, Z. X.; Zhang, F. Q.

    2009-05-01

    The study of the probability density function and distribution function of electricity prices helps power suppliers and purchasers plan their operations accurately, and helps the regulator monitor periods deviating from the normal distribution. Based on the assumption of normally distributed load and the non-linear characteristic of the aggregate supply curve, this paper derives the distribution of electricity prices as a function of the random load variable. The conclusion has been validated with electricity price data from the Zhejiang market. The results show that electricity prices obey a normal distribution approximately only when the supply-demand relationship is loose, whereas otherwise the prices deviate from the normal distribution and exhibit strong right skewness. Finally, real electricity markets also display a narrow-peak characteristic when undersupply occurs.
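
    A toy change-of-variables illustration of the mechanism described above: when the load L is normal and the aggregate supply curve P = g(L) steepens as supply tightens, the implied price distribution becomes right-skewed. The exponential supply curve is an assumption for illustration, not the fitted Zhejiang curve.

        import numpy as np
        from scipy.stats import skew

        rng = np.random.default_rng(8)
        load = rng.normal(100.0, 10.0, 200_000)    # normally distributed load
        for steepness in (0.01, 0.05, 0.10):       # loose -> tight supply
            price = 30.0 * np.exp(steepness * (load - 100.0))
            print(f"steepness={steepness}: skewness={skew(price):.2f}")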

  1. Preliminary analysis of hot spot factors in an advanced reactor for space electric power systems

    NASA Technical Reports Server (NTRS)

    Lustig, P. H.; Holms, A. G.; Davison, H. W.

    1973-01-01

    The maximum fuel pin temperature for nominal operation in an advanced power reactor is 1370 K. Because of possible nitrogen embrittlement of the clad, the fuel temperature was limited to 1622 K. Assuming simultaneous occurrence of the most adverse conditions, a deterministic analysis gave a maximum fuel temperature of 1610 K. A statistical analysis, using a synthesized estimate of the standard deviation for the highest fuel pin temperature, showed a probability of 0.015 that this pin exceeds the temperature limit according to the distribution-free Chebyshev inequality, and a virtually nil probability assuming a normal distribution. The latter assumption gives a 1463 K maximum temperature at 3 standard deviations, the usually assumed cutoff. Further, the distribution and standard deviation of the fuel-clad gap are the most significant contributors to the uncertainty in the fuel temperature.
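
    For reference, the gap between the two probability statements follows from standard tail bounds; in LaTeX notation:

        % Distribution-free (Chebyshev) bound, valid for any distribution,
        % versus the normal-theory tail probability at three standard deviations:
        P\left(|X-\mu| \ge k\sigma\right) \le \frac{1}{k^{2}},
        \qquad
        P_{\mathcal{N}}\left(|X-\mu| \ge 3\sigma\right) \approx 0.0027 .

    At k = 3 the Chebyshev bound is 1/9 ≈ 0.11, which is why the distribution-free estimate is far more conservative than the normal-distribution one.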

  2. Seismic velocity deviation log: An effective method for evaluating spatial distribution of reservoir pore types

    NASA Astrophysics Data System (ADS)

    Shirmohamadi, Mohamad; Kadkhodaie, Ali; Rahimpour-Bonab, Hossain; Faraji, Mohammad Ali

    2017-04-01

    Velocity deviation log (VDL) is a synthetic log used to determine pore types in reservoir rocks, based on a combination of the sonic log with neutron-density logs. The current study proposes a two-step approach to create a map of porosity and pore types by integrating the results of petrographic studies, well logs and seismic data. In the first step, the velocity deviation log was created from the combination of the sonic log with the neutron-density log. The results allowed identifying negative, zero and positive deviations from the created synthetic velocity log. Negative velocity deviations (below −500 m/s) indicate connected or interconnected pores and fractures, while positive deviations (above +500 m/s) are related to isolated pores. Zero deviations in the range [−500 m/s, +500 m/s] are in good agreement with intercrystalline and microporosities. The results of petrographic studies were used to validate the main pore type derived from the velocity deviation log. In the next step, the velocity deviation log was estimated from seismic data by using a probabilistic neural network model. For this purpose, the inverted acoustic impedance along with amplitude-based seismic attributes were related to the VDL. The methodology is illustrated by a case study from the Hendijan oilfield, northwestern Persian Gulf. The results of this study show that the integration of petrographic studies, well logs and seismic attributes is an instrumental way of understanding the spatial distribution of the main reservoir pore types.
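
    An illustrative Python classifier for the VDL interpretation rules quoted above (thresholds of ±500 m/s); computing the underlying synthetic velocity from the neutron-density porosity is outside this sketch.

        def pore_type_from_vdl(vdl_ms: float) -> str:
            """Map a velocity deviation (m/s) to the paper's pore-type classes."""
            if vdl_ms < -500.0:
                return "connected or interconnected pores and fractures"
            if vdl_ms > 500.0:
                return "isolated pores"
            return "intercrystalline pores and microporosity"

        print(pore_type_from_vdl(-820.0))  # -> connected or interconnected ...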

  3. U.S. Navy Marine Climatic Atlas of the World. Volume IX. World-Wide Means and Standard Deviations

    DTIC Science & Technology

    1981-10-01

    [Extraction residue from the report documentation page and a data figure; the recoverable content indicates that the atlas tabulates world-wide means and standard deviations, with the computed standard deviations giving the best estimate of the population standard deviations, quoted accuracies of a stated tolerance or 10%, whichever is greater, and a mean ice limit approximating the minus-two-degree temperature isopleth used as the analyzed lower limit for wave heights.]

  4. Convex hulls of random walks in higher dimensions: A large-deviation study

    NASA Astrophysics Data System (ADS)

    Schawe, Hendrik; Hartmann, Alexander K.; Majumdar, Satya N.

    2017-12-01

    The distributions of the hypervolume V and surface ∂V of convex hulls of (multiple) random walks in higher dimensions are determined numerically, in particular containing probabilities far smaller than P = 10⁻¹⁰⁰⁰, to estimate large deviation properties. For arbitrary dimensions and large walk lengths T, we suggest a scaling behavior of the distribution with the length of the walk T similar to the two-dimensional case, as well as a form for the behavior of the distributions in the tails. We underpin both with numerical data in d = 3 and d = 4 dimensions. Further, we confirm the analytically known means of those distributions and calculate their variances for large T.
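
    A direct (non-large-deviation) Python sketch of the observable studied above: generate a 3-D random walk and measure the volume and surface area of its convex hull. Probing probabilities as small as 10⁻¹⁰⁰⁰ requires the specialized large-deviation sampling the authors use, not this naive approach.

        import numpy as np
        from scipy.spatial import ConvexHull

        rng = np.random.default_rng(9)
        T = 1_000
        walk = np.cumsum(rng.normal(size=(T, 3)), axis=0)  # Gaussian steps, d = 3
        hull = ConvexHull(walk)
        print(f"V = {hull.volume:.1f}, surface = {hull.area:.1f}")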

  5. The phonatory deviation diagram: a novel objective measurement of vocal function.

    PubMed

    Madazio, Glaucya; Leão, Sylvia; Behlau, Mara

    2011-01-01

    To identify the discriminative characteristics of the phonatory deviation diagram (PDD) in rough, breathy and tense voices. One hundred and ninety-six samples of normal and dysphonic voices from adults were submitted to perceptual auditory evaluation, focusing on the predominant vocal quality and the degree of deviation. Acoustic analysis was performed with the VoxMetria (CTS Informatica). Significant differences were observed between the dysphonic and normal groups (p < 0.001), and also between the breathy and rough samples (p = 0.044) and the breathy and tense samples (p < 0.001). All normal voices were positioned in the inferior left quadrant of the PDD, 45% of the rough voices in the inferior right quadrant, 52.6% of the breathy voices in the superior right quadrant and 54.3% of the tense voices in the inferior left quadrant. The inferior left quadrant contained 93.8% of voices with no deviation and 72.7% of voices with mild deviation; voices with moderate deviation were distributed over the inferior and superior right quadrants, the latter containing the most deviant voices and 80% of voices with severe deviation. The PDD was able to discriminate normal from dysphonic voices, and the distribution was related to the type and degree of voice alteration. Copyright © 2011 S. Karger AG, Basel.

  6. Statistical Tests Black swans or dragon-kings? A simple test for deviations from the power law★

    NASA Astrophysics Data System (ADS)

    Janczura, J.; Weron, R.

    2012-05-01

    We develop a simple test for deviations from power-law tails and, in fact, from the tails of any distribution. We use this test, which is based on the asymptotic properties of the empirical distribution function, to answer the question of whether great natural disasters, financial crashes or electricity price spikes should be classified as dragon-kings or 'only' as black swans.
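
    A hedged Python sketch in the spirit of such a test (a generic empirical-distribution-function construction, not the authors' exact statistic): fit the tail exponent by the Hill estimator, then flag order statistics whose ranks fall outside pointwise binomial confidence bands around the fitted power-law tail.

        import numpy as np
        from scipy.stats import binom

        def tail_outliers(x, k=100, level=0.95):
            x = np.sort(x)[::-1]                        # descending order statistics
            tail, x_k = x[:k], x[k]
            alpha = 1.0 / np.mean(np.log(tail / x_k))   # Hill estimator
            n, flags = len(x), []
            for i, xi in enumerate(tail):
                p = (k / n) * (xi / x_k) ** (-alpha)    # fitted tail probability
                lo, hi = binom.interval(level, n, p)
                flags.append(not (lo <= i + 1 <= hi))   # rank outside the band
            return alpha, np.array(flags)

        rng = np.random.default_rng(10)
        data = rng.pareto(2.5, 50_000) + 1.0            # pure power-law tail
        alpha, flags = tail_outliers(data)
        print(f"alpha = {alpha:.2f}, flagged tail points = {flags.sum()}")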

  7. Distribution Development for STORM Ingestion Input Parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fulton, John

    The Sandia-developed Transport of Radioactive Materials (STORM) code suite is used as part of the Radioisotope Power System Launch Safety (RPSLS) program to perform statistical modeling of the consequences due to release of radioactive material given a launch accident. As part of this modeling, STORM samples input parameters from probability distributions, with some parameters treated as constants. This report describes the work done to convert four of these constant inputs (Consumption Rate, Average Crop Yield, Cropland to Landuse Database Ratio, and Crop Uptake Factor) to sampled values. Consumption Rate changed from a constant value of 557.68 kg/yr to a normal distribution with a mean of 102.96 kg/yr and a standard deviation of 2.65 kg/yr. Average Crop Yield changed from a constant value of 3.783 kg edible/m² to a normal distribution with a mean of 3.23 kg edible/m² and a standard deviation of 0.442 kg edible/m². The Cropland to Landuse Database Ratio changed from a constant value of 0.0996 (9.96%) to a normal distribution with a mean value of 0.0312 (3.12%) and a standard deviation of 0.00292 (0.29%). Finally, the Crop Uptake Factor changed from a constant value of 6.37e-4 (Bq_crop/kg)/(Bq_soil/kg) to a lognormal distribution with a geometric mean value of 3.38e-4 (Bq_crop/kg)/(Bq_soil/kg) and a standard deviation value of 3.33.
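
    A Python sketch of drawing the four converted inputs; parameterizing the lognormal by the reported geometric mean and a geometric standard deviation of 3.33 is one plausible reading of the values above, noted here as an assumption.

        import numpy as np

        rng = np.random.default_rng(11)
        consumption = rng.normal(102.96, 2.65, 10_000)        # kg/yr
        crop_yield = rng.normal(3.23, 0.442, 10_000)          # kg edible/m^2
        landuse_ratio = rng.normal(0.0312, 0.00292, 10_000)   # fraction
        # Assumed geometric-mean / geometric-SD parameterization:
        uptake = rng.lognormal(np.log(3.38e-4), np.log(3.33), 10_000)
        print(consumption.mean(), uptake.mean())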

  8. Statistical analysis of the 70 meter antenna surface distortions

    NASA Technical Reports Server (NTRS)

    Kiedron, K.; Chian, C. T.; Chuang, K. L.

    1987-01-01

    Statistical analysis of surface distortions of the 70 meter NASA/JPL antenna, located at Goldstone, was performed. The purpose of this analysis is to verify whether deviations due to gravity loading can be treated as quasi-random variables with normal distribution. Histograms of the RF pathlength error distribution for several antenna elevation positions were generated. The results indicate that the deviations from the ideal antenna surface are not normally distributed. The observed density distribution for all antenna elevation angles is taller and narrower than the normal density, which results in large positive values of kurtosis and a significant amount of skewness. The skewness of the distribution changes from positive to negative as the antenna elevation changes from zenith to horizon.
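
    The moments named above are directly computable; a minimal Python check with synthetic path-length errors (scipy reports excess kurtosis, so a normal sample gives roughly zero, while a taller-and-narrower Laplace sample gives about 3):

        import numpy as np
        from scipy.stats import skew, kurtosis

        rng = np.random.default_rng(12)
        errors = rng.laplace(0.0, 1.0, 100_000)  # taller/narrower than normal
        print(f"skew = {skew(errors):.2f}, excess kurtosis = {kurtosis(errors):.2f}")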

  9. Technical Note: Spot characteristic stability for proton pencil beam scanning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Chin-Cheng, E-mail: chen.ccc@gmail.com; Chang, Chang; Mah, Dennis

    Purpose: The spot characteristics for proton pencil beam scanning (PBS) were measured and analyzed over a 16 month period, which included one major site configuration update and six cyclotron interventions. The results provide a reference to establish the quality assurance (QA) frequency and tolerance for proton pencil beam scanning. Methods: A simple treatment plan was generated to produce an asymmetric 9-spot pattern distributed throughout a field of 16 × 18 cm for each of 18 proton energies (100.0–226.0 MeV). The delivered fluence distribution in air was measured using a phosphor screen based CCD camera at three planes perpendicular to the beam line axis (x-ray imaging isocenter and up/down stream 15.0 cm). The measured fluence distributions for each energy were analyzed using in-house programs which calculated the spot sizes and positional deviations of the Gaussian shaped spots. Results: Compared to the spot characteristic data installed into the treatment planning system, the 16-month averaged deviations of the measured spot sizes at the isocenter plane were 2.30% and 1.38% in the IEC gantry x and y directions, respectively. The maximum deviation was 12.87% while the minimum deviation was 0.003%, both at the upstream plane. After the collinearity of the proton and x-ray imaging system isocenters was optimized, the positional deviations of the spots were all within 1.5 mm for all three planes. During the site configuration update, spot positions were found to deviate by 6 mm until the tuning parameters file was properly restored. Conclusions: For this beam delivery system, it is recommended to perform a spot size and position check at least monthly and any time after a database update or cyclotron intervention occurs. A spot size deviation tolerance of <15% can be easily met with this delivery system. Deviations of spot positions were <2 mm at any plane up/down stream 15 cm from the isocenter.

  10. Technical Note: Spot characteristic stability for proton pencil beam scanning.

    PubMed

    Chen, Chin-Cheng; Chang, Chang; Moyers, Michael F; Gao, Mingcheng; Mah, Dennis

    2016-02-01

    The spot characteristics for proton pencil beam scanning (PBS) were measured and analyzed over a 16 month period, which included one major site configuration update and six cyclotron interventions. The results provide a reference to establish the quality assurance (QA) frequency and tolerance for proton pencil beam scanning. A simple treatment plan was generated to produce an asymmetric 9-spot pattern distributed throughout a field of 16 × 18 cm for each of 18 proton energies (100.0-226.0 MeV). The delivered fluence distribution in air was measured using a phosphor screen based CCD camera at three planes perpendicular to the beam line axis (x-ray imaging isocenter and up/down stream 15.0 cm). The measured fluence distributions for each energy were analyzed using in-house programs which calculated the spot sizes and positional deviations of the Gaussian shaped spots. Compared to the spot characteristic data installed into the treatment planning system, the 16-month averaged deviations of the measured spot sizes at the isocenter plane were 2.30% and 1.38% in the IEC gantry x and y directions, respectively. The maximum deviation was 12.87% while the minimum deviation was 0.003%, both at the upstream plane. After the collinearity of the proton and x-ray imaging system isocenters was optimized, the positional deviations of the spots were all within 1.5 mm for all three planes. During the site configuration update, spot positions were found to deviate by 6 mm until the tuning parameters file was properly restored. For this beam delivery system, it is recommended to perform a spot size and position check at least monthly and any time after a database update or cyclotron intervention occurs. A spot size deviation tolerance of <15% can be easily met with this delivery system. Deviations of spot positions were <2 mm at any plane up/down stream 15 cm from the isocenter.
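
    A hedged Python sketch of one QA step described above: fit a 2-D Gaussian to a (synthetic) spot image to recover its size and centroid, which can then be compared against commissioning values and the stated tolerances.

        import numpy as np
        from scipy.optimize import curve_fit

        def gauss2d(xy, amp, x0, y0, sx, sy):
            x, y = xy
            return amp * np.exp(-((x - x0)**2 / (2*sx**2) + (y - y0)**2 / (2*sy**2)))

        x, y = np.meshgrid(np.arange(128), np.arange(128))
        rng = np.random.default_rng(13)
        img = gauss2d((x, y), 1.0, 66.0, 61.5, 8.0, 9.0) + rng.normal(0, 0.01, x.shape)

        popt, _ = curve_fit(gauss2d, (x.ravel(), y.ravel()), img.ravel(),
                            p0=(1.0, 64.0, 64.0, 10.0, 10.0))
        amp, x0, y0, sx, sy = popt
        print(f"centroid = ({x0:.2f}, {y0:.2f}), sigma = ({sx:.2f}, {sy:.2f})")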

  11. Structure and coarsening at the surface of a dry three-dimensional aqueous foam.

    PubMed

    Roth, A E; Chen, B G; Durian, D J

    2013-12-01

    We utilize total internal reflection to isolate the two-dimensional surface foam formed at the planar boundary of a three-dimensional sample. The resulting images of surface Plateau borders are consistent with Plateau's laws for a truly two-dimensional foam. Samples are allowed to coarsen into a self-similar scaling state where statistical distributions appear independent of time, except for an overall scale factor. There we find that statistical measures of side number distributions, size-topology correlations, and bubble shapes are all very similar to those for two-dimensional foams. However, the size distribution is slightly broader, and the shapes are slightly more elongated. A more obvious difference is that T2 processes now include the creation of surface bubbles, due to rearrangement in the bulk, and von Neumann's law is dramatically violated for individual bubbles. Nevertheless, our most striking finding is that von Neumann's law appears to hold on average: the average rate of area change for surface bubbles appears to be proportional to the number of sides minus six, but individual bubbles show a wide distribution of deviations from this average behavior.
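
    For reference, von Neumann's law for an ideal two-dimensional foam, which the surface bubbles here obey only on average, reads (in LaTeX notation):

        % Rate of area change of an n-sided bubble; K collects gas permeability
        % and film tension into a single diffusive coefficient.
        \frac{\mathrm{d}A_{n}}{\mathrm{d}t} = K\,(n - 6)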

  12. An Overview of Distributed Microgrid State Estimation and Control for Smart Grids

    PubMed Central

    Rana, Md Masud; Li, Li

    2015-01-01

    Given the significant concerns regarding carbon emissions from fossil fuels, global warming and the energy crisis, renewable distributed energy resources (DERs) are going to be integrated in the smart grid. This grid can spread the intelligence of the energy distribution and control system from the central unit to long-distance remote areas, thus enabling accurate state estimation (SE) and wide-area real-time monitoring of these intermittent energy sources. In contrast to traditional methods of SE, this paper proposes a novel accuracy-dependent Kalman filter (KF) based microgrid SE for the smart grid that uses typical communication systems. The article then proposes a discrete-time linear quadratic regulator to control the state deviations of the microgrid incorporating multiple DERs. Integrating these two approaches with application to the smart grid forms a novel contribution to the green energy and control research communities. Finally, the simulation results show that the proposed KF based microgrid SE and control algorithm provides accurate SE and control compared with the existing method. PMID:25686316
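
    A minimal discrete-time LQR sketch of the control step described above: solve the discrete algebraic Riccati equation for a toy linear state-deviation model x_{k+1} = A x_k + B u_k and apply u = -K x. The matrices are illustrative placeholders, not the paper's DER model.

        import numpy as np
        from scipy.linalg import solve_discrete_are

        A = np.array([[1.0, 0.1], [0.0, 0.95]])
        B = np.array([[0.0], [0.1]])
        Q, R = np.eye(2), np.array([[1.0]])

        P = solve_discrete_are(A, B, Q, R)
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal feedback gain

        x = np.array([1.0, -0.5])                          # initial state deviation
        for _ in range(50):
            x = (A - B @ K) @ x                            # closed-loop update
        print(np.round(x, 4))                              # driven toward zero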

  13. Optimization of a jet-propelled particle injection system for the uniform transdermal delivery of drug/vaccine.

    PubMed

    Liu, Yi; Kendall, Mark A F

    2007-08-01

    A jet-propelled particle injection system, the biolistics, has been developed and employed to accelerate micro-particles for transdermal drug delivery. We have examined a prototype biolistic device employing a converging-diverging supersonic nozzle (CDSN), and found that the micro-particles were delivered with a wide velocity range (200-800 m/s) and spatial distribution. To provide a controllable system for transdermal drug delivery, we present a contoured shock-tube (CST) concept and its embodiment device. The CST configuration utilizes a quasi-steady, quasi-one dimensional and shock-free supersonic flow to deliver the micro-particles with an almost uniform velocity (mean ± standard deviation: 699 ± 4.7 m/s) and spatial distribution. The transient gas and particle dynamics in both prototype devices are interrogated with the validated computational fluid dynamics (CFD) approach. The predicted results for static pressure and Mach number histories, gas flow structures, particle velocity distributions and gas-particle interactions are presented and interpreted. The implications for clinical uses are discussed. (c) 2007 Wiley Periodicals, Inc.

  14. Visualizing excipient composition and homogeneity of Compound Liquorice Tablets by near-infrared chemical imaging

    NASA Astrophysics Data System (ADS)

    Wu, Zhisheng; Tao, Ou; Cheng, Wei; Yu, Lu; Shi, Xinyuan; Qiao, Yanjiang

    2012-02-01

    This study demonstrated that near-infrared chemical imaging (NIR-CI) is a promising technology for visualizing the spatial distribution and homogeneity of Compound Liquorice Tablets. The starch distribution (and, indirectly, that of the plant extract) could be spatially determined using the basic analysis of correlation between analytes (BACRA) method; the correlation coefficients between the starch spectrum and the spectrum of each sample were greater than 0.95. Building on the accurate determination of the starch distribution, a histogram-based method to assess homogeneity of distribution was proposed. The result demonstrated that the starch distribution in sample 3 was relatively heterogeneous according to four statistical parameters. Furthermore, agglomerate domains in each tablet were detected using score image layers from principal component analysis (PCA). Finally, a novel method named Standard Deviation of Macropixel Texture (SDMT) was introduced to detect agglomerates and heterogeneity from a binary image. Each binary image was divided into macropixels of different side lengths, and the number of zero values in each macropixel was counted to calculate a standard deviation; a curve was then fitted to the relationship between the standard deviation and the macropixel side length. The result demonstrated inter-tablet heterogeneity of both the starch and total-compound distributions and, at the same time, the similarity of the starch distribution and the inconsistency of the total-compound distribution within tablets, according to the slope and intercept parameters of the fitted curve.
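
    A Python sketch of the SDMT idea as described above: tile the binary image with macropixels of side length m, count the zero-valued pixels per tile, and take the standard deviation of those counts across tiles; repeating over m yields the curve that is then fitted.

        import numpy as np

        def sdmt(binary_img, m):
            h = (binary_img.shape[0] // m) * m
            w = (binary_img.shape[1] // m) * m
            tiles = binary_img[:h, :w].reshape(h // m, m, w // m, m)
            zero_counts = (tiles == 0).sum(axis=(1, 3))    # zeros per macropixel
            return float(zero_counts.std(ddof=1))

        rng = np.random.default_rng(14)
        img = (rng.random((256, 256)) > 0.3).astype(int)   # toy binary map
        print([round(sdmt(img, m), 2) for m in (4, 8, 16, 32)])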

  15. Stochastic analysis of uncertain thermal parameters for random thermal regime of frozen soil around a single freezing pipe

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Zhou, Guoqing; Wang, Jianzhou; Zhou, Lei

    2018-03-01

    The artificial ground freezing (AGF) method is widely used in civil and mining engineering, and the thermal regime of the frozen soil around the freezing pipe affects the safety of design and construction. The thermal parameters can be truly random due to the heterogeneity of the soil properties, which leads to randomness in the thermal regime of the frozen soil around the freezing pipe. The purpose of this paper is to study the one-dimensional (1D) random thermal regime problem on the basis of a stochastic analysis model and the Monte Carlo (MC) method. Treating the uncertain thermal parameters of frozen soil as random variables, stochastic processes and random fields, the corresponding stochastic thermal regimes of the frozen soil around a single freezing pipe are obtained and analyzed. By taking the variability of each stochastic parameter into account individually, the influence of each stochastic thermal parameter on the stochastic thermal regime is investigated. The results show that the mean temperatures of the frozen soil around the single freezing pipe are the same for the three stochastic representations, while the standard deviations differ. The standard deviation varies strongly with radial position, and the largest standard deviations occur mainly in the phase-change area. The results computed with the random variable and stochastic process methods differ greatly from the measured data, while the results computed with the random field method agree well with the measured data. Each uncertain thermal parameter has a different effect on the standard deviation of the frozen soil temperature around the single freezing pipe. These results can provide a theoretical basis for the design and construction of AGF.

  16. Locus-specific ancestry to detect recent response to selection in admixed Swiss Fleckvieh cattle.

    PubMed

    Khayatzadeh, N; Mészáros, G; Utsunomiya, Y T; Garcia, J F; Schnyder, U; Gredler, B; Curik, I; Sölkner, J

    2016-12-01

    Identification of selection signatures is one of the current endeavors of evolutionary genetics. Admixed populations may be used to infer post-admixture selection. We calculated local ancestry for Swiss Fleckvieh, a composite of Simmental (SI) and Red Holstein Friesian (RHF), to infer such signals. Illumina Bovine SNP50 BeadChip data for 300 admixed, 88 SI and 97 RHF bulls were used. The average RHF ancestry across the whole genome was 0.70. To identify regions with high deviation from this average, we considered two significance thresholds, based on a permutation test and on extreme deviation from the normal distribution. Regions on chromosomes 13 (46.3-47.3 Mb) and 18 (18.7-25.9 Mb) passed both thresholds in the direction of increased SI ancestry. Extended haplotype homozygosity within (iHS) and between (Rsb) populations was calculated to explore additional patterns of pre- and post-admixture selection signals. The Rsb score of the admixed population and SI was significant in a wide region of chromosome 18 (6.6-24.6 Mb) that overlapped with one area of strong local ancestry deviation. FTO, with pleiotropic effects on milk and fertility, NOD2 with effects on dairy traits, and NKD1 and SALL1 with effects on fertility traits are located there. Genetic differentiation of RHF and SI (F_ST), an alternative indicator of pre-admixture selection in pure populations, was calculated. No considerable overlap of peaks of local ancestry deviations and F_ST was observed. We found two regions with significant signatures of post-admixture selection in this very young composite, applying comparatively stringent significance thresholds. The signals cover relatively large genomic areas and did not allow pinpointing of the gene(s) responsible for the apparent shift in ancestry proportions. © 2016 Stichting International Foundation for Animal Genetics.

  17. Diagnostics for insufficiencies of posterior calculations in Bayesian signal inference.

    PubMed

    Dorn, Sebastian; Oppermann, Niels; Ensslin, Torsten A

    2013-11-01

    We present an error-diagnostic validation method for posterior distributions in Bayesian signal inference, an advancement of a previous work. It transfers deviations from the correct posterior into characteristic deviations from a uniform distribution of a quantity constructed for this purpose. We show that this method is able to reveal and discriminate several kinds of numerical and approximation errors, as well as their impact on the posterior distribution. For this we present four typical analytical examples of posteriors with incorrect variance, skewness, position of the maximum, or normalization. We show further how this test can be applied to multidimensional signals.
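
    A Python sketch of the general construction described above: push draws of the signal through the (approximate) posterior CDF and test the resulting quantity for uniformity; characteristic departures from uniformity flag incorrect variance, skewness, position or normalization. The Gaussian example with a deliberately underestimated variance is illustrative.

        import numpy as np
        from scipy.stats import norm, kstest

        rng = np.random.default_rng(15)
        truth = rng.normal(0.0, 1.0, 5_000)        # "true" signal draws
        u = norm.cdf(truth, loc=0.0, scale=0.8)    # wrong posterior: scale too small
        print(kstest(u, "uniform"))                # strong rejection of uniformity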

  18. Design on the wide band absorber with low density based on the particle distribution

    NASA Astrophysics Data System (ADS)

    Zheng, Dianliang; Liu, Ting; Liu, Longbin; Xu, Yonggang

    2018-04-01

    In order to widen the absorbing band, an equivalent gradient-structure absorber was designed based on the particle distribution. Firstly, the electromagnetic parameters of the absorbent with uniform dispersion were measured using a vector network analyzer over 8-18 GHz. Three equivalent materials with spherical, square and hexagonal void shapes were designed. The scattering parameters and the monostatic reflection loss (RL) of the periodic structural materials were simulated in commercial software. The effective permittivity and permeability were then derived by the Nicolson-Ross-Weir algorithm and fitted by the Maxwell-Garnett mixing rule. The results showed that the simulated reflection and transmission parameters of the equivalent composites with the different shapes were very close. The derived effective permittivity and permeability of the composites with different absorbent contents were also close, with average deviations of about 0.52 + j0.15 and 0.15 + j0.01, respectively. Finally, the wide band absorbing material was designed using a genetic algorithm. The optimized RL result showed that the absorbing composite with a thickness of 3 mm had excellent absorbing properties (RL < -10 dB) over 8-18 GHz, while the equivalent absorber density could be decreased by 30.7% compared with the uniform structure.

  19. The power grid AGC frequency bias coefficient online identification method based on wide area information

    NASA Astrophysics Data System (ADS)

    Wang, Zian; Li, Shiguang; Yu, Ting

    2015-12-01

    This paper proposes an online identification method for the regional frequency deviation coefficient, based on an analysis of the AGC adjustment response mechanism of interconnected grids and on the real-time operating states of generators measured through PMUs. It analyzes how to optimize the regional frequency deviation coefficient for the actual operating state of the power system, so as to achieve more accurate and efficient automatic generation control. The validity of the online identification method is verified by establishing a long-term frequency control simulation model of a two-area interconnected power system.

  20. Standard deviation of luminance distribution affects lightness and pupillary response.

    PubMed

    Kanari, Kei; Kaneko, Hirohiko

    2014-12-01

    We examined whether the standard deviation (SD) of luminance distribution serves as information of illumination. We measured the lightness of a patch presented in the center of a scrambled-dot pattern while manipulating the SD of the luminance distribution. Results showed that lightness decreased as the SD of the surround stimulus increased. We also measured pupil diameter while viewing a similar stimulus. The pupil diameter decreased as the SD of luminance distribution of the stimuli increased. We confirmed that these results were not obtained because of the increase of the highest luminance in the stimulus. Furthermore, results of field measurements revealed a correlation between the SD of luminance distribution and illuminance in natural scenes. These results indicated that the visual system refers to the SD of the luminance distribution in the visual stimulus to estimate the scene illumination.

  1. Recruitment of local inhibitory networks by horizontal connections in layer 2/3 of ferret visual cortex.

    PubMed

    Tucker, Thomas R; Katz, Lawrence C

    2003-01-01

    To investigate how neurons in cortical layer 2/3 integrate horizontal inputs arising from widely distributed sites, we combined intracellular recording and voltage-sensitive dye imaging to visualize the spatiotemporal dynamics of neuronal activity evoked by electrical stimulation of multiple sites in visual cortex. Individual stimuli evoked characteristic patterns of optical activity, while delivering stimuli at multiple sites generated interacting patterns in the regions of overlap. We observed that neurons in overlapping regions received convergent horizontal activation that generated nonlinear responses due to the emergence of large inhibitory potentials. The results indicate that co-activation of multiple sets of horizontal connections recruit strong inhibition from local inhibitory networks, causing marked deviations from simple linear integration.

  2. Statistical mechanics of two-dimensional shuffled foams: prediction of the correlation between geometry and topology.

    PubMed

    Durand, Marc; Käfer, Jos; Quilliet, Catherine; Cox, Simon; Talebi, Shirin Ataei; Graner, François

    2011-10-14

    We propose an analytical model for the statistical mechanics of shuffled two-dimensional foams with moderate bubble size polydispersity. It predicts without any adjustable parameters the correlations between the number of sides n of the bubbles (topology) and their areas A (geometry) observed in experiments and numerical simulations of shuffled foams. Detailed statistics show that in shuffled cellular patterns n correlates better with √A (as claimed by Desch and Feltham) than with A (as claimed by Lewis and widely assumed in the literature). At the level of the whole foam, standard deviations Δn and ΔA are in proportion. Possible applications include correlations of the detailed distributions of n and A, three-dimensional foams, and biological tissues.

  3. Challenges in Optical Emission Spectroscopy

    NASA Astrophysics Data System (ADS)

    Siepa, Sarah; Berger, Birk; Schulze, Julian; Schuengel, Edmund; von Keudell, Achim

    2016-09-01

    Collisional-radiative models (CRMs) are widely used to investigate plasma properties such as electron density, electron temperature and the form of the electron energy distribution function. In this work an extensive CRM for argon is presented, which models 30 excited states and various kinds of processes including electron impact excitation/de-excitation, radiation and radiation trapping. The CRM is evaluated in several test cases, i.e. inductively and capacitively coupled plasmas at various pressures, powers/voltages and gas admixtures. Deviations are found between modelled and measured spectra. The escape factor as a means of describing radiation trapping is discussed as well as the cross section data for electron impact processes. This work was supported by the Ruhr University Research School PLUS, funded by Germany's Excellence Initiative [DFG GSC 98/3].

  4. Neutron Compton scattering from selectively deuterated acetanilide

    NASA Astrophysics Data System (ADS)

    Wanderlingh, U. N.; Fielding, A. L.; Middendorf, H. D.

    With the aim of developing the application of neutron Compton scattering (NCS) to molecular systems of biophysical interest, we are using the Compton spectrometer EVS at ISIS to characterize the momentum distribution of protons in peptide groups. In this contribution we present NCS measurements of the recoil peak (Compton profile) due to the amide proton in otherwise fully deuterated acetanilide (ACN), a widely studied model system for H-bonding and energy transfer in biomolecules. We obtain values for the average width of the potential well of the amide proton and its mean kinetic energy. Deviations from the Gaussian form of the Compton profile, analyzed on the basis of an expansion due to Sears, provide data relating to the Laplacian of the proton potential.

  5. Analysis of geomagnetic hourly ranges

    NASA Astrophysics Data System (ADS)

    Danskin, D. W.; Lotz, S. I.

    2015-08-01

    In an attempt to develop better forecasts of geomagnetic activity, hourly ranges of geomagnetic data are analyzed with a focus on how the data are distributed. A lognormal distribution is found to be able to characterize the magnetic data for all observatories up to moderate disturbances with each distribution controlled by the mean of the logarithm of the hourly range. In the subauroral zone, the distribution deviates from the lognormal, which is interpreted as motion of the auroral electrojet toward the equator. For most observatories, a substantial deviation from the lognormal distribution was noted at the higher values and is best modeled with a power law extrapolation, which gives estimates of the extreme values that may occur at observatories which contribute to the disturbance storm time (Dst) index and in Canada.
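
    A hedged sketch of the two-part description given above: a lognormal fit to the bulk of the hourly ranges plus a power-law extrapolation of the extreme tail (synthetic data; the 99th-percentile threshold and the Hill tail estimator are illustrative choices, not the authors' exact procedure):

        import numpy as np

        rng = np.random.default_rng(2)
        hourly_range = rng.lognormal(mean=2.5, sigma=0.8, size=20000)  # nT, synthetic

        # Lognormal body: parameters are the mean and SD of log(range)
        mu, sigma = np.log(hourly_range).mean(), np.log(hourly_range).std()

        # Power-law tail above a high threshold (Hill estimator)
        threshold = np.quantile(hourly_range, 0.99)
        tail = hourly_range[hourly_range > threshold]
        alpha = 1.0 + tail.size / np.sum(np.log(tail / threshold))

        print(f"body: mu={mu:.2f}, sigma={sigma:.2f}; tail exponent ~ {alpha:.2f}")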

  6. Palus Somni - Anomalies in the correlation of Al/Si X-ray fluorescence intensity ratios and broad-spectrum visible albedos. [lunar surface mineralogy

    NASA Technical Reports Server (NTRS)

    Clark, P. E.; Andre, C. G.; Adler, I.; Weidner, J.; Podwysocki, M.

    1976-01-01

    The positive correlation between Al/Si X-ray fluorescence intensity ratios determined during the Apollo 15 lunar mission and a broad-spectrum visible albedo of the moon is quantitatively established. Linear regression analysis performed on 246 one-degree geographic cells of X-ray fluorescence intensity and visible albedo data points produced a statistically significant correlation coefficient of 0.78. Three distinct distributions of data were identified as (1) within one standard deviation of the regression line, (2) greater than one standard deviation below the line, and (3) greater than one standard deviation above the line. The latter two distributions of data were found to occupy distinct geographic areas in the Palus Somni region.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wulff, J; Huggins, A

    Purpose: The shape of a single beam in proton PBS influences the resulting dose distribution. Spot profiles are modelled as two-dimensional Gaussian (single/double) distributions in treatment planning systems (TPS). The impact of slight deviations from an ideal Gaussian on resulting dose distributions is typically assumed to be small, due to alleviation by multiple Coulomb scattering (MCS) in tissue and the superposition of many spots. Quantitative limits are, however, not clear per se. Methods: A set of 1250 deliberately deformed profiles with sigma = 4 mm for a Gaussian fit were constructed. Profiles and fit were normalized to the same area, resembling output calibration in the TPS. Depth-dependent MCS was considered. The deviation between deformed and ideal profiles was characterized by root-mean-squared deviation (RMSD), skewness/kurtosis (SK) and full-width at different percentages of maximum (FWxM). The profiles were convolved with different fluence patterns (regular/random), resulting in hypothetical dose distributions. The resulting deviations were analyzed by applying a gamma-test, and results were compared to measured spot profiles. Results: A clear correlation between pass-rate and profile metrics could be determined. The largest impact occurred for a regular fluence pattern with increasing distance between single spots, followed by a random distribution of spot weights. The results are strongly dependent on the gamma-analysis dose and distance levels. Pass-rates of >95% at 2%/2 mm and 40 mm depth (=70 MeV) could only be achieved for RMSD < 10%, deviation in FWxM at 20%, and a root of the quadratic sum of SK < 0.8. As expected, the results improve for larger depths. The trends were well reproduced by measured spot profiles. Conclusion: All measured profiles from ProBeam sites passed the criteria. Given that beam-line tuning can result in shape distortions, the derived criteria represent a useful QA tool for commissioning and design of future beam-line optics.
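
    A minimal sketch of one of the metrics above: the RMSD between an area-normalized spot profile and its Gaussian fit (a 1-D toy profile; the 4 mm sigma follows the abstract, while the deformation and normalization details are assumptions):

        import numpy as np
        from scipy.optimize import curve_fit

        def gauss(x, amp, mu, sigma):
            return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

        x = np.linspace(-20.0, 20.0, 401)                                 # mm
        spot = gauss(x, 1.0, 0.0, 4.0) + 0.02 * gauss(x, 1.0, 6.0, 3.0)   # deformed spot
        spot /= np.trapz(spot, x)                                         # same-area normalization

        popt, _ = curve_fit(gauss, x, spot, p0=[spot.max(), 0.0, 4.0])
        fit = gauss(x, *popt)
        fit /= np.trapz(fit, x)

        rmsd = np.sqrt(np.mean((spot - fit) ** 2)) / spot.max()
        print(f"relative RMSD = {100 * rmsd:.2f}% of peak")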

  8. Photometric Selection of a Massive Galaxy Catalog with z ≥ 0.55

    NASA Astrophysics Data System (ADS)

    Núñez, Carolina; Spergel, David N.; Ho, Shirley

    2017-02-01

    We present the development of a photometrically selected massive galaxy catalog, targeting Luminous Red Galaxies (LRGs) and massive blue galaxies at redshifts of z ≥ 0.55. Massive galaxy candidates are selected using infrared/optical color-color cuts, with optical data from the Sloan Digital Sky Survey (SDSS) and infrared data from "unWISE" forced photometry derived from the Wide-field Infrared Survey Explorer (WISE). The selection method is based on previously developed techniques to select LRGs with z > 0.5, and is optimized using receiver operating characteristic curves. The catalog contains 16,191,145 objects, selected over the full SDSS DR10 footprint. The redshift distribution of the resulting catalog is estimated using spectroscopic redshifts from the DEEP2 Galaxy Redshift Survey and photometric redshifts from COSMOS. Rest-frame U - B colors from DEEP2 are used to estimate LRG selection efficiency. Using DEEP2, the resulting catalog has an average redshift of z = 0.65, with a standard deviation of σ = 2.0, and an average rest-frame color of U - B = 1.0, with a standard deviation of σ = 0.27. Using COSMOS, the resulting catalog has an average redshift of z = 0.60, with a standard deviation of σ = 1.8. We estimate 34% of the catalog to be blue galaxies with z ≥ 0.55. An estimated 9.6% of selected objects are blue sources with redshift z < 0.55. Stellar contamination is estimated to be 1.8%.

  9. Quantum key distribution with an efficient countermeasure against correlated intensity fluctuations in optical pulses

    NASA Astrophysics Data System (ADS)

    Yoshino, Ken-ichiro; Fujiwara, Mikio; Nakata, Kensuke; Sumiya, Tatsuya; Sasaki, Toshihiko; Takeoka, Masahiro; Sasaki, Masahide; Tajima, Akio; Koashi, Masato; Tomita, Akihisa

    2018-03-01

    Quantum key distribution (QKD) allows two distant parties to share secret keys with proven security even in the presence of an eavesdropper with unbounded computational power. Recently, GHz-clock decoy QKD systems have been realized by employing ultrafast optical communication devices. However, security loopholes of high-speed systems have not been fully explored yet. Here we point out a security loophole at the transmitter of GHz-clock QKD, which is a common problem in high-speed QKD systems using practical bandwidth-limited devices. We experimentally observe the inter-pulse intensity correlation and modulation pattern-dependent intensity deviation in a practical high-speed QKD system. Such correlation violates the assumption of most security theories. We also provide a countermeasure which does not require significant changes of hardware and can generate keys secure over 100 km of fiber transmission. Our countermeasure is simple, effective and applicable to a wide range of high-speed QKD systems, and thus paves the way to realize ultrafast and security-certified commercial QKD systems.

  10. Three Temperature Regimes in Superconducting Photon Detectors: Quantum, Thermal and Multiple Phase-Slips as Generators of Dark Counts

    PubMed Central

    Murphy, Andrew; Semenov, Alexander; Korneev, Alexander; Korneeva, Yulia; Gol’tsman, Gregory; Bezryadin, Alexey

    2015-01-01

    We perform measurements of the switching current distributions of three w ≈ 120 nm wide, 4 nm thick NbN superconducting strips which are used for single-photon detectors. These strips are much wider than the diameter of the vortex cores, so they are classified as quasi-two-dimensional (quasi-2D). We discover evidence of macroscopic quantum tunneling by observing the saturation of the standard deviation of the switching distributions at temperatures around 2 K. We analyze our results using the Kurkijärvi-Garg model and find that the escape temperature also saturates at low temperatures, confirming that at sufficiently low temperatures, macroscopic quantum tunneling is possible in quasi-2D strips and can contribute to dark counts observed in single photon detectors. At the highest temperatures the system enters a multiple phase-slip regime. In this range single phase-slips are unable to produce dark counts and the fluctuations in the switching current are reduced. PMID:25988591

  11. Numerical and experimental research on pentagonal cross-section of the averaging Pitot tube

    NASA Astrophysics Data System (ADS)

    Zhang, Jili; Li, Wei; Liang, Ruobing; Zhao, Tianyi; Liu, Yacheng; Liu, Mingsheng

    2017-07-01

    Averaging Pitot tubes have been widely used in many fields because of their simple structure and stable performance. This paper introduces a new shape of the cross-section of an averaging Pitot tube. Firstly, the structure of the averaging Pitot tube and the distribution of pressure taps are given. Then, a mathematical model of the airflow around it is formulated. After that, a series of numerical simulations are carried out to optimize the geometry of the tube. The distribution of the streamline and pressures around the tube are given. To test its performance, a test platform was constructed in accordance with the relevant national standards and is described in this paper. Curves are provided, linking the values of flow coefficient with the values of Reynolds number. With a maximum deviation of only  ±3%, the results of the flow coefficient obtained from the numerical simulations were in agreement with those obtained from experimental methods. The proposed tube has a stable flow coefficient and favorable metrological characteristics.

  12. Three temperature regimes in superconducting photon detectors: quantum, thermal and multiple phase-slips as generators of dark counts.

    PubMed

    Murphy, Andrew; Semenov, Alexander; Korneev, Alexander; Korneeva, Yulia; Gol'tsman, Gregory; Bezryadin, Alexey

    2015-05-19

    We perform measurements of the switching current distributions of three w ≈ 120 nm wide, 4 nm thick NbN superconducting strips which are used for single-photon detectors. These strips are much wider than the diameter of the vortex cores, so they are classified as quasi-two-dimensional (quasi-2D). We discover evidence of macroscopic quantum tunneling by observing the saturation of the standard deviation of the switching distributions at temperatures around 2 K. We analyze our results using the Kurkijärvi-Garg model and find that the escape temperature also saturates at low temperatures, confirming that at sufficiently low temperatures, macroscopic quantum tunneling is possible in quasi-2D strips and can contribute to dark counts observed in single photon detectors. At the highest temperatures the system enters a multiple phase-slip regime. In this range single phase-slips are unable to produce dark counts and the fluctuations in the switching current are reduced.

  13. Pinocchio testing in the forensic analysis of waiting lists: using public waiting list data from Finland and Spain for testing Newcomb-Benford’s Law

    PubMed Central

    López-Valcárcel, Beatriz G; González-Martel, Christian; Peiro, Salvador

    2018-01-01

    Objective: Newcomb-Benford's Law (NBL) proposes a regular distribution for first digits, second digits and digit combinations applicable to many different naturally occurring sources of data. Testing deviations from NBL is used in many datasets as a screening tool for identifying data trustworthiness problems. This study aims to compare publicly available waiting list (WL) data from Finland and Spain for testing NBL as an instrument to flag up potential manipulation in WLs. Design: Analysis of the frequency of Finnish and Spanish WLs' first digits to determine if their distribution is similar to the pattern documented by NBL. Deviations from the expected first digit frequency were analysed using Pearson's χ², mean absolute deviation and Kuiper tests. Setting/participants: Publicly available WL data from Finland and Spain, two countries with universal health insurance and National Health Systems but characterised by different levels of transparency and good governance standards. Main outcome measures: Adjustment of the observed distribution of the numbers reported in Finnish and Spanish WL data to the expected distribution according to NBL. Results: WL data reported by the Finnish health system fits first digit NBL according to all statistical tests used (p=0.6519 in χ² test). For Spanish data, this hypothesis was rejected in all tests (p<0.0001 in χ² test). Conclusions: Testing deviations from NBL distribution can be a useful tool for identifying problems with WL data trustworthiness and signalling the need for further testing. PMID:29743333
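
    The first-digit screening itself takes only a few lines. A sketch (the waiting-list counts here are synthetic; any vector of positive values can be substituted):

        import numpy as np
        from scipy.stats import chisquare

        def first_digits(x):
            return (x / 10 ** np.floor(np.log10(x))).astype(int)

        counts = np.random.default_rng(3).lognormal(5.0, 1.2, 5000)  # synthetic WL data
        observed = np.bincount(first_digits(counts), minlength=10)[1:10]

        benford = np.log10(1.0 + 1.0 / np.arange(1, 10))   # NBL first-digit frequencies
        chi2, p = chisquare(observed, f_exp=benford * observed.sum())
        mad = np.mean(np.abs(observed / observed.sum() - benford))
        print(f"chi2 = {chi2:.1f}, p = {p:.4f}, MAD = {mad:.4f}")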

  14. Antipodal hotspot pairs on the earth

    NASA Technical Reports Server (NTRS)

    Rampino, Michael R.; Caldeira, Ken

    1992-01-01

    The results of statistical analyses performed on three published hotspot distributions suggest that significantly more hotspots occur as nearly antipodal pairs than is anticipated from a random distribution, or from their association with geoid highs and divergent plate margins. The observed number of antipodal hotspot pairs depends on the maximum allowable deviation from exact antipodality. At a maximum deviation of not greater than 700 km, 26 to 37 percent of hotspots form antipodal pairs in the published lists examined here, significantly more than would be expected from the general hotspot distribution. Two possible mechanisms that might create such a distribution include: (1) symmetry in the generation of mantle plumes, and (2) melting related to antipodal focusing of seismic energy from large-body impacts.

  15. Radio to Gamma-Ray Emission from Shell-Type Supernova Remnants: Predictions from Non-Linear Shock Acceleration Models

    NASA Technical Reports Server (NTRS)

    Baring, Matthew G.; Ellison, Donald C.; Reynolds, Stephen P.; Grenier, Isabelle A.; Goret, Philippe

    1998-01-01

    Supernova remnants (SNRs) are widely believed to be the principal source of galactic cosmic rays, produced by diffusive shock acceleration in the environs of the remnant's expanding blast wave. Such energetic particles can produce gamma-rays and lower energy photons via interactions with the ambient plasma. The recently reported observation of TeV gamma-rays from SN1006 by the CANGAROO Collaboration, combined with the fact that several unidentified EGRET sources have been associated with known radio/optical/X-ray-emitting remnants, provides powerful motivation for studying gamma-ray emission from SNRs. In this paper, we present results from a Monte Carlo simulation of non-linear shock structure and acceleration coupled with photon emission in shell-like SNRs. These non-linearities are a by-product of the dynamical influence of the accelerated cosmic rays on the shocked plasma and result in distributions of cosmic rays which deviate from pure power-laws. Such deviations are crucial to acceleration efficiency considerations and impact photon intensities and spectral shapes at all energies, producing GeV/TeV intensity ratios that are quite different from test particle predictions.

  16. Skewness and kurtosis analysis for non-Gaussian distributions

    NASA Astrophysics Data System (ADS)

    Celikoglu, Ahmet; Tirnakli, Ugur

    2018-06-01

    In this paper we address a number of pitfalls regarding the use of kurtosis as a measure of deviations from the Gaussian. We treat kurtosis in both its standard definition and that which arises in q-statistics, namely q-kurtosis. We have recently shown that the relation proposed by Cristelli et al. (2012) between skewness and kurtosis can only be verified for relatively small data sets, independently of the type of statistics chosen; however it fails for sufficiently large data sets, if the fourth moment of the distribution is finite. For infinite fourth moments, kurtosis is not defined as the size of the data set tends to infinity. For distributions with finite fourth moments, the size, N, of the data set for which the standard kurtosis saturates to a fixed value, depends on the deviation of the original distribution from the Gaussian. Nevertheless, using kurtosis as a criterion for deciding which distribution deviates further from the Gaussian can be misleading for small data sets, even for finite fourth moment distributions. Going over to q-statistics, we find that although the value of q-kurtosis is finite in the range of 0 < q < 3, this quantity is not useful for comparing different non-Gaussian distributed data sets, unless the appropriate q value, which truly characterizes the data set of interest, is chosen. Finally, we propose a method to determine the correct q value and thereby to compute the q-kurtosis of q-Gaussian distributed data sets.
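
    The slow convergence of sample kurtosis with data-set size, for the standard definition only, is easy to demonstrate. A sketch using Student-t data, whose fourth moment is finite for ν > 4 (the choice ν = 5, with asymptotic excess kurtosis 6, is an illustrative assumption):

        import numpy as np
        from scipy.stats import kurtosis, t

        rng = np.random.default_rng(4)
        for n in (10**2, 10**3, 10**4, 10**5, 10**6):
            sample = t.rvs(df=5, size=n, random_state=rng)
            print(f"N = {n:>7}: excess kurtosis = {kurtosis(sample):6.2f} (limit 6.0)")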

  17. Offshore fatigue design turbulence

    NASA Astrophysics Data System (ADS)

    Larsen, Gunner C.

    2001-07-01

    Fatigue damage on wind turbines is mainly caused by stochastic loading originating from turbulence. While onshore sites display large differences in terrain topology, and thereby also in turbulence conditions, offshore sites are far more homogeneous, as the majority of them are likely to be associated with shallow water areas. However, despite this fact, specific recommendations on offshore turbulence intensities, applicable for fatigue design purposes, are lacking in the present IEC code. This article presents specific guidelines for such loading. These guidelines are based on the statistical analysis of a large number of wind data originating from two Danish shallow water offshore sites. The turbulence standard deviation depends on the mean wind speed, upstream conditions, measuring height and thermal convection. Defining a population of turbulence standard deviations, at a given measuring position, uniquely by the mean wind speed, variations in upstream conditions and atmospheric stability will appear as variability of the turbulence standard deviation. Distributions of such turbulence standard deviations, conditioned on the mean wind speed, are quantified by fitting the measured data to logarithmic Gaussian distributions. By combining a simple heuristic load model with the parametrized conditional probability density functions of the turbulence standard deviations, an empirical offshore design turbulence intensity is determined. For pure stochastic loading (as associated with standstill situations), the design turbulence intensity yields a fatigue damage equal to the average fatigue damage caused by the distributed turbulence intensity. If the stochastic loading is combined with a periodic deterministic loading (as in the normal operating situation), the proposed design turbulence intensity is shown to be conservative.

  18. Separating acoustic deviance from novelty during the first year of life: a review of event-related potential evidence

    PubMed Central

    Kushnerenko, Elena V.; Van den Bergh, Bea R. H.; Winkler, István

    2013-01-01

    Orienting to salient events in the environment is a first step in the development of attention in young infants. Electrophysiological studies have indicated that in newborns and young infants, sounds with widely distributed spectral energy, such as noise and various environmental sounds, as well as sounds widely deviating from their context elicit an event-related potential (ERP) similar to the adult P3a response. We discuss how the maturation of event-related potentials parallels the process of the development of passive auditory attention during the first year of life. Behavioral studies have indicated that the neonatal orientation to high-energy stimuli gradually changes to attending to genuine novelty and other significant events by approximately 9 months of age. In accordance with these changes, in newborns, the ERP response to large acoustic deviance is dramatically larger than that to small and moderate deviations. This ERP difference, however, rapidly decreases within first months of life and the differentiation of the ERP response to genuine novelty from that to spectrally rich but repeatedly presented sounds commences during the same period. The relative decrease of the response amplitudes elicited by high-energy stimuli may reflect development of an inhibitory brain network suppressing the processing of uninformative stimuli. Based on data obtained from healthy full-term and pre-term infants as well as from infants at risk for various developmental problems, we suggest that the electrophysiological indices of the processing of acoustic and contextual deviance may be indicative of the functioning of auditory attention, a crucial prerequisite of learning and language development. PMID:24046757

  19. BCR CDR3 length distributions differ between blood and spleen and between old and young patients, and TCR distributions can be used to detect myelodysplastic syndrome

    NASA Astrophysics Data System (ADS)

    Pickman, Yishai; Dunn-Walters, Deborah; Mehr, Ramit

    2013-10-01

    Complementarity-determining region 3 (CDR3) is the most hyper-variable region in B cell receptor (BCR) and T cell receptor (TCR) genes, and the most critical structure in antigen recognition and thereby in determining the fates of developing and responding lymphocytes. There are millions of different TCR Vβ chain or BCR heavy chain CDR3 sequences in human blood. Even now, when high-throughput sequencing is becoming widely used, CDR3 length distributions (also called spectratypes) are still a much quicker and cheaper method of assessing repertoire diversity. However, the complexity of the distributions and the large amount of information per sample (e.g. 32 distributions of the TCRα chain, and 24 of TCRβ) call for the use of machine learning tools for full exploration. We have examined the ability of supervised machine learning, which uses computational models to find hidden patterns in predefined biological groups, to analyze CDR3 length distributions from various sources, and distinguish between experimental groups. We found that (a) splenic BCR CDR3 length distributions are characterized by low standard deviations and few local maxima, compared to peripheral blood distributions; (b) healthy elderly people's BCR CDR3 length distributions can be distinguished from those of the young; and (c) a machine learning model based on TCR CDR3 distribution features can detect myelodysplastic syndrome with approximately 93% accuracy. Overall, we demonstrate that using supervised machine learning methods can contribute to our understanding of lymphocyte repertoire diversity.

  20. Quantum shot noise in tunnel junctions

    NASA Technical Reports Server (NTRS)

    Ben-Jacob, E.; Mottola, E.; Schoen, G.

    1983-01-01

    The current and voltage fluctuations in a normal tunnel junction are calculated from microscopic theory. The power spectrum can deviate from the familiar Johnson-Nyquist form when the self-capacitance of the junction is small, at low temperatures permitting experimental verification. The deviation reflects the discrete nature of the charge transfer across the junction and should be present in a wide class of similar systems.

  1. Analytical probabilistic proton dose calculation and range uncertainties

    NASA Astrophysics Data System (ADS)

    Bangert, M.; Hennig, P.; Oelfke, U.

    2014-03-01

    We introduce the concept of analytical probabilistic modeling (APM) to calculate the mean and the standard deviation of intensity-modulated proton dose distributions under the influence of range uncertainties in closed form. For APM, range uncertainties are modeled with a multivariate Normal distribution p(z) over the radiological depths z. A pencil beam algorithm that parameterizes the proton depth dose d(z) with a weighted superposition of ten Gaussians is used. Hence, the integrals ∫ dz p(z) d(z) and ∫ dz p(z) d(z)² required for the calculation of the expected value and standard deviation of the dose remain analytically tractable and can be efficiently evaluated. The means μk, widths δk, and weights ωk of the Gaussian components parameterizing the depth dose curves are found with least squares fits for all available proton ranges. We observe less than 0.3% average deviation of the Gaussian parameterizations from the original proton depth dose curves. Consequently, APM yields high accuracy estimates for the expected value and standard deviation of intensity-modulated proton dose distributions for two dimensional test cases. APM can accommodate arbitrary correlation models and account for the different nature of random and systematic errors in fractionated radiation therapy. Beneficial applications of APM in robust planning are feasible.
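
    The analytic tractability comes from the standard identity that a Gaussian depth-dose component averaged over a Normal range error is again Gaussian: for a component ω·N(z; μ, δ²) and a range uncertainty N(z; z₀, σ²), the expected dose is ω·N(z₀; μ, σ² + δ²). A sketch verifying this against quadrature (all numbers are illustrative, not fitted beam data):

        import numpy as np
        from scipy.stats import norm
        from scipy.integrate import quad

        omega, mu_k, delta_k = 1.3, 150.0, 5.0   # one Gaussian component (illustrative)
        z0, sigma_r = 149.0, 3.0                 # nominal depth and range uncertainty, mm

        # Closed form: omega * N(z0; mu_k, sigma_r^2 + delta_k^2)
        closed = omega * norm.pdf(z0, loc=mu_k, scale=np.hypot(sigma_r, delta_k))

        # Numerical check of the same integral
        numeric, _ = quad(lambda z: norm.pdf(z, z0, sigma_r)
                          * omega * norm.pdf(z, mu_k, delta_k), 100.0, 200.0)
        print(f"closed form {closed:.6e} vs quadrature {numeric:.6e}")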

  2. Geomagnetic storms, the Dst ring-current myth and lognormal distributions

    USGS Publications Warehouse

    Campbell, W.H.

    1996-01-01

    The definition of geomagnetic storms dates back to the turn of the century when researchers recognized the unique shape of the H-component field change upon averaging storms recorded at low latitude observatories. A generally accepted modeling of the storm field sources as a magnetospheric ring current was settled about 30 years ago at the start of space exploration and the discovery of the Van Allen belt of particles encircling the Earth. The Dst global 'ring-current' index of geomagnetic disturbances, formulated in that period, is still taken to be the definitive representation for geomagnetic storms. Dst indices, or data from many world observatories processed in a fashion paralleling the index, are used widely by researchers relying on the assumption of such a magnetospheric current-ring depiction. Recent in situ measurements by satellites passing through the ring-current region and computations with disturbed magnetosphere models show that the Dst storm is not solely a main-phase to decay-phase, growth to disintegration, of a massive current encircling the Earth. Although a ring current certainly exists during a storm, there are many other field contributions at the middle- and low-latitude observatories that are summed to show the 'storm' characteristic behavior in Dst at these observatories. One characteristic of the storm field form at middle and low latitudes is that Dst exhibits a lognormal distribution shape when plotted as the hourly value amplitude in each time range. Such distributions, common in nature, arise when there are many contributors to a measurement or when the measurement is a result of a connected series of statistical processes. The amplitude-time displays of Dst are thought to occur because the many time-series processes that are added to form Dst all have their own characteristic distribution in time. By transforming the Dst time display into the equivalent normal distribution, it is shown that a storm recovery can be predicted with remarkable accuracy from measurements made during the Dst growth phase. In the lognormal formulation, the mean, standard deviation and field count within standard deviation limits become definitive Dst storm parameters.
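
    A sketch of the lognormal bookkeeping described above: fit the hourly storm amplitudes on a log scale and report the mean, standard deviation and the count within one standard deviation (the synthetic amplitudes stand in for |Dst|; the recovery-phase prediction is not reproduced here):

        import numpy as np

        rng = np.random.default_rng(5)
        amplitude = rng.lognormal(mean=3.2, sigma=0.6, size=500)  # synthetic |Dst|, nT

        log_amp = np.log(amplitude)
        mu, sigma = log_amp.mean(), log_amp.std(ddof=1)
        within = np.mean(np.abs(log_amp - mu) < sigma)

        print(f"lognormal parameters: mu = {mu:.2f}, sigma = {sigma:.2f}")
        print(f"fraction within 1 sigma = {within:.2f} (Gaussian expectation 0.68)")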

  3. Non-Gaussian Distributions Affect Identification of Expression Patterns, Functional Annotation, and Prospective Classification in Human Cancer Genomes

    PubMed Central

    Marko, Nicholas F.; Weil, Robert J.

    2012-01-01

    Introduction: Gene expression data is often assumed to be normally-distributed, but this assumption has not been tested rigorously. We investigate the distribution of expression data in human cancer genomes and study the implications of deviations from the normal distribution for translational molecular oncology research. Methods: We conducted a central moments analysis of five cancer genomes and performed empiric distribution fitting to examine the true distribution of expression data both on the complete-experiment and on the individual-gene levels. We used a variety of parametric and nonparametric methods to test the effects of deviations from normality on gene calling, functional annotation, and prospective molecular classification using a sixth cancer genome. Results: Central moments analyses reveal statistically-significant deviations from normality in all of the analyzed cancer genomes. We observe as much as 37% variability in gene calling, 39% variability in functional annotation, and 30% variability in prospective, molecular tumor subclassification associated with this effect. Conclusions: Cancer gene expression profiles are not normally-distributed, either on the complete-experiment or on the individual-gene level. Instead, they exhibit complex, heavy-tailed distributions characterized by statistically-significant skewness and kurtosis. The non-Gaussian distribution of this data affects identification of differentially-expressed genes, functional annotation, and prospective molecular classification. These effects may be reduced in some circumstances, although not completely eliminated, by using nonparametric analytics. This analysis highlights two unreliable assumptions of translational cancer gene expression analysis: that "small" departures from normality in the expression data distributions are analytically-insignificant and that "robust" gene-calling algorithms can fully compensate for these effects. PMID:23118863

  4. Pinocchio testing in the forensic analysis of waiting lists: using public waiting list data from Finland and Spain for testing Newcomb-Benford's Law.

    PubMed

    Pinilla, Jaime; López-Valcárcel, Beatriz G; González-Martel, Christian; Peiro, Salvador

    2018-05-09

    Newcomb-Benford's Law (NBL) proposes a regular distribution for first digits, second digits and digit combinations applicable to many different naturally occurring sources of data. Testing deviations from NBL is used in many datasets as a screening tool for identifying data trustworthiness problems. This study aims to compare publicly available waiting list (WL) data from Finland and Spain for testing NBL as an instrument to flag up potential manipulation in WLs. Analysis of the frequency of Finnish and Spanish WLs' first digits to determine if their distribution is similar to the pattern documented by NBL. Deviations from the expected first digit frequency were analysed using Pearson's χ², mean absolute deviation and Kuiper tests. Publicly available WL data from Finland and Spain, two countries with universal health insurance and National Health Systems but characterised by different levels of transparency and good governance standards. Adjustment of the observed distribution of the numbers reported in Finnish and Spanish WL data to the expected distribution according to NBL. WL data reported by the Finnish health system fits first digit NBL according to all statistical tests used (p=0.6519 in χ² test). For Spanish data, this hypothesis was rejected in all tests (p<0.0001 in χ² test). Testing deviations from NBL distribution can be a useful tool for identifying problems with WL data trustworthiness and signalling the need for further testing. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  5. 30 CFR 74.8 - Measurement, accuracy, and reliability requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... concentration, as defined by the relative standard deviation of the distribution of measurements. The relative standard deviation shall be less than 0.1275 without bias for both full-shift measurements of 8 hours or... Standards, Regulations, and Variances, 1100 Wilson Boulevard, Room 2350, Arlington, Virginia 22209-3939...

  6. Large Deviations: Advanced Probability for Undergrads

    ERIC Educational Resources Information Center

    Rolls, David A.

    2007-01-01

    In the branch of probability called "large deviations," rates of convergence (e.g. of the sample mean) are considered. The theory makes use of the moment generating function. So, particularly for sums of independent and identically distributed random variables, the theory can be made accessible to senior undergraduates after a first course in…

  7. Variable selection for distribution-free models for longitudinal zero-inflated count responses.

    PubMed

    Chen, Tian; Wu, Pan; Tang, Wan; Zhang, Hui; Feng, Changyong; Kowalski, Jeanne; Tu, Xin M

    2016-07-20

    Zero-inflated count outcomes arise quite often in research and practice. Parametric models such as the zero-inflated Poisson and zero-inflated negative binomial are widely used to model such responses. Like most parametric models, they are quite sensitive to departures from assumed distributions. Recently, new approaches have been proposed to provide distribution-free, or semi-parametric, alternatives. These methods extend the generalized estimating equations to provide robust inference for population mixtures defined by zero-inflated count outcomes. In this paper, we propose methods to extend smoothly clipped absolute deviation (SCAD)-based variable selection methods to these new models. Variable selection has been gaining popularity in modern clinical research studies, as determining differential treatment effects of interventions for different subgroups has become the norm rather than the exception in the era of patient-centered outcome research. Such moderation analysis in general creates many explanatory variables in regression analysis, and the advantages of SCAD-based methods over their traditional counterparts render them a great choice for addressing these important and timely issues in clinical research. We illustrate the proposed approach with both simulated and real study data. Copyright © 2016 John Wiley & Sons, Ltd.
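
    For reference, the SCAD penalty of Fan and Li has a simple closed piecewise form. A sketch evaluating it elementwise (a = 3.7 is the conventionally recommended value; λ and the evaluation grid are illustrative):

        import numpy as np

        def scad_penalty(theta, lam, a=3.7):
            """SCAD penalty, evaluated elementwise in |theta|."""
            t = np.abs(theta)
            mid = (t > lam) & (t <= a * lam)
            return np.where(t <= lam, lam * t,
                   np.where(mid, (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1)),
                            lam**2 * (a + 1) / 2))

        print(scad_penalty(np.linspace(0.0, 4.0, 9), lam=1.0).round(3))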

  8. Modeling Geodetic Processes with Levy α-Stable Distribution and FARIMA

    NASA Astrophysics Data System (ADS)

    Montillet, Jean-Philippe; Yu, Kegen

    2015-04-01

    In recent years the scientific community has been using the auto-regressive moving average (ARMA) model to describe the noise in global positioning system (GPS) time series (daily solutions). This work starts with an investigation of the limits of the ARMA model, which is widely used in signal processing when the measurement noise is white. Since a typical GPS time series consists of geophysical signals (e.g., seasonal signals) and stochastic processes (e.g., coloured and white noise), the ARMA model may be inappropriate. Therefore, the application of the fractional auto-regressive integrated moving average (FARIMA) model is investigated. The simulation results using simulated time series as well as real GPS time series from a few selected stations around Australia show that the FARIMA model fits the time series better than other models when the coloured noise is larger than the white noise. The second part of this work focuses on fitting the GPS time series with the family of Levy α-stable distributions. Using this distribution, a hypothesis test is developed to effectively eliminate coarse outliers from GPS time series, achieving better performance than the rule of thumb of n standard deviations (with n chosen empirically).
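
    The fractional part of FARIMA enters through the operator (1 - B)^d, whose binomial expansion gives a simple recursive filter. A hedged sketch (d = 0.4 and the random-walk series are illustrative, not fitted GPS parameters):

        import numpy as np

        def frac_diff_weights(d, n_terms):
            """Coefficients w_k of the expansion (1 - B)^d = sum_k w_k B^k."""
            w = np.ones(n_terms)
            for k in range(1, n_terms):
                w[k] = -w[k - 1] * (d - k + 1) / k
            return w

        def frac_diff(x, d):
            w = frac_diff_weights(d, len(x))
            return np.array([np.dot(w[:t + 1], x[t::-1]) for t in range(len(x))])

        rng = np.random.default_rng(6)
        series = np.cumsum(rng.normal(size=300))   # toy daily-coordinate series
        print(frac_diff(series, d=0.4)[:5].round(3))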

  9. Accuracy improvement in laser stripe extraction for large-scale triangulation scanning measurement system

    NASA Astrophysics Data System (ADS)

    Zhang, Yang; Liu, Wei; Li, Xiaodong; Yang, Fan; Gao, Peng; Jia, Zhenyuan

    2015-10-01

    Large-scale triangulation scanning measurement systems are widely used to measure the three-dimensional profile of large-scale components and parts. The accuracy and speed of the laser stripe center extraction are essential for guaranteeing the accuracy and efficiency of the measuring system. However, in the process of large-scale measurement, multiple factors can cause deviation of the laser stripe center, including the spatial light intensity distribution, material reflectivity characteristics, and spatial transmission characteristics. A center extraction method is proposed for improving the accuracy of the laser stripe center extraction based on image evaluation of Gaussian fitting structural similarity and analysis of the multiple source factors. First, according to the features of the gray distribution of the laser stripe, evaluation of the Gaussian fitting structural similarity is estimated to provide a threshold value for center compensation. Then using the relationships between the gray distribution of the laser stripe and the multiple source factors, a compensation method of center extraction is presented. Finally, measurement experiments for a large-scale aviation composite component are carried out. The experimental results for this specific implementation verify the feasibility of the proposed center extraction method and the improved accuracy for large-scale triangulation scanning measurements.
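
    The operation the proposed compensation refines is a Gaussian fit to each gray-level cross-section of the stripe. A minimal sketch for one synthetic column (the noise level, stripe width and offset are assumptions; the paper's structural-similarity evaluation and compensation steps are not included):

        import numpy as np
        from scipy.optimize import curve_fit

        def gauss(x, amp, center, sigma, offset):
            return amp * np.exp(-0.5 * ((x - center) / sigma) ** 2) + offset

        rows = np.arange(64, dtype=float)
        true_center = 30.7
        gray = gauss(rows, 200.0, true_center, 3.0, 10.0)
        gray += np.random.default_rng(7).normal(0.0, 2.0, rows.size)  # sensor noise

        p0 = [gray.max(), float(gray.argmax()), 3.0, 0.0]
        popt, _ = curve_fit(gauss, rows, gray, p0=p0)
        print(f"extracted center {popt[1]:.3f} px (true {true_center} px)")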

  10. Reliability-Based Design Optimization of a Composite Airframe Component

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.; Coroneos, Rula; Patnaik, Surya N.

    2011-01-01

    A stochastic optimization methodology (SDO) has been developed to design airframe structural components made of metallic and composite materials. The design method accommodates uncertainties in load, strength, and material properties that are defined by distribution functions with mean values and standard deviations. A response parameter, such as a failure mode, thereby becomes a function of reliability. The primitive variables, like thermomechanical loads, material properties, and failure theories, as well as variables like depth of a beam or thickness of a membrane, are considered random parameters with specified distribution functions defined by mean values and standard deviations.

  11. Hydrophobicity diversity in globular and nonglobular proteins measured with the Gini index.

    PubMed

    Carugo, Oliviero

    2017-12-01

    Amino acids and their properties are variably distributed in proteins and different compositions determine all protein features, ranging from solubility to stability and functionality. Gini index, a tool to estimate distribution uniformity, is widely used in macroeconomics and has numerous statistical applications. Here, Gini index is used to analyze the distribution of hydrophobicity in proteins and to compare hydrophobicity distribution in globular and intrinsically disordered proteins. Based on the analysis of carefully selected high-quality data sets of proteins extracted from the Protein Data Bank (http://www.rcsb.org) and from the DisProt database (http://www.disprot.org/), it is observed that hydrophobicity is distributed in a more diverse way in intrinsically disordered proteins than in folded and soluble globular proteins. This correlates with the observation that the amino acid composition deviates from uniformity (estimated with the Shannon and Gini-Simpson indices) more in intrinsically disordered proteins than in globular and soluble proteins. Although statistical tools like the Gini index have received little attention in molecular biology, these results show that they allow one to estimate sequence diversity and that they are useful to delineate trends that can hardly be described, otherwise, in simple and concise ways. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
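
    A sketch of the Gini computation on a hydrophobicity profile (the Kyte-Doolittle scale, shifted here to be non-negative so the index is well defined, and the toy sequence are assumptions made for illustration; the paper's datasets are not reproduced):

        import numpy as np

        # Kyte-Doolittle hydropathy values (shifted below; the shift is an assumption)
        KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
              'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
              'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
              'Y': -1.3, 'V': 4.2}

        def gini(values):
            """Gini index: half the mean absolute difference divided by the mean."""
            v = np.sort(np.asarray(values, dtype=float))
            n = v.size
            return np.sum((2 * np.arange(1, n + 1) - n - 1) * v) / (n * v.sum())

        seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"    # toy sequence
        h = np.array([KD[a] for a in seq]) + 4.5     # shift so all values >= 0
        print(f"Gini index of hydrophobicity: {gini(h):.3f}")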

  12. Multiple scattering and the density distribution of a Cs MOT.

    PubMed

    Overstreet, K; Zabawa, P; Tallant, J; Schwettmann, A; Shaffer, J

    2005-11-28

    Multiple scattering is studied in a Cs magneto-optical trap (MOT). We use two Abel inversion algorithms to recover density distributions of the MOT from fluorescence images. Deviations of the density distribution from a Gaussian are attributed to multiple scattering.

  13. Fluctuation-dissipation relation and stationary distribution of an exactly solvable many-particle model for active biomatter far from equilibrium.

    PubMed

    Netz, Roland R

    2018-05-14

    An exactly solvable, Hamiltonian-based model of many massive particles that are coupled by harmonic potentials and driven by stochastic non-equilibrium forces is introduced. The stationary distribution and the fluctuation-dissipation relation are derived in closed form for the general non-equilibrium case. Deviations from equilibrium are on one hand characterized by the difference of the obtained stationary distribution from the Boltzmann distribution; this is possible because the model derives from a particle Hamiltonian. On the other hand, the difference between the obtained non-equilibrium fluctuation-dissipation relation and the standard equilibrium fluctuation-dissipation theorem allows us to quantify non-equilibrium in an alternative fashion. Both indicators of non-equilibrium behavior, i.e., deviations from the Boltzmann distribution and deviations from the equilibrium fluctuation-dissipation theorem, can be expressed in terms of a single non-equilibrium parameter α that involves the ratio of friction coefficients and random force strengths. The concept of a non-equilibrium effective temperature, which can be defined by the relation between fluctuations and the dissipation, is by comparison with the exactly derived stationary distribution shown not to hold, even if the effective temperature is made frequency dependent. The analysis is not confined to close-to-equilibrium situations but rather is exact and thus holds for arbitrarily large deviations from equilibrium. Also, the suggested harmonic model can be obtained from non-linear mechanical network systems by an expansion in terms of suitably chosen deviatory coordinates; the obtained results should thus be quite general. This is demonstrated by comparison of the derived non-equilibrium fluctuation dissipation relation with experimental data on actin networks that are driven out of equilibrium by energy-consuming protein motors. The comparison is excellent and allows us to extract the non-equilibrium parameter α from experimental spectral response and fluctuation data.

  14. Fluctuation-dissipation relation and stationary distribution of an exactly solvable many-particle model for active biomatter far from equilibrium

    NASA Astrophysics Data System (ADS)

    Netz, Roland R.

    2018-05-01

    An exactly solvable, Hamiltonian-based model of many massive particles that are coupled by harmonic potentials and driven by stochastic non-equilibrium forces is introduced. The stationary distribution and the fluctuation-dissipation relation are derived in closed form for the general non-equilibrium case. Deviations from equilibrium are on one hand characterized by the difference of the obtained stationary distribution from the Boltzmann distribution; this is possible because the model derives from a particle Hamiltonian. On the other hand, the difference between the obtained non-equilibrium fluctuation-dissipation relation and the standard equilibrium fluctuation-dissipation theorem allows us to quantify non-equilibrium in an alternative fashion. Both indicators of non-equilibrium behavior, i.e., deviations from the Boltzmann distribution and deviations from the equilibrium fluctuation-dissipation theorem, can be expressed in terms of a single non-equilibrium parameter α that involves the ratio of friction coefficients and random force strengths. The concept of a non-equilibrium effective temperature, which can be defined by the relation between fluctuations and the dissipation, is by comparison with the exactly derived stationary distribution shown not to hold, even if the effective temperature is made frequency dependent. The analysis is not confined to close-to-equilibrium situations but rather is exact and thus holds for arbitrarily large deviations from equilibrium. Also, the suggested harmonic model can be obtained from non-linear mechanical network systems by an expansion in terms of suitably chosen deviatory coordinates; the obtained results should thus be quite general. This is demonstrated by comparison of the derived non-equilibrium fluctuation dissipation relation with experimental data on actin networks that are driven out of equilibrium by energy-consuming protein motors. The comparison is excellent and allows us to extract the non-equilibrium parameter α from experimental spectral response and fluctuation data.

  15. The law of distribution of light beam direction fluctuations in telescopes. [normal density functions

    NASA Technical Reports Server (NTRS)

    Divinskiy, M. L.; Kolchinskiy, I. G.

    1974-01-01

    The distribution of deviations from mean star trail directions was studied on the basis of 105 star trails. It was found that about 93% of the trails yield a distribution in agreement with the normal law. About 4% of the star trails agree with the Charlier distribution.

  16. The Laplace method for probability measures in Banach spaces

    NASA Astrophysics Data System (ADS)

    Piterbarg, V. I.; Fatalov, V. R.

    1995-12-01

    Contents
    §1. Introduction
    Chapter I. Asymptotic analysis of continual integrals in Banach space, depending on a large parameter
    §2. The large deviation principle and logarithmic asymptotics of continual integrals
    §3. Exact asymptotics of Gaussian integrals in Banach spaces: the Laplace method
      3.1. The Laplace method for Gaussian integrals taken over the whole Hilbert space: isolated minimum points ([167], I)
      3.2. The Laplace method for Gaussian integrals in Hilbert space: the manifold of minimum points ([167], II)
      3.3. The Laplace method for Gaussian integrals in Banach space ([90], [174], [176])
      3.4. Exact asymptotics of large deviations of Gaussian norms
    §4. The Laplace method for distributions of sums of independent random elements with values in Banach space
      4.1. The case of a non-degenerate minimum point ([137], I)
      4.2. A degenerate isolated minimum point and the manifold of minimum points ([137], II)
    §5. Further examples
      5.1. The Laplace method for the local time functional of a Markov symmetric process ([217])
      5.2. The Laplace method for diffusion processes, a finite number of non-degenerate minimum points ([116])
      5.3. Asymptotics of large deviations for Brownian motion in the Hölder norm
      5.4. Non-asymptotic expansion of a strong stable law in Hilbert space ([41])
    Chapter II. The double sum method - a version of the Laplace method in the space of continuous functions
    §6. Pickands' method of double sums
      6.1. General situations
      6.2. Asymptotics of the distribution of the maximum of a Gaussian stationary process
      6.3. Asymptotics of the probability of a large excursion of a Gaussian non-stationary process
    §7. Probabilities of large deviations of trajectories of Gaussian fields
      7.1. Homogeneous fields and fields with constant dispersion
      7.2. Finitely many maximum points of dispersion
      7.3. Manifold of maximum points of dispersion
      7.4. Asymptotics of distributions of maxima of Wiener fields
    §8. Exact asymptotics of large deviations of the norm of Gaussian vectors and processes with values in the spaces L_k^p and l^2. Gaussian fields with the set of parameters in Hilbert space
      8.1. Exact asymptotics of the distribution of the l_k^p-norm of a Gaussian finite-dimensional vector with dependent coordinates, p > 1
      8.2. Exact asymptotics of probabilities of high excursions of trajectories of processes of type χ²
      8.3. Asymptotics of the probabilities of large deviations of Gaussian processes with a set of parameters in Hilbert space [74]
      8.4. Asymptotics of distributions of maxima of the norms of l^2-valued Gaussian processes
      8.5. Exact asymptotics of large deviations for the l^2-valued Ornstein-Uhlenbeck process
    Bibliography

  17. Characterizing optical properties and spatial heterogeneity of human ovarian tissue using spatial frequency domain imaging

    NASA Astrophysics Data System (ADS)

    Nandy, Sreyankar; Mostafa, Atahar; Kumavor, Patrick D.; Sanders, Melinda; Brewer, Molly; Zhu, Quing

    2016-10-01

    A spatial frequency domain imaging (SFDI) system was developed for characterizing ex vivo human ovarian tissue using wide-field absorption and scattering properties and their spatial heterogeneities. Based on the observed differences between absorption and scattering images of different ovarian tissue groups, six parameters were quantitatively extracted. These are the mean absorption and scattering, spatial heterogeneities of both absorption and scattering maps measured by a standard deviation, and a fitting error of a Gaussian model fitted to normalized mean Radon transform of the absorption and scattering maps. A logistic regression model was used for classification of malignant and normal ovarian tissues. A sensitivity of 95%, specificity of 100%, and area under the curve of 0.98 were obtained using six parameters extracted from the SFDI images. The preliminary results demonstrate the diagnostic potential of the SFDI method for quantitative characterization of wide-field optical properties and the spatial distribution heterogeneity of human ovarian tissue. SFDI could be an extremely robust and valuable tool for evaluation of the ovary and detection of neoplastic changes of ovarian cancer.

  18. PREST-plus identifies pedigree errors and cryptic relatedness in the GAW18 sample using genome-wide SNP data.

    PubMed

    Sun, Lei; Dimitromanolakis, Apostolos

    2014-01-01

    Pedigree errors and cryptic relatedness often appear in families or population samples collected for genetic studies. If not identified, these issues can lead to either increased false negatives or false positives in both linkage and association analyses. To identify pedigree errors and cryptic relatedness among individuals from the 20 San Antonio Family Studies (SAFS) families and cryptic relatedness among the 157 putatively unrelated individuals, we apply PREST-plus to the genome-wide single-nucleotide polymorphism (SNP) data and analyze estimated identity-by-descent (IBD) distributions for all pairs of genotyped individuals. Based on the given pedigrees alone, PREST-plus identifies the following putative pairs: 1091 full-sib, 162 half-sib, 360 grandparent-grandchild, 2269 avuncular, 2717 first cousin, 402 half-avuncular, 559 half-first cousin, 2 half-sib+first cousin, 957 parent-offspring and 440,546 unrelated. Using the genotype data, PREST-plus detects 7 mis-specified relative pairs, with their IBD estimates clearly deviating from the null expectations, and it identifies 4 cryptic related pairs involving 7 individuals from 6 families.

  19. The Role of Constitutional Copy Number Variants in Breast Cancer

    PubMed Central

    Walker, Logan C.; Wiggins, George A.R.; Pearson, John F.

    2015-01-01

    Constitutional copy number variants (CNVs) include inherited and de novo deviations from a diploid state at a defined genomic region. These variants contribute significantly to genetic variation and disease in humans, including breast cancer susceptibility. Identification of genetic risk factors for breast cancer in recent years has been dominated by the use of genome-wide technologies, such as single nucleotide polymorphism (SNP)-arrays, with a significant focus on single nucleotide variants. To date, these large datasets have been underutilised for generating genome-wide CNV profiles despite offering a massive resource for assessing the contribution of these structural variants to breast cancer risk. Technical challenges remain in determining the location and distribution of CNVs across the human genome due to the accuracy of computational prediction algorithms and resolution of the array data. Moreover, better methods are required for interpreting the functional effect of newly discovered CNVs. In this review, we explore current and future application of SNP array technology to assess rare and common CNVs in association with breast cancer risk in humans. PMID:27600231

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaidheeswaran, Avinash; Shaffer, Franklin; Gopalan, Balaji

    Here, the statistics of fluctuating velocity components are studied in the riser of a closed-loop circulating fluidized bed with fluid catalytic cracking catalyst particles. Our analysis shows distinct similarities as well as deviations compared to existing theories and bench-scale experiments. The study confirms an anisotropic and non-Maxwellian distribution of fluctuating velocity components. The velocity distribution functions (VDFs) corresponding to transverse fluctuations exhibit symmetry, and follow a stretched-exponential behavior up to three standard deviations. The form of the transverse VDF is largely determined by interparticle interactions. The tails become more overpopulated with an increase in particle loading. The observed deviations from the Gaussian distribution are represented using the leading order term in the Sonine expansion, which is commonly used to approximate the VDFs in kinetic theory for granular flows. The vertical fluctuating VDFs are asymmetric and the skewness shifts as the wall is approached. In comparison to transverse fluctuations, the vertical VDF is determined by the local hydrodynamics. This is an observation of particle velocity fluctuations in a large-scale system and their quantitative comparison with Maxwell-Boltzmann statistics.

  1. Observations of Solar Energetic Particles from 3He-rich Events over a Wide Range of Heliographic Longitude

    NASA Astrophysics Data System (ADS)

    Wiedenbeck, M. E.; Mason, G. M.; Cohen, C. M. S.; Nitta, N. V.; Gómez-Herrero, R.; Haggerty, D. K.

    2013-01-01

    A prevailing model for the origin of 3He-rich solar energetic particle (SEP) events attributes particle acceleration to processes associated with the reconnection between closed magnetic field lines in an active region and neighboring open field lines. The open field from the small reconnection volume then provides a path along which accelerated particles escape into a relatively narrow range of angles in the heliosphere. The narrow width (standard deviation <20°) of the distribution of X-ray flare longitudes found to be associated with 3He-rich SEP events detected at a single spacecraft at 1 AU supports this model. We report multispacecraft observations of individual 3He-rich SEP events that occurred during the solar minimum time period from 2007 January through 2011 January using instrumentation carried by the two Solar Terrestrial Relations Observatory spacecraft and the Advanced Composition Explorer. We find that detections of 3He-rich events at pairs of spacecraft are not uncommon, even when their longitudinal separation is >60°. We present the observations of the 3He-rich event of 2010 February 7, which was detected at all three spacecraft when they spanned 136° in heliographic longitude. Measured fluences of 3He in this event were found to have a strong dependence on longitude which is well fit by a Gaussian with standard deviation ~48° centered at the longitude that is connected to the source region by a nominal Parker spiral magnetic field. We discuss several mechanisms for distributing flare-accelerated particles over a wide range of heliographic longitudes including interplanetary diffusion perpendicular to the magnetic field, spreading of a compact cluster of open field lines between the active region and the source surface where the field becomes radial and opens out into the heliosphere, and distortion of the interplanetary field by a preceding coronal mass ejection. Statistical studies of additional 3He-rich events detected at multiple spacecraft will be needed to establish the relative importance of the various mechanisms.

  2. Tests for qualitative treatment-by-centre interaction using a 'pushback' procedure.

    PubMed

    Ciminera, J L; Heyse, J F; Nguyen, H H; Tukey, J W

    1993-06-15

    In multicentre clinical trials using a common protocol, the centres are usually regarded as being a fixed factor, thus allowing any treatment-by-centre interaction to be omitted from the error term for the effect of treatment. However, we feel it necessary to use the treatment-by-centre interaction as the error term if there is substantial evidence that the interaction with centres is qualitative instead of quantitative. To make allowance for the estimated uncertainties of the centre means, we propose choosing a reference value (for example, the median of the ordered array of centre means) and converting the individual centre results into standardized deviations from the reference value. The deviations are then reordered, and the results 'pushed back' by amounts appropriate for the corresponding order statistics in a sample from the relevant distribution. The pushed-back standardized deviations are then restored to the original scale. The appearance of opposite signs among the destandardized values for the various centres is then taken as 'substantial evidence' of qualitative interaction. Procedures are presented using, in any combination: (i) Gaussian, or Student's t-distribution; (ii) order-statistic medians or outward 90 per cent points of the corresponding order statistic distributions; (iii) pooling or grouping and pooling the internally estimated standard deviations of the centre means. The use of the least conservative combination--Student's t, outward 90 per cent points, grouping and pooling--is recommended.
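    A minimal sketch of the pushback mechanics described above, under the Gaussian variant with order-statistic medians: standardize the centre deviations from the median, push each ordered deviation back by the median of the corresponding standard-normal order statistic (a Filliben-style approximation is used here), destandardize, and look for opposite signs. The centre treatment differences and standard errors below are hypothetical; this illustrates the idea, not the authors' code.

```python
# Sketch of the 'pushback' procedure (Gaussian variant, order-statistic
# medians). All inputs are hypothetical illustrations.
import numpy as np
from scipy.stats import norm

centre_diffs = np.array([1.2, 0.8, 0.5, 0.3, -0.1, 0.9])  # treatment diffs
std_errors   = np.array([0.4, 0.3, 0.5, 0.4, 0.3, 0.6])   # per-centre SEs

ref = np.median(centre_diffs)                  # reference value
d = (centre_diffs - ref) / std_errors          # standardized deviations

n = len(d)
idx = np.argsort(d)
# Filliben-style medians of standard-normal order statistics
m = norm.ppf((np.arange(1, n + 1) - 0.3175) / (n + 0.365))

pushed = np.empty(n)
pushed[idx] = d[idx] - m                       # push each ordered value back
restored = pushed * std_errors + ref           # back to the original scale

qualitative = restored.min() < 0 < restored.max()
print("substantial evidence of qualitative interaction:", qualitative)
```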

  3. Particle size distributions by transmission electron microscopy: an interlaboratory comparison case study

    PubMed Central

    Rice, Stephen B; Chan, Christopher; Brown, Scott C; Eschbach, Peter; Han, Li; Ensor, David S; Stefaniak, Aleksandr B; Bonevich, John; Vladár, András E; Hight Walker, Angela R; Zheng, Jiwen; Starnes, Catherine; Stromberg, Arnold; Ye, Jia; Grulke, Eric A

    2015-01-01

    This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin–Rammler–Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a framework for assessing nanoparticle size distributions using TEM for image acquisition. PMID:26361398
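    As a sketch of the non-linear regression step described above, the following fits a lognormal reference model to a diameter histogram and reports the relative standard error (RSE) of each fitted parameter from the regression covariance. The diameters are synthetic, loosely matched to the ~30 nm nominal size; nothing here reproduces the interlaboratory data.

```python
# Sketch: lognormal reference model fit with RSEs; synthetic diameters.
import numpy as np
from scipy.optimize import curve_fit

def lognormal_pdf(d, mu, sigma):
    # number-based lognormal density in the diameter d
    return np.exp(-(np.log(d) - mu) ** 2 / (2 * sigma ** 2)) / (
        d * sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
diam = rng.lognormal(mean=np.log(27.6), sigma=0.09, size=500)  # nm, synthetic

dens, edges = np.histogram(diam, bins=30, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

popt, pcov = curve_fit(lognormal_pdf, centers, dens, p0=(np.log(27.0), 0.1))
perr = np.sqrt(np.diag(pcov))
for name, val, err in zip(("mu", "sigma"), popt, perr):
    print(f"{name} = {val:.4f}, RSE = {100 * err / abs(val):.1f}%")
```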

  4. A hybrid method with deviational particles for spatial inhomogeneous plasma

    NASA Astrophysics Data System (ADS)

    Yan, Bokai

    2016-03-01

    In this work we propose a Hybrid method with Deviational Particles (HDP) for a plasma modeled by the inhomogeneous Vlasov-Poisson-Landau system. We split the distribution into a Maxwellian part evolved by a grid-based fluid solver and a deviation part simulated by numerical particles. These particles, named deviational particles, can be both positive and negative. We combine the Monte Carlo method proposed in [31], a Particle-in-Cell method, and a Macro-Micro decomposition method [3] to design an efficient hybrid method. Furthermore, coarse particles are employed to accelerate the simulation. A particle resampling technique for both deviational particles and coarse particles is also investigated and improved. This method is applicable in all regimes and is significantly more efficient than a PIC-DSMC method near the fluid regime.

  5. Exact Large-Deviation Statistics for a Nonequilibrium Quantum Spin Chain

    NASA Astrophysics Data System (ADS)

    Žnidarič, Marko

    2014-01-01

    We consider a one-dimensional XX spin chain in a nonequilibrium setting with Lindblad-type boundary driving. By calculating the large-deviation rate function in the thermodynamic limit, a generalization of free energy to a nonequilibrium setting, we obtain the complete distribution of current, including closed expressions for the lower-order cumulants. We also identify two phase-transition-like behaviors: in the thermodynamic limit the current probability distribution becomes discontinuous, while at maximal driving the range of possible current values changes discontinuously. In the thermodynamic limit the current has a finite upper and lower bound. We also explicitly confirm the nonequilibrium fluctuation relation and show that the current distribution is the same under the mapping of the coupling strength Γ→1/Γ.

  6. Genome-wide Scan of 29,141 African Americans Finds No Evidence of Directional Selection since Admixture

    PubMed Central

    Bhatia, Gaurav; Tandon, Arti; Patterson, Nick; Aldrich, Melinda C.; Ambrosone, Christine B.; Amos, Christopher; Bandera, Elisa V.; Berndt, Sonja I.; Bernstein, Leslie; Blot, William J.; Bock, Cathryn H.; Caporaso, Neil; Casey, Graham; Deming, Sandra L.; Diver, W. Ryan; Gapstur, Susan M.; Gillanders, Elizabeth M.; Harris, Curtis C.; Henderson, Brian E.; Ingles, Sue A.; Isaacs, William; De Jager, Phillip L.; John, Esther M.; Kittles, Rick A.; Larkin, Emma; McNeill, Lorna H.; Millikan, Robert C.; Murphy, Adam; Neslund-Dudas, Christine; Nyante, Sarah; Press, Michael F.; Rodriguez-Gil, Jorge L.; Rybicki, Benjamin A.; Schwartz, Ann G.; Signorello, Lisa B.; Spitz, Margaret; Strom, Sara S.; Tucker, Margaret A.; Wiencke, John K.; Witte, John S.; Wu, Xifeng; Yamamura, Yuko; Zanetti, Krista A.; Zheng, Wei; Ziegler, Regina G.; Chanock, Stephen J.; Haiman, Christopher A.; Reich, David; Price, Alkes L.

    2014-01-01

    The extent of recent selection in admixed populations is currently an unresolved question. We scanned the genomes of 29,141 African Americans and failed to find any genome-wide-significant deviations in local ancestry, indicating no evidence of selection influencing ancestry after admixture. A recent analysis of data from 1,890 African Americans reported that there was evidence of selection in African Americans after their ancestors left Africa, both before and after admixture. Selection after admixture was reported on the basis of deviations in local ancestry, and selection before admixture was reported on the basis of allele-frequency differences between African Americans and African populations. The local-ancestry deviations reported by the previous study did not replicate in our very large sample, and we show that such deviations were expected purely by chance, given the number of hypotheses tested. We further show that the previous study’s conclusion of selection in African Americans before admixture is also subject to doubt. This is because the FST statistics they used were inflated and because true signals of unusual allele-frequency differences between African Americans and African populations would be best explained by selection that occurred in Africa prior to migration to the Americas. PMID:25242497

  7. Estimating insect flight densities from attractive trap catches and flight height distributions

    USDA-ARS?s Scientific Manuscript database

    Insect species often exhibit a specific mean flight height and vertical flight distribution that approximates a normal distribution with a characteristic standard deviation (SD). Many studies in the literature report catches on passive (non-attractive) traps at several heights. These catches were us...

  8. An empirical analysis of the distribution of the duration of overshoots in a stationary gaussian stochastic process

    NASA Technical Reports Server (NTRS)

    Parrish, R. S.; Carter, M. C.

    1974-01-01

    This analysis utilizes computer simulation and statistical estimation. Realizations of stationary Gaussian stochastic processes with selected autocorrelation functions were computer simulated. Analysis of the simulated data revealed that the mean and the variance of a process were functionally dependent upon the autocorrelation parameter and crossing level. Using predicted values for the mean and standard deviation, the distribution parameters were estimated by the method of moments. Thus, given the autocorrelation parameter, crossing level, mean, and standard deviation of a process, the probability of exceeding the crossing level for a particular length of time was calculated.

  9. Higher order moments of the matter distribution in scale-free cosmological simulations with large dynamic range

    NASA Technical Reports Server (NTRS)

    Lucchin, Francesco; Matarrese, Sabino; Melott, Adrian L.; Moscardini, Lauro

    1994-01-01

    We calculate reduced moments ξ̄_q of the matter density fluctuations, up to order q = 5, from counts in cells produced by particle-mesh numerical simulations with scale-free Gaussian initial conditions. We use power-law spectra P(k) ∝ k^n with indices n = -3, -2, -1, 0, 1. Due to the supposed absence of characteristic times or scales in our models, all quantities are expected to depend on a single scaling variable. For each model, the moments at all times can be expressed in terms of the variance ξ̄_2 alone. We look for agreement with the hierarchical scaling ansatz, according to which ξ̄_q ∝ ξ̄_2^(q-1). For n ≤ -2 models, we find strong deviations from the hierarchy, which are mostly due to the presence of boundary problems in the simulations. A small residual signal of deviation from hierarchical scaling is, however, also found in n ≥ -1 models. The wide range of spectra considered and the large dynamic range, with careful checks of scaling and shot-noise effects, allow us to reliably detect evolution away from the perturbation theory result.

  10. The inclusion of capillary distribution in the adiabatic tissue homogeneity model of blood flow

    NASA Astrophysics Data System (ADS)

    Koh, T. S.; Zeman, V.; Darko, J.; Lee, T.-Y.; Milosevic, M. F.; Haider, M.; Warde, P.; Yeung, I. W. T.

    2001-05-01

    We have developed a non-invasive imaging tracer kinetic model for blood flow which takes into account the distribution of capillaries in tissue. Each individual capillary is assumed to follow the adiabatic tissue homogeneity model. The main strength of our new model is in its ability to quantify the functional distribution of capillaries by the standard deviation in the time taken by blood to pass through the tissue. We have applied our model to the human prostate and have tested two different types of distribution functions. Both distribution functions yielded very similar predictions for the various model parameters, and in particular for the standard deviation in transit time. Our motivation for developing this model is the fact that the capillary distribution in cancerous tissue is drastically different from that in normal tissue. We believe that there is great potential for our model to be used as a prognostic tool in cancer treatment. For example, an accurate knowledge of the distribution in transit times might result in an accurate estimate of the degree of tumour hypoxia, which is crucial to the success of radiation therapy.

  11. Large-deviation properties of Brownian motion with dry friction.

    PubMed

    Chen, Yaming; Just, Wolfram

    2014-10-01

    We investigate piecewise-linear stochastic models with regard to the probability distribution of functionals of the stochastic processes, a question that occurs frequently in large deviation theory. The functionals that we are looking into in detail are related to the time a stochastic process spends at a phase space point or in a phase space region, as well as to the motion with inertia. For a Langevin equation with discontinuous drift, we extend the so-called backward Fokker-Planck technique for non-negative support functionals to arbitrary support functionals, to derive explicit expressions for the moments of the functional. Explicit solutions for the moments and for the distribution of the so-called local time, the occupation time, and the displacement are derived for the Brownian motion with dry friction, including quantitative measures to characterize deviation from Gaussian behavior in the asymptotic long time limit.

  12. Migration in the shearing sheet and estimates for young open cluster migration

    NASA Astrophysics Data System (ADS)

    Quillen, Alice C.; Nolting, Eric; Minchev, Ivan; De Silva, Gayandhi; Chiappini, Cristina

    2018-04-01

    Using tracer particles embedded in self-gravitating shearing sheet N-body simulations, we investigate the distance in guiding centre radius that stars or star clusters can migrate in a few orbital periods. The standard deviations of guiding centre distributions and maximum migration distances depend on the Toomre or critical wavelength and the contrast in mass surface density caused by spiral structure. Comparison between our simulations and estimated guiding radii for a few young supersolar-metallicity open clusters, including NGC 6583, suggests that the contrast in mass surface density in the solar neighbourhood has a ratio of standard deviation to mean of about 1/4 in the surface density distribution, larger than that measured using COBE data by Drimmel and Spergel. Our estimate is consistent with a standard deviation of ~0.07 dex in the metallicities measured from high-quality spectroscopic data for 38 young open clusters (<1 Gyr) with mean galactocentric radii of 7-9 kpc.

  13. Research on motor braking-based DYC strategy for distributed electric vehicle

    NASA Astrophysics Data System (ADS)

    Zhang, Jingming; Liao, Weijie; Chen, Lei; Cui, Shumei

    2017-08-01

    In order to bring into full play the advantages of motor braking and enhance the handling stability of distributed electric vehicles, a motor braking-based direct yaw moment control (DYC) strategy was proposed. The strategy identifies whether a vehicle has understeered or oversteered, calculates the direct yaw moment required for steering correction by taking the corrected yaw-velocity deviation and slip-angle deviation as control variables, and exerts motor braking moment on the target wheels to perform the correction by differential braking. For validation, a combined simulation platform was set up to simulate the proposed motor braking control strategy. As shown by the results, the motor braking-based DYC strategy adjusted the motor braking moment and hydraulic braking moment on the target wheels in a timely manner, and corrected the steering deviation and sideslip of the vehicle in an unstable state, improving handling stability.

  14. Determining Normal-Distribution Tolerance Bounds Graphically

    NASA Technical Reports Server (NTRS)

    Mezzacappa, M. A.

    1983-01-01

    Graphical method requires only a few calculations with simple equations and table lookup. Distribution established from only three points: upper and lower confidence bounds of the mean and lower confidence bound of the standard deviation. Graphical procedure establishes best-fit line for measured data and bounds for selected confidence level and any distribution percentile.

  15. Quantifying Proportional Variability

    PubMed Central

    Heath, Joel P.; Borowski, Peter

    2013-01-01

    Real quantities can undergo such a wide variety of dynamics that the mean is often a meaningless reference point for measuring variability. Despite their widespread application, techniques like the Coefficient of Variation are not truly proportional and exhibit pathological properties. The non-parametric measure Proportional Variability (PV) [1] resolves these issues and provides a robust way to summarize and compare variation in quantities exhibiting diverse dynamical behaviour. Instead of being based on deviation from an average value, variation is simply quantified by comparing the numbers to each other, requiring no assumptions about central tendency or underlying statistical distributions. While PV has been introduced before and has already been applied in various contexts to population dynamics, here we present a deeper analysis of this new measure, derive analytical expressions for the PV of several general distributions and present new comparisons with the Coefficient of Variation, demonstrating cases in which PV is the more favorable measure. We show that PV provides an easily interpretable approach for measuring and comparing variation that can be generally applied throughout the sciences, from contexts ranging from stock market stability to climate variation. PMID:24386334
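    A minimal sketch contrasting PV with the Coefficient of Variation, using the pairwise min/max definition of PV from the cited literature; the series is invented to show how a single spike inflates CV while PV stays moderate.

```python
# Sketch: Proportional Variability (PV) vs Coefficient of Variation (CV).
import numpy as np
from itertools import combinations

def pv(z):
    """Proportional Variability: mean of 1 - min/max over all pairs."""
    pairs = list(combinations(z, 2))
    return sum(1.0 - min(a, b) / max(a, b) for a, b in pairs) / len(pairs)

def cv(z):
    z = np.asarray(z, dtype=float)
    return z.std(ddof=1) / z.mean()

series = [10, 12, 9, 11, 10, 300]   # one spike dominates the mean
print(f"PV = {pv(series):.3f}, CV = {cv(series):.3f}")
```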

  16. Winter climatic conditions in Andalusia (southern Spain) during the Dalton Minimum from documentary sources.

    NASA Astrophysics Data System (ADS)

    Rodrigo, Fernando S.

    2010-05-01

    In this work, a reconstruction of winter rainfall and temperature in Andalusia (southern Iberian Peninsula) during the period 1750-1850 is presented. The reconstruction is based on the analysis of a wide variety of documentary data. This period is interesting because it is characterized by a minimum in solar irradiance (the Dalton Minimum, around 1800) as well as intense volcanic activity (for instance, the eruption of Tambora in 1815), when increasing atmospheric CO2 concentrations were of minor importance. The reconstruction methodology is based on counting the number of extreme events in the past and inferring the mean value and standard deviation under the assumption of a normal distribution for the climate variables. Results are compared with the behaviour of regional series for the reference period 1960-1990. The comparison of the distribution functions corresponding to the 1790-1820 and 1960-1990 periods indicates that during the Dalton Minimum the frequency of droughts and warm winters was lower than during the reference period, while the frequencies of wet and cold winters were similar. Future research work is outlined.
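    The reconstruction logic lends itself to a short sketch: assuming normality, observed frequencies of two classes of extreme winters fix the mean and standard deviation through the inverse normal CDF. The thresholds and frequencies below are hypothetical, not values from the documentary sources.

```python
# Sketch: infer mean/SD of a normal variable from extreme-event frequencies.
from scipy.stats import norm

t_dry, t_wet = 200.0, 600.0   # rainfall thresholds (mm), hypothetical
p_dry, p_wet = 0.10, 0.25     # observed frequencies of extreme winters

# under N(mu, sigma): P(X < t_dry) = p_dry and P(X > t_wet) = p_wet
z1 = norm.ppf(p_dry)
z2 = norm.ppf(1.0 - p_wet)

sigma = (t_wet - t_dry) / (z2 - z1)
mu = t_dry - sigma * z1
print(f"reconstructed mean = {mu:.0f} mm, standard deviation = {sigma:.0f} mm")
```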

  17. Effect of inhomogeneous Schottky barrier height of SnO2 nanowires device

    NASA Astrophysics Data System (ADS)

    Amorim, Cleber A.; Bernardo, Eric P.; Leite, Edson R.; Chiquito, Adenilson J.

    2018-05-01

    The current–voltage (I–V) characteristics of metal–semiconductor junction (Au–Ni/SnO2/Au–Ni) Schottky barriers in SnO2 nanowires were investigated over a wide temperature range. By using the Schottky–Mott model, the zero-bias barrier height Φ_B was estimated from the I–V characteristics and found to increase with increasing temperature; on the other hand, the ideality factor (n) was found to decrease with increasing temperature. The variation in the Schottky barrier and n was attributed to spatial inhomogeneity of the Schottky barrier height. The experimental I–V characteristics exhibited a Gaussian distribution of barrier heights with a mean barrier height Φ̄_B of 0.30 eV and standard deviation σ_s of 60 meV. Additionally, the modified Richardson constant was obtained as 70 A cm^-2 K^-2, leading to an effective mass of 0.58 m_0. Consequently, the temperature dependence of the I–V characteristics of the SnO2 nanowire devices can be successfully explained within the Schottky–Mott framework by taking into account a Gaussian distribution of barrier heights.
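    Extraction of the mean barrier height and its Gaussian spread typically uses the standard relation Φ_ap = Φ̄_B - σ_s²/(2·k_B·T) for the apparent barrier height; a sketch under that assumption follows, with synthetic data points chosen near the values reported above.

```python
# Sketch: mean barrier and Gaussian spread from the T-dependence of the
# apparent barrier height. Synthetic points, not the measured device data.
import numpy as np

kB = 8.617e-5                            # Boltzmann constant, eV/K
T = np.array([150.0, 200.0, 250.0, 300.0, 350.0])
phi_mean_true, sigma_true = 0.30, 0.060  # eV (cf. abstract values)
phi_ap = phi_mean_true - sigma_true ** 2 / (2 * kB * T)

# linear fit of phi_ap against x = 1/(2*kB*T): slope = -sigma**2
x = 1.0 / (2 * kB * T)
slope, intercept = np.polyfit(x, phi_ap, 1)
print(f"mean barrier = {intercept:.3f} eV, "
      f"sigma = {np.sqrt(-slope) * 1e3:.0f} meV")
```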

  18. Ease fabrication of PCR modular chip for portable DNA detection kit

    NASA Astrophysics Data System (ADS)

    Whulanza, Yudan; Aditya, Rifky; Arvialido, Reyhan; Utomo, Muhammad S.; Bachtiar, Boy M.

    2017-02-01

    Engineering a lab-on-a-chip (LoC) to perform the DNA polymerase chain reaction (PCR) for malaria detection is the ultimate goal of this study. This paper investigates the ability to fabricate an LoC kit with conventional methods, achieving the lowest production cost by using existing fabrication processes. The majority of LoCs are made of polydimethylsiloxane (PDMS), which in this study was realized through a contact-mold process. A CNC milling process was utilized to create channel features in the range of 150-250 µm on the mold. Characterization of the milling process was performed to understand the shrinkage/contraction between mold and product, the roughness, and the contact angle of the PDMS surface. Finally, this paper also includes an analysis of the flow measurement and heat distribution of an assembled LoC PCR kit. The results show that the achieved microchannel dimension is 227 µm wide with a roughness of 0.01 µm. The flow measurement indicates a deviation from simulation in the range of 10%. A heat distribution through the kit is achieved following the three desired temperature zones.

  19. Return Intervals Approach to Financial Fluctuations

    NASA Astrophysics Data System (ADS)

    Wang, Fengzhong; Yamasaki, Kazuko; Havlin, Shlomo; Stanley, H. Eugene

    Financial fluctuations play a key role in financial market studies. A new approach focusing on the properties of return intervals can provide a better understanding of these fluctuations. A return interval is defined as the time between two successive volatilities above a given threshold. We review recent studies and analyze the 1000 most traded stocks in the US stock markets. We find that the distribution of the return intervals is well approximated by a scaling form over a wide range of thresholds. The scaling is also valid for various time windows from one minute up to one trading day. Moreover, these results are universal for stocks of different countries, commodities, interest rates as well as currencies. Further analysis shows some systematic deviations from a scaling law, which are due to the nonlinear correlations in the volatility sequence. We also examine the memory in return intervals for different time scales, which is related to the long-term correlations in the volatility. Furthermore, we test two popular models, FIGARCH and fractional Brownian motion (fBm). Both models can capture the memory effect, but only fBm shows a good scaling in the return interval distribution.
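    A minimal sketch of the basic quantity: return intervals are the gaps between successive threshold exceedances of a volatility series, and the scaling analysis studies their distribution as a function of the scaled interval r/⟨τ⟩. The volatility proxy below is simulated; real studies use normalized absolute returns.

```python
# Sketch: return intervals between threshold exceedances of a volatility
# series. Simulated proxy data only.
import numpy as np

rng = np.random.default_rng(2)
vol = np.abs(rng.standard_normal(100_000))   # toy volatility proxy
q = 2.0                                      # threshold, units of std

exceed = np.flatnonzero(vol > q)             # indices of exceedances
intervals = np.diff(exceed)                  # return intervals
tau = intervals.mean()
print(f"mean return interval <tau> = {tau:.1f}")

# scaling analysis studies P(r) as a function of the scaled interval r/<tau>
hist, edges = np.histogram(intervals / tau, bins=50, density=True)
```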

  20. Chip-scale pattern modification method for equalizing residual layer thickness in nanoimprint lithography

    NASA Astrophysics Data System (ADS)

    Youn, Sung-Won; Suzuki, Kenta; Hiroshima, Hiroshi

    2018-06-01

    A software program for modifying a mold design to obtain a uniform residual layer thickness (RLT) distribution has been developed, and its validity was verified by UV-nanoimprint lithography (UV-NIL) simulation. First, the effects of granularity (G) on both residual layer uniformity and filling characteristics were characterized. For a constant complementary pattern depth and a granularity sufficiently larger than the minimum pattern width, filling time decreased with decreasing granularity. For a pattern design with a wide density range and an irregular distribution, the choice of a small granularity was not always a good strategy, since the etching depth required for a complementary pattern occasionally increased sharply with decreasing granularity. On the basis of the results obtained, the automated method was applied to a chip-scale pattern modification. Simulation results showed a marked improvement in residual layer thickness uniformity for a capacity-equalized (CE) mold. For the given conditions, the standard deviation of RLT decreased to between 1/3 and 1/5 of its original value, depending on the pattern design.

  1. Debris flow rheology: Experimental analysis of fine-grained slurries

    USGS Publications Warehouse

    Major, Jon J.; Pierson, Thomas C.

    1992-01-01

    The rheology of slurries consisting of ≤2-mm sediment from a natural debris flow deposit was measured using a wide-gap concentric-cylinder viscometer. The influence of sediment concentration and size and distribution of grains on the bulk rheological behavior of the slurries was evaluated at concentrations ranging from 0.44 to 0.66. The slurries exhibit diverse rheological behavior. At shear rates above 5 s−1 the behavior approaches that of a Bingham material; below 5 s−1, sand exerts more influence and slurry behavior deviates from the Bingham idealization. Sand grain interactions dominate the mechanical behavior when sand concentration exceeds 0.2; transient fluctuations in measured torque, time-dependent decay of torque, and hysteresis effects are observed. Grain rubbing, interlocking, and collision cause changes in packing density, particle distribution, grain orientation, and formation and destruction of grain clusters, which may explain the observed behavior. Yield strength and plastic viscosity exhibit order-of-magnitude variation when sediment concentration changes as little as 2–4%. Owing to these complexities, it is unlikely that debris flows can be characterized by a single rheological model.

  2. Beam uniformity of flat top lasers

    NASA Astrophysics Data System (ADS)

    Chang, Chao; Cramer, Larry; Danielson, Don; Norby, James

    2015-03-01

    Many beams output by standard commercial lasers are multi-mode, with each mode having a different shape and width. They show an overall non-homogeneous energy distribution across the spot size. There may be satellite structures, halos and other deviations from beam uniformity. However, many scientific, industrial and medical applications require a flat-top spatial energy distribution, high uniformity in the plateau region, and the complete absence of hot spots. Reliable standard methods for the evaluation of beam quality are of great importance. Standard methods are required for correct characterization of the laser for its intended application and for tight quality control in laser manufacturing. The International Organization for Standardization (ISO) has published standard procedures and definitions for this purpose. These procedures have not been widely adopted by commercial laser manufacturers, largely because they are unreliable: an unrepresentative single-pixel value can seriously distort the result. We hereby propose a metric of beam uniformity, a way of visualizing beam profiles, procedures to automatically detect hot spots and beam structures, and application examples from our high-energy laser production.

  3. Robust Alternatives to the Standard Deviation in Processing of Physics Experimental Data

    NASA Astrophysics Data System (ADS)

    Shulenin, V. P.

    2016-10-01

    Properties of robust estimators of the scale parameter are studied. It is noted that the median of absolute deviations and the modified estimate of the average Gini difference have asymptotically normal distributions and bounded influence functions, are B-robust estimators, and hence, unlike the estimate of the standard deviation, are protected from the presence of outliers in the sample. Results of a comparison of scale-parameter estimators are given for a Gaussian model with contamination. An adaptive variant of the modified estimate of the average Gini difference is considered.
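    The two robust alternatives named above are easy to sketch: the MAD-based estimate (scaled by 1.4826 for consistency at the Gaussian) and a Gini-mean-difference-based estimate (scaled by √π/2), compared with the sample standard deviation on a contaminated Gaussian sample. The contamination fraction is arbitrary.

```python
# Sketch: robust scale estimates vs the standard deviation under
# contamination. Illustrative data only.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, 1000)
x[:20] = rng.normal(0.0, 15.0, 20)     # 2% outlier contamination

sd = x.std(ddof=1)
mad = 1.4826 * np.median(np.abs(x - np.median(x)))  # consistent at N(0,1)

# Gini mean difference: mean absolute difference over all ordered pairs
diff = np.abs(x[:, None] - x[None, :])
gini = diff.sum() / (len(x) * (len(x) - 1))
gini_scale = gini * np.sqrt(np.pi) / 2.0            # consistent at N(0,1)

print(f"SD = {sd:.2f}, MAD-based = {mad:.2f}, Gini-based = {gini_scale:.2f}")
```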

  4. Finite-key analysis for measurement-device-independent quantum key distribution.

    PubMed

    Curty, Marcos; Xu, Feihu; Cui, Wei; Lim, Charles Ci Wen; Tamaki, Kiyoshi; Lo, Hoi-Kwong

    2014-04-29

    Quantum key distribution promises unconditionally secure communications. However, as practical devices tend to deviate from their specifications, the security of some practical systems is no longer valid. In particular, an adversary can exploit imperfect detectors to learn a large part of the secret key, even though the security proof claims otherwise. Recently, a practical approach--measurement-device-independent quantum key distribution--has been proposed to solve this problem. However, so far its security has only been fully proven under the assumption that the legitimate users of the system have unlimited resources. Here we fill this gap and provide a rigorous security proof against general attacks in the finite-key regime. This is obtained by applying large deviation theory, specifically the Chernoff bound, to perform parameter estimation. For the first time we demonstrate the feasibility of long-distance implementations of measurement-device-independent quantum key distribution within a reasonable time frame of signal transmission.
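    As a sketch of the kind of large-deviation tool mentioned, the multiplicative Chernoff bound limits how far an observed count can exceed its expectation; the numbers below are illustrative and are not the protocol's parameters.

```python
# Sketch: multiplicative Chernoff bound vs the exact binomial tail.
import numpy as np
from scipy.stats import binom

n, p = 10_000, 0.05          # signals, expected detection probability
mu = n * p
delta = 0.2                  # relative deviation of the observed count

# Chernoff: P(X >= (1+delta)*mu) <= exp(-delta^2 * mu / (2 + delta))
chernoff = np.exp(-delta ** 2 * mu / (2 + delta))
exact = binom.sf((1 + delta) * mu - 1, n, p)   # P(X >= (1+delta)*mu)
print(f"Chernoff bound = {chernoff:.2e}, exact tail = {exact:.2e}")
```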

  5. Strain measurement in a concrete beam by use of the Brillouin-scattering-based distributed fiber sensor with single-mode fibers embedded in glass fiber reinforced polymer rods and bonded to steel reinforcing bars.

    PubMed

    Zeng, Xiaodong; Bao, Xiaoyi; Chhoa, Chia Yee; Bremner, Theodore W; Brown, Anthony W; DeMerchant, Michael D; Ferrier, Graham; Kalamkarov, Alexander L; Georgiades, Anastasis V

    2002-08-20

    The strain measurement of a 1.65-m reinforced concrete beam by use of a distributed fiber strain sensor with a 50-cm spatial resolution and 5-cm readout resolution is reported. The strain-measurement accuracy is +/-15 microepsilon (microm/m) according to the system calibration in the laboratory environment with non-uniformly distributed strain, and +/-5 microepsilon with a uniform strain distribution. The strain distribution has been measured for one-point and two-point loading patterns for optical fibers embedded in pultruded glass fiber reinforced polymer (GFRP) rods and those bonded to steel reinforcing bars. In the one-point loading case, the strain deviations are +/-7 and +/-15 microepsilon for fibers embedded in the GFRP rods and fibers bonded to steel reinforcing bars, respectively, whereas the strain deviation is +/-20 microepsilon for the two-point loading case.

  6. Distribution, microfabric, and geochemical characteristics of siliceous rocks in central orogenic belt, China: implications for a hydrothermal sedimentation model.

    PubMed

    Li, Hongzhong; Zhai, Mingguo; Zhang, Lianchang; Gao, Le; Yang, Zhijun; Zhou, Yongzhang; He, Junguo; Liang, Jin; Zhou, Liuyu; Voudouris, Panagiotis Ch

    2014-01-01

    Marine siliceous rocks are widely distributed in the central orogenic belt (COB) of China and have a close connection to the geological evolution and metallogenesis. They display periodic distributions from Mesoproterozoic to Jurassic with positive peaks in the Mesoproterozoic, Cambrian--Ordovician, and Carboniferous--Permian and their deposition is enhanced by the tensional geological settings. The compressional regimes during the Jinning, Caledonian, Hercynian, Indosinian, and Yanshanian orogenies resulted in sudden descent in their distribution. The siliceous rocks of the Bafangshan-Erlihe ore deposit include authigenic quartz, syn-depositional metal sulphides, and scattered carbonate minerals. Their SiO2 content (71.08-95.30%), Ba (42.45-503.0 ppm), and ΣREE (3.28-19.75 ppm) suggest a hydrothermal sedimentation origin. As evidenced by the Al/(Al + Fe + Mn), Sc/Th, (La/Yb) N, and (La/Ce) N ratios and δCe values, the studied siliceous rocks were deposited in a marginal sea basin of a limited ocean. We suggest that the Bafangshan-Erlihe area experienced high- and low-temperature stages of hydrothermal activities. The hydrothermal sediments of the former stage include metal sulphides and silica, while the latter was mainly composed of silica. Despite the hydrothermal sedimentation of the siliceous rocks, minor terrigenous input, magmatism, and biological activity partly contributed to geochemical features deviating from the typical hydrothermal characteristics.

  7. Distribution, Microfabric, and Geochemical Characteristics of Siliceous Rocks in Central Orogenic Belt, China: Implications for a Hydrothermal Sedimentation Model

    PubMed Central

    Li, Hongzhong; Zhai, Mingguo; Zhang, Lianchang; Gao, Le; Yang, Zhijun; Zhou, Yongzhang; He, Junguo; Liang, Jin; Zhou, Liuyu; Voudouris, Panagiotis Ch.

    2014-01-01

    Marine siliceous rocks are widely distributed in the central orogenic belt (COB) of China and have a close connection to the geological evolution and metallogenesis. They display periodic distributions from Mesoproterozoic to Jurassic with positive peaks in the Mesoproterozoic, Cambrian—Ordovician, and Carboniferous—Permian and their deposition is enhanced by the tensional geological settings. The compressional regimes during the Jinning, Caledonian, Hercynian, Indosinian, and Yanshanian orogenies resulted in sudden descent in their distribution. The siliceous rocks of the Bafangshan-Erlihe ore deposit include authigenic quartz, syn-depositional metal sulphides, and scattered carbonate minerals. Their SiO2 content (71.08–95.30%), Ba (42.45–503.0 ppm), and ΣREE (3.28–19.75 ppm) suggest a hydrothermal sedimentation origin. As evidenced by the Al/(Al + Fe + Mn), Sc/Th, (La/Yb)N, and (La/Ce)N ratios and δCe values, the studied siliceous rocks were deposited in a marginal sea basin of a limited ocean. We suggest that the Bafangshan-Erlihe area experienced high- and low-temperature stages of hydrothermal activities. The hydrothermal sediments of the former stage include metal sulphides and silica, while the latter was mainly composed of silica. Despite the hydrothermal sedimentation of the siliceous rocks, minor terrigenous input, magmatism, and biological activity partly contributed to geochemical features deviating from the typical hydrothermal characteristics. PMID:25140349

  8. Mass measurement errors of Fourier-transform mass spectrometry (FTMS): distribution, recalibration, and application.

    PubMed

    Zhang, Jiyang; Ma, Jie; Dou, Lei; Wu, Songfeng; Qian, Xiaohong; Xie, Hongwei; Zhu, Yunping; He, Fuchu

    2009-02-01

    The hybrid linear trap quadrupole Fourier-transform (LTQ-FT) ion cyclotron resonance mass spectrometer, an instrument with high accuracy and resolution, is widely used in the identification and quantification of peptides and proteins. However, time-dependent errors in the system may lead to deterioration of the accuracy of these instruments, negatively influencing the determination of the mass error tolerance (MET) in database searches. Here, a comprehensive discussion of LTQ-FT precursor ion mass error is provided. On the basis of an investigation of the mass error distribution, we propose an improved recalibration formula and introduce a new tool, FTDR (Fourier-transform data recalibration), that employs a graphical user interface (GUI) for automatic calibration. It was found that the calibration could adjust the mass error distribution to more closely approximate a normal distribution and reduce the standard deviation (SD). Consequently, we present a new strategy, LDSF (Large MET database search and small MET filtration), for database-search MET specification and validation of database search results. As the name implies, a large-MET database search is conducted and the search results are then filtered using the statistical MET estimated from high-confidence results. By applying this strategy to a standard protein data set and a complex data set, we demonstrate that LDSF can significantly improve the sensitivity of the result-validation procedure.
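    A sketch of the LDSF idea under stated assumptions: search with a large MET, estimate the error distribution from a high-confidence subset, then filter everything with the small, statistically estimated MET. The ppm errors and confidence scores below are mock data, not FTDR output.

```python
# Sketch: large-MET search followed by small-MET filtration (LDSF-style).
import numpy as np

rng = np.random.default_rng(4)
ppm_all  = np.concatenate([rng.normal(1.5, 2.0, 900),    # true matches
                           rng.uniform(-20, 20, 300)])   # random matches
conf_all = np.concatenate([rng.uniform(0.8, 1.0, 900),
                           rng.uniform(0.0, 0.6, 300)])  # mock confidence

high = ppm_all[conf_all > 0.9]             # high-confidence subset
mu, sd = high.mean(), high.std(ddof=1)     # estimated error distribution

k = 3.0                                    # small MET = mu +/- k*sd
kept = np.abs(ppm_all - mu) < k * sd
print(f"small MET = {mu:.2f} +/- {k * sd:.2f} ppm; "
      f"kept {kept.sum()}/{len(ppm_all)}")
```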

  9. A Distribution-class Locational Marginal Price (DLMP) Index for Enhanced Distribution Systems

    NASA Astrophysics Data System (ADS)

    Akinbode, Oluwaseyi Wemimo

    The smart grid initiative is the impetus behind changes that are expected to culminate into an enhanced distribution system with the communication and control infrastructure to support advanced distribution system applications and resources such as distributed generation, energy storage systems, and price responsive loads. This research proposes a distribution-class analog of the transmission LMP (DLMP) as an enabler of the advanced applications of the enhanced distribution system. The DLMP is envisioned as a control signal that can incentivize distribution system resources to behave optimally in a manner that benefits economic efficiency and system reliability and that can optimally couple the transmission and the distribution systems. The DLMP is calculated from a two-stage optimization problem; a transmission system OPF and a distribution system OPF. An iterative framework that ensures accurate representation of the distribution system's price sensitive resources for the transmission system problem and vice versa is developed and its convergence problem is discussed. As part of the DLMP calculation framework, a DCOPF formulation that endogenously captures the effect of real power losses is discussed. The formulation uses piecewise linear functions to approximate losses. This thesis explores, with theoretical proofs, the breakdown of the loss approximation technique when non-positive DLMPs/LMPs occur and discusses a mixed integer linear programming formulation that corrects the breakdown. The DLMP is numerically illustrated in traditional and enhanced distribution systems and its superiority to contemporary pricing mechanisms is demonstrated using price responsive loads. Results show that the impact of the inaccuracy of contemporary pricing schemes becomes significant as flexible resources increase. At high elasticity, aggregate load consumption deviated from the optimal consumption by up to about 45 percent when using a flat or time-of-use rate. Individual load consumption deviated by up to 25 percent when using a real-time price. The superiority of the DLMP is more pronounced when important distribution network conditions are not reflected by contemporary prices. The individual load consumption incentivized by the real-time price deviated by up to 90 percent from the optimal consumption in a congested distribution network. While the DLMP internalizes congestion management, the consumption incentivized by the real-time price caused overloads.

  10. Mass balance, meteorology, area altitude distribution, glacier-surface altitude, ice motion, terminus position, and runoff at Gulkana Glacier, Alaska, 1996 balance year

    USGS Publications Warehouse

    March, Rod S.

    2003-01-01

    The 1996 measured winter snow, maximum winter snow, net, and annual balances in the Gulkana Glacier Basin were evaluated on the basis of meteorological, hydrological, and glaciological data. Averaged over the glacier, the measured winter snow balance was 0.87 meter on April 18, 1996, 1.1 standard deviations below the long-term average; the maximum winter snow balance, 1.06 meters, was reached on May 28, 1996; and the net balance (from August 30, 1995, to August 24, 1996) was -0.53 meter, 0.53 standard deviations below the long-term average. The annual balance (October 1, 1995, to September 30, 1996) was -0.37 meter. Area-averaged balances were reported using both the 1967 and 1993 area altitude distributions (the numbers previously given in this abstract use the 1993 area altitude distribution). Net balance was about 25 percent less negative using the 1993 area altitude distribution than the 1967 distribution. Annual average air temperature was 0.9 degree Celsius warmer than that recorded with the analog sensor used since 1966. Total precipitation catch for the year was 0.78 meter, 0.8 standard deviations below normal. The annual average wind speed was 3.5 meters per second in the first year of measuring wind speed. Annual runoff averaged 1.50 meters over the basin, 1.0 standard deviation below the long-term average. Glacier-surface altitude and ice-motion changes measured at three index sites document seasonal ice-speed and glacier-thickness changes. Both showed a continuation of a slowing and thinning trend present in the 1990s. The glacier terminus and lower ablation area were mapped for 1996 with a handheld Global Positioning System survey of 126 locations spread over about 4 kilometers along the lower glacier margin. From 1949 to 1996, the terminus retreated about 1,650 meters, for an average retreat rate of 35 meters per year.

  11. Propagation of rotational Risley-prism-array-based Gaussian beams in turbulent atmosphere

    NASA Astrophysics Data System (ADS)

    Chen, Feng; Ma, Haotong; Dong, Li; Ren, Ge; Qi, Bo; Tan, Yufeng

    2018-03-01

    Limited by the size and weight of prisms and optical assembly, the rotational Risley-prism-array system is a simple but effective way to realize high-power, high-beam-quality deflected laser output. In this paper, the propagation of rotational Risley-prism-array-based Gaussian beam arrays in atmospheric turbulence is studied in detail. An analytical expression for the average intensity distribution at the receiving plane is derived based on a nonparaxial ray-tracing method and the extended Huygens-Fresnel principle. Power in the diffraction-limited bucket is chosen to evaluate beam quality. The effects of deviation angle, propagation distance, and turbulence intensity on beam quality are studied in detail by quantitative simulation. It is revealed that, as the propagation distance increases, the intensity distribution gradually evolves from a multiple-petal-like shape into a pattern containing one main lobe in the center with multiple side lobes in weak turbulence. The beam quality of rotational Risley-prism-array-based Gaussian beam arrays with lower deviation angles is better than that of their counterparts with higher deviation angles when propagating in weak and medium turbulence (i.e., Cn² < 10⁻¹³ m⁻²/³), and the beam quality of higher-deviation-angle arrays degrades faster as the turbulence becomes stronger. In the case of propagation in strong turbulence, long propagation distance (i.e., z > 10 km) and deviation angle have no influence on beam quality.

  12. Analysis of measurement deviations for the patient-specific quality assurance using intensity-modulated spot-scanning particle beams

    NASA Astrophysics Data System (ADS)

    Li, Yongqiang; Hsi, Wen C.

    2017-04-01

    To analyze measurement deviations in patient-specific quality assurance (QA) for intensity-modulated spot-scanning particle beams, a commercial radiation dosimeter using 24 pinpoint ionization chambers was utilized. Before the clinical trial, validations of the radiation dosimeter and treatment planning system were conducted. During the clinical trial, 165 measurements were performed on 36 enrolled patients. Two or three fields of particle beam were used for each patient. Measurements were typically performed with the dosimeter placed at selected regions of the dose distribution along depth and lateral profiles. In order to investigate the dosimeter accuracy, repeated measurements with uniform dose irradiations were also carried out. A two-step approach was proposed to analyze the 24 sampling points over a 3D treatment volume. The mean value and the standard deviation of each measurement did not exceed 5% for all measurements performed on patients with various diseases. According to the defined intervention thresholds on the mean deviation and the distance-to-agreement concept with a Gamma index analysis using criteria of 3.0% and 2 mm, a decision could be made regarding whether the dose distribution was acceptable for the patient. Based on the measurement results, a deviation analysis was carried out. In this study, the dosimeter was used for dose verification and provided a safety guard to assure precise dose delivery of highly modulated particle therapy. Patient-specific QA will be investigated further in future clinical operations.

  13. [Quantitative study of diesel/CNG buses exhaust particulate size distribution in a road tunnel].

    PubMed

    Zhu, Chun; Zhang, Xu

    2010-10-01

    Vehicle emission is one of the main sources of fine/ultra-fine particles in many cities. This study presents daily mean particle size distributions of a mixed diesel/CNG bus traffic flow, obtained from 4 days of consecutive real-world measurements in an Australian road tunnel. Emission factors (EFs) for the particle size distributions of diesel buses and CNG buses are obtained by MLR methods; the particle distributions of diesel buses and CNG buses are observed as single accumulation mode and nuclei mode, respectively. Particle size distributions of the mixed traffic flow are decomposed into two log-normal fitting curves for each 30-min-interval mean scan; the degrees of fitting between the combined fitting curves and the corresponding in-situ scans, for a total of 90 fitted scans, range from 0.972 to 0.998. Finally, the particle size distributions of diesel buses and CNG buses are quantified by statistical whisker-box charts. For the log-normal particle size distribution of diesel buses, accumulation-mode diameters are 74.5-86.5 nm and geometric standard deviations are 1.88-2.05. For the log-normal particle size distribution of CNG buses, nuclei-mode diameters are 19.9-22.9 nm and geometric standard deviations are 1.27-1.30.

  14. Inference and analysis of xenon outflow curves under multi-pulse injection in two-dimensional chromatography.

    PubMed

    Shu-Jiang, Liu; Zhan-Ying, Chen; Yin-Zhong, Chang; Shi-Lian, Wang; Qi, Li; Yuan-Qing, Fan

    2013-10-11

    Multidimensional gas chromatography is widely applied to atmospheric xenon monitoring for the Comprehensive Nuclear-Test-Ban Treaty (CTBT). To improve the capability for xenon sampling from the atmosphere, sampling techniques have been investigated in detail. The sampling techniques are designed from xenon outflow curves, which are influenced by many factors; the injection condition is one of the key factors that can influence the xenon outflow curves. In this paper, the xenon outflow curve for single-pulse injection in two-dimensional gas chromatography has been measured and fitted as a function of the exponentially modified Gaussian distribution. An inference formula for the xenon outflow curve under six-pulse injection is derived, and the inference formula is tested against the fitting formula of the measured xenon outflow curve. As a result, the curves of both the one-pulse and six-pulse injections obey the exponentially modified Gaussian distribution when the activated carbon column's temperature is 26°C and the flow rate of the carrier gas is 35.6 mL min⁻¹. The retention time of the xenon peak for one-pulse injection is 215 min, and the peak width is 138 min. For the six-pulse injection, however, the retention time is delayed to 255 min, and the peak width broadens to 222 min. According to the inferred formula of the xenon outflow curve for six-pulse injection, the inferred retention time is 243 min, with a relative deviation of 4.7%, and the inferred peak width is 225 min, with a relative deviation of 1.3%. Copyright © 2013 Elsevier B.V. All rights reserved.
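    The exponentially modified Gaussian (EMG) description lends itself to a short sketch: scipy's exponnorm is the EMG with shape K = τ/σ, so an outflow curve can be fitted by non-linear regression. The curve below is simulated, not the measured xenon data.

```python
# Sketch: fitting an exponentially modified Gaussian to an outflow curve.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import exponnorm

def emg(t, area, K, mu, sigma):
    # exponentially modified Gaussian; exponnorm shape K = tau / sigma
    return area * exponnorm.pdf(t, K, loc=mu, scale=sigma)

t = np.linspace(0, 600, 601)                     # time axis, minutes
y = emg(t, 100.0, 2.0, 200.0, 30.0)              # simulated outflow curve
y += np.random.default_rng(5).normal(0, 0.005, t.size)

popt, _ = curve_fit(emg, t, y, p0=(80.0, 1.0, 180.0, 20.0),
                    bounds=(0, np.inf))
area, K, mu, sigma = popt
print(f"EMG mean (mu + K*sigma) = {mu + K * sigma:.0f} min")
```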

  15. Adjusting head circumference for covariates in autism: clinical correlates of a highly heritable continuous trait.

    PubMed

    Chaste, Pauline; Klei, Lambertus; Sanders, Stephan J; Murtha, Michael T; Hus, Vanessa; Lowe, Jennifer K; Willsey, A Jeremy; Moreno-De-Luca, Daniel; Yu, Timothy W; Fombonne, Eric; Geschwind, Daniel; Grice, Dorothy E; Ledbetter, David H; Lord, Catherine; Mane, Shrikant M; Lese Martin, Christa; Martin, Donna M; Morrow, Eric M; Walsh, Christopher A; Sutcliffe, James S; State, Matthew W; Devlin, Bernie; Cook, Edwin H; Kim, Soo-Jeong

    2013-10-15

    Brain development follows a different trajectory in children with autism spectrum disorders (ASD) than in typically developing children. A proxy for neurodevelopment could be head circumference (HC), but studies assessing HC and its clinical correlates in ASD have been inconsistent. This study investigates HC and clinical correlates in the Simons Simplex Collection cohort. We used a mixed linear model to estimate effects of covariates and the deviation from the expected HC given parental HC (genetic deviation). After excluding individuals with incomplete data, 7225 individuals in 1891 families remained for analysis. We examined the relationship between HC/genetic deviation of HC and clinical parameters. Gender, age, height, weight, genetic ancestry, and ASD status were significant predictors of HC (estimate of the ASD effect = 0.2 cm). HC was approximately normally distributed in probands and unaffected relatives, with only a few outliers. Genetic deviation of HC was also normally distributed, consistent with a random sampling of parental genes. Whereas larger HC than expected was associated with ASD symptom severity and regression, IQ decreased with the absolute value of the genetic deviation of HC. Measured against expected values derived from covariates of ASD subjects, statistical outliers for HC were uncommon. HC is a strongly heritable trait, and population norms for HC would be far more accurate if covariates including genetic ancestry, height, and age were taken into account. The association of diminishing IQ with absolute deviation from predicted HC values suggests HC could reflect subtle underlying brain development and warrants further investigation. © 2013 Society of Biological Psychiatry.

  16. Adjusting head circumference for covariates in autism: clinical correlates of a highly heritable continuous trait

    PubMed Central

    Chaste, Pauline; Klei, Lambertus; Sanders, Stephan J.; Murtha, Michael T.; Hus, Vanessa; Lowe, Jennifer K.; Willsey, A. Jeremy; Moreno-De-Luca, Daniel; Yu, Timothy W.; Fombonne, Eric; Geschwind, Daniel; Grice, Dorothy E.; Ledbetter, David H.; Lord, Catherine; Mane, Shrikant M.; Martin, Christa Lese; Martin, Donna M.; Morrow, Eric M.; Walsh, Christopher A.; Sutcliffe, James S.; State, Matthew W.; Devlin, Bernie; Cook, Edwin H.; Kim, Soo-Jeong

    2013-01-01

    BACKGROUND Brain development follows a different trajectory in children with Autism Spectrum Disorders (ASD) than in typically developing children. A proxy for neurodevelopment could be head circumference (HC), but studies assessing HC and its clinical correlates in ASD have been inconsistent. This study investigates HC and clinical correlates in the Simons Simplex Collection cohort. METHODS We used a mixed linear model to estimate effects of covariates and the deviation from the expected HC given parental HC (genetic deviation). After excluding individuals with incomplete data, 7225 individuals in 1891 families remained for analysis. We examined the relationship between HC/genetic deviation of HC and clinical parameters. RESULTS Gender, age, height, weight, genetic ancestry and ASD status were significant predictors of HC (estimate of the ASD effect=0.2cm). HC was approximately normally distributed in probands and unaffected relatives, with only a few outliers. Genetic deviation of HC was also normally distributed, consistent with a random sampling of parental genes. Whereas larger HC than expected was associated with ASD symptom severity and regression, IQ decreased with the absolute value of the genetic deviation of HC. CONCLUSIONS Measured against expected values derived from covariates of ASD subjects, statistical outliers for HC were uncommon. HC is a strongly heritable trait and population norms for HC would be far more accurate if covariates including genetic ancestry, height and age were taken into account. The association of diminishing IQ with absolute deviation from predicted HC values suggests HC could reflect subtle underlying brain development and warrants further investigation. PMID:23746936

  17. Expected distributions of root-mean-square positional deviations in proteins.

    PubMed

    Pitera, Jed W

    2014-06-19

    The atom positional root-mean-square deviation (RMSD) is a standard tool for comparing the similarity of two molecular structures. It is used to characterize the quality of biomolecular simulations, to cluster conformations, and as a reaction coordinate for conformational changes. This work presents an approximate analytic form for the expected distribution of RMSD values for a protein or polymer fluctuating about a stable native structure. The mean and maximum of the expected distribution are independent of chain length for long chains and linearly proportional to the average atom positional root-mean-square fluctuations (RMSF). To approximate the RMSD distribution for random-coil or unfolded ensembles, numerical distributions of RMSD were generated for ensembles of self-avoiding and non-self-avoiding random walks. In both cases, for all reference structures tested for chains more than three monomers long, the distributions have a maximum distant from the origin with a power-law dependence on chain length. The purely entropic nature of this result implies that care must be taken when interpreting stable high-RMSD regions of the free-energy landscape as "intermediates" or well-defined stable states.
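    A numerical sketch of the random-walk ensembles discussed above: the RMSD of centred non-self-avoiding random walks against a fixed reference peaks far from the origin. For brevity this skips the optimal rotational superposition (a Kabsch step) that a full treatment would include.

```python
# Sketch: RMSD distribution for an ensemble of random walks vs a fixed
# reference chain. Illustrative only; no rotational alignment performed.
import numpy as np

rng = np.random.default_rng(8)
n_mono, n_chains = 50, 2000
# non-self-avoiding random walks: cumulative sums of Gaussian steps
walks = np.cumsum(rng.standard_normal((n_chains, n_mono, 3)), axis=1)
ref = np.cumsum(rng.standard_normal((n_mono, 3)), axis=0)

walks -= walks.mean(axis=1, keepdims=True)   # centre each chain
ref -= ref.mean(axis=0)                      # centre the reference

rmsd = np.sqrt(((walks - ref) ** 2).sum(axis=2).mean(axis=1))
print(f"median RMSD = {np.median(rmsd):.1f}; the distribution peaks far from 0")
```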

  18. Long-term changes (1980-2003) in total ozone time series over Northern Hemisphere midlatitudes

    NASA Astrophysics Data System (ADS)

    Białek, Małgorzata

    2006-03-01

    Long-term changes in total ozone time series for the Arosa, Belsk, Boulder and Sapporo stations are examined. For each station we analyze time series of the following statistical characteristics of the distribution of daily ozone data: the seasonal mean, standard deviation, maximum and minimum of total daily ozone values for all seasons. An iterative statistical model is proposed to estimate trends and long-term changes in the statistical distribution of the daily total ozone data. The trends are calculated for the period 1980-2003. We observe a lessening of the negative trends in the seasonal means as compared to those calculated by WMO for 1980-2000. We discuss the possibility of a change in the distribution shape of daily ozone data, using the Kolmogorov-Smirnov test and comparing trend values in the seasonal mean, standard deviation, maximum and minimum time series for the selected stations and seasons. A shift of the distribution toward lower values without a change in the distribution shape is suggested, with the following exceptions: a spreading of the distribution toward lower values for Belsk during winter, and no decisive result for Sapporo and Boulder in summer.
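    A minimal sketch of the shape test described above: centre two samples of daily values so the two-sample Kolmogorov-Smirnov test responds to shape rather than to the mean shift. The daily ozone values are simulated, not the station records.

```python
# Sketch: KS test for a change in distribution shape between two periods.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(6)
ozone_a = rng.normal(340.0, 25.0, 900)   # DU, simulated daily values
ozone_b = rng.normal(325.0, 25.0, 900)   # shifted mean, same shape

# centre both samples so the test responds to shape, not to the shift
stat, p = ks_2samp(ozone_a - ozone_a.mean(), ozone_b - ozone_b.mean())
print(f"KS statistic = {stat:.3f}, p = {p:.3f} (large p: no shape change)")
```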

  19. Fluid-driven fracture propagation in heterogeneous media: Probability distributions of fracture trajectories

    NASA Astrophysics Data System (ADS)

    Santillán, David; Mosquera, Juan-Carlos; Cueto-Felgueroso, Luis

    2017-11-01

    Hydraulic fracture trajectories in rocks and other materials are highly affected by spatial heterogeneity in their mechanical properties. Understanding the complexity and structure of fluid-driven fractures and their deviation from the predictions of homogenized theories is a practical problem in engineering and geoscience. We conduct a Monte Carlo simulation study to characterize the influence of heterogeneous mechanical properties on the trajectories of hydraulic fractures propagating in elastic media. We generate a large number of random fields of mechanical properties and simulate pressure-driven fracture propagation using a phase-field model. We model the mechanical response of the material as that of an elastic isotropic material with heterogeneous Young modulus and Griffith energy release rate, assuming that fractures propagate in the toughness-dominated regime. Our study shows that the variance and the spatial covariance of the mechanical properties are controlling factors in the tortuousness of the fracture paths. We characterize the deviation of fracture paths from the homogeneous case statistically, and conclude that the maximum deviation grows linearly with the distance from the injection point. Additionally, fracture path deviations seem to be normally distributed, suggesting that fracture propagation in the toughness-dominated regime may be described as a random walk.

  1. Acoustic response variability in automotive vehicles

    NASA Astrophysics Data System (ADS)

    Hills, E.; Mace, B. R.; Ferguson, N. S.

    2009-03-01

    A statistical analysis of a series of measurements of the audio-frequency response of a large set of automotive vehicles is presented: a small hatchback model with both a three-door (411 vehicles) and five-door (403 vehicles) derivative and a mid-sized family five-door car (316 vehicles). The sets included vehicles of various specifications, engines, gearboxes, interior trim, wheels and tyres. The tests were performed in a hemianechoic chamber with the temperature and humidity recorded. Two tests were performed on each vehicle and the interior cabin noise measured. In the first, the excitation was acoustically induced by sets of external loudspeakers. In the second test, predominantly structure-borne noise was induced by running the vehicle at a steady speed on a rough roller. For both types of excitation, it is seen that the effects of temperature are small, indicating that manufacturing variability is larger than that due to temperature for the tests conducted. It is also observed that there are no significant outlying vehicles, i.e. there are at most only a few vehicles that consistently have the lowest or highest noise levels over the whole spectrum. For the acoustically excited tests, measured 1/3-octave noise reduction levels typically have a spread of 5 dB or so and the normalised standard deviation of the linear data is typically 0.1 or higher. Regarding the statistical distribution of the linear data, a lognormal distribution is a somewhat better fit than a Gaussian distribution for lower 1/3-octave bands, while the reverse is true at higher frequencies. For the distribution of the overall linear levels, a Gaussian distribution is generally the most representative. As a simple description of the response variability, it is sufficient for this series of measurements to assume that the acoustically induced airborne cabin noise is best described by a Gaussian distribution with a normalised standard deviation between 0.09 and 0.145. There is generally considerable variability in the roller-induced noise, with individual 1/3-octave levels varying by typically 15 dB or so and with the normalised standard deviation being in the range 0.2-0.35 or more. These levels are strongly affected by wheel rim and tyre constructions. For vehicles with nominally identical wheel rims and tyres, the normalised standard deviation for 1/3-octave levels in the frequency range 40-600 Hz is 0.2 or so. The distribution of the linear roller-induced noise level in each 1/3-octave frequency band is well described by a lognormal distribution as is the overall level. As a simple description of the response variability, it is sufficient for this series of measurements to assume that the roller-induced road noise is best described by a lognormal distribution with a normalised standard deviation of 0.2 or so, but that this can be significantly affected by the tyre and rim type, especially at lower frequencies.

  2. Characterizations of particle size distribution of the droplets exhaled by sneeze

    PubMed Central

    Han, Z. Y.; Weng, W. G.; Huang, Q. Y.

    2013-01-01

    This work focuses on the size distribution of sneeze droplets exhaled immediately at the mouth. Twenty healthy subjects participated in the experiment, and 44 sneezes were measured using a laser particle size analyser. Two types of distributions are observed: unimodal and bimodal. For each sneeze, the droplets exhaled at different times in the sneeze duration have the same distribution characteristics, with good time stability. The volume-based size distributions of sneeze droplets can be represented by a lognormal distribution function, and the relationship between the distribution parameters and the physiological characteristics of the subjects is studied using linear regression analysis. The geometric mean of the droplet size of all the subjects is 360.1 µm for the unimodal distribution and 74.4 µm for the bimodal distribution, with geometric standard deviations of 1.5 and 1.7, respectively. For the two peaks of the bimodal distribution, the geometric mean (geometric standard deviation) is 386.2 µm (1.8) for peak 1 and 72.0 µm (1.5) for peak 2. The influences of the measurement method, the limitations of the instrument, the evaporation of the droplets, and the differences in biological dynamic mechanisms and characteristics between sneezing and other respiratory activities are also discussed. PMID:24026469
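
    The geometric mean and geometric standard deviation quoted above are just moments of the log-diameters. A minimal sketch follows, ignoring the volume weighting used in the study; the synthetic sample is drawn near the reported unimodal values.

    ```python
    import numpy as np

    def lognormal_summary(diameters):
        """Geometric mean and geometric standard deviation of a size sample.
        For lognormal sizes, ln(d) is normal: GM = exp(mean(ln d)),
        GSD = exp(std(ln d))."""
        log_d = np.log(diameters)
        return np.exp(log_d.mean()), np.exp(log_d.std(ddof=1))

    # Synthetic unimodal sample near the reported values (~360 um, GSD ~1.5).
    rng = np.random.default_rng(3)
    d = rng.lognormal(mean=np.log(360.0), sigma=np.log(1.5), size=10_000)
    gm, gsd = lognormal_summary(d)
    print(f"geometric mean = {gm:.1f} um, GSD = {gsd:.2f}")
    ```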

  3. Multicomponent plasma expansion into vacuum with non-Maxwellian electrons

    NASA Astrophysics Data System (ADS)

    Elkamash, Ibrahem; Kourakis, Ioannis

    2016-10-01

    The expansion of a collisionless plasma into vacuum has been widely studied since the early works of Gurevich et al and Allen and coworkers. It has gained momentum in recent years, in particular in the context of ultraintense laser pulse interaction with a solid target, in an effort to elucidate the generation of high energy ion beams. In most present-day experiments, laser-produced plasmas contain several ion species, owing to increasingly complicated composite targets. Anderson et al have studied the isothermal expansion of a two-ion-species plasma. As in most earlier works, the electrons were assumed to be isothermal throughout the expansion. However, in more realistic situations, the evolution of laser-produced plasmas into vacuum is mainly governed by nonthermal electrons. These electrons are characterized by particle distribution functions with high energy tails, which may deviate significantly from the Maxwellian distribution. In this paper, we present a theoretical model for the expansion of a two-component plasma with nonthermal electrons, modelled by a kappa-type distribution. The superthermal effect on the ion density, velocity and the electric field is investigated. It is shown that energetic electrons have a significant effect on the expansion dynamics of the plasma. This work was supported by CPP/QUB funding. One of us (I.S. Elkamash) acknowledges financial support by an Egyptian Government fellowship.

  4. An Examination of Diameter Density Prediction with k-NN and Airborne Lidar

    DOE PAGES

    Strunk, Jacob L.; Gould, Peter J.; Packalen, Petteri; ...

    2017-11-16

    While lidar-based forest inventory methods have been widely demonstrated, the performance of methods to predict tree diameters with airborne lidar is not well understood. One cause is that the performance metrics typically used in diameter-prediction studies can be difficult to interpret and may not support comparative inferences between sampling designs and study areas. To help with this problem we propose two indices and use them to evaluate a variety of lidar and k nearest neighbor (k-NN) strategies for prediction of tree diameter distributions. The indices are based on the coefficient of determination (R²) and root-mean-square deviation (RMSD). Both indices are highly interpretable, and the RMSD-based index facilitates comparisons with alternative (non-lidar) inventory strategies and with projects in other regions. k-NN diameter distribution prediction strategies were examined using auxiliary lidar for 190 training plots distributed across the 800 km² Savannah River Site in South Carolina, USA. We evaluate the performance of k-NN with respect to distance metrics, number of neighbors, predictor sets, and response sets. k-NN and lidar explained 80% of variability in diameters, and Mahalanobis distance with k = 3 neighbors performed best according to a number of criteria.
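
    One way to reproduce the best-performing configuration above (Mahalanobis distance, k = 3) is scikit-learn's KNeighborsRegressor; the predictor metrics, response, and split below are invented stand-ins, not the study's data.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(4)

    # Invented stand-ins: lidar metrics per plot (X), diameter summary (y).
    n_plots = 190
    X = rng.normal(size=(n_plots, 5))
    y = X @ np.array([4.0, 2.0, 0.0, 1.0, 0.5]) + rng.normal(scale=2.0, size=n_plots)

    # k-NN with Mahalanobis distance needs the inverse covariance of X.
    VI = np.linalg.inv(np.cov(X, rowvar=False))
    knn = KNeighborsRegressor(n_neighbors=3, metric="mahalanobis",
                              metric_params={"VI": VI}, algorithm="brute")
    knn.fit(X[:150], y[:150])

    # RMSD-style fit index on the held-out plots.
    pred = knn.predict(X[150:])
    rmsd = np.sqrt(np.mean((pred - y[150:]) ** 2))
    print(f"hold-out RMSD = {rmsd:.2f}")
    ```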

  5. The Schiff angular bremsstrahlung distribution from composite media

    NASA Astrophysics Data System (ADS)

    Taylor, M. L.; Dalton, B.; Franich, R. D.

    2012-12-01

    The Schiff formula for the angular distribution of bremsstrahlung is widely employed, but calculations involving composite materials (i.e. compounds and mixtures) are often undertaken in a somewhat ad hoc fashion. In this work, we suggest an alternative to power-law estimates of the effective atomic number, utilising Seltzer and Berger's combined approach to generate single-valued effective atomic numbers applicable over a large energy range (worst-case deviation from constancy of about 2% between 10 keV and 1 GeV). Differences from power-law estimates of Z for composites are potentially significant, particularly for low-Z media such as biological or surrogate materials as relevant within the context of medical physics. As an example, soft tissue differs by >70% and cortical bone by >85%, while for high-Z composites such as a tungsten-rhenium alloy the difference is of the order of 1%. Use of the normalised Schiff formula for shape only does not exhibit strong Z dependence; consequently, in such contexts the differences are negligible: the power-law approach overestimates the magnitude by 1.05% in the case of water and underestimates it by <0.1% for the high-Z alloys. The differences in the distribution are most pronounced for small angles and where the bremsstrahlung quanta are low energy.

  7. Active photonic lattices: is greater than blackbody intensity possible?

    DOE PAGES

    Chow, W. W.; Waldmueller, I.

    2006-11-10

    In this paper, the emission from a radiating source embedded in a photonic lattice is investigated. The photonic lattice spectrum was found to deviate from the blackbody distribution, with intracavity emission suppressed at certain frequencies and significantly enhanced at others. For rapid population relaxation, where the photonic lattice and blackbody populations are described by the same thermal distribution, it was found that the enhancement does not result in output intensities exceeding those of the blackbody. However, for slow population relaxation, the photonic lattice population has a greater tendency to deviate from thermal equilibrium, resulting in output intensities exceeding those of the blackbody.

  8. Pulse height response of an optical particle counter to monodisperse aerosols

    NASA Technical Reports Server (NTRS)

    Wilmoth, R. G.; Grice, S. S.; Cuda, V.

    1976-01-01

    The pulse height response of a right-angle scattering optical particle counter has been investigated using monodisperse aerosols of polystyrene latex spheres, di-octyl phthalate and methylene blue. The results confirm previous measurements of the variation of mean pulse height as a function of particle diameter and show good agreement with the relative response predicted by Mie scattering theory. Measured cumulative pulse height distributions were found to fit a log-normal distribution reasonably well, with a minimum geometric standard deviation of about 1.4 for particle diameters greater than about 2 micrometers. The geometric standard deviation was found to increase significantly with decreasing particle diameter.

  9. Statistics of velocity fluctuations of Geldart A particles in a circulating fluidized bed riser

    DOE PAGES

    Vaidheeswaran, Avinash; Shaffer, Franklin; Gopalan, Balaji

    2017-11-21

    Here, the statistics of fluctuating velocity components are studied in the riser of a closed-loop circulating fluidized bed with fluid catalytic cracking catalyst particles. Our analysis shows distinct similarities as well as deviations compared to existing theories and bench-scale experiments. The study confirms anisotropic and non-Maxwellian distribution of fluctuating velocity components. The velocity distribution functions (VDFs) corresponding to transverse fluctuations exhibit symmetry, and follow a stretched-exponential behavior up to three standard deviations. The form of the transverse VDF is largely determined by interparticle interactions. The tails become more overpopulated with an increase in particle loading. The observed deviations from the Gaussian distribution are represented using the leading order term in the Sonine expansion, which is commonly used to approximate the VDFs in kinetic theory for granular flows. The vertical fluctuating VDFs are asymmetric and the skewness shifts as the wall is approached. In comparison to transverse fluctuations, the vertical VDF is determined by the local hydrodynamics. This is an observation of particle velocity fluctuations in a large-scale system and their quantitative comparison with the Maxwell-Boltzmann statistics.
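
    The leading-order Sonine correction mentioned above is commonly written as follows in the granular-gas literature (three dimensions, velocities scaled by the thermal speed); this is the standard textbook form, not necessarily the exact normalization used by the authors, with the coefficient a_2 quantifying the departure from the Maxwellian.

    ```latex
    % Maxwellian weight times the first non-trivial Sonine polynomial S_2.
    \[
      f(c) \;\approx\; \pi^{-3/2} e^{-c^{2}}
        \left[\, 1 + a_2\, S_2\!\left(c^{2}\right) \right],
      \qquad
      S_2(x) \;=\; \tfrac{1}{2}x^{2} - \tfrac{5}{2}x + \tfrac{15}{8}.
    \]
    ```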

  10. Genome-wide scan of 29,141 African Americans finds no evidence of directional selection since admixture.

    PubMed

    Bhatia, Gaurav; Tandon, Arti; Patterson, Nick; Aldrich, Melinda C; Ambrosone, Christine B; Amos, Christopher; Bandera, Elisa V; Berndt, Sonja I; Bernstein, Leslie; Blot, William J; Bock, Cathryn H; Caporaso, Neil; Casey, Graham; Deming, Sandra L; Diver, W Ryan; Gapstur, Susan M; Gillanders, Elizabeth M; Harris, Curtis C; Henderson, Brian E; Ingles, Sue A; Isaacs, William; De Jager, Phillip L; John, Esther M; Kittles, Rick A; Larkin, Emma; McNeill, Lorna H; Millikan, Robert C; Murphy, Adam; Neslund-Dudas, Christine; Nyante, Sarah; Press, Michael F; Rodriguez-Gil, Jorge L; Rybicki, Benjamin A; Schwartz, Ann G; Signorello, Lisa B; Spitz, Margaret; Strom, Sara S; Tucker, Margaret A; Wiencke, John K; Witte, John S; Wu, Xifeng; Yamamura, Yuko; Zanetti, Krista A; Zheng, Wei; Ziegler, Regina G; Chanock, Stephen J; Haiman, Christopher A; Reich, David; Price, Alkes L

    2014-10-02

    The extent of recent selection in admixed populations is currently an unresolved question. We scanned the genomes of 29,141 African Americans and failed to find any genome-wide-significant deviations in local ancestry, indicating no evidence of selection influencing ancestry after admixture. A recent analysis of data from 1,890 African Americans reported that there was evidence of selection in African Americans after their ancestors left Africa, both before and after admixture. Selection after admixture was reported on the basis of deviations in local ancestry, and selection before admixture was reported on the basis of allele-frequency differences between African Americans and African populations. The local-ancestry deviations reported by the previous study did not replicate in our very large sample, and we show that such deviations were expected purely by chance, given the number of hypotheses tested. We further show that the previous study's conclusion of selection in African Americans before admixture is also subject to doubt. This is because the FST statistics they used were inflated and because true signals of unusual allele-frequency differences between African Americans and African populations would be best explained by selection that occurred in Africa prior to migration to the Americas. Copyright © 2014 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
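
    The "expected purely by chance" point is a standard multiple-testing calculation; in the sketch below the number of tests and the threshold are hypothetical, chosen only to show how an apparently extreme single-test deviation becomes unremarkable family-wide.

    ```python
    from scipy import stats

    m = 3000                         # hypothetical number of local-ancestry tests
    z = 4.0                          # a deviation that looks extreme in isolation

    p_single = 2 * stats.norm.sf(z)          # two-sided tail for one test
    p_any = 1 - (1 - p_single) ** m          # chance of at least one such deviation
    print(f"P(|Z| > {z}) for one test: {p_single:.1e}")
    print(f"P(any of {m} tests exceeds {z}): {p_any:.2f}")

    # Bonferroni-style threshold holding the family-wise error rate at 0.05.
    z_gw = stats.norm.isf(0.05 / (2 * m))
    print(f"genome-wide |Z| threshold: {z_gw:.2f}")
    ```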

  11. Development of microsatellite markers in Caryophyllaeus laticeps (Cestoda: Caryophyllidea), monozoic fish tapeworm, using next-generation sequencing approach.

    PubMed

    Králová-Hromadová, Ivica; Minárik, Gabriel; Bazsalovicsová, Eva; Mikulíček, Peter; Oravcová, Alexandra; Pálková, Lenka; Hanzelová, Vladimíra

    2015-02-01

    Caryophyllaeus laticeps (Pallas 1781) (Cestoda: Caryophyllidea) is a monozoic tapeworm of cyprinid fishes with a distribution area that includes Europe, most of Palaearctic Asia and northern Africa. Its broad geographic distribution, wide range of definitive fish hosts, and recently revealed high morphological plasticity, which is not in agreement with molecular findings, make this species an interesting model for population biology studies. Microsatellites (short tandem repeat (STR) markers), the predominant markers for population genetics, were designed for C. laticeps using a next-generation sequencing (NGS) approach. Out of 165 marker candidates, 61 yielded PCR products of the expected size, and in 25 of these the declared repetitive motif was confirmed by Sanger sequencing. After fragment analysis, six loci proved to be polymorphic and were tested for heterozygosity, Hardy-Weinberg equilibrium and the presence of null alleles in 59 individuals from three geographically widely separated populations (Slovakia, Russia and UK). The number of alleles at particular loci and in particular populations ranged from two to five. A significant deficit of heterozygotes and the presence of null alleles were found at one locus in all three populations. Other loci showed deviations from Hardy-Weinberg equilibrium and the presence of null alleles only in some populations. In spite of relatively low polymorphism and the potential presence of null alleles, the newly developed microsatellites may be applied as suitable markers in population genetic studies of C. laticeps.
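
    A minimal sketch of the kind of Hardy-Weinberg check referred to above, for the simplest biallelic case (multi-allelic microsatellite loci are usually handled with exact tests instead); the genotype counts are invented to show the heterozygote deficit that null alleles produce.

    ```python
    import numpy as np
    from scipy import stats

    def hwe_chi2(n_AA, n_Aa, n_aa):
        """Chi-square goodness-of-fit test for Hardy-Weinberg equilibrium
        at a biallelic locus (1 df: 3 classes - 1 - 1 estimated frequency)."""
        n = n_AA + n_Aa + n_aa
        p = (2 * n_AA + n_Aa) / (2 * n)                  # frequency of allele A
        expected = np.array([p**2, 2*p*(1 - p), (1 - p)**2]) * n
        observed = np.array([n_AA, n_Aa, n_aa])
        chi2 = ((observed - expected) ** 2 / expected).sum()
        return chi2, stats.chi2.sf(chi2, df=1)

    # Invented counts with too few heterozygotes, as null alleles would cause.
    chi2, pval = hwe_chi2(n_AA=30, n_Aa=9, n_aa=20)
    print(f"chi2 = {chi2:.2f}, p = {pval:.2g}")
    ```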

  12. Syringe needle-based sampling coupled with liquid-phase extraction for determination of the three-dimensional distribution of l-ascorbic acid in apples.

    PubMed

    Tang, Sheng; Lee, Hian Kee

    2016-05-15

    A novel syringe needle-based sampling approach coupled with liquid-phase extraction (NBS-LPE) was developed and applied to the extraction of l-ascorbic acid (AsA) in apple. In NBS-LPE, only a small amount of apple flesh (ca. 10 mg) was sampled directly using a syringe needle and placed in a glass insert for liquid extraction of AsA by 80 μL of oxalic acid-acetic acid. The extract was then directly analyzed by liquid chromatography. This new procedure is simple, convenient, almost organic-solvent free, and causes far less damage to the fruit. To demonstrate the applicability of NBS-LPE, AsA levels at different sampling points in a single apple were determined to reveal the spatial distribution of the analyte in a three-dimensional model. The results also showed that this method had good sensitivity (limit of detection of 0.0097 mg/100 g; limit of quantification of 0.0323 mg/100 g), acceptable reproducibility (relative standard deviation of 5.01%, n = 6), a wide linear range between 0.05 and 50 mg/100 g, and good linearity (r² = 0.9921). This extraction technique and modeling approach can be used to measure and monitor a wide range of compounds in various parts of different soft-matrix fruits and vegetables, including single specimens. Copyright © 2015 Elsevier Ltd. All rights reserved.
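
    The linearity and detection-limit figures above come from a calibration line. The sketch below uses invented responses and an assumed blank noise sigma_blank, with the common 3.3*sigma/slope and 10*sigma/slope conventions for LOD and LOQ; the authors' exact procedure may differ.

    ```python
    import numpy as np

    # Invented calibration data: concentration (mg/100 g) vs detector response.
    conc = np.array([0.05, 0.5, 5.0, 20.0, 50.0])
    resp = np.array([0.9, 9.8, 101.0, 398.0, 1005.0])

    slope, intercept = np.polyfit(conc, resp, 1)
    pred = slope * conc + intercept
    r2 = 1 - np.sum((resp - pred) ** 2) / np.sum((resp - resp.mean()) ** 2)
    print(f"slope = {slope:.2f}, r^2 = {r2:.4f}")

    sigma_blank = 0.06                     # assumed noise of blank measurements
    print(f"LOD ~ {3.3 * sigma_blank / slope:.4f} mg/100 g")
    print(f"LOQ ~ {10 * sigma_blank / slope:.4f} mg/100 g")
    ```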

  13. Methods for Linking Item Parameters.

    DTIC Science & Technology

    1981-08-01

    within and across data sets; all proportion-correct distributions were quite platykurtic. Biserial item-total correlations had relatively consistent...would produce a distribution of a parameters which had a larger mean and standard deviation, was more positively skewed, and was somewhat more platykurtic

  14. Longitudinal and cross-sectional analyses of visual field progression in participants of the Ocular Hypertension Treatment Study.

    PubMed

    Artes, Paul H; Chauhan, Balwantray C; Keltner, John L; Cello, Kim E; Johnson, Chris A; Anderson, Douglas R; Gordon, Mae O; Kass, Michael A

    2010-12-01

    To assess agreement between longitudinal and cross-sectional analyses for determining visual field progression in data from the Ocular Hypertension Treatment Study. Visual field data from 3088 eyes of 1570 participants (median follow-up, 7 years) were analyzed. Longitudinal analyses were performed using change probability with total and pattern deviation, and cross-sectional analyses were performed using the glaucoma hemifield test, corrected pattern standard deviation, and mean deviation. The rates of mean deviation and general height change were compared to estimate the degree of diffuse loss in emerging glaucoma. Agreement on progression in longitudinal and cross-sectional analyses ranged from 50% to 61% and remained nearly constant across a wide range of criteria. In contrast, agreement on absence of progression ranged from 97.0% to 99.7%, being highest for the stricter criteria. Analyses of pattern deviation were more conservative than analyses of total deviation, with a 3 to 5 times lesser incidence of progression. Most participants developing field loss had both diffuse and focal changes. Despite considerable overall agreement, 40% to 50% of eyes identified as having progressed with either longitudinal or cross-sectional analyses were identified with only one of the analyses. Because diffuse change is part of early glaucomatous damage, pattern deviation analyses may underestimate progression in patients with ocular hypertension.

  15. Distribution of the near-earth objects

    NASA Astrophysics Data System (ADS)

    Emel'Yanenko, V. V.; Naroenkov, S. A.; Shustov, B. M.

    2011-12-01

    This paper analyzes the distribution of the orbits of near-Earth minor bodies from the data on more than 7500 objects. The distribution of large near-Earth objects (NEOs) with absolute magnitudes of H < 18 is generally consistent with the earlier predictions (Bottke et al., 2002; Stuart, 2003), although we have revealed a previously undetected maximum in the distribution of perihelion distances q near q = 0.5 AU. The study of the orbital distribution for the entire sample of all detected objects has found new significant features. In particular, the distribution of perihelion longitudes seriously deviates from a homogeneous pattern; its variations are roughly 40% of its mean value. These deviations cannot be stochastic, which is confirmed by the Kolmogorov-Smirnov test with a more than 0.9999 probability. These features can be explained by the dynamic behavior of the minor bodies related to secular resonances with Jupiter. For the objects with H < 18, the variations in the perihelion longitude distribution are not so apparent. By extrapolating the orbital characteristics of the NEOs with H < 18, we have obtained longitudinal, latitudinal, and radial distributions of potentially hazardous objects in a heliocentric ecliptic coordinate frame. The differences in the orbital distributions of objects of different size appear not to be a consequence of observational selection, but could indicate different sources of the NEOs.

  16. Parametric Blade Study Test Report Rotor Configuration. Number 2

    DTIC Science & Technology

    1988-11-01

    [Extraction residue from the report's list of figures; recoverable items include rotor incidence angle, relative inlet Mach number, loss coefficient, diffusion factor and deviation angle at 100% N; stator incidence angle, deviation angle and loss coefficient at 90% N; and static pressure distribution.]

  17. Using Heteroskedastic Ordered Probit Models to Recover Moments of Continuous Test Score Distributions from Coarsened Data

    ERIC Educational Resources Information Center

    Reardon, Sean F.; Shear, Benjamin R.; Castellano, Katherine E.; Ho, Andrew D.

    2017-01-01

    Test score distributions of schools or demographic groups are often summarized by frequencies of students scoring in a small number of ordered proficiency categories. We show that heteroskedastic ordered probit (HETOP) models can be used to estimate means and standard deviations of multiple groups' test score distributions from such data. Because…

  18. Comparing Simulated and Theoretical Sampling Distributions of the U3 Person-Fit Statistic.

    ERIC Educational Resources Information Center

    Emons, Wilco H. M.; Meijer, Rob R.; Sijtsma, Klaas

    2002-01-01

    Studied whether the theoretical sampling distribution of the U3 person-fit statistic is in agreement with the simulated sampling distribution under different item response theory models and varying item and test characteristics. Simulation results suggest that the use of standard normal deviates for the standardized version of the U3 statistic may…

  19. Determining the best population-level alcohol consumption model and its impact on estimates of alcohol-attributable harms

    PubMed Central

    2012-01-01

    Background: The goals of our study are to determine the most appropriate model for alcohol consumption as an exposure for burden of disease, to analyze the effect of the chosen alcohol consumption distribution on the estimation of the alcohol Population-Attributable Fractions (PAFs), and to characterize the chosen alcohol consumption distribution by exploring whether there is a global relationship within the distribution. Methods: To identify the best model, the Log-Normal, Gamma, and Weibull prevalence distributions were examined using data from 41 surveys from Gender, Alcohol and Culture: An International Study (GENACIS) and from the European Comparative Alcohol Study. To assess the effect of these distributions on the estimated alcohol PAFs, we calculated the alcohol PAF for diabetes, breast cancer, and pancreatitis using the three above-named distributions and using the more traditional approach based on categories. The relationship between the mean and the standard deviation from the Gamma distribution was estimated using data from 851 datasets for 66 countries from GENACIS and from the STEPwise approach to Surveillance from the World Health Organization. Results: The Log-Normal distribution provided a poor fit for the survey data, with the Gamma and Weibull distributions providing better fits. Additionally, our analyses showed that there were no marked differences in the alcohol PAF estimates based on the Gamma or Weibull distributions compared to PAFs based on categorical alcohol consumption estimates. The standard deviation of the alcohol distribution was highly dependent on the mean, with a unit increase in mean consumption associated with an increase in the standard deviation of 1.258 (95% CI: 1.223 to 1.293) (R2 = 0.9207) for women and 1.171 (95% CI: 1.144 to 1.197) (R2 = 0.9474) for men. Conclusions: Although the Gamma and Weibull distributions provided similar results, the Gamma distribution is recommended for modeling alcohol consumption from population surveys due to its fit, flexibility, and the ease with which it can be modified. The results showed that a large degree of the variance of the standard deviation of the alcohol consumption Gamma distribution was explained by the mean alcohol consumption, allowing alcohol consumption to be modeled through a Gamma distribution using only average consumption. PMID:22490226
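
    The closing recommendation, modeling consumption with a Gamma distribution parameterized by the mean alone, translates directly into code. The sketch below uses the regression slopes quoted above as mean-to-SD ratios; the example mean consumption is invented.

    ```python
    from scipy import stats

    def gamma_from_mean(mean_consumption, sex="male"):
        """Gamma consumption model from the mean alone, using the linear
        mean-SD relationships reported above (SD ~ 1.171 x mean for men,
        1.258 x mean for women)."""
        sd = (1.171 if sex == "male" else 1.258) * mean_consumption
        shape = (mean_consumption / sd) ** 2     # k = mean^2 / variance
        scale = sd ** 2 / mean_consumption       # theta = variance / mean
        return stats.gamma(a=shape, scale=scale)

    # Illustrative example: mean consumption of 20 g/day among male drinkers.
    dist = gamma_from_mean(20.0, sex="male")
    print("P(consumption > 60 g/day):", float(dist.sf(60.0)))
    ```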

  20. Maximum Entropy, Word-Frequency, Chinese Characters, and Multiple Meanings

    PubMed Central

    Yan, Xiaoyong; Minnhagen, Petter

    2015-01-01

    The word-frequency distribution of a text written by an author is well accounted for by a maximum entropy distribution, the RGF (random group formation)-prediction. The RGF-distribution is completely determined by the a priori values of the total number of words in the text (M), the number of distinct words (N) and the number of repetitions of the most common word (kmax). It is here shown that this maximum entropy prediction also describes a text written in Chinese characters. In particular it is shown that although the same Chinese text written in words and Chinese characters have quite differently shaped distributions, they are nevertheless both well predicted by their respective three a priori characteristic values. It is pointed out that this is analogous to the change in the shape of the distribution when translating a given text to another language. Another consequence of the RGF-prediction is that taking a part of a long text will change the input parameters (M, N, kmax) and consequently also the shape of the frequency distribution. This is explicitly confirmed for texts written in Chinese characters. Since the RGF-prediction has no system-specific information beyond the three a priori values (M, N, kmax), any specific language characteristic has to be sought in systematic deviations from the RGF-prediction and the measured frequencies. One such systematic deviation is identified and, through a statistical information theoretical argument and an extended RGF-model, it is proposed that this deviation is caused by multiple meanings of Chinese characters. The effect is stronger for Chinese characters than for Chinese words. The relation between Zipf’s law, the Simon-model for texts and the present results are discussed. PMID:25955175
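
    The three a priori values (M, N, kmax) that determine the RGF prediction are cheap to extract from any text; a minimal sketch with naive whitespace tokenization follows (the sample sentence is, of course, invented).

    ```python
    from collections import Counter

    def rgf_inputs(text):
        """Total word count M, distinct word count N, and the count kmax
        of the most common word: the three inputs of the RGF prediction."""
        counts = Counter(text.lower().split())
        M = sum(counts.values())
        N = len(counts)
        kmax = counts.most_common(1)[0][1]
        return M, N, kmax

    sample = ("the quick brown fox jumps over the lazy dog "
              "the fox and the dog ignore the quick brown cat")
    print(rgf_inputs(sample))     # -> (19, 11, 5) for this toy text
    ```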

  1. Computing approximate random Delta v magnitude probability densities. [for spacecraft trajectory correction

    NASA Technical Reports Server (NTRS)

    Chadwick, C.

    1984-01-01

    This paper describes the development and use of an algorithm to compute approximate statistics of the magnitude of a single random trajectory correction maneuver (TCM) Delta v vector. The TCM Delta v vector is modeled as a three-component Cartesian vector, each of whose components is a random variable having a normal (Gaussian) distribution with zero mean and possibly unequal standard deviations. The algorithm uses these standard deviations as input to produce approximations to (1) the mean and standard deviation of the magnitude of Delta v, (2) points of the probability density function of the magnitude of Delta v, and (3) points of the cumulative and inverse cumulative distribution functions of Delta v. The approximations are based on Monte Carlo techniques developed in a previous paper by the author and extended here. The algorithm described is expected to be useful in both pre-flight planning and in-flight analysis of maneuver propellant requirements for space missions.
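
    A brute-force Monte Carlo version of these statistics takes a few lines; the component standard deviations below are illustrative, and this direct sampling stands in for (rather than reproduces) the approximation algorithm developed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Each Cartesian component of the TCM Delta-v is zero-mean Gaussian with
    # its own standard deviation (values in m/s, possibly unequal).
    sigmas = np.array([1.0, 1.0, 0.4])
    samples = rng.standard_normal((200_000, 3)) * sigmas
    magnitude = np.linalg.norm(samples, axis=1)

    print(f"mean |dv| = {magnitude.mean():.3f} m/s")
    print(f"std  |dv| = {magnitude.std():.3f} m/s")
    # Inverse cumulative distribution, e.g. propellant sized for 99% of cases.
    print(f"99th percentile = {np.quantile(magnitude, 0.99):.3f} m/s")
    ```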

  2. Electroosmotic flow analysis of a branched U-turn nanofluidic device.

    PubMed

    Parikesit, Gea O F; Markesteijn, Anton P; Kutchoukov, Vladimir G; Piciu, Oana; Bossche, Andre; Westerweel, Jerry; Garini, Yuval; Young, Ian T

    2005-10-01

    In this paper, we present the analysis of electroosmotic flow in a branched U-turn nanofluidic device, which we developed for detection and sorting of single molecules. The device, where the channel depth is only 150 nm, is designed to optically detect fluorescence from a volume as small as 270 attolitres (al) with a common wide-field fluorescence setup. We use distilled water as the liquid, in which we dilute 110 nm fluorescent beads employed as tracer particles. Quantitative imaging is used to characterize the pathlines and velocity distribution of the electroosmotic flow in the device. Due to the device's complex geometry, the electroosmotic flow cannot be solved analytically. Therefore we use numerical flow simulation to model our device. Our results show that the deviation between measured and simulated data can be explained by the measured Brownian motion of the tracer particles, which was not incorporated in the simulation.

  3. DFT based vibrational spectroscopic investigations and biological activity of toxic material monocrotophos

    NASA Astrophysics Data System (ADS)

    Nimmi, D. E.; Sam, S. P. Chandhini; Praveen, S. G.; Binoy, J.

    2018-05-01

    Many toxic organophosphate compounds are widely used as pesticides and insecticides; their structural features can be described well by geometry simulations using density functional theory together with vibrational spectra. In this work, the molecular structural parameters and the vibrational frequencies of the fundamental modes of monocrotophos have been obtained using density functional theory (DFT) with the B3LYP functional and the 6-311++G(d,p) basis set, and a detailed vibrational analysis of the FT-IR and FT-Raman spectral bands has been carried out using the potential energy distribution (PED). The deviation from the resonance structure of the phosphate group due to 'bridging of oxygen' and the π-resonance of amides has been investigated based on the spectral and geometric data. Molecular docking simulations of monocrotophos with BSA and DNA have been performed to find the mode of binding, and the interaction with BSA has been investigated by UV-Visible spectroscopy to assess the strength of binding.

  4. Extreme reaction times determine fluctuation scaling in human color vision

    NASA Astrophysics Data System (ADS)

    Medina, José M.; Díaz, José A.

    2016-11-01

    In modern mental chronometry, human reaction time defines the time elapsed from stimulus presentation until a response occurs and represents a reference paradigm for investigating stochastic latency mechanisms in color vision. Here we examine the statistical properties of extreme reaction times and whether they support fluctuation scaling in the skewness-kurtosis plane. Reaction times were measured for visual stimuli across the cardinal directions of the color space. For all subjects, the results show that very large reaction times deviate from the right tail of reaction time distributions, suggesting the existence of dragon-king events. The results also indicate that extreme reaction times are correlated and shape fluctuation scaling over a wide range of stimulus conditions. The scaling exponent was higher for achromatic than for isoluminant stimuli, suggesting distinct generative mechanisms. Our findings open a new perspective for studying failure modes in sensory-motor communications and in complex networks.

  5. Combined multifrequency EPR and DFT study of dangling bonds in a-Si:H

    NASA Astrophysics Data System (ADS)

    Fehr, M.; Schnegg, A.; Rech, B.; Lips, K.; Astakhov, O.; Finger, F.; Pfanner, G.; Freysoldt, C.; Neugebauer, J.; Bittl, R.; Teutloff, C.

    2011-12-01

    Multifrequency pulsed electron paramagnetic resonance (EPR) spectroscopy using S-, X-, Q-, and W-band frequencies (3.6, 9.7, 34, and 94 GHz, respectively) was employed to study paramagnetic coordination defects in undoped hydrogenated amorphous silicon (a-Si:H). The improved spectral resolution at high magnetic field reveals a rhombic splitting of the g tensor with the following principal values: gx=2.0079, gy=2.0061, and gz=2.0034, and shows pronounced g strain, i.e., the principal values are widely distributed. The multifrequency approach furthermore yields precise 29Si hyperfine data. Density functional theory (DFT) calculations on 26 computer-generated a-Si:H dangling-bond models yielded g values close to the experimental data but deviating hyperfine interaction values. We show that paramagnetic coordination defects in a-Si:H are more delocalized than computer-generated dangling-bond defects and discuss models to explain this discrepancy.

  6. Do Hypervolumes Have Holes?

    PubMed

    Blonder, Benjamin

    2016-04-01

    Hypervolumes are used widely to conceptualize niches and trait distributions for both species and communities. Some hypervolumes are expected to be convex, with boundaries defined by only upper and lower limits (e.g., fundamental niches), while others are expected to be maximal, with boundaries defined by the limits of available space (e.g., potential niches). However, observed hypervolumes (e.g., realized niches) could also have holes, defined as unoccupied hyperspace representing deviations from these expectations that may indicate unconsidered ecological or evolutionary processes. Detecting holes in more than two dimensions has to date not been possible. I develop a mathematical approach, implemented in the hypervolume R package, to infer holes in large and high-dimensional data sets. As a demonstration analysis, I assess evidence for vacant niches in a Galapagos finch community on Isabela Island. These mathematical concepts and software tools for detecting holes provide approaches for addressing contemporary research questions across ecology and evolutionary biology.

  7. Respirable particulate monitoring with remote sensors. (Public health ecology: Air pollution)

    NASA Technical Reports Server (NTRS)

    Severs, R. K.

    1974-01-01

    The feasibility of monitoring atmospheric aerosols in the respirable range from air or space platforms was studied. Secondary reflectance targets were located in the industrial area and near Galveston Bay. Multichannel remote sensor data were utilized to calculate the aerosol extinction coefficient and thus determine the aerosol size distribution. Houston, Texas air sampling network high-volume data were utilized to generate computer isopleth maps of suspended particulates and to establish the mass loading of the atmosphere. In addition, a five-channel nephelometer and a multistage particulate air sampler were used to collect data. The extinction coefficient determined from remote sensor data proved more representative of wide areal phenomena than that calculated from on-site measurements. It was also demonstrated that a significant reduction in the standard deviation of the extinction coefficient could be achieved by reducing the bandwidths used in the remote sensor.

  8. SeaWiFS Science Algorithm Flow Chart

    NASA Technical Reports Server (NTRS)

    Darzi, Michael

    1998-01-01

    This flow chart describes the baseline science algorithms for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Data Processing System (SDPS). As such, it includes only processing steps used in the generation of the operational products that are archived by NASA's Goddard Space Flight Center (GSFC) Distributed Active Archive Center (DAAC). It is meant to provide the reader with a basic understanding of the scientific algorithm steps applied to SeaWiFS data. It does not include non-science steps, such as format conversions, and places the greatest emphasis on the geophysical calculations of the level-2 processing. Finally, the flow chart reflects the logic sequences and the conditional tests of the software so that it may be used to evaluate the fidelity of the implementation of the scientific algorithm. In many cases however, the chart may deviate from the details of the software implementation so as to simplify the presentation.

  9. Network bandwidth utilization forecast model on high bandwidth networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, Wucherl; Sim, Alex

    With the increasing number of geographically distributed scientific collaborations and the scale of the data size growth, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and the scheduling of data movements on high-bandwidth networks to accommodate the ever increasing data volume of large-scale scientific data applications. A univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with a traditional approach such as the Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt network usage changes. The accuracy of the forecast model is within the standard deviation of the monitored measurements.

  11. Isolation and Characterization of Polymorphic Microsatellite Loci from Metapenaeopsis barbata Using PCR-Based Isolation of Microsatellite Arrays (PIMA)

    PubMed Central

    Chiang, Tzen-Yuh; Tzeng, Tzong-Der; Lin, Hung-Du; Cho, Ching-Ju; Lin, Feng-Jiau

    2012-01-01

    The red-spot prawn, Metapenaeopsis barbata, is a commercially important, widely distributed demersal species in the Indo-West Pacific Ocean. Overfishing has caused its populations to decline in the past decade. For conservation genetics studies, eight polymorphic microsatellite loci were isolated. Genetic characteristics of the SSR (simple sequence repeat) fingerprints were estimated in 61 individuals from the seas adjacent to Taiwan and China. The number of alleles per locus ranged from 2 to 4, and observed and expected heterozygosities in populations ranged from 0.048 to 0.538 and from 0.048 to 0.654, respectively. No deviation from Hardy-Weinberg expectations was detected at any locus. No significant linkage disequilibrium was detected in locus pairs. These polymorphic microsatellite loci will be useful for investigations of the genetic variation, population structure, and conservation genetics of this species. PMID:22489123

  12. Scaling Deviations for Neutrino Reactions in Asymptotically Free Field Theories

    DOE R&D Accomplishments Database

    Wilczek, F. A.; Zee, A.; Treiman, S. B.

    1974-11-01

    Several aspects of deep inelastic neutrino scattering are discussed in the framework of asymptotically free field theories. We first consider the growth behavior of the total cross sections at large energies. Because of the deviations from strict scaling which are characteristic of such theories the growth need not be linear. However, upper and lower bounds are established which rather closely bracket a linear growth. We next consider in more detail the expected pattern of scaling deviation for the structure functions and, correspondingly, for the differential cross sections. The analysis here is based on certain speculative assumptions. The focus is on qualitative effects of scaling breakdown as they may show up in the X and y distributions. The last section of the paper deals with deviations from the Callan-Gross relation.

  13. Vertebrate Left-Right Asymmetry: What Can Nodal Cascade Gene Expression Patterns Tell Us?

    PubMed

    Schweickert, Axel; Ott, Tim; Kurz, Sabrina; Tingler, Melanie; Maerker, Markus; Fuhl, Franziska; Blum, Martin

    2017-12-29

    Laterality of inner organs is a widespread characteristic of vertebrates and beyond. It is ultimately controlled by the left-asymmetric activation of the Nodal signaling cascade in the lateral plate mesoderm of the neurula-stage embryo, which results from a cilia-driven leftward flow of extracellular fluids at the left-right organizer. This scenario is widely accepted for laterality determination in wildtype specimens. Deviations from this norm come in different flavors. At the level of organ morphogenesis, laterality may be inverted (situs inversus) or non-concordant with respect to the main body axis (situs ambiguus or heterotaxia). At the level of Nodal cascade gene activation, expression may be inverted, bilaterally induced, or absent. In a given genetic situation, patterns may be randomized or predominantly lacking laterality (absence or bilateral activation). We propose that the distributions of patterns observed may be indicative of the underlying molecular defects, with randomizations being primarily caused by defects in the flow-generating ciliary set-up, and symmetrical patterns being the result of impaired flow sensing, on the left, the right, or both sides. This prediction, the reasoning of which is detailed in this review, pinpoints functions of genes whose role in laterality determination has remained obscure.

  14. Optimal Operation and Management for Smart Grid Subsumed High Penetration of Renewable Energy, Electric Vehicle, and Battery Energy Storage System

    NASA Astrophysics Data System (ADS)

    Shigenobu, Ryuto; Noorzad, Ahmad Samim; Muarapaz, Cirio; Yona, Atsushi; Senjyu, Tomonobu

    2016-04-01

    Distributed generators (DGs) and renewable energy sources have been attracting special attention in distribution systems all over the world. Renewable sources such as photovoltaic (PV) and wind turbine generators are considered green energy. However, a large amount of DG penetration causes voltage deviation beyond the statutory range and reverse power flow at interconnection points in the distribution system. If excessive voltage deviation occurs, consumers' electric devices might break, and reverse power flow also has a negative impact on the transmission system. Thus, mass interconnection of DGs has adverse effects on both the utility and the customer. Previous research has therefore proposed reactive power control using the inverters attached to DGs to prevent voltage deviations, and battery energy storage systems (BESSs) have been proposed to resolve reverse power flow. In addition, it is possible to supply high-quality power by managing DGs and BESSs. This paper therefore proposes a method to maintain voltage, active power, and reactive power flow at interconnection points through cooperative control of PVs, house BESSs, EVs, large BESSs, and existing voltage control devices. The approach not only protects the distribution system but also reduces distribution losses and manages control devices effectively. The control objectives are formulated as an optimization problem that is solved using the Particle Swarm Optimization (PSO) algorithm, and a modified scheduling method is proposed to improve the convergence probability of the scheduling scheme. The effectiveness of the proposed method is verified by case study results and numerical simulations in MATLAB®.
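
    A minimal particle swarm loop of the kind invoked above is sketched below on a toy objective; the set-point-to-voltage map, PSO coefficients, and bounds are all invented, and a real scheduling problem would add network constraints and many more decision variables.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def objective(x):
        """Toy stand-in for the scheduling cost: squared voltage deviation
        from 1.0 pu at two buses as a function of two control set-points."""
        v = 1.0 + 0.05 * np.tanh(x)         # crude set-point -> voltage map
        return np.sum((v - 1.0) ** 2, axis=-1)

    n_particles, n_dims, iters = 30, 2, 100
    w, c1, c2 = 0.7, 1.5, 1.5               # inertia, cognitive, social weights
    pos = rng.uniform(-3.0, 3.0, (n_particles, n_dims))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), objective(pos)
    gbest = pbest[np.argmin(pbest_val)]

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, n_dims))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        val = objective(pos)
        better = val < pbest_val
        pbest[better], pbest_val[better] = pos[better], val[better]
        gbest = pbest[np.argmin(pbest_val)]

    print("best set-points:", gbest, "cost:", pbest_val.min())
    ```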

  15. Sketching Curves for Normal Distributions--Geometric Connections

    ERIC Educational Resources Information Center

    Bosse, Michael J.

    2006-01-01

    Within statistics instruction, students are often requested to sketch the curve representing a normal distribution with a given mean and standard deviation. Unfortunately, these sketches are often notoriously imprecise. Poor sketches are usually the result of missing mathematical knowledge. This paper considers relationships which exist among…

  16. Standard deviation and standard error of the mean.

    PubMed

    Lee, Dong Kyu; In, Junyong; Lee, Sangseok

    2015-06-01

    In most clinical and experimental studies, the standard deviation (SD) and the estimated standard error of the mean (SEM) are used to present the characteristics of sample data and to explain statistical analysis results. However, some authors occasionally muddle the distinction between the SD and the SEM in the medical literature. Because the processes of calculating the SD and the SEM involve different statistical inferences, each has its own meaning. The SD is the dispersion of data in a normal distribution; in other words, the SD indicates how accurately the mean represents the sample data. The meaning of the SEM, however, includes statistical inference based on the sampling distribution: the SEM is the SD of the theoretical distribution of the sample means (the sampling distribution). While either the SD or the SEM can be applied to describe data and statistical results, one should be aware of the appropriate use of each. We aim to elucidate the distinctions between the SD and the SEM and to provide proper usage guidelines for both, which summarize data and describe statistical results.
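
    The distinction is one line of arithmetic each; the sample below is invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    sample = rng.normal(loc=120.0, scale=15.0, size=25)   # invented measurements

    sd = sample.std(ddof=1)             # dispersion of the observations
    sem = sd / np.sqrt(sample.size)     # SD of the sampling distribution of the mean

    print(f"mean = {sample.mean():.1f}, SD = {sd:.1f}, SEM = {sem:.1f}")
    # SEM shrinks as n grows while SD does not: quadrupling n halves the SEM.
    ```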

  18. A log-normal distribution model for the molecular weight of aquatic fulvic acids

    USGS Publications Warehouse

    Cabaniss, S.E.; Zhou, Q.; Maurice, P.A.; Chin, Y.-P.; Aiken, G.R.

    2000-01-01

    The molecular weight of humic substances influences their proton and metal binding, organic pollutant partitioning, adsorption onto minerals and activated carbon, and behavior during water treatment. We propose a log-normal model for the molecular weight distribution in aquatic fulvic acids to provide a conceptual framework for studying these size effects. The normal curve mean and standard deviation are readily calculated from measured Mn and Mw and vary from 2.7 to 3 for the means and from 0.28 to 0.37 for the standard deviations for typical aquatic fulvic acids. The model is consistent with several types of molecular weight data, including the shapes of high-pressure size-exclusion chromatography (HP-SEC) peaks. Applications of the model to electrostatic interactions, pollutant solubilization, and adsorption are explored in illustrative calculations.
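
    The conversion from measured Mn and Mw to the normal-curve parameters follows from standard lognormal moment identities; the sketch below assumes a log-normal number distribution of molecular weight, and the example averages are illustrative (chosen to land inside the quoted 2.7-3 and 0.28-0.37 ranges).

    ```python
    import numpy as np

    def lognormal_params(Mn, Mw):
        """Mean and SD of log10(MW) for a log-normal number distribution,
        using Mw/Mn = exp(sigma_ln^2) and Mn = exp(mu_ln + sigma_ln^2 / 2)."""
        sigma_ln = np.sqrt(np.log(Mw / Mn))
        mu_ln = np.log(Mn) - 0.5 * sigma_ln**2
        ln10 = np.log(10.0)
        return mu_ln / ln10, sigma_ln / ln10    # log10 units, as in the abstract

    # Illustrative fulvic-acid-like averages in Da (not measured values).
    mean_log10, sd_log10 = lognormal_params(Mn=800.0, Mw=1400.0)
    print(f"mean(log10 MW) = {mean_log10:.2f}, sd(log10 MW) = {sd_log10:.2f}")
    ```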

  19. Study on the propagation properties of laser in aerosol based on Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Leng, Kun; Wu, Wenyuan; Zhang, Xi; Gong, Yanchun; Yang, Yuntao

    2018-02-01

    When a laser propagates in the atmosphere, aerosol scattering and absorption continuously reduce the laser energy, affecting the effectiveness of the laser. Based on the Monte Carlo method, the relationship between the spatial photon energy distribution of a 10.6 μm laser in marine, sand-type, water-soluble and soot aerosols and the propagation distance, visibility and divergence angle was studied. The results show that for the 10.6 μm laser, attenuation of the photons arriving at the receiving plane is greatest in sand-type aerosol and smallest in water-soluble aerosol; as the propagation distance increases, the number of photons arriving at the receiving plane decreases; as visibility increases, the number of photons arriving at the receiving plane increases rapidly and then stabilizes; in the above cases, the photon energy distribution does not deviate from the Gaussian distribution; as the divergence angle increases, the number of photons arriving at the receiving plane is almost unchanged, but the photon energy distribution gradually deviates from the Gaussian distribution.

  20. Bandwagon effects and error bars in particle physics

    NASA Astrophysics Data System (ADS)

    Jeng, Monwhea

    2007-02-01

    We study historical records of experiments on particle masses, lifetimes, and widths, both for signs of expectation bias, and to compare actual errors with reported error bars. We show that significant numbers of particle properties exhibit "bandwagon effects": reported values show trends and clustering as a function of the year of publication, rather than random scatter about the mean. While the total amount of clustering is significant, it is also fairly small; most individual particle properties do not display obvious clustering. When differences between experiments are compared with the reported error bars, the deviations do not follow a normal distribution, but instead follow an exponential distribution for up to ten standard deviations.

  1. Role of environmental variability in the evolution of life history strategies.

    PubMed

    Hastings, A; Caswell, H

    1979-09-01

    We reexamine the role of environmental variability in the evolution of life history strategies. We show that normally distributed deviations in the quality of the environment should lead to normally distributed deviations in the logarithm of year-to-year survival probabilities, which leads to interesting consequences for the evolution of annual and perennial strategies and reproductive effort. We also examine the effects of using differing criteria to determine the outcome of selection. Some predictions of previous theory are reversed, allowing distinctions between r and K theory and a theory based on variability. However, these distinctions require information about both the environment and the selection process not required by current theory.

  2. Modified subaperture tool influence functions of a flat-pitch polisher with reverse-calculated material removal rate.

    PubMed

    Dong, Zhichao; Cheng, Haobo; Tam, Hon-Yuen

    2014-04-10

    Numerical simulation of subaperture tool influence functions (TIF) is widely known as a critical procedure in computer-controlled optical surfacing. However, it may lack practicability in engineering because the emulation TIF (e-TIF) deviates somewhat from the practical TIF (p-TIF), and the removal rate cannot be predicted by simulations. Prior to polishing a formal workpiece, opticians have to conduct TIF spot experiments on another sample to confirm the p-TIF with a quantitative removal rate, which is difficult and time-consuming for sequential polishing runs with different tools. This work is dedicated to applying these e-TIFs in practical engineering by making improvements in two respects: (1) it modifies the pressure distribution model of a flat-pitch polisher by finite element analysis and least-squares fitting to bring the removal shape of e-TIFs closer to p-TIFs (less than 5% relative deviation, validated by experiments); (2) it predicts the removal rate of e-TIFs by reverse-calculating the material removal volume of a pre-polishing run on the formal workpiece (relative deviations of peak and volume removal rate were validated to be less than 5%). This makes it possible to omit TIF spot experiments for the particular flat-pitch tool employed and promotes the direct use of e-TIFs in the optimization of a dwell time map, which can largely save on cost and increase fabrication efficiency.

  3. Robust Gaussian Graphical Modeling via l1 Penalization

    PubMed Central

    Sun, Hokeun; Li, Hongzhe

    2012-01-01

    Summary Gaussian graphical models have been widely used as an effective method for studying the conditional independency structure among genes and for constructing genetic networks. However, gene expression data typically have heavier tails or more outlying observations than the standard Gaussian distribution. Such outliers in gene expression data can lead to wrong inference on the dependency structure among the genes. We propose an l1-penalized estimation procedure for the sparse Gaussian graphical models that is robustified against possible outliers. The likelihood function is weighted according to how far each observation deviates, where the deviation is measured by the observation's own likelihood. An efficient computational algorithm based on the coordinate gradient descent method is developed to obtain the minimizer of the negative penalized robustified-likelihood, where the nonzero elements of the concentration matrix represent the graphical links among the genes. After the graphical structure is obtained, we re-estimate the positive definite concentration matrix using an iterative proportional fitting algorithm. Through simulations, we demonstrate that the proposed robust method performs much better than the graphical Lasso for the Gaussian graphical models in terms of both graph structure selection and estimation when outliers are present. We apply the robust estimation procedure to an analysis of yeast gene expression data and show that the resulting graph has better biological interpretation than that obtained from the graphical Lasso. PMID:23020775

  4. Change in the Body Mass Index Distribution for Women: Analysis of Surveys from 37 Low- and Middle-Income Countries

    PubMed Central

    Razak, Fahad; Corsi, Daniel J.; SV Subramanian

    2013-01-01

    Background There are well-documented global increases in mean body mass index (BMI) and prevalence of overweight (BMI≥25.0 kg/m2) and obese (BMI≥30.0 kg/m2). Previous analyses, however, have failed to report whether this weight gain is shared equally across the population. We examined the change in BMI across all segments of the BMI distribution in a wide range of countries, and assessed whether the BMI distribution is changing between cross-sectional surveys conducted at different time points. Methods and Findings We used nationally representative surveys of women between 1991–2008, in 37 low- and middle-income countries from the Demographic Health Surveys ([DHS] n = 732,784). There were a total of 96 country-survey cycles, and the number of survey cycles per country varied between two (21/37) and five (1/37). Using multilevel regression models, between countries and within countries over survey cycles, the change in mean BMI was used to predict the standard deviation of BMI, the prevalence of underweight, overweight, and obese. Changes in median BMI were used to predict the 5th and 95th percentile of the BMI distribution. Quantile-quantile plots were used to examine the change in the BMI distribution between surveys conducted at different times within countries. At the population level, increasing mean BMI is related to increasing standard deviation of BMI, with the BMI at the 95th percentile rising at approximately 2.5 times the rate of the 5th percentile. Similarly, there is an approximately 60% excess increase in prevalence of overweight and 40% excess in obese, relative to the decline in prevalence of underweight. Quantile-quantile plots demonstrate a consistent pattern of unequal weight gain across percentiles of the BMI distribution as mean BMI increases, with increased weight gain at high percentiles of the BMI distribution and little change at low percentiles. Major limitations of these results are that repeated population surveys cannot examine weight gain within an individual over time, most of the countries only had data from two surveys and the study sample only contains women in low- and middle-income countries, potentially limiting generalizability of findings. Conclusions Mean changes in BMI, or in single parameters such as percent overweight, do not capture the divergence in the degree of weight gain occurring between BMI at low and high percentiles. Population weight gain is occurring disproportionately among groups with already high baseline BMI levels. Studies that characterize population change should examine patterns of change across the entire distribution and not just average trends or single parameters. Please see later in the article for the Editors' Summary PMID:23335861
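
    The quantile-quantile comparison described above can be reproduced in a few lines. The sketch below uses synthetic BMI samples (not DHS data) to show how matching percentiles of two survey rounds exposes disproportionate change at the top of the distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical BMI samples from two survey rounds of the same country
bmi_round1 = rng.normal(21.5, 3.0, 5000)  # earlier round
bmi_round2 = rng.normal(23.0, 4.0, 5000)  # later round: higher mean AND spread

# Quantile-quantile comparison: match percentiles of the two rounds.
pct = np.arange(1, 100)
q1 = np.percentile(bmi_round1, pct)
q2 = np.percentile(bmi_round2, pct)

# Under equal weight gain the shift is the same at every percentile;
# disproportionate gain shows up as a larger shift at high percentiles.
print("5th percentile shift: ", round(q2[4] - q1[4], 2))
print("95th percentile shift:", round(q2[94] - q1[94], 2))
```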

  5. Quantitative angle-insensitive flow measurement using relative standard deviation OCT.

    PubMed

    Zhu, Jiang; Zhang, Buyun; Qi, Li; Wang, Ling; Yang, Qiang; Zhu, Zhuqing; Huo, Tiancheng; Chen, Zhongping

    2017-10-30

    Incorporating different data processing methods, optical coherence tomography (OCT) has the ability for high-resolution angiography and quantitative flow velocity measurements. However, OCT angiography cannot provide quantitative information of flow velocities, and the velocity measurement based on Doppler OCT requires the determination of Doppler angles, which is a challenge in a complex vascular network. In this study, we report on a relative standard deviation OCT (RSD-OCT) method which provides both vascular network mapping and quantitative information for flow velocities within a wide range of Doppler angles. The RSD values are angle-insensitive within a wide range of angles, and a nearly linear relationship was found between the RSD values and the flow velocities. The RSD-OCT measurement in a rat cortex shows that it can quantify the blood flow velocities as well as map the vascular network in vivo.
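
    A minimal sketch of the core RSD computation, assuming repeated B-scans of the same location are available as a NumPy array; variable names and shapes are illustrative, not the authors' implementation.

```python
import numpy as np

def rsd_map(bscans):
    """Relative standard deviation over repeated B-scans.

    bscans: array of shape (n_repeats, depth, width) holding OCT intensity
    at the same location (hypothetical layout, not the authors' code)."""
    mean = bscans.mean(axis=0)
    std = bscans.std(axis=0)
    return std / (mean + 1e-12)  # RSD; small epsilon avoids divide-by-zero

# Static tissue fluctuates little between repeats (low RSD); flowing blood
# decorrelates (high RSD), largely independently of the Doppler angle.
rng = np.random.default_rng(1)
static = rng.normal(100, 2, (8, 64, 64))
flow = rng.normal(100, 30, (8, 64, 64))
print(rsd_map(static).mean(), rsd_map(flow).mean())
```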

  6. Quantitative angle-insensitive flow measurement using relative standard deviation OCT

    NASA Astrophysics Data System (ADS)

    Zhu, Jiang; Zhang, Buyun; Qi, Li; Wang, Ling; Yang, Qiang; Zhu, Zhuqing; Huo, Tiancheng; Chen, Zhongping

    2017-10-01

    Incorporating different data processing methods, optical coherence tomography (OCT) has the ability for high-resolution angiography and quantitative flow velocity measurements. However, OCT angiography cannot provide quantitative information of flow velocities, and the velocity measurement based on Doppler OCT requires the determination of Doppler angles, which is a challenge in a complex vascular network. In this study, we report on a relative standard deviation OCT (RSD-OCT) method which provides both vascular network mapping and quantitative information for flow velocities within a wide range of Doppler angles. The RSD values are angle-insensitive within a wide range of angles, and a nearly linear relationship was found between the RSD values and the flow velocities. The RSD-OCT measurement in a rat cortex shows that it can quantify the blood flow velocities as well as map the vascular network in vivo.

  7. Influence of asymmetrical drawing radius deviation in micro deep drawing

    NASA Astrophysics Data System (ADS)

    Heinrich, L.; Kobayashi, H.; Shimizu, T.; Yang, M.; Vollertsen, F.

    2017-09-01

    Nowadays, an increasing demand for small metal parts in the electronic and automotive industries can be observed. Deep drawing is a well-suited technology for the production of such parts due to its excellent qualities for mass production. However, the downscaling of the forming process leads to new challenges in tooling and process design, such as high relative deviation of tool geometry or blank displacement compared to the macro scale. FEM simulation has been a widely used tool to investigate the influence of symmetrical process deviations, for instance a global variance of the drawing radius. This study shows a different approach that allows the impact of asymmetrical process deviations on micro deep drawing to be determined. In this particular case, the impact of an asymmetrical drawing radius deviation and blank displacement on cup geometry deviation was investigated for different drawing ratios by experiments and FEM simulation. It was found that both variations result in an increasing cup height deviation. Nevertheless, with increasing drawing ratio a constant drawing radius deviation has an increasing impact, while blank displacement results in a decreasing offset of the cup geometry. This is explained by the different mechanisms that produce an uneven cup geometry. While blank displacement leads to material surplus on one side of the cup, an unsymmetrical radius deviation generates uneven stretching of the cup wall, and this stretching is intensified at higher drawing ratios. It can be concluded that the effect of uneven radius geometry is of major importance for the production of accurately shaped micro cups and cannot be compensated by intentional blank displacement.

  8. Closed-form confidence intervals for functions of the normal mean and standard deviation.

    PubMed

    Donner, Allan; Zou, G Y

    2012-08-01

    Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.
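
    A sketch of the general idea for a quantity of the form mu + c*sigma (such as an upper limit of agreement), using the MOVER-style construction in which variance estimates are recovered from the separate confidence limits for the mean and the standard deviation. The formulas below follow the standard MOVER recipe and are not necessarily the exact expressions derived in the paper.

```python
import numpy as np
from scipy import stats

def mover_ci_mean_plus_c_sd(x, c=1.96, alpha=0.05):
    """Closed-form CI for theta = mu + c*sigma, combining separate CIs for
    the mean and SD (a sketch of the MOVER idea, under the assumptions
    stated in the lead-in)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar, s = x.mean(), x.std(ddof=1)

    # Separate two-sided CIs for the mean and the standard deviation.
    t = stats.t.ppf(1 - alpha / 2, n - 1)
    l_mu, u_mu = xbar - t * s / np.sqrt(n), xbar + t * s / np.sqrt(n)
    chi2_lo = stats.chi2.ppf(alpha / 2, n - 1)
    chi2_hi = stats.chi2.ppf(1 - alpha / 2, n - 1)
    l_sd = s * np.sqrt((n - 1) / chi2_hi)
    u_sd = s * np.sqrt((n - 1) / chi2_lo)

    # Recover variance estimates from the distances to the limits.
    theta = xbar + c * s
    lower = theta - np.sqrt((xbar - l_mu) ** 2 + (c * (s - l_sd)) ** 2)
    upper = theta + np.sqrt((u_mu - xbar) ** 2 + (c * (u_sd - s)) ** 2)
    return theta, (lower, upper)

rng = np.random.default_rng(2)
print(mover_ci_mean_plus_c_sd(rng.normal(0.5, 1.0, 30)))
```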

  9. Ethiopian adolescents' attitudes and expectations deviate from current infant and young child feeding recommendations.

    PubMed

    Hadley, Craig; Lindstrom, David; Belachew, Tefera; Tessema, Fasil

    2008-09-01

    Suboptimal infant and child feeding practices are highly prevalent in many developing countries for reasons that are not entirely understood. Taking an anthropological perspective, we assessed whether nulliparous youth have formulated attitudes and expectations in the domain of infant and child feeding behaviors, the extent to which these varied by location and gender, and the extent to which they deviated from current international recommendations. A population-based sample of 2,077 adolescent girls and boys (13-17 years) in southwest Ethiopia answered a questionnaire on infant and young child feeding behaviors. Results indicate high levels of agreement among adolescents on items relating to infant and young child feeding behaviors. Attitudes and intentions deviated widely from current international recommendations. Youth overwhelmingly endorsed items related to the early introduction of non-breast-milk liquids and foods. Among girls, fewer than 11% agreed that a 5-month-old infant should be exclusively breastfed, and only 26% agreed that a 6-month-old infant should be consuming some animal-source foods. Few sex differences emerged, and youth responses matched larger community patterns. The results indicate that attitudes and expectations deviate widely from current international child feeding guidelines among soon-to-be parents. To the extent that youth models are directive, these findings suggest that youth enter into parenthood with suboptimal information about infant and child feeding. Such information will reproduce poor health across generations as the largest cohort of adolescents ever become parents. These results suggest specific points of entry for adolescent nutrition education interventions.

  10. Analysis and modeling of optical crosstalk in InP-based Geiger-mode avalanche photodiode FPAs

    NASA Astrophysics Data System (ADS)

    Chau, Quan; Jiang, Xudong; Itzler, Mark A.; Entwistle, Mark; Piccione, Brian; Owens, Mark; Slomkowski, Krystyna

    2015-05-01

    Optical crosstalk is a major factor limiting the performance of Geiger-mode avalanche photodiode (GmAPD) focal plane arrays (FPAs). This is especially true for arrays with increased pixel density and broader spectral operation. We have performed extensive experimental and theoretical investigations of the crosstalk effects in InP-based GmAPD FPAs for both 1.06-μm and 1.55-μm applications. The mechanisms responsible for intrinsic dark counts are Poisson processes, and their inter-arrival time distribution is an exponential function. In FPAs, intrinsic dark counts and crosstalk events coexist, and the inter-arrival time distribution deviates from purely exponential behavior. From both experimental data and computer simulations, we show the dependence of this deviation on the crosstalk probability. The spatial characteristics of crosstalk are also demonstrated. From the temporal and spatial distribution of crosstalk, an efficient algorithm to identify and quantify crosstalk is introduced.

  11. A Monte Carlo simulation of the effect of ion self-collisions on the ion velocity distribution function in the high-latitude F-region

    NASA Technical Reports Server (NTRS)

    Barghouthi, I. A.; Barakat, A. R.; Schunk, R. W.

    1994-01-01

    Non-Maxwellian ion velocity distribution functions have been theoretically predicted, and confirmed by observations, to occur at high latitudes. These distributions deviate from Maxwellian due to the combined effect of the E x B drift and ion-neutral collisions. At high altitude and/or for solar maximum conditions, the ion-to-neutral density ratio increases and, hence, the role of ion self-collisions becomes appreciable. A Monte Carlo simulation was used to investigate the behavior of O+ ions that are E x B-drifting through a background of neutral O, with the effect of O+ (Coulomb) self-collisions included. Wide ranges of the ion-to-neutral density ratio n_i/n_n and the electrostatic field E were considered in order to investigate the change of ion behavior with solar cycle and with altitude. For low altitudes and/or solar minimum (n_i/n_n ≤ 10^-5), the effect of self-collisions is negligible. For higher values of n_i/n_n, the effect of self-collisions becomes significant and, hence, the non-Maxwellian features of the O+ distribution are reduced. The Monte Carlo results were compared to those that used simplified collision models in order to assess their validity. In general, the simple collision models tend to be more accurate for low E and for high n_i/n_n.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Ke; Zhang, Yanwen; Zhu, Zihua

    Accurate information on electronic stopping power is fundamental for broad advances in the electronics industry, space exploration, national security, and sustainable energy technologies. The Stopping and Range of Ions in Matter (SRIM) code has been widely applied to predict stopping powers and ion distributions for decades. Recent experimental results have, however, shown considerable errors in the SRIM predictions for stopping of heavy ions in compounds containing light elements, indicating an urgent need to improve current stopping power models. The electronic stopping powers of 35Cl, 80Br, 127I, and 197Au ions are experimentally determined in two important functional materials, SiC and SiO2, from tens to hundreds of keV/u based on a single-ion technique. By combining with the reciprocity theory, new electronic stopping powers are suggested in a region from 0 to 15 MeV, where large deviations from SRIM predictions are observed. For independent experimental validation of the electronic stopping powers we determined, Rutherford backscattering spectrometry (RBS) and secondary ion mass spectrometry (SIMS) are utilized to measure the depth profiles of implanted Au ions in SiC with energies from 700 keV to 15 MeV. The measured ion distributions from both RBS and SIMS are considerably deeper (up to ~30%) than the predictions from the commercial SRIM code. In comparison, the new electronic stopping power values are utilized in a modified TRIM-85 (the original version of SRIM) code, M-TRIM, to predict ion distributions, and the results are in good agreement with the experimentally measured ion distributions.

  13. Asymmetrical intrapleural pressure distribution: a cause for scoliosis? A computational analysis.

    PubMed

    Schlager, Benedikt; Niemeyer, Frank; Galbusera, Fabio; Wilke, Hans-Joachim

    2018-04-13

    The mechanical link between pleural physiology and the development of scoliosis is still unresolved. The intrapleural pressure (IPP), which is distributed across the inner chest wall, has so far been widely neglected in etiology debates. With this study, we attempted to investigate the mechanical influence of the IPP distribution on the shape of the spinal curvature. A finite element model of the pleura, chest and spine was created based on CT data of a patient with no visual deformities. Different IPP distributions at a static end-of-expiration condition were investigated, such as the influence of an asymmetry in the IPP distribution between the left and right hemithorax. The results were then compared to clinical data. The application of the IPP resulted in a compressive force of 22.3 N and a flexion moment of 2.8 N m at S1. An asymmetrical pressure between the left and right hemithorax resulted in lateral deviation of the spine towards the side of the reduced negative pressure. In particular, the pressure within the dorsal section of the rib cage had a strong influence on the vertebral rotation, while the pressure in the medial and ventral regions affected the lateral displacement. An asymmetrical IPP caused spinal deformation patterns comparable to those seen in scoliotic spines. The calculated reaction forces suggest that the IPP contributes to counterbalancing the weight of the intrathoracic organs. The study confirms the potential relevance of the IPP for spinal biomechanics and pathologies such as adolescent idiopathic scoliosis.

  14. Evaluation and validity of a LORETA normative EEG database.

    PubMed

    Thatcher, R W; North, D; Biver, C

    2005-04-01

    To evaluate the reliability and validity of a Z-score normative EEG database for Low Resolution Electromagnetic Tomography (LORETA), EEG digital samples (2-second intervals sampled at 128 Hz, 1 to 2 minutes, eyes closed) were acquired from 106 normal subjects, and the cross-spectrum was computed and multiplied by the Key Institute's LORETA 2,394 gray matter pixel T Matrix. After a log10 transform or a Box-Cox transform, the mean and standard deviation of the *.lor files were computed for each of the 2,394 gray matter pixels, from 1 to 30 Hz, for each of the subjects. Tests of Gaussianity were computed in order to best approximate a normal distribution for each frequency and gray matter pixel. The relative sensitivity of a Z-score database was computed by measuring the approximation to a Gaussian distribution. The validity of the LORETA normative database was evaluated by the degree to which confirmed brain pathologies were localized using the LORETA normative database. Log10 and Box-Cox transforms approximated a Gaussian distribution with 95.64% to 99.75% accuracy. The percentage of normative Z-score values at 2 standard deviations ranged from 1.21% to 3.54%, and the percentage of Z-scores at 3 standard deviations ranged from 0% to 0.83%. Left temporal lobe epilepsy, right sensory motor hematoma and a right hemisphere stroke exhibited maximum Z-score deviations in the same locations as the pathologies. We conclude: (1) adequate approximation to a Gaussian distribution can be achieved with LORETA by using a log10 or Box-Cox transform and parametric statistics; (2) a Z-score normative database is valid with adequate sensitivity when using LORETA; and (3) the Z-score LORETA normative database consistently localized known pathologies to the expected Brodmann areas as a hypothesis test based on the surface EEG before computing LORETA.
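
    The z-scoring step described above reduces to a transform toward Gaussianity followed by standardization against the normative sample. A minimal per-pixel, per-frequency sketch with hypothetical values; a real pipeline would apply Box-Cox where log10 is insufficient.

```python
import numpy as np

def zscore_against_norms(value, norm_values):
    """Z-score of one subject's log10-transformed spectral value against a
    normative sample (illustrative only; names and values are hypothetical)."""
    logs = np.log10(norm_values)          # transform toward Gaussianity
    mu, sd = logs.mean(), logs.std(ddof=1)
    return (np.log10(value) - mu) / sd

# Hypothetical normative values for one gray-matter pixel at one frequency
rng = np.random.default_rng(3)
norms = 10 ** rng.normal(1.0, 0.2, 106)   # 106 normal subjects
print(zscore_against_norms(35.0, norms))  # |z| > 2 would flag a deviation
```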

  15. Luminosity distance in "Swiss cheese" cosmology with randomized voids. II. Magnification probability distributions

    NASA Astrophysics Data System (ADS)

    Flanagan, Éanna É.; Kumar, Naresh; Wasserman, Ira; Vanderveld, R. Ali

    2012-01-01

    We study the fluctuations in luminosity distances due to gravitational lensing by large scale (≳35 Mpc) structures, specifically voids and sheets. We use a simplified "Swiss cheese" model consisting of a ΛCDM Friedmann-Robertson-Walker background in which a number of randomly distributed nonoverlapping spherical regions are replaced by mass-compensating comoving voids, each with a uniform density interior and a thin shell of matter on the surface. We compute the distribution of magnitude shifts using a variant of the method of Holz and Wald, which includes the effect of lensing shear. The standard deviation of this distribution is ˜0.027 magnitudes and the mean is ˜0.003 magnitudes for voids of radius 35 Mpc, sources at redshift zs=1.0, with the voids chosen so that 90% of the mass is on the shell today. The standard deviation varies from 0.005 to 0.06 magnitudes as we vary the void size, source redshift, and fraction of mass on the shells today. If the shell walls are given a finite thickness of ˜1 Mpc, the standard deviation is reduced to ˜0.013 magnitudes. This standard deviation due to voids is a factor ˜3 smaller than that due to galaxy scale structures. We summarize our results in terms of a fitting formula that is accurate to ˜20%, and also build a simplified analytic model that reproduces our results to within ˜30%. Our model also allows us to explore the domain of validity of weak-lensing theory for voids. We find that for 35 Mpc voids, corrections to the dispersion due to lens-lens coupling are of order ˜4%, and corrections due to shear are ˜3%. Finally, we estimate the bias due to source-lens clustering in our model to be negligible.

  16. Sample size determination in combinatorial chemistry.

    PubMed Central

    Zhao, P L; Zambias, R; Bolognese, J A; Boulton, D; Chapman, K

    1995-01-01

    Combinatorial chemistry is gaining wide appeal as a technique for generating molecular diversity. Among the many combinatorial protocols, the split/recombine method is quite popular and particularly efficient at generating large libraries of compounds. In this process, polymer beads are equally divided into a series of pools and each pool is treated with a unique fragment; then the beads are recombined, mixed to uniformity, and redivided equally into a new series of pools for the subsequent couplings. The deviation from the ideal equimolar distribution of the final products is assessed by a special overall relative error, which is shown to be related to the Pearson statistic. Although the split/recombine sampling scheme is quite different from those used in analysis of categorical data, the Pearson statistic is shown to still follow a chi2 distribution. This result allows us to derive the required number of beads such that, with 99% confidence, the overall relative error is controlled to be less than a pregiven tolerable limit L1. In this paper, we also discuss another criterion, which determines the required number of beads so that, with 99% confidence, all individual relative errors are controlled to be less than a pregiven tolerable limit L2 (0 < L2 < 1). PMID:11607586
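
    The sampling problem can be illustrated by direct simulation: beads are assigned to pools independently at each split, and the Pearson statistic of the final product counts measures the deviation from the ideal equimolar distribution. The sketch below is a toy simulation, not the paper's analytical derivation.

```python
import numpy as np

def simulate_split(n_beads, n_pools, n_steps, rng):
    """Simulate split/recombine synthesis and return the Pearson statistic
    of the final product counts (a sketch of the sampling problem described
    above; parameters are hypothetical)."""
    # Each bead lands independently in one pool at every split; a product's
    # identity is its sequence of pools, so count beads per pool sequence.
    choices = rng.integers(0, n_pools, size=(n_beads, n_steps))
    labels = np.ravel_multi_index(choices.T, (n_pools,) * n_steps)
    counts = np.bincount(labels, minlength=n_pools ** n_steps)
    expected = n_beads / n_pools ** n_steps
    return ((counts - expected) ** 2 / expected).sum()

rng = np.random.default_rng(4)
# 3 splits into 10 pools -> 1000 products; the statistic stays near its
# chi-square expectation (df = 999) while the *relative* error shrinks
# as the number of beads per product grows.
for n_beads in (10_000, 100_000):
    print(n_beads, round(simulate_split(n_beads, 10, 3, rng), 1))
```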

  17. Initial Development of a Modified Trail Making Test for Individuals with Impaired Manual Functioning.

    PubMed

    Rane, Shruti; Caroselli, Jerome Silvio; Dickinson, Mercedes; Tran, Kim; Kuang, Fanny; Hiscock, Merrill

    2016-01-01

    The Trail Making Test (TMT), a widely used neuropsychological test, is highly effective in detecting brain damage. A shortcoming of the test is that it requires drawing lines and thus is impractical for use with persons suffering manual impairment. The 3 studies described herein were designed to describe and evaluate a nonmanual Trail Making Test (NMTMT) that would be suitable for use with manually impaired individuals. The NMTMT utilizes color to permit oral reporting of the stimuli constituting a series of numbers (Part A) or alternating series of numbers and letters (Part B). The studies, which involved a total of 200 university students, indicate that the standard TMT and the NMTMT are moderately related to each other and have similar patterns of association and nonassociation with other neuropsychological measures. Participants with scores falling near the bottom of the NMTMT distribution have a high probability of scoring at least 1 standard deviation below the mean of the TMT distribution for Part B. The clinically important relationship of Part A to Part B seems to be retained in the NMTMT. It is concluded that the NMTMT shows promise as a substitute for the TMT when the TMT cannot be used.

  18. New UBVRI colour distributions in E-type galaxies . I. The data

    NASA Astrophysics Data System (ADS)

    Idiart, T. P.; Michard, R.; de Freitas Pacheco, J. A.

    2002-01-01

    New colour distributions have been derived from wide-field UBVRI frames for 36 northern bright elliptical galaxies and a few lenticulars. The classical linear representations of colours against log r were derived, with some improvements in the accuracy of the zero-point colours and of the gradients. The radial range of significant measurements was enlarged both towards the galaxy center and towards the outskirts of each object. Thus, the "central colours", integrated within a radius of 3 arcsec, and the "outermost colours", averaged near the μV = 24 surface brightness, could also be obtained. Some typical deviations of colour profiles from linearity are described. Colour-colour relations of interest are presented. Very tight correlations are found between the U-V colour and the Mg2 line-index, measured either at the galaxy center or at the effective radius. Based in part on observations collected at the Observatoire de Haute-Provence. Tables 9-11 plus detailed tables for each object are available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/383/30

  19. Atomistic simulation on charge mobility of amorphous tris(8-hydroxyquinoline) aluminum (Alq3): origin of Poole-Frenkel-type behavior.

    PubMed

    Nagata, Yuki; Lennartz, Christian

    2008-07-21

    The atomistic simulation of the charge transfer process in an amorphous Alq3 system is reported. By employing electrostatic potential charges, we calculate site energies and find that the standard deviation of the site energy distribution is about twice as large as predicted in previous research. The charge mobility is calculated via the Miller-Abrahams formalism and the master equation approach. We find that the wide site energy distribution governs the Poole-Frenkel-type behavior of charge mobility against electric field, while spatially correlated site energy is not a dominant mechanism of Poole-Frenkel behavior in the range from 2×10^5 to 1.4×10^6 V/cm. We also reveal that randomly meshed connectivities are, in principle, required to account for the Poole-Frenkel mechanism. Charge carriers find a zigzag pathway at low electric field, while they find a straight pathway along the electric field when a high electric field is applied. In the space-charge-limited current scheme, the charge-carrier density increases with electric field strength, so that the nonlinear behavior of charge mobility is enhanced through the strong charge-carrier density dependence of charge mobility.

  20. Experimental characterization of the transition to coherence collapse in a semiconductor laser with optical feedback

    NASA Astrophysics Data System (ADS)

    Panozzo, M.; Quintero-Quiroz, C.; Tiana-Alsina, J.; Torrent, M. C.; Masoller, C.

    2017-11-01

    Semiconductor lasers with time-delayed optical feedback display a wide range of dynamical regimes, which have found various practical applications. They also provide excellent testbeds for data analysis tools for characterizing complex signals. Recently, several of us analyzed experimental intensity time-traces and quantitatively identified the onset of different dynamical regimes as the laser current increases. Specifically, we identified the onset of low-frequency fluctuations (LFFs), where the laser intensity displays abrupt dropouts, and the onset of coherence collapse (CC), where the intensity fluctuations are highly irregular. Here we map these regimes when both the laser current and the feedback strength vary. We show that the shape of the distribution of intensity fluctuations (characterized by the standard deviation, the skewness, and the kurtosis) makes it possible to distinguish among noise, LFFs and CC, and to quantitatively determine (in spite of the gradual nature of the transitions) the boundaries of the three regimes. Ordinal analysis of the inter-dropout time intervals consistently identifies the three regimes in the same parameter regions as the analysis of the intensity distribution. Simulations of the well-known time-delayed Lang-Kobayashi model are in good qualitative agreement with the observations.
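
    The distribution-shape features used above are straightforward to compute from an intensity time trace. The sketch below uses synthetic traces (Gaussian noise versus a signal with rare abrupt dropouts) to show how skewness and kurtosis separate the regimes; in practice the thresholds would be data-driven.

```python
import numpy as np
from scipy import stats

def regime_features(intensity):
    """Shape features of an intensity time trace: standard deviation,
    skewness, and kurtosis (the quantities used above to separate regimes)."""
    x = np.asarray(intensity, float)
    return x.std(), stats.skew(x), stats.kurtosis(x)

rng = np.random.default_rng(5)
noise = rng.normal(0, 1, 20_000)                      # symmetric, kurtosis ~ 0
lff = rng.normal(0, 1, 20_000)
dropouts = rng.random(20_000) < 0.01                  # rare abrupt dropouts
lff[dropouts] -= rng.exponential(8, dropouts.sum())   # heavy negative tail

for name, x in [("noise", noise), ("LFF-like", lff)]:
    print(name, [round(v, 2) for v in regime_features(x)])
# The LFF-like trace shows strong negative skewness and high kurtosis.
```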

  1. A better norm-referenced grading using the standard deviation criterion.

    PubMed

    Chan, Wing-shing

    2014-01-01

    The commonly used norm-referenced grading assigns grades to rank-ordered students in fixed percentiles. It has the disadvantage of ignoring the actual distance of scores among students. A simple norm-referenced grading via standard deviation is suggested for routine educational grading. The number of standard deviations of a student's score from the class mean is used as the common yardstick to measure achievement level. The cumulative probability of a normal distribution is referenced to help decide the number of students included within a grade. Results of the foremost 12 students from a medical examination were used to illustrate this grading method. Grading by standard deviation seemed to produce better cutoffs, allocating grades to students more in accordance with their differential achievements, and had less chance of creating arbitrary cutoffs between two similarly scored students than grading by fixed percentile. Grading by standard deviation has more advantages and is more flexible than grading by fixed percentile for norm-referenced grading.
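
    A minimal sketch of grading by standard deviation; the cutoffs and grade labels here are illustrative, not the scheme recommended in the paper.

```python
import numpy as np

def grade_by_sd(scores, cutoffs=(-1.0, 0.0, 1.0), labels=("D", "C", "B", "A")):
    """Assign grades from the number of standard deviations a score lies
    from the class mean (hypothetical cutoffs and labels)."""
    z = (scores - scores.mean()) / scores.std(ddof=1)
    return [labels[np.searchsorted(cutoffs, zi)] for zi in z]

scores = np.array([92, 88, 85, 84, 83, 80, 78, 75, 74, 70, 62, 55], float)
print(list(zip(scores, grade_by_sd(scores))))
# Grade boundaries fall where scores are far apart in SD units, rather
# than at fixed percentile ranks that may split near-identical scores.
```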

  2. Measurement variability error for estimates of volume change

    Treesearch

    James A. Westfall; Paul L. Patterson

    2007-01-01

    Using quality assurance data, measurement variability distributions were developed for attributes that affect tree volume prediction. Random deviations from the measurement variability distributions were applied to 19381 remeasured sample trees in Maine. The additional error due to measurement variation and measurement bias was estimated via a simulation study for...

  3. Perception of midline deviations in smile esthetics by laypersons.

    PubMed

    Ferreira, Jamille Barros; Silva, Licínio Esmeraldo da; Caetano, Márcia Tereza de Oliveira; Motta, Andrea Fonseca Jardim da; Cury-Saramago, Adriana de Alcantara; Mucha, José Nelson

    2016-01-01

    To evaluate the esthetic perception of upper dental midline deviation by laypersons, and whether adjacent structures influence their judgment. An album with 12 randomly distributed frontal-view photographs of the smile of a woman with the midline digitally deviated was evaluated by 95 laypersons. The frontal-view smiling photograph was modified to create deviations of 1 mm to 5 mm in the upper midline to the left side. The photographs were cropped in two different manners and divided into two groups of six photographs each: group LCN included the lips, chin, and two-thirds of the nose, and group L included the lips only. The laypersons rated each smile using a visual analog scale (VAS). The Wilcoxon test, Student's t-test and Mann-Whitney test were applied, adopting a 5% level of significance. Laypersons were able to perceive midline deviations starting at 1 mm. Statistically significant results (p < 0.05) were found for all multiple comparisons of the values in photographs of group LCN and for almost all comparisons in photographs of group L. Comparisons between the photographs of groups LCN and L showed statistically significant values (p < 0.05) when the deviation was 1 mm. Laypersons were able to perceive upper dental midline deviations of 1 mm and above when the adjacent structures of the smile were included, and deviations of 2 mm and above when only the lips were included. The visualization of structures adjacent to the smile influenced the perception of midline deviation.

  4. ALTERED PHALANX FORCE DIRECTION DURING POWER GRIP FOLLOWING STROKE

    PubMed Central

    Enders, Leah R.

    2015-01-01

    Many stroke survivors with severe impairment can grasp only with a power grip. Yet, little knowledge is available on altered power grip after stroke, other than reduced power grip strength. This study characterized stroke survivors’ static power grip during 100% and 50% maximum grip. Each phalanx force’s angular deviation from the normal direction and its contribution to total normal force was compared for 11 stroke survivors and 11 age-matched controls. Muscle activities and skin coefficient of friction (COF) were additionally compared for another 20 stroke and 13 age-matched control subjects. The main finding was that stroke survivors gripped with a 34% greater phalanx force angular deviation of 19±2° compared to controls of 14±1° (p<.05). Stroke survivors’ phalanx force angular deviation was closer to the 23° threshold of slippage between the phalanx and grip surface, which may explain increased likelihood of object dropping in stroke survivors. In addition, this altered phalanx force direction decreases normal grip force by tilting the force vector, indicating a partial role of phalanx force angular deviation in reduced grip strength post stroke. Greater phalanx force angular deviation may biomechanically result from more severe underactivation of stroke survivors’ first dorsal interosseous (FDI) and extensor digitorum communis (EDC) muscles compared to their flexor digitorum superficialis (FDS) or somatosensory deficit. While stroke survivors’ maximum power grip strength was approximately half of the controls’, the distribution of their remaining strength over the fingers and phalanges did not differ, indicating evenly distributed grip force reduction over the entire hand. PMID:25795079

  5. Distributed activation energy model parameters of some Turkish coals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gunes, M.; Gunes, S.K.

    2008-07-01

    A multi-reaction model based on distributed activation energy has been applied to some Turkish coals. The kinetic parameters of the distributed activation energy model were calculated via a computer program developed for this purpose. It was observed that the mean of the activation energy distribution varies between 218 and 248 kJ/mol, and the standard deviation of the activation energy distribution varies between 32 and 70 kJ/mol. The correlations between the kinetic parameters of the distributed activation energy model and certain properties of coal have been investigated.
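
    A distributed activation energy model with a Gaussian f(E) can be evaluated numerically in a few lines. The sketch below computes the unreacted fraction at a constant heating rate; the parameter values are chosen to lie within the ranges reported above, and the simple rectangle-rule integration is an assumption of this sketch, not the authors' fitting code.

```python
import numpy as np

R = 8.314e-3  # gas constant, kJ/(mol K)

def daem_unreacted(T_end, beta, k0, E_mean, E_sd):
    """Unreacted fraction under a Gaussian distributed activation energy
    model at constant heating rate beta (K/s), heating from 300 K.
    k0 in 1/s; energies in kJ/mol. Minimal numerical sketch."""
    E = np.linspace(E_mean - 5 * E_sd, E_mean + 5 * E_sd, 400)
    dE = E[1] - E[0]
    f = np.exp(-0.5 * ((E - E_mean) / E_sd) ** 2) / (E_sd * np.sqrt(2 * np.pi))

    Tgrid = np.linspace(300.0, T_end, 2000)
    dT = Tgrid[1] - Tgrid[0]
    # psi(E) = (k0/beta) * integral over T' of exp(-E / (R T')) dT'
    psi = (k0 / beta) * np.exp(-E[None, :] / (R * Tgrid[:, None])).sum(axis=0) * dT
    return float((np.exp(-psi) * f).sum() * dE)

# Hypothetical parameters inside the reported ranges (mean 218-248, sd 32-70)
for T in (600.0, 800.0, 1000.0):
    print(T, round(daem_unreacted(T, beta=0.17, k0=1e13, E_mean=230.0, E_sd=50.0), 3))
```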

  6. Evaluation of illumination system uniformity for wide-field biomedical hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Sawyer, Travis W.; Siri Luthman, A.; E Bohndiek, Sarah

    2017-04-01

    Hyperspectral imaging (HSI) systems collect both spatial (morphological) and spectral (chemical) information from a sample. HSI can provide sensitive analysis for biological and medical applications, for example, simultaneously measuring reflectance and fluorescence properties of a tissue, which together with structural information could improve early cancer detection and tumour characterisation. Illumination uniformity is a critical pre-condition for quantitative data extraction from an HSI system. Non-uniformity can cause glare, specular reflection and unwanted shading, which negatively impact statistical analysis procedures used to extract abundance of different chemical species. Here, we model and evaluate several illumination systems frequently used in wide-field biomedical imaging to test their potential for HSI. We use the software LightTools and FRED. The analysed systems include: a fibre ring light; a light emitting diode (LED) ring; and a diffuse scattering dome. Each system is characterised for spectral, spatial, and angular uniformity, as well as transfer efficiency. Furthermore, an approach to measure uniformity using the Kullback-Leibler divergence (KLD) is introduced. The KLD is generalisable to arbitrary illumination shapes, making it an attractive approach for characterising illumination distributions. Although the systems are quite comparable in their spatial and spectral uniformity, the most uniform angular distribution is achieved using a diffuse scattering dome, yielding a contrast of 0.503 and average deviation of 0.303 over a ±60° field of view with a 3.9% model error in the angular domain. Our results suggest that conventional illumination sources can be applied in HSI, but in the case of low light levels, bespoke illumination sources may offer improved performance.
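
    The KLD-based uniformity measure reduces to comparing the normalized irradiance map against a uniform reference over the same support. A minimal sketch, with a synthetic vignetted map standing in for a measured one:

```python
import numpy as np
from scipy.stats import entropy

def kld_uniformity(irradiance):
    """Kullback-Leibler divergence of a measured irradiance map from a
    perfectly uniform distribution (0 = ideal uniformity). A sketch of
    the uniformity-metric idea described above."""
    p = np.asarray(irradiance, float).ravel()
    p = p / p.sum()                      # normalize to a distribution
    q = np.full_like(p, 1.0 / p.size)    # uniform reference
    return entropy(p, q)                 # sum of p * log(p/q)

x = np.linspace(-1, 1, 128)
X, Y = np.meshgrid(x, x)
flat = np.ones_like(X)                                   # ideal illumination
vignetted = np.clip(1.0 - 0.5 * (X**2 + Y**2), 0, None)  # falls off at edges
print(kld_uniformity(flat), kld_uniformity(vignetted))
```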

  7. A three-dimensional code for muon propagation through the rock: MUSIC

    NASA Astrophysics Data System (ADS)

    Antonioli, P.; Ghetti, C.; Korolkova, E. V.; Kudryavtsev, V. A.; Sartorelli, G.

    1997-10-01

    We present a new three-dimensional Monte-Carlo code MUSIC (MUon SImulation Code) for muon propagation through the rock. All processes of muon interaction with matter with high energy loss (including the knock-on electron production) are treated as stochastic processes. The angular deviation and lateral displacement of muons due to multiple scattering, as well as bremsstrahlung, pair production and inelastic scattering are taken into account. The code has been applied to obtain the energy distribution and angular and lateral deviations of single muons at different depths underground. The muon multiplicity distributions obtained with MUSIC and CORSIKA (Extensive Air Shower simulation code) are also presented. We discuss the systematic uncertainties of the results due to different muon bremsstrahlung cross-sections.

  8. Simulation of alnico coercivity

    DOE PAGES

    Ke, Liqin; Skomski, Ralph; Hoffmann, Todd D.; ...

    2017-07-10

    Micromagnetic simulations of alnico show substantial deviations from Stoner-Wohlfarth behavior due to the unique size and spatial distribution of the rod-like Fe-Co phase formed during spinodal decomposition in an external magnetic field. Furthermore, the maximum coercivity is limited by single-rod effects, especially deviations from ellipsoidal shape, and by interactions between the rods. We consider both the exchange interaction between connected rods and the magnetostatic interaction between rods, and the results of our calculations show good agreement with recent experiments. Unlike systems dominated by magnetocrystalline anisotropy, coercivity in alnico is highly dependent on the size, shape, and geometric distribution of the Fe-Co phase, all factors that can be tuned with appropriate chemistry and thermal-magnetic annealing.

  9. System statistical reliability model and analysis

    NASA Technical Reports Server (NTRS)

    Lekach, V. S.; Rood, H.

    1973-01-01

    A digital computer code was developed to simulate the time-dependent behavior of the 5-kWe reactor thermoelectric system. The code was used to determine lifetime sensitivity coefficients for a number of system design parameters, such as thermoelectric module efficiency and degradation rate, radiator absorptivity and emissivity, fuel element barrier defect constant, beginning-of-life reactivity, etc. A probability distribution (mean and standard deviation) was estimated for each of these design parameters. Error analysis was then used to obtain a probability distribution for the system lifetime (mean = 7.7 years, standard deviation = 1.1 years). From this, the probability that the system will achieve the design goal of a 5-year lifetime is 0.993. This value represents an estimate of the degradation reliability of the system.
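
    The quoted reliability follows directly from the lifetime distribution: with a mean of 7.7 years and a standard deviation of 1.1 years, the probability of exceeding the 5-year goal is the upper tail of the distribution. A one-line check, assuming normality as the error analysis implies:

```python
from scipy import stats

# Lifetime treated as approximately normal with the propagated moments above
mean_life, sd_life = 7.7, 1.1           # years
p_meets_goal = stats.norm.sf(5.0, loc=mean_life, scale=sd_life)
print(round(p_meets_goal, 3))           # ~0.993, the reported reliability
```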

  10. Distribution and motions of atomic hydrogen in lenticular galaxies. X - The blue S0 galaxy NGC 5102

    NASA Technical Reports Server (NTRS)

    Van Woerden, H.; Van Driel, W.; Braun, R.; Rots, A. H.

    1993-01-01

    Results of the mapping of the blue gas-rich S0 galaxy NGC 5102 in the 21-cm H I line with a spatial resolution of 34 x 37 arcsec (Δα x Δδ) and a velocity resolution of 12 km/s are presented. The H I distribution has a pronounced central depression of 1.9 kpc radius, and most of the H I is concentrated in a 3.6 kpc wide ring with an average radius of 3.7 kpc, assuming a distance of 4 Mpc for NGC 5102. The maximum azimuthally averaged H I surface density in the ring is 1.4 solar mass/sq pc, comparable to that found in other S0 galaxies. The H I velocity field is quite regular, showing no evidence for large-scale deviations from circular rotation, and the H I is found to rotate in the plane of the stellar disk. Both the H I mass/blue luminosity ratio and the radial H I distribution are similar to those in early-type spirals. The H I may be an old disk or it may have been acquired through capture of a gas-rich smaller galaxy. The recent starburst in the nuclear region, which gave the galaxy its blue color, may have been caused by partial radial collapse of the gas disk, or by infall of a gas-rich dwarf galaxy.

  11. Prediction of Chain Propagation Rate Constants of Polymerization Reactions in Aqueous NIPAM/BIS and VCL/BIS Systems.

    PubMed

    Kröger, Leif C; Kopp, Wassja A; Leonhard, Kai

    2017-04-06

    Microgels have a wide range of possible applications and are therefore studied with increasing interest. Nonetheless, the microgel synthesis process and some of the resulting properties of the microgels, such as the cross-linker distribution within the microgels, are not yet fully understood. An in-depth understanding of the synthesis process is crucial for designing tailored microgels with desired properties. In this work, rate constants and reaction enthalpies of chain propagation reactions in aqueous N-isopropylacrylamide/N,N'-methylenebisacrylamide and aqueous N-vinylcaprolactam/N,N'-methylenebisacrylamide systems are calculated to identify the possible sources of an inhomogeneous cross-linker distribution in the resulting microgels. Gas-phase reaction rate constants are calculated from B2PLYPD3/aug-cc-pVTZ energies and B3LYPD3/tzvp geometries and frequencies. Then, solvation effects based on COSMO-RS are incorporated into the rate constants to obtain the desired liquid-phase reaction rate constants. The rate constants agree with experiments within a factor of 2-10, and the reaction enthalpies deviate less than 5 kJ/mol. Further, the effect of rate constants on the microgel growth process is analyzed, and it is shown that differences in the magnitude of the reaction rate constants are a source of an inhomogeneous cross-linker distribution within the resulting microgel.

  12. Importance of correlations and fluctuations on the initial source eccentricity in high-energy nucleus-nucleus collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alver, B.; Ballintijn, M.; Busza, W.

    2008-01-15

    In relativistic heavy-ion collisions, anisotropic collective flow is driven, event by event, by the initial eccentricity of the matter created in the nuclear overlap zone. Interpretation of the anisotropic flow data thus requires a detailed understanding of the effective initial source eccentricity of the event sample. In this paper, we investigate various ways of defining this effective eccentricity using the Monte Carlo Glauber (MCG) approach. In particular, we examine the participant eccentricity, which quantifies the eccentricity of the initial source shape by the major axes of the ellipse formed by the interaction points of the participating nucleons. We show that reasonable variation of the density parameters in the Glauber calculation, as well as variations in how matter production is modeled, do not significantly modify the already established behavior of the participant eccentricity as a function of collision centrality. Focusing on event-by-event fluctuations and correlations of the distributions of participating nucleons, we demonstrate that, depending on the achieved event-plane resolution, fluctuations in the elliptic flow magnitude v2 lead to most measurements being sensitive to the root-mean-square rather than the mean of the v2 distribution. Neglecting correlations among participants, we derive analytical expressions for the participant eccentricity cumulants as a function of the number of participating nucleons, N_part, keeping nonnegligible contributions up to O(1/N_part^3). We find that the derived expressions yield the same results as obtained from mixed-event MCG calculations, which remove the correlations stemming from the nuclear collision process. Most importantly, we conclude from the comparison with MCG calculations that the fourth-order participant eccentricity cumulant does not approach the spatial anisotropy obtained assuming a smooth nuclear matter distribution. In particular, for the Cu+Cu system, these quantities deviate from each other by almost a factor of 2 over a wide range in centrality. This deviation reflects the essential role of participant spatial correlations in the interaction of two nuclei.
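
    For reference, the participant eccentricity itself is computed from the variances and covariance of the participating nucleons' transverse positions. The sketch below applies the standard variance-based definition to a toy event; it is not the paper's full MCG machinery.

```python
import numpy as np

def participant_eccentricity(x, y):
    """Participant eccentricity from the transverse positions (x, y) of the
    participating nucleons, using the standard variance-based definition:
    eps = sqrt((sy2 - sx2)^2 + 4*sxy^2) / (sx2 + sy2)."""
    sx2 = np.var(x)
    sy2 = np.var(y)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return np.sqrt((sy2 - sx2) ** 2 + 4 * sxy ** 2) / (sx2 + sy2)

rng = np.random.default_rng(6)
# Toy almond-shaped overlap zone: wider in y than in x (hypothetical event)
n_part = 100
x = rng.normal(0, 2.0, n_part)
y = rng.normal(0, 3.0, n_part)
print(round(participant_eccentricity(x, y), 3))  # ~0.38 plus fluctuations
```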

  13. An empirical determination of the minimum number of measurements needed to estimate the mean random vitrinite reflectance of disseminated organic matter

    USGS Publications Warehouse

    Barker, C.E.; Pawlewicz, M.J.

    1993-01-01

    In coal samples, published recommendations based on statistical methods suggest 100 measurements are needed to estimate the mean random vitrinite reflectance (Rv-r) to within ±2%. Our survey of published thermal maturation studies indicates that those using dispersed organic matter (DOM) mostly have an objective of acquiring 50 reflectance measurements. This smaller objective size in DOM versus coal samples poses a statistical contradiction, because the standard deviations of DOM reflectance distributions are typically larger, indicating that a greater sample size is needed to accurately estimate Rv-r in DOM. However, in studies of thermal maturation using DOM, even 50 measurements can be an unrealistic requirement given the small amount of vitrinite often found in such samples. Furthermore, there is generally a reduced need for assuring precision like that needed for coal applications. Therefore, a key question in thermal maturation studies using DOM is how many measurements of Rv-r are needed to adequately estimate the mean. Our empirical approach to this problem is to compute the reflectance distribution statistics: mean, standard deviation, skewness, and kurtosis in increments of 10 measurements. This study compares these intermediate computations of Rv-r statistics with a final one computed using all measurements for that sample. Vitrinite reflectance was measured on mudstone and sandstone samples taken from borehole M-25 in the Cerro Prieto, Mexico, geothermal system, which was selected because the rocks have a wide range of thermal maturation and a comparable humic DOM with depth. The results of this study suggest that after only 20-30 measurements the mean Rv-r is generally known to within 5%, and always to within 12%, of the mean Rv-r calculated using all of the measured particles. Thus, even in the worst case, the precision after measuring only 20-30 particles is in good agreement with the general precision of one decimal place recommended for mean Rv-r measurements on DOM. The coefficient of variation (V = standard deviation/mean) is proposed as a statistic to indicate the reliability of the mean Rv-r estimates made at n ≥ 20. This preliminary study suggests that a V below about 0.2 indicates a reliable mean, whereas a larger V suggests an unreliable mean in such small samples. © 1993.
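
    The incremental procedure is easy to mirror in code: recompute the statistics every 10 measurements and watch the estimate stabilize, with V as the reliability indicator. The reflectance values below are synthetic, not the borehole data.

```python
import numpy as np

def running_rvr_stats(reflectances, step=10):
    """Mean, standard deviation, and coefficient of variation computed in
    increments of `step` measurements, mirroring the incremental procedure
    described above (synthetic input; illustrative only)."""
    r = np.asarray(reflectances, float)
    for n in range(step, len(r) + 1, step):
        sub = r[:n]
        mean, sd = sub.mean(), sub.std(ddof=1)
        print(f"n={n:3d}  mean={mean:.3f}  sd={sd:.3f}  V={sd / mean:.2f}")

rng = np.random.default_rng(7)
running_rvr_stats(rng.normal(0.8, 0.12, 60))  # V ~ 0.15: a reliable mean
```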

  14. Diagnostic Ability of Wide-field Retinal Nerve Fiber Layer Maps Using Swept-Source Optical Coherence Tomography for Detection of Preperimetric and Early Perimetric Glaucoma.

    PubMed

    Lee, Won June; Na, Kyeong Ik; Kim, Young Kook; Jeoung, Jin Wook; Park, Ki Ho

    2017-06-01

    To evaluate the diagnostic ability of wide-field retinal nerve fiber layer (RNFL) maps with swept-source optical coherence tomography (SS-OCT) for detection of preperimetric (PPG) and early perimetric glaucoma (EG). One hundred eighty-four eyes, including 67 healthy eyes, 43 eyes with PPG, and 74 eyes with EG, were analyzed. Patients underwent a comprehensive ocular examination including red-free RNFL photography, visual field testing and wide-field SS-OCT scanning (DRI-OCT-1 Atlantis; Topcon, Tokyo, Japan). SS-OCT provides a wide-field RNFL thickness map and a SuperPixel map, which are composed of the RNFL deviation map of the peripapillary area and the deviation map of the composition of the ganglion cell layer with the inner plexiform layer and RNFL (GC-IPL+RNFL) in the macular area. The ability to discriminate PPG and EG from healthy eyes was assessed using sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) for all parameters and criteria provided by the wide-field SS-OCT scan. The wide-field RNFL thickness map using SS-OCT showed the highest sensitivity of PPG-diagnostic and EG-diagnostic performance compared with the other SS-OCT criteria based on the internal normative base (93.0 and 97.3%, respectively). Among the SS-OCT continuous parameters, the RFNL thickness of the 7 clock-hour, inferior and inferotemporal macular ganglion cell analyses showed the largest AUC of PPG-diagnostic and EG-diagnostic performance (AUC=0.809 to 0.865). The wide-field RNFL thickness map using SS-OCT performed well in distinguishing eyes with PPG and EG from healthy eyes. In the clinical setting, wide-field RNFL maps of SS-OCT can be useful tools for detection of early-stage glaucoma.

  15. A Spatio-Temporal Approach for Global Validation and Analysis of MODIS Aerosol Products

    NASA Technical Reports Server (NTRS)

    Ichoku, Charles; Chu, D. Allen; Mattoo, Shana; Kaufman, Yoram J.; Remer, Lorraine A.; Tanre, Didier; Slutsker, Ilya; Holben, Brent N.; Lau, William K. M. (Technical Monitor)

    2001-01-01

    With the launch of the MODIS sensor on the Terra spacecraft, new data sets of the global distribution and properties of aerosol are being retrieved, and need to be validated and analyzed. A system has been put in place to generate spatial statistics (mean, standard deviation, direction and rate of spatial variation, and spatial correlation coefficient) of the MODIS aerosol parameters over more than 100 validation sites spread around the globe. Corresponding statistics are also computed from temporal subsets of AERONET-derived aerosol data. The means and standard deviations of identical parameters from MODIS and AERONET are compared. Although their means compare favorably, their standard deviations reveal some influence of surface effects on the MODIS aerosol retrievals over land, especially at low aerosol loading. The direction and rate of spatial variation from MODIS are used to study the spatial distribution of aerosols at various locations either individually or comparatively. This paper introduces the methodology for generating and analyzing the data sets used by the two MODIS aerosol validation papers in this issue.

  16. Second-order (2+1)-dimensional anisotropic hydrodynamics

    NASA Astrophysics Data System (ADS)

    Bazow, Dennis; Heinz, Ulrich; Strickland, Michael

    2014-11-01

    We present a complete formulation of second-order (2+1)-dimensional anisotropic hydrodynamics. The resulting framework generalizes leading-order anisotropic hydrodynamics by allowing for deviations of the one-particle distribution function from the spheroidal form assumed at leading order. We derive complete second-order equations of motion for the additional terms in the macroscopic currents generated by these deviations from their kinetic definition using a Grad-Israel-Stewart 14-moment ansatz. The result is a set of coupled partial differential equations for the momentum-space anisotropy parameter, the effective temperature, the transverse components of the fluid four-velocity, and the viscous tensor components generated by deviations of the distribution from spheroidal form. We then perform a quantitative test of our approach by applying it to the case of one-dimensional boost-invariant expansion in the relaxation time approximation (RTA), in which case it is possible to numerically solve the Boltzmann equation exactly. We demonstrate that the second-order anisotropic hydrodynamics approach provides an excellent approximation to the exact (0+1)-dimensional RTA solution for both small and large values of the shear viscosity.

  17. Parameter estimation method and updating of regional prediction equations for ungaged sites in the desert region of California

    USGS Publications Warehouse

    Barth, Nancy A.; Veilleux, Andrea G.

    2012-01-01

    The U.S. Geological Survey (USGS) is currently updating at-site flood frequency estimates for USGS streamflow-gaging stations in the desert region of California. The at-site flood-frequency analysis is complicated by short record lengths (less than 20 years is common) and numerous zero flows/low outliers at many sites. Estimates of the three parameters (mean, standard deviation, and skew) required for fitting the log Pearson Type 3 (LP3) distribution are likely to be highly unreliable based on the limited and heavily censored at-site data. In a generalization of the recommendations in Bulletin 17B, a regional analysis was used to develop regional estimates of all three parameters (mean, standard deviation, and skew) of the LP3 distribution. A regional skew value of zero from a previously published report was used with a new estimated mean squared error (MSE) of 0.20. A weighted least squares (WLS) regression method was used to develop both a regional standard deviation and a regional mean model based on annual peak-discharge data for 33 USGS stations throughout California's desert region. At-site standard deviation and mean values were determined by using an expected moments algorithm (EMA) method for fitting the LP3 distribution to the logarithms of annual peak-discharge data. Additionally, a multiple Grubbs-Beck (MGB) test, a generalization of the test recommended in Bulletin 17B, was used for detecting multiple potentially influential low outliers in a flood series. The WLS regression found that no basin characteristics could explain the variability of the standard deviation. Consequently, a constant regional standard deviation model was selected, resulting in a log-space value of 0.91 with an MSE of 0.03 log units. Drainage area, however, was found to be statistically significant in explaining the site-to-site variability in the mean. The linear WLS regional mean model based on drainage area had a pseudo-R² of 51 percent and an MSE of 0.32 log units. The regional parameter estimates were then used to develop a set of equations for estimating flows with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for ungaged basins. The final equations are functions of drainage area. Average standard errors of prediction for these regression equations range from 214.2 to 856.2 percent.

  18. Distribution of diameters for Erdős-Rényi random graphs.

    PubMed

    Hartmann, A K; Mézard, M

    2018-03-01

    We study the distribution of diameters d of Erdős-Rényi random graphs with average connectivity c. The diameter d is the maximum among all the shortest distances between pairs of nodes in a graph and an important quantity for all dynamic processes taking place on graphs. Here we study the distribution P(d) numerically for various values of c, in the nonpercolating and percolating regimes. Using large-deviation techniques, we are able to reach small probabilities like 10^{-100} which allow us to obtain the distribution over basically the full range of the support, for graphs up to N=1000 nodes. For values c<1, our results are in good agreement with analytical results, proving the reliability of our numerical approach. For c>1 the distribution is more complex and no complete analytical results are available. For this parameter range, P(d) exhibits an inflection point, which we found to be related to a structural change of the graphs. For all values of c, we determined the finite-size rate function Φ(d/N) and were able to extrapolate numerically to N→∞, indicating that the large-deviation principle holds.
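
    The bulk of P(d) can be probed with plain sampling, which makes the setting concrete even though it cannot reach the 10^-100 tails that require the paper's large-deviation techniques. A minimal Python sketch using networkx follows; measuring the diameter of the largest component when the graph is disconnected is a convention assumed here.

      # Sketch: simple sampling of the diameter distribution P(d) for G(N, c/N).
      # Plain sampling resolves only the bulk of P(d); the far tails need
      # dedicated large-deviation (importance-sampling) methods.
      import collections
      import networkx as nx

      def diameter_histogram(N=1000, c=2.0, samples=50, seed=1):
          counts = collections.Counter()
          for i in range(samples):
              G = nx.gnp_random_graph(N, c / N, seed=seed + i)
              # The graph may be disconnected; measure the largest component.
              giant = G.subgraph(max(nx.connected_components(G), key=len))
              counts[nx.diameter(giant)] += 1
          return {d: n / samples for d, n in sorted(counts.items())}

      print(diameter_histogram())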

  19. Distribution of diameters for Erdős-Rényi random graphs

    NASA Astrophysics Data System (ADS)

    Hartmann, A. K.; Mézard, M.

    2018-03-01

    We study the distribution of diameters d of Erdős-Rényi random graphs with average connectivity c . The diameter d is the maximum among all the shortest distances between pairs of nodes in a graph and an important quantity for all dynamic processes taking place on graphs. Here we study the distribution P (d ) numerically for various values of c , in the nonpercolating and percolating regimes. Using large-deviation techniques, we are able to reach small probabilities like 10-100 which allow us to obtain the distribution over basically the full range of the support, for graphs up to N =1000 nodes. For values c <1 , our results are in good agreement with analytical results, proving the reliability of our numerical approach. For c >1 the distribution is more complex and no complete analytical results are available. For this parameter range, P (d ) exhibits an inflection point, which we found to be related to a structural change of the graphs. For all values of c , we determined the finite-size rate function Φ (d /N ) and were able to extrapolate numerically to N →∞ , indicating that the large-deviation principle holds.

  20. A Monte Carlo Approach to Unidimensionality Testing in Polytomous Rasch Models

    ERIC Educational Resources Information Center

    Christensen, Karl Bang; Kreiner, Svend

    2007-01-01

    Many statistical tests are designed to test the different assumptions of the Rasch model, but only a few are directed at detecting multidimensionality. The Martin-Löf test is an attractive approach, the disadvantage being that its null distribution deviates strongly from the asymptotic chi-square distribution for most realistic sample sizes. A Monte…

  1. Comparing biomarker measurements to a normal range: when to use standard error of the mean (SEM) or standard deviation (SD) confidence intervals tests.

    PubMed

    Pleil, Joachim D

    2016-01-01

    This commentary is the second of a series outlining one specific concept in interpreting biomarker data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around the next step, the choice between using the standard error of the mean and the calculated standard deviation to compare or predict measurement results.
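
    The distinction is easy to make concrete: an SD-based interval brackets where individual measurements fall (a normal range), while a SEM-based interval brackets only the estimate of the mean. A short Python illustration with synthetic biomarker values:

      # Sketch: SD-based reference interval vs SEM-based confidence interval.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      x = rng.normal(loc=5.0, scale=1.2, size=40)    # hypothetical biomarker values

      mean, sd = x.mean(), x.std(ddof=1)
      sem = sd / np.sqrt(len(x))
      z = stats.norm.ppf(0.975)                      # 95% two-sided

      print(f"reference interval (SD): {mean - z*sd:.2f} .. {mean + z*sd:.2f}")
      print(f"CI of the mean (SEM):    {mean - z*sem:.2f} .. {mean + z*sem:.2f}")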

  2. Equity theory and fair inequality: a neuroeconomic study.

    PubMed

    Cappelen, Alexander W; Eichele, Tom; Hugdahl, Kenneth; Specht, Karsten; Sørensen, Erik Ø; Tungodden, Bertil

    2014-10-28

    The present paper reports results from, to our knowledge, the first study designed to examine the neuronal responses to income inequality in situations in which individuals have made different contributions in terms of work effort. We conducted an experiment that included a prescanning phase in which the participants earned money by working, and a neuronal scanning phase in which we examined how the brain responded when the participants evaluated different distributions of their earnings. We provide causal evidence for the relative contribution of work effort being crucial for understanding the hemodynamic response in the brain to inequality. We found a significant hemodynamic response in the striatum to deviations from the distribution of income that was proportional to work effort, but found no effect of deviations from the equal distribution of income. We also observed a striking correlation between the hemodynamic response in the striatum and the self-reported evaluation of the income distributions. Our results provide, to our knowledge, the first set of neuronal evidence for equity theory and suggest that people distinguish between fair and unfair inequalities.

  3. Extreme statistics and index distribution in the classical 1d Coulomb gas

    NASA Astrophysics Data System (ADS)

    Dhar, Abhishek; Kundu, Anupam; Majumdar, Satya N.; Sabhapandit, Sanjib; Schehr, Grégory

    2018-07-01

    We consider a 1D gas of N charged particles confined by an external harmonic potential and interacting via the 1D Coulomb potential. For this system we show that in equilibrium the charges settle, on average, uniformly and symmetrically on a finite region centred around the origin. We study the statistics of the position of the rightmost particle and show that the limiting distribution describing its typical fluctuations is different from the Tracy–Widom distribution found in the 1D log-gas. We also compute the large deviation functions which characterise the atypical fluctuations of the rightmost particle's position far away from its mean value. In addition, we study the gap between the two rightmost particles as well as the index N₊, i.e. the number of particles on the positive semi-axis. We compute the limiting distributions associated with the typical fluctuations of these observables as well as the corresponding large deviation functions. We provide numerical support for our analytical predictions. Part of these results were announced in a recent letter, Dhar et al (2017 Phys. Rev. Lett. 119 060601).

  4. Weibull Modulus Estimated by the Non-linear Least Squares Method: A Solution to Deviation Occurring in Traditional Weibull Estimation

    NASA Astrophysics Data System (ADS)

    Li, T.; Griffiths, W. D.; Chen, J.

    2017-11-01

    The Maximum Likelihood method and the Linear Least Squares (LLS) method have been widely used to estimate Weibull parameters for reliability of brittle and metal materials. In the last 30 years, many researchers have focused on the bias of Weibull modulus estimation, and some improvements have been achieved, especially in the case of the LLS method. However, there is a shortcoming in these methods for a specific type of data, where the lower tail deviates dramatically from the well-known linear fit in a classic LLS Weibull analysis. This deviation is commonly found in the measured properties of materials, and previous applications of the LLS method to this kind of dataset yield an unreliable linear regression. This deviation was previously thought to be due to physical flaws (i.e., defects) contained in materials. However, this paper demonstrates that the deviation can also be caused by the linear transformation of the Weibull function that occurs in the traditional LLS method. Accordingly, it may not be appropriate to carry out a Weibull analysis according to the linearized Weibull function, and the Non-linear Least Squares method (Non-LS) is instead recommended for the Weibull modulus estimation of casting properties.
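
    The contrast between the two estimators can be sketched on synthetic strength data: the LLS route regresses the linearized CDF, while the Non-LS route fits the untransformed Weibull CDF. The median-rank plotting positions below use Benard's approximation, one of several conventions.

      # Sketch: Weibull modulus by linearized least squares (LLS) vs non-linear
      # least squares (Non-LS) on the untransformed CDF.
      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(3)
      x = np.sort(rng.weibull(8.0, size=30) * 300.0)          # synthetic strengths
      F = (np.arange(1, len(x) + 1) - 0.3) / (len(x) + 0.4)   # Benard median ranks

      # LLS: ln(-ln(1 - F)) = m*ln(x) - m*ln(eta)
      m_lls, _ = np.polyfit(np.log(x), np.log(-np.log(1.0 - F)), 1)

      # Non-LS: fit the Weibull CDF directly, avoiding the linearizing transform
      weib_cdf = lambda s, m, eta: 1.0 - np.exp(-(s / eta) ** m)
      (m_nls, eta_nls), _ = curve_fit(weib_cdf, x, F, p0=(m_lls, x.mean()))

      print(f"LLS modulus: {m_lls:.2f}   Non-LS modulus: {m_nls:.2f}")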

  5. Models of Lift and Drag Coefficients of Stalled and Unstalled Airfoils in Wind Turbines and Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Spera, David A.

    2008-01-01

    Equations are developed with which to calculate lift and drag coefficients along the spans of torsionally-stiff rotating airfoils of the type used in wind turbine rotors and wind tunnel fans, at angles of attack in both the unstalled and stalled aerodynamic regimes. Explicit adjustments are made for the effects of aspect ratio (length to chord width) and airfoil thickness ratio. Calculated lift and drag parameters are compared to measured parameters for 55 airfoil data sets including 585 test points. Mean deviation was found to be -0.4 percent and standard deviation was 4.8 percent. When the proposed equations were applied to the calculation of power from a stall-controlled wind turbine tested in a NASA wind tunnel, mean deviation from 54 data points was -1.3 percent and standard deviation was 4.0 percent. Pressure-rise calculations for a large wind tunnel fan deviated by 2.7 percent (mean) and 4.4 percent (standard). The assumption that a single set of lift and drag coefficient equations can represent the stalled aerodynamic behavior of a wide variety of airfoils was found to be satisfactory.

  6. A model of curved saccade trajectories: spike rate adaptation in the brainstem as the cause of deviation away.

    PubMed

    Kruijne, Wouter; Van der Stigchel, Stefan; Meeter, Martijn

    2014-03-01

    The trajectory of saccades to a target is often affected whenever there is a distractor in the visual field. Distractors can cause a saccade to deviate towards their location or away from it. The oculomotor mechanisms that produce deviation towards distractors have been thoroughly explored in behavioral, neurophysiological and computational studies. The mechanisms underlying deviation away, on the other hand, remain unclear. Behavioral findings suggest a mechanism of spatially focused, top-down inhibition in a saccade map, and deviation away has become a tool to investigate such inhibition. However, this inhibition hypothesis has little neuroanatomical or neurophysiological support, and recent findings go against it. Here, we propose that deviation away results from an unbalanced saccade drive from the brainstem, caused by spike rate adaptation in brainstem long-lead burst neurons. Adaptation to stimulation in the direction of the distractor results in an unbalanced drive away from it. An existing model of the saccade system was extended with this theory. The resulting model simulates a wide range of findings on saccade trajectories, including findings that have classically been interpreted to support inhibition views. Furthermore, the model replicated the effect of saccade latency on deviation away, but predicted this effect would be absent with large (400 ms) distractor-target onset asynchrony. This prediction was confirmed in an experiment, which demonstrates that the theory both explains classical findings on saccade trajectories and predicts new findings. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Adequate margins for random setup uncertainties in head-and-neck IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Astreinidou, Eleftheria; Bel, Arjan; Raaijmakers, Cornelis P.J.

    2005-03-01

    Purpose: To investigate the effect of random setup uncertainties on the highly conformal dose distributions produced by intensity-modulated radiotherapy (IMRT) for clinical head-and-neck cancer patients and to determine adequate margins to account for those uncertainties. Methods and materials: We have implemented in our clinical treatment planning system the possibility of simulating normally distributed patient setup displacements, translations, and rotations. The planning CT data of 8 patients with Stage T1-T3N0M0 oropharyngeal cancer were used. The clinical target volumes of the primary tumor (CTV_primary) and of the lymph nodes (CTV_elective) were expanded by 0.0, 1.5, 3.0, and 5.0 mm in all directions, creating the planning target volumes (PTVs). We performed IMRT dose calculation using our class solution for each PTV margin, resulting in the conventional static plans. Then, the system recalculated the plan for each positioning displacement derived from a normal distribution with σ = 2 mm and σ = 4 mm (standard deviation) for translational deviations and σ = 1° for rotational deviations. The dose distributions of the 30 fractions were summed, resulting in the actual plan. The CTV dose coverage of the actual plans was compared with that of the static plans. Results: Random translational deviations of σ = 2 mm and rotational deviations of σ = 1° did not affect the CTV_primary volume receiving 95% of the prescribed dose (V_95) regardless of the PTV margin used. A V_95 reduction of 3% and 1% for a 0.0-mm and 1.5-mm PTV margin, respectively, was observed for σ = 4 mm. The V_95 of the contralateral CTV_elective was approximately 1% and 5% lower than that of the static plan for σ = 2 mm and σ = 4 mm, respectively, and for PTV margins < 5.0 mm. An additional reduction of 1% was observed when rotational deviations were included. The same effect was observed for the ipsilateral CTV_elective but with smaller dose differences than those for the contralateral side. The effect of the random uncertainties on the mean dose to the parotid glands was not significant. The maximal dose to the spinal cord increased by a maximum of 3 Gy. Conclusions: The margins to account for random setup uncertainties, in our clinical IMRT solution, should be 1.5 mm and 3.0 mm in the case of σ = 2 mm and σ = 4 mm, respectively, for the CTV_primary. Larger margins (5.0 mm), however, should be applied to the CTV_elective, if the goal of treatment is a V_95 value of at least 99%.
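
    The mechanics of the simulation — per-fraction shifts drawn from a normal distribution, summed over 30 fractions, then scored as CTV coverage — can be reproduced in a 1D toy model. The profile shape, CTV size, and penumbra below are hypothetical stand-ins for the clinical dose distributions.

      # Sketch: random per-fraction setup error and CTV coverage in 1D.
      import numpy as np

      rng = np.random.default_rng(7)
      x = np.linspace(-60, 60, 1201)              # position (mm)
      ctv = np.abs(x) <= 25                       # 50-mm CTV

      def v95(margin_mm, sigma_mm, fractions=30):
          # Static plan: 100% dose across PTV = CTV + margin, 5-mm linear penumbra.
          edge = 25 + margin_mm
          static = np.clip((edge + 5 - np.abs(x)) / 5, 0, 1) * 100.0
          shifts = rng.normal(0.0, sigma_mm, size=fractions)
          summed = np.mean([np.interp(x - s, x, static) for s in shifts], axis=0)
          return 100.0 * np.mean(summed[ctv] >= 95.0)   # % of CTV at >= 95% dose

      for margin in (0.0, 1.5, 3.0, 5.0):
          print(margin, round(v95(margin, sigma_mm=4.0), 1))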

  8. Ethiopia Adolescents’ Attitudes and Expectations Deviate from Current Infant and Young Child Feeding Recommendations

    PubMed Central

    Hadley, Craig; Lindstrom, David; Belachew, Tefera; Tessema, Fasil

    2008-01-01

    Purpose: Sub-optimal infant and child feeding practices are highly prevalent in many developing countries for reasons that are not entirely understood. Taking an anthropological perspective, we assess whether nulliparous youth have formulated attitudes and expectations in the domain of infant and child feeding behaviors, the extent to which these varied by location and gender, and the extent to which they deviated from current international recommendations. Methods: A population-based sample of 2077 adolescent girls and boys (13–17 years) in southwest Ethiopia answered a questionnaire on infant and young child feeding behaviors. Results: Results indicate high levels of agreement among adolescents on items relating to infant and young child feeding behaviors. Attitudes and intentions deviated widely from current international recommendations. Youth overwhelmingly endorsed items related to early introduction of non-breast-milk liquids and foods. For girls, fewer than 11% agreed that a 5-month-old infant should be exclusively breastfed, and only 26% agreed that a 6-month-old infant should be consuming some animal-source foods. Few sex differences emerged, and youth responses matched larger community patterns. Conclusions: The results indicate that attitudes and expectations deviate widely from current international child feeding guidelines among soon-to-be parents. To the extent that youth models are directive, these findings suggest that youth enter into parenthood with suboptimal information about infant and child feeding. Such information will reproduce poor health across generations as the largest cohort of adolescents ever becomes parents. These results suggest specific points of entry for adolescent nutrition education interventions. PMID:18710680

  9. Simulated laser fluorosensor signals from subsurface chlorophyll distributions

    NASA Technical Reports Server (NTRS)

    Venable, D. D.; Khatun, S.; Punjabi, A.; Poole, L.

    1986-01-01

    A semianalytic Monte Carlo model has been used to simulate laser fluorosensor signals returned from subsurface distributions of chlorophyll. This study assumes the only constituent of the ocean medium is the common coastal zone dinoflagellate Prorocentrum minimum. The concentration is represented by Gaussian distributions in which the location of the distribution maximum and the standard deviation are variable. Most of the qualitative features observed in the fluorescence signal for total chlorophyll concentrations up to 1.0 microg/liter can be accounted for with a simple analytic solution assuming a rectangular chlorophyll distribution function.

  10. Perception of midline deviations in smile esthetics by laypersons

    PubMed Central

    Ferreira, Jamille Barros; da Silva, Licínio Esmeraldo; Caetano, Márcia Tereza de Oliveira; da Motta, Andrea Fonseca Jardim; Cury-Saramago, Adriana de Alcantara; Mucha, José Nelson

    2016-01-01

    Objective: To evaluate the esthetic perception of upper dental midline deviation by laypersons and whether adjacent structures influence their judgment. Methods: An album with 12 randomly distributed frontal view photographs of the smile of a woman with the midline digitally deviated was evaluated by 95 laypersons. The frontal view smiling photograph was modified to create from 1 mm to 5 mm deviations in the upper midline to the left side. The photographs were cropped in two different manners and divided into two groups of six photographs each: group LCN included the lips, chin, and two-thirds of the nose, and group L included the lips only. The laypersons rated each smile using a visual analog scale (VAS). Wilcoxon test, Student’s t-test and Mann-Whitney test were applied, adopting a 5% level of significance. Results: Laypersons were able to perceive midline deviations starting at 1 mm. Statistically significant results (p < 0.05) were found for all multiple comparisons of the values in photographs of group LCN and for almost all comparisons in photographs of group L. Comparisons between the photographs of groups LCN and L showed statistically significant values (p < 0.05) when the deviation was 1 mm. Conclusions: Laypersons were able to perceive upper dental midline deviations of 1 mm and above when the adjacent structures of the smile were included, and of 2 mm and above when only the lips were included. The visualization of structures adjacent to the smile influenced the perception of midline deviation. PMID:28125140

  11. Current density and catalyst-coated membrane resistance distribution of hydro-formed metallic bipolar plate fuel cell short stack with 250 cm2 active area

    NASA Astrophysics Data System (ADS)

    Haase, S.; Moser, M.; Hirschfeld, J. A.; Jozwiak, K.

    2016-01-01

    An automotive fuel cell with an active area of 250 cm² is investigated in a 4-cell short stack with a current and temperature distribution device next to the bipolar plate with 560 current and 140 temperature segments. The applied fuel cell consists of bipolar plates constructed of 75-µm-thick, welded stainless-steel foils and a graphitic coating. The electrical conductivities of the bipolar plate and gas diffusion layer assembly are determined ex-situ with the current scan shunt module, with a 6% deviation in in-plane conductivity. The current density distribution is evaluated up to 2.4 A cm⁻². The entire cell's investigated volumetric power density is 4.7 kW l⁻¹, and its gravimetric power density is 4.3 kW kg⁻¹ at an average cell voltage of 0.5 V. The current density distribution is determined without influencing the operating cell. In addition, the current density distribution in the catalyst-coated membrane and its effective resistivity distribution are evaluated with a finite volume discretisation of Ohm's law. The deviation between the current density distributions in the catalyst-coated membrane and the bipolar plate is determined.

  12. Influence of the nucleus area distribution on the survival fraction after charged particles broad beam irradiation.

    PubMed

    Wéra, A-C; Barazzuol, L; Jeynes, J C G; Merchant, M J; Suzuki, M; Kirkby, K J

    2014-08-07

    It is well known that broad beam irradiation with heavy ions leads to variation in the number of hits received by each cell, as the distribution of particles follows Poisson statistics. Although the nucleus area will determine the number of hits received for a given dose, variation amongst the irradiated cell population is generally not considered. In this work, we investigate the effect of the nucleus area distribution on the survival fraction. More specifically, this work aims to explain the deviation, or tail, which might be observed in the survival fraction at high irradiation doses. For this purpose, the nucleus area distribution was added to the beam Poisson statistics and the Linear-Quadratic model in order to fit the experimental data. As shown in this study, nucleus size variation, and the associated Poisson statistics, can lead to an upward survival trend after broad beam irradiation. The influence of the distribution parameters (mean area and standard deviation) was studied using a normal distribution, along with the Linear-Quadratic model parameters (α and β). Finally, the model proposed here was successfully tested against the survival fraction of LN18 cells irradiated with an 85 keV µm⁻¹ carbon ion broad beam for which the distribution in the area of the nucleus had been determined.
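
    The model described — beam Poisson statistics combined with the Linear-Quadratic response, averaged over a normal distribution of nucleus areas — can be sketched numerically. The dose per traversal z = 0.16·LET/A (z in Gy, LET in keV/µm, A in µm²) is a standard microdosimetric estimate; the LQ parameters and area distribution below are illustrative, not the paper's fitted values.

      # Sketch: survival with Poisson hit statistics averaged over a normal
      # nucleus-area distribution; parameters are illustrative only.
      import numpy as np

      LET = 85.0                       # keV/um, as in the carbon-ion experiment
      alpha, beta = 0.3, 0.03          # hypothetical LQ parameters (Gy^-1, Gy^-2)

      def survival(D, mean_area=100.0, sd_area=25.0, n_cells=20000, seed=5):
          rng = np.random.default_rng(seed)
          A = np.clip(rng.normal(mean_area, sd_area, n_cells), 10.0, None)  # um^2
          z = 0.16 * LET / A           # dose per traversal for each cell (Gy)
          n = rng.poisson(D / z)       # Poisson number of traversals at mean dose D
          d = n * z                    # dose actually delivered to each nucleus
          return np.mean(np.exp(-alpha * d - beta * d * d))

      for D in (1.0, 2.0, 4.0, 6.0, 8.0):
          print(D, survival(D))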

  13. Non-specific filtering of beta-distributed data.

    PubMed

    Wang, Xinhui; Laird, Peter W; Hinoue, Toshinori; Groshen, Susan; Siegmund, Kimberly D

    2014-06-19

    Non-specific feature selection is a dimension reduction procedure performed prior to cluster analysis of high dimensional molecular data. Not all measured features are expected to show biological variation, so only the most varying are selected for analysis. In DNA methylation studies, DNA methylation is measured as a proportion, bounded between 0 and 1, with variance a function of the mean. Filtering on standard deviation biases the selection of probes to those with mean values near 0.5. We explore the effect this has on clustering, and develop alternate filter methods that utilize a variance stabilizing transformation for Beta distributed data and do not share this bias. We compared results for 11 different non-specific filters on eight Infinium HumanMethylation data sets, selected to span a variety of biological conditions. We found that for data sets having a small fraction of samples showing abnormal methylation of a subset of normally unmethylated CpGs, a characteristic of the CpG island methylator phenotype in cancer, a novel filter statistic that utilized a variance-stabilizing transformation for Beta distributed data outperformed the common filter of using standard deviation of the DNA methylation proportion, or its log-transformed M-value, in its ability to detect the cancer subtype in a cluster analysis. However, the standard deviation filter always performed among the best for distinguishing subgroups of normal tissue. The novel filter and standard deviation filter tended to favour features in different genome contexts; for the same data set, the novel filter always selected more features from CpG island promoters and the standard deviation filter always selected more features from non-CpG island intergenic regions. Interestingly, despite selecting largely non-overlapping sets of features, the two filters did find sample subsets that overlapped for some real data sets. We found two different filter statistics that tended to prioritize features with different characteristics, each performed well for identifying clusters of cancer and non-cancer tissue, and identifying a cancer CpG island hypermethylation phenotype. Since cluster analysis is for discovery, we would suggest trying both filters on any new data sets, evaluating the overlap of features selected and clusters discovered.
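
    The two filter families can be contrasted in a few lines. The arcsine square-root transform used below is a common variance-stabilizing transform for proportions and stands in for the paper's statistic, which may differ in detail.

      # Sketch: SD filter vs a variance-stabilized filter for Beta-distributed
      # methylation proportions (features x samples).
      import numpy as np

      def top_features(B, k=1000, method="vst"):
          if method == "sd":
              score = B.std(axis=1, ddof=1)          # biased toward means near 0.5
          else:
              score = np.arcsin(np.sqrt(B)).std(axis=1, ddof=1)
          return np.argsort(score)[::-1][:k]         # most-varying features

      rng = np.random.default_rng(2)
      B = rng.beta(0.5, 0.5, size=(50000, 60))       # hypothetical methylation matrix
      overlap = set(top_features(B, method="sd")) & set(top_features(B, method="vst"))
      print(len(overlap), "features selected by both filters")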

  14. Effect of photogrammetric reading error on slope-frequency distributions. [obtained from Apollo 17 mission

    NASA Technical Reports Server (NTRS)

    Moore, H. J.; Wu, S. C.

    1973-01-01

    The effect of reading error on two hypothetical slope-frequency distributions and two slope-frequency distributions from actual lunar data is examined in order to ensure that these errors do not cause excessive overestimates of algebraic standard deviations for the slope-frequency distributions. The errors introduced are insignificant when the reading error is small and the slope length is large. A method for correcting the errors in slope-frequency distributions is presented and applied to 11 distributions obtained from Apollo 15, 16, and 17 panoramic camera photographs and Apollo 16 metric camera photographs.

  15. Testing ecological and universal models of body shape and child health using a global sample of infants and young children.

    PubMed

    Hadley, Craig; Hruschka, Daniel J

    2017-11-01

    To test whether the risk of child illness is best predicted by deviations from a population-specific growth distribution or from a universal growth distribution. Child weight-for-height and child illness data from 433 776 children (1-59 months) from 47 different low- and lower-income countries are used in regression models to estimate for each country the child basal weight-for-height. This study assesses the extent to which individuals within populations deviate from their basal slenderness. It uses correlation and regression techniques to estimate the relationship between child illness (diarrhoea, fever or cough) and basal weight-for-height, and residual weight-for-height. In bivariate tests, basal weight-for-height z-score did not predict the country-level prevalence of child illness (r² = -0.01, n = 47, p = 0.53), but excess weight-for-height did (r² = 0.14, p < 0.01). At the individual level, household wealth was negatively associated with the odds that a child is reported as ill (beta = -0.04, p < 0.001, n = 433 776) and basal weight-for-height was not (beta = 0.20, p = 0.27). Deviations from country-specific basal weight-for-height were negatively associated with the likelihood of illness (beta = -0.13, p < 0.01), indicating a 13% reduction in illness risk for every 0.1 standard deviation increase in residual weight-for-height. Conclusion: These results are consistent with the idea that populations may differ in their body slenderness, and that deviations from this body form may predict the risk of childhood illness.

  16. Quenched Large Deviations for Simple Random Walks on Percolation Clusters Including Long-Range Correlations

    NASA Astrophysics Data System (ADS)

    Berger, Noam; Mukherjee, Chiranjib; Okamura, Kazuki

    2018-03-01

    We prove a quenched large deviation principle (LDP) for a simple random walk on a supercritical percolation cluster (SRWPC) on Z^d (d ≥ 2). The models under interest include classical Bernoulli bond and site percolation as well as models that exhibit long range correlations, like the random cluster model, the random interlacement and the vacant set of random interlacements (for d ≥ 3) and the level sets of the Gaussian free field (d ≥ 3). Inspired by the methods developed by Kosygina et al. (Commun Pure Appl Math 59:1489-1521, 2006) for proving quenched LDP for elliptic diffusions with a random drift, and by Yilmaz (Commun Pure Appl Math 62(8):1033-1075, 2009) and Rosenbluth (Quenched large deviations for multidimensional random walks in a random environment: a variational formula. Ph.D. thesis, NYU, arXiv:0804.1444v1) for similar results regarding elliptic random walks in random environment, we take the point of view of the moving particle and prove a large deviation principle for the quenched distribution of the pair empirical measures of the environment Markov chain in the non-elliptic case of SRWPC. Via a contraction principle, this reduces easily to a quenched LDP for the distribution of the mean velocity of the random walk and both rate functions admit explicit variational formulas. The main difficulty in our set up lies in the inherent non-ellipticity as well as the lack of translation-invariance stemming from conditioning on the fact that the origin belongs to the infinite cluster. We develop a unifying approach for proving quenched large deviations for SRWPC based on exploiting coercivity properties of the relative entropies in the context of convex variational analysis, combined with input from ergodic theory and invoking geometric properties of the supercritical percolation cluster.

  17. Quenched Large Deviations for Simple Random Walks on Percolation Clusters Including Long-Range Correlations

    NASA Astrophysics Data System (ADS)

    Berger, Noam; Mukherjee, Chiranjib; Okamura, Kazuki

    2017-12-01

    We prove a quenched large deviation principle (LDP) for a simple random walk on a supercritical percolation cluster (SRWPC) on Z^d (d ≥ 2). The models under interest include classical Bernoulli bond and site percolation as well as models that exhibit long range correlations, like the random cluster model, the random interlacement and the vacant set of random interlacements (for d ≥ 3) and the level sets of the Gaussian free field (d ≥ 3). Inspired by the methods developed by Kosygina et al. (Commun Pure Appl Math 59:1489-1521, 2006) for proving quenched LDP for elliptic diffusions with a random drift, and by Yilmaz (Commun Pure Appl Math 62(8):1033-1075, 2009) and Rosenbluth (Quenched large deviations for multidimensional random walks in a random environment: a variational formula. Ph.D. thesis, NYU, arXiv:0804.1444v1) for similar results regarding elliptic random walks in random environment, we take the point of view of the moving particle and prove a large deviation principle for the quenched distribution of the pair empirical measures of the environment Markov chain in the non-elliptic case of SRWPC. Via a contraction principle, this reduces easily to a quenched LDP for the distribution of the mean velocity of the random walk and both rate functions admit explicit variational formulas. The main difficulty in our set up lies in the inherent non-ellipticity as well as the lack of translation-invariance stemming from conditioning on the fact that the origin belongs to the infinite cluster. We develop a unifying approach for proving quenched large deviations for SRWPC based on exploiting coercivity properties of the relative entropies in the context of convex variational analysis, combined with input from ergodic theory and invoking geometric properties of the supercritical percolation cluster.

  18. Evaluation of True Power Luminous Efficiency from Experimental Luminance Values

    NASA Astrophysics Data System (ADS)

    Tsutsui, Tetsuo; Yamamato, Kounosuke

    1999-05-01

    A method for obtaining the true external power luminous efficiency from experimentally obtained luminance in organic light-emitting diodes (LEDs) was demonstrated. Conventional two-layer organic LEDs with different electron-transport layer thicknesses were prepared. Spatial distributions of emission intensities were observed. Large deviations in both emission spectra and spatial emission patterns were observed when the electron-transport layer thickness was varied. The deviation of emission patterns from the standard Lambertian pattern was found to cause overestimations of power luminous efficiencies as large as 30%. A method for evaluating correction factors was proposed.

  19. Synthesis and characteristics of polyarylene ether sulfones

    NASA Technical Reports Server (NTRS)

    Viswanathan, R.; Johnson, B. C.; Ward, T. C.; Mcgrath, J. E.

    1981-01-01

    A method utilizing potassium carbonate/dimethyl acetamide, as base and solvent respectively, was used for the synthesis of several homopolymers and copolymers derived from various bisphenols. It is demonstrated that this method deviates from simple second-order kinetics, a deviation due to the heterogeneous nature of the reaction. Also, it is shown that a liquid-induced crystallization process can improve the solvent resistance of these polymers. Finally, a Monte Carlo simulation of the triad distribution of monomers in nonequilibrium copolycondensation is discussed.

  20. Statistical physics approaches to financial fluctuations

    NASA Astrophysics Data System (ADS)

    Wang, Fengzhong

    2009-12-01

    Complex systems attract many researchers from various scientific fields. Financial markets are one of these widely studied complex systems. Statistical physics, which was originally developed to study large systems, provides novel ideas and powerful methods to analyze financial markets. The study of financial fluctuations characterizes market behavior, and helps to better understand the underlying market mechanism. Our study focuses on volatility, a fundamental quantity to characterize financial fluctuations. We examine equity data of the entire U.S. stock market during 2001 and 2002. To analyze the volatility time series, we develop a new approach, called return interval analysis, which examines the time intervals between two successive volatilities exceeding a given value threshold. We find that the return interval distribution displays scaling over a wide range of thresholds. This scaling is valid for a range of time windows, from one minute up to one day. Moreover, our results are similar for commodities, interest rates, currencies, and for stocks of different countries. Further analysis shows some systematic deviations from a scaling law, which we can attribute to nonlinear correlations in the volatility time series. We also find a memory effect in return intervals for different time scales, which is related to the long-term correlations in the volatility. To further characterize the mechanism of price movement, we simulate the volatility time series using two different models, fractionally integrated generalized autoregressive conditional heteroscedasticity (FIGARCH) and fractional Brownian motion (fBm), and test these models with the return interval analysis. We find that both models can mimic time memory but only fBm shows scaling in the return interval distribution. In addition, we examine the volatility of daily opening to closing and of closing to opening. We find that each volatility distribution has a power law tail. Using the detrended fluctuation analysis (DFA) method, we show long-term auto-correlations in these volatility time series. We also analyze return, the actual price changes of stocks, and find that the returns over the two sessions are often anti-correlated.
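
    The core of the return-interval analysis is compact: collect the waiting times between successive exceedances of a volatility threshold and rescale them by their mean, the normalization under which scaling is reported. A minimal sketch on a synthetic heavy-tailed series follows; real volatility additionally carries the long-term correlations responsible for the memory effects.

      # Sketch: return intervals between volatility exceedances, rescaled by
      # their mean. The synthetic series is an i.i.d. stand-in; correlations in
      # real volatility are what produce the reported memory effects.
      import numpy as np

      def scaled_return_intervals(vol, threshold):
          t = np.flatnonzero(vol > threshold)   # times of exceedances
          tau = np.diff(t)                      # return intervals
          return tau / tau.mean()               # rescaled intervals

      rng = np.random.default_rng(4)
      vol = np.abs(rng.standard_t(df=3, size=100_000))
      for q in (1.0, 1.5, 2.0):                 # thresholds in units of the SD
          s = scaled_return_intervals(vol, q * vol.std())
          print(q, round(np.median(s), 3), round(s.std(), 3))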

  1. SDSS-IV MaNGA: a distinct mass distribution explored in slow-rotating early-type galaxies

    NASA Astrophysics Data System (ADS)

    Rong, Yu; Li, Hongyu; Wang, Jie; Gao, Liang; Li, Ran; Ge, Junqiang; Jing, Yingjie; Pan, Jun; Fernández-Trincado, J. G.; Valenzuela, Octavio; Ortíz, Erik Aquino

    2018-06-01

    We study the radial acceleration relation (RAR) for early-type galaxies (ETGs) in the SDSS MaNGA MPL5 data set. The complete ETG sample shows a slightly offset RAR from the relation reported by McGaugh et al. (2016) at the low-acceleration end; we find that the deviation is due to the fact that the slow rotators show a systematically higher acceleration relation than McGaugh's RAR, while the fast rotators show an acceleration relation consistent with McGaugh's RAR. The difference between the acceleration relations of the fast and slow rotators is significant at the 1σ level, suggesting that the acceleration relation correlates with galactic spin, and that the slow rotators may have a different mass distribution compared with fast rotators and late-type galaxies. We suspect that the acceleration relation deviation of slow rotators may be attributed to more galaxy merger events, which would disrupt the original spins and correlated distributions of baryons and dark matter orbits in galaxies.
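
    For reference, the RAR fitting function of McGaugh et al. (2016) against which the offset is measured is g_obs = g_bar/(1 - exp(-(g_bar/g†)^(1/2))) with g† ≈ 1.2×10⁻¹⁰ m s⁻²; a systematically higher relation at the low-acceleration end, as found here for slow rotators, appears as an excess over this curve. A one-function sketch:

      # Sketch: the McGaugh et al. (2016) RAR fitting function.
      import numpy as np

      G_DAG = 1.2e-10  # acceleration scale g-dagger, m s^-2

      def rar(g_bar):
          return g_bar / (1.0 - np.exp(-np.sqrt(g_bar / G_DAG)))

      for g_bar in np.logspace(-12, -9, 7):
          print(f"g_bar = {g_bar:.2e}  ->  g_obs = {rar(g_bar):.2e}")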

  2. Statistical Characteristics of the Gaussian-Noise Spikes Exceeding the Specified Threshold as Applied to Discharges in a Thundercloud

    NASA Astrophysics Data System (ADS)

    Klimenko, V. V.

    2017-12-01

    We obtain expressions for the probabilities of normal-noise spikes with a Gaussian correlation function and for the probability density of the inter-spike intervals. In contrast to delta-correlated noise, for which the intervals follow an exponential law, the probability of a subsequent spike depends on the previous spike, and the interval-distribution law deviates from the exponential one for a finite noise-correlation time (frequency-bandwidth restriction). This deviation is most pronounced for a low detection threshold. Similarity is observed between the behaviors of the distributions of the inter-discharge intervals in a thundercloud and of the noise spikes as the repetition rate of the discharges/spikes varies, this rate being determined by the ratio of the detection threshold to the root-mean-square value of the noise. The results of this work can be useful for the quantitative description of the statistical characteristics of noise spikes and for studying the role of fluctuations in discharge initiation in a thundercloud.
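
    The qualitative picture is easy to reproduce numerically: white noise smoothed with a Gaussian kernel acquires a Gaussian correlation function, and the intervals between its threshold up-crossings deviate from the exponential law, most visibly at low thresholds. A short sketch (for an exponential law the ratio of interval standard deviation to mean would be 1):

      # Sketch: inter-spike intervals of band-limited Gaussian noise vs the
      # exponential law of delta-correlated noise.
      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      rng = np.random.default_rng(6)
      noise = gaussian_filter1d(rng.standard_normal(2_000_000), sigma=10.0)
      noise /= noise.std()                      # unit-variance correlated noise

      for thr in (0.5, 1.5, 2.5):               # threshold / RMS
          up = np.flatnonzero((noise[1:] > thr) & (noise[:-1] <= thr))
          tau = np.diff(up)                     # inter-spike intervals
          print(thr, round(tau.mean(), 1), round(tau.std() / tau.mean(), 2))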

  3. Bell-Curve Based Evolutionary Optimization Algorithm

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Laba, K.; Kincaid, R.

    1998-01-01

    The paper presents an optimization algorithm that falls in the category of genetic, or evolutionary, algorithms. While bit exchange is the basis of most Genetic Algorithms (GA) in research and applications in America, alternatives in the same evolutionary category that use a direct, geometrical approach have gained popularity in Europe and Asia. The Bell-Curve Based Evolutionary Algorithm (BCB) is in this alternative category and is distinguished by the use of a combination of n-dimensional geometry and the normal distribution, the bell curve, in the generation of the offspring. The tool for creating a child is a geometrical construct comprising a line connecting two parents and a weighted point on that line. The point that defines the child deviates from the weighted point in two directions, parallel and orthogonal to the connecting line, the deviation in each direction obeying a probabilistic distribution, as sketched below. Tests showed satisfactory performance of BCB. The principal advantage of BCB is its controllability via the normal distribution parameters and the geometrical construct variables.
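
    A sketch of that construct, with the weight and the two spread parameters as tunable inputs (the particular values are illustrative, not those of the paper):

      # Sketch: BCB offspring generation. The child is a weighted point on the
      # line joining two parents, perturbed by normal deviates parallel and
      # orthogonal to that line.
      import numpy as np

      def bcb_child(p1, p2, w=0.5, sd_par=0.3, sd_orth=0.1, rng=None):
          rng = rng or np.random.default_rng()
          d = p2 - p1
          length = np.linalg.norm(d)
          u = d / length                         # unit vector along the line
          r = rng.standard_normal(len(p1))
          orth = r - (r @ u) * u                 # random orthogonal direction
          orth /= np.linalg.norm(orth)
          return (p1 + w * d
                  + rng.normal(0, sd_par) * length * u
                  + rng.normal(0, sd_orth) * length * orth)

      print(bcb_child(np.zeros(5), np.ones(5)))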

  4. Similarity Measures for Protein Ensembles

    PubMed Central

    Lindorff-Larsen, Kresten; Ferkinghoff-Borg, Jesper

    2009-01-01

    Analyses of similarities and changes in protein conformation can provide important information regarding protein function and evolution. Many scores, including the commonly used root mean square deviation, have therefore been developed to quantify the similarities of different protein conformations. However, instead of examining individual conformations it is in many cases more relevant to analyse ensembles of conformations that have been obtained either through experiments or from methods such as molecular dynamics simulations. We here present three approaches that can be used to compare conformational ensembles in the same way as the root mean square deviation is used to compare individual pairs of structures. The methods are based on the estimation of the probability distributions underlying the ensembles and subsequent comparison of these distributions. We first validate the methods using a synthetic example from molecular dynamics simulations. We then apply the algorithms to revisit the problem of ensemble averaging during structure determination of proteins, and find that an ensemble refinement method is able to recover the correct distribution of conformations better than standard single-molecule refinement. PMID:19145244
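
    The general recipe — estimate the probability distributions underlying two ensembles and compare them — can be shown on a one-dimensional stand-in; the divergence used below (Jensen-Shannon on kernel density estimates) is an assumption for illustration, not the paper's exact scores.

      # Sketch: comparing two conformational ensembles through estimated
      # densities of a 1D structural coordinate.
      import numpy as np
      from scipy.stats import gaussian_kde
      from scipy.spatial.distance import jensenshannon

      rng = np.random.default_rng(8)
      ens_a = rng.normal(1.0, 0.3, 500)   # e.g., a distance or dihedral coordinate
      ens_b = rng.normal(1.2, 0.5, 500)

      grid = np.linspace(-1.0, 4.0, 400)
      p, q = gaussian_kde(ens_a)(grid), gaussian_kde(ens_b)(grid)
      print(jensenshannon(p / p.sum(), q / q.sum()))   # 0 means identical ensembles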

  5. Optoelectronic frequency discriminated phase tuning technology and its applications

    NASA Astrophysics Data System (ADS)

    Lin, Gong-Ru; Chang, Yung-Cheng

    2000-07-01

    By using a phase-tunable optoelectronic phase-locked loop, we are able to continuously change the phase as well as the delay time of optically distributed microwave clock signals or optical pulse trains. The advantages of the proposed technique include wide-band operation up to 20 GHz, wide-range tuning up to 640 degrees, a high tuning resolution of <6×10⁻² degree/mV, ultra-low short-term phase fluctuation and drift of 4.7×10⁻² degree and 3.4×10⁻³ degree/min, good linearity with acceptable deviations, and a frequency-independent transfer function with a slope of nearly 90 degrees/volt. The novel optoelectronic phase shifter is implemented with a DC-voltage-controlled, optoelectronic-mixer-based, frequency-down-converted digital phase-locked loop. The maximum delay time is continuously tunable up to 3.9 ns for optical pulses repeated at 500 MHz from a gain-switched laser diode. This corresponds to a delay responsivity of about 0.54 ps/mV. The use of the OEPS as an optoelectronic delay-time controller for optical pulses is demonstrated with a temporal resolution of <0.2 ps. Electro-optic sampling of high-frequency microwave signals by using the in-situ delay-time-tunable pulsed laser as a novel optical probe is preliminarily reported.

  6. Vertebrate Left-Right Asymmetry: What Can Nodal Cascade Gene Expression Patterns Tell Us?

    PubMed Central

    Schweickert, Axel; Ott, Tim; Kurz, Sabrina; Tingler, Melanie; Maerker, Markus; Fuhl, Franziska; Blum, Martin

    2017-01-01

    Laterality of inner organs is a widespread characteristic of vertebrates and beyond. It is ultimately controlled by the left-asymmetric activation of the Nodal signaling cascade in the lateral plate mesoderm of the neurula stage embryo, which results from a cilia-driven leftward flow of extracellular fluids at the left-right organizer. This scenario is widely accepted for laterality determination in wildtype specimens. Deviations from this norm come in different flavors. At the level of organ morphogenesis, laterality may be inverted (situs inversus) or non-concordant with respect to the main body axis (situs ambiguus or heterotaxia). At the level of Nodal cascade gene activation, expression may be inverted, bilaterally induced, or absent. In a given genetic situation, patterns may be randomized or predominantly lacking laterality (absence or bilateral activation). We propose that the distributions of patterns observed may be indicative of the underlying molecular defects, with randomizations being primarily caused by defects in the flow-generating ciliary set-up, and symmetrical patterns being the result of impaired flow sensing, on the left, the right, or both sides. This prediction, the reasoning of which is detailed in this review, pinpoints functions of genes whose role in laterality determination have remained obscure. PMID:29367579

  7. Nitrogen isotopic components in the early solar system

    NASA Technical Reports Server (NTRS)

    Kerridge, J. F.

    1994-01-01

    It is quite common to take the terrestrial atmospheric value of (15)N/(14)N (0.00366) as typical of nitrogen in the early solar system, but in fact there is little reason to suppose that this value had a nebula-wide significance. Indeed, it is not clear that there was a unique solar-system-wide (15)N/(14)N ratio, of whatever value. Here we review what is known about the distribution of the nitrogen isotopes among those solar-system objects that have been sampled so far and conclude that those isotopes reveal widespread inhomogeneity in the early solar system. Whether the isotopically distinct primordial components implied by this analysis were solid or gaseous or a mixture of both is not known. The isotopic composition of N in the Earth's mantle is controversial: estimates range from a 1.1 percent depletion in (15)N to a 1.4 percent enrichment. (Isotopic compositions will be expressed throughout as percent deviations from the terrestrial atmospheric value.) The present-day Martian atmosphere is characterized by a value of plus 62 percent, but this enrichment in (15)N is attributed to selective loss of (14)N from the Martian exosphere. Modelling of this fractionation leads to an estimated primordial composition similar to the terrestrial atmospheric value, though the precision of this model-dependent result is unclear.

  8. Exploration of Structural and Functional Variations Owing to Point Mutations in α-NAGA.

    PubMed

    Meshach Paul, D; Rajasekaran, R

    2018-03-01

    Schindler disease is a lysosomal storage disorder caused by deficient or defective activity of alpha-N-acetylgalactosaminidase (α-NAGA). Mutations in the gene encoding α-NAGA cause a wide range of disease, characterized by mild to severe clinical features. The molecular effects of these mutations are yet to be explored in detail. Therefore, this study was focused on four missense mutations of α-NAGA, namely S160C, E325K, R329Q and R329W. Native and mutant structures of α-NAGA were analysed to determine geometrical deviations such as root mean square deviation, root mean square fluctuation, percentage of residues in allowed regions of the Ramachandran plot and solvent accessible surface area, using a conformational sampling technique. Additionally, global energy-minimized structures of the native and mutants were further analysed to compute their intra-molecular interactions, hydrogen bond dilution and distribution of secondary structure. Docking studies were also performed to determine variations in binding energies between native and mutants. The deleterious effects of the mutants were evident from variations in their active-site residues with respect to spatial conformation and flexibility, compared with the native protein. Hence, the variations exhibited by the mutants S160C, E325K, R329Q and R329W relative to the native protein lead to the detrimental effects causing Schindler disease. This study computationally explains the underlying reasons for the pathogenesis of the disease, thereby aiding future researchers in drug development and disease management.

  9. Comparability of IQ Scores on Five Widely Used Intelligence Tests

    ERIC Educational Resources Information Center

    Hieronymus, A. N.; Stroud, James B.

    1969-01-01

    Attempts to fill research gap on testing by obtaining comparisons of deviation scores, at grade levels four, seven, and ten, from the California Test of Mental Maturity, Henmon-Nelson Tests, and Lorge-Thorndike Intelligence tests. Results tabulated. (CJ)

  10. Bidisperse and polydisperse suspension rheology at large solid fraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pednekar, Sidhant; Chun, Jaehun; Morris, Jeffrey F.

    At the same solid volume fraction, bidisperse and polydisperse suspensions display lower viscosities, and weaker normal stress response, compared to monodisperse suspensions. The reduction of viscosity associated with size distribution can be explained by an increase of the maximum flowable, or jamming, solid fraction φ_m. In this work, concentrated or "dense" suspensions are simulated under strong shearing, where thermal motion and repulsive forces are negligible, but we allow for particle contact with a mild frictional interaction with an interparticle friction coefficient of 0.2. Aspects of bidisperse suspension rheology are first revisited to establish that the approach reproduces established trends; the study of bidisperse suspensions at size ratios of large to small particle radii of 2 to 4 shows that a minimum in the viscosity occurs for ζ slightly above 0.5, where ζ = φ_large/φ is the fraction of the total solid volume occupied by the large particles. The simple shear flows of polydisperse suspensions with truncated normal and log-normal size distributions, and of bidisperse suspensions which are statistically equivalent to these polydisperse cases up to the third moment of the size distribution, are simulated and the rheologies are extracted. Prior work shows that such distributions with equivalent low-order moments have similar φ_m, and the rheological behaviors of the normal, log-normal and bidisperse cases are shown to be in close agreement for a wide range of standard deviation in particle size, with standard correlations which are functionally dependent on φ/φ_m providing excellent agreement with the rheology found in simulation. The close agreement of both viscosity and normal stress response between bi- and polydisperse suspensions demonstrates the controlling influence of the maximum packing fraction in noncolloidal suspensions. Microstructural investigations and the stress distribution according to particle size are also presented.

  11. Solar wind parameters and magnetospheric coupling studies

    NASA Technical Reports Server (NTRS)

    King, Joseph H.

    1986-01-01

    This paper presents distributions, means, and standard deviations of the fluxes of solar wind protons, momentum, and energy as observed near earth during the solar quiet and active years 1976 and 1979. Distributions of ratios of energies (Alfven Mach number, plasma beta) and distributions of interplanetary magnetic field orientations are also given. Finally, the uncertainties associated with the use of the libration point orbiting ISEE-3 spacecraft as a solar wind monitor are discussed.

  12. Nucleon distribution amplitudes from lattice QCD.

    PubMed

    Göckeler, Meinulf; Horsley, Roger; Kaltenbrunner, Thomas; Nakamura, Yoshifumi; Pleiter, Dirk; Rakow, Paul E L; Schäfer, Andreas; Schierholz, Gerrit; Stüben, Hinnerk; Warkentin, Nikolaus; Zanotti, James M

    2008-09-12

    We calculate low moments of the leading-twist and next-to-leading-twist nucleon distribution amplitudes on the lattice using two flavors of clover fermions. The results are presented in the MS-bar scheme at a scale of 2 GeV and can be immediately applied in phenomenological studies. We find that the deviation of the leading-twist nucleon distribution amplitude from its asymptotic form is less pronounced than sometimes claimed in the literature.

  13. Using Group Projects to Assess the Learning of Sampling Distributions

    ERIC Educational Resources Information Center

    Neidigh, Robert O.; Dunkelberger, Jake

    2012-01-01

    In an introductory business statistics course, student groups used sample data to compare a set of sample means to the theoretical sampling distribution. Each group was given a production measurement with a population mean and standard deviation. The groups were also provided an excel spreadsheet with 40 sample measurements per week for 52 weeks…
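
    The exercise reduces to a few lines of simulation: draw the 52 weekly samples, compute their means, and compare the spread of those means with σ/√n. The process parameters below are illustrative.

      # Sketch: comparing sample means to the theoretical sampling distribution.
      import numpy as np

      mu, sigma, n, weeks = 250.0, 12.0, 40, 52
      rng = np.random.default_rng(9)
      weekly_means = rng.normal(mu, sigma, size=(weeks, n)).mean(axis=1)

      print("observed SD of sample means:", round(weekly_means.std(ddof=1), 2))
      print("theoretical sigma/sqrt(n):  ", round(sigma / np.sqrt(n), 2))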

  14. Validating the operational bias and hypothesis of universal exponent in landslide frequency-area distribution.

    PubMed

    Huang, Jr-Chuan; Lee, Tsung-Yu; Teng, Tse-Yang; Chen, Yi-Chin; Huang, Cho-Ying; Lee, Cheing-Tung

    2014-01-01

    The decay exponent of the landslide frequency-area distribution is widely used for assessing the consequences of landslides, with some studies arguing that the exponent is universal and independent of mechanisms and environmental settings. However, the documented exponent values are diverse, and data processing is hypothesized to be responsible for this inconsistency. An elaborated statistical experiment and two actual landslide inventories were used here to demonstrate the influence of data processing on the determination of the exponent. Seven categories with different landslide numbers were generated from the predefined inverse-gamma distribution and then analyzed by three data-processing procedures (logarithmic binning, LB; normalized logarithmic binning, NLB; and the cumulative distribution function, CDF). Five different bin widths were also considered while applying LB and NLB. Following that, maximum likelihood estimation was used to estimate the exponents. The results showed that the exponents estimated by CDF were unbiased, while LB and NLB performed poorly. The two binning-based methods led to considerable biases that increased with landslide number and bin width. The standard deviations of the estimated exponents depended not just on the landslide number but also on the binning method and bin width. Both extremely few and extremely plentiful landslide numbers reduced the confidence of the estimated exponents, attributable to limited landslide numbers and considerable operational bias, respectively. The diverse documented exponents in the literature should therefore be adjusted accordingly. Our study strongly suggests that the considerable bias due to data processing should be constrained, and the data quality controlled, in order to advance the understanding of landslide processes.
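
    The contrast between the estimation routes can be sketched on a pure power-law sample (the study's synthetic parent is inverse-gamma; a power law is used here for brevity). The unbinned estimator is the continuous power-law MLE, α̂ = 1 + n/Σ ln(x_i/x_min), of Clauset et al.; the binned route regresses the log-binned density.

      # Sketch: unbinned maximum likelihood vs regression on logarithmic bins
      # for a power-law tail exponent.
      import numpy as np

      alpha_true, x_min, n = 2.4, 1.0, 5000
      rng = np.random.default_rng(10)
      x = x_min * (1.0 - rng.random(n)) ** (-1.0 / (alpha_true - 1.0))

      # Unbinned MLE (Clauset et al.)
      alpha_mle = 1.0 + n / np.log(x / x_min).sum()

      # Logarithmic binning + least-squares slope on the binned density
      edges = np.logspace(0.0, np.log10(x.max()), 20)
      hist, _ = np.histogram(x, bins=edges, density=True)
      mid = np.sqrt(edges[:-1] * edges[1:])     # geometric bin midpoints
      keep = hist > 0
      alpha_lb = -np.polyfit(np.log(mid[keep]), np.log(hist[keep]), 1)[0]

      print(f"MLE: {alpha_mle:.3f}   log-binned regression: {alpha_lb:.3f}")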

  15. Flexner 2.0-Longitudinal Study of Student Participation in a Campus-Wide General Pathology Course for Graduate Students at The University of Arizona.

    PubMed

    Briehl, Margaret M; Nelson, Mark A; Krupinski, Elizabeth A; Erps, Kristine A; Holcomb, Michael J; Weinstein, John B; Weinstein, Ronald S

    2016-01-01

    Faculty members from the Department of Pathology at The University of Arizona College of Medicine-Tucson have offered a 4-credit course on enhanced general pathology for graduate students since 1996. The course is titled, "Mechanisms of Human Disease." Between 1997 and 2016, 270 graduate students completed Mechanisms of Human Disease. The students came from 21 programs of study. Analysis of Variance, using course grade as the dependent and degree, program, gender, and year (1997-2016) as independent variables, indicated that there was no significant difference in final grade (F = 0.112; P = .8856) as a function of degree (doctorate: mean = 89.60, standard deviation = 5.75; master's: mean = 89.34, standard deviation = 6.00; certificate program: mean = 88.64, standard deviation = 8.25), specific type of degree program (F = 2.066, P = .1316; life sciences: mean = 89.95, standard deviation = 6.40; pharmaceutical sciences: mean = 90.71, standard deviation = 4.57; physical sciences: mean = 87.79, standard deviation = 5.17), or as a function of gender (F = 2.96, P = .0865; males: mean = 88.09, standard deviation = 8.36; females: mean = 89.58, standard deviation = 5.82). Students in the physical and life sciences performed equally well. Mechanisms of Human Disease is a popular course that provides students enrolled in a variety of graduate programs with a medical school-based course on mechanisms of diseases. The addition of 2 new medically oriented Master of Science degree programs has nearly tripled enrollment. This graduate level course also potentially expands the interdisciplinary diversity of participants in our interprofessional education and collaborative practice exercises.

  16. Flexner 2.0—Longitudinal Study of Student Participation in a Campus-Wide General Pathology Course for Graduate Students at The University of Arizona

    PubMed Central

    Briehl, Margaret M.; Nelson, Mark A.; Krupinski, Elizabeth A.; Erps, Kristine A.; Holcomb, Michael J.; Weinstein, John B.

    2016-01-01

    Faculty members from the Department of Pathology at The University of Arizona College of Medicine-Tucson have offered a 4-credit course on enhanced general pathology for graduate students since 1996. The course is titled, “Mechanisms of Human Disease.” Between 1997 and 2016, 270 graduate students completed Mechanisms of Human Disease. The students came from 21 programs of study. Analysis of Variance, using course grade as the dependent and degree, program, gender, and year (1997-2016) as independent variables, indicated that there was no significant difference in final grade (F = 0.112; P = .8856) as a function of degree (doctorate: mean = 89.60, standard deviation = 5.75; master’s: mean = 89.34, standard deviation = 6.00; certificate program: mean = 88.64, standard deviation = 8.25), specific type of degree program (F = 2.066, P = .1316; life sciences: mean = 89.95, standard deviation = 6.40; pharmaceutical sciences: mean = 90.71, standard deviation = 4.57; physical sciences: mean = 87.79, standard deviation = 5.17), or as a function of gender (F = 2.96, P = .0865; males: mean = 88.09, standard deviation = 8.36; females: mean = 89.58, standard deviation = 5.82). Students in the physical and life sciences performed equally well. Mechanisms of Human Disease is a popular course that provides students enrolled in a variety of graduate programs with a medical school-based course on mechanisms of diseases. The addition of 2 new medically oriented Master of Science degree programs has nearly tripled enrollment. This graduate level course also potentially expands the interdisciplinary diversity of participants in our interprofessional education and collaborative practice exercises. PMID:28725783

  17. Searching for Flickering Giants in the Ursa Minor Dwarf Spheroidal Galaxy

    NASA Astrophysics Data System (ADS)

    Montiel, Edward J.; Mighell, K. J.

    2010-01-01

    We present a preliminary analysis of three epochs of archival Hubble Space Telescope (HST) Wide Field Planetary Camera 2 (WFPC2) observations of a single field in the Ursa Minor (UMi) dwarf spheroidal (dSph) galaxy. These observations were obtained in 2000, 2002, and 2004 (GO-7341, GO-8776, GO-2004; PI: Olszewski). We expand upon the work of Mighell and Roederer (2004), who reported the existence of low-amplitude variability in red giant stars in the UMi dSph. We report the 16 brightest point sources (F606W <= 21.5 mag) that we are able to match between all 3 epochs. The 112 observations were analyzed with HSTphot. We tested for variability with a chi-squared statistic that had a softened photometric error, where 0.01 mag was added in quadrature to the reported HSTphot photometric error. We find that all 13 stars and 3 probable galaxies exhibit the same phenomenon as described in Mighell and Roederer, with peak-to-peak amplitudes ranging from 54 to 125 mmag on 10 minute timescales. If these objects were not varying, the deviates should be normally distributed. However, we find that the deviates have a standard deviation of 1.4. This leads to three possible conclusions: (1) the observed phenomenon is real, (2) an additional systematic error of 7 mmag needs to be added to account for additional photometric errors (possibly due to dithering), or (3) there was a small instrumental instability with the WFPC2 instrument from 2000 to 2004. E.J.M. was supported by the NOAO/KPNO Research Experience for Undergraduates (REU) Program, which is funded by the National Science Foundation Research Experiences for Undergraduates Program and the Department of Defense ASSURE program through Scientific Program Order No. 13 (AST-0754223) of the Cooperative Agreement No. AST-0132798 between the Association of Universities for Research in Astronomy (AURA) and the NSF.
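
    The variability test described above is straightforward to reproduce in outline. A minimal sketch with hypothetical magnitude and error arrays in place of real HSTphot output: the reported photometric error is softened by adding 0.01 mag in quadrature, and the chi-squared of the magnitudes about their weighted mean is compared against its degrees of freedom.

      import numpy as np

      def variability_chi2(mags, errs, floor=0.01):
          """Chi-squared about the weighted mean, with an error floor in quadrature."""
          mags = np.asarray(mags, dtype=float)
          soft = np.sqrt(np.asarray(errs, dtype=float) ** 2 + floor ** 2)
          w = 1.0 / soft ** 2
          mean = np.sum(w * mags) / np.sum(w)
          return np.sum(((mags - mean) / soft) ** 2), mags.size - 1

      chi2, dof = variability_chi2([21.30, 21.36, 21.28], [0.02, 0.03, 0.02])
      print(chi2 / dof)   # reduced chi-squared well above 1 suggests variability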

  18. Deviations of the visual upright in three dimensions in disorders of the brainstem: a clinical exploration.

    PubMed

    Frisén, Lars

    2010-12-01

    Deviations of the subjective visual vertical in the roll or fronto-parallel plane occur commonly in disorders of the brainstem and have been extensively explored. In contrast, little is known about deviations in other directions. The present retrospective study focused on deviations in the pitch (sagittal) direction in 176 patients with a wide variety of disorders. The test task was to set a self-illuminated rod in the apparent upright position, in total darkness. Abnormal results (outside ± 4°) were recorded in 58% of the subjects. Negative (top backward) deviations were the most common, particularly with mass lesions in the pineal region, obstructive hydrocephalus, cerebellar lesions and crowding at the craniocervical junction. Positive and negative deviations were about equally common with focal intra-axial lesions. Negative deviations appeared related to dorsal locations of lesions and vice versa. Normal pressure hydrocephalus, Parkinson's disease and progressive supranuclear palsy were associated with smaller deviations, without a clear directional preponderance, and a larger individual variability. Most subjects lacked overt clinical corollaries. The most common ocular signs were aqueduct syndromes (n = 17) and ocular tilt reactions (n = 12), which were associated with deviations in 47 and 92% of instances, respectively. Subjective corollaries of deviation were never reported, not even by those subjects who showed a dramatic improvement upon resolution of the underlying condition. Deviations were also assessed in roll in a subgroup of 40 patients with focal lesions. Thirty subjects returned abnormal results: 13% in roll, 47% in pitch and 40% in pitch and roll. The direction of roll deviation appeared primarily related to laterality, with clockwise deviations with right-sided lesions and vice versa. All subjects with ocular tilt reactions had combined pitch and roll deviations, implying a common neural substrate. Correlation analyses, geometrical modelling and experimental self-observations indicated that deviations in pitch were attributable to cyclotorsional asymmetries between the eyes. The frequent co-existence of abnormal pitch and roll results implies that the true axis of deviation in focal brainstem disorders commonly falls outside traditional reference planes. The term 'visual upright in three dimensions' is suggested to identify unrestricted measurements, preserving the established term 'visual vertical' for measurements confined to the roll plane. Assessment of the visual upright in three dimensions provides a new, quantitative angle on brainstem disorders. The test appears useful for identifying a ubiquitous yet clinically silent feature of brainstem disease and also for monitoring the evolution of underlying conditions. More detailed explorations appear well motivated.

  19. Exploring signatures of positive selection in pigmentation candidate genes in populations of East Asian ancestry

    PubMed Central

    2013-01-01

    Background Currently, there is very limited knowledge about the genes involved in normal pigmentation variation in East Asian populations. We carried out a genome-wide scan of signatures of positive selection using the 1000 Genomes Phase I dataset, in order to identify pigmentation genes showing putative signatures of selective sweeps in East Asia. We applied a broad range of methods to detect signatures of selection including: 1) Tests designed to identify deviations of the Site Frequency Spectrum (SFS) from neutral expectations (Tajima’s D, Fay and Wu’s H and Fu and Li’s D* and F*), 2) Tests focused on the identification of high-frequency haplotypes with extended linkage disequilibrium (iHS and Rsb) and 3) Tests based on genetic differentiation between populations (LSBL). Based on the results obtained from a genome-wide analysis of 25 kb windows, we constructed an empirical distribution for each statistic across all windows, and identified pigmentation genes that are outliers in the distribution. Results Our tests identified twenty genes that are relevant for pigmentation biology. Of these, eight genes (ATRN, EDAR, KLHL7, MITF, OCA2, TH, TMEM33 and TRPM1) were extreme outliers (top 0.1% of the empirical distribution) for at least one statistic, and twelve genes (ADAM17, BNC2, CTSD, DCT, EGFR, LYST, MC1R, MLPH, OPRM1, PDIA6, PMEL (SILV) and TYRP1) were in the top 1% of the empirical distribution for at least one statistic. Additionally, eight of these genes (BNC2, EGFR, LYST, MC1R, OCA2, OPRM1, PMEL (SILV) and TYRP1) have been associated with pigmentary traits in association studies. Conclusions We identified a number of putative pigmentation genes showing extremely unusual patterns of genetic variation in East Asia. Most of these genes are outliers for different tests and/or different populations, and have already been described in previous scans for positive selection, providing strong support to the hypothesis that recent selective sweeps left a signature in these regions. However, it will be necessary to carry out association and functional studies to demonstrate the involvement of these genes in normal pigmentation variation. PMID:23848512

  20. Exploring signatures of positive selection in pigmentation candidate genes in populations of East Asian ancestry.

    PubMed

    Hider, Jessica L; Gittelman, Rachel M; Shah, Tapan; Edwards, Melissa; Rosenbloom, Arnold; Akey, Joshua M; Parra, Esteban J

    2013-07-12

    Currently, there is very limited knowledge about the genes involved in normal pigmentation variation in East Asian populations. We carried out a genome-wide scan of signatures of positive selection using the 1000 Genomes Phase I dataset, in order to identify pigmentation genes showing putative signatures of selective sweeps in East Asia. We applied a broad range of methods to detect signatures of selection including: 1) Tests designed to identify deviations of the Site Frequency Spectrum (SFS) from neutral expectations (Tajima's D, Fay and Wu's H and Fu and Li's D* and F*), 2) Tests focused on the identification of high-frequency haplotypes with extended linkage disequilibrium (iHS and Rsb) and 3) Tests based on genetic differentiation between populations (LSBL). Based on the results obtained from a genome-wide analysis of 25 kb windows, we constructed an empirical distribution for each statistic across all windows, and identified pigmentation genes that are outliers in the distribution. Our tests identified twenty genes that are relevant for pigmentation biology. Of these, eight genes (ATRN, EDAR, KLHL7, MITF, OCA2, TH, TMEM33 and TRPM1) were extreme outliers (top 0.1% of the empirical distribution) for at least one statistic, and twelve genes (ADAM17, BNC2, CTSD, DCT, EGFR, LYST, MC1R, MLPH, OPRM1, PDIA6, PMEL (SILV) and TYRP1) were in the top 1% of the empirical distribution for at least one statistic. Additionally, eight of these genes (BNC2, EGFR, LYST, MC1R, OCA2, OPRM1, PMEL (SILV) and TYRP1) have been associated with pigmentary traits in association studies. We identified a number of putative pigmentation genes showing extremely unusual patterns of genetic variation in East Asia. Most of these genes are outliers for different tests and/or different populations, and have already been described in previous scans for positive selection, providing strong support to the hypothesis that recent selective sweeps left a signature in these regions. However, it will be necessary to carry out association and functional studies to demonstrate the involvement of these genes in normal pigmentation variation.
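
    The outlier step described above reduces to ranking each window within the genome-wide empirical distribution of a statistic. A minimal sketch with hypothetical scores (only the stated top 0.1% and top 1% cutoffs are taken from the abstract):

      import numpy as np

      rng = np.random.default_rng(1)
      scores = rng.normal(size=100_000)   # hypothetical per-window statistic (e.g., iHS)

      # Empirical tail thresholds across all 25 kb windows.
      t_extreme = np.percentile(scores, 99.9)   # top 0.1% of the distribution
      t_top1 = np.percentile(scores, 99.0)      # top 1% of the distribution

      extreme_outliers = scores >= t_extreme
      top1_outliers = scores >= t_top1
      print(extreme_outliers.sum(), top1_outliers.sum())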

  1. Analysis on detection accuracy of binocular photoelectric instrument optical axis parallelism digital calibration instrument

    NASA Astrophysics Data System (ADS)

    Ying, Jia-ju; Yin, Jian-ling; Wu, Dong-sheng; Liu, Jie; Chen, Yu-dan

    2017-11-01

    Low-light-level night vision devices and thermal infrared imaging binocular photoelectric instruments are widely used. Misalignment of the parallelism of the two ocular axes of a binocular instrument causes symptoms such as dizziness and nausea when the observer uses it for a long time. A digital calibration instrument for binocular photoelectric equipment was developed to detect ocular axis parallelism and to measure the optical axis deviation quantitatively. As a testing instrument, its precision must be much higher than that of the instrument under test. This paper analyzes the factors that influence detection accuracy. Such factors exist in each link of the testing process and affect the precision of the detecting instrument. They can be divided into two categories: factors that directly affect the position of the reticle image, and factors that affect the calculation of the center of the reticle image. The synthesized error is calculated, and the error budget is then distributed reasonably to ensure the accuracy of the calibration instrument.

  2. Comparison of νμ-Ar multiplicity distributions observed by MicroBooNE to GENIE model predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, C.; et al.

    We measure a large set of observables in inclusive charged current muon neutrino scattering on argon with the MicroBooNE liquid argon time projection chamber operating at Fermilab. We evaluate three neutrino interaction models based on the widely used GENIE event generator using these observables. The measurement uses a data set consisting of neutrino interactions with a final state muon candidate fully contained within the MicroBooNE detector. These data were collected in 2016 with the Fermilab Booster Neutrino Beam, which has an average neutrino energy of 800 MeV, using an exposure corresponding to 5.0×10¹⁹ protons-on-target. The analysis employs fully automatic event selection and charged particle track reconstruction and uses a data-driven technique to separate neutrino interactions from cosmic ray background events. We find that GENIE models consistently describe the shapes of a large number of kinematic distributions for fixed observed multiplicity, but we show an indication that the observed multiplicity fractions deviate from GENIE expectations.

  3. Study on effective thermal conductivity of silicone/phosphor composite and its size effect by Lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Li, Lan; Zheng, Huai; Yuan, Chao; Hu, Run; Luo, Xiaobing

    2016-12-01

    The silicone/phosphor composite is widely used in light emitting diode (LED) packaging. The composite thermal properties, especially the effective thermal conductivity, strongly influence the LED performance. In this paper, a lattice Boltzmann model was presented to predict the silicone/phosphor composite effective thermal conductivity. Based on the present lattice Boltzmann model, a random generation method was established to describe the phosphor particle distribution in the composite. Benchmarks were conducted by comparing the simulation results with theoretical solutions for simple cases. Then the model was applied to analyze the effective thermal conductivity of the silicone/phosphor composite and its size effect. The deviations between simulation and experimental results are <7% when the phosphor volume fraction varies from 0.038 to 0.45. The simulation results also indicate that the effective thermal conductivity of the composite with larger particles is higher than that with smaller particles at the same volume fraction, and that mixing the two particle sizes provides a further enhancement of the effective thermal conductivity.

  4. Statistical analysis of Hasegawa-Wakatani turbulence

    NASA Astrophysics Data System (ADS)

    Anderson, Johan; Hnat, Bogdan

    2017-06-01

    Resistive drift wave turbulence is a multipurpose paradigm that can be used to understand transport at the edge of fusion devices. The Hasegawa-Wakatani model captures the essential physics of drift turbulence while retaining the simplicity needed to gain a qualitative understanding of this process. We provide a theoretical interpretation of numerically generated probability density functions (PDFs) of intermittent events in Hasegawa-Wakatani turbulence with enforced equipartition of energy in large scale zonal flows, and small scale drift turbulence. We find that for a wide range of adiabatic index values, the stochastic component representing the small scale turbulent eddies of the flow, obtained from the autoregressive integrated moving average model, exhibits super-diffusive statistics, consistent with intermittent transport. The PDFs of large events (above one standard deviation) are well approximated by the Laplace distribution, while small events often exhibit a Gaussian character. Furthermore, there exists a strong influence of zonal flows, for example, via shearing and then viscous dissipation maintaining a sub-diffusive character of the fluxes.

  5. Tests of neutrino interaction models with the MicroBooNE detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rafique, Aleena

    2018-01-01

    I measure a large set of observables in inclusive charged current muon neutrino scattering on argon with the MicroBooNE liquid argon time projection chamber operating at Fermilab. I evaluate three neutrino interaction models based on the widely used GENIE event generator using these observables. The measurement uses a data set consisting of neutrino interactions with a final state muon candidate fully contained within the MicroBooNE detector. These data were collected in 2016 with the Fermilab Booster Neutrino Beam, which has an average neutrino energy of 800 MeV, using an exposure corresponding to 5.0×10¹⁹ protons-on-target. The analysis employs fully automatic event selection and charged particle track reconstruction and uses a data-driven technique to separate neutrino interactions from cosmic ray background events. I find that GENIE models consistently describe the shapes of a large number of kinematic distributions for fixed observed multiplicity, but I show an indication that the observed multiplicity fractions deviate from GENIE expectations.

  6. Masseter function and skeletal malocclusion.

    PubMed

    Sciote, J J; Raoul, G; Ferri, J; Close, J; Horton, M J; Rowlerson, A

    2013-04-01

    The aim of this work is to review the relationship between the function of the masseter muscle and the occurrence of malocclusions. An analysis was made of the masseter muscle samples from subjects who underwent mandibular osteotomies. The size and proportion of type-II fibers (fast) decreases as facial height increases. Patients with mandibular asymmetry have more type-II fibers on the side of their deviation. The insulin-like growth factor and myostatin are expressed differently depending on the sex and fiber diameter. These differences in the distribution of fiber types and gene expression of this growth factor may be involved in long-term postoperative stability and require additional investigations. Muscle strength and bone length are two genetically determined factors in facial growth. Myosin 1H (MYO1H) is associated with prognathia in Caucasians. As future objectives, we propose to characterize genetic variations using "Genome Wide Association Studies" data and their relationships with malocclusions. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  7. Quantification of deviations from rationality with heavy tails in human dynamics

    NASA Astrophysics Data System (ADS)

    Maillart, T.; Sornette, D.; Frei, S.; Duebendorfer, T.; Saichev, A.

    2011-05-01

    The dynamics of technological, economic and social phenomena is controlled by how humans organize their daily tasks in response to both endogenous and exogenous stimulations. Queueing theory is believed to provide a generic answer to account for the often observed power-law distributions of waiting times before a task is fulfilled. However, the general validity of the power law and the nature of other regimes remain unsettled. Using anonymized data collected by Google at the World Wide Web level, we identify the existence of several additional regimes characterizing the time required for a population of Internet users to execute a given task after receiving a message. Depending on the under- or over-utilization of time by the population of users and the strength of their response to perturbations, the pure power law is found to be coextensive with an exponential regime (tasks are performed without too much delay) and with a crossover to an asymptotic plateau (some tasks are never performed).

  8. How Molecular Size Impacts RMSD Applications in Molecular Dynamics Simulations.

    PubMed

    Sargsyan, Karen; Grauffel, Cédric; Lim, Carmay

    2017-04-11

    The root-mean-square deviation (RMSD) is a similarity measure widely used in analysis of macromolecular structures and dynamics. As increasingly larger macromolecular systems are being studied, dimensionality effects such as the "curse of dimensionality" (a diminishing ability to discriminate pairwise differences between conformations with increasing system size) may exist and significantly impact RMSD-based analyses. For such large biomolecular systems, whether the RMSD or other alternative similarity measures might suffer from this "curse" and lose the ability to discriminate different macromolecular structures had not been explicitly addressed. Here, we show such dimensionality effects for both weighted and nonweighted RMSD schemes. We also provide a mechanism for the emergence of the "curse of dimensionality" for RMSD from the law of large numbers by showing that the conformational distributions from which RMSDs are calculated become increasingly similar as the system size increases. Our findings suggest the use of weighted RMSD schemes for small proteins (fewer than 200 residues) and nonweighted RMSD for larger proteins when analyzing molecular dynamics trajectories.
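
    A minimal sketch of the two RMSD schemes discussed above, assuming pre-superposed (N, 3) coordinate arrays (no fitting is performed); the function name and data are hypothetical, not the authors' code:

      import numpy as np

      def rmsd(x, y, weights=None):
          """RMSD between two (N, 3) conformations; optional per-atom weights."""
          d2 = np.sum((np.asarray(x) - np.asarray(y)) ** 2, axis=1)
          if weights is None:                    # nonweighted scheme
              return np.sqrt(d2.mean())
          w = np.asarray(weights, dtype=float)   # e.g., atomic masses
          return np.sqrt(np.sum(w * d2) / np.sum(w))

      rng = np.random.default_rng(2)
      a = rng.normal(size=(100, 3))
      b = a + rng.normal(scale=0.1, size=(100, 3))
      print(rmsd(a, b), rmsd(a, b, weights=rng.uniform(1, 12, 100)))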

  9. The Hugoniot adiabat of crystalline copper based on molecular dynamics simulation and semiempirical equation of state

    NASA Astrophysics Data System (ADS)

    Gubin, S. A.; Maklashova, I. V.; Mel'nikov, I. N.

    2018-01-01

    The molecular dynamics (MD) method was used to predict the properties of copper under shock-wave compression and to clarify the melting region of crystalline copper. An embedded-atom potential was used for the interatomic interaction. Parameters of the Hugoniot adiabats of the solid and liquid phases of copper calculated by the semiempirical Grüneisen equation of state are consistent with the results of MD simulations and experimental data. MD simulation makes it possible to visualize the structure of copper at the atomistic level. Analysis of the radial distribution function and the standard deviation in MD modeling makes it possible to predict the melting region behind the shock wave front. These MD simulation data are required to verify the wide-range equation of state of metals. The melting parameters of copper based on MD simulations and semiempirical equations of state are consistent with experimental and theoretical data, including the region of the melting point of copper.

  10. Longitudinal and Cross-Sectional Analyses of Visual Field Progression in Participants of the Ocular Hypertension Treatment Study (OHTS)

    PubMed Central

    Chauhan, Balwantray C; Keltner, John L; Cello, Kim E; Johnson, Chris A; Anderson, Douglas R; Gordon, Mae O; Kass, Michael A

    2014-01-01

    Purpose Visual field progression can be determined by evaluating the visual field by serial examinations (longitudinal analysis), or by a change in classification derived from comparison to age-matched normal data in single examinations (cross-sectional analysis). We determined the agreement between these two approaches in data from the Ocular Hypertension Treatment Study (OHTS). Methods Visual field data from 3088 eyes of 1570 OHTS participants (median follow-up 7 years, 15 tests with static automated perimetry) were analysed. Longitudinal analyses were performed with change probability with total and pattern deviation, and cross-sectional analysis with the Glaucoma Hemifield Test, Corrected Pattern Standard Deviation, and Mean Deviation. The rates of Mean Deviation and General Height change were compared to estimate the degree of diffuse loss in emerging glaucoma. Results The agreement on progression in longitudinal and cross-sectional analyses ranged from 50% to 61% and remained nearly constant across a wide range of criteria. In contrast, the agreement on absence of progression ranged from 97% to 99.7%, being highest for the stricter criteria. Analyses of pattern deviation were more conservative than total deviation, with a 3 to 5 times lower incidence of progression. Most participants developing field loss had both diffuse and focal change. Conclusions Despite considerable overall agreement, between 40% and 50% of eyes identified as having progressed with either longitudinal or cross-sectional analyses were identified with only one of the analyses. Because diffuse change is part of early glaucomatous damage, pattern deviation analyses may underestimate progression in patients with ocular hypertension. PMID:21149774

  11. Estimation of Local Bone Loads for the Volume of Interest.

    PubMed

    Kim, Jung Jin; Kim, Youkyung; Jang, In Gwun

    2016-07-01

    Computational bone remodeling simulations have recently received significant attention with the aid of state-of-the-art high-resolution imaging modalities. They have been performed using localized finite element (FE) models rather than full FE models due to the excessive computational costs of full FE models. However, these localized bone remodeling simulations remain to be investigated in more depth. In particular, applying simplified loading conditions (e.g., uniform and unidirectional loads) to localized FE models have a severe limitation in a reliable subject-specific assessment. In order to effectively determine the physiological local bone loads for the volume of interest (VOI), this paper proposes a novel method of estimating the local loads when the global musculoskeletal loads are given. The proposed method is verified for the three VOI in a proximal femur in terms of force equilibrium, displacement field, and strain energy density (SED) distribution. The effect of the global load deviation on the local load estimation is also investigated by perturbing a hip joint contact force (HCF) in the femoral head. Deviation in force magnitude exhibits the greatest absolute changes in a SED distribution due to its own greatest deviation, whereas angular deviation perpendicular to a HCF provides the greatest relative change. With further in vivo force measurements and high-resolution clinical imaging modalities, the proposed method will contribute to the development of reliable patient-specific localized FE models, which can provide enhanced computational efficiency for iterative computing processes such as bone remodeling simulations.

  12. An intelligent switch with back-propagation neural network based hybrid power system

    NASA Astrophysics Data System (ADS)

    Perdana, R. H. Y.; Fibriana, F.

    2018-03-01

    The consumption of conventional energy such as fossil fuels plays a critical role in global warming. Carbon dioxide, methane, nitrous oxide, and other emissions drive the greenhouse effect and change climate patterns. In fact, 77% of electrical energy is generated from fossil fuel combustion. It is therefore necessary to use renewable energy sources to reduce conventional energy consumption in electricity generation. This paper presents an intelligent switch that combines both energy resources: solar panels as the renewable source and conventional energy from the State Electricity Enterprise (PLN). A back-propagation neural network was designed to control the flow of energy, which is distributed dynamically based on renewable energy generation. With continuous monitoring of each load and source, the dynamic switching pattern of the intelligent switch outperformed the conventional switching method. The first experiment, with 60 W solar panels, showed a trial standard deviation of 0.7 and an experimental standard deviation of 0.28. The second, with a 900 W solar panel, obtained a trial standard deviation of 0.05 and an experimental standard deviation of 0.18. Moreover, the accuracy reached 83% using this method. By combining the back-propagation neural network with observation of the energy usage of each load via a wireless sensor network, the loads can be evenly distributed, reducing conventional energy usage.

  13. A simulation study of nonparametric total deviation index as a measure of agreement based on quantile regression.

    PubMed

    Lin, Lawrence; Pan, Yi; Hedayat, A S; Barnhart, Huiman X; Haber, Michael

    2016-01-01

    Total deviation index (TDI) captures a prespecified quantile of the absolute deviation of paired observations from raters, observers, methods, assays, instruments, etc. We compare the performance of TDI using nonparametric quantile regression to the TDI assuming normality (Lin, 2000). This simulation study considers three distributions: normal, Poisson, and uniform at quantile levels of 0.8 and 0.9 for cases with and without contamination. Study endpoints include the bias of TDI estimates (compared with their respective theoretical values), the standard error of TDI estimates (compared with their true simulated standard errors), test size (compared with 0.05), and power. Nonparametric TDI using quantile regression, although it slightly underestimates and delivers slightly less power for data without contamination, works satisfactorily under all simulated cases even for moderate (say, ≥40) sample sizes. The performance of the TDI based on a quantile of 0.8 is in general superior to that of 0.9. The performances of nonparametric and parametric TDI methods are compared with a real data example. Nonparametric TDI can be very useful when the underlying distribution of the difference is not normal, especially when it has a heavy tail.
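
    The quantity itself is simple to compute nonparametrically for paired readings: it is the prespecified quantile of the absolute differences. The sketch below is the constant-only special case (the paper's quantile-regression formulation additionally accommodates covariates); the data are hypothetical.

      import numpy as np

      def tdi_empirical(x, y, level=0.9):
          """Empirical TDI: the `level` quantile of |x - y| over paired readings."""
          return np.quantile(np.abs(np.asarray(x) - np.asarray(y)), level)

      rng = np.random.default_rng(3)
      method_a = rng.normal(10.0, 1.0, 200)
      method_b = method_a + rng.normal(0.2, 0.5, 200)   # paired second method
      print(tdi_empirical(method_a, method_b, 0.8))
      print(tdi_empirical(method_a, method_b, 0.9))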

  14. Quantitative dual-probe microdialysis: mathematical model and analysis.

    PubMed

    Chen, Kevin C; Höistad, Malin; Kehr, Jan; Fuxe, Kjell; Nicholson, Charles

    2002-04-01

    Steady-state microdialysis is a widely used technique to monitor the concentration changes and distributions of substances in tissues. To obtain more information about brain tissue properties from microdialysis, a dual-probe approach was applied to infuse and sample the radiotracer, [3H]mannitol, simultaneously both in agar gel and in the rat striatum. Because the molecules released by one probe and collected by the other must diffuse through the interstitial space, the concentration profile exhibits dynamic behavior that permits the assessment of the diffusion characteristics in the brain extracellular space and the clearance characteristics. In this paper a mathematical model for dual-probe microdialysis was developed to study brain interstitial diffusion and clearance processes. Theoretical expressions for the spatial distribution of the infused tracer in the brain extracellular space and the temporal concentration at the probe outlet were derived. A fitting program was developed using the simplex algorithm, which finds local minima of the standard deviations between experiments and theory by adjusting the relevant parameters. The theoretical curves accurately fitted the experimental data and generated realistic diffusion parameters, implying that the mathematical model is capable of predicting the interstitial diffusion behavior of [3H]mannitol and that it will be a valuable quantitative tool in dual-probe microdialysis.
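
    A minimal sketch of the fitting strategy described above, with a hypothetical stand-in for the diffusion/clearance theory (the real model is the paper's derived expression for the outlet concentration): the simplex (Nelder-Mead) algorithm adjusts the parameters to minimize the standard deviation between theory and experiment.

      import numpy as np
      from scipy.optimize import minimize

      def model(t, amplitude, rate):
          # Hypothetical placeholder for the theoretical outlet concentration.
          return amplitude * (1.0 - np.exp(-rate * t))

      def objective(params, t, observed):
          return np.std(observed - model(t, *params))

      t = np.linspace(0.0, 120.0, 25)   # sampling times (min)
      rng = np.random.default_rng(4)
      observed = model(t, 1.0, 0.05) + rng.normal(0.0, 0.02, t.size)

      fit = minimize(objective, x0=[0.5, 0.1], args=(t, observed),
                     method="Nelder-Mead")
      print(fit.x)   # recovered (amplitude, rate)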

  15. The large-scale correlations of multicell densities and profiles: implications for cosmic variance estimates

    NASA Astrophysics Data System (ADS)

    Codis, Sandrine; Bernardeau, Francis; Pichon, Christophe

    2016-08-01

    In order to quantify the error budget in the measured probability distribution functions of cell densities, the two-point statistics of cosmic densities in concentric spheres is investigated. Bias functions are introduced as the ratio of their two-point correlation function to the two-point correlation of the underlying dark matter distribution. They describe how cell densities are spatially correlated. They are computed here via the so-called large deviation principle in the quasi-linear regime. Their large-separation limit is presented and successfully compared to simulations for density and density slopes: this regime is shown to be reached rapidly, allowing sub-percent precision to be achieved for a wide range of densities and variances. The corresponding asymptotic limit provides an estimate of the cosmic variance of standard concentric cell statistics applied to finite surveys. More generally, no assumption on the separation is required for some specific moments of the two-point statistics, for instance when predicting the generating function of cumulants containing any powers of concentric densities in one location and one power of density at some arbitrary distance from the rest. This exact 'one external leg' cumulant generating function is used in particular to probe the rate of convergence of the large-separation approximation.

  16. Image-Based Modeling Reveals Dynamic Redistribution of DNA Damage into Nuclear Sub-Domains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costes, Sylvain V.; Ponomarev, Artem; Chen, James L.; Nguyen, David; Cucinotta, Francis A.

    2007-08-03

    Several proteins involved in the response to DNA double-strand breaks (DSB) form microscopically visible nuclear domains, or foci, after exposure to ionizing radiation. Radiation-induced foci (RIF) are believed to be located where DNA damage occurs. To test this assumption, we analyzed the spatial distribution of 53BP1, phosphorylated ATM, and gammaH2AX RIF in cells irradiated with high linear energy transfer (LET) radiation and low LET. Since energy is randomly deposited along high-LET particle paths, RIF along these paths should also be randomly distributed. The probability to induce DSB can be derived from DNA fragment data measured experimentally by pulsed-field gel electrophoresis. We used this probability in Monte Carlo simulations to predict DSB locations in synthetic nuclei geometrically described by a complete set of human chromosomes, taking into account microscope optics from real experiments. As expected, simulations produced DNA-weighted random (Poisson) distributions. In contrast, the distributions of RIF obtained as early as 5 min after exposure to high LET (1 GeV/amu Fe) were non-random. This deviation from the expected DNA-weighted random pattern can be further characterized by "relative DNA image measurements." This novel imaging approach shows that RIF were located preferentially at the interface between high and low DNA density regions, and were more frequent than predicted in regions with lower DNA density. The same preferential nuclear location was also measured for RIF induced by 1 Gy of low-LET radiation. This deviation from random behavior was evident only 5 min after irradiation for phosphorylated ATM RIF, while gammaH2AX and 53BP1 RIF showed pronounced deviations up to 30 min after exposure. These data suggest that DNA damage induced foci are restricted to certain regions of the nucleus of human epithelial cells. It is possible that DNA lesions are collected in these nuclear sub-domains for more efficient repair.

  17. Chromosome Model reveals Dynamic Redistribution of DNA Damage into Nuclear Sub-domains

    NASA Technical Reports Server (NTRS)

    Costes, Sylvain V.; Ponomarev, Artem; Chen, James L.; Cucinotta, Francis A.; Barcellos-Hoff, Helen

    2007-01-01

    Several proteins involved in the response to DNA double strand breaks (DSB) form microscopically visible nuclear domains, or foci, after exposure to ionizing radiation. Radiation-induced foci (RIF) are believed to be located where DNA damage is induced. To test this assumption, we analyzed the spatial distribution of 53BP1, phosphorylated ATM and gammaH2AX RIF in cells irradiated with high linear energy transfer (LET) radiation. Since energy is randomly deposited along high-LET particle paths, RIF along these paths should also be randomly distributed. The probability to induce DSB can be derived from DNA fragment data measured experimentally by pulsed-field gel electrophoresis. We used this probability in Monte Carlo simulations to predict DSB locations in synthetic nuclei geometrically described by a complete set of human chromosomes, taking into account microscope optics from real experiments. As expected, simulations produced DNA-weighted random (Poisson) distributions. In contrast, the distributions of RIF obtained as early as 5 min after exposure to high LET (1 GeV/amu Fe) were non-random. This deviation from the expected DNA-weighted random pattern can be further characterized by relative DNA image measurements. This novel imaging approach shows that RIF were located preferentially at the interface between high and low DNA density regions, and were more frequent in regions with lower density DNA than predicted. This deviation from random behavior was more pronounced within the first 5 min following irradiation for phosphorylated ATM RIF, while gammaH2AX and 53BP1 RIF showed very pronounced deviation up to 30 min after exposure. These data suggest the existence of repair centers in mammalian epithelial cells. These centers would be nuclear sub-domains where DNA lesions would be collected for more efficient repair.

  18. A Bayesian Method for Identifying Contaminated Detectors in Low-Level Alpha Spectrometers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maclellan, Jay A.; Strom, Daniel J.; Joyce, Kevin E.

    2011-11-02

    Analyses used for radiobioassay and other radiochemical tests are normally designed to meet specified quality objectives, such as relative bias, precision, and minimum detectable activity (MDA). In the case of radiobioassay analyses for alpha emitting radionuclides, a major determiner of the process MDA is the instrument background. Alpha spectrometry detectors are often restricted to only a few counts over multi-day periods in order to meet required MDAs for nuclides such as plutonium-239 and americium-241. A detector background criterion is often set empirically based on experience, or frequentist or classical statistics are applied to the calculated background count necessary to meet a required MDA. An acceptance criterion for the detector background is set at the multiple of the estimated background standard deviation above the assumed mean that provides an acceptably small probability of observation if the mean and standard deviation estimate are correct. The major problem with this method is that the observed background counts used to estimate the mean, and thereby the standard deviation when a Poisson distribution is assumed, are often in the range of zero to three counts. At those expected count levels it is impossible to obtain a good estimate of the true mean from a single measurement. As an alternative, Bayesian statistical methods allow calculation of the expected detector background count distribution based on historical counts from new, uncontaminated detectors. This distribution can then be used to identify detectors showing an increased probability of contamination. The effect of varying the assumed range of background counts (i.e., the prior probability distribution) from new, uncontaminated detectors is discussed.
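
    A minimal conjugate sketch of the idea, assuming a Gamma prior on the Poisson background rate (the report itself does not prescribe this exact prior, and all counts below are hypothetical): historical counts from new, uncontaminated detectors update the prior, and a detector is flagged when its observed count is improbable under the resulting posterior predictive (negative binomial) distribution.

      import numpy as np
      from scipy import stats

      historical = np.array([0, 1, 0, 2, 0, 1, 1, 0, 3, 0])   # counts per period
      alpha0, beta0 = 0.5, 1e-6             # weak Gamma(shape, rate) prior

      alpha = alpha0 + historical.sum()     # posterior shape
      beta = beta0 + historical.size        # posterior rate

      # Posterior predictive for a new counting period is negative binomial.
      predictive = stats.nbinom(alpha, beta / (beta + 1.0))

      observed = 7                          # count from detector under test
      p_tail = predictive.sf(observed - 1)  # P(count >= observed)
      print(f"P(K >= {observed}) = {p_tail:.4f}")   # small => possible contamination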

  19. Valuing fire planning alternatives in forest restoration: using derived demand to integrate economics with ecological restoration.

    PubMed

    Rideout, Douglas B; Ziesler, Pamela S; Kernohan, Nicole J

    2014-08-01

    Assessing the value of fire planning alternatives is challenging because fire affects a wide array of ecosystem, market, and social values. Wildland fire management is increasingly used to address forest restoration while pragmatic approaches to assessing the value of fire management have yet to be developed. Earlier approaches to assessing the value of forest management relied on connecting site valuation with management variables. While sound, such analysis is too narrow to account for a broad range of ecosystem services. The metric fire regime condition class (FRCC) was developed from ecosystem management philosophy, but it is entirely biophysical. Its lack of economic information cripples its utility to support decision-making. We present a means of defining and assessing the deviation of a landscape from its desired fire management condition by re-framing the fire management problem as one of derived demand. This valued deviation establishes a performance metric for wildland fire management. Using a case study, we display the deviation across a landscape and sum the deviations to produce a summary metric. This summary metric is used to assess the value of alternative fire management strategies on improving the fire management condition toward its desired state. It enables us to identify which sites are most valuable to restore, even when they are in the same fire regime condition class. The case study site exemplifies how a wide range of disparate values, such as watershed, wildlife, property and timber, can be incorporated into a single landscape assessment. The analysis presented here leverages previous research on environmental capital value and non-market valuation by integrating ecosystem management, restoration, and microeconomics. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Distribution pattern of reptiles along an eastern Himalayan elevation gradient, India

    NASA Astrophysics Data System (ADS)

    Chettri, Basundhara; Bhupathy, Subramanian; Acharya, Bhoj Kumar

    2010-01-01

    We examined the spatial distribution pattern of reptiles in an eastern Himalayan elevation gradient. The factors governing the distribution have been assessed with emphasis on the mid-domain effect. We surveyed reptiles along the elevation gradient (300-4800 m) of the Teesta valley in Sikkim, Eastern Himalaya, India using time-constrained visual encounter surveys. A total of 42 species of reptiles were observed during the study, and the species richness peaked at 500-1000 m with no species beyond 3000 m. The observed pattern was consistent with estimated richness, both showing a significant negative relation with elevation. Lizards showed a linear decline with elevation, whereas snakes followed a non-linear relation with a peak at 500-1000 m. Observed species richness deviated significantly from that predicted by a mid-domain null model. The regression between empirical and simulated richness was not significant for total reptiles as well as for lizards and snakes separately. Most species occurring at high elevations extended towards lower elevations, but low-elevation species (around 50%) were restricted to below 1000 m. Deviation of empirical from predicted richness indicates that the distributions of reptile species were least governed by geographic hard boundaries. Climatic factors, especially temperature, explained much of the variation in reptile richness along the Himalayan elevation gradient. Most reptiles were narrowly distributed, especially those found at low elevations, indicating the importance of tropical lowland forests in the conservation of reptiles in the Eastern Himalayas.

  1. Measurement of the ω → π+π-π0 Dalitz plot distribution

    NASA Astrophysics Data System (ADS)

    Adlarson, P.; Augustyniak, W.; Bardan, W.; Bashkanov, M.; Bergmann, F. S.; Berłowski, M.; Bhatt, H.; Bondar, A.; Büscher, M.; Calén, H.; Ciepał, I.; Clement, H.; Czerwiński, E.; Demmich, K.; Engels, R.; Erven, A.; Erven, W.; Eyrich, W.; Fedorets, P.; Föhl, K.; Fransson, K.; Goldenbaum, F.; Goswami, A.; Grigoryev, K.; Gullström, C.-O.; Heijkenskjöld, L.; Hejny, V.; Hüsken, N.; Jarczyk, L.; Johansson, T.; Kamys, B.; Kemmerling, G.; Khan, F. A.; Khatri, G.; Khoukaz, A.; Khreptak, O.; Kirillov, D. A.; Kistryn, S.; Kleines, H.; Kłos, B.; Krzemień, W.; Kulessa, P.; Kupść, A.; Kuzmin, A.; Lalwani, K.; Lersch, D.; Lorentz, B.; Magiera, A.; Maier, R.; Marciniewski, P.; Mariański, B.; Morsch, H.-P.; Moskal, P.; Ohm, H.; Perez del Rio, E.; Piskunov, N. M.; Prasuhn, D.; Pszczel, D.; Pysz, K.; Pyszniak, A.; Ritman, J.; Roy, A.; Rudy, Z.; Rundel, O.; Sawant, S.; Schadmand, S.; Schätti-Ozerianska, I.; Sefzick, T.; Serdyuk, V.; Shwartz, B.; Sitterberg, K.; Skorodko, T.; Skurzok, M.; Smyrski, J.; Sopov, V.; Stassen, R.; Stepaniak, J.; Stephan, E.; Sterzenbach, G.; Stockhorst, H.; Ströher, H.; Szczurek, A.; Trzciński, A.; Varma, R.; Wolke, M.; Wrońska, A.; Wüstner, P.; Yamamoto, A.; Zabierowski, J.; Zieliński, M. J.; Złomańczuk, J.; Żuprański, P.; Żurek, M.; Kubis, B.; Leupold, S.

    2017-07-01

    Using the production reactions pd → 3He ω and pp → ppω, the Dalitz plot distribution for the ω → π+π-π0 decay is studied with the WASA detector at COSY, based on a combined data sample of (4.408 ± 0.042) × 10⁴ events. The Dalitz plot density is parametrised by a product of the P-wave phase space and a polynomial expansion in the normalised polar Dalitz plot variables Z and ϕ. For the first time, a deviation from pure P-wave phase space is observed with a significance of 4.1σ. The deviation is parametrised by a linear term 1 + 2αZ, with α determined to be +0.147 ± 0.036, consistent with the expectations of ρ-meson-type final-state interactions of the P-wave pion pairs.

  2. Fast self contained exponential random deviate algorithm

    NASA Astrophysics Data System (ADS)

    Fernández, Julio F.

    1997-03-01

    An algorithm that generates random numbers with an exponential distribution and is about ten times faster than other well known algorithms has been reported before (J. F. Fernández and J. Rivero, Comput. Phys. 10, 83 (1996)). That algorithm requires input of uniform random deviates. We now report a new version of it that needs no input and is nearly as fast. The only limitation we predict thus far for the quality of the output is the amount of computer memory available. Performance results under various tests will be reported. The algorithm works in close analogy to the setup that is often used in statistical physics in order to obtain the Gibbs distribution. N numbers, which are stored in N registers, change with time according to the rules of the algorithm, keeping their sum constant. Further details will be given.
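
    For comparison, the standard inverse-transform baseline (explicitly not the register-based algorithm sketched above, which was designed to avoid needing uniform input): if U is uniform on (0, 1], then -ln(U) is exponentially distributed.

      import numpy as np

      def exponential_deviates(n, scale=1.0, seed=None):
          rng = np.random.default_rng(seed)
          u = 1.0 - rng.random(n)        # uniform on (0, 1], avoids log(0)
          return -scale * np.log(u)

      x = exponential_deviates(1_000_000)
      print(x.mean(), x.std())           # both close to 1.0 for scale=1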

  3. Characterization of the inhomogeneous barrier distribution in a Pt/(100)β-Ga2O3 Schottky diode via its temperature-dependent electrical properties

    NASA Astrophysics Data System (ADS)

    Jian, Guangzhong; He, Qiming; Mu, Wenxiang; Fu, Bo; Dong, Hang; Qin, Yuan; Zhang, Ying; Xue, Huiwen; Long, Shibing; Jia, Zhitai; Lv, Hangbing; Liu, Qi; Tao, Xutang; Liu, Ming

    2018-01-01

    β-Ga2O3 is an ultra-wide bandgap semiconductor with applications in power electronic devices. Revealing the transport characteristics of β-Ga2O3 devices at various temperatures is important for improving device performance and reliability. In this study, we fabricated a Pt/β-Ga2O3 Schottky barrier diode with good performance characteristics, such as a low ON-resistance, high forward current, and a large rectification ratio. Its temperature-dependent current-voltage and capacitance-voltage characteristics were measured at various temperatures. The characteristic diode parameters were derived using thermionic emission theory. The ideality factor n was found to decrease from 2.57 to 1.16 while the zero-bias barrier height Φb0 increased from 0.47 V to 1.00 V when the temperature was increased from 125 K to 350 K. This was explained by the Gaussian distribution of barrier height inhomogeneity. The mean barrier height Φ̄b0 = 1.27 V and zero-bias standard deviation σ0 = 0.13 V were obtained. A modified Richardson plot gave a Richardson constant A* of 36.02 A·cm⁻²·K⁻², which is close to the theoretical value of 41.11 A·cm⁻²·K⁻². The differences between the barrier heights determined using the capacitance-voltage and current-voltage curves were also in line with the Gaussian distribution of barrier height inhomogeneity.
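
    A minimal sketch (with synthetic data, not the paper's measurements) of how the ideality factor n and saturation current follow from thermionic emission theory: in forward bias with V >> kT/q, ln I is linear in V with slope q/(nkT).

      import numpy as np

      q = 1.602176634e-19    # elementary charge (C)
      k = 1.380649e-23       # Boltzmann constant (J/K)
      T = 300.0              # temperature (K)

      V = np.linspace(0.15, 0.40, 26)              # forward-bias voltages
      I = 1e-12 * np.exp(q * V / (1.2 * k * T))    # synthetic current with n = 1.2

      slope, intercept = np.polyfit(V, np.log(I), 1)
      n = q / (slope * k * T)          # ideality factor
      I_s = np.exp(intercept)          # saturation current (A)
      print(f"n = {n:.2f}, I_s = {I_s:.2e} A")     # recovers n = 1.20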

  4. A new inversion algorithm for HF sky-wave backscatter ionograms

    NASA Astrophysics Data System (ADS)

    Feng, Jing; Ni, Binbin; Lou, Peng; Wei, Na; Yang, Longquan; Liu, Wen; Zhao, Zhengyu; Li, Xue

    2018-05-01

    HF sky-wave backscatter sounding systems are capable of measuring the large-scale, two-dimensional (2-D) distribution of ionospheric electron density. The leading edge (LE) of a backscatter ionogram (BSI) is widely used for ionospheric inversion since it is hardly affected by any factors other than ionospheric electron density. Traditional BSI inversion methods have failed to distinguish LEs associated with different ionospheric layers, and simply utilize the minimum group path at each operating frequency, which generally corresponds to the LE associated with the F2 layer. Consequently, while the inversion results can provide accurate profiles of the F region below the F2 peak, the diagnostics may not be so effective for other ionospheric layers. In order to resolve this issue, we present a new BSI inversion method using LEs associated with different layers, which can further improve the accuracy of the electron density distribution, especially the profiles of the ionospheric layers below the F2 region. The efficiency of the algorithm is evaluated by computing the mean and the standard deviation of the differences between inverted parameter values and true values obtained from both vertical and oblique incidence sounding. Test results clearly demonstrate that the method we have developed yields more accurate electron density profiles, owing to improved recovery of the profiles of the layers below the F2 region. Our study can further improve current BSI inversion methods for the reconstruction of the 2-D electron density distribution in a vertical plane aligned with the direction of sounding.

  5. Radar sea reflection for low-e targets

    NASA Astrophysics Data System (ADS)

    Chow, Winston C.; Groves, Gordon W.

    1998-09-01

    Modeling radar signal reflection from a wavy sea surface uses a realistic characterization of the large surface features and parameterizes the effect of the small roughness elements. Representation of the reflection coefficient at each point of the sea surface as a function of the specular deviation angle is, to our knowledge, a novel approach. The objective is to achieve enough simplification and retain enough fidelity to obtain a practical multipath model. The 'specular deviation angle' as used in this investigation is defined and explained. Being a function of the sea elevations, which are stochastic in nature, this quantity is also random and has a probability density function. This density function depends on the relative geometry of the antenna and target positions and, together with the beam-broadening effect of the small surface ripples, determines the reflectivity of the sea surface at each point. The probability density function of the specular deviation angle is derived. The distribution of the specular deviation angle as a function of position on the mean sea surface is described.

  6. Strain accumulation and rotation in western Oregon and southwestern Washington

    USGS Publications Warehouse

    Svarc, J.L.; Savage, J.C.; Prescott, W.H.; Murray, M.H.

    2002-01-01

    Velocities of 75 geodetic monuments in western Oregon and southwestern Washington extending from the coast to more than 300 km inland have been determined from GPS surveys over the interval 1992-2000. The average standard deviation in each of the horizontal velocity components is ≈1 mm yr⁻¹. The observed velocity field is approximated by a combination of rigid rotation (Euler vector relative to interior North America: 43.40°N ± 0.14°, 119.33°W ± 0.28°, and 0.822 ± 0.057° Myr⁻¹ clockwise; quoted uncertainties are standard deviations), uniform regional strain rate (ε̇EE = -7.4 ± 1.8, ε̇EN = -3.4 ± 1.0, and ε̇NN = -5.0 ± 0.8 nstrain yr⁻¹, extension reckoned positive), and a dislocation model representing subduction of the Juan de Fuca plate beneath North America. Subduction south of 44.5°N was represented by a 40-km-wide locked thrust and subduction north of 44.5°N by a 75-km-wide locked thrust.

  7. Dichotomisation using a distributional approach when the outcome is skewed.

    PubMed

    Sauzet, Odile; Ofuya, Mercy; Peacock, Janet L

    2015-04-24

    Dichotomisation of continuous outcomes has been rightly criticised by statisticians because of the loss of information incurred. However, to communicate a comparison of risks, dichotomised outcomes may be necessary. Peacock et al. developed a distributional approach to the dichotomisation of normally distributed outcomes, allowing the presentation of a comparison of proportions with a measure of precision which reflects the comparison of means. Many common health outcomes are skewed, so the distributional method for the dichotomisation of continuous outcomes may not apply. We present a methodology to obtain dichotomised outcomes for skewed variables, illustrated with data from several observational studies. We also report the results of a simulation study which tests the robustness of the method to deviation from normality and assesses the validity of the newly developed method. The review showed that the pattern of dichotomisation varied between outcomes. Birthweight, blood pressure, and BMI can either be transformed to normal, so that normal distributional estimates for a comparison of proportions can be obtained, or, better, the skew-normal method can be used. For gestational age, no satisfactory transformation is available and only the skew-normal method is reliable. The normal distributional method is reliable also when there are small deviations from normality. The distributional method, with its applicability to common skewed data, allows researchers to provide both continuous and dichotomised estimates without losing information or precision. This will have the effect of providing a practical understanding of the difference in means in terms of proportions.
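
    The core of the normal distributional approach referred to above can be stated in one line: the proportion below a clinical cutpoint c is estimated from a group's mean and SD as Φ((c - mean)/SD), so the comparison of proportions inherits the precision of the comparison of means. A sketch with hypothetical birthweight values:

      from scipy.stats import norm

      def proportion_below(cutpoint, mean, sd):
          """Estimated proportion below `cutpoint` for a normal outcome."""
          return norm.cdf((cutpoint - mean) / sd)

      # Hypothetical example: low birthweight (< 2500 g) in two groups.
      p_control = proportion_below(2500.0, mean=3350.0, sd=550.0)
      p_exposed = proportion_below(2500.0, mean=3150.0, sd=550.0)
      print(p_control, p_exposed, p_exposed - p_control)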

  8. Truncated Linear Statistics Associated with the Eigenvalues of Random Matrices II. Partial Sums over Proper Time Delays for Chaotic Quantum Dots

    NASA Astrophysics Data System (ADS)

    Grabsch, Aurélien; Majumdar, Satya N.; Texier, Christophe

    2017-06-01

    Invariant ensembles of random matrices are characterized by the distribution of their eigenvalues $\{\lambda_1, \ldots, \lambda_N\}$. We study the distribution of truncated linear statistics of the form $\tilde{L} = \sum_{i=1}^{p} f(\lambda_i)$ with $p < N$.

  9. ROBUST: an interactive FORTRAN-77 package for exploratory data analysis using parametric, ROBUST and nonparametric location and scale estimates, data transformations, normality tests, and outlier assessment

    NASA Astrophysics Data System (ADS)

    Rock, N. M. S.

    ROBUST calculates 53 statistics, plus significance levels for 6 hypothesis tests, on each of up to 52 variables. These together allow the following properties of the data distribution for each variable to be examined in detail: (1) Location. Three means (arithmetic, geometric, harmonic) are calculated, together with the midrange and 19 high-performance robust L-, M-, and W-estimates of location (combined, adaptive, trimmed estimates, etc.) (2) Scale. The standard deviation is calculated along with the H-spread/2 (≈ semi-interquartile range), the mean and median absolute deviations from both mean and median, and a biweight scale estimator. The 23 location and 6 scale estimators programmed cover all possible degrees of robustness. (3) Normality: Distributions are tested against the null hypothesis that they are normal, using the 3rd (√b1) and 4th (b2) moments, Geary's ratio (mean deviation/standard deviation), Filliben's probability plot correlation coefficient, and a more robust test based on the biweight scale estimator. These statistics collectively are sensitive to most usual departures from normality. (4) Presence of outliers. The maximum and minimum values are assessed individually or jointly using Grubbs' maximum Studentized residuals, Harvey's and Dixon's criteria, and the Studentized range. For a single input variable, outliers can be either winsorized or eliminated and all estimates recalculated iteratively as desired. The following data transformations also can be applied: linear, log10, generalized Box-Cox power (including log, reciprocal, and square root), exponentiation, and standardization. For more than one variable, all results are tabulated in a single run of ROBUST. Further options are incorporated to assess ratios (of two variables) as well as discrete variables, and to handle missing data. Cumulative S-plots (for assessing normality graphically) also can be generated. The mutual consistency or inconsistency of all these measures helps to detect errors in data as well as to assess data distributions themselves.
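
    A few of the location, scale, and normality quantities listed above are easy to reproduce; the sketch below (in Python rather than the package's FORTRAN-77) computes the median, a 10% trimmed mean, the median absolute deviation, and Geary's ratio, which is near sqrt(2/π) ≈ 0.798 for normal data.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      x = rng.normal(10.0, 2.0, 500)

      median = np.median(x)
      trimmed = stats.trim_mean(x, 0.1)       # trims 10% from each tail
      mad = np.median(np.abs(x - median))     # median absolute deviation
      geary = np.mean(np.abs(x - x.mean())) / x.std(ddof=1)
      print(median, trimmed, mad, geary)      # geary near 0.798 if normal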

  10. Large-deviation probabilities for correlated Gaussian processes and intermittent dynamical systems

    NASA Astrophysics Data System (ADS)

    Massah, Mozhdeh; Nicol, Matthew; Kantz, Holger

    2018-05-01

    In its classical version, the theory of large deviations makes quantitative statements about the probability of outliers when estimating time averages, if time series data are identically independently distributed. We study large-deviation probabilities (LDPs) for time averages in short- and long-range correlated Gaussian processes and show that long-range correlations lead to subexponential decay of LDPs. A particular deterministic intermittent map can, depending on a control parameter, also generate long-range correlated time series. We illustrate numerically, in agreement with the mathematical literature, that this type of intermittency leads to a power law decay of LDPs. The power law decay holds irrespective of whether the correlation time is finite or infinite, and hence irrespective of whether the central limit theorem applies or not.

  11. Range and Energy Straggling in Ion Beam Transport

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Tai, Hsiang

    2000-01-01

    A first-order approximation to the range and energy straggling of ion beams is given as a normal distribution for which the standard deviation is estimated from the fluctuations in energy loss events. The standard deviation is calculated by assuming scattering from free electrons with a long range cutoff parameter that depends on the mean excitation energy of the medium. The present formalism is derived by extrapolating Payne's formalism to low energy by systematic energy scaling and to greater depths of penetration by a second-order perturbation. Limited comparisons are made with experimental data.

  12. Robust portfolio selection based on asymmetric measures of variability of stock returns

    NASA Astrophysics Data System (ADS)

    Chen, Wei; Tan, Shaohua

    2009-10-01

    This paper introduces a new uncertainty set for robust optimization: the interval random uncertainty set. Its form makes it suitable for capturing the downside and upside deviations of real-world data. These deviation measures capture distributional asymmetry and lead to better optimization results. We also apply our interval random chance-constrained programming to robust mean-variance portfolio selection under interval random uncertainty sets in the elements of the mean vector and covariance matrix. Numerical experiments with real market data indicate that our approach results in better portfolio performance.
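
    As background, the downside and upside deviations mentioned above can be illustrated with a simple root-mean-square split around the mean. This Python sketch uses an assumed (textbook semideviation) form, not necessarily the authors' exact definitions:

```python
import numpy as np

def asymmetric_deviations(returns):
    """Downside/upside semideviations of a return series around its mean."""
    r = np.asarray(returns, dtype=float)
    mu = r.mean()
    downside = np.sqrt(np.mean(np.minimum(r - mu, 0.0) ** 2))
    upside = np.sqrt(np.mean(np.maximum(r - mu, 0.0) ** 2))
    return downside, upside

rng = np.random.default_rng(2)
r = rng.standard_t(df=4, size=1000) * 0.01 - 0.002  # heavy-tailed toy returns
print(asymmetric_deviations(r))
```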

  13. [Prediction of cardiac function deviations (ECG data) in the course of permanent cosmonaut's monitoring starting from selection till return to earth after short-duration space flight].

    PubMed

    Kotovskaia, A R; Koloteva, M I; Luk'ianiuk, V Iu; Stepanova, G P; Filatova, L M; Buĭlov, S P; Zhernavkov, A F; Kondratiuk, L L

    2007-01-01

    Analyzed were deviations in cardiac function in 29 cosmonauts, aged 29 to 61 years, with previous aviation and other occupations, who made 8- to 30-day space flights (total number of flights = 34) between 1982 and 2006. The deviations were identified in ECG records collected during clinical selection, clinical physiological examination (CPE) before flight, insertion and deorbit in transport vehicles, and post-flight CPE. Based on the analysis, the cosmonauts were divided into three groups. The first group (55.2% of the cosmonauts) did not exhibit noticeable shifts or unfavorable trends in ECG at any time during the period of observation. The second group (34.5%) showed some deviations during selection and pre-flight CPE that became more apparent during deorbit and were still present in post-flight ECG records. The third group (10.3%) displayed health-threatening deviations in cardiac function during deorbit. These findings open the way to important investigations aimed at defining permissible medical risks and at establishing and refining medical criteria for cosmonaut candidates with certain health problems.

  14. A microscopic model of the Stokes-Einstein relation in arbitrary dimension.

    PubMed

    Charbonneau, Benoit; Charbonneau, Patrick; Szamel, Grzegorz

    2018-06-14

    The Stokes-Einstein relation (SER) is one of the most robust and widely employed results from the theory of liquids. Yet sizable deviations can be observed for self-solvation, which cannot be explained by the standard hydrodynamic derivation. Here, we revisit the work of Masters and Madden [J. Chem. Phys. 74, 2450-2459 (1981)], who first solved a statistical mechanics model of the SER using the projection operator formalism. By generalizing their analysis to all spatial dimensions and to partially structured solvents, we identify a potential microscopic origin of some of these deviations. We also reproduce the SER-like result from the exact dynamics of infinite-dimensional fluids.

  15. Real-gas effects associated with one-dimensional transonic flow of cryogenic nitrogen

    NASA Technical Reports Server (NTRS)

    Adcock, J. B.

    1976-01-01

    Real gas solutions for one-dimensional isentropic and normal-shock flows of nitrogen were obtained for a wide range of temperatures and pressures. These calculations are compared to ideal gas solutions and are presented in tables. For temperatures (300 K and below) and pressures (1 to 10 atm) that cover those anticipated for transonic cryogenic tunnels, the solutions are analyzed to obtain indications of the magnitude of inviscid flow simulation errors. For these ranges, the maximum deviation of the various isentropic and normal shock parameters from the ideal values is about 1 percent or less, and for most wind tunnel investigations this deviation would be insignificant.

  16. Porter-Thomas distribution in unstable many-body systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Volya, Alexander

    We use the continuum shell model approach to explore the resonance width distribution in unstable many-body systems. The single-particle nature of a decay, the few-body character of the interaction Hamiltonian, and the collectivity that emerges in nonstationary systems due to the coupling to the continuum of reaction states are discussed. Correlations between the structures of the parent and daughter nuclear systems in the common Fock space are found to result in deviations of decay width statistics from the Porter-Thomas distribution.
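
    For reference, the Porter-Thomas baseline itself is simple to simulate: normalized widths x = Γ/⟨Γ⟩ follow a chi-squared distribution with one degree of freedom, i.e., the square of a single Gaussian amplitude. A minimal Python check (not the continuum-shell-model calculation):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.normal(size=100_000) ** 2          # Porter-Thomas samples, mean 1
print("mean", x.mean(), "var", x.var())    # chi2(1): mean 1, variance 2
# Kolmogorov-Smirnov check against chi-squared with one degree of freedom
print(stats.kstest(x, "chi2", args=(1,)))
```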

  17. Work probability distribution and tossing a biased coin

    NASA Astrophysics Data System (ADS)

    Saha, Arnab; Bhattacharjee, Jayanta K.; Chakraborty, Sagar

    2011-01-01

    We show that the rare events present in the dissipated work that enters the Jarzynski equality, when mapped appropriately to the phenomenon of large deviations found in a biased coin toss, are enough to yield a quantitative work probability distribution for the Jarzynski equality. This allows us to propose a recipe for constructing the work probability distribution independent of the details of any relevant system. The underlying framework, developed herein, is expected to be of use in modeling other physical phenomena where rare events play an important role.

  18. Wigner time-delay distribution in chaotic cavities and freezing transition.

    PubMed

    Texier, Christophe; Majumdar, Satya N

    2013-06-21

    Using the joint distribution of the proper time delays of a chaotic cavity derived by Brouwer, Frahm, and Beenakker [Phys. Rev. Lett. 78, 4737 (1997)], we obtain, in the limit of a large number of channels N, the large deviation function for the distribution of the Wigner time delay (the sum of the proper times) by a Coulomb gas method. We show that the existence of a power-law tail originates from narrow resonance contributions, related to a (second order) freezing transition in the Coulomb gas.

  19. First-Principles Momentum Dependent Local Ansatz Approach to the Momentum Distribution Function in Iron-Group Transition Metals

    NASA Astrophysics Data System (ADS)

    Kakehashi, Yoshiro; Chandra, Sumal

    2017-03-01

    The momentum distribution function (MDF) bands of iron-group transition metals from Sc to Cu have been investigated on the basis of the first-principles momentum-dependent local ansatz wavefunction method. It is found that the MDF for d electrons shows a strong momentum dependence and a large deviation from the Fermi-Dirac distribution function along high-symmetry lines of the first Brillouin zone, while the sp electrons behave as independent electrons. In particular, the deviation in bcc Fe (fcc Ni) is shown to be enhanced by the narrow eg (t2g) bands with flat dispersion in the vicinity of the Fermi level. Mass enhancement factors (MEF) calculated from the jump on the Fermi surface are also shown to be momentum dependent. The large mass enhancements of Mn and Fe are found to be caused by spin fluctuations due to d electrons, while that of Ni is mainly caused by charge fluctuations. The calculated MEF are consistent with electronic specific heat data as well as with recent angle-resolved photoemission spectroscopy data.

  20. Multi-year slant path rain fade statistics at 28.56 and 19.04 GHz for Wallops Island, Virginia

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1979-01-01

    Multiyear rain fade statistics at 28.56 GHz and 19.04 GHz were compiled for the region of Wallops Island, Virginia, covering the time periods 1 April 1977 through 31 March 1978 and 1 September 1978 through 31 August 1979. The 28.56 GHz attenuations were derived by monitoring the beacon signals from the COMSTAR geosynchronous satellite D2 during the first year and satellite D3 during the second year. Although 19.04 GHz beacons exist aboard these satellites, statistics at this frequency were predicted using the 28 GHz fade data, the measured rain rate distribution, and effective path length concepts. The prediction method was tested against radar-derived fade distributions, and excellent agreement was noted. For example, the rms deviations between the predicted and test distributions were less than or equal to 0.2 dB, or 4%, at 19.04 GHz. The average ratio between the 28.56 GHz and 19.04 GHz fades was also derived for equal percentages of time, resulting in a factor of 2.1 with a 0.05 standard deviation.

  1. The geometry of proliferating dicot cells.

    PubMed

    Korn, R W

    2001-02-01

    The distributions of cell size and cell cycle duration were studied in two-dimensional expanding plant tissues. Plastic imprints of the leaf epidermis of three dicot plants, jade (Crassula argentea), impatiens (Impatiens wallerana), and the common begonia (Begonia semperflorens), were made and cell outlines analysed. The average, standard deviation, and coefficient of variation (CV = 100 × standard deviation/average) of cell size were determined; the CV of mother cells was less than that of daughter cells, and both were less than that of all cells. An equation was devised as a simple description of the probability distribution of sizes for all cells of a tissue. Cell cycle durations, measured in arbitrary time units, were determined by reconstructing the initial and final sizes of cells, and they collectively give the expected asymmetric bell-shaped probability distribution. Given the features of unequal cell division (an average 11.6% difference in size between daughter cells) and the size variation of dividing cells, it appears that the range of cell size is more critically regulated than the size of a cell at any particular time.

  2. A Short-Term and High-Resolution System Load Forecasting Approach Using Support Vector Regression with Hybrid Parameters Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Huaiguang

    This work proposes an approach for distribution system load forecasting, which aims to provide highly accurate short-term load forecasting with high resolution utilizing a support vector regression (SVR) based forecaster and a two-step hybrid parameters optimization method. Specifically, because the load profiles in distribution systems contain abrupt deviations, a data normalization is designed as the pretreatment for the collected historical load data. Then an SVR model is trained by the load data to forecast the future load. For better performance of SVR, a two-step hybrid optimization algorithm is proposed to determine the best parameters. In the first step of the hybrid optimization algorithm, a designed grid traverse algorithm (GTA) is used to narrow the parameters searching area from a global to local space. In the second step, based on the result of the GTA, particle swarm optimization (PSO) is used to determine the best parameters in the local parameter space. After the best parameters are determined, the SVR model is used to forecast the short-term load deviation in the distribution system.
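
    The two-step idea can be sketched with scikit-learn, assuming a toy regression dataset; the coarse pass stands in for the GTA and a simple local random search stands in for PSO, so this is an illustration of the structure rather than the paper's algorithm:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(200, 3))                    # toy predictors
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)

def score(C, gamma):
    """Cross-validated negative MSE; higher is better."""
    return cross_val_score(SVR(C=C, gamma=gamma), X, y,
                           cv=3, scoring="neg_mean_squared_error").mean()

# Step 1: coarse traverse of a wide, log-spaced (C, gamma) grid.
grid = [(C, g) for C in np.logspace(-1, 3, 5) for g in np.logspace(-3, 1, 5)]
best = max(grid, key=lambda p: score(*p))
best_score = score(*best)

# Step 2: stochastic refinement around the best grid cell (PSO stand-in).
for _ in range(30):
    cand = (best[0] * 10 ** rng.uniform(-0.5, 0.5),
            best[1] * 10 ** rng.uniform(-0.5, 0.5))
    s = score(*cand)
    if s > best_score:
        best, best_score = cand, s
print("selected (C, gamma):", best)
```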

  3. Multipath interference test method for distributed amplifiers

    NASA Astrophysics Data System (ADS)

    Okada, Takahiro; Aida, Kazuo

    2005-12-01

    A method for testing distributed amplifiers is presented; the multipath interference (MPI) is detected as a beat spectrum between the multipath signal and the direct signal using a binary frequency-shift keying (FSK) test signal. The lightwave source is composed of a DFB-LD that is directly modulated by a pulse stream passing through an equalizer and emits an FSK signal with a frequency deviation of about 430 MHz at a repetition rate of 80-100 kHz. The receiver consists of a photodiode and an electrical spectrum analyzer (ESA). The baseband power spectrum peak, which appears at the FSK frequency deviation, can be converted to an amount of MPI using a calibration chart. The test method has improved the minimum detectable MPI to as low as -70 dB, compared with -50 dB for the conventional test method. The detailed design and performance of the proposed method are discussed, including the MPI simulator for the calibration procedure, computer simulations for evaluating the error caused by the FSK repetition rate and the length of the fiber under test, and experiments on single-mode fibers and a distributed Raman amplifier.

  4. Packing Fraction of a Two-dimensional Eden Model with Random-Sized Particles

    NASA Astrophysics Data System (ADS)

    Kobayashi, Naoki; Yamazaki, Hiroshi

    2018-01-01

    We have performed a numerical simulation of a two-dimensional Eden model with random-size particles. In the present model, the particle radii are generated from a Gaussian distribution with mean μ and standard deviation σ. First, we have examined the bulk packing fraction for the Eden cluster and investigated the effects of the standard deviation and the total number of particles NT. We show that the bulk packing fraction depends on the number of particles and the standard deviation. In particular, for the dependence on the standard deviation, we have determined the asymptotic value of the bulk packing fraction in the limit of the dimensionless standard deviation. This value is larger than the packing fraction obtained in a previous study of the Eden model with uniform-size particles. Secondly, we have investigated the packing fraction of the entire Eden cluster including the effect of the interface fluctuation. We find that the entire packing fraction depends on the number of particles while it is independent of the standard deviation, in contrast to the bulk packing fraction. In a similar way to the bulk packing fraction, we have obtained the asymptotic value of the entire packing fraction in the limit NT → ∞. The obtained value of the entire packing fraction is smaller than that of the bulk value. This fact suggests that the interface fluctuation of the Eden cluster influences the packing fraction.

  5. The wide-range ejector flowmeter: calibrated gas evacuation comprising both high and low gas flows.

    PubMed

    Waaben, J; Brinkløv, M M; Jørgensen, S

    1984-11-01

    The wide-range ejector flowmeter is an active scavenging system applying calibrated gas removal directly to the anaesthetic circuit. The evacuation rate can be adjusted on the flowmeter under visual control using the calibration scale, which ranges from 200 ml·min⁻¹ to 15 l·min⁻¹. The accuracy of the calibration was tested on three ejector flowmeters at 12 different presettings. The percentage deviation from the presetting varied from +18 to −19.4 per cent. The ejector flowmeter enables the provision of consistent and accurately calibrated extraction of waste gases and is applicable within a wide range of fresh gas flows.

  6. Estimation of spectral distribution of sky radiance using a commercial digital camera.

    PubMed

    Saito, Masanori; Iwabuchi, Hironobu; Murata, Isao

    2016-01-10

    Methods for estimating spectral distribution of sky radiance from images captured by a digital camera and for accurately estimating spectral responses of the camera are proposed. Spectral distribution of sky radiance is represented as a polynomial of the wavelength, with coefficients obtained from digital RGB counts by linear transformation. The spectral distribution of radiance as measured is consistent with that obtained by spectrometer and radiative transfer simulation for wavelengths of 430-680 nm, with standard deviation below 1%. Preliminary applications suggest this method is useful for detecting clouds and studying the relation between irradiance at the ground and cloud distribution.
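
    The linear-transformation step can be sketched as an ordinary least-squares fit of a matrix mapping RGB counts to polynomial coefficients. The data, wavelength normalization, and polynomial order below are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

def fit_rgb_to_poly(rgb, coeffs):
    """rgb: (n, 3) digital counts; coeffs: (n, k+1) spectral polynomial
    coefficients. Returns M such that coeffs ≈ rgb @ M (least squares)."""
    M, *_ = np.linalg.lstsq(rgb, coeffs, rcond=None)
    return M

def spectrum(coeffs, wl):
    """Evaluate the polynomial radiance model at wavelengths wl in nm
    (the 430-680 nm normalization here is an assumed convention)."""
    return np.polyval(coeffs[::-1], (wl - 555.0) / 125.0)

# toy example with fabricated training data
rng = np.random.default_rng(5)
true_M = rng.normal(size=(3, 3))
rgb = rng.uniform(0, 1, size=(50, 3))
coeffs = rgb @ true_M + 0.01 * rng.normal(size=(50, 3))
M = fit_rgb_to_poly(rgb, coeffs)
print(np.allclose(M, true_M, atol=0.1))
```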

  7. Parallel Harmony Search Based Distributed Energy Resource Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ceylan, Oguzhan; Liu, Guodong; Tomsovic, Kevin

    2015-01-01

    This paper presents a harmony search based parallel optimization algorithm to minimize voltage deviations in three-phase unbalanced electrical distribution systems and to maximize the active power outputs of distributed energy resources (DR). The main contribution is to reduce the adverse impacts on the voltage profile as photovoltaic (PV) output or electric vehicle (EV) charging changes throughout a day. The IEEE 123-bus distribution test system is modified by adding DRs and EVs under different load profiles. The simulation results show that, by using parallel computing techniques, heuristic methods may be used as an alternative optimization tool in electrical power distribution system operation.
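
    A minimal serial harmony search is sketched below against a stand-in objective; the paper's parallel implementation and its three-phase unbalanced power-flow objective are not reproduced, so every constant here is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(6)

def voltage_dev(x):
    """Stand-in objective, not a power flow: squared deviation of fake
    bus voltages (driven by DR set-points x) from 1.0 p.u."""
    v = 1.0 + 0.05 * np.tanh(x - 0.5)
    return np.sum((v - 1.0) ** 2)

dim, hms, hmcr, par, bw, iters = 5, 20, 0.9, 0.3, 0.05, 2000
memory = rng.uniform(0, 1, size=(hms, dim))        # harmony memory
costs = np.array([voltage_dev(h) for h in memory])

for _ in range(iters):
    new = np.empty(dim)
    for j in range(dim):
        if rng.random() < hmcr:                    # pick from memory...
            new[j] = memory[rng.integers(hms), j]
            if rng.random() < par:                 # ...with pitch adjustment
                new[j] += bw * rng.uniform(-1, 1)
        else:                                      # or improvise randomly
            new[j] = rng.uniform(0, 1)
    c = voltage_dev(new)
    worst = np.argmax(costs)
    if c < costs[worst]:                           # replace the worst harmony
        memory[worst], costs[worst] = new, c

print("best cost:", costs.min())
```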

  8. Analyses and assessments of span wise gust gradient data from NASA B-57B aircraft

    NASA Technical Reports Server (NTRS)

    Frost, Walter; Chang, Ho-Pen; Ringnes, Erik A.

    1987-01-01

    Analysis of turbulence measured across the airfoil of a Canberra B-57 aircraft is reported. The aircraft is instrumented with probes for measuring wind at both wing tips and at the nose. Statistical properties of the turbulence are reported. These consist of the standard deviations of turbulence measured by each individual probe; standard deviations and probability distributions of differences in turbulence measured between probes; and auto- and two-point spatial correlations and spectra. Procedures associated with the calculation of two-point spatial correlations and spectra from the data are addressed. Methods and correction procedures for assuring the accuracy of aircraft-measured winds are also described. The results are found, in general, to agree with correlations existing in the literature. The velocity spatial differences fit a Gaussian/Bessel-type probability distribution. The turbulence agrees with the von Karman turbulence correlation and with two-point spatial correlations developed from the von Karman correlation.

  9. The variance of length of stay and the optimal DRG outlier payments.

    PubMed

    Felder, Stefan

    2009-09-01

    Prospective payment schemes in health care often include supply-side insurance for cost outliers. In hospital reimbursement, prospective payments for patient discharges, based on their classification into diagnosis-related groups (DRGs), are complemented by outlier payments for long-stay patients. The outlier scheme fixes the length-of-stay (LOS) threshold, constraining the profit risk of the hospitals. In most DRG systems, this threshold increases with the standard deviation of the LOS distribution. The present paper addresses the adequacy of this DRG outlier threshold rule for risk-averse hospitals with preferences depending on the expected value and the variance of profits. It first shows that the optimal threshold solves the hospital's tradeoff between higher profit risk and lower premium loading payments. It then demonstrates, for normally distributed truncated LOS, that the optimal outlier threshold indeed decreases with an increase in the standard deviation.

  10. New device for accurate measurement of the x-ray intensity distribution of x-ray tube focal spots.

    PubMed

    Doi, K; Fromes, B; Rossmann, K

    1975-01-01

    A new device has been developed with which the focal spot distribution can be measured accurately. The alignment and localization of the focal spot relative to the device are accomplished by adjustment of three micrometer screws in three orthogonal directions and by comparison of red reference light spots with green fluorescent pinhole images at five locations. The standard deviations for evaluating the reproducibility of the adjustments in the horizontal and vertical directions were 0.2 and 0.5 mm, respectively. Measurements were made of the pinhole images as well as of the line-spread functions (LSFs) and modulation transfer functions (MTFs) for an x-ray tube with focal spots of 1-mm and 50-μm nominal size. The standard deviations for the LSF and MTF of the 1-mm focal spot were 0.017 and 0.010, respectively.

  11. A comparison of portfolio selection models via application on ISE 100 index data

    NASA Astrophysics Data System (ADS)

    Altun, Emrah; Tatlidil, Hüseyin

    2013-10-01

    Markowitz's model, a classical approach to the portfolio optimization problem, relies on two important assumptions: that expected returns are multivariate normally distributed and that the investor is risk-averse. However, this model has not been used extensively in finance. Empirical results show that it is very hard to solve large-scale portfolio optimization problems with the Mean-Variance (M-V) model. An alternative model, the Mean Absolute Deviation (MAD) model proposed by Konno and Yamazaki [7], has been used to remove most of the difficulties of the Markowitz Mean-Variance model. The MAD model does not need to assume that the rates of return are normally distributed, and it is based on linear programming. Another alternative portfolio model is the Mean-Lower Semi-Absolute Deviation (M-LSAD) model proposed by Speranza [3]. We compare these models to determine which gives the most appropriate solution to investors.
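
    The Konno-Yamazaki MAD model reduces to a linear program by introducing one auxiliary variable per period for the absolute deviation. A compact sketch with scipy (simulated returns, not ISE 100 data; the target return rho is arbitrary):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(7)
T, n, rho = 120, 6, 0.004                 # periods, assets, target mean return
R = 0.005 + 0.02 * rng.standard_t(df=5, size=(T, n))
mu = R.mean(axis=0)
D = R - mu                                # deviations from mean returns

# variables z = [x (n weights), y (T absolute-deviation proxies)]
c = np.concatenate([np.zeros(n), np.full(T, 1.0 / T)])   # minimize mean |dev|
A_ub = np.block([[ D, -np.eye(T)],                       #  D x - y <= 0
                 [-D, -np.eye(T)]])                      # -D x - y <= 0
b_ub = np.zeros(2 * T)
A_ub = np.vstack([A_ub, np.concatenate([-mu, np.zeros(T)])])  # mean return >= rho
b_ub = np.append(b_ub, -rho)
A_eq = np.concatenate([np.ones(n), np.zeros(T)])[None, :]     # weights sum to 1
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (n + T))
print("weights:", np.round(res.x[:n], 3), " MAD:", res.fun)
```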

  12. Are the Stress Drops of Small Earthquakes Good Predictors of the Stress Drops of Larger Earthquakes?

    NASA Astrophysics Data System (ADS)

    Hardebeck, J.

    2017-12-01

    Uncertainty in PSHA could be reduced through better estimates of stress drop for possible future large earthquakes. Studies of small earthquakes find spatial variability in stress drop; if large earthquakes have similar spatial patterns, their stress drops may be better predicted using the stress drops of small local events. This regionalization implies the variance with respect to the local mean stress drop may be smaller than the variance with respect to the global mean. I test this idea using the Shearer et al. (2006) stress drop catalog for M1.5-3.1 events in southern California. I apply quality control (Hauksson, 2015) and remove near-field aftershocks (Wooddell & Abrahamson, 2014). The standard deviation of the distribution of the log10 stress drop is reduced from 0.45 (factor of 3) to 0.31 (factor of 2) by normalizing each event's stress drop by the local mean. I explore whether a similar variance reduction is possible when using the Shearer catalog to predict stress drops of larger southern California events. For catalogs of moderate-sized events (e.g. Kanamori, 1993; Mayeda & Walter, 1996; Boyd, 2017), normalizing by the Shearer catalog's local mean stress drop does not reduce the standard deviation compared to the unmodified stress drops. I compile stress drops of larger events from the literature, and identify 15 M5.5-7.5 earthquakes with at least three estimates. Because of the wide range of stress drop estimates for each event, and the different techniques and assumptions, it is difficult to assign a single stress drop value to each event. Instead, I compare the distributions of stress drop estimates for pairs of events, and test whether the means of the distributions are statistically significantly different. The events divide into 3 categories: low, medium, and high stress drop, with significant differences in mean stress drop between events in the low and the high stress drop categories. I test whether the spatial patterns of the Shearer catalog stress drops can predict the categories of the 15 events. I find that they cannot, rather the large event stress drops are uncorrelated with the local mean stress drop from the Shearer catalog. These results imply that the regionalization of stress drops of small events does not extend to the larger events, at least with current standard techniques of stress drop estimation.
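
    The local-mean normalization test is straightforward to reproduce on synthetic data. In this sketch (not the Shearer et al. catalog; the smooth regional signal and neighbor count are assumptions), each event's log10 stress drop is normalized by the mean of its k nearest neighbors:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(8)
N, k = 5000, 50
xy = rng.uniform(0, 500, size=(N, 2))              # event locations, km
regional = np.sin(xy[:, 0] / 80.0) * 0.35          # smooth spatial signal
logsd = regional + 0.31 * rng.normal(size=N)       # log10 stress drops

tree = cKDTree(xy)
_, idx = tree.query(xy, k=k + 1)                   # k neighbors plus self
local_mean = logsd[idx[:, 1:]].mean(axis=1)        # exclude the event itself
print("raw std:       ", logsd.std())
print("normalized std:", (logsd - local_mean).std())
```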

  13. Robust 3D Position Estimation in Wide and Unconstrained Indoor Environments

    PubMed Central

    Mossel, Annette

    2015-01-01

    In this paper, a system for 3D position estimation in wide, unconstrained indoor environments is presented that employs infrared optical outside-in tracking of rigid-body targets with a stereo camera rig. To overcome limitations of state-of-the-art optical tracking systems, a pipeline for robust target identification and 3D point reconstruction has been investigated that enables camera calibration and tracking in environments with poor illumination, static and moving ambient light sources, occlusions and harsh conditions, such as fog. For evaluation, the system has been successfully applied in three different wide and unconstrained indoor environments, (1) user tracking for virtual and augmented reality applications, (2) handheld target tracking for tunneling and (3) machine guidance for mining. The results of each use case are discussed to embed the presented approach into a larger technological and application context. The experimental results demonstrate the system’s capabilities to track targets up to 100 m. Comparing the proposed approach to prior art in optical tracking in terms of range coverage and accuracy, it significantly extends the available tracking range, while only requiring two cameras and providing a relative 3D point accuracy with sub-centimeter deviation up to 30 m and low-centimeter deviation up to 100 m. PMID:26694388

  14. Plant functional traits improve diversity-based predictions of temporal stability of grassland productivity

    USDA-ARS?s Scientific Manuscript database

    Aboveground net primary productivity (ANPP) varies in response to temporal fluctuations in weather. Temporal stability (mean/standard deviation) of community ANPP may be increased, on average, by increasing plant species richness, but stability also may differ widely at a given richness level imply...

  15. Deletion of internal structured repeats increases the stability of a leucine-rich repeat protein, YopM

    PubMed Central

    Barrick, Doug

    2011-01-01

    Mapping the stability distributions of proteins in their native folded states provides a critical link between structure, thermodynamics, and function. Linear repeat proteins have proven more amenable to this kind of mapping than globular proteins. C-terminal deletion studies of YopM, a large, linear leucine-rich repeat (LRR) protein, show that stability is distributed quite heterogeneously, yet a high level of cooperativity is maintained [1]. Key components of this distribution are three interfaces that strongly stabilize adjacent sequences, thereby maintaining structural integrity and promoting cooperativity. To better understand the distribution of interaction energy around these critical interfaces, we studied internal (rather than terminal) deletions of three LRRs in this region, including one of these stabilizing interfaces. Contrary to our expectation that deletion of structured repeats should be destabilizing, we find that internal deletion of folded repeats can actually stabilize the native state, suggesting that these repeats are destabilizing, although paradoxically, they are folded in the native state. We identified two residues within this destabilizing segment that deviate from the consensus sequence at a position that normally forms a stacked leucine ladder in the hydrophobic core. Replacement of these nonconsensus residues with leucine is stabilizing. This stability enhancement can be reproduced in the context of nonnative interfaces, but it requires an extended hydrophobic core. Our results demonstrate that different LRRs vary widely in their contribution to stability, and that this variation is context-dependent. These two factors are likely to determine the types of rearrangements that lead to folded, functional proteins, and in turn, are likely to restrict the pathways available for the evolution of linear repeat proteins. PMID:21764506

  16. Validating the Operational Bias and Hypothesis of Universal Exponent in Landslide Frequency-Area Distribution

    PubMed Central

    Huang, Jr-Chuan; Lee, Tsung-Yu; Teng, Tse-Yang; Chen, Yi-Chin; Huang, Cho-Ying; Lee, Cheing-Tung

    2014-01-01

    The exponent of the power-law decay in the landslide frequency-area distribution is widely used for assessing the consequences of landslides, with some studies arguing that the decay exponent is universal and independent of mechanisms and environmental settings. However, the documented exponents are diverse, and data processing is hypothesized to account for this inconsistency. An elaborate statistical experiment and two actual landslide inventories were used here to demonstrate the influence of data processing on the determination of the exponent. Seven categories with different landslide numbers were generated from a predefined inverse-gamma distribution and then analyzed by three data-processing procedures (logarithmic binning, LB; normalized logarithmic binning, NLB; and the cumulative distribution function, CDF). Five different bin widths were also considered when applying LB and NLB. Following that, maximum likelihood estimation was used to estimate the exponent. The results showed that the exponents estimated by CDF were unbiased, while LB and NLB performed poorly. The two binning-based methods led to considerable biases that increased with landslide number and bin width. The standard deviations of the estimated exponents depended not just on the landslide number but also on the binning method and bin width. Both extremely few and extremely plentiful landslides reduced the confidence of the estimated exponents, which can be attributed to limited landslide numbers and considerable operational bias, respectively. The diverse exponents documented in the literature should therefore be adjusted accordingly. Our study strongly suggests that the considerable bias due to data processing and the data quality should be constrained in order to advance the understanding of landslide processes. PMID:24852019
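
    The operational bias can be demonstrated on synthetic data: fit the same power-law sample once by maximum likelihood and once by least squares on logarithmically binned counts. The sketch below uses a Pareto sample as a stand-in for the inverse-gamma tail:

```python
import numpy as np

rng = np.random.default_rng(9)
alpha, amin, n = 2.4, 1.0, 3000
# classical Pareto sample whose density decays as a^(-alpha)
areas = amin * (rng.pareto(alpha - 1.0, size=n) + 1.0)

# MLE (Hill-type estimator) for a continuous power law above amin
alpha_mle = 1.0 + n / np.sum(np.log(areas / amin))

# least-squares fit of the slope of the logarithmically binned histogram
bins = np.logspace(0, np.log10(areas.max()), 20)
hist, edges = np.histogram(areas, bins=bins)
widths = np.diff(edges)
centers = np.sqrt(edges[:-1] * edges[1:])      # geometric bin centers
mask = hist > 0
slope, _ = np.polyfit(np.log10(centers[mask]),
                      np.log10(hist[mask] / widths[mask]), 1)
print(f"MLE: {alpha_mle:.2f}   log-binned LS: {-slope:.2f}   true: {alpha}")
```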

  17. Quantitative assessment of hit detection and confirmation in single and duplicate high-throughput screenings.

    PubMed

    Wu, Zhijin; Liu, Dongmei; Sui, Yunxia

    2008-02-01

    The process of identifying active targets (hits) in high-throughput screening (HTS) usually involves 2 steps: first, removing or adjusting for systematic variation in the measurement process so that extreme values represent strong biological activity instead of systematic biases such as plate effects or edge effects and, second, choosing a meaningful cutoff on the calculated statistic to declare positive compounds. Both false-positive and false-negative errors are inevitable in this process. Common control or estimation of error rates is often based on an assumption of normally distributed noise. The error rates in hit detection, especially false-negative rates, are hard to verify because in most assays, only compounds selected in primary screening are followed up in confirmation experiments. In this article, the authors take advantage of a quantitative HTS experiment in which all compounds are tested 42 times over a wide range of 14 concentrations, so true positives can be identified through a dose-response curve. Using the activity status defined by the dose curve, the authors analyzed the effect of various data-processing procedures on the sensitivity and specificity of hit detection, the control of error rates, and hit confirmation. A new summary score is proposed and demonstrated to perform well in hit detection and to be useful in confirmation rate estimation. In general, adjusting for positional effects is beneficial, but a robust test can prevent overadjustment. Error rates estimated under the normal assumption do not agree with actual error rates, because the tails of the noise distribution deviate from normality. However, the false discovery rate based on an empirically estimated null distribution is very close to the observed false discovery proportion.

  18. Multiplicity and entropy scaling of medium-energy protons emitted in relativistic heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Abdelsalam, A.; Kamel, S.; Hafiz, M. E.

    2015-10-01

    The behavior and properties of medium-energy protons with kinetic energies in the range 26-400 MeV are derived from measurements of the particle yields and spectra in the final state of relativistic heavy-ion collisions (16O-AgBr interactions at 60A and 200A GeV and 32S-AgBr interactions at 3.7A and 200A GeV) and their interpretation in terms of higher-order moments. The multiplicity distributions are fitted well by a Gaussian distribution function. The data are also compared with the predictions of the modified FRITIOF model, showing that the FRITIOF model does not reproduce the trend or the magnitude of the data. Measurements of the ratio of the variance to the mean show that the production of target fragments at high energies cannot be considered a statistically independent process; the deviation of each multiplicity distribution from a Poisson law provides evidence for correlations. The scaling behavior of two types of scaling functions (Koba-Nielsen-Olesen (KNO) scaling and Hegyi scaling) of the multiplicity distribution is investigated. A simplified universal function has been used in each scaling to display the experimental data. The relationship between the entropy, the average multiplicity, and the KNO function is examined. Entropy production and subsequent scaling in nucleus-nucleus collisions are analyzed over a wide energy range (Dubna and SPS). Interestingly, the data points corresponding to various energies overlap and fall on a single curve, indicating the presence of a kind of entropy scaling.

  19. Quantitative computed tomography of lung parenchyma in patients with emphysema: analysis of higher-density lung regions

    NASA Astrophysics Data System (ADS)

    Lederman, Dror; Leader, Joseph K.; Zheng, Bin; Sciurba, Frank C.; Tan, Jun; Gur, David

    2011-03-01

    Quantitative computed tomography (CT) has been widely used to detect and evaluate the presence (or absence) of emphysema by applying density masks at specific thresholds, e.g., -910 or -950 Hounsfield units (HU). However, it has also been observed that subjects with similar density-mask-based emphysema scores can have varying lung function, possibly indicating differences in disease severity. To assess this possible discrepancy, we investigated whether the density distribution of "viable" lung parenchyma regions with pixel values > -910 HU correlates with lung function. A dataset of 38 subjects, who underwent both pulmonary function testing and CT examinations in a COPD SCCOR study, was assembled. After the lung regions depicted on CT images were automatically segmented by a computerized scheme, we systematically divided the lung parenchyma into different density groups (bins) and computed a number of statistical features (i.e., mean, standard deviation (STD), and skewness of the pixel value distributions) in these density bins. We then analyzed the correlations between each feature and lung function. The correlation between the diffusing capacity of the lung (DLCO) and the STD of pixel values in the bin -910 HU <= PV < -750 HU was -0.43, compared with a correlation of -0.49 between the post-bronchodilator ratio FEV1/FVC (forced expiratory volume in 1 second divided by forced vital capacity) and the STD of pixel values in the bin -1024 HU <= PV < -910 HU. The results showed an association between the distribution of pixel values in "viable" lung parenchyma and lung function, which indicates that, similar to the conventional density mask method, pixel value distribution features in "viable" lung parenchyma areas may also provide clinically useful information to improve assessments of lung disease severity as measured by lung function tests.
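
    The bin-wise features are simple to compute once the lung is segmented. A minimal sketch (random values stand in for segmented lung voxels; the bin edges follow the thresholds quoted above):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
pixels = rng.normal(-870, 90, size=200_000)   # toy lung HU values

for lo, hi in [(-1024, -910), (-910, -750)]:
    sel = pixels[(pixels >= lo) & (pixels < hi)]
    print(f"[{lo}, {hi}) HU: mean={sel.mean():.1f}  "
          f"std={sel.std():.1f}  skew={stats.skew(sel):.2f}")
```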

  20. Abundance and Distribution of Dimethylsulfoniopropionate Degradation Genes and the Corresponding Bacterial Community Structure at Dimethyl Sulfide Hot Spots in the Tropical and Subtropical Pacific Ocean

    PubMed Central

    Suzuki, Shotaro; Omori, Yuko; Wong, Shu-Kuan; Ijichi, Minoru; Kaneko, Ryo; Kameyama, Sohiko; Tanimoto, Hiroshi; Hamasaki, Koji

    2015-01-01

    Dimethylsulfoniopropionate (DMSP) is mainly produced by marine phytoplankton but is released into the microbial food web and degraded by marine bacteria to dimethyl sulfide (DMS) and other products. To reveal the abundance and distribution of bacterial DMSP degradation genes and the corresponding bacterial communities in relation to DMS and DMSP concentrations in seawater, we collected surface seawater samples from DMS hot spot sites during a cruise across the Pacific Ocean. We analyzed the genes encoding DMSP lyase (dddP) and DMSP demethylase (dmdA), which are responsible for the transformation of DMSP to DMS and DMSP assimilation, respectively. The averaged abundance (±standard deviation) of these DMSP degradation genes relative to that of the 16S rRNA genes was 33% ± 12%. The abundances of these genes showed large spatial variations. dddP genes showed more variation in abundances than dmdA genes. Multidimensional analysis based on the abundances of DMSP degradation genes and environmental factors revealed that the distribution pattern of these genes was influenced by chlorophyll a concentrations and temperatures. dddP genes, dmdA subclade C/2 genes, and dmdA subclade D genes exhibited significant correlations with the marine Roseobacter clade, SAR11 subgroup Ib, and SAR11 subgroup Ia, respectively. SAR11 subgroups Ia and Ib, which possessed dmdA genes, were suggested to be the main potential DMSP consumers. The Roseobacter clade members possessing dddP genes in oligotrophic subtropical regions were possible DMS producers. These results suggest that DMSP degradation genes are abundant and widely distributed in the surface seawater and that the marine bacteria possessing these genes influence the degradation of DMSP and regulate the emissions of DMS in subtropical gyres of the Pacific Ocean. PMID:25862229

  1. Evaluation of Thermal Evolution Profiles and Estimation of Kinetic Parameters for Pyrolysis of Coal/Corn Stover Blends Using Thermogravimetric Analysis

    DOE PAGES

    Bhagavatula, Abhijit; Huffman, Gerald; Shah, Naresh; ...

    2014-01-01

    The thermal evolution profiles and kinetic parameters for the pyrolysis of two Montana coals (DECS-38 subbituminous coal and DECS-25 lignite coal), one biomass sample (corn stover), and their blends (10%, 20%, and 30% by weight of corn stover) have been investigated at a heating rate of 5°C/min in an inert nitrogen atmosphere, using thermogravimetric analysis. The thermal evolution profiles of subbituminous coal and lignite coal display only one major peak over a wide temperature distribution, ~152–814°C and ~175–818°C, respectively, whereas the thermal decomposition profile for corn stover falls in a much narrower band than that of the coals, ~226–608°C. The nonlinearity in the evolution of volatile matter with increasing percentage of corn stover in the blends verifies the possibility of synergistic behavior in the blends with subbituminous coal, where deviations from the predicted yield ranging between 2% and 7% were observed, whereas very little deviation (1%–3%) from the predicted yield was observed in blends with lignite, indicating no significant interactions with corn stover. In addition, a single first-order reaction model using the Coats-Redfern approximation was utilized to predict the kinetic parameters of the pyrolysis reaction. The kinetic analysis indicated that each thermal evolution profile may be represented as a single first-order reaction. Three temperature regimes were identified for each of the coals, while corn stover and the blends were analyzed using two and four temperature regimes, respectively.

  3. Assessment of facial golden proportions among young Japanese women.

    PubMed

    Mizumoto, Yasushi; Deguchi, Toshio; Fong, Kelvin W C

    2009-08-01

    Facial proportions are of interest in orthodontics. The null hypothesis is that there is no difference in golden proportions of the soft-tissue facial balance between Japanese and white women. Facial proportions were assessed by examining photographs of 3 groups of Asian women: group 1, 30 young adult patients with a skeletal Class 1 occlusion; group 2, 30 models; and group 3, 14 popular actresses. Photographic prints or slides were digitized for image analysis. Group 1 subjects had standardized photos taken as part of their treatment. Photos of the subjects in groups 2 and 3 were collected from magazines and other sources and were of varying sizes; therefore, the output image size was not considered. The range of measurement errors was 0.17% to 1.16%. ANOVA was selected because the data set was normally distributed with homogeneous variances. The subjects in the 3 groups showed good total facial proportions. The proportions of the face-height components in group 1 were similar to the golden proportion, which indicated a longer, lower facial height and shorter nose. Group 2 differed from the golden proportion, with a short, lower facial height. Group 3 had golden proportions in all 7 measurements. The proportion of the face width deviated from the golden proportion, indicating a small mouth or wide-set eyes in groups 1 and 2. The null hypothesis was verified in the group 3 actresses in the facial height components. Some measurements in groups 1 and 2 showed different facial proportions that deviated from the golden proportion (ratio).

  4. Simultaneous measurement of dynamic strain and temperature distribution using high birefringence PANDA fiber Bragg grating

    NASA Astrophysics Data System (ADS)

    Zhu, Mengshi; Murayama, Hideaki

    2017-04-01

    A new approach to the simultaneous measurement of dynamic strain and temperature has been demonstrated using a high-birefringence PANDA fiber Bragg grating sensor. With this technique, we have succeeded in discriminating dynamic strain and temperature distributions at a sampling rate of 800 Hz and a spatial resolution of 1 mm. The dynamic distributions of strain and temperature were measured with a spatial deviation of 5 mm. In addition, we have designed an experimental setup with which we can apply quantitative dynamic strain and temperature distributions to the fiber under test without bonding it to a specimen.

  5. An iteratively reweighted least-squares approach to adaptive robust adjustment of parameters in linear regression models with autoregressive and t-distributed deviations

    NASA Astrophysics Data System (ADS)

    Kargoll, Boris; Omidalizarandi, Mohammad; Loth, Ina; Paffenholz, Jens-André; Alkhatib, Hamza

    2018-03-01

    In this paper, we investigate a linear regression time series model of possibly outlier-afflicted observations and autocorrelated random deviations. This colored noise is represented by a covariance-stationary autoregressive (AR) process, in which the independent error components follow a scaled (Student's) t-distribution. This error model allows for the stochastic modeling of multiple outliers and for an adaptive, robust maximum likelihood (ML) estimation of the unknown regression and AR coefficients, the scale parameter, and the degree of freedom of the t-distribution. This approach is meant to be an extension of known estimators, which tend to focus only on the regression model, on the AR error model, or on normally distributed errors. For the purpose of ML estimation, we derive an expectation conditional maximization either (ECME) algorithm, which leads to an easy-to-implement version of iteratively reweighted least squares. The estimation performance of the algorithm is evaluated via Monte Carlo simulations for a Fourier as well as a spline model, in connection with AR colored noise models of different orders and with three different sampling distributions generating the white noise components. We apply the algorithm to a vibration dataset recorded by a high-accuracy, single-axis accelerometer, focusing on the evaluation of the estimated AR colored noise model.
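
    The core of the resulting iteratively reweighted least-squares step is compact. The sketch below strips the method down to a plain regression with scaled-t errors and a fixed degree of freedom: no AR model and no degree-of-freedom estimation, so it is a simplified illustration of the weighting scheme, not the authors' ECME algorithm.

```python
import numpy as np

def irls_t(X, y, nu=4.0, iters=50):
    """IRLS for regression with scaled Student-t errors (fixed nu)."""
    n, _ = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    sigma2 = np.mean((y - X @ beta) ** 2)
    for _ in range(iters):
        r = y - X @ beta
        w = (nu + 1.0) / (nu + r**2 / sigma2)   # EM weights for t errors
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        sigma2 = np.sum(w * (y - X @ beta) ** 2) / n
    return beta, sigma2

rng = np.random.default_rng(11)
t = np.linspace(0, 1, 300)
X = np.column_stack([np.ones_like(t), np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
y = X @ np.array([0.5, 2.0, -1.0]) + 0.1 * rng.standard_t(df=3, size=t.size)
y[::40] += 3.0                                  # inject outliers
print(irls_t(X, y)[0])                          # close to [0.5, 2.0, -1.0]
```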

  6. Evaluation of measurement uncertainty of glucose in clinical chemistry.

    PubMed

    Berçik Inal, B; Koldas, M; Inal, H; Coskun, C; Gümüs, A; Döventas, Y

    2007-04-01

    The definition of the uncertainty of measurement used in the International Vocabulary of Basic and General Terms in Metrology (VIM) is: a parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand. Uncertainty of measurement comprises many components. In addition to every reported parameter, a measurement uncertainty value should be given by all accredited institutions; this value shows the reliability of the measurement. GUM, published by NIST, contains uncertainty directions. The Eurachem/CITAC Guide CG4 was also published by the Eurachem/CITAC Working Group in 2000. Both offer a mathematical model with which uncertainty can be calculated. There are two types of uncertainty evaluation in measurement: type A is the evaluation of uncertainty through statistical analysis, and type B is the evaluation of uncertainty through other means, for example, a certified reference material. The Eurachem Guide uses four types of distribution functions: (1) a rectangular distribution, which gives limits without specifying a level of confidence (u(x) = a/√3) for a certificate; (2) a triangular distribution, for values near to the same point (u(x) = a/√6); (3) a normal distribution, in which an uncertainty is given in the form of a standard deviation s, a relative standard deviation s/√n, or a coefficient of variation CV% without specifying the distribution (a = certificate value, u = standard uncertainty); and (4) a confidence interval.
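
    The three conversion rules quoted above translate directly into code (here a is treated as the half-width of the certificate interval, an interpretation assumed for the sketch):

```python
import math

def standard_uncertainty(a=None, kind="rectangular", s=None, n=1):
    """a: assumed half-width of a certificate interval; s: a quoted
    standard deviation (of a mean of n values for kind='normal')."""
    if kind == "rectangular":
        return a / math.sqrt(3)      # limits given, no confidence level
    if kind == "triangular":
        return a / math.sqrt(6)      # values concentrated near the midpoint
    if kind == "normal":
        return s / math.sqrt(n)      # quoted standard deviation
    raise ValueError(kind)

print(standard_uncertainty(a=0.5))                        # 0.289
print(standard_uncertainty(a=0.5, kind="triangular"))     # 0.204
print(standard_uncertainty(kind="normal", s=0.3, n=10))   # 0.095
```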

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trofimov, A; Carpenter, K; Shih, HA

    Purpose: To quantify daily set-up variations in fractionated proton therapy of ocular melanomas, and to assess the effect on the fidelity of the delivered distribution to the plan. Methods: In a typical five-fraction course, daily set-up is achieved by matching the position of fiducial markers in orthogonal radiographs to the images generated by the treatment planning program. A patient maintains the required gaze direction voluntarily, without the aid of fixation devices. Confirmation radiographs are acquired to assess intrafractional changes. For this study, daily radiographs were analyzed to determine the daily iso-center position and apparent gaze direction, which were then transferred to the planning system to calculate the dose delivered in individual fractions and the accumulated dose for the entire course. Dose-volume metrics were compared between the planned and accumulated distributions for the tumor and organs at risk, for representative cases that varied by location within the ocular globe. Results: The analysis of the first set of cases (3 posterior, 3 transequatorial, and 4 anterior tumors) revealed varying dose deviation patterns, depending on tumor location. For anterior and posterior tumors, the largest dose increases were observed in the lens and ciliary body, while for the equatorial tumors, the macula, optic nerve, and disk were most often affected. The iso-center position error was below 1.3 mm (95%-confidence interval), and the standard deviations of daily polar and azimuthal gaze set-up were 1.5 and 3 degrees, respectively. Conclusion: We quantified interfractional and intrafractional set-up variation and estimated their effect on the delivered dose for representative cases. Current safety margins are sufficient to maintain target coverage; however, the dose delivered to critical structures often deviates from the plan. The ongoing analysis will further explore the patterns of dose deviation and may help to identify particular treatment scenarios that are at a higher risk for such deviations.

  8. Deformed transition-state theory: Deviation from Arrhenius behavior and application to bimolecular hydrogen transfer reaction rates in the tunneling regime.

    PubMed

    Carvalho-Silva, Valter H; Aquilanti, Vincenzo; de Oliveira, Heibbe C B; Mundim, Kleber C

    2017-01-30

    A formulation is presented for the application of tools from quantum chemistry and transition-state theory to phenomenologically cover cases where reaction rates deviate from the Arrhenius law at low temperatures. A parameter d is introduced to describe the deviation of the systems from the thermodynamic limit and is identified as the linearizing coefficient in the dependence of the inverse activation energy on inverse temperature. Its physical meaning is given, and when the deviation can be ascribed to quantum mechanical tunneling its value is calculated explicitly. Here, a new derivation is given of the previously established relationship of the parameter d with features of the barrier in the potential energy surface. The proposed variant of transition-state theory permits comparison with experiments and tests against alternative formulations. Prescriptions are provided and implemented for three hydrogen transfer reactions: CH4 + OH → CH3 + H2O, CH3Cl + OH → CH2Cl + H2O, and H2 + CN → H + HCN, all widely investigated both experimentally and theoretically. © 2016 Wiley Periodicals, Inc.
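
    For illustration, the deformed-Arrhenius form discussed above can be written, in the Aquilanti-Mundim parameterization assumed here, as k(T) = A[1 − d·Ea/(RT)]^(1/d), which recovers the classical Arrhenius law as d → 0; negative d produces the concave Arrhenius-plot curvature associated with tunneling. The constants below are toy values:

```python
import numpy as np

R = 8.314462618e-3   # gas constant, kJ mol^-1 K^-1

def k_deformed(T, A=1e13, Ea=30.0, d=-0.1):
    """Assumed d-Arrhenius form: A*(1 - d*Ea/(R*T))**(1/d); d < 0 mimics
    the sub-Arrhenius (tunneling) regime."""
    return A * (1.0 - d * Ea / (R * T)) ** (1.0 / d)

def k_arrhenius(T, A=1e13, Ea=30.0):
    return A * np.exp(-Ea / (R * T))

for T in (200.0, 300.0, 500.0):
    # enhancement over classical Arrhenius grows as T drops
    print(f"T={T:.0f} K  k_d/k_Arrh = {k_deformed(T) / k_arrhenius(T):.3g}")
```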

  9. Adaptive Neural Mechanism for Listing’s Law Revealed in Patients with Skew Deviation Caused by Brainstem or Cerebellar Lesion

    PubMed Central

    Fesharaki, Maryam; Karagiannis, Peter; Tweed, Douglas; Sharpe, James A.; Wong, Agnes M. F.

    2016-01-01

    Purpose Skew deviation is a vertical strabismus caused by damage to the otolithic–ocular reflex pathway and is associated with abnormal ocular torsion. This study was conducted to determine whether patients with skew deviation show the normal pattern of three-dimensional eye control called Listing’s law, which specifies the eye’s torsional angle as a function of its horizontal and vertical position. Methods Ten patients with skew deviation caused by brain stem or cerebellar lesions and nine normal control subjects were studied. Patients with diplopia and neurologic symptoms less than 1 month in duration were designated as acute (n = 4) and those with longer duration were classified as chronic (n = 10). Serial recordings were made in the four patients with acute skew deviation. With the head immobile, subjects made saccades to a target that moved between straight ahead and eight eccentric positions, while wearing search coils. At each target position, fixation was maintained for 3 seconds before the next saccade. From the eye position data, the plane of best fit, referred to as Listing’s plane, was fitted. Violations of Listing’s law were quantified by computing the “thickness” of this plane, defined as the SD of the distances to the plane from the data points. Results Both the hypertropic and hypotropic eyes in patients with acute skew deviation violated Listing’s and Donders’ laws—that is, the eyes did not show one consistent angle of torsion in any given gaze direction, but rather an abnormally wide range of torsional angles. In contrast, each eye in patients with chronic skew deviation obeyed the laws. However, in chronic skew deviation, Listing’s planes in both eyes had abnormal orientations. Conclusions Patients with acute skew deviation violated Listing’s law, whereas those with chronic skew deviation obeyed it, indicating that despite brain lesions, neural adaptation can restore Listing’s law so that the neural linkage between horizontal, vertical, and torsional eye position remains intact. Violation of Listing’s and Donders’ laws during fixation arises primarily from torsional drifts, indicating that patients with acute skew deviation have unstable torsional gaze holding that is independent of their horizontal–vertical eye positions. PMID:18172094

  10. Current status of 3D EPID-based in vivo dosimetry in The Netherlands Cancer Institute

    NASA Astrophysics Data System (ADS)

    Mijnheer, B.; Olaciregui-Ruiz, I.; Rozendaal, R.; Spreeuw, H.; van Herk, M.; Mans, A.

    2015-01-01

    3D in vivo dose verification using a-Si EPIDs is performed routinely in our institution for almost all RT treatments. The EPID-based 3D dose distribution is reconstructed using a back-projection algorithm and compared with the planned dose distribution using 3D gamma evaluation. Dose-reconstruction and gamma-evaluation software runs automatically, and deviations outside the alert criteria are immediately available and investigated, in combination with inspection of cone-beam CT scans. The implementation of our 3D EPID-based in vivo dosimetry approach was able to replace pre-treatment verification for more than 90% of the patient treatments. Clinically relevant deviations could be detected for approximately 1 out of 300 patient treatments (IMRT and VMAT). Most of these errors were patient-related anatomical changes or deviations from the routine clinical procedure, and would not have been detected by pre-treatment verification. Moreover, 3D EPID-based in vivo dose verification is a fast and accurate tool for assuring the safe delivery of RT treatments. It provides clinically more useful information and is less time consuming than pre-treatment verification measurements. Automated 3D in vivo dosimetry is therefore a prerequisite for large-scale implementation of patient-specific quality assurance of RT treatments.

  11. Attacks exploiting deviation of mean photon number in quantum key distribution and coin tossing

    NASA Astrophysics Data System (ADS)

    Sajeed, Shihan; Radchenko, Igor; Kaiser, Sarah; Bourgoin, Jean-Philippe; Pappa, Anna; Monat, Laurent; Legré, Matthieu; Makarov, Vadim

    2015-03-01

    The security of quantum communication using a weak coherent source requires an accurate knowledge of the source's mean photon number. Finite calibration precision or an active manipulation by an attacker may cause the actual emitted photon number to deviate from the known value. We model effects of this deviation on the security of three quantum communication protocols: the Bennett-Brassard 1984 (BB84) quantum key distribution (QKD) protocol without decoy states, Scarani-Acín-Ribordy-Gisin 2004 (SARG04) QKD protocol, and a coin-tossing protocol. For QKD we model both a strong attack using technology possible in principle and a realistic attack bounded by today's technology. To maintain the mean photon number in two-way systems, such as plug-and-play and relativistic quantum cryptography schemes, bright pulse energy incoming from the communication channel must be monitored. Implementation of a monitoring detector has largely been ignored so far, except for ID Quantique's commercial QKD system Clavis2. We scrutinize this implementation for security problems and show that designing a hack-proof pulse-energy-measuring detector is far from trivial. Indeed, the first implementation has three serious flaws confirmed experimentally, each of which may be exploited in a cleverly constructed Trojan-horse attack. We discuss requirements for a loophole-free implementation of the monitoring detector.
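
    Why the mean photon number matters can be seen from standard Poisson photon statistics (background arithmetic, not the paper's attack model): the fraction of multi-photon pulses among non-empty pulses grows nearly linearly with μ, and an attacker who inflates μ inflates that fraction.

```python
import numpy as np

def multiphoton_fraction(mu):
    """P(n >= 2 | n >= 1) for a Poissonian weak coherent source."""
    p0 = np.exp(-mu)
    p1 = mu * np.exp(-mu)
    return (1.0 - p0 - p1) / (1.0 - p0)

for mu in (0.1, 0.2, 0.5):
    print(mu, round(multiphoton_fraction(mu), 4))
```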

  12. Probability distributions of linear statistics in chaotic cavities and associated phase transitions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vivo, Pierpaolo; Majumdar, Satya N.; Bohigas, Oriol

    2010-03-01

    We establish large deviation formulas for linear statistics on the N transmission eigenvalues (T_i) of a chaotic cavity, in the framework of random matrix theory. Given any linear statistics of interest A = Σ_{i=1}^N a(T_i), the probability distribution P_A(A, N) of A generically satisfies the large deviation formula lim_{N→∞} [−2 log P_A(Nx, N)/(βN²)] = Ψ_A(x), where Ψ_A(x) is a rate function that we compute explicitly in many cases (conductance, shot noise, and moments) and β corresponds to different symmetry classes. Using these large deviation expressions, it is possible to recover easily known results and to produce new formulas, such as a closed-form expression for v(n) = lim_{N→∞} var(T_n) (where T_n = Σ_i T_i^n) for arbitrary integer n. The universal limit v* = lim_{n→∞} v(n) = 1/(2πβ) is also computed exactly. The distributions display a central Gaussian region flanked on both sides by non-Gaussian tails. At the junction of the two regimes, weakly nonanalytical points appear, a direct consequence of phase transitions in an associated Coulomb gas problem. Numerical checks are also provided, which are in full agreement with our asymptotic results in both real and Laplace space even for moderately small N. Part of the results have been announced by Vivo et al. [Phys. Rev. Lett. 101, 216809 (2008)].

  13. Experimental Investigation of Unsteady Shock Wave Turbulent Boundary Layer Interactions About a Blunt Fin

    NASA Technical Reports Server (NTRS)

    Barnhart, Paul J.; Greber, Isaac

    1997-01-01

    A series of experiments was performed to investigate the effects of Mach number variation on the characteristics of the unsteady shock wave/turbulent boundary layer interaction generated by a blunt fin. A single blunt-fin hemicylindrical leading-edge diameter was used in all of the experiments, which covered the Mach number range from 2.0 to 5.0. The measurements in this investigation included surface flow visualization and static and dynamic pressure measurements, both on and off the centerline of the blunt fin axis. Surface flow visualization and static pressure measurements showed that the spatial extent of the shock wave/turbulent boundary layer interaction increased with increasing Mach number. The maximum static pressure, normalized by the incoming static pressure, measured at the peak location in the separated flow region ahead of the blunt fin was found to increase with increasing Mach number. The mean and standard deviations of the fluctuating pressure signals from the dynamic pressure transducers were found to collapse to self-similar distributions as a function of the distance perpendicular to the separation line. The standard deviation of the pressure signals showed an initially peaked distribution, with the maximum standard deviation point corresponding to the location of the separation line at Mach numbers 3.0 to 5.0. At Mach 2.0 the maximum standard deviation point was found to occur significantly upstream of the separation line. The intermittency distributions of the separation shock wave motion were found to be self-similar profiles for all Mach numbers. The intermittent region length was found to increase with Mach number and decrease with interaction sweepback angle. For Mach numbers 3.0 to 5.0 the separation line was found to correspond to high intermittencies or equivalently to the downstream locus of the separation shock wave motion. The Mach 2.0 tests, however, showed that the intermittent region occurs significantly upstream of the separation line. Power spectral densities measured in the intermittent regions were found to have self-similar frequency distributions when compared as functions of a Strouhal number for all Mach numbers and interaction sweepback angles. The maximum zero-crossing frequencies were found to correspond with the peak frequencies in the power spectra measured in the intermittent region.

  14. Extremely Nonthermal Monoenergetic Precipitation in the Auroral Acceleration Region: In Situ Observations

    NASA Astrophysics Data System (ADS)

    Hatch, S.; Chaston, C. C.; Labelle, J. W.

    2017-12-01

    We report in situ measurements through the auroral acceleration region that reveal extremely nonthermal monoenergetic electron distributions. These auroral primaries are indicative of source populations in the plasma sheet well described as kappa distributions with κ ≲ 2. We show from observations and modeling how this large deviation from Maxwellian form may modify the acceleration potential required to drive current closure through the auroral ionosphere.

  15. FIBER AND INTEGRATED OPTICS: Detection of the optical anisotropy in KTP:Rb waveguides

    NASA Astrophysics Data System (ADS)

    Buritskiĭ, K. S.; Dianov, Evgenii M.; Maslov, Vladislav A.; Chernykh, V. A.; Shcherbakov, E. A.

    1990-10-01

    The optical characteristics of channel waveguides made of rubidium-activated potassium titanyl phosphate (KTP:Rb) were determined. The refractive index increment of such waveguides was found to exhibit a considerable anisotropy: Δn_x/Δn_z ≈ 2. A deviation of the distribution of the refractive index in a channel waveguide from the model distribution was observed for ion-exchange times in excess of 1 h.

  16. Explorations in statistics: the log transformation.

    PubMed

    Curran-Everett, Douglas

    2018-06-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This thirteenth installment of Explorations in Statistics explores the log transformation, an established technique that rescales the actual observations from an experiment so that the assumptions of some statistical analysis are better met. A general assumption in statistics is that the variability of some response Y is homogeneous across groups or across some predictor variable X. If the variability, the standard deviation, varies in rough proportion to the mean value of Y, a log transformation can equalize the standard deviations. Moreover, if the actual observations from an experiment conform to a skewed distribution, then a log transformation can make the theoretical distribution of the sample mean more consistent with a normal distribution. This is important: the results of a one-sample t test are meaningful only if the theoretical distribution of the sample mean is roughly normal. If we log-transform our observations, then we want to confirm the transformation was useful. We can do this if we use the Box-Cox method, if we bootstrap the sample mean and the statistic t itself, and if we assess the residual plots from the statistical model of the actual and transformed sample observations.
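
    A compact sketch of the workflow described above: log-transform skewed observations, then bootstrap the sample mean on both scales and compare skewness. The simulated lognormal data and the skewness check are stand-ins for the article's worked examples.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.lognormal(mean=1.0, sigma=0.8, size=40)    # skewed raw observations

def bootstrap_means(data, n_boot=10_000):
    idx = rng.integers(0, len(data), size=(n_boot, len(data)))
    return data[idx].mean(axis=1)

for label, obs in (("raw", y), ("log", np.log(y))):
    means = bootstrap_means(obs)
    skew = np.mean((means - means.mean()) ** 3) / means.std() ** 3
    print(f"{label}: skewness of bootstrap distribution of the mean = {skew:+.3f}")
```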

  17. Beyond Word Frequency: Bursts, Lulls, and Scaling in the Temporal Distributions of Words

    PubMed Central

    Altmann, Eduardo G.; Pierrehumbert, Janet B.; Motter, Adilson E.

    2009-01-01

    Background Zipf's discovery that word frequency distributions obey a power law established parallels between biological and physical processes, and language, laying the groundwork for a complex systems perspective on human communication. More recent research has also identified scaling regularities in the dynamics underlying the successive occurrences of events, suggesting the possibility of similar findings for language as well. Methodology/Principal Findings By considering frequent words in USENET discussion groups and in disparate databases where the language has different levels of formality, here we show that the distributions of distances between successive occurrences of the same word display bursty deviations from a Poisson process and are well characterized by a stretched exponential (Weibull) scaling. The extent of this deviation depends strongly on semantic type – a measure of the logicality of each word – and less strongly on frequency. We develop a generative model of this behavior that fully determines the dynamics of word usage. Conclusions/Significance Recurrence patterns of words are well described by a stretched exponential distribution of recurrence times, an empirical scaling that cannot be anticipated from Zipf's law. Because the use of words provides a uniquely precise and powerful lens on human thought and activity, our findings also have implications for other overt manifestations of collective human dynamics. PMID:19907645
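
    A hedged sketch of the fitting step: estimate the Weibull (stretched exponential) shape parameter from inter-occurrence distances and read burstiness off the shape. The simulated distances below are a stand-in for real token streams; shape < 1 indicates bursty recurrence, while shape = 1 recovers the Poisson (exponential) baseline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated distances between successive occurrences of a word (bursty case)
distances = stats.weibull_min.rvs(0.6, scale=50, size=2000, random_state=rng)

shape, loc, scale = stats.weibull_min.fit(distances, floc=0)
print(f"fitted Weibull shape = {shape:.2f}  (1.0 would be Poisson-like)")
```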

  18. A short-term and high-resolution distribution system load forecasting approach using support vector regression with hybrid parameters optimization

    DOE PAGES

    Jiang, Huaiguang; Zhang, Yingchen; Muljadi, Eduard; ...

    2016-01-01

    This paper proposes an approach for distribution system load forecasting, which aims to provide highly accurate short-term load forecasting with high resolution utilizing a support vector regression (SVR) based forecaster and a two-step hybrid parameters optimization method. Specifically, because the load profiles in distribution systems contain abrupt deviations, a data normalization is designed as the pretreatment for the collected historical load data. Then an SVR model is trained by the load data to forecast the future load. For better performance of SVR, a two-step hybrid optimization algorithm is proposed to determine the best parameters. In the first step of the hybrid optimization algorithm, a designed grid traverse algorithm (GTA) is used to narrow the parameters searching area from a global to local space. In the second step, based on the result of the GTA, particle swarm optimization (PSO) is used to determine the best parameters in the local parameter space. After the best parameters are determined, the SVR model is used to forecast the short-term load deviation in the distribution system. The performance of the proposed approach is compared to some classic methods in later sections of the paper.
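
    The two-step idea can be sketched as follows: a coarse grid traverse narrows the (C, gamma) space, then a local search refines the best cell. A simple random local search stands in for the paper's PSO, and the data, parameter ranges, and fold count are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 4))          # stand-in load features
y = X @ np.array([2.0, -1.0, 0.5, 1.5]) + 0.1 * rng.standard_normal(200)

def score(C, gamma):
    return cross_val_score(SVR(C=C, gamma=gamma), X, y, cv=3).mean()

# Step 1: coarse grid traverse over a wide log-spaced parameter space
grid = [(C, g) for C in np.logspace(-2, 3, 6) for g in np.logspace(-3, 1, 5)]
C0, g0 = max(grid, key=lambda p: score(*p))

# Step 2: local refinement around the best grid cell (PSO in the paper)
best = (C0, g0, score(C0, g0))
for _ in range(30):
    C = C0 * 10 ** rng.uniform(-0.5, 0.5)
    g = g0 * 10 ** rng.uniform(-0.5, 0.5)
    s = score(C, g)
    if s > best[2]:
        best = (C, g, s)
print(f"best C = {best[0]:.3g}, gamma = {best[1]:.3g}, CV R^2 = {best[2]:.3f}")
```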

  19. Large Deviations in Weakly Interacting Boundary Driven Lattice Gases

    NASA Astrophysics Data System (ADS)

    van Wijland, Frédéric; Rácz, Zoltán

    2005-01-01

    One-dimensional, boundary-driven lattice gases with local interactions are studied in the weakly interacting limit. The density profiles and the correlation functions are calculated to first order in the interaction strength for zero-range and short-range processes differing only in the specifics of the detailed-balance dynamics. Furthermore, the effective free-energy (large-deviation function) and the integrated current distribution are also found to this order. From the former, we find that the boundary drive generates long-range correlations only for the short-range dynamics while the latter provides support to an additivity principle recently proposed by Bodineau and Derrida.

  20. Effect of laser frequency noise on fiber-optic frequency reference distribution

    NASA Technical Reports Server (NTRS)

    Logan, R. T., Jr.; Lutes, G. F.; Maleki, L.

    1989-01-01

    The effect of the linewidth of a single longitudinal-mode laser on the frequency stability of a frequency reference transmitted over a single-mode optical fiber is analyzed. The interaction of the random laser frequency deviations with the dispersion of the optical fiber is considered to determine theoretically the effect on the Allan deviation (square root of the Allan variance) of the transmitted frequency reference. It is shown that the magnitude of this effect may determine the limit of the ultimate stability possible for frequency reference transmission on optical fiber, but is not a serious limitation to present system performance.
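
    The stability measure referred to here, the Allan deviation, can be computed from a fractional-frequency time series as follows; this is a minimal non-overlapping estimator run on simulated white frequency noise, not the paper's analytical treatment.

```python
import numpy as np

def allan_deviation(y, taus, rate_hz=1.0):
    """y: fractional frequency samples; taus: averaging times in seconds."""
    out = []
    for tau in taus:
        m = int(tau * rate_hz)               # samples per averaging bin
        n = len(y) // m
        bin_means = y[: n * m].reshape(n, m).mean(axis=1)
        avar = 0.5 * np.mean(np.diff(bin_means) ** 2)
        out.append(np.sqrt(avar))
    return np.array(out)

rng = np.random.default_rng(0)
y = 1e-12 * rng.standard_normal(100_000)     # white frequency noise at 1 Hz
for tau, adev in zip((1, 10, 100), allan_deviation(y, (1, 10, 100))):
    print(f"tau = {tau:4d} s  sigma_y(tau) = {adev:.2e}")
```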

  1. Lunar brightness temperature from Microwave Radiometers data of Chang'E-1 and Chang'E-2

    NASA Astrophysics Data System (ADS)

    Feng, J.-Q.; Su, Y.; Zheng, L.; Liu, J.-J.

    2011-10-01

    Both Chinese lunar orbiters, Chang'E-1 and Chang'E-2, carried Microwave Radiometers (MRM) to obtain the brightness temperature of the Moon. Based on the different characteristics of these two MRMs, modified brightness temperature algorithms and specific ground calibration parameters were proposed, and the corresponding lunar global brightness temperature maps were produced. To analyze the data distributions of these maps, a normalization method was applied to the data series. The second-channel data with large deviations were rectified, and the reasons for the deviations were analyzed.

  2. Gambling as a teaching aid in the introductory physics laboratory

    NASA Astrophysics Data System (ADS)

    Horodynski-Matsushigue, L. B.; Pascholati, P. R.; Vanin, V. R.; Dias, J. F.; Yoneama, M.-L.; Siqueira, P. T. D.; Amaku, M.; Duarte, J. L. M.

    1998-07-01

    Dice throwing is used to illustrate relevant concepts of the statistical theory of uncertainties, in particular the meaning of a limiting distribution, the standard deviation, and the standard deviation of the mean. It is an important part of a sequence of specially programmed laboratory activities, developed for freshmen, at the Institute of Physics of the University of São Paulo. It is shown how this activity is employed within a constructive teaching approach, which aims at a growing understanding of the measuring processes and of the fundamentals of correct statistical handling of experimental data.
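
    A minimal simulation of the exercise: throw dice many times and compare the spread of single throws with the spread of the sample mean, which contracts as 1/sqrt(N). The sample sizes are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
throws = rng.integers(1, 7, size=(10_000, 25))  # 10,000 samples of N = 25 throws

sd_single = throws.std()                  # spread of one throw (about 1.71)
sd_mean = throws.mean(axis=1).std()       # spread of the sample mean
print(f"SD of single throws:   {sd_single:.3f}")
print(f"SD of the mean (N=25): {sd_mean:.3f}")
print(f"theory SD/sqrt(N):     {sd_single / np.sqrt(25):.3f}")
```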

  3. Origin of the inertial deviation from Darcy's law: An investigation from a microscopic flow analysis on two-dimensional model structures

    NASA Astrophysics Data System (ADS)

    Agnaou, Mehrez; Lasseux, Didier; Ahmadi, Azita

    2017-10-01

    Inertial flow in porous media occurs in many situations of practical relevance among which one can cite flows in column reactors, in filters, in aquifers, or near wells for hydrocarbon recovery. It is characterized by a deviation from Darcy's law that leads to a nonlinear relationship between the pressure drop and the filtration velocity. In this work, this deviation, also known as the nonlinear, inertial, correction to Darcy's law, which is subject to controversy upon its origin and dependence on the filtration velocity, is studied through numerical simulations. First, the microscopic flow problem was solved computationally for a wide range of Reynolds numbers up to the limit of steady flow within ordered and disordered porous structures. In a second step, the macroscopic characteristics of the porous medium and flow (permeability and inertial correction tensors) that appear in the macroscale model were computed. From these results, different flow regimes were identified: (1) the weak inertia regime where the inertial correction has a cubic dependence on the filtration velocity and (2) the strong inertia (Forchheimer) regime where the inertial correction depends on the square of the filtration velocity. However, the existence and origin of those regimes, which depend also on the microstructure and flow orientation, are still not well understood in terms of their physical interpretations, as many causes have been conjectured in the literature. In the present study, we provide an in-depth analysis of the flow structure to identify the origin of the deviation from Darcy's law. For accuracy and clarity purposes, this is carried out on two-dimensional structures. Unlike the previous studies reported in the literature, where the origin of inertial effects is often identified on a heuristic basis, a theoretical justification is presented in this work. Indeed, a decomposition of the convective inertial term into two components is carried out formally allowing the identification of a correlation between the flow structure and the different inertial regimes. These components correspond to the curvature of the flow streamlines weighted by the local fluid kinetic energy on the one hand and the distribution of the kinetic energy along these lines on the other hand. In addition, the role of the recirculation zones in the occurrence and in the form of the deviation from Darcy's law was thoroughly analyzed. For the porous structures under consideration, it is shown that (1) the kinetic energy lost in the vortices is insignificant even at high filtration velocities and (2) the shape of the flow streamlines induced by the recirculation zones plays an important role in the variation of the flow structure, which is correlated itself to the different flow regimes.
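
    The two regimes described above can be told apart by the scaling of the inertial correction, as in this toy fit; the Darcy slope and Forchheimer coefficient are invented values, not results from the paper's simulations.

```python
import numpy as np

mu_over_k, beta = 10.0, 2.0           # Darcy slope and inertial coefficient (made up)
v = np.logspace(-2, 1, 50)            # filtration velocity
grad_p = mu_over_k * v + beta * v**2  # Forchheimer (strong inertia) form

correction = grad_p - mu_over_k * v   # deviation from Darcy's law
exponent = np.polyfit(np.log(v), np.log(correction), 1)[0]
print(f"scaling exponent of the correction: {exponent:.2f} "
      "(2 = strong inertia/Forchheimer, 3 = weak inertia)")
```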

  4. Validation of the thermophysiological model by Fiala for prediction of local skin temperatures

    NASA Astrophysics Data System (ADS)

    Martínez, Natividad; Psikuta, Agnes; Kuklane, Kalev; Quesada, José Ignacio Priego; de Anda, Rosa María Cibrián Ortiz; Soriano, Pedro Pérez; Palmer, Rosario Salvador; Corberán, José Miguel; Rossi, René Michel; Annaheim, Simon

    2016-12-01

    The most complete and realistic physiological data are derived from direct measurements during human experiments; however, these present some limitations, such as ethical concerns and time and cost burdens. Thermophysiological models are able to predict human thermal response in a wide range of environmental conditions, but their use is limited due to lack of validation. The aim of this work was to validate the thermophysiological model by Fiala for prediction of local skin temperatures against a dedicated database containing 43 different human experiments representing a wide range of conditions. The validation was conducted based on root-mean-square deviation (rmsd) and bias. The thermophysiological model by Fiala showed good precision when predicting core and mean skin temperature (rmsd 0.26 and 0.92 °C, respectively) and also local skin temperatures for most body sites (average rmsd for local skin temperatures 1.32 °C). However, an increased deviation of the predictions was observed for the forehead skin temperature (rmsd of 1.63 °C) and for the thigh during exercising exposures (rmsd of 1.41 °C). Possible reasons for the observed deviations are lack of information on measurement circumstances (hair, head coverage interference) or an overestimation of the sweat evaporative cooling capacity for the head and thigh, respectively. This work has highlighted the importance of collecting details about the clothing worn and how and where the sensors were attached to the skin for achieving more precise results in the simulations.
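
    The two validation metrics used above are easy to state explicitly: rmsd = sqrt(mean((pred − meas)²)) and bias = mean(pred − meas). A tiny sketch with invented temperatures:

```python
import numpy as np

predicted = np.array([33.1, 33.4, 32.8, 34.0, 33.6])   # degC, illustrative
measured = np.array([33.5, 33.2, 33.0, 33.3, 34.1])

err = predicted - measured
print(f"rmsd = {np.sqrt(np.mean(err ** 2)):.2f} degC, bias = {np.mean(err):+.2f} degC")
```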

  5. Possibilities and limitations of the kinetic plot method in supercritical fluid chromatography.

    PubMed

    De Pauw, Ruben; Desmet, Gert; Broeckhoven, Ken

    2013-08-30

    Although supercritical fluid chromatography (SFC) is becoming a technique of increasing importance in the field of analytical chromatography, methods to compare the performance of SFC columns and separations in an unbiased way are not fully developed. The present study uses mathematical models to investigate the possibilities and limitations of the kinetic plot method in SFC, as this readily allows investigation of a wide range of operating pressures, retention and mobile phase conditions. The variable column length (L) kinetic plot method was further investigated in this work. Since the pressure history is identical for each measurement, this method gives the true kinetic performance limit in SFC. The deviations of the traditional way of measuring the performance as a function of flow rate (fixed back pressure and column length) and the isopycnic method with respect to this variable column length method were investigated under a wide range of operational conditions. It is found that, using the variable L method, extrapolations towards other pressure drops are not valid in SFC (deviation of ∼15% for extrapolation from 50 to 200 bar pressure drop). The isopycnic method provides the best prediction, but its use is limited when operating closer to critical-point conditions. When an organic modifier is used, the predictions are improved for both methods with respect to the variable L method (e.g. deviations decrease from 20% to 2% when 20 mol% of methanol is added). Copyright © 2013 Elsevier B.V. All rights reserved.

  6. Laser frequency stabilization using a commercial wavelength meter

    NASA Astrophysics Data System (ADS)

    Couturier, Luc; Nosske, Ingo; Hu, Fachao; Tan, Canzhu; Qiao, Chang; Jiang, Y. H.; Chen, Peng; Weidemüller, Matthias

    2018-04-01

    We present the characterization of a laser frequency stabilization scheme using a state-of-the-art wavelength meter based on solid Fizeau interferometers. For a frequency-doubled Ti-sapphire laser operated at 461 nm, an absolute Allan deviation below 10⁻⁹ with a standard deviation of 1 MHz over 10 h is achieved. Using this laser for cooling and trapping of strontium atoms, the wavemeter scheme provides excellent stability in single-channel operation. Multi-channel operation with a multimode fiber switch results in fluctuations of the atomic fluorescence correlated to residual frequency excursions of the laser. The wavemeter-based frequency stabilization scheme can be applied to a wide range of atoms and molecules for laser spectroscopy, cooling, and trapping.

  7. Role of the standard deviation in the estimation of benchmark doses with continuous data.

    PubMed

    Gaylor, David W; Slikker, William

    2004-12-01

    For continuous data, risk is defined here as the proportion of animals with values above a large percentile, e.g., the 99th percentile or below the 1st percentile, for the distribution of values among control animals. It is known that reducing the standard deviation of measurements through improved experimental techniques will result in less stringent (higher) doses for the lower confidence limit on the benchmark dose that is estimated to produce a specified risk of animals with abnormal levels for a biological effect. Thus, a somewhat larger (less stringent) lower confidence limit is obtained that may be used as a point of departure for low-dose risk assessment. It is shown in this article that it is important for the benchmark dose to be based primarily on the standard deviation among animals, s(a), apart from the standard deviation of measurement errors, s(m), within animals. If the benchmark dose is incorrectly based on the overall standard deviation among average values for animals, which includes measurement error variation, the benchmark dose will be overestimated and the risk will be underestimated. The bias increases as s(m) increases relative to s(a). The bias is relatively small if s(m) is less than one-third of s(a), a condition achieved in most experimental designs.
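
    The size of the bias is easy to illustrate: with one measurement per animal, the observed SD inflates as s_total = sqrt(s_a² + s_m²), so the s_m < s_a/3 condition keeps the inflation near 5%. The values below are invented for illustration.

```python
import math

s_a = 1.0                                 # SD among animals (biological)
for s_m in (s_a / 3, s_a, 2 * s_a):       # measurement-error SD within animals
    s_total = math.sqrt(s_a**2 + s_m**2)  # observed SD, one measurement/animal
    print(f"s_m = {s_m:.2f}: s_total = {s_total:.2f} "
          f"({100 * (s_total / s_a - 1):.0f}% inflation over s_a)")
```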

  8. The influence of the directional energy distribution on the nonlinear dispersion relation in a random gravity wave field

    NASA Technical Reports Server (NTRS)

    Huang, N. E.; Tung, C.-C.

    1977-01-01

    The influence of the directional distribution of wave energy on the dispersion relation is calculated numerically using various directional wave spectrum models. The results indicate that the dispersion relation varies both as a function of the directional energy distribution and the direction of propagation of the wave component under consideration. Furthermore, both the mean deviation and the random scatter from the linear approximation increase as the energy spreading decreases. Limited observational data are compared with the theoretical results. The agreement is favorable.

  9. Multiple Coulomb scattering in thin silicon

    NASA Astrophysics Data System (ADS)

    Berger, N.; Buniatyan, A.; Eckert, P.; Förster, F.; Gredig, R.; Kovalenko, O.; Kiehn, M.; Philipp, R.; Schöning, A.; Wiedner, D.

    2014-07-01

    We present a measurement of multiple Coulomb scattering of 1 to 6 GeV/c electrons in thin (50-140 μm) silicon targets. The data were obtained with the EUDET telescope Aconite at DESY and are compared to parametrisations as used in the Geant4 software package. We find good agreement between data and simulation in the scattering distribution width but large deviations in the shape of the distribution. In order to achieve a better description of the shape, a new scattering model based on a Student's t distribution is developed and compared to the data.
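
    The modeling step can be sketched by fitting a Student's t distribution to scattering angles whose core is Gaussian but whose tails are heavier; the simulated angles below are stand-ins for the testbeam data, and the parameter values are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
angles = stats.t.rvs(df=4, scale=1.0e-3, size=20_000, random_state=rng)  # rad

df, loc, scale = stats.t.fit(angles)
print(f"fitted t parameters: df = {df:.1f}, width = {scale:.2e} rad "
      "(df -> infinity recovers a pure Gaussian)")
```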

  10. Logarithmic amplifiers.

    PubMed

    Gandler, W; Shapiro, H

    1990-01-01

    Logarithmic amplifiers (log amps), which produce an output signal proportional to the logarithm of the input signal, are widely used in cytometry for measurements of parameters that vary over a wide dynamic range, e.g., cell surface immunofluorescence. Existing log amp circuits all deviate to some extent from ideal performance with respect to dynamic range and fidelity to the logarithmic curve; accuracy in quantitative analysis using log amps therefore requires that log amps be individually calibrated. However, accuracy and precision may be limited by photon statistics and system noise when very low level input signals are encountered.

  11. Using type IV Pearson distribution to calculate the probabilities of underrun and overrun of lists of multiple cases.

    PubMed

    Wang, Jihan; Yang, Kai

    2014-07-01

    An efficient operating room needs both little underutilised and overutilised time to achieve optimal cost efficiency. The probabilities of underrun and overrun of lists of cases can be estimated by a well defined duration distribution of the lists. The aim was to propose a method of predicting the probabilities of underrun and overrun of lists of cases using Type IV Pearson distribution to support case scheduling. Six years of data were collected. The first 5 years of data were used to fit distributions and estimate parameters. The data from the last year were used as testing data to validate the proposed methods. The percentiles of the duration distribution of lists of cases were calculated by Type IV Pearson distribution and t-distribution. Monte Carlo simulation was conducted to verify the accuracy of percentiles defined by the proposed methods. Operating rooms in John D. Dingell VA Medical Center, United States, from January 2005 to December 2011. Differences between the proportion of lists of cases that were completed within the percentiles of the proposed duration distribution of the lists and the corresponding percentiles. Compared with the t-distribution, the proposed new distribution is 8.31% (0.38) more accurate on average and 14.16% (0.19) more accurate in calculating the probabilities at the 10th and 90th percentiles of the distribution, which is a major concern of operating room schedulers. The absolute deviations between the percentiles defined by Type IV Pearson distribution and those from Monte Carlo simulation varied from 0.20 min (0.01) to 0.43 min (0.03); values are mean (SEM). Operating room schedulers can rely on the most recent 10 cases with the same combination of surgeon and procedure(s) for distribution parameter estimation to plan lists of cases. The proposed Type IV Pearson distribution is more accurate than the t-distribution for estimating the probabilities of underrun and overrun of lists of cases. However, as not all the individual case durations followed log-normal distributions, there was some deviation from the true duration distribution of the lists.

  12. Utilization of Global Reference Atmosphere Model (GRAM) for shuttle entry

    NASA Technical Reports Server (NTRS)

    Joosten, Kent

    1987-01-01

    At high latitudes, dispersions in values of density for the middle atmosphere from the Global Reference Atmosphere Model (GRAM) are observed to be large, particularly in the winter. Trajectories have been run from 28.5 deg to 98 deg. The critical part of the atmosphere for reentry is 250,000 to 270,000 ft. 250,000 ft is the altitude where the shuttle trajectory levels out. For ascending passes the critical region occurs near the equator. For descending entries the critical region is in northern latitudes. The computed trajectory is input to the GRAM, which computes means and deviations of atmospheric parameters at each point along the trajectory. There is little latitude dispersion for the ascending passes; the strongest source of deviations is seasonal; however, very wide seasonal and latitudinal deviations are exhibited for the descending passes at all orbital inclinations. For shuttle operations the problem is control to maintain the correct entry corridor and avoid either aerodynamic skipping or excessive heat loads.

  13. Thin Disk Accretion in the Magnetically-Arrested State

    NASA Astrophysics Data System (ADS)

    Avara, Mark J.; McKinney, Jonathan; Reynolds, Christopher S.

    2016-01-01

    Shakura-Sunyaev thin disk theory is fundamental to black hole astrophysics. Though the theory is widely applied and provides powerful tools for explaining observations, such as Soltan's argument using quasar power, broadened iron line measurements, continuum fitting, and recently reverberation mapping, a significant large-scale magnetic field causes substantial deviations from standard thin disk behavior. We have used fully 3D general relativistic MHD simulations with cooling to explore the thin (H/R ~ 0.1) magnetically arrested disk (MAD) state and quantify these deviations. This work demonstrates that accumulation of large-scale magnetic flux into the MAD state is possible, and then extends prior numerical studies of thicker disks, allowing us to measure how jet power scales with the disk state, providing a natural explanation of phenomena like jet quenching in the high-soft state of X-ray binaries. We have also simulated thin MAD disks with a misaligned black hole spin axis in order to understand further deviations from thin disk theory that may significantly affect observations.

  14. Measuring Effect Sizes: The Effect of Measurement Error. Working Paper 19

    ERIC Educational Resources Information Center

    Boyd, Donald; Grossman, Pamela; Lankford, Hamilton; Loeb, Susanna; Wyckoff, James

    2008-01-01

    Value-added models in education research allow researchers to explore how a wide variety of policies and measured school inputs affect the academic performance of students. Researchers typically quantify the impacts of such interventions in terms of "effect sizes", i.e., the estimated effect of a one standard deviation change in the…

  15. Valuing a More Rigorous Review of Formative Assessment's Effectiveness

    ERIC Educational Resources Information Center

    Apthorp, Helen; Klute, Mary; Petrites, Tony; Harlacher, Jason; Real, Marianne

    2016-01-01

    Prior reviews of evidence for the impact of formative assessment on student achievement suggest widely different estimates of formative assessment's effectiveness, ranging from 0.40 to 0.70 standard deviations in one review. The purpose of this study is to describe variability in the effectiveness of formative assessment for promoting student…

  16. 40 CFR Appendix B to Part 136 - Definition and Procedure for the Determination of the Method Detection Limit-Revision 1.11

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES... to a wide variety of sample types ranging from reagent (blank) water containing analyte to wastewater... times the standard deviation of replicate instrumental measurements of the analyte in reagent water. (c...

  17. Fractionation in normal tissues: the (α/β)eff concept can account for dose heterogeneity and volume effects.

    PubMed

    Hoffmann, Aswin L; Nahum, Alan E

    2013-10-07

    The simple Linear-Quadratic (LQ)-based Withers iso-effect formula (WIF) is widely used in external-beam radiotherapy to derive a new tumour dose prescription such that there is normal-tissue (NT) iso-effect when changing the fraction size and/or number. However, as conventionally applied, the WIF is invalid unless the normal-tissue response is solely determined by the tumour dose. We propose a generalized WIF (gWIF) which retains the tumour prescription dose, but replaces the intrinsic fractionation sensitivity measure (α/β) by a new concept, the normal-tissue effective fractionation sensitivity, (α/β)_eff, which takes into account both the dose heterogeneity in, and the volume effect of, the late-responding normal-tissue in question. Closed-form analytical expressions for (α/β)_eff ensuring exact normal-tissue iso-effect are derived for: (i) uniform dose, and (ii) arbitrary dose distributions with volume-effect parameter n = 1 from the normal-tissue dose-volume histogram. For arbitrary dose distributions and arbitrary n, a numerical solution for (α/β)_eff exhibits a weak dependence on the number of fractions. As n is increased, (α/β)_eff increases from its intrinsic value at n = 0 (100% serial normal-tissue) to values close to or even exceeding the tumour (α/β) at n = 1 (100% parallel normal-tissue), with the highest values of (α/β)_eff corresponding to the most conformal dose distributions. Applications of this new concept to inverse planning and to highly conformal modalities are discussed, as is the effect of possible deviations from LQ behaviour at large fraction sizes.

  18. Grammatical Deviations in the Spoken and Written Language of Hebrew-Speaking Children With Hearing Impairments.

    PubMed

    Tur-Kaspa, Hana; Dromi, Esther

    2001-04-01

    The present study reports a detailed analysis of written and spoken language samples of Hebrew-speaking children aged 11-13 years who are deaf. It focuses on the description of various grammatical deviations in the two modalities. Participants were 13 students with hearing impairments (HI) attending special classrooms integrated into two elementary schools in Tel Aviv, Israel, and 9 students with normal hearing (NH) in regular classes in these same schools. Spoken and written language samples were collected from all participants using the same five preplanned elicitation probes. Students with HI were found to display significantly more grammatical deviations than their NH peers in both their spoken and written language samples. Most importantly, between-modality differences were noted. The participants with HI exhibited significantly more grammatical deviations in their written language samples than in their spoken samples. However, the distribution of grammatical deviations across categories was similar in the two modalities. The most common grammatical deviations in order of their frequency were failure to supply obligatory morphological markers, failure to mark grammatical agreement, and the omission of a major syntactic constituent in a sentence. Word order violations were rarely recorded in the Hebrew samples. Performance differences in the two modalities encourage clinicians and teachers to facilitate target linguistic forms in diverse communication contexts. Furthermore, the identification of linguistic targets for intervention must be based on the unique grammatical structure of the target language.

  19. An atlas of monthly mean distributions of GEOSAT sea surface height, SSMI surface wind speed, AVHRR/2 sea surface temperature, and ECMWF surface wind components during 1988

    NASA Technical Reports Server (NTRS)

    Halpern, D.; Zlotnicki, V.; Newman, J.; Brown, O.; Wentz, F.

    1991-01-01

    Monthly mean global distributions for 1988 are presented with a common color scale and geographical map. Distributions are included for sea surface height variation estimated from GEOSAT; surface wind speed estimated from the Special Sensor Microwave Imager on the Defense Meteorological Satellite Program spacecraft; sea surface temperature estimated from the Advanced Very High Resolution Radiometer on NOAA spacecraft; and the Cartesian components of the 10m height wind vector computed by the European Center for Medium Range Weather Forecasting. Charts of monthly mean value, sampling distribution, and standard deviation value are displayed. Annual mean distributions are displayed.

  20. Multilayer Disk Reduced Interlayer Crosstalk with Wide Disk-Fabrication Margin

    NASA Astrophysics Data System (ADS)

    Hirotsune, Akemi; Miyauchi, Yasushi; Endo, Nobumasa; Onuma, Tsuyoshi; Anzai, Yumiko; Kurokawa, Takahiro; Ushiyama, Junko; Shintani, Toshimichi; Sugiyama, Toshinori; Miyamoto, Harukazu

    2008-07-01

    To reduce interlayer crosstalk caused by the ghost spot which appears in a multilayer optical disk with more than three information layers, a multilayer disk structure which reduces interlayer crosstalk with a wide disk-fabrication margin was proposed in which the backward reflectivity of the information layers is sufficiently low. It was confirmed that the interlayer crosstalk caused by the ghost spot was reduced to less than the crosstalk from the adjacent layer by controlling backward reflectivity. The wide disk-fabrication margin of the proposed disk structure was indicated by experimentally confirming that the tolerance of the maximum deviation of the spacer-layer thickness is four times larger than that in the previous multilayer disk.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klüter, Sebastian, E-mail: sebastian.klueter@med.uni-heidelberg.de; Schubert, Kai; Lissner, Steffen

    Purpose: The dosimetric verification of treatment plans in helical tomotherapy usually is carried out via verification measurements. In this study, a method for independent dose calculation of tomotherapy treatment plans is presented that uses a conventional treatment planning system with a pencil kernel dose calculation algorithm for generation of verification dose distributions based on patient CT data. Methods: A pencil beam algorithm that directly uses measured beam data was configured for dose calculation for a tomotherapy machine. Tomotherapy treatment plans were converted into a format readable by an in-house treatment planning system by assigning each projection to one static treatment field and shifting the calculation isocenter for each field in order to account for the couch movement. The modulation of the fluence for each projection is read out of the delivery sinogram, and with the kernel-based dose calculation, this information can directly be used for dose calculation without the need for decomposition of the sinogram. The sinogram values are only corrected for leaf output and leaf latency. Using the converted treatment plans, dose was recalculated with the independent treatment planning system. Multiple treatment plans ranging from simple static fields to real patient treatment plans were calculated using the new approach and either compared to actual measurements or the 3D dose distribution calculated by the tomotherapy treatment planning system. In addition, dose–volume histograms were calculated for the patient plans. Results: Except for minor deviations at the maximum field size, the pencil beam dose calculation for static beams agreed with measurements in a water tank within 2%/2 mm. A mean deviation to point dose measurements in the cheese phantom of 0.89% ± 0.81% was found for unmodulated helical plans. A mean voxel-based deviation of −0.67% ± 1.11% for all voxels in the respective high dose region (dose values >80%), and a mean local voxel-based deviation of −2.41% ± 0.75% for all voxels with dose values >20% were found for 11 modulated plans in the cheese phantom. Averaged over nine patient plans, the deviations amounted to −0.14% ± 1.97% (voxels >80%) and −0.95% ± 2.27% (>20%, local deviations). For a lung case, mean voxel-based deviations of more than 4% were found, while for all other patient plans, all mean voxel-based deviations were within ±2.4%. Conclusions: The presented method is suitable for independent dose calculation for helical tomotherapy within the known limitations of the pencil beam algorithm. It can serve as verification of the primary dose calculation and thereby reduce the need for time-consuming measurements. By using the patient anatomy and generating full 3D dose data, and combined with measurements of additional machine parameters, it can substantially contribute to overall patient safety.
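
    The voxel-based figures quoted above can be reproduced schematically: a mean global deviation over high-dose voxels (normalized to the maximum dose) and a mean local deviation over voxels above 20%. The random dose grids and the normalization conventions below are assumptions for illustration, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(7)
ref = rng.uniform(0.0, 2.0, size=(40, 40, 40))            # primary dose calc, Gy
alt = ref * (1 + 0.01 * rng.standard_normal(ref.shape))   # independent recalc

high = ref > 0.8 * ref.max()            # high-dose region (>80% of maximum)
mid = ref > 0.2 * ref.max()             # evaluation region (>20% of maximum)
global_dev = 100 * np.mean((alt[high] - ref[high]) / ref.max())
local_dev = 100 * np.mean((alt[mid] - ref[mid]) / ref[mid])
print(f"mean global deviation (>80% voxels): {global_dev:+.2f}%")
print(f"mean local deviation  (>20% voxels): {local_dev:+.2f}%")
```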

  2. Size distribution and roundness of clasts within pseudotachylytes of the Gangavalli Shear Zone, Salem, Tamil Nadu: An insight into its origin and tectonic significance

    NASA Astrophysics Data System (ADS)

    Behera, Bhuban Mohan; Thirukumaran, V.; Soni, Aishwaraya; Mishra, Prasanta Kumar; Biswal, Tapas Kumar

    2017-06-01

    Gangavalli (Brittle) Shear Zone (Fault) near Attur, Tamil Nadu exposes a nearly 50 km long and 1-3 km wide NNE-SSW trending linear belt of cataclasites and pseudotachylyte produced on charnockites of the Southern Granulite Terrane. Pseudotachylytes, as well as the country rock, bear the evidence of conjugate strike slip shearing along NNE-SSW and NW-SE directions, suggesting an N-S compression. The Gangavalli Shear Zone represents the NNE-SSW fault of the conjugate system along which a right lateral shear has produced seismic slip motion giving rise to cataclasites and pseudotachylytes. Pseudotachylytes occur as veins of varying width extending from hairline fracture fills to tens of meters in length. They carry quartz as well as feldspar clasts with sizes of a few mm in diameter; the clast sizes show a modified power-law distribution with finer ones (<1000 μm²) deviating from linearity. The shape of the clasts shows a high degree of roundness (>0.4) due to thermal decrepitation. In some instances, devitrification has occurred, producing albitic microlites that suggest the temperature of the pseudotachylyte melt was >1000 °C. Thus, pseudotachylyte veins act as a proxy to understand the genetic process involved in the evolution of the shear zone and its tectonic settings.

  3. Surface complexation modeling of Cd(II) sorption to montmorillonite, bacteria, and their composite

    NASA Astrophysics Data System (ADS)

    Wang, Ning; Du, Huihui; Huang, Qiaoyun; Cai, Peng; Rong, Xingmin; Feng, Xionghan; Chen, Wenli

    2016-10-01

    Surface complexation modeling (SCM) has emerged as a powerful tool for simulating heavy metal adsorption processes on the surface of soil solid components under different geochemical conditions. The component additivity (CA) approach is one of the strategies that have been widely used in multicomponent systems. In this study, potentiometric titration, isothermal adsorption, zeta potential measurement, and extended X-ray absorption fine-structure (EXAFS) spectra analysis were conducted to investigate Cd adsorption on 2 : 1 clay mineral montmorillonite, on Gram-positive bacteria Bacillus subtilis, and their mineral-organic composite. We developed constant capacitance models of Cd adsorption on montmorillonite, bacterial cells, and mineral-organic composite. The adsorption behavior of Cd on the surface of the composite was well explained by CA-SCM. Some deviations were observed from the model simulations at pH < 5, where the values predicted by the model were lower than the experimental results. The Cd complexes of X2Cd, SOCd+, R-COOCd+, and R-POCd+ were the predominant species on the composite surface over the pH range of 3 to 8. The distribution ratio of the adsorbed Cd between montmorillonite and bacterial fractions in the composite as predicted by CA-SCM closely coincided with the estimated value of EXAFS at pH 6. The model could be useful for the prediction of heavy metal distribution at the interface of multicomponents and their risk evaluation in soils and associated environments.

  4. Testing statistical self-similarity in the topology of river networks

    USGS Publications Warehouse

    Troutman, Brent M.; Mantilla, Ricardo; Gupta, Vijay K.

    2010-01-01

    Recent work has demonstrated that the topological properties of real river networks deviate significantly from predictions of Shreve's random model. At the same time the property of mean self-similarity postulated by Tokunaga's model is well supported by data. Recently, a new class of network model called random self-similar networks (RSN) that combines self-similarity and randomness has been introduced to replicate important topological features observed in real river networks. We investigate if the hypothesis of statistical self-similarity in the RSN model is supported by data on a set of 30 basins located across the continental United States that encompass a wide range of hydroclimatic variability. We demonstrate that the generators of the RSN model obey a geometric distribution, and self-similarity holds in a statistical sense in 26 of these 30 basins. The parameters describing the distribution of interior and exterior generators are tested to be statistically different and the difference is shown to produce the well-known Hack's law. The inter-basin variability of RSN parameters is found to be statistically significant. We also test generator dependence on two climatic indices, mean annual precipitation and radiative index of dryness. Some indication of climatic influence on the generators is detected, but this influence is not statistically significant with the sample size available. Finally, two key applications of the RSN model to hydrology and geomorphology are briefly discussed.

  5. Radioactivity levels and heavy metals in the urban soil of Central Serbia.

    PubMed

    Milenkovic, B; Stajic, J M; Gulan, Lj; Zeremski, T; Nikezic, D

    2015-11-01

    Radioactivity concentrations and heavy metal content were measured in soil samples collected from the area of Kragujevac, one of the largest cities in Serbia. The specific activities of (226)Ra, (232)Th, (40)K and (137)Cs in 30 samples were measured by gamma spectrometry using an HPGe semiconductor detector. The average values ± standard deviations were 33.5 ± 8.2, 50.3 ± 10.6, 425.8 ± 75.7 and 40.2 ± 26.3 Bq kg(-1), respectively. The activity concentrations of (226)Ra, (232)Th and (137)Cs have shown normal distribution. The annual effective doses, radium equivalent activities, external hazard indexes and excess lifetime cancer risk were also estimated. A RAD7 device was used for measuring radon exhalation rates from several samples with highest content of (226)Ra. The concentrations of As, Co, Cr, Cu, Mn, Ni, Pb and Zn were measured, as well as their EDTA extractable concentrations. Wide ranges of values were obtained, especially for Cr, Mn, Ni, Pb and Zn. The absence of normal distribution indicates anthropogenic origin of Cr, Ni, Pb and Zn. Correlations between radionuclide activities, heavy metal contents and physicochemical properties of analysed soil were determined by Spearman correlation coefficient. Strong positive correlation between (226)Ra and (232)Th was found.

  6. Informing Estimates of Program Effects for Studies of Mathematics Professional Development Using Teacher Content Knowledge Outcomes.

    PubMed

    Phelps, Geoffrey; Kelcey, Benjamin; Jones, Nathan; Liu, Shuangshuang

    2016-10-03

    Mathematics professional development is widely offered, typically with the goal of improving teachers' content knowledge, the quality of teaching, and ultimately students' achievement. Recently, new assessments focused on mathematical knowledge for teaching (MKT) have been developed to assist in the evaluation and improvement of mathematics professional development. This study presents empirical estimates of average program change in MKT and its variation with the goal of supporting the design of experimental trials that are adequately powered to detect a specified program effect. The study drew on a large database representing five different assessments of MKT and collectively 326 professional development programs and 9,365 teachers. Cross-classified hierarchical growth models showed that standardized average change estimates across the five assessments ranged from a low of 0.16 standard deviations (SDs) to a high of 0.26 SDs. Power analyses using the estimated pre- and posttest change estimates indicated that hundreds of teachers are needed to detect changes in knowledge at the lower end of the distribution. Even studies powered to detect effects at the higher end of the distribution will require substantial resources to conduct rigorous experimental trials. Empirical benchmarks that describe average program change and its variation provide a useful preliminary resource for interpreting the relative magnitude of effect sizes associated with professional development programs and for designing adequately powered trials. © The Author(s) 2016.
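
    The power-analysis point can be made concrete with a paired t test approximation: detecting pre-post gains at the low and high ends of the reported range at 80% power. This simplified calculation ignores the clustering of teachers within programs that the study's models account for, so the sample sizes are rough lower bounds.

```python
from statsmodels.stats.power import TTestPower

for d in (0.16, 0.26):   # low and high ends of the reported change estimates
    n = TTestPower().solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"effect = {d:.2f} SD -> about {n:.0f} teachers needed")
```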

  7. A NEW DENSITY VARIANCE-MACH NUMBER RELATION FOR SUBSONIC AND SUPERSONIC ISOTHERMAL TURBULENCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konstandin, L.; Girichidis, P.; Federrath, C.

    The probability density function of the gas density in subsonic and supersonic, isothermal, driven turbulence is analyzed using a systematic set of hydrodynamical grid simulations with resolutions of up to 1024³ cells. We perform a series of numerical experiments with root-mean-square (rms) Mach number M ranging from the nearly incompressible, subsonic (M = 0.1) to the highly compressible, supersonic (M = 15) regime. We study the influence of two extreme cases for the driving mechanism by applying a purely solenoidal (divergence-free) and a purely compressive (curl-free) forcing field to drive the turbulence. We find that our measurements fit the linear relation between the rms Mach number and the standard deviation (std. dev.) of the density distribution in a wide range of Mach numbers, where the proportionality constant depends on the type of forcing. In addition, we propose a new linear relation between the std. dev. of the density distribution σ_ρ and that of the velocity in compressible modes, i.e., the compressible component of the rms Mach number, M_comp. In this relation the influence of the forcing is significantly reduced, suggesting a linear relation between σ_ρ and M_comp, independent of the forcing, and ranging from the subsonic to the supersonic regime.

  8. Dispersion in Rectangular Networks: Effective Diffusivity and Large-Deviation Rate Function

    NASA Astrophysics Data System (ADS)

    Tzella, Alexandra; Vanneste, Jacques

    2016-09-01

    The dispersion of a diffusive scalar in a fluid flowing through a network has many applications, including biological flows, porous media, water supply, and urban pollution. Motivated by this, we develop a large-deviation theory that predicts the evolution of the concentration of a scalar released in a rectangular network in the limit of large time t ≫ 1. This theory provides an approximation for the concentration that remains valid for large distances from the center of mass, specifically for distances up to O(t) and thus much beyond the O(t^{1/2}) range where a standard Gaussian approximation holds. A byproduct of the approach is a closed-form expression for the effective diffusivity tensor that governs this Gaussian approximation. Monte Carlo simulations of Brownian particles confirm the large-deviation results and demonstrate their effectiveness in describing the scalar distribution when t is only moderately large.

  9. Finite-time and finite-size scalings in the evaluation of large-deviation functions: Numerical approach in continuous time.

    PubMed

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    2017-06-01

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated to the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown on the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.

  10. MUSiC - A general search for deviations from monte carlo predictions in CMS

    NASA Astrophysics Data System (ADS)

    Biallass, Philipp A.; CMS Collaboration

    2009-06-01

    A model independent analysis approach in CMS is presented, systematically scanning the data for deviations from the Monte Carlo expectation. Such an analysis can contribute to the understanding of the detector and the tuning of the event generators. Furthermore, due to the minimal theoretical bias this approach is sensitive to a variety of models of new physics, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. The importance of systematic uncertainties is outlined, which are taken into account rigorously within the algorithm. Possible detector effects and generator issues, as well as models involving Supersymmetry and new heavy gauge bosons are used as an input to the search algorithm.

  11. MUSiC - A Generic Search for Deviations from Monte Carlo Predictions in CMS

    NASA Astrophysics Data System (ADS)

    Hof, Carsten

    2009-05-01

    We present a model independent analysis approach, systematically scanning the data for deviations from the Standard Model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of the event generators. Furthermore, due to the minimal theoretical bias this approach is sensitive to a variety of models of new physics, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. We outline the importance of systematic uncertainties, which are taken into account rigorously within the algorithm. Possible detector effects and generator issues, as well as models involving supersymmetry and new heavy gauge bosons have been used as an input to the search algorithm.

  12. Experiments with central-limit properties of spatial samples from locally covariant random fields

    USGS Publications Warehouse

    Barringer, T.H.; Smith, T.E.

    1992-01-01

    When spatial samples are statistically dependent, the classical estimator of sample-mean standard deviation is well known to be inconsistent. For locally dependent samples, however, consistent estimators of sample-mean standard deviation can be constructed. The present paper investigates the sampling properties of one such estimator, designated as the tau estimator of sample-mean standard deviation. In particular, the asymptotic normality properties of standardized sample means based on tau estimators are studied in terms of computer experiments with simulated sample-mean distributions. The effects of both sample size and dependency levels among samples are examined for various values of tau (denoting the size of the spatial kernel for the estimator). The results suggest that even for small degrees of spatial dependency, the tau estimator exhibits significantly stronger normality properties than does the classical estimator of standardized sample means. © 1992.

  13. Effects of vegetation canopy structure on remotely sensed canopy temperatures. [inferring plant water stress and yield

    NASA Technical Reports Server (NTRS)

    Kimes, D. S.

    1979-01-01

    The effects of vegetation canopy structure on thermal infrared sensor response must be understood before vegetation surface temperatures of canopies with low percent ground cover can be accurately inferred. The response of a sensor is a function of vegetation geometric structure, the vertical surface temperature distribution of the canopy components, and sensor view angle. Large deviations between the nadir sensor effective radiant temperature (ERT) and vegetation ERT for a soybean canopy were observed throughout the growing season. The nadir sensor ERT of a soybean canopy with 35 percent ground cover deviated from the vegetation ERT by as much as 11 °C during midday. These deviations were quantitatively explained as a function of canopy structure and soil temperature. Remote sensing techniques which determine the vegetation canopy temperature(s) from the sensor response need to be studied.

  14. Image-Based Modeling Reveals Dynamic Redistribution of DNA Damage into Nuclear Sub-Domains

    PubMed Central

    Costes, Sylvain V; Ponomarev, Artem; Chen, James L; Nguyen, David; Cucinotta, Francis A; Barcellos-Hoff, Mary Helen

    2007-01-01

    Several proteins involved in the response to DNA double strand breaks (DSB) form microscopically visible nuclear domains, or foci, after exposure to ionizing radiation. Radiation-induced foci (RIF) are believed to be located where DNA damage occurs. To test this assumption, we analyzed the spatial distribution of 53BP1, phosphorylated ATM, and γH2AX RIF in cells irradiated with high linear energy transfer (LET) radiation and low LET. Since energy is randomly deposited along high-LET particle paths, RIF along these paths should also be randomly distributed. The probability to induce DSB can be derived from DNA fragment data measured experimentally by pulsed-field gel electrophoresis. We used this probability in Monte Carlo simulations to predict DSB locations in synthetic nuclei geometrically described by a complete set of human chromosomes, taking into account microscope optics from real experiments. As expected, simulations produced DNA-weighted random (Poisson) distributions. In contrast, the distributions of RIF obtained as early as 5 min after exposure to high LET (1 GeV/amu Fe) were non-random. This deviation from the expected DNA-weighted random pattern can be further characterized by “relative DNA image measurements.” This novel imaging approach shows that RIF were located preferentially at the interface between high and low DNA density regions, and were more frequent than predicted in regions with lower DNA density. The same preferential nuclear location was also measured for RIF induced by 1 Gy of low-LET radiation. This deviation from random behavior was evident only 5 min after irradiation for phosphorylated ATM RIF, while γH2AX and 53BP1 RIF showed pronounced deviations up to 30 min after exposure. These data suggest that DNA damage–induced foci are restricted to certain regions of the nucleus of human epithelial cells. It is possible that DNA lesions are collected in these nuclear sub-domains for more efficient repair. PMID:17676951

  15. High mycorrhizal specificity in a widespread mycoheterotrophic plant, Eulophia zollingeri (Orchidaceae).

    PubMed

    Ogura-Tsujita, Yuki; Yukawa, Tomohisa

    2008-01-01

Because mycoheterotrophic plants fully depend on their mycorrhizal partner for their carbon supply, the major limiting factor for the geographic distribution of these plants may be the presence of their mycorrhizal partner. Although this factor may seem to be a disadvantage for increasing geographic distribution, widespread mycoheterotrophic species nonetheless exist. The mechanism causing the wide distribution of some mycoheterotrophic species is, however, seldom discussed. We identified the mycorrhizal partner of a widespread mycoheterotrophic orchid, Eulophia zollingeri, using 12 individuals from seven populations in Japan, Myanmar, and Taiwan by DNA-based methods. All fungal ITS sequences from the roots were closely related to those of Psathyrella candolleana (Coprinaceae) from GenBank accessions and herbarium specimens. These results indicate that E. zollingeri is exclusively associated with the P. candolleana species group. Further, the molecular data support the wide distribution and wide-ranging habitat of this fungal partner. Our data provide evidence that a mycoheterotrophic plant can achieve a wide distribution, even though it has a high mycorrhizal specificity, if its fungal partner is widely distributed.

  16. Probabilistic Modeling and Simulation of Metal Fatigue Life Prediction

    DTIC Science & Technology

    2002-09-01

    distribution demonstrate the central limit theorem? Obviously not! This is much the same as materials testing. If only NBA basketball stars are...60 near the exit of a NBA locker room. There would obviously be some pseudo-normal distribution with a very small standard deviation. The mean...completed, the investigators must understand how the midgets and the NBA stars will affect the total solution. D. IT IS MUCH SIMPLER TO MODEL THE

  17. Probability density functions for use when calculating standardised drought indices

    NASA Astrophysics Data System (ADS)

    Svensson, Cecilia; Prosdocimi, Ilaria; Hannaford, Jamie

    2015-04-01

    Time series of drought indices like the standardised precipitation index (SPI) and standardised flow index (SFI) require a statistical probability density function to be fitted to the observed (generally monthly) precipitation and river flow data. Once fitted, the quantiles are transformed to a Normal distribution with mean = 0 and standard deviation = 1. These transformed data are the SPI/SFI, which are widely used in drought studies, including for drought monitoring and early warning applications. Different distributions were fitted to rainfall and river flow data accumulated over 1, 3, 6 and 12 months for 121 catchments in the United Kingdom. These catchments represent a range of catchment characteristics in a mid-latitude climate. Both rainfall and river flow data have a lower bound at 0, as rains and flows cannot be negative. Their empirical distributions also tend to have positive skewness, and therefore the Gamma distribution has often been a natural and suitable choice for describing the data statistically. However, after transformation of the data to Normal distributions to obtain the SPIs and SFIs for the 121 catchments, the distributions are rejected in 11% and 19% of cases, respectively, by the Shapiro-Wilk test. Three-parameter distributions traditionally used in hydrological applications, such as the Pearson type 3 for rainfall and the Generalised Logistic and Generalised Extreme Value distributions for river flow, tend to make the transformed data fit better, with rejection rates of 5% or less. However, none of these three-parameter distributions have a lower bound at zero. This means that the lower tail of the fitted distribution may potentially go below zero, which would result in a lower limit to the calculated SPI and SFI values (as observations can never reach into this lower tail of the theoretical distribution). The Tweedie distribution can overcome the problems found when using either the Gamma or the above three-parameter distributions. The Tweedie is a three-parameter distribution which includes the Gamma distribution as a special case. It is bounded below at zero and has enough flexibility to fit most behaviours observed in the data. It does not always outperform the three-parameter distributions, but the rejection rates are similar. In addition, for certain parameter values the Tweedie distribution has a positive mass at zero, which means that ephemeral streams and months with zero rainfall can be modelled. It holds potential for wider application in drought studies in other climates and types of catchment.
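
    The transformation step described above can be sketched in a few lines of Python with scipy; this assumes a Gamma fit with the lower bound fixed at zero and omits the per-calendar-month fitting and zero-rainfall handling used operationally:

        import numpy as np
        from scipy import stats

        def spi(accumulated_precip):
            # Fit a Gamma distribution (lower bound fixed at zero) to the
            # accumulated series, then map its quantiles onto a standard Normal.
            x = np.asarray(accumulated_precip, dtype=float)   # must be > 0
            shape, loc, scale = stats.gamma.fit(x, floc=0)
            cdf = stats.gamma.cdf(x, shape, loc=loc, scale=scale)
            return stats.norm.ppf(cdf)   # mean 0, sd 1: these are the SPI values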

  18. Sleep stage distribution in persons with mild traumatic brain injury: a polysomnographic study according to American Academy of Sleep Medicine standards.

    PubMed

    Mollayeva, Tatyana; Colantonio, Angela; Cassidy, J David; Vernich, Lee; Moineddin, Rahim; Shapiro, Colin M

    2017-06-01

    Sleep stage disruption in persons with mild traumatic brain injury (mTBI) has received little research attention. We examined deviations in sleep stage distribution in persons with mTBI relative to population age- and sex-specific normative data and the relationships between such deviations and brain injury-related, medical/psychiatric, and extrinsic factors. We conducted a cross-sectional polysomnographic investigation in 40 participants diagnosed with mTBI (mean age 47.54 ± 11.30 years; 56% males). At the time of investigation, participants underwent comprehensive clinical and neuroimaging examinations and one full-night polysomnographic study. We used the 2012 American Academy of Sleep Medicine recommendations for recording, scoring, and summarizing sleep stages. We compared participants' sleep stage data with normative data stratified by age and sex to yield z-scores for deviations from available population norms and then employed stepwise multiple regression analyses to determine the factors associated with the identified significant deviations. In patients with mTBI, the mean duration of nocturnal wakefulness was higher and consolidated sleep stage N2 and REM were lower than normal (p < 0.0001, p = 0.018, and p = 0.010, respectively). In multivariate regression analysis, several covariates accounted for the variance in the relative changes in sleep stage duration. No sex differences were observed in the mean proportion of non-REM or REM sleep. We observed longer relative nocturnal wakefulness and shorter relative N2 and REM sleep in patients with mTBI, and these outcomes were associated with potentially modifiable variables. Addressing disruptions in sleep architecture in patients with mTBI could improve their health status. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Interplay between geometry and flow distribution in an airway tree.

    PubMed

    Mauroy, B; Filoche, M; Andrade, J S; Sapoval, B

    2003-04-11

    Uniform flow distribution in a symmetric volume can be realized through a symmetric branched tree. It is shown here, however, by 3D numerical simulation of the Navier-Stokes equations, that the flow partitioning can be highly sensitive to deviations from exact symmetry if inertial effects are present. The flow asymmetry is quantified and found to depend on the Reynolds number. Moreover, for a given Reynolds number, we show that the flow distribution depends on the aspect ratio of the branching elements as well as their angular arrangement. Our results indicate that physiological variability should be severely restricted in order to ensure adequate fluid distribution through a tree.

  20. Integrity modelling of tropospheric delay models

    NASA Astrophysics Data System (ADS)

    Rózsa, Szabolcs; Bastiaan Ober, Pieter; Mile, Máté; Ambrus, Bence; Juni, Ildikó

    2017-04-01

The effect of the neutral atmosphere on signal propagation is routinely estimated by various tropospheric delay models in satellite navigation. Although numerous studies can be found in the literature investigating the accuracy of these models, for safety-of-life applications it is crucial to study and model the worst case performance of these models using very low recurrence frequencies. The main objective of the INTegrity of TROpospheric models (INTRO) project funded by the ESA PECS programme is to establish a model (or models) of the residual error of existing tropospheric delay models for safety-of-life applications. Such models are required to overbound rare tropospheric delays and should thus include the tails of the error distributions. Their use should lead to safe error bounds on the user position and should allow computation of protection levels for the horizontal and vertical position errors. The current tropospheric model from the RTCA SBAS Minimal Operational Standards has an associated residual error that equals 0.12 meters in the vertical direction. This value is derived by simply extrapolating the observed distribution of the residuals into the tail (where no data is present) and then taking the point where the cumulative distribution has an exceedance level of 10⁻⁷. While the resulting standard deviation is much higher than the standard deviation that best fits the data (0.05 meters), it surely is conservative for most applications. In the context of the INTRO project some widely used and newly developed tropospheric delay models (e.g. RTCA MOPS, ESA GALTROPO and GPT2W) were tested using 16 years of daily ERA-INTERIM Reanalysis numerical weather model data and the raytracing technique. The results showed that the performance of some of the widely applied models has a clear seasonal dependency and is also affected by geographical position. In order to provide a more realistic, but still conservative, estimation of the residual error of tropospheric delays, the mathematical formulation of the overbounding models is currently under development. This study introduces the main findings of the residual error analysis of the studied tropospheric delay models, and discusses the preliminary analysis of the integrity model development for safety-of-life applications.
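
    As an illustration only (the INTRO project's actual overbounding formulation is more elaborate), the following sketch extrapolates an empirical residual distribution into the tail with a log-linear fit and returns the standard deviation of a zero-mean Gaussian that overbounds the 10⁻⁷ exceedance level:

        import numpy as np
        from scipy import stats

        def overbounding_sigma(residuals, p_exceed=1e-7):
            # Extrapolate the empirical exceedance curve with a log-linear fit,
            # locate the level with the target exceedance probability, and return
            # the sigma of a zero-mean Gaussian that places that probability there.
            r = np.sort(np.abs(np.asarray(residuals, dtype=float)))
            emp_exceed = 1.0 - np.arange(1, r.size + 1) / (r.size + 1)
            slope, intercept = np.polyfit(r, np.log(emp_exceed), 1)
            level = (np.log(p_exceed) - intercept) / slope
            return level / stats.norm.isf(p_exceed)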

  1. Performance of a scanning mobility particle sizer in measuring diverse types of airborne nanoparticles: Multi-walled carbon nanotubes, welding fumes, and titanium dioxide spray.

    PubMed

    Chen, Bean T; Schwegler-Berry, Diane; Cumpston, Amy; Cumpston, Jared; Friend, Sherri; Stone, Samuel; Keane, Michael

    2016-07-01

Direct-reading instruments have been widely used for characterizing airborne nanoparticles in inhalation toxicology and industrial hygiene studies for exposure/risk assessments. Instruments using electrical mobility sizing followed by optical counting, e.g., scanning or sequential mobility particle spectrometers (SMPS), have been considered the "gold standard" for characterizing nanoparticles. An SMPS has the advantage of rapid response and has been widely used, but there is little information on its performance in assessing the full spectrum of nanoparticles encountered in the workplace. In this study, an SMPS was evaluated for its effectiveness in producing "monodisperse" aerosol and its adequacy in characterizing overall particle size distribution using three test aerosols, each mimicking a unique class of real-life nanoparticles: singlets of nearly spherical titanium dioxide (TiO2), agglomerates of fiber-like multi-walled carbon nanotube (MWCNT), and aggregates that constitute welding fume (WF). These aerosols were analyzed by SMPS, cascade impactor, and by counting and sizing of discrete particles by scanning and transmission electron microscopy. The effectiveness of the SMPS to produce classified particles (fixed voltage mode) was assessed by examination of the resulting geometric standard deviation (GSD) from the impactor measurement. Results indicated that the SMPS performed reasonably well for TiO2 (GSD = 1.3), but not for MWCNT and WF as evidenced by the large GSD values of 1.8 and 1.5, respectively. For overall characterization, results from SMPS (scanning voltage mode) exhibited particle-dependent discrepancies in the size distribution and total number concentration compared to those from microscopic analysis. Further investigation showed that use of a single-stage impactor at the SMPS inlet could distort the size distribution and underestimate the concentration as shown by the SMPS, whereas the presence of vapor molecules or atom clusters in some test aerosols might cause artifacts by counting "phantom particles." Overall, the information obtained from this study will help understand the limitations of the SMPS in measuring nanoparticles so that one can adequately interpret the results for risk assessments and exposure prevention in an occupational or ambient environment.
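
    The geometric standard deviation used above as a monodispersity criterion can be computed directly from a measured size distribution; a minimal sketch (stage diameters and weights are placeholders):

        import numpy as np

        def geometric_stats(diameters, weights):
            # Geometric mean diameter and geometric standard deviation (GSD) of a
            # measured size distribution; GSD near 1 means nearly monodisperse.
            log_d = np.log(np.asarray(diameters, dtype=float))
            w = np.asarray(weights, dtype=float)
            mean_log = np.average(log_d, weights=w)
            sd_log = np.sqrt(np.average((log_d - mean_log) ** 2, weights=w))
            return np.exp(mean_log), np.exp(sd_log)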

  2. Entropic effects, shape, and size of mixed micelles formed by copolymers with complex architectures

    NASA Astrophysics Data System (ADS)

    Kalogirou, Andreas; Gergidis, Leonidas N.; Moultos, Othonas; Vlahos, Costas

    2015-11-01

The entropic effects in the comicellization behavior of amphiphilic AB copolymers differing in the chain size of solvophilic A parts were studied by means of molecular dynamics simulations. In particular, mixtures of miktoarm star copolymers differing in the molecular weight of solvophilic arms were investigated. We found that the critical micelle concentration values show a positive deviation from the analytical predictions of the molecular theory of comicellization for chemically identical copolymers. This can be attributed to the effective interactions between copolymers originating from the arm size asymmetry. The effective interactions induce a very small decrease in the aggregation number of preferential micelles triggering the nonrandom mixing between the solvophilic moieties in the corona. Additionally, in order to specify how the chain architecture affects the size distribution and the shape of mixed micelles we studied star-shaped, H-shaped, and homo-linked-rings-linear mixtures. In the first case the individual constituents form micelles with preferential and wide aggregation numbers and in the latter case the individual constituents form wormlike and spherical micelles.

  3. Entropic effects, shape, and size of mixed micelles formed by copolymers with complex architectures.

    PubMed

    Kalogirou, Andreas; Gergidis, Leonidas N; Moultos, Othonas; Vlahos, Costas

    2015-11-01

The entropic effects in the comicellization behavior of amphiphilic AB copolymers differing in the chain size of solvophilic A parts were studied by means of molecular dynamics simulations. In particular, mixtures of miktoarm star copolymers differing in the molecular weight of solvophilic arms were investigated. We found that the critical micelle concentration values show a positive deviation from the analytical predictions of the molecular theory of comicellization for chemically identical copolymers. This can be attributed to the effective interactions between copolymers originating from the arm size asymmetry. The effective interactions induce a very small decrease in the aggregation number of preferential micelles triggering the nonrandom mixing between the solvophilic moieties in the corona. Additionally, in order to specify how the chain architecture affects the size distribution and the shape of mixed micelles we studied star-shaped, H-shaped, and homo-linked-rings-linear mixtures. In the first case the individual constituents form micelles with preferential and wide aggregation numbers and in the latter case the individual constituents form wormlike and spherical micelles.

  4. Non-equilibrium Quasi-Chemical Nucleation Model

    NASA Astrophysics Data System (ADS)

    Gorbachev, Yuriy E.

    2018-04-01

The quasi-chemical model, which is widely used to describe nucleation, is revised on the basis of recent results on non-equilibrium effects in reacting gas mixtures (Kolesnichenko and Gorbachev in Appl Math Model 34:3778-3790, 2010; Shock Waves 23:635-648, 2013; Shock Waves 27:333-374, 2017). Non-equilibrium effects in chemical reactions are caused by the chemical reactions themselves, and therefore these contributions should be taken into account in the corresponding expressions for reaction rates. Corrections to quasi-equilibrium reaction rates are of two types: (a) spatially homogeneous (caused by physical-chemical processes) and (b) spatially inhomogeneous (caused by gas expansion/compression processes and proportional to the velocity divergence). Both of these processes play an important role during nucleation and are included in the proposed model. The method developed for solving the generalized Boltzmann equation for chemically reactive gases is applied to solving the set of equations of the revised quasi-chemical model. It is shown that non-equilibrium processes lead to an essential deviation of the quasi-stationary distribution, and therefore of the nucleation rate, from its traditional form.

  5. Time-Series Forecast Modeling on High-Bandwidth Network Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, Wucherl; Sim, Alex

With the increasing number of geographically distributed scientific collaborations and the growing sizes of scientific data, it has become challenging for users to achieve the best possible network performance on a shared network. In this paper, we have developed a model to forecast expected bandwidth utilization on high-bandwidth wide area networks. The forecast model can improve the efficiency of the resource utilization and scheduling of data movements on high-bandwidth networks to accommodate ever increasing data volume for large-scale scientific data applications. A univariate time-series forecast model is developed with the Seasonal decomposition of Time series by Loess (STL) and the AutoRegressive Integrated Moving Average (ARIMA) on Simple Network Management Protocol (SNMP) path utilization measurement data. Compared with traditional approaches such as the Box-Jenkins methodology to train the ARIMA model, our forecast model reduces computation time by up to 92.6%. It also shows resilience against abrupt network usage changes. Finally, our forecast model conducts a large number of multi-step forecasts, and the forecast errors are within the mean absolute deviation (MAD) of the monitored measurements.
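
    A minimal sketch of the STL-plus-ARIMA forecast described above, using statsmodels; the order, period, and the simple repetition of the last seasonal cycle are illustrative choices, not the paper's configuration:

        import numpy as np
        from statsmodels.tsa.seasonal import STL
        from statsmodels.tsa.arima.model import ARIMA

        def stl_arima_forecast(series, period, steps, order=(1, 1, 1)):
            # Remove the seasonal component with STL, model the rest with ARIMA,
            # then add the last observed seasonal cycle back over the horizon.
            decomp = STL(series, period=period).fit()
            deseasonalized = np.asarray(series, dtype=float) - np.asarray(decomp.seasonal)
            fitted = ARIMA(deseasonalized, order=order).fit()
            forecast = fitted.forecast(steps)
            seasonal_cycle = np.asarray(decomp.seasonal)[-period:]
            return np.asarray(forecast) + np.resize(seasonal_cycle, steps)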

  6. A simple two-stage design for quantitative responses with application to a study in diabetic neuropathic pain.

    PubMed

    Whitehead, John; Valdés-Márquez, Elsa; Lissmats, Agneta

    2009-01-01

    Two-stage designs offer substantial advantages for early phase II studies. The interim analysis following the first stage allows the study to be stopped for futility, or more positively, it might lead to early progression to the trials needed for late phase II and phase III. If the study is to continue to its second stage, then there is an opportunity for a revision of the total sample size. Two-stage designs have been implemented widely in oncology studies in which there is a single treatment arm and patient responses are binary. In this paper the case of two-arm comparative studies in which responses are quantitative is considered. This setting is common in therapeutic areas other than oncology. It will be assumed that observations are normally distributed, but that there is some doubt concerning their standard deviation, motivating the need for sample size review. The work reported has been motivated by a study in diabetic neuropathic pain, and the development of the design for that trial is described in detail. Copyright 2008 John Wiley & Sons, Ltd.
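
    The interim sample-size revision can be illustrated with the standard normal-approximation formula n = 2*((z_(1-alpha/2) + z_(1-beta))*sigma/delta)^2 per arm, recomputed with the pooled standard deviation estimated at stage 1; this is a generic sketch, not necessarily the exact design of the cited paper, and the numbers in the usage lines are placeholders:

        import math
        from scipy import stats

        def n_per_arm(sigma, delta, alpha=0.05, power=0.9):
            # n = 2 * ((z_{1-alpha/2} + z_{1-beta}) * sigma / delta)^2 per arm.
            z_a = stats.norm.ppf(1 - alpha / 2)
            z_b = stats.norm.ppf(power)
            return math.ceil(2 * ((z_a + z_b) * sigma / delta) ** 2)

        n_planned = n_per_arm(sigma=2.0, delta=1.0)   # planning value of SD -> 85
        n_revised = n_per_arm(sigma=2.6, delta=1.0)   # interim pooled SD    -> 143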

  7. Investigating Addiction in the Changing Universe

    PubMed Central

    Dastoury, Mojgan; Aminaee, Tayebe; Ghaumi, Raheleh

    2014-01-01

The process of globalization, as the most significant characteristic of the modern era, is facilitated by several factors including information technology, the industry of production and distribution of information, and the flow of goods, services, human beings, capital, and information. This phenomenon, along with the complex and varied identities and lifestyles created by national and transnational determinants, has widely changed the nature of social phenomena, including addiction. The present study aims to investigate the contribution of sociological studies in the field of addiction during 2001 to 2011 in Iran. This is done by performing content analysis on 41 peer-reviewed papers. The selected samples were surveyed and compared according to theoretical frameworks and the social groups under study. The results showed that the analyzed papers extensively overlooked the process of contemporary social changes in Iran, which could be caused either by the theoretical basis of the studies or by the social groups under study. Following the theoretical views of previous decades, these papers largely considered addiction as a type of social deviation and misbehavior related to men living in urban areas. PMID:25363096

  8. Dysprosium sorption by polymeric composite bead: robust parametric optimization using Taguchi method.

    PubMed

    Yadav, Kartikey K; Dasgupta, Kinshuk; Singh, Dhruva K; Varshney, Lalit; Singh, Harvinderpal

    2015-03-06

Polyethersulfone-based beads encapsulating di-2-ethylhexyl phosphoric acid have been synthesized and evaluated for the recovery of rare earth values from aqueous media. Percentage recovery and the sorption behavior of Dy(III) have been investigated under a wide range of experimental parameters using these beads. The Taguchi method utilizing an L-18 orthogonal array has been adopted to identify the most influential process parameters responsible for a higher degree of recovery with enhanced sorption of Dy(III) from chloride medium. Analysis of variance indicated that the feed concentration of Dy(III) is the most influential factor for equilibrium sorption capacity, whereas aqueous phase acidity influences the percentage recovery most. The presence of polyvinyl alcohol and multiwalled carbon nanotube modified the internal structure of the composite beads and resulted in a uniform distribution of the organic extractant inside the polymeric matrix. The experiment performed under optimum process conditions as predicted by the Taguchi method resulted in enhanced Dy(III) recovery and sorption capacity by polymeric beads with minimum standard deviation. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Diffraction-limited 577 nm true-yellow laser by frequency doubling of a tapered diode laser

    NASA Astrophysics Data System (ADS)

    Christensen, Mathias; Vilera, Mariafernanda; Noordegraaf, Danny; Hansen, Anders K.; Buß, Thomas; Jensen, Ole B.; Skovgaard, Peter M. W.

    2018-02-01

    A wide range of laser medical treatments are based on coagulation of blood by absorption of the laser radiation. It has, therefore, always been a goal of these treatments to maximize the ratio of absorption in the blood to that in the surrounding tissue. For this purpose lasers at 577 nm are ideal since this wavelength is at the peak of the absorption in oxygenated hemoglobin. Furthermore, 577 nm has a lower absorption in melanin when compared to green wavelengths (515 - 532 nm), giving it an advantage when treating at greater penetration depth. Here we present a laser system based on frequency doubling of an 1154 nm Distributed Bragg Reflector (DBR) tapered diode laser, emitting 1.1 W of single frequency and diffraction limited yellow light at 577 nm, corresponding to a conversion efficiency of 30.5%. The frequency doubling is performed in a single pass configuration using a cascade of two bulk non-linear crystals. The system is power stabilized over 10 hours with a standard deviation of 0.13% and the relative intensity noise is measured to be 0.064 % rms.

  10. Simplified method for the calculation of irregular waves in the coastal zone

    NASA Astrophysics Data System (ADS)

    Leont'ev, I. O.

    2011-04-01

A method for estimating wave parameters along a given bottom profile is suggested. It takes into account the principal processes influencing waves in the coastal zone: transformation, refraction, bottom friction, and breaking. A constant mean value of the friction coefficient can be used under sandy-shore conditions. Wave breaking is interpreted in terms of the concept of a limiting wave height at a given depth. The mean and root-mean-square wave heights are determined from the height distribution function, which is transformed by the effect of breaking. Verification of the method against field data shows that the calculated results reproduce the observed variations of wave heights over a wide range of conditions, including profiles with underwater bars. The deviations from the calculated values mostly do not exceed 25%, and the mean square error is 11%. The method does not require preliminary tuning and can be implemented as a relatively simple calculator accessible even to an inexperienced user.

  11. Using Image Processing to Determine Emphysema Severity

    NASA Astrophysics Data System (ADS)

    McKenzie, Alexander; Sadun, Alberto

    2010-10-01

Currently X-rays and computerized tomography (CT) scans are used to detect emphysema, but other tests are required to accurately quantify the amount of lung that has been affected by the disease. These images clearly show if a patient has emphysema, but are unable, by visual scan alone, to quantify the degree of the disease, as it presents as subtle, dark spots on the lung. Our goal is to use these CT scans to accurately diagnose and determine emphysema severity levels in patients. This will be accomplished by performing several different analyses of CT scan images of several patients representing a wide range of severity of the disease. In addition to analyzing the original CT data, this process will convert the data to one- and two-bit images and will then examine the deviation from a normal distribution curve to determine skewness. Our preliminary results show that this method of assessment appears to be more accurate and robust than the currently utilized methods, which involve looking at percentages of radiodensities in the air passages of the lung.
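
    The skewness measurement at the heart of this approach is straightforward; a minimal sketch, assuming Hounsfield-unit voxel data and an illustrative threshold-based lung mask:

        import numpy as np
        from scipy import stats

        def lung_histogram_skewness(hu_voxels):
            # Skewness of the attenuation histogram over (crudely masked) lung
            # voxels; the HU window used here is an illustrative placeholder.
            hu = np.asarray(hu_voxels, dtype=float).ravel()
            lung = hu[(hu > -1024) & (hu < -300)]
            return stats.skew(lung)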

  12. Image Processing Diagnostics: Emphysema

    NASA Astrophysics Data System (ADS)

    McKenzie, Alex

    2009-10-01

Currently the computerized tomography (CT) scan can detect emphysema sooner than traditional x-rays, but other tests are required to measure more accurately the amount of affected lung. CT scan images show clearly if a patient has emphysema, but are unable, by visual scan alone, to quantify the degree of the disease, as it appears merely as subtle, barely distinct, dark spots on the lung. Our goal is to create a software plug-in to interface with existing open source medical imaging software, to automate the process of accurately diagnosing and determining emphysema severity levels in patients. This will be accomplished by performing a number of statistical calculations using data taken from CT scan images of several patients representing a wide range of severity of the disease. These analyses include an examination of the deviation from a normal distribution curve to determine skewness, a commonly used statistical parameter. Our preliminary results show that this method of assessment appears to be more accurate and robust than currently utilized methods which involve looking at percentages of radiodensities in air passages of the lung.

  13. Investigation and Verification of the Aerodynamic Performance of a Fan/Booster with Through-flow Method

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoheng; Jin, Donghai; Gui, Xingmin

    2018-04-01

The through-flow method is still widely applied in the evolution of turbomachinery design, as it can provide not merely the performance characteristic but also the flow field. In this study, a program based on the through-flow method was proposed and verified against many other numerical examples. To improve the accuracy of the calculation, a rich set of loss and deviation models dependent on the real engine geometry was put into use, covering viscous losses, overflow in gaps, and leakage from the flow path through seals. By means of this program, the aerodynamic performance of a high through-flow commercial fan/booster was investigated. Based on the radial distributions of the relevant parameters, flow deterioration in this machine was suspected. To confirm this surmise, 3-D numerical simulation was carried out with the NUMECA software. Detailed analysis confirmed the suspicion, providing sufficient evidence for the conclusion that the through-flow method is an essential and effective method for performance prediction of the fan/booster.

  14. Temperature dependent structural properties and bending rigidity of pristine and defective hexagonal boron nitride

    NASA Astrophysics Data System (ADS)

    Thomas, Siby; Ajith, K. M.; Chandra, Sharat; Valsakumar, M. C.

    2015-08-01

Structural and thermodynamical properties of monolayer pristine and defective boron nitride sheets (h-BN) have been investigated in a wide temperature range by carrying out atomistic simulations using a tuned Tersoff-type inter-atomic empirical potential. The temperature dependence of lattice parameter, radial distribution function, specific heat at constant volume, linear thermal expansion coefficient and the height correlation function of the thermally excited ripples on pristine as well as defective h-BN sheet have been investigated. Specific heat shows considerable increase beyond the Dulong-Petit limit at high temperatures, which is interpreted as a signature of strong anharmonicity present in h-BN. Analysis of the height fluctuations, ⟨h²⟩, shows that the bending rigidity and variance of height fluctuations are strongly temperature dependent and this is explained using the continuum theory of membranes. A detailed study of the height-height correlation function shows deviation from the prediction of harmonic theory of membranes as a consequence of the strong anharmonicity in h-BN. It is also seen that the variance of the height fluctuations increases with defect concentration.

  15. Aircraft noise-induced awakenings are more reasonably predicted from relative than from absolute sound exposure levels.

    PubMed

    Fidell, Sanford; Tabachnick, Barbara; Mestre, Vincent; Fidell, Linda

    2013-11-01

    Assessment of aircraft noise-induced sleep disturbance is problematic for several reasons. Current assessment methods are based on sparse evidence and limited understandings; predictions of awakening prevalence rates based on indoor absolute sound exposure levels (SELs) fail to account for appreciable amounts of variance in dosage-response relationships and are not freely generalizable from airport to airport; and predicted awakening rates do not differ significantly from zero over a wide range of SELs. Even in conjunction with additional predictors, such as time of night and assumed individual differences in "sensitivity to awakening," nominally SEL-based predictions of awakening rates remain of limited utility and are easily misapplied and misinterpreted. Probabilities of awakening are more closely related to SELs scaled in units of standard deviates of local distributions of aircraft SELs, than to absolute sound levels. Self-selection of residential populations for tolerance of nighttime noise and habituation to airport noise environments offer more parsimonious and useful explanations for differences in awakening rates at disparate airports than assumed individual differences in sensitivity to awakening.

  16. Convergent chaos

    NASA Astrophysics Data System (ADS)

    Pradas, Marc; Pumir, Alain; Huber, Greg; Wilkinson, Michael

    2017-07-01

    Chaos is widely understood as being a consequence of sensitive dependence upon initial conditions. This is the result of an instability in phase space, which separates trajectories exponentially. Here, we demonstrate that this criterion should be refined. Despite their overall intrinsic instability, trajectories may be very strongly convergent in phase space over extremely long periods, as revealed by our investigation of a simple chaotic system (a realistic model for small bodies in a turbulent flow). We establish that this strong convergence is a multi-facetted phenomenon, in which the clustering is intense, widespread and balanced by lacunarity of other regions. Power laws, indicative of scale-free features, characterize the distribution of particles in the system. We use large-deviation and extreme-value statistics to explain the effect. Our results show that the interpretation of the ‘butterfly effect’ needs to be carefully qualified. We argue that the combination of mixing and clustering processes makes our specific model relevant to understanding the evolution of simple organisms. Lastly, this notion of convergent chaos, which implies the existence of conditions for which uncertainties are unexpectedly small, may also be relevant to the valuation of insurance and futures contracts.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vlah, Zvonimir; Seljak, Uroš; Okumura, Teppei

Numerical simulations show that redshift space distortions (RSD) introduce strong scale dependence in the power spectra of halos, with ten percent deviations relative to linear theory predictions even on relatively large scales (k < 0.1h/Mpc) and even in the absence of satellites (which induce Fingers-of-God, FoG, effects). If unmodeled these effects prevent one from extracting cosmological information from RSD surveys. In this paper we use Eulerian perturbation theory (PT) and an Eulerian halo biasing model and apply them to the distribution function approach to RSD, in which RSD is decomposed into several correlators of density weighted velocity moments. We model each of these correlators using PT and compare the results to simulations over a wide range of halo masses and redshifts. We find that with the introduction of a physically motivated halo biasing, and using dark matter power spectra from simulations, we can reproduce the simulation results at a percent level on scales up to k ∼ 0.15h/Mpc at z = 0, without the need to have free FoG parameters in the model.

  18. Time-Series Forecast Modeling on High-Bandwidth Network Measurements

    DOE PAGES

    Yoo, Wucherl; Sim, Alex

    2016-06-24

With the increasing number of geographically distributed scientific collaborations and the growing sizes of scientific data, it has become challenging for users to achieve the best possible network performance on a shared network. In this paper, we have developed a model to forecast expected bandwidth utilization on high-bandwidth wide area networks. The forecast model can improve the efficiency of the resource utilization and scheduling of data movements on high-bandwidth networks to accommodate ever increasing data volume for large-scale scientific data applications. A univariate time-series forecast model is developed with the Seasonal decomposition of Time series by Loess (STL) and the AutoRegressive Integrated Moving Average (ARIMA) on Simple Network Management Protocol (SNMP) path utilization measurement data. Compared with traditional approaches such as the Box-Jenkins methodology to train the ARIMA model, our forecast model reduces computation time by up to 92.6%. It also shows resilience against abrupt network usage changes. Finally, our forecast model conducts a large number of multi-step forecasts, and the forecast errors are within the mean absolute deviation (MAD) of the monitored measurements.

  19. Deviation from Normal Boltzmann Distribution of High-lying Energy Levels of Iron Atom Excited by Okamoto-cavity Microwave-induced Plasmas Using Pure Nitrogen and Nitrogen-Oxygen Gases.

    PubMed

    Wagatsuma, Kazuaki

    2015-01-01

This paper describes several interesting excitation phenomena occurring in a microwave-induced plasma (MIP) excited with Okamoto-cavity, especially when a small amount of oxygen was mixed with the nitrogen matrix in the composition of the plasma gas. An ion-to-atom ratio of iron, which was estimated from the intensity ratio of ion to atomic lines having almost the same excitation energy, was reduced by adding oxygen gas to the nitrogen MIP, eventually contributing to an enhancement in the emission intensities of the atomic lines. Furthermore, Boltzmann plots for iron atomic lines were observed in a wide range of the excitation energy from 3.4 to 6.9 eV, indicating that plots of the atomic lines having lower excitation energies (3.4 to 4.8 eV) were well fitted on a straight line while those having more than 5.5 eV deviated upwards from the linear relationship. This overpopulation would result from an excitation process acting in addition to the thermal excitation that principally determines the Boltzmann distribution. A Penning-type collision with excited species of nitrogen molecules probably explains this additional excitation mechanism, in which the resulting iron ions recombine with captured electrons, followed by cascade de-excitations between closely-spaced excited levels just below the ionization limit. As a result, these high-lying levels might be more populated than the low-lying levels of the iron atom. The ionization of iron would be caused less actively in the nitrogen-oxygen plasma than in a pure nitrogen plasma, because excited species of nitrogen molecule, which can provide the ionization energy in a collision with iron atom, are consumed through collisions with oxygen molecules to cause their dissociation. It was also observed that the overpopulation occurred to a lesser extent when oxygen gas was added to the nitrogen plasma. The reason for this was attributed to a decreased number density of the excited nitrogen species due to collisions with oxygen molecules.
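
    A Boltzmann plot of the kind used here is easy to reproduce: ln(Iλ/gA) is linear in the excitation energy with slope −1/kT for a thermal population, so overpopulated high-lying levels show up as positive residuals from a line fitted to the low-lying levels. A minimal sketch (per-line input arrays are assumed):

        import numpy as np

        def boltzmann_plot_residuals(intensity, wavelength_nm, g, A, E_eV, e_fit_max=4.8):
            # ln(I*lambda/(g*A)) is linear in E with slope -1/kT for a thermal
            # population; fit on low-lying levels and report how far each level
            # sits above the line (positive residual = overpopulation).
            y = np.log(intensity * wavelength_nm / (g * A))
            low = E_eV <= e_fit_max
            slope, intercept = np.polyfit(E_eV[low], y[low], 1)
            T_exc = -1.0 / (8.617e-5 * slope)        # Boltzmann constant in eV/K
            residuals = y - (slope * E_eV + intercept)
            return T_exc, residuals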

  20. Deviation from Boltzmann distribution in excited energy levels of singly-ionized iron in an argon glow discharge plasma for atomic emission spectrometry

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Kashiwakura, Shunsuke; Wagatsuma, Kazuaki

    2012-01-01

A Boltzmann plot for many iron ionic lines having excitation energies of 4.7-9.1 eV was investigated in an argon glow discharge plasma when the discharge parameters, such as the voltage/current and the gas pressure, were varied. A Grimm-style radiation source was employed in a DC voltage range of 400-800 V at argon pressures of 400-930 Pa. The plot did not follow a linear relationship over a wide range of the excitation energy, but it yielded a normal Boltzmann distribution in the range of 4.7-5.8 eV and a large overpopulation in higher-lying excitation levels of iron ion. A probable reason for this phenomenon is that excitations for higher excited energy levels of iron ion would be predominantly caused by non-thermal collisions with argon species, the internal energy of which is received by iron atoms for the ionization. Particularly intense ionic lines, which gave a maximum peak of the Boltzmann plot, were observed at an excitation energy of ca. 7.7 eV. They were the Fe II 257.297-nm and the Fe II 258.111-nm lines, derived from the 3d⁵4s4p ⁶P excited levels. The 3d⁵4s4p ⁶P excited levels can be highly populated through a resonance charge transfer from the ground state of argon ion, because of good matching in the excitation energy as well as the conservation of the total spin before and after the collision. An enhancement factor of the emission intensity for various Fe II lines could be obtained from a deviation from the normal Boltzmann plot, which comprised the emission lines of 4.7-5.8 eV. It would roughly correspond to a contribution of the charge transfer excitation to the excited levels of iron ion, suggesting that the charge-transfer collision could elevate the number density of the corresponding excited levels by a factor of ca. 10⁴. The Boltzmann plots give important information on the reason why a variety of iron ionic lines can be emitted from glow discharge plasmas.

  1. Three-dimensional and thermal surface imaging produces reliable measures of joint shape and temperature: a potential tool for quantifying arthritis

    PubMed Central

    Spalding, Steven J; Kwoh, C Kent; Boudreau, Robert; Enama, Joseph; Lunich, Julie; Huber, Daniel; Denes, Louis; Hirsch, Raphael

    2008-01-01

Introduction The assessment of joints with active arthritis is a core component of widely used outcome measures. However, substantial variability exists within and across examiners in assessment of these active joint counts. Swelling and temperature changes, two qualities estimated during active joint counts, are amenable to quantification using noncontact digital imaging technologies. We sought to explore the ability of three dimensional (3D) and thermal imaging to reliably measure joint shape and temperature. Methods A Minolta 910 Vivid non-contact 3D laser scanner and a Meditherm med2000 Pro Infrared camera were used to create digital representations of wrist and metacarpophalangeal (MCP) joints. Specialized software generated 3 quantitative measures for each joint region: 1) Volume; 2) Surface Distribution Index (SDI), a marker of joint shape representing the standard deviation of vertical distances from points on the skin surface to a fixed reference plane; 3) Heat Distribution Index (HDI), representing the standard error of temperatures. Seven wrists and 6 MCP regions from 5 subjects with arthritis were used to develop and validate 3D image acquisition and processing techniques. HDI values from 18 wrist and 9 MCP regions were obtained from 17 patients with active arthritis and compared to data from 10 wrist and MCP regions from 5 controls. Standard deviation (SD), coefficient of variation (CV), and intraclass correlation coefficients (ICC) were calculated for each quantitative measure to establish their reliability. Results CVs for volume and SDI were <1.3% and ICCs were greater than 0.99. Thermal measures were less reliable than 3D measures. However, significant differences were observed between control and arthritis HDI values. Two case studies of arthritic joints demonstrated quantifiable changes in swelling and temperature corresponding with changes in symptoms and physical exam findings. Conclusion 3D and thermal imaging provide reliable measures of joint volume, shape, and thermal patterns. Further refinement may lead to the use of these technologies to improve the assessment of disease activity in arthritis. PMID:18215307
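
    The two indices are defined explicitly in the text, so they can be sketched directly; the plane parameters and temperature arrays below are placeholders:

        import numpy as np

        def surface_distribution_index(points, plane_normal, plane_point):
            # SDI: standard deviation of signed distances from skin-surface points
            # to a fixed reference plane.
            n = np.asarray(plane_normal, dtype=float)
            n = n / np.linalg.norm(n)
            d = (np.asarray(points, dtype=float) - np.asarray(plane_point, dtype=float)) @ n
            return d.std()

        def heat_distribution_index(temperatures):
            # HDI: standard error of the temperatures over the joint region.
            t = np.asarray(temperatures, dtype=float).ravel()
            return t.std(ddof=1) / np.sqrt(t.size)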

  2. Tracing kinematic (mis)alignments in CALIFA merging galaxies. Stellar and ionized gas kinematic orientations at every merger stage

    NASA Astrophysics Data System (ADS)

    Barrera-Ballesteros, J. K.; García-Lorenzo, B.; Falcón-Barroso, J.; van de Ven, G.; Lyubenova, M.; Wild, V.; Méndez-Abreu, J.; Sánchez, S. F.; Marquez, I.; Masegosa, J.; Monreal-Ibero, A.; Ziegler, B.; del Olmo, A.; Verdes-Montenegro, L.; García-Benito, R.; Husemann, B.; Mast, D.; Kehrig, C.; Iglesias-Paramo, J.; Marino, R. A.; Aguerri, J. A. L.; Walcher, C. J.; Vílchez, J. M.; Bomans, D. J.; Cortijo-Ferrero, C.; González Delgado, R. M.; Bland-Hawthorn, J.; McIntosh, D. H.; Bekeraitė, S.

    2015-10-01

We present spatially resolved stellar and/or ionized gas kinematic properties for a sample of 103 interacting galaxies, tracing all merger stages: close companions, pairs with morphological signatures of interaction, and coalesced merger remnants. In order to distinguish kinematic properties caused by a merger event from those driven by internal processes, we compare our galaxies with a control sample of 80 non-interacting galaxies. We measure for both the stellar and the ionized gas components the major (projected) kinematic position angles (PAkin, approaching and receding) directly from the velocity distributions with no assumptions on the internal motions. This method also allows us to derive the deviations of the kinematic PAs from a straight line (δPAkin). We find that around half of the interacting objects show morpho-kinematic PA misalignments that cannot be found in the control sample. In particular, we observe those misalignments in galaxies with morphological signatures of interaction. On the other hand, the level of alignment between the approaching and receding sides for both samples is similar, with most of the galaxies displaying small misalignments. Radial deviations of the kinematic PA orientation from a straight line in the stellar component measured by δPAkin are large for both samples. However, for a large fraction of interacting galaxies the ionized gas δPAkin is larger than the typical values derived from isolated galaxies (48%), indicating that this parameter is a good indicator to trace the impact of interaction and mergers in the internal motions of galaxies. By comparing the stellar and ionized gas kinematic PA, we find that 42% (28/66) of the interacting galaxies have misalignments larger than 16°, compared to 10% from the control sample. Our results show the impact of interactions on the motion of stellar and ionized gas as well as the wide variety of their spatially resolved kinematic distributions. This study also provides a local Universe benchmark for kinematic studies in merging galaxies at high redshift. Appendices are available in electronic form at http://www.aanda.org

  3. Plume particle collection and sizing from static firing of solid rocket motors

    NASA Technical Reports Server (NTRS)

    Sambamurthi, Jay K.

    1995-01-01

A unique dart system has been designed and built at the NASA Marshall Space Flight Center to collect aluminum oxide plume particles from the plumes of large scale solid rocket motors, such as the space shuttle RSRM. The capability of this system to collect clean samples from both the vertically fired MNASA (18.3% scaled version of the RSRM) motors and the horizontally fired RSRM motor has been demonstrated. The particle mass averaged diameters, d43, measured from the samples for the different motors, ranged from 8 to 11 μm and were independent of the dart collection surface and the motor burn time. The measured results agreed well with those calculated using the industry standard Hermsen's correlation within the standard deviation of the correlation. For each of the samples analyzed from both MNASA and RSRM motors, the distribution of the cumulative mass fraction of the plume oxide particles as a function of the particle diameter was best described by a monomodal log-normal distribution with a standard deviation of 0.13 - 0.15. This distribution agreed well with the theoretical prediction by Salita using the OD3P code for the RSRM motor at the nozzle exit plane.

  4. Optimization of a new flow design for solid oxide cells using computational fluid dynamics modelling

    NASA Astrophysics Data System (ADS)

    Duhn, Jakob Dragsbæk; Jensen, Anker Degn; Wedel, Stig; Wix, Christian

    2016-12-01

    Design of a gas distributor to distribute gas flow into parallel channels for Solid Oxide Cells (SOC) is optimized, with respect to flow distribution, using Computational Fluid Dynamics (CFD) modelling. The CFD model is based on a 3d geometric model and the optimized structural parameters include the width of the channels in the gas distributor and the area in front of the parallel channels. The flow of the optimized design is found to have a flow uniformity index value of 0.978. The effects of deviations from the assumptions used in the modelling (isothermal and non-reacting flow) are evaluated and it is found that a temperature gradient along the parallel channels does not affect the flow uniformity, whereas a temperature difference between the channels does. The impact of the flow distribution on the maximum obtainable conversion during operation is also investigated and the obtainable overall conversion is found to be directly proportional to the flow uniformity. Finally the effect of manufacturing errors is investigated. The design is shown to be robust towards deviations from design dimensions of at least ±0.1 mm which is well within obtainable tolerances.
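
    The paper does not state which uniformity index it uses; one widely used (Weltens-type) definition, equal to 1.0 for a perfectly even split across channels, can be sketched as:

        import numpy as np

        def flow_uniformity_index(channel_flows):
            # gamma = 1 - sum(|q_i - q_mean|) / (2 * n * q_mean); 1.0 = even split.
            q = np.asarray(channel_flows, dtype=float)
            return 1.0 - np.abs(q - q.mean()).sum() / (2 * q.size * q.mean())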

  5. Test of quantum thermalization in the two-dimensional transverse-field Ising model

    PubMed Central

    Blaß, Benjamin; Rieger, Heiko

    2016-01-01

    We study the quantum relaxation of the two-dimensional transverse-field Ising model after global quenches with a real-time variational Monte Carlo method and address the question whether this non-integrable, two-dimensional system thermalizes or not. We consider both interaction quenches in the paramagnetic phase and field quenches in the ferromagnetic phase and compare the time-averaged probability distributions of non-conserved quantities like magnetization and correlation functions to the thermal distributions according to the canonical Gibbs ensemble obtained with quantum Monte Carlo simulations at temperatures defined by the excess energy in the system. We find that the occurrence of thermalization crucially depends on the quench parameters: While after the interaction quenches in the paramagnetic phase thermalization can be observed, our results for the field quenches in the ferromagnetic phase show clear deviations from the thermal system. These deviations increase with the quench strength and become especially clear comparing the shape of the thermal and the time-averaged distributions, the latter ones indicating that the system does not completely lose the memory of its initial state even for strong quenches. We discuss our results with respect to a recently formulated theorem on generalized thermalization in quantum systems. PMID:27905523

  6. Incidence and rates of visual field progression after longitudinally measured optic disc change in glaucoma.

    PubMed

    Chauhan, Balwantray C; Nicolela, Marcelo T; Artes, Paul H

    2009-11-01

To determine whether glaucoma patients with progressive optic disc change have subsequent visual field progression earlier and at a faster rate compared with those without disc change. Prospective, longitudinal, cohort study. Eighty-one patients with open-angle glaucoma. Patients underwent confocal scanning laser tomography and standard automated perimetry every 6 months. The complete follow-up was divided into initial and subsequent periods. Two initial periods were used, the first 3 years (Protocol A) and the first half of the total follow-up (Protocol B), with the respective remainder being the subsequent follow-up. Disc change during the initial follow-up was determined with liberal, moderate, or conservative criteria of the Topographic Change Analysis. Subsequent field progression was determined with significant pattern deviation change in ≥3 locations (criterion used in the Early Manifest Glaucoma Trial). As a control analysis, field change during the initial follow-up was determined with significant pattern deviation change in ≥1, ≥2, or ≥3 locations. Survival time to subsequent field progression, rates of mean deviation (MD) change, and positive and negative likelihood ratios. The median (interquartile range) total follow-up was 11.0 (8.0-12.0) years with 22 (18-24) examinations. More patients had disc changes during the initial follow-up compared with field changes. The mean time to field progression was consistently shorter (Protocol A, 0.8-1.7 years; Protocol B, 0.3-0.7 years) in patients with prior disc change. In the control analysis, patients with prior field change had statistically earlier subsequent field progression (Protocol A, 2.9-3.0 years; Protocol B, 0.7-0.9 years). Similarly, patients with either prior disc or field change always had worse mean rates of subsequent MD change, although the distributions overlapped widely. Patients with subsequent field progression were up to 3 times more likely to have prior disc change compared with those without, and up to 5 times more likely to have prior field change compared with those without. Longitudinally measured optic disc change is predictive of subsequent visual field progression and may be an efficacious end point for functional outcomes in clinical studies and trials in glaucoma.

  7. Photospheric Magnetic Field Properties of Flaring versus Flare-quiet Active Regions. II. Discriminant Analysis

    NASA Astrophysics Data System (ADS)

    Leka, K. D.; Barnes, G.

    2003-10-01

We apply statistical tests based on discriminant analysis to the wide range of photospheric magnetic parameters described in a companion paper by Leka & Barnes, with the goal of identifying those properties that are important for the production of energetic events such as solar flares. The photospheric vector magnetic field data from the University of Hawai'i Imaging Vector Magnetograph are well sampled both temporally and spatially, and we include here data covering 24 flare-event and flare-quiet epochs taken from seven active regions. The mean value and rate of change of each magnetic parameter are treated as separate variables, thus evaluating both the parameter's state and its evolution, to determine which properties are associated with flaring. Considering single variables first, Hotelling's T²-tests show small statistical differences between flare-producing and flare-quiet epochs. Even pairs of variables considered simultaneously, which do show a statistical difference for a number of properties, have high error rates, implying a large degree of overlap of the samples. To better distinguish between flare-producing and flare-quiet populations, larger numbers of variables are simultaneously considered; lower error rates result, but no unique combination of variables is clearly the best discriminator. The sample size is too small to directly compare the predictive power of large numbers of variables simultaneously. Instead, we rank all possible four-variable permutations based on Hotelling's T²-test and look for the most frequently appearing variables in the best permutations, with the interpretation that they are most likely to be associated with flaring. These variables include an increasing kurtosis of the twist parameter and a larger standard deviation of the twist parameter, but a smaller standard deviation of the distribution of the horizontal shear angle and a horizontal field that has a smaller standard deviation but a larger kurtosis. To support the "sorting all permutations" method of selecting the most frequently occurring variables, we show that the results of a single 10-variable discriminant analysis are consistent with the ranking. We demonstrate that individually, the variables considered here have little ability to differentiate between flaring and flare-quiet populations, but with multivariable combinations, the populations may be distinguished.
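
    The two-sample Hotelling's T² statistic used throughout this analysis can be sketched as follows, with rows as epochs and columns as magnetic parameters; significance is assessed through the exact F transformation:

        import numpy as np
        from scipy import stats

        def hotelling_t2(X, Y):
            # Two-sample Hotelling's T-squared with pooled covariance; rows are
            # epochs, columns are magnetic parameters. Returns (T^2, p-value).
            X, Y = np.asarray(X, dtype=float), np.asarray(Y, dtype=float)
            nx, ny, p = X.shape[0], Y.shape[0], X.shape[1]
            diff = X.mean(axis=0) - Y.mean(axis=0)
            S = ((nx - 1) * np.cov(X, rowvar=False)
                 + (ny - 1) * np.cov(Y, rowvar=False)) / (nx + ny - 2)
            t2 = (nx * ny / (nx + ny)) * diff @ np.linalg.solve(S, diff)
            f_stat = t2 * (nx + ny - p - 1) / (p * (nx + ny - 2))
            return t2, stats.f.sf(f_stat, p, nx + ny - p - 1)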

  8. The Hidden Fortress: structure and substructure of the complex strong lensing cluster SDSS J1029+2623

    NASA Astrophysics Data System (ADS)

    Oguri, Masamune; Schrabback, Tim; Jullo, Eric; Ota, Naomi; Kochanek, Christopher S.; Dai, Xinyu; Ofek, Eran O.; Richards, Gordon T.; Blandford, Roger D.; Falco, Emilio E.; Fohlmeister, Janine

    2013-02-01

    We present Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS) and Wide Field Camera 3 (WFC3) observations of SDSS J1029+2623, a three-image quasar lens system produced by a foreground cluster at z = 0.584. Our strong lensing analysis reveals six additional multiply imaged galaxies in addition to the multiply imaged quasar. We confirm the complex nature of the mass distribution of the lensing cluster, with a bimodal dark matter distribution that deviates from the Chandra X-ray surface brightness distribution. The Einstein radius of the lensing cluster is estimated to be θE = 15.2 ± 0.5 arcsec for the quasar redshift of z = 2.197. We derive a radial mass distribution from the combination of strong lensing, HST/ACS weak lensing and Subaru/Suprime-cam weak lensing analysis results, finding a best-fitting virial mass of Mvir = 1.55 (+0.40/−0.35) × 10^14 h^−1 M⊙ and a concentration parameter of cvir = 25.7 (+14.1/−7.5). The lensing mass estimate at the outer radius is smaller than the X-ray mass estimate by a factor of ~2. We ascribe this large mass discrepancy to shock heating of the intracluster gas during a merger, which is also suggested by the complex mass and gas distributions and the high value of the concentration parameter. In the HST image, we also identify a probable galaxy, GX, in the vicinity of the faintest quasar image C. In strong lens models, the inclusion of GX explains the anomalous flux ratios between the quasar images. The morphology of the highly elongated quasar host galaxy is also well reproduced. The best-fitting model suggests large total magnifications of 30 for the quasar and 35 for the quasar host galaxy, and has an A-B time delay consistent with the measured value.

  9. Estimating extreme stream temperatures by the standard deviate method

    NASA Astrophysics Data System (ADS)

    Bogan, Travis; Othmer, Jonathan; Mohseni, Omid; Stefan, Heinz

    2006-02-01

    It is now widely accepted that global climate warming is taking place on the earth. Among many other effects, a rise in air temperatures is expected to increase stream temperatures. However, due to evaporative cooling, stream temperatures do not increase linearly with increasing air temperatures indefinitely. Within the anticipated bounds of climate warming, extreme stream temperatures may therefore not rise substantially. With this concept in mind, past extreme temperatures measured at 720 USGS stream gauging stations were analyzed by the standard deviate method. In this method the highest stream temperatures are expressed as the mean temperature of a measured partial maximum stream temperature series plus its standard deviation multiplied by a factor KE (the standard deviate). Various KE-values were explored; values of KE larger than 8 were found to be physically unreasonable. It is concluded that the value of KE should be in the range from 7 to 8. A unit error in estimating KE translates into a typical stream temperature error of about 0.5 °C. Using a logistic model for the stream temperature/air temperature relationship, a one-degree error in air temperature gives a typical error of 0.16 °C in stream temperature. With a projected error in the enveloping standard deviate dKE = 1.0 (range 0.5-1.5) and an error in projected high air temperature dTa = 2 °C (range 0-4 °C), the total projected stream temperature error is estimated as dTs = 0.8 °C.
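    As described, the standard deviate method reduces to a one-line estimate: the mean of the partial maximum temperature series plus KE standard deviations. A minimal sketch, with made-up temperature values rather than USGS data:

```python
import numpy as np

# Hypothetical partial series of annual maximum stream temperatures (deg C).
annual_maxima = np.array([27.1, 28.4, 26.9, 29.0, 27.7, 28.8, 27.5, 28.2])

def extreme_temperature(maxima, k_e):
    """Standard deviate estimate: mean of the maxima series plus K_E sigma."""
    return maxima.mean() + k_e * maxima.std(ddof=1)

for k_e in (7.0, 8.0):  # the range the study finds physically reasonable
    print(f"K_E = {k_e}: T_extreme = {extreme_temperature(annual_maxima, k_e):.1f} deg C")
```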

  10. Identifying and Characterizing Kinetic Instabilities using Solar Wind Observations of Non-Maxwellian Plasmas

    NASA Astrophysics Data System (ADS)

    Klein, K. G.

    2016-12-01

    Weakly collisional plasmas, of the type typically observed in the solar wind, are commonly in a state other than local thermodynamic equilibrium. This deviation from a Maxwellian velocity distribution can be characterized by pressure anisotropies, disjoint beams streaming at differing speeds, leptokurtic distributions at large energies, and other non-thermal features. As these features may be artifacts of dynamic processes, including the acceleration and expansion of the solar wind, and as the free energy contained in these features can drive kinetic micro-instabilities, accurate measurement and modeling of these features is essential for characterizing the solar wind. After a review of these features, a technique is presented for the efficient calculation of kinetic instabilities associated with a general, non-Maxwellian plasma. As a proof of principle, this technique is applied to bi-Maxwellian systems for which kinetic instability thresholds are known, focusing on parameter scans including beams and drifting heavy minor ions. The application of this technique to fits of velocity distribution functions from current, forthcoming, and proposed missions including WIND, DSCOVR, Solar Probe Plus, and THOR, as well as the underlying measured distribution functions, is discussed. Particular attention is paid to the effects of instrument pointing and integration time, as well as potential deviation between instabilities associated with the Maxwellian fits and those associated with the observed, potentially non-Maxwellian, velocity distribution. Such application may further illuminate the role instabilities play in the evolution of the solar wind.

  11. Measuring inequality: tools and an illustration.

    PubMed

    Williams, Ruth F G; Doessel, D P

    2006-05-22

    This paper examines an aspect of the problem of measuring inequality in health services. The measures that are commonly applied can be misleading because such measures obscure the difficulty in obtaining a complete ranking of distributions. The nature of the social welfare function underlying these measures is important. The overall objective is to demonstrate that varying implications for the welfare of society result from different inequality measures. Various tools for measuring a distribution are applied to some illustrative data on four distributions about mental health services. Although these data refer to this one aspect of health, the exercise is of broader relevance than mental health. The summary measures of dispersion conventionally used in empirical work are applied to the data here, such as the standard deviation, the coefficient of variation, the relative mean deviation and the Gini coefficient. Other, less commonly used measures also are applied, such as Theil's Index of Entropy and Atkinson's Measure (using two differing assumptions about the inequality aversion parameter). Lorenz curves are also drawn for these distributions. Distributions are shown to have differing rankings (in terms of which is more equal than another), depending on which measure is applied. The scope and content of the literature from the past decade about health inequalities and inequities suggest that the economic literature from the past 100 years about inequality and inequity may have been overlooked, generally speaking, in the health inequalities and inequity literature. An understanding of economic theory and economic method, partly introduced in this article, is helpful in analysing health inequality and inequity.
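    For readers who want to reproduce such comparisons, here is a small sketch of several of the dispersion measures named above, applied to a hypothetical five-region distribution. The Gini uses the standard sorted-rank formula, and the relative mean deviation uses one common normalization (the Pietra/Hoover form); neither the data nor the exact definitions are taken from the paper.

```python
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0, 30.0])  # hypothetical service counts per region

def gini(x):
    """Gini coefficient via the sorted-rank formula."""
    xs = np.sort(x)
    n = len(xs)
    ranks = np.arange(1, n + 1)
    return (2 * np.sum(ranks * xs)) / (n * xs.sum()) - (n + 1) / n

def theil(x):
    """Theil's entropy index; 0 indicates perfect equality."""
    s = x / x.mean()
    return np.mean(s * np.log(s))

cv = x.std(ddof=0) / x.mean()                      # coefficient of variation
rmd = np.abs(x - x.mean()).sum() / (2 * x.sum())   # relative mean deviation
print(f"CV={cv:.3f}  RMD={rmd:.3f}  Gini={gini(x):.3f}  Theil={theil(x):.3f}")
```

    Running the same measures on two different distributions can indeed rank them differently, which is the paper's central point.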

  12. Mean and Fluctuating Force Distribution in a Random Array of Spheres

    NASA Astrophysics Data System (ADS)

    Akiki, Georges; Jackson, Thomas; Balachandar, Sivaramakrishnan

    2015-11-01

    We present a numerical study of the force distribution within a cluster of mono-disperse spherical particles. A direct forcing immersed boundary method is used to calculate the forces on individual particles for a volume fraction range of [0.1, 0.4] and a Reynolds number range of [10, 625]. The overall drag is compared to several drag laws found in the literature. The fluctuation of the hydrodynamic streamwise force among individual particles is shown to have a normal distribution with a standard deviation that varies with the volume fraction only. The standard deviation remains approximately 25% of the mean streamwise force on a single sphere. The force distribution shows a good correlation between the location of the two to three nearest upstream and downstream neighbors and the magnitude of the forces. A detailed analysis of the pressure and shear force contributions calculated on a ghost sphere in the vicinity of a single particle in a uniform flow yields a mapping of those contributions. The combination of the mapping and the number of nearest neighbors leads to a first-order correction of the force distribution within a cluster which can be used in Lagrangian-Eulerian techniques. We also explore the possibility of a binary force model that systematically accounts for the effect of the nearest neighbors. This work was supported by the National Science Foundation (NSF OISE-0968313) under Partnership for International Research and Education (PIRE) in Multiphase Flows at the University of Florida.

  13. Pore Size Distributions Inferred from Modified Invasion Percolation Modeling of Drainage Curves

    NASA Astrophysics Data System (ADS)

    Dralus, D. E.; Wang, H. F.; Strand, T. E.; Glass, R. J.; Detwiler, R. L.

    2005-12-01

    Drainage experiments have been conducted in sand packs. At equilibrium, the interface between the fluids forms a saturation transition fringe where the saturation decreases monotonically with height. This behavior was observed in a 1-inch thick pack of 20-30 sand contained front and back within two thin, 12-inch-by-24-inch glass plates. The translucent chamber was illuminated from behind by a bank of fluorescent bulbs. Acquired data were in the form of images captured by a CCD camera with resolution on the grain scale. The measured intensity of the transmitted light was used to calculate the average saturation at each point in the chamber. This study used a modified invasion percolation (MIP) model to simulate the drainage experiments to evaluate the relationship between the saturation-versus-height curve at equilibrium and the pore size distribution associated with the granular medium. The simplest interpretation of a drainage curve is in terms of a distribution of capillary tubes whose radii reproduce the observed distribution of rise heights. However, this apparent radius distribution obtained from direct inversion of the saturation profile did not yield the assumed radius distribution. Further investigation demonstrated that the equilibrium height distribution is controlled primarily by the Bond number (ratio of gravity to capillary forces), with some influence from the width of the pore radius distribution. The width of the equilibrium fringe is quantified in terms of the ratio of Bond number to the standard deviation of the pore throat distribution. The normalized saturation-vs-height curves exhibit a power-law scaling behavior consistent with both Brooks-Corey and Van Genuchten type curves. Fundamental tenets of percolation theory were used to quantify the relationship between the apparent and actual radius distributions as a function of the mean coordination number and of the ratio of Bond number to standard deviation, which was supported by both MIP simulations and corresponding drainage experiments.

  14. Fiber optic reference frequency distribution to remote beam waveguide antennas

    NASA Technical Reports Server (NTRS)

    Calhoun, Malcolm; Kuhnle, Paul; Law, Julius

    1995-01-01

    In the NASA/JPL Deep Space Network (DSN), radio science experiments (probing outer planet atmospheres, rings, gravitational waves, etc.) and very long baseline interferometry (VLBI) require ultra-stable, low phase noise reference frequency signals at the user locations. Typical locations for radio science/VLBI exciters and down-converters are the cone areas of the 34 m high efficiency antennas or the 70 m antennas, located several hundred meters from the reference frequency standards. Over the past three years, fiber optic distribution links have replaced coaxial cable distribution for reference frequencies to these antenna sites. Optical fibers are the preferred medium for distribution because of their low attenuation, immunity to EMI/RFI, and temperature stability. A new network of Beam Waveguide (BWG) antennas presently under construction in the DSN requires hydrogen maser stability at tens of kilometers distance from the frequency standards' central location. The topic of this paper is the design and implementation of an optical fiber distribution link which provides ultra-stable reference frequencies to users at a remote BWG antenna. The temperature profile from the earth's surface to a depth of six feet over a time period of six months was used to optimize the placement of the fiber optic cables. In-situ evaluation of the fiber optic link performance indicates an Allan deviation on the order of 10^-15 at 1000 and 10,000 seconds averaging time; thus, the link stability degradation due to environmental conditions still preserves hydrogen maser stability at the user locations. This paper reports on the implementation of optical fibers and electro-optic devices for distributing very stable, low phase noise reference signals to remote BWG antenna locations. Allan deviation and phase noise test results for a 16 km fiber optic distribution link are presented in the paper.
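    The Allan deviation quoted above is straightforward to compute from a record of fractional-frequency data. A minimal non-overlapping implementation, applied to a synthetic white-noise record (the data, noise level, and 1 s sampling interval are assumptions, not DSN measurements):

```python
import numpy as np

def allan_deviation(y, m):
    """Non-overlapping Allan deviation of fractional-frequency samples y,
    averaged in blocks of m samples (averaging time tau = m * tau0)."""
    n = len(y) // m
    yb = y[: n * m].reshape(n, m).mean(axis=1)   # tau-averaged frequency values
    return np.sqrt(0.5 * np.mean(np.diff(yb) ** 2))

rng = np.random.default_rng(1)
y = 1e-13 * rng.standard_normal(100_000)  # hypothetical record, tau0 = 1 s
for m in (1, 10, 100, 1000, 10000):
    print(f"tau = {m:>5d} s : sigma_y(tau) = {allan_deviation(y, m):.2e}")
```

    For white frequency noise the deviation falls off roughly as the square root of the averaging time, which is why stabilities of order 10^-15 are quoted at the long averaging times above.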

  15. Fiber optic reference frequency distribution to remote beam waveguide antennas

    NASA Astrophysics Data System (ADS)

    Calhoun, Malcolm; Kuhnle, Paul; Law, Julius

    1995-05-01

    In the NASA/JPL Deep Space Network (DSN), radio science experiments (probing outer planet atmospheres, rings, gravitational waves, etc.) and very long baseline interferometry (VLBI) require ultra-stable, low phase noise reference frequency signals at the user locations. Typical locations for radio science/VLBI exciters and down-converters are the cone areas of the 34 m high efficiency antennas or the 70 m antennas, located several hundred meters from the reference frequency standards. Over the past three years, fiber optic distribution links have replaced coaxial cable distribution for reference frequencies to these antenna sites. Optical fibers are the preferred medium for distribution because of their low attenuation, immunity to EMI/RFI, and temperature stability. A new network of Beam Waveguide (BWG) antennas presently under construction in the DSN requires hydrogen maser stability at tens of kilometers distance from the frequency standards' central location. The topic of this paper is the design and implementation of an optical fiber distribution link which provides ultra-stable reference frequencies to users at a remote BWG antenna. The temperature profile from the earth's surface to a depth of six feet over a time period of six months was used to optimize the placement of the fiber optic cables. In-situ evaluation of the fiber optic link performance indicates an Allan deviation on the order of 10^-15 at 1000 and 10,000 seconds averaging time; thus, the link stability degradation due to environmental conditions still preserves hydrogen maser stability at the user locations. This paper reports on the implementation of optical fibers and electro-optic devices for distributing very stable, low phase noise reference signals to remote BWG antenna locations. Allan deviation and phase noise test results for a 16 km fiber optic distribution link are presented in the paper.

  16. Regional volumes and spatial volumetric distribution of gray matter in the gender dysphoric brain.

    PubMed

    Hoekzema, Elseline; Schagen, Sebastian E E; Kreukels, Baudewijntje P C; Veltman, Dick J; Cohen-Kettenis, Peggy T; Delemarre-van de Waal, Henriette; Bakker, Julie

    2015-05-01

    The sexual differentiation of the brain is primarily driven by gonadal hormones during fetal development. Leading theories on the etiology of gender dysphoria (GD) involve deviations in this process. To examine whether there are signs of a sex-atypical brain development in GD, we quantified regional neural gray matter (GM) volumes in 55 female-to-male and 38 male-to-female adolescents, 44 boys and 52 girls without GD, and applied both univariate and multivariate analyses. In girls, more GM volume was observed in the left superior medial frontal cortex, while boys had more volume in the bilateral superior posterior hemispheres of the cerebellum and the hypothalamus. Regarding the GD groups, at whole-brain level they differed only from individuals sharing their gender identity but not from their natal sex. Accordingly, using multivariate pattern recognition analyses, the GD groups could more accurately be automatically discriminated from individuals sharing their gender identity than from those sharing their natal sex based on spatially distributed GM patterns. However, region of interest analyses indicated less GM volume in the right cerebellum and more volume in the medial frontal cortex in female-to-males in comparison to girls without GD, while male-to-females had less volume in the bilateral cerebellum and hypothalamus than natal boys. Deviations from the natal sex within sexually dimorphic structures were also observed in the untreated subsamples. Our findings thus indicate that GM distribution and regional volumes in GD adolescents are largely in accordance with their respective natal sex. However, there are subtle deviations from the natal sex in sexually dimorphic structures, which can represent signs of a partial sex-atypical differentiation of the brain. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. How to model moon signals using 2-dimensional Gaussian function: Classroom activity for measuring nighttime cloud cover

    NASA Astrophysics Data System (ADS)

    Gacal, G. F. B.; Lagrosas, N.

    2016-12-01

    Nowadays, cameras are commonly used by students. In this study, we use this instrument to look at moon signals and relate these signals to Gaussian functions. To implement this as a classroom activity, students need computers, computer software to visualize signals, and moon images. A normalized Gaussian function is often used to represent probability density functions of the normal distribution. It is described by its mean m and standard deviation s; a smaller standard deviation implies less spread about the mean. For the 2-dimensional Gaussian function, the mean can be described by coordinates (x0, y0), while the standard deviations can be described by sx and sy. In modelling moon signals obtained from sky-cameras, the position of the mean (x0, y0) is found by locating the coordinates of the maximum signal of the moon. The two standard deviations are the mean-square weighted deviations computed from the sums of the pixel values over all rows and columns. If visualized in three dimensions, the 2D Gaussian function appears as a 3D bell surface (Fig. 1a). This shape is similar to the pixel value distribution of moon signals as captured by a sky-camera. An example of this is illustrated in Fig. 1b, taken around 22:20 local time on January 31, 2015. The local time is 8 hours ahead of coordinated universal time (UTC). This image was produced by a commercial camera (Canon Powershot A2300) with a 1 s exposure time, an f-stop of f/2.8, and a 5 mm focal length. One has to choose a camera with high sensitivity for nighttime operation to effectively detect these signals. Fig. 1b is obtained by converting the red-green-blue (RGB) photo to grayscale values. The grayscale values are then converted to a double-precision matrix, so that the Gaussian model and the pixel distribution of the raw signals share the same scale. Subtraction of the Gaussian model from the raw data produces a moonless image, as shown in Fig. 1c. This moonless image can be used for quantifying cloud cover as captured by ordinary cameras (Gacal et al., 2016). Cloud cover can be defined as the ratio of the number of pixels whose values exceed 0.07 to the total number of pixels. In this particular image, the cloud cover value is 0.67.
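    A compact sketch of the activity's pipeline, with a synthetic frame standing in for the camera image: locate the peak pixel as (x0, y0), estimate the two standard deviations from intensity-weighted row/column sums, subtract the fitted surface, and apply the 0.07 threshold from the text. The frame size, moon position, and background level are invented; a real grayscale image scaled to [0, 1] would replace `img`.

```python
import numpy as np

def gaussian2d(shape, x0, y0, sx, sy, amp):
    """Sample a 2-D Gaussian surface on a pixel grid."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    return amp * np.exp(-((x - x0) ** 2 / (2 * sx**2) + (y - y0) ** 2 / (2 * sy**2)))

# Synthetic grayscale frame in [0, 1]: a "moon" on a faint background pedestal
img = gaussian2d((480, 640), 320.0, 240.0, 14.0, 11.0, 0.9) + 0.02

y0, x0 = np.unravel_index(np.argmax(img), img.shape)   # moon centre = peak pixel
net = np.clip(img - np.median(img), 0.0, None)         # remove background pedestal
xs, ys = np.arange(img.shape[1]), np.arange(img.shape[0])
col, row = net.sum(axis=0), net.sum(axis=1)            # column and row totals
sx = np.sqrt(np.sum(col * (xs - x0) ** 2) / col.sum()) # weighted deviations
sy = np.sqrt(np.sum(row * (ys - y0) ** 2) / row.sum())

model = gaussian2d(img.shape, x0, y0, sx, sy, net.max())
moonless = np.clip(net - model, 0.0, None)             # subtract the moon
cloud_cover = np.mean(moonless > 0.07)                 # threshold from the text
print(f"sx={sx:.1f}px  sy={sy:.1f}px  cloud cover={cloud_cover:.2f}")
```

    On this cloud-free synthetic frame the cloud cover comes out near zero; cloudy pixels in a real frame survive the subtraction and are counted by the threshold.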

  18. The impact of physical and mental tasks on pilot mental workload

    NASA Technical Reports Server (NTRS)

    Berg, S. L.; Sheridan, T. B.

    1986-01-01

    Seven instrument-rated pilots with a wide range of backgrounds and experience levels flew four different scenarios on a fixed-base simulator. The Baseline scenario was the simplest of the four and had few mental and physical tasks. An Activity scenario had many physical but few mental tasks. The Planning scenario had few physical and many mental tasks. A Combined scenario had high mental and physical task loads. The magnitude of each pilot's altitude and airspeed deviations was measured, subjective workload ratings were recorded, and the degree of pilot compliance with assigned memory/planning tasks was noted. Mental and physical performance was a strong function of the manual activity level, but was not influenced by the mental task load. High manual task loads resulted in a large percentage of mental errors even under low mental task loads. Although all the pilots gave similar subjective ratings when the manual task load was high, subjective ratings showed greater individual differences with high mental task loads. Altitude or airspeed deviations and subjective ratings were most correlated when the total task load was very high. Although airspeed deviations, altitude deviations, and subjective workload ratings were similar for both low-experience and high-experience pilots, at very high total task loads, mental performance was much lower for the low-experience pilots.

  19. Impact of buildings on surface solar radiation over urban Beijing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Bin; Liou, Kuo-Nan; Gu, Yu

    The rugged surface of an urban area due to varying buildings can interact with solar beams and affect both the magnitude and spatiotemporal distribution of surface solar fluxes. Here we systematically examine the impact of buildings on downward surface solar fluxes over urban Beijing by using a 3-D radiation parameterization that accounts for 3-D building structures vs. the conventional plane-parallel scheme. We find that the resulting downward surface solar flux deviations between the 3-D and the plane-parallel schemes are generally ±1-10 W m⁻² at 800 m grid resolution and within ±1 W m⁻² at 4 km resolution. Pairs of positive-negative flux deviations on different sides of buildings are resolved at 800 m resolution, while they offset each other at 4 km resolution. Flux deviations from the unobstructed horizontal surface at 4 km resolution are positive around noon but negative in the early morning and late afternoon. The corresponding deviations at 800 m resolution, in contrast, show diurnal variations that are strongly dependent on the location of the grids relative to the buildings. Both the magnitude and spatiotemporal variations of flux deviations are largely dominated by the direct flux. Furthermore, we find that flux deviations can potentially be an order of magnitude larger by using a finer grid resolution. Atmospheric aerosols can reduce the magnitude of downward surface solar flux deviations by 10-65%, while the surface albedo generally has a rather moderate impact on flux deviations. The results imply that the effect of buildings on downward surface solar fluxes may not be critically significant in mesoscale atmospheric models with a grid resolution of 4 km or coarser. However, the effect can play a crucial role in meso-urban atmospheric models as well as microscale urban dispersion models with resolutions of 1 m to 1 km.

  20. Analysis of all-optical temporal integrator employing phased-shifted DFB-SOA.

    PubMed

    Jia, Xin-Hong; Ji, Xiao-Ling; Xu, Cong; Wang, Zi-Nan; Zhang, Wei-Li

    2014-11-17

    All-optical temporal integration using a phase-shifted distributed-feedback semiconductor optical amplifier (DFB-SOA) is investigated. The influences of system parameters on its energy transmittance and integration error are explored in detail. The numerical analysis shows that enhanced energy transmittance and an extended integration time window can be achieved simultaneously by increasing the injection current in the vicinity of the lasing threshold. We find that the range of input pulse-widths with lower integration error is highly sensitive to the injected optical power, due to gain saturation and the induced detuning deviation mechanism. The initial frequency detuning should also be carefully chosen to suppress the integration deviation from the ideal output waveform.

  1. QED is not endangered by the proton's size

    NASA Astrophysics Data System (ADS)

    De Rújula, A.

    2010-10-01

    Pohl et al. have reported a very precise measurement of the Lamb shift in muonic hydrogen (Pohl et al., 2010) [1], from which they infer the radius characterizing the proton's charge distribution. The result is 5 standard deviations away from that of the CODATA compilation of physical constants. This has been interpreted (Pohl et al., 2010) [1] as possibly requiring a 4.9-standard-deviation modification of the Rydberg constant, to a new value that would be precise to 3.3 parts in 10^13, as well as putative evidence for physics beyond the standard model (Flowers, 2010) [2]. I demonstrate that these options are unsubstantiated.

  2. Common inputs in subthreshold membrane potential: The role of quiescent states in neuronal activity

    NASA Astrophysics Data System (ADS)

    Montangie, Lisandro; Montani, Fernando

    2018-06-01

    Experiments in certain regions of the cerebral cortex suggest that the spiking activity of neuronal populations is regulated by common non-Gaussian inputs across neurons. We model these deviations from random-walk processes with q-Gaussian distributions into simple threshold neurons, and investigate the scaling properties in large neural populations. We show that deviations from the Gaussian statistics provide a natural framework to regulate population statistics such as sparsity, entropy, and specific heat. This type of description allows us to provide an adequate strategy to explain the information encoding in the case of low neuronal activity and its possible implications on information transmission.
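    For reference, a sketch of the q-Gaussian family mentioned above, built from the standard Tsallis q-exponential; the grid, β value, and numerical normalization are illustrative choices, not parameters from the paper:

```python
import numpy as np

def q_exponential(u, q):
    """Tsallis q-exponential e_q(u); reduces to exp(u) as q -> 1."""
    if np.isclose(q, 1.0):
        return np.exp(u)
    base = np.maximum(1.0 + (1.0 - q) * u, 0.0)
    return base ** (1.0 / (1.0 - q))

def q_gaussian(x, q, beta=1.0):
    """q-Gaussian profile e_q(-beta x^2), normalized numerically on the grid."""
    p = q_exponential(-beta * x ** 2, q)
    return p / (p.sum() * (x[1] - x[0]))

x = np.linspace(-10.0, 10.0, 4001)
for q in (1.0, 1.3, 1.7):  # q > 1 produces the heavy tails of interest
    p = q_gaussian(x, q)
    tail = p[np.abs(x) > 3.0].sum() * (x[1] - x[0])
    print(f"q = {q}: probability mass beyond |x| = 3 is {tail:.4f}")
```

    The heavier tails for q > 1 are exactly the kind of non-Gaussian input statistics the model feeds into the threshold neurons.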

  3. LOKI WIND CORRECTION COMPUTER AND WIND STUDIES FOR LOKI

    DTIC Science & Technology

    which relates burnout deviation of flight path with the distributed wind along the boost trajectory. The wind influence function was applied to...electrical outputs. A complete wind correction computer system based on the influence function and the results of wind studies was designed.

  4. Descriptive Statistics and Cluster Analysis for Extreme Rainfall in Java Island

    NASA Astrophysics Data System (ADS)

    E Komalasari, K.; Pawitan, H.; Faqih, A.

    2017-03-01

    This study aims to describe the regional pattern of extreme rainfall based on maximum daily rainfall for the period 1983 to 2012 in Java Island. Descriptive statistical analysis was performed to obtain the central tendency, variation and distribution of the maximum precipitation data. The mean and median are utilized to measure the central tendency of the data, while the interquartile range (IQR) and standard deviation are utilized to measure its variation. In addition, skewness and kurtosis are used to describe the shape of the distribution of the rainfall data. Cluster analysis using squared Euclidean distance and Ward's method is applied to perform the regional grouping. Results of this study show that the mean (average) maximum daily rainfall in the Java region during the period 1983-2012 is around 80-181 mm, with a median between 75 and 160 mm and a standard deviation between 17 and 82. Cluster analysis produces four clusters and shows that the western area of Java tends to have higher annual maxima of daily rainfall than the northern area, and more variety in the annual maximum values.
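    A minimal sketch of the grouping step, with random numbers standing in for the per-station rainfall statistics (the feature set, station count, and cluster count of four are placeholders mirroring the abstract, not the study's data):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
# Hypothetical station features: [mean, median, IQR, std, skewness, kurtosis]
features = rng.normal(size=(30, 6))

# Ward's method on Euclidean distances, as in the study
Z = linkage(features, method="ward")
labels = fcluster(Z, t=4, criterion="maxclust")  # cut the tree at 4 clusters
for k in range(1, 5):
    print(f"cluster {k}: {np.sum(labels == k)} stations")
```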

  5. Heterogeneity-induced large deviations in activity and (in some cases) entropy production

    NASA Astrophysics Data System (ADS)

    Gingrich, Todd R.; Vaikuntanathan, Suriyanarayanan; Geissler, Phillip L.

    2014-10-01

    We solve a simple model that supports a dynamic phase transition and show conditions for the existence of the transition. Using methods of large deviation theory we analytically compute the probability distribution for activity and entropy production rates of the trajectories on a large ring with a single heterogeneous link. The corresponding joint rate function demonstrates two dynamical phases—one localized and the other delocalized, but the marginal rate functions do not always exhibit the underlying transition. Symmetries in dynamic order parameters influence the observation of a transition, such that distributions for certain dynamic order parameters need not reveal an underlying dynamical bistability. Solution of our model system furthermore yields the form of the effective Markov transition matrices that generate dynamics in which the two dynamical phases are at coexistence. We discuss the implications of the transition for the response of bacterial cells to antibiotic treatment, arguing that even simple models of a cell cycle lacking an explicit bistability in configuration space will exhibit a bistability of dynamical phases.

  6. Small-Signal Analysis of Autonomous Hybrid Distributed Generation Systems in Presence of Ultracapacitor and Tie-Line Operation

    NASA Astrophysics Data System (ADS)

    Ray, Prakash K.; Mohanty, Soumya R.; Kishor, Nand

    2010-07-01

    This paper presents a small-signal analysis of isolated as well as interconnected autonomous hybrid distributed generation systems under sudden variations in load demand, wind speed and solar radiation. The hybrid systems comprise different renewable energy resources, such as wind, photovoltaic (PV), fuel cell (FC) and diesel engine generator (DEG), along with energy storage devices such as the flywheel energy storage system (FESS) and battery energy storage system (BESS). Further, ultracapacitors (UC) as an alternative energy storage element and interconnection of the hybrid systems through a tie-line are incorporated into the system for improved performance. A comparative assessment of the frequency deviation profiles of the different hybrid systems in the presence of different storage system combinations is carried out graphically as well as in terms of the performance index (PI), i.e., the integral square error (ISE). Both qualitative and quantitative analyses reflect the improvement in the frequency deviation profiles in the presence of ultracapacitors (UC) as compared to other energy storage elements.
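    The quantitative criterion used here, the integral square error, is simply the time integral of the squared deviation signal. A minimal sketch with a made-up frequency-deviation trace (the decaying oscillation is an assumption for illustration):

```python
import numpy as np

def ise(t, error):
    """Integral square error via trapezoidal integration."""
    return np.sum(0.5 * (error[1:] ** 2 + error[:-1] ** 2) * np.diff(t))

t = np.linspace(0.0, 10.0, 1001)
freq_dev = 0.05 * np.exp(-0.8 * t) * np.cos(3.0 * t)  # hypothetical deviation (Hz)
print(f"ISE = {ise(t, freq_dev):.3e}")
```

    A storage element that damps the deviation faster produces a smaller ISE, which is how the comparative assessment is scored.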

  7. Discrete distributed strain sensing of intelligent structures

    NASA Technical Reports Server (NTRS)

    Anderson, Mark S.; Crawley, Edward F.

    1992-01-01

    Techniques are developed for the design of discrete highly distributed sensor systems for use in intelligent structures. First the functional requirements for such a system are presented. Discrete spatially averaging strain sensors are then identified as satisfying the functional requirements. A variety of spatial weightings for spatially averaging sensors are examined, and their wave number characteristics are determined. Preferable spatial weightings are identified. Several numerical integration rules used to integrate such sensors in order to determine the global deflection of the structure are discussed. A numerical simulation is conducted using point and rectangular sensors mounted on a cantilevered beam under static loading. Gage factor and sensor position uncertainties are incorporated to assess the absolute error and standard deviation of the error in the estimated tip displacement found by numerically integrating the sensor outputs. An experiment is carried out using a statically loaded cantilevered beam with five point sensors. It is found that in most cases the actual experimental error is within one standard deviation of the absolute error as found in the numerical simulation.
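    To make the integration step concrete, here is a minimal cantilever example: point strain readings are converted to curvature and numerically integrated twice (trapezoidal rule) to estimate the tip deflection. The beam properties, sensor count, and loading are invented for illustration, not taken from the experiment.

```python
import numpy as np

L, c = 0.5, 0.005            # beam length (m), distance to outer fiber (m)
x = np.linspace(0.0, L, 5)   # five point sensors along the cantilever

# Hypothetical surface strains for a tip-loaded cantilever: eps = c*P*(L - x)/(E*I)
P, E, I = 2.0, 70e9, 1.0e-10
eps = c * P * (L - x) / (E * I)

kappa = eps / c  # curvature recovered from surface strain

def cumtrapz0(y, x):
    """Cumulative trapezoidal integral, zero at x[0]."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
    return out

slope = cumtrapz0(kappa, x)       # w'(x); clamped root gives w'(0) = 0
deflection = cumtrapz0(slope, x)  # w(x); w(0) = 0
print(f"estimated tip deflection:      {deflection[-1] * 1e3:.2f} mm")
print(f"beam-theory value P*L^3/(3EI): {P * L**3 / (3 * E * I) * 1e3:.2f} mm")
```

    The small gap between the two printed values is the integration-rule error that the paper's simulations quantify alongside gage factor and position uncertainties.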

  8. Slant path L- and S-Band tree shadowing measurements

    NASA Technical Reports Server (NTRS)

    Vogel, Wolfhard J.; Torrence, Geoffrey W.

    1994-01-01

    This contribution presents selected results from simultaneous L- and S-Band slant-path fade measurements through a pecan, a cottonwood, and a pine tree employing a tower-mounted transmitter and dual-frequency receiver. A single, circularly-polarized antenna was used at each end of the link. The objective was to provide information for personal communications satellite design on the correlation of tree shadowing between frequencies near 1620 and 2500 MHz. Fades were measured along 10 m lateral distance with 5 cm spacing. Instantaneous fade differences between L- and S-Band exhibited normal distribution with means usually near 0 dB and standard deviations from 5.2 to 7.5 dB. The cottonwood tree was an exception, with 5.4 dB higher average fading at S- than at L-Band. The spatial autocorrelation reduced to near zero with lags of about 10 lambda. The fade slope in dB/MHz is normally distributed with zero mean and standard deviation increasing with fade level.

  9. Slant path L- and S-Band tree shadowing measurements

    NASA Astrophysics Data System (ADS)

    Vogel, Wolfhard J.; Torrence, Geoffrey W.

    1994-08-01

    This contribution presents selected results from simultaneous L- and S-Band slant-path fade measurements through a pecan, a cottonwood, and a pine tree employing a tower-mounted transmitter and dual-frequency receiver. A single, circularly-polarized antenna was used at each end of the link. The objective was to provide information for personal communications satellite design on the correlation of tree shadowing between frequencies near 1620 and 2500 MHz. Fades were measured along 10 m lateral distance with 5 cm spacing. Instantaneous fade differences between L- and S-Band exhibited normal distribution with means usually near 0 dB and standard deviations from 5.2 to 7.5 dB. The cottonwood tree was an exception, with 5.4 dB higher average fading at S- than at L-Band. The spatial autocorrelation reduced to near zero with lags of about 10 lambda. The fade slope in dB/MHz is normally distributed with zero mean and standard deviation increasing with fade level.

  10. Unified approach to probing Coulomb effects in tunnel ionization for any ellipticity of laser light.

    PubMed

    Landsman, A S; Hofmann, C; Pfeiffer, A N; Cirelli, C; Keller, U

    2013-12-27

    We present experimental data that show significant deviations from theoretical predictions for the location of the center of the electron momenta distribution at low values of ellipticity ε of laser light. We show that these deviations are caused by significant Coulomb focusing along the minor axis of polarization, something that is normally neglected in the analysis of electron dynamics, even in cases where the Coulomb correction is otherwise taken into account. By investigating ellipticity-resolved electron momenta distributions in the plane of polarization, we show that Coulomb focusing predominates at lower values of ellipticity of laser light, while Coulomb asymmetry becomes important at higher values, showing that these two complementary phenomena can be used to probe long-range Coulomb interaction at all polarizations of laser light. Our results suggest that both the breakdown of Coulomb focusing and the onset of Coulomb asymmetry are linked to the disappearance of Rydberg states with increasing ellipticity.

  11. A robust nonlinear filter for image restoration.

    PubMed

    Koivunen, V

    1995-01-01

    A class of nonlinear regression filters based on robust estimation theory is introduced. The goal of the filtering is to recover a high-quality image from degraded observations. Models for desired image structures and contaminating processes are employed, but deviations from strict assumptions are allowed since the assumptions on signal and noise are typically only approximately true. The robustness of filters is usually addressed only in a distributional sense, i.e., the actual error distribution deviates from the nominal one. In this paper, the robustness is considered in a broad sense since the outliers may also be due to inappropriate signal model, or there may be more than one statistical population present in the processing window, causing biased estimates. Two filtering algorithms minimizing a least trimmed squares criterion are provided. The design of the filters is simple since no scale parameters or context-dependent threshold values are required. Experimental results using both real and simulated data are presented. The filters effectively attenuate both impulsive and nonimpulsive noise while recovering the signal structure and preserving interesting details.
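    As a flavor of the least-trimmed-squares idea (a sketch, not the paper's actual filter), here is an LTS location estimate for a single processing window. It exploits the fact that, for location estimation, the optimal h-point subset is contiguous in the sorted sample.

```python
import numpy as np

def lts_location(x, h=None):
    """Least trimmed squares location estimate: the mean of the h sorted
    observations whose sum of squared residuals about their own mean is
    smallest (the optimal subset is contiguous after sorting)."""
    xs = np.sort(np.asarray(x, dtype=float))
    n = len(xs)
    if h is None:
        h = n // 2 + 1  # trim up to half the window
    best_mean, best_sse = xs[:h].mean(), np.inf
    for i in range(n - h + 1):
        w = xs[i : i + h]
        sse = np.sum((w - w.mean()) ** 2)
        if sse < best_sse:
            best_mean, best_sse = w.mean(), sse
    return best_mean

rng = np.random.default_rng(3)
# A processing window: 20 "clean" pixel values plus three impulsive outliers
window = np.concatenate([rng.normal(100.0, 2.0, 20), [210.0, 220.0, 235.0]])
print(f"plain mean: {window.mean():.1f}, LTS estimate: {lts_location(window):.1f}")
```

    The impulses drag the plain mean far from 100 while the LTS estimate barely moves, which is the robustness property this filter class relies on.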

  12. Enhanced electronic excitation energy transfer between dye molecules incorporated in nano-scale media with apparent fractal dimensionality

    NASA Astrophysics Data System (ADS)

    Yefimova, Svetlana L.; Rekalo, Andrey M.; Gnap, Bogdan A.; Viagin, Oleg G.; Sorokin, Alexander V.; Malyukin, Yuri V.

    2014-09-01

    In the present study, we analyze the efficiency of Electronic Excitation Energy Transfer (EEET) between two dyes, an energy donor (D) and acceptor (A), concentrated in structurally heterogeneous media (surfactant micelles, liposomes, and porous SiO2 matrices). In all three cases, highly effective EEET in pairs of dyes has been found that cannot be explained by standard Förster-type theory for homogeneous solutions. Two independent approaches, based on the analysis of either the D relative quantum yield or the D fluorescence decay, have been used to study the deviation of the experimental results from the theoretical description of the EEET process. The observed deviation is quantified by the parameter of the apparent fractal distribution of the molecules. We conclude that the highly effective EEET observed in the nano-scale media under study can be explained by both the forced concentration of the hydrophobic dyes within nano-volumes and the non-uniform, cluster-like character of the distribution of D and A dye molecules within those nano-volumes.

  13. Quality assurance of proton beams using a multilayer ionization chamber system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dhanesar, Sandeep; Sahoo, Narayan; Kerr, Matthew

    2013-09-15

    Purpose: The measurement of percentage depth-dose (PDD) distributions for the quality assurance of clinical proton beams is most commonly performed with a computerized water tank dosimetry system with ionization chamber, commonly referred to as water tank. Although the accuracy and reproducibility of this method is well established, it can be time-consuming if a large number of measurements are required. In this work the authors evaluate the linearity, reproducibility, sensitivity to field size, accuracy, and time-savings of another system: the Zebra, a multilayer ionization chamber system. Methods: The Zebra, consisting of 180 parallel-plate ionization chambers with 2 mm resolution, was used to measure depth-dose distributions. The measurements were performed for scattered and scanned proton pencil beams of multiple energies delivered by the Hitachi PROBEAT synchrotron-based delivery system. For scattered beams, the Zebra-measured depth-dose distributions were compared with those measured with the water tank. The principal descriptors extracted for comparisons were: range, the depth of the distal 90% dose; spread-out Bragg peak (SOBP) length, the region between the proximal 95% and distal 90% dose; and distal-dose fall off (DDF), the region between the distal 80% and 20% dose. For scanned beams, the Zebra-measured ranges were compared with those acquired using a Bragg peak chamber during commissioning. Results: The Zebra demonstrated better than 1% reproducibility and monitor unit linearity. The response of the Zebra was found to be sensitive to radiation field sizes greater than 12.5 × 12.5 cm; hence, the measurements used to determine accuracy were performed using a field size of 10 × 10 cm. For the scattered proton beams, PDD distributions showed 1.5% agreement within the SOBP, and 3.8% outside. Range values agreed within −0.1 ± 0.4 mm, with a maximum deviation of 1.2 mm. SOBP length values agreed within 0 ± 2 mm, with a maximum deviation of 6 mm. DDF values agreed within 0.3 ± 0.1 mm, with a maximum deviation of 0.6 mm. For the scanned proton pencil beams, Zebra and Bragg peak chamber range values demonstrated agreement of 0.0 ± 0.3 mm with a maximum deviation of 1.3 mm. The setup and measurement time for all Zebra measurements was 3 and 20 times less, respectively, compared to the water tank measurements. Conclusions: Our investigation shows that the Zebra can be useful not only for fast but also for accurate measurements of the depth-dose distributions of both scattered and scanned proton beams. The analysis of a large set of measurements shows that the commonly assessed beam quality parameters obtained with the Zebra are within the acceptable variations specified by the manufacturer for our delivery system.
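    The beam-quality descriptors compared above (range at the distal 90% dose, DDF between the distal 80% and 20% doses) can be extracted from any sampled depth-dose curve by linear interpolation. A toy sketch with a synthetic Bragg-like curve, not Zebra data; the same crossing function applied to proximal levels yields the SOBP length:

```python
import numpy as np

def distal_crossing(depth, dose, level):
    """Depth where the normalized curve falls through `level` on the
    distal side, found by linear interpolation between samples."""
    d = dose / dose.max()
    i = np.where(d >= level)[0][-1]            # last sample still above level
    f = (d[i] - level) / (d[i] - d[i + 1])     # fractional step to the crossing
    return depth[i] + f * (depth[i + 1] - depth[i])

depth = np.arange(0.0, 360.0, 2.0)                        # 2 mm sampling
dose = np.exp(-((depth - 250.0) ** 2) / (2 * 15.0 ** 2))  # toy Bragg-like peak

r90 = distal_crossing(depth, dose, 0.90)
ddf = distal_crossing(depth, dose, 0.20) - distal_crossing(depth, dose, 0.80)
print(f"range (distal 90%): {r90:.1f} mm, DDF (80% -> 20%): {ddf:.1f} mm")
```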

  14. Particle yields from numerical simulations

    NASA Astrophysics Data System (ADS)

    Homor, Marietta M.; Jakovác, Antal

    2018-04-01

    In this paper we use numerical field theoretical simulations to calculate particle yields. We demonstrate that in the model of local particle creation the deviation from the pure exponential distribution is natural even in equilibrium, and an approximate Tsallis-Pareto-like distribution function can be well fitted to the calculated yields, in accordance with the experimental observations. We present numerical simulations in the classical Φ4 model as well as in the SU(3) quantum Yang-Mills theory to clarify this issue.
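    One commonly used Tsallis-Pareto parameterization of a yield is shown below against a pure exponential with the same temperature parameter; the values of q, T, and the pT grid are illustrative, not fitted to the paper's simulations:

```python
import numpy as np

def tsallis_pareto(pt, A, q, T):
    """A common Tsallis-Pareto form; recovers A*exp(-pt/T) as q -> 1."""
    return A * (1.0 + (q - 1.0) * pt / T) ** (-1.0 / (q - 1.0))

pt = np.linspace(0.1, 5.0, 50)               # GeV/c, hypothetical grid
y = tsallis_pareto(pt, A=1.0, q=1.1, T=0.16)
ratio = y / np.exp(-pt / 0.16)               # deviation from pure exponential
print(ratio[[0, 24, -1]])                    # power-law tail enhances high pT
```

    The growing ratio at large pT is exactly the deviation from a pure exponential spectrum that the abstract reports as natural even in equilibrium.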

  15. Local Stretching Theories

    DTIC Science & Technology

    2010-06-24

    diffusivity of the scalar. (If the scalar is heat, then the Schmidt number becomes the Prandtl number.) Momentum diffuses significantly faster than the...derive the Cramér function explicitly in the simple case where the xi have a Bernoulli distribution, though the general formula for S may be derived by...an analogous procedure. 5 Large deviation CLT for the Bernoulli distribution. Let xi have the PDF of a fair coin, p(xi) = (1/2)δ(xi + 1) + (1/2)δ(xi − 1).

  16. Biggs AAF, El Paso, Texas. Revised Uniform Summary of Surface Weather Observations (RUSSWO). Parts A-F

    DTIC Science & Technology

    1981-01-14

    wet-bulb temperature depression versus dry-bulb temperature, means and standard deviations of dry-bulb and wet-bulb temperatures (over)...distribution tables; dry-bulb temperature versus wet-bulb temperature; cumulative percentage frequency of distribution tables; and dew point...PART 5: PRECIPITATION; PSYCHROMETRIC, DRY VS WET BULB; SNOWFALL; MEAN & STD DEV SNOW DEPTH; DRY BULB, WET BULB, & DEW POINT; RELATIVE HUMIDITY; PART C: SURFACE

  17. A study of the application of power-spectral methods of generalized harmonic analysis to gust loads on airplanes

    NASA Technical Reports Server (NTRS)

    Press, Harry; Mazelsky, Bernard

    1954-01-01

    The applicability of some results from the theory of generalized harmonic analysis (or power-spectral analysis) to the analysis of gust loads on airplanes in continuous rough air is examined. The general relations for linear systems between the power spectrums of a random input disturbance and an output response are used to relate the spectrum of airplane load in rough air to the spectrum of atmospheric gust velocity. The power spectrum of loads is shown to provide a measure of the load intensity in terms of the standard deviation (root mean square) of the load distribution for an airplane in flight through continuous rough air. For the case of a load output having a normal distribution, which appears from experimental evidence to apply to homogeneous rough air, the standard deviation is shown to describe the probability distribution of loads, or the proportion of total time that the load has given values. Thus, for an airplane in flight through homogeneous rough air, the probability distribution of loads may be determined from a power-spectral analysis. In order to illustrate the application of power-spectral analysis to gust-load analysis and to obtain an insight into the relations between loads and airplane gust-response characteristics, two selected series of calculations are presented. The results indicate that both methods of analysis yield results that are consistent to a first approximation.
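    The key relation used above, that the area under the load power spectrum gives the variance (and hence the standard deviation) of the load, is easy to verify numerically. The filter and noise record below are generic stand-ins for an airplane's gust response, not the report's calculations:

```python
import numpy as np
from scipy import signal, stats

rng = np.random.default_rng(4)
fs = 100.0  # sampling rate, Hz
# A colored, zero-mean "load" record: white noise through a simple lag filter
load = signal.lfilter([1.0], [1.0, -0.9], rng.standard_normal(200_000))

f, pxx = signal.welch(load, fs=fs, nperseg=4096)  # one-sided power spectrum
sigma = np.sqrt(np.sum(pxx) * (f[1] - f[0]))      # RMS load from spectrum area
print(f"sigma from spectrum: {sigma:.3f}, direct std: {load.std():.3f}")

# With a normal load distribution, sigma fixes the exceedance probabilities:
print(f"P(load > 3 sigma) = {stats.norm.sf(3.0):.2e}")
```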

  18. Comparison of different functional EIT approaches to quantify tidal ventilation distribution.

    PubMed

    Zhao, Zhanqi; Yun, Po-Jen; Kuo, Yen-Liang; Fu, Feng; Dai, Meng; Frerichs, Inez; Möller, Knut

    2018-01-30

    The aim of the study was to examine the pros and cons of different types of functional EIT (fEIT) for quantifying tidal ventilation distribution in a clinical setting. fEIT images were calculated with (1) the standard deviation of the pixel time curve, (2) regression coefficients of global and local impedance time curves, or (3) mean tidal variations. To characterize temporal heterogeneity of tidal ventilation distribution, another fEIT image of pixel inspiration times is also proposed. fEIT-regression is very robust to signals with different phase information. When the respiratory signal must be distinguished from the heart-beat-related signal, or during high-frequency oscillatory ventilation, fEIT-regression is superior to the other types. fEIT-tidal variation is the most stable image type with regard to baseline shift. We recommend using this type of fEIT image for preliminary evaluation of the acquired EIT data. However, all of these fEITs would be misleading in their assessment of ventilation distribution in the presence of temporal heterogeneity. The analysis software provided by the currently available commercial EIT equipment offers only either the fEIT of standard deviation or that of tidal variation. Considering the pros and cons of each fEIT type, we recommend embedding more types into the analysis software to allow physicians to deal with more complex clinical applications using on-line EIT measurements.
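    A sketch of the three fEIT variants on a synthetic pixel time series; the frame rate, breathing frequency, array size, and crude breath detection are arbitrary assumptions here, and real EIT frames would replace `pixels`:

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(5)
t = np.arange(600) / 20.0                        # 20 frames/s, 30 s record
breath = np.sin(2 * np.pi * 0.25 * t)            # 15 breaths per minute
gain = rng.uniform(0.0, 1.0, size=(32, 32))      # regional ventilation map
pixels = breath[:, None, None] * gain + 0.05 * rng.standard_normal((600, 32, 32))

# (1) standard deviation of each pixel time curve
feit_sd = pixels.std(axis=0)

# (2) regression coefficient of each pixel curve on the global curve
g = pixels.sum(axis=(1, 2))
gc = g - g.mean()
feit_reg = np.tensordot(gc, pixels - pixels.mean(axis=0), axes=(0, 0)) / np.sum(gc**2)

# (3) mean tidal variation: end-inspiration minus end-expiration frames,
# paired by a crude peak/trough detection on the global curve
peaks, _ = find_peaks(g, distance=60)
troughs, _ = find_peaks(-g, distance=60)
n = min(len(peaks), len(troughs))
feit_tv = np.mean(pixels[peaks[:n]] - pixels[troughs[:n]], axis=0)

print(feit_sd.mean(), feit_reg.mean(), feit_tv.mean())
```

    On this phase-aligned synthetic data all three maps recover the ventilation pattern; their differences emerge with baseline drift, cardiac signals, or out-of-phase regions, which is what the comparison in the study is about.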

  19. The magnetisation distribution of the Ising model - a new approach

    NASA Astrophysics Data System (ADS)

    Hakan Lundow, Per; Rosengren, Anders

    2010-03-01

    A completely new approach to the Ising model in 1 to 5 dimensions is developed. We employ a generalisation of the binomial coefficients to describe the magnetisation distributions of the Ising model. For the complete graph this distribution is exact. For simple lattices of dimensions d=1 and d=5 the magnetisation distributions are remarkably well fitted by the generalised binomial distributions. For d=4 we are only slightly less successful, while for d=2,3 we see some deviations (with exceptions!) between the generalised binomial and the Ising distributions. The results speak in favour of the generalised binomial distributions' correctness regarding their general behaviour in comparison to the Ising model. A theoretical analysis of the distributions' moments also lends support to their being correct asymptotically, including the logarithmic corrections in d=4. The full extent to which they correctly model the Ising distribution, and for which graph families, is not settled, though.

  20. Hoeffding Type Inequalities and their Applications in Statistics and Operations Research

    NASA Astrophysics Data System (ADS)

    Daras, Tryfon

    2007-09-01

    Large Deviation theory is the branch of probability theory that deals with rare events. Sometimes, these events can be described by a sum of random variables that deviates from its mean by more than a "normal" amount. A precise calculation of the probabilities of such events turns out to be crucial in a variety of different contexts (e.g. in Probability Theory, Statistics, Operations Research, Statistical Physics, Financial Mathematics, etc.). Recent applications of the theory deal with random walks in random environments, interacting diffusions, heat conduction, polymer chains [1]. In this paper we prove an inequality of exponential type, namely Theorem 2.1, which gives a large deviation upper bound for a specific sequence of r.v.s. Inequalities of this type have many applications in Combinatorics [2]. The inequality generalizes already-proven results of this type in the case of symmetric probability measures. We obtain as consequences of the inequality: (a) large deviation upper bounds for exchangeable Bernoulli sequences of random variables, generalizing results proven for independent and identically distributed Bernoulli sequences of r.v.s, and (b) a general form of Bernstein's inequality. We compare the inequality with large deviation results already proven by the author and examine its advantages. Finally, using the inequality, we solve one of the basic problems of Operations Research (the bin packing problem) in the case of exchangeable r.v.s.
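    As a concrete instance of the exponential-type bounds discussed, here is a quick Monte Carlo check of Hoeffding's classical inequality, P(X̄n − μ ≥ t) ≤ exp(−2nt²), for i.i.d. Bernoulli(1/2) variables; the exchangeable and Bernstein variants proved in the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)
n, trials, t = 200, 20_000, 0.1
x = rng.integers(0, 2, size=(trials, n))        # i.i.d. Bernoulli(1/2) samples
dev = x.mean(axis=1) - 0.5                      # deviation of each sample mean

empirical = np.mean(dev >= t)                   # observed tail frequency
hoeffding = np.exp(-2 * n * t ** 2)             # Hoeffding upper bound
print(f"empirical {empirical:.2e} <= bound {hoeffding:.2e}")
```

    The empirical tail frequency sits well below the bound, as it must; the paper's contribution is extending such bounds beyond independence to exchangeable sequences.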
