Sample records for large deviation method

  1. Large deviation function for a driven underdamped particle in a periodic potential

    NASA Astrophysics Data System (ADS)

    Fischer, Lukas P.; Pietzonka, Patrick; Seifert, Udo

    2018-02-01

    Employing large deviation theory, we explore current fluctuations of underdamped Brownian motion for the paradigmatic example of a single particle in a one-dimensional periodic potential. Two different approaches to the large deviation function of the particle current are presented. First, we derive an explicit expression for the large deviation functional of the empirical phase space density, which replaces the level 2.5 functional used for overdamped dynamics. Using this approach, we obtain several bounds on the large deviation function of the particle current. We compare these to bounds for overdamped dynamics that have recently been derived, motivated by the thermodynamic uncertainty relation. Second, we provide a method to calculate the large deviation function via the cumulant generating function. We use this method to assess the tightness of the bounds in a numerical case study for a cosine potential.
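
    The second approach above goes through the cumulant generating function. A minimal, hedged sketch of that route in a much simpler setting (a driven walker on a ring with hypothetical rates p and q, not the paper's underdamped dynamics): the scaled cumulant generating function (SCGF) of the current is the largest eigenvalue of a tilted generator, and the large deviation function follows by Legendre transform.

    ```python
    import numpy as np

    # Discrete toy analogue (not the paper's underdamped dynamics): a driven
    # walker on a ring of L sites, forward rate p, backward rate q. The SCGF
    # of the particle current is the largest eigenvalue of the tilted
    # generator, where forward hops carry exp(s) and backward hops exp(-s).
    L, p, q = 8, 1.0, 0.5

    def scgf(s):
        W = np.zeros((L, L))
        for i in range(L):
            W[(i + 1) % L, i] = p * np.exp(s)    # forward hop, current +1
            W[(i - 1) % L, i] = q * np.exp(-s)   # backward hop, current -1
            W[i, i] = -(p + q)                   # total escape rate
        return np.linalg.eigvals(W).real.max()

    # Large deviation function by numerical Legendre transform:
    # I(j) = max_s [ s*j - scgf(s) ]
    s_grid = np.linspace(-3.0, 3.0, 601)
    lam = np.array([scgf(s) for s in s_grid])
    for j in (0.2, 0.5, 1.0):
        print(f"I({j}) ~ {np.max(s_grid * j - lam):.4f}")
    ```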

  2. Transport Coefficients from Large Deviation Functions

    NASA Astrophysics Data System (ADS)

    Gao, Chloe; Limmer, David

    2017-10-01

    We describe a method for computing transport coefficients from the direct evaluation of large deviation functions. This method is general, relying only on equilibrium fluctuations, and is statistically efficient, employing trajectory-based importance sampling. Equilibrium fluctuations of molecular currents are characterized by their large deviation functions, which are scaled cumulant generating functions analogous to free energies. A diffusion Monte Carlo algorithm is used to evaluate the large deviation functions, from which arbitrary transport coefficients are derivable. We find significant statistical improvement over traditional Green-Kubo based calculations. The systematic and statistical errors of this method are analyzed in the context of specific transport coefficient calculations, including the shear viscosity, interfacial friction coefficient, and thermal conductivity.
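
    For contrast with the Green-Kubo route mentioned above, a minimal sketch of a Green-Kubo estimate of a self-diffusion coefficient, using synthetic Ornstein-Uhlenbeck velocities as a stand-in for molecular dynamics output (all parameters hypothetical):

    ```python
    import numpy as np

    # Green-Kubo estimate of a self-diffusion coefficient,
    #   D = integral_0^inf <v(0) v(t)> dt,
    # with synthetic Ornstein-Uhlenbeck velocities,
    #   dv = -gamma*v dt + sqrt(2*gamma) dW,
    # standing in for MD output; the exact answer here is D = 1.
    rng = np.random.default_rng(0)
    dt, gamma, n = 1e-2, 1.0, 200_000
    v = np.zeros(n)
    for i in range(1, n):
        v[i] = v[i-1] - gamma * v[i-1] * dt + np.sqrt(2 * gamma * dt) * rng.standard_normal()

    # one-sided velocity autocorrelation up to a cutoff, then trapezoidal rule
    lags = 800
    acf = np.array([np.mean(v[:n-k] * v[k:]) for k in range(lags)])
    D = dt * (0.5 * acf[0] + acf[1:-1].sum() + 0.5 * acf[-1])
    print("Green-Kubo D ~", round(D, 3), "(exact: 1.0)")
    ```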

  3. Importance sampling large deviations in nonequilibrium steady states. I.

    PubMed

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T

    2018-03-28

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.
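
    A minimal sketch of the diffusion Monte Carlo ("cloning") idea discussed above, for a toy biased walker whose SCGF is known exactly; a real application would resample full trajectory configurations and, as the authors note, benefit from guiding functions:

    ```python
    import numpy as np

    # Population-dynamics ("cloning") estimate of the SCGF
    #   psi(s) = lim (1/t) ln <exp(s * X_t)>
    # for a biased +/-1 walker; exact: psi(s) = ln(p*e^s + (1-p)*e^-s).
    rng = np.random.default_rng(1)
    p, s, n_clones, n_steps = 0.6, 0.5, 10_000, 200

    X = np.zeros(n_clones)                     # accumulated current per clone
    log_mean_w = 0.0
    for _ in range(n_steps):
        jumps = np.where(rng.random(n_clones) < p, 1, -1)
        X += jumps
        w = np.exp(s * jumps)                  # exponential bias on the observable
        log_mean_w += np.log(w.mean())
        # cloning step: resample trajectories in proportion to their weights
        idx = rng.choice(n_clones, size=n_clones, p=w / w.sum())
        X = X[idx]

    print("cloning estimate:", log_mean_w / n_steps)
    print("exact SCGF      :", np.log(p * np.exp(s) + (1 - p) * np.exp(-s)))
    ```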

  4. Importance sampling large deviations in nonequilibrium steady states. I

    NASA Astrophysics Data System (ADS)

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T.

    2018-03-01

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.

  5. The Laplace method for probability measures in Banach spaces

    NASA Astrophysics Data System (ADS)

    Piterbarg, V. I.; Fatalov, V. R.

    1995-12-01

    Contents
    §1. Introduction
    Chapter I. Asymptotic analysis of continual integrals in Banach space, depending on a large parameter
    §2. The large deviation principle and logarithmic asymptotics of continual integrals
    §3. Exact asymptotics of Gaussian integrals in Banach spaces: the Laplace method
      3.1. The Laplace method for Gaussian integrals taken over the whole Hilbert space: isolated minimum points ([167], I)
      3.2. The Laplace method for Gaussian integrals in Hilbert space: the manifold of minimum points ([167], II)
      3.3. The Laplace method for Gaussian integrals in Banach space ([90], [174], [176])
      3.4. Exact asymptotics of large deviations of Gaussian norms
    §4. The Laplace method for distributions of sums of independent random elements with values in Banach space
      4.1. The case of a non-degenerate minimum point ([137], I)
      4.2. A degenerate isolated minimum point and the manifold of minimum points ([137], II)
    §5. Further examples
      5.1. The Laplace method for the local time functional of a Markov symmetric process ([217])
      5.2. The Laplace method for diffusion processes, a finite number of non-degenerate minimum points ([116])
      5.3. Asymptotics of large deviations for Brownian motion in the Hölder norm
      5.4. Non-asymptotic expansion of a strong stable law in Hilbert space ([41])
    Chapter II. The double sum method - a version of the Laplace method in the space of continuous functions
    §6. Pickands' method of double sums
      6.1. General situations
      6.2. Asymptotics of the distribution of the maximum of a Gaussian stationary process
      6.3. Asymptotics of the probability of a large excursion of a Gaussian non-stationary process
    §7. Probabilities of large deviations of trajectories of Gaussian fields
      7.1. Homogeneous fields and fields with constant dispersion
      7.2. Finitely many maximum points of dispersion
      7.3. Manifold of maximum points of dispersion
      7.4. Asymptotics of distributions of maxima of Wiener fields
    §8. Exact asymptotics of large deviations of the norm of Gaussian vectors and processes with values in the spaces l_k^p and l^2. Gaussian fields with the set of parameters in Hilbert space
      8.1. Exact asymptotics of the distribution of the l_k^p-norm of a Gaussian finite-dimensional vector with dependent coordinates, p > 1
      8.2. Exact asymptotics of probabilities of high excursions of trajectories of processes of type χ^2
      8.3. Asymptotics of the probabilities of large deviations of Gaussian processes with a set of parameters in Hilbert space ([74])
      8.4. Asymptotics of distributions of maxima of the norms of l^2-valued Gaussian processes
      8.5. Exact asymptotics of large deviations for the l^2-valued Ornstein-Uhlenbeck process
    Bibliography

  6. Large Deviations for Stochastic Models of Two-Dimensional Second Grade Fluids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhai, Jianliang, E-mail: zhaijl@ustc.edu.cn; Zhang, Tusheng, E-mail: Tusheng.Zhang@manchester.ac.uk

    2017-06-15

    In this paper, we establish a large deviation principle for stochastic models of incompressible second grade fluids. The weak convergence method introduced by Budhiraja and Dupuis (Probab Math Statist 20:39–61, 2000) plays an important role.

  7. Entanglement transitions induced by large deviations

    NASA Astrophysics Data System (ADS)

    Bhosale, Udaysinh T.

    2017-12-01

    The probability of large deviations of the smallest Schmidt eigenvalue for random pure states of bipartite systems, denoted as A and B, is computed analytically using a Coulomb gas method. It is shown that this probability, for large N, goes as exp[-βN^{2}Φ(ζ)], where the parameter β is the Dyson index of the ensemble, ζ is the large deviation parameter, while the rate function Φ(ζ) is calculated exactly. The corresponding equilibrium Coulomb charge density is derived for its large deviations. Effects of the large deviations of the extreme (largest and smallest) Schmidt eigenvalues on the bipartite entanglement are studied using the von Neumann entropy. The effect of these deviations is also studied on the entanglement between subsystems 1 and 2, obtained by further partitioning the subsystem A, using the properties of the density matrix's partial transpose ρ_{12}^{Γ}. The density of states of ρ_{12}^{Γ} is found to be close to Wigner's semicircle law with these large deviations. The entanglement properties are captured very well by a simple random matrix model for the partial transpose. The model predicts the entanglement transition across a critical large deviation parameter ζ. Log negativity is used to quantify the entanglement between subsystems 1 and 2. Analytical formulas for it are derived using the simple model. Numerical simulations are in excellent agreement with the analytical results.
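
    A hedged numerical companion to the setting above: Schmidt eigenvalues of Haar-random bipartite pure states can be sampled by SVD of a complex Gaussian matrix, giving empirical tail probabilities for the smallest eigenvalue (dimensions and thresholds below are arbitrary choices, not the paper's):

    ```python
    import numpy as np

    # Schmidt eigenvalues of Haar-random pure states of an N x N bipartite
    # system: draw a complex Gaussian matrix; the squared singular values,
    # divided by their sum, are the Schmidt eigenvalues.
    rng = np.random.default_rng(2)
    N, n_samples = 8, 20_000
    smallest = np.empty(n_samples)
    for k in range(n_samples):
        G = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
        lam = np.linalg.svd(G, compute_uv=False) ** 2
        smallest[k] = lam.min() / lam.sum()

    # empirical tail of the smallest Schmidt eigenvalue (thresholds arbitrary)
    for zeta in (1e-4, 1e-3, 1e-2):
        print(zeta, np.mean(smallest < zeta))
    ```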

  8. Entanglement transitions induced by large deviations.

    PubMed

    Bhosale, Udaysinh T

    2017-12-01

    The probability of large deviations of the smallest Schmidt eigenvalue for random pure states of bipartite systems, denoted as A and B, is computed analytically using a Coulomb gas method. It is shown that this probability, for large N, goes as exp[-βN^{2}Φ(ζ)], where the parameter β is the Dyson index of the ensemble, ζ is the large deviation parameter, while the rate function Φ(ζ) is calculated exactly. Corresponding equilibrium Coulomb charge density is derived for its large deviations. Effects of the large deviations of the extreme (largest and smallest) Schmidt eigenvalues on the bipartite entanglement are studied using the von Neumann entropy. Effect of these deviations is also studied on the entanglement between subsystems 1 and 2, obtained by further partitioning the subsystem A, using the properties of the density matrix's partial transpose ρ_{12}^{Γ}. The density of states of ρ_{12}^{Γ} is found to be close to Wigner's semicircle law with these large deviations. The entanglement properties are captured very well by a simple random matrix model for the partial transpose. The model predicts the entanglement transition across a critical large deviation parameter ζ. Log negativity is used to quantify the entanglement between subsystems 1 and 2. Analytical formulas for it are derived using the simple model. Numerical simulations are in excellent agreement with the analytical results.

  9. A SIMPLE METHOD FOR EVALUATING DATA FROM AN INTERLABORATORY STUDY

    EPA Science Inventory

    Large-scale laboratory- and method-performance studies involving more than about 30 laboratories may be evaluated by calculating the HORRAT ratio for each test sample (HORRAT = [experimentally found among-laboratories relative standard deviation] divided by [relative standard deviat...
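
    The HORRAT ratio defined in this (truncated) summary divides the found among-laboratory relative standard deviation by a predicted one; the prediction is commonly taken from the Horwitz curve, PRSD = 2·C^(-0.1505) with C a mass fraction. A small sketch under that assumption:

    ```python
    def horrat(rsd_found_percent, conc_mass_fraction):
        """HORRAT = (found among-laboratory %RSD) / (predicted %RSD).

        The Horwitz prediction is commonly written PRSD = 2 * C**(-0.1505),
        with C the analyte concentration as a mass fraction (g/g). Values
        near 1 indicate typical interlaboratory precision.
        """
        prsd = 2.0 * conc_mass_fraction ** (-0.1505)
        return rsd_found_percent / prsd

    # e.g. 12% RSD found at 1 ppm (C = 1e-6); the predicted PRSD is about 16%
    print(round(horrat(12.0, 1e-6), 2))
    ```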

  10. LD-SPatt: large deviations statistics for patterns on Markov chains.

    PubMed

    Nuel, G

    2004-01-01

    Statistics on Markov chains are widely used for the study of patterns in biological sequences. Statistics on these models can be computed through several approaches, among which central limit theorem (CLT) approaches producing Gaussian approximations are one of the most popular. Unfortunately, when searching for a pattern of interest, these methods have to deal with tail distribution events for which the CLT approximation is especially poor. In this paper, we propose a new approach based on large deviations theory to assess pattern statistics. We first recall theoretical results for empirical mean (level 1) as well as empirical distribution (level 2) large deviations on Markov chains. Then, we present applications of these results, focusing on numerical issues. LD-SPatt is the name of the GPL software implementing these algorithms. We compare this approach to several existing ones in terms of complexity and reliability, and show that large deviations are more reliable than Gaussian approximations in absolute values as well as in terms of ranking, and are at least as reliable as compound Poisson approximations. We finally discuss some further possible improvements and applications of this new method.
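
    A toy illustration of the central claim, that Gaussian (CLT) approximations fail in the tail while a large deviation estimate captures the right exponential scale, here for i.i.d. Bernoulli trials rather than the paper's Markov-chain setting:

    ```python
    from math import ceil, comb, erf, exp, log, sqrt

    # Tail P(mean of n Bernoulli(p) samples >= a): exact, Gaussian (CLT),
    # and the large-deviation (Chernoff/Cramer) exponential exp(-n*I(a)).
    p, n, a = 0.3, 200, 0.45
    k0 = ceil(n * a)

    exact = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k0, n + 1))

    z = (a - p) * sqrt(n / (p * (1 - p)))          # CLT tail estimate
    clt = 0.5 * (1 - erf(z / sqrt(2)))

    I = a * log(a / p) + (1 - a) * log((1 - a) / (1 - p))  # Cramer rate function
    print(f"exact {exact:.2e}   CLT {clt:.2e}   exp(-n*I) {exp(-n * I):.2e}")
    ```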

  11. Evaluation of True Power Luminous Efficiency from Experimental Luminance Values

    NASA Astrophysics Data System (ADS)

    Tsutsui, Tetsuo; Yamamato, Kounosuke

    1999-05-01

    A method for obtaining the true external power luminous efficiency from experimentally obtained luminance in organic light-emitting diodes (LEDs) was demonstrated. Conventional two-layer organic LEDs with different electron-transport layer thicknesses were prepared. Spatial distributions of emission intensities were observed. Large deviations in both emission spectra and spatial emission patterns were observed when the electron-transport layer thickness was varied. The deviation of emission patterns from the standard Lambertian pattern was found to cause overestimations of power luminous efficiencies as large as 30%. A method for evaluating correction factors was proposed.
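
    A sketch of why non-Lambertian emission causes the overestimation described above: the externally emitted power requires the full angular integral of the intensity, while the Lambertian assumption extrapolates from the normal direction alone (the "narrower" pattern below is hypothetical):

    ```python
    import numpy as np

    # Forward-emitted power from an angular intensity profile I(theta):
    #   P = 2*pi * integral_0^{pi/2} I(theta)*sin(theta) d(theta).
    # Assuming a Lambertian pattern (I0*cos(theta)) from the normal-direction
    # luminance overestimates efficiency when the real pattern is narrower.
    theta = np.linspace(0.0, np.pi / 2, 500)

    def forward_power(I):
        f = I * np.sin(theta)
        return 2 * np.pi * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(theta))

    lambertian = np.cos(theta)
    narrowed = np.cos(theta) ** 2        # hypothetical narrower emission pattern
    ratio = forward_power(narrowed) / forward_power(lambertian)
    print("true/Lambertian power ratio:", round(ratio, 3))   # 2/3 here
    ```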

  12. Generic dynamical phase transition in one-dimensional bulk-driven lattice gases with exclusion

    NASA Astrophysics Data System (ADS)

    Lazarescu, Alexandre

    2017-06-01

    Dynamical phase transitions are crucial features of the fluctuations of statistical systems, corresponding to boundaries between qualitatively different mechanisms of maintaining unlikely values of dynamical observables over long periods of time. They manifest themselves in the form of non-analyticities in the large deviation function of those observables. In this paper, we look at bulk-driven exclusion processes with open boundaries. It is known that the standard asymmetric simple exclusion process exhibits a dynamical phase transition in the large deviations of the current of particles flowing through it. That phase transition has been described thanks to specific calculation methods relying on the model being exactly solvable, but more general methods have also been used to describe the extreme large deviations of that current, far from the phase transition. We extend those methods to a large class of models based on the ASEP, where we add arbitrary spatial inhomogeneities in the rates and short-range potentials between the particles. We show that, as for the regular ASEP, the large deviation function of the current scales differently with the size of the system if one considers very high or very low currents, pointing to the existence of a dynamical phase transition between those two regimes: high current large deviations are extensive in the system size, and the typical states associated to them are Coulomb gases, which are highly correlated; low current large deviations do not depend on the system size, and the typical states associated to them are anti-shocks, consistently with a hydrodynamic behaviour. Finally, we illustrate our results numerically on a simple example, and we interpret the transition in terms of the current pushing beyond its maximal hydrodynamic value, as well as relate it to the appearance of Tracy-Widom distributions in the relaxation statistics of such models.

  13. Evaluation of bacterial motility from non-Gaussianity of finite-sample trajectories using the large deviation principle

    NASA Astrophysics Data System (ADS)

    Hanasaki, Itsuo; Kawano, Satoyuki

    2013-11-01

    Motility of bacteria is usually recognized in the trajectory data and compared with Brownian motion, but the diffusion coefficient is insufficient to evaluate it. In this paper, we propose a method based on the large deviation principle. We show that it can be used to evaluate the non-Gaussian characteristics of model Escherichia coli motions and to distinguish combinations of the mean running duration and running speed that lead to the same diffusion coefficient. Our proposed method does not require chemical stimuli to induce the chemotaxis in a specific direction, and it is applicable to various types of self-propelling motions for which no a priori information of, for example, threshold parameters for run and tumble or head/tail direction is available. We also address the issue of the finite-sample effect on the large deviation quantities, but we propose to make use of it to characterize the nature of motility.
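
    A minimal sketch in the spirit of this method: estimate the scaled cumulant generating function of displacements from a finite sample and compare it with the parabola a Gaussian process would give; the mismatch signals non-Gaussian motility (the step distributions below are synthetic stand-ins):

    ```python
    import numpy as np

    # Estimate the scaled cumulant generating function of displacements,
    #   lambda(k) = (1/dt) * ln <exp(k * dX)>,
    # from a finite sample and compare with the Gaussian parabola
    #   k*mean + (1/2)*k^2*var; a mismatch signals non-Gaussianity.
    rng = np.random.default_rng(3)
    dt, n = 1.0, 50_000
    gauss = rng.normal(0.0, 1.0, n)                   # Brownian-like steps
    runs = rng.choice([-2.0, 2.0], n) + 0.2 * rng.normal(size=n)  # toy run/tumble

    k = np.linspace(-1.0, 1.0, 5)
    for name, dx in (("gaussian", gauss), ("run-like", runs)):
        lam = np.log(np.mean(np.exp(np.outer(k, dx)), axis=1)) / dt
        parabola = k * dx.mean() + 0.5 * k**2 * dx.var()
        print(name, "max |lambda - parabola| =", np.abs(lam - parabola).max())
    ```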

  14. Advancing Underwater Acoustic Communication for Autonomous Distributed Networks via Sparse Channel Sensing, Coding, and Navigation Support

    DTIC Science & Technology

    2012-09-30

    Estimation Methods for Underwater OFDM; 5) Two Iterative Receivers for Distributed MIMO-OFDM with Large Doppler Deviations; 6) Asynchronous Multiuser... multi-input multi-output (MIMO) OFDM is also pursued, where it is shown that the proposed hybrid initialization enables drastically improved receiver... are investigated. 5) Two Iterative Receivers for Distributed MIMO-OFDM with Large Doppler Deviations. This work studies a distributed system with

  15. Numerical Large Deviation Analysis of the Eigenstate Thermalization Hypothesis

    NASA Astrophysics Data System (ADS)

    Yoshizawa, Toru; Iyoda, Eiki; Sagawa, Takahiro

    2018-05-01

    A plausible mechanism of thermalization in isolated quantum systems is based on the strong version of the eigenstate thermalization hypothesis (ETH), which states that all the energy eigenstates in the microcanonical energy shell have thermal properties. We numerically investigate the ETH by focusing on the large deviation property, which directly evaluates the ratio of athermal energy eigenstates in the energy shell. As a consequence, we have systematically confirmed that the strong ETH indeed holds even for near-integrable systems. Furthermore, we found that the finite-size scaling of the ratio of athermal eigenstates is a double exponential for nonintegrable systems. Our result illuminates the universal behavior of quantum chaos, and suggests that large deviation analysis can serve as a powerful method to investigate thermalization in the presence of large finite-size effects.

  16. Approaching sub-50 nanoradian measurements by reducing the saw-tooth deviation of the autocollimator in the Nano-Optic-Measuring Machine

    NASA Astrophysics Data System (ADS)

    Qian, Shinan; Geckeler, Ralf D.; Just, Andreas; Idir, Mourad; Wu, Xuehui

    2015-06-01

    Since the development of the Nano-Optic-Measuring Machine (NOM), the accuracy of measuring the profile of an optical surface has been enhanced to the 100-nrad rms level or better. However, to improve the accuracy of the NOM system to sub-50 nrad rms, the large saw-tooth deviation (269 nrad rms) of an existing electronic autocollimator, the Elcomat 3000/8, must be resolved. We carried out simulations to assess the saw-tooth-like deviation. We developed a method for setting readings to reduce the deviation to sub-50 nrad rms, suitable for testing plane mirrors. With this method, we found that all tests conducted in a slowly rising section of the saw-tooth show a small deviation of 28.8 to less than 40 nrad rms. We also developed a dense-measurement method and an integer-period method to lower the saw-tooth deviation during tests of spherical mirrors. Further research is necessary to formulate a precise test for a spherical mirror. We present a series of test results from our experiments that verify the value of the improvements we made.

  17. Finite-time and finite-size scalings in the evaluation of large-deviation functions: Numerical approach in continuous time.

    PubMed

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    2017-06-01

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated to the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown for the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.
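
    A hedged sketch of the extrapolation idea: fit estimator values measured at several times T and population sizes N to an assumed scaling form with 1/T and 1/N corrections and read off the joint limit (the paper's actual scaling forms should be used in practice):

    ```python
    import numpy as np

    # Joint extrapolation of an LDF estimator psi(T, N) measured at several
    # simulation times T and population sizes N, assuming leading corrections
    #   psi(T, N) ~ psi_inf + a/T + b/N.
    rng = np.random.default_rng(4)
    Ts = np.array([100.0, 200.0, 400.0, 800.0])
    Ns = np.array([50.0, 100.0, 200.0, 400.0])
    psi_inf, a, b = -0.25, 3.0, 5.0            # synthetic "measurements"

    T, N = np.meshgrid(Ts, Ns)
    psi = psi_inf + a / T + b / N + 1e-4 * rng.standard_normal(T.shape)

    # least-squares fit of the coefficients of [1, 1/T, 1/N]
    A = np.column_stack([np.ones(T.size), 1 / T.ravel(), 1 / N.ravel()])
    coef, *_ = np.linalg.lstsq(A, psi.ravel(), rcond=None)
    print("extrapolated psi_inf:", round(coef[0], 4), "(true:", psi_inf, ")")
    ```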

  18. Probability evolution method for exit location distribution

    NASA Astrophysics Data System (ADS)

    Zhu, Jinjie; Chen, Zhen; Liu, Xianbin

    2018-03-01

    The exit problem in the framework of large deviation theory has been a hot topic in the past few decades. The most probable escape path in the weak-noise limit has been clarified by the Freidlin-Wentzell action functional. However, noise in real physical systems cannot be arbitrarily small, and noise of finite strength may induce nontrivial phenomena such as noise-induced shift and noise-induced saddle-point avoidance. Traditional Monte Carlo simulation of noise-induced escape takes exponentially long times as the noise approaches zero, and the majority of that time is wasted on uninteresting wandering around the attractors. In this paper, a new method is proposed to decrease the escape simulation time by an exponentially large factor by introducing a series of interfaces and applying reinjection on them. This method can be used to calculate the exit location distribution. It is verified on two classical examples and compared with theoretical predictions. The results show that the method performs well for weak noise, while it may induce certain deviations for large noise. Finally, some possible ways to improve the method are discussed.

  19. Large deviations of a long-time average in the Ehrenfest urn model

    NASA Astrophysics Data System (ADS)

    Meerson, Baruch; Zilber, Pini

    2018-05-01

    Since its inception in 1907, the Ehrenfest urn model (EUM) has served as a test bed of key concepts of statistical mechanics. Here we employ this model to study large deviations of a time-additive quantity. We consider two continuous-time versions of the EUM with K urns and N balls: with and without interactions between the balls in the same urn. We evaluate the probability distribution that the average number of balls in one urn over time T takes any specified value aN, where 0 ≤ a ≤ 1. For a long observation time, T → ∞, a Donsker–Varadhan large deviation principle holds: the probability decays as exp[-T I(a, …)], where the dots denote additional parameters of the model. We calculate the rate function I exactly by two different methods due to Donsker and Varadhan and compare the exact results with those obtained with a variant of the WKB approximation (after Wentzel, Kramers and Brillouin). In the absence of interactions the WKB prediction for I is exact for any N. In the presence of interactions the WKB method gives asymptotically exact results for N ≫ 1. The WKB method also uncovers the (very simple) time history of the system which dominates the contributions of different time histories to the probability distribution.
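
    A minimal Gillespie-type simulation of the non-interacting K = 2 version described above: each of the N balls jumps at unit rate, and the time-averaged occupancy of one urn concentrates around 1/2, with fluctuations suppressed as T grows (parameters hypothetical):

    ```python
    import numpy as np

    # Continuous-time Ehrenfest urn, K = 2, no interactions: each of N balls
    # jumps to the other urn at unit rate, so the total jump rate is always N
    # and the next jump leaves urn 1 with probability n/N. We sample the time
    # average of the urn-1 occupancy fraction over a window T.
    rng = np.random.default_rng(5)
    N, T = 20, 100.0

    def time_average():
        t, n, integral = 0.0, N // 2, 0.0
        while t < T:
            dt = rng.exponential(1.0 / N)
            integral += n * min(dt, T - t)
            t += dt
            if rng.random() < n / N:
                n -= 1                 # a ball leaves urn 1 ...
            else:
                n += 1                 # ... or enters it
        return integral / (T * N)

    samples = np.array([time_average() for _ in range(300)])
    print("mean:", samples.mean().round(3), " std:", samples.std().round(4))
    ```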

  20. A Modified Differential Coherent Bit Synchronization Algorithm for BeiDou Weak Signals with Large Frequency Deviation.

    PubMed

    Han, Zhifeng; Liu, Jianye; Li, Rongbing; Zeng, Qinghua; Wang, Yi

    2017-07-04

    BeiDou system navigation messages are modulated with a secondary NH (Neumann-Hoffman) code of 1 kbps, whose frequent bit transitions limit the coherent integration time to 1 millisecond. Therefore, a bit synchronization algorithm is necessary to obtain bit edges and NH code phases. In order to realize bit synchronization for BeiDou weak signals with large frequency deviation, a bit synchronization algorithm based on differential coherence and maximum likelihood is proposed. First, a differential coherent approach is used to remove the effect of frequency deviation, with the differential delay time set to a multiple of the bit cycle to remove the influence of the NH code. Second, maximum likelihood detection is used to improve the detection probability of weak signals. Finally, Monte Carlo simulations are conducted to analyze the detection performance of the proposed algorithm compared with a traditional algorithm at C/N0 values of 20-40 dB-Hz and various frequency deviations. The results show that the proposed algorithm outperforms the traditional method at a frequency deviation of 50 Hz. The algorithm removes the effect of the BeiDou NH code effectively and weakens the influence of frequency deviation. To confirm the feasibility of the proposed algorithm, real data tests were conducted. The proposed algorithm is suitable for BeiDou weak-signal bit synchronization with large frequency deviation.

  1. A framework for the direct evaluation of large deviations in non-Markovian processes

    NASA Astrophysics Data System (ADS)

    Cavallaro, Massimo; Harris, Rosemary J.

    2016-11-01

    We propose a general framework to simulate stochastic trajectories with arbitrarily long memory dependence and efficiently evaluate large deviation functions associated to time-extensive observables. This extends the ‘cloning’ procedure of Giardiná et al (2006 Phys. Rev. Lett. 96 120603) to non-Markovian systems. We demonstrate the validity of this method by testing non-Markovian variants of an ion-channel model and the totally asymmetric exclusion process, recovering results obtainable by other means.

  2. A Large Deviations Analysis of Certain Qualitative Properties of Parallel Tempering and Infinite Swapping Algorithms

    DOE PAGES

    Doll, J.; Dupuis, P.; Nyquist, P.

    2017-02-08

    Parallel tempering, or replica exchange, is a popular method for simulating complex systems. The idea is to run parallel simulations at different temperatures, and at a given swap rate exchange configurations between the parallel simulations. From the perspective of large deviations it is optimal to let the swap rate tend to infinity and it is possible to construct a corresponding simulation scheme, known as infinite swapping. In this paper we propose a novel use of large deviations for empirical measures for a more detailed analysis of the infinite swapping limit in the setting of continuous time jump Markov processes. Using the large deviations rate function and associated stochastic control problems we consider a diagnostic based on temperature assignments, which can be easily computed during a simulation. We show that the convergence of this diagnostic to its a priori known limit is a necessary condition for the convergence of infinite swapping. The rate function is also used to investigate the impact of asymmetries in the underlying potential landscape, and where in the state space poor sampling is most likely to occur.

  3. Large Deviations and Transitions Between Equilibria for Stochastic Landau-Lifshitz-Gilbert Equation

    NASA Astrophysics Data System (ADS)

    Brzeźniak, Zdzisław; Goldys, Ben; Jegaraj, Terence

    2017-11-01

    We study a stochastic Landau-Lifshitz equation on a bounded interval and with finite-dimensional noise. We first show that there exists a pathwise unique solution to this equation and that this solution enjoys the maximal regularity property. Next, we prove the large deviations principle for the small-noise asymptotics of solutions using the weak convergence method. An essential ingredient of the proof is the compactness, or weak-to-strong continuity, of the solution map for a deterministic Landau-Lifshitz equation when considered as a transformation of external fields. We then apply this large deviations principle to show that small noise can cause magnetisation reversal. We also show the importance of the shape anisotropy parameter for reducing the disturbance of the solution caused by small noise. The problem is motivated by applications from ferromagnetic nanowires to the fabrication of magnetic memories.

  4. Large-visual-angle microstructure inspired from quantitative design of Morpho butterflies' lamellae deviation using the FDTD/PSO method.

    PubMed

    Wang, Wanlin; Zhang, Wang; Chen, Weixin; Gu, Jiajun; Liu, Qinglei; Deng, Tao; Zhang, Di

    2013-01-15

    The wide angular range of the treelike structure in Morpho butterfly scales was investigated by finite-difference time-domain (FDTD)/particle-swarm-optimization (PSO) analysis. Using the FDTD method, different parameters of the Morpho butterflies' treelike structure were studied and their contributions to the angular dependence were analyzed. A wide angular range was then realized with the PSO method by quantitatively designing the lamellae deviation (Δy), a crucial parameter for the angular range. The field map of the wide-range reflection over a large area is given to confirm the wide angular range. The tristimulus values and corresponding color coordinates for various viewing directions were calculated to confirm the blue color at different observation angles. The wide angular range realized by the FDTD/PSO method will assist in understanding the scientific principles involved and in designing artificial optical materials.

  5. Method of surface error visualization using laser 3D projection technology

    NASA Astrophysics Data System (ADS)

    Guo, Lili; Li, Lijuan; Lin, Xuezhu

    2017-10-01

    In the manufacture of large components for the aerospace, automobile and shipping industries, important molds or stamped metal plates require precise surface forming, which usually needs to be verified; if necessary, the surface must be corrected and reprocessed. To make correction of the machined surface more convenient, this paper proposes a method based on a laser 3D projection system that uses terrain-style contour lines to show directly, on the measured surface, the deviation between the measured data and the theoretical mathematical model (CAD). First, the machined surface is measured to obtain point cloud data and form a triangular mesh. Second, through coordinate transformation, the point cloud data are registered to the theoretical model and the three-dimensional deviation is calculated; according to the sign (positive or negative) and magnitude of the deviation, color deviation bands denote the three-dimensional deviation. Then, three-dimensional contour lines are drawn to represent each deviation band, creating the projection files. Finally, the projection files are imported into the laser projector, and the contour lines are projected onto the machined surface at 1:1 scale in the form of a laser beam; by comparing the full-color 3D deviation map with the projected graph, one can locate and make quantitative corrections to meet the processing precision requirements. The method displays the trend of machined-surface deviation clearly.

  6. Characterizing Accuracy and Precision of Glucose Sensors and Meters

    PubMed Central

    2014-01-01

    There is need for a method to describe precision and accuracy of glucose measurement as a smooth continuous function of glucose level rather than as a step function for a few discrete ranges of glucose. We propose and illustrate a method to generate a “Glucose Precision Profile” showing absolute relative deviation (ARD) and/or %CV versus glucose level to better characterize measurement errors at any glucose level. We examine the relationship between glucose measured by test and comparator methods using linear regression. We examine bias by plotting deviation = (test - comparator method) versus glucose level. We compute the deviation, absolute deviation (AD), ARD, and standard deviation (SD) for each data pair. We utilize curve smoothing procedures to minimize the effects of random sampling variability and to facilitate identification and display of the underlying relationships between ARD or %CV and glucose level. AD, ARD, SD, and %CV display smooth continuous relationships versus glucose level. Estimates of MARD and %CV are subject to relatively large errors in the hypoglycemic range, due in part to a markedly nonlinear relationship with glucose level and in part to the limited number of observations in the hypoglycemic range. The curvilinear relationships of ARD and %CV versus glucose level are helpful when characterizing and comparing the precision and accuracy of glucose sensors and meters. PMID:25037194
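
    A small sketch of the quantities defined above: ARD per paired reading, MARD, and a binned stand-in for the smoothed precision profile (synthetic data; the paper's smoothing procedure is more refined):

    ```python
    import numpy as np

    # ARD per paired reading, MARD, and a binned stand-in for the smoothed
    # "Glucose Precision Profile" (synthetic data; names hypothetical).
    def precision_profile(test, comparator, n_bins=10):
        test, comparator = np.asarray(test, float), np.asarray(comparator, float)
        ard = 100.0 * np.abs(test - comparator) / comparator
        edges = np.linspace(comparator.min(), comparator.max(), n_bins + 1)
        centers = 0.5 * (edges[:-1] + edges[1:])
        profile = np.full(n_bins, np.nan)
        for i in range(n_bins):
            mask = (comparator >= edges[i]) & (comparator < edges[i + 1])
            if mask.any():
                profile[i] = ard[mask].mean()
        return centers, profile, ard.mean()      # last value is the MARD

    rng = np.random.default_rng(6)
    ref = rng.uniform(40.0, 400.0, 500)          # comparator glucose, mg/dL
    meter = ref * (1 + rng.normal(0, 0.07, ref.size)) + rng.normal(0, 6, ref.size)
    centers, profile, mard = precision_profile(meter, ref)
    print("MARD %:", round(mard, 2))             # ARD is largest at low glucose
    ```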

  7. Rare behavior of growth processes via umbrella sampling of trajectories

    NASA Astrophysics Data System (ADS)

    Klymko, Katherine; Geissler, Phillip L.; Garrahan, Juan P.; Whitelam, Stephen

    2018-03-01

    We compute probability distributions of trajectory observables for reversible and irreversible growth processes. These results reveal a correspondence between reversible and irreversible processes, at particular points in parameter space, in terms of their typical and atypical trajectories. Thus key features of growth processes can be insensitive to the precise form of the rate constants used to generate them, recalling the insensitivity to microscopic details of certain equilibrium behavior. We obtained these results using a sampling method, inspired by the "s-ensemble" large-deviation formalism, that amounts to umbrella sampling in trajectory space. The method is a simple variant of existing approaches, and applies to ensembles of trajectories controlled by the total number of events. It can be used to determine large-deviation rate functions for trajectory observables in or out of equilibrium.

  8. Quenched Large Deviations for Simple Random Walks on Percolation Clusters Including Long-Range Correlations

    NASA Astrophysics Data System (ADS)

    Berger, Noam; Mukherjee, Chiranjib; Okamura, Kazuki

    2018-03-01

    We prove a quenched large deviation principle (LDP) for a simple random walk on a supercritical percolation cluster (SRWPC) on Z^d (d ≥ 2). The models under interest include classical Bernoulli bond and site percolation as well as models that exhibit long range correlations, like the random cluster model, the random interlacement and the vacant set of random interlacements (for d ≥ 3) and the level sets of the Gaussian free field (d ≥ 3). Inspired by the methods developed by Kosygina et al. (Commun Pure Appl Math 59:1489-1521, 2006) for proving quenched LDP for elliptic diffusions with a random drift, and by Yilmaz (Commun Pure Appl Math 62(8):1033-1075, 2009) and Rosenbluth (Quenched large deviations for multidimensional random walks in a random environment: a variational formula. Ph.D. thesis, NYU, arXiv:0804.1444v1) for similar results regarding elliptic random walks in random environment, we take the point of view of the moving particle and prove a large deviation principle for the quenched distribution of the pair empirical measures of the environment Markov chain in the non-elliptic case of SRWPC. Via a contraction principle, this reduces easily to a quenched LDP for the distribution of the mean velocity of the random walk, and both rate functions admit explicit variational formulas. The main difficulty in our setup lies in the inherent non-ellipticity as well as the lack of translation-invariance stemming from conditioning on the fact that the origin belongs to the infinite cluster. We develop a unifying approach for proving quenched large deviations for SRWPC based on exploiting coercivity properties of the relative entropies in the context of convex variational analysis, combined with input from ergodic theory and invoking geometric properties of the supercritical percolation cluster.

  9. Quenched Large Deviations for Simple Random Walks on Percolation Clusters Including Long-Range Correlations

    NASA Astrophysics Data System (ADS)

    Berger, Noam; Mukherjee, Chiranjib; Okamura, Kazuki

    2017-12-01

    We prove a quenched large deviation principle (LDP) for a simple random walk on a supercritical percolation cluster (SRWPC) on Z^d (d ≥ 2). The models under interest include classical Bernoulli bond and site percolation as well as models that exhibit long range correlations, like the random cluster model, the random interlacement and the vacant set of random interlacements (for d ≥ 3) and the level sets of the Gaussian free field (d ≥ 3). Inspired by the methods developed by Kosygina et al. (Commun Pure Appl Math 59:1489-1521, 2006) for proving quenched LDP for elliptic diffusions with a random drift, and by Yilmaz (Commun Pure Appl Math 62(8):1033-1075, 2009) and Rosenbluth (Quenched large deviations for multidimensional random walks in a random environment: a variational formula. Ph.D. thesis, NYU, arXiv:0804.1444v1) for similar results regarding elliptic random walks in random environment, we take the point of view of the moving particle and prove a large deviation principle for the quenched distribution of the pair empirical measures of the environment Markov chain in the non-elliptic case of SRWPC. Via a contraction principle, this reduces easily to a quenched LDP for the distribution of the mean velocity of the random walk, and both rate functions admit explicit variational formulas. The main difficulty in our setup lies in the inherent non-ellipticity as well as the lack of translation-invariance stemming from conditioning on the fact that the origin belongs to the infinite cluster. We develop a unifying approach for proving quenched large deviations for SRWPC based on exploiting coercivity properties of the relative entropies in the context of convex variational analysis, combined with input from ergodic theory and invoking geometric properties of the supercritical percolation cluster.

  10. Comparison of 13 equations for determining evapotranspiration from a prairie wetland, Cottonwood Lake Area, North Dakota, USA

    USGS Publications Warehouse

    Rosenberry, Donald O.; Stannard, David L.; Winter, Thomas C.; Martinez, Margo L.

    2004-01-01

    Evapotranspiration determined using the energy-budget method at a semi-permanent prairie-pothole wetland in east-central North Dakota, USA was compared with 12 other commonly used methods. The Priestley-Taylor and deBruin-Keijman methods compared best with the energy-budget values; mean differences were less than 0.1 mm d−1, and standard deviations were less than 0.3 mm d−1. Both methods require measurement of air temperature, net radiation, and heat storage in the wetland water. The Penman, Jensen-Haise, and Brutsaert-Stricker methods provided the next-best values for evapotranspiration relative to the energy-budget method. The mass-transfer, deBruin, and Stephens-Stewart methods provided the worst comparisons; the mass-transfer and deBruin comparisons with energy-budget values indicated a large standard deviation, and the deBruin and Stephens-Stewart comparisons indicated a large bias. The Jensen-Haise method proved to be cost effective, providing relatively accurate comparisons with the energy-budget method (mean difference=0.44 mm d−1, standard deviation=0.42 mm d−1) and requiring only measurements of air temperature and solar radiation. The Mather (Thornthwaite) method is the simplest, requiring only measurement of air temperature, and it provided values that compared relatively well with energy-budget values (mean difference=0.47 mm d−1, standard deviation=0.56 mm d−1). Modifications were made to several of the methods to make them more suitable for use in prairie wetlands. The modified Makkink, Jensen-Haise, and Stephens-Stewart methods all provided results that were nearly as close to energy-budget values as were the Priestley-Taylor and deBruin-Keijman methods, and all three of these modified methods only require measurements of air temperature and solar radiation. The modified Hamon method provided values that were within 20 percent of energy-budget values during 95 percent of the comparison periods, and it only requires measurement of air temperature. The mass-transfer coefficient, associated with the commonly used mass-transfer method, varied seasonally, with the largest values occurring during summer.
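
    For reference, a sketch of the Priestley-Taylor estimate named among the best performers above, with the standard coefficient α = 1.26 and a Tetens-type slope of the saturation vapour-pressure curve (the input values are hypothetical):

    ```python
    import numpy as np

    # Priestley-Taylor evapotranspiration:
    #   lambda_v * ET = alpha * (Delta / (Delta + gamma)) * (Rn - G)
    # Delta: slope of the saturation vapour-pressure curve at air temperature
    # (Tetens form), gamma: psychrometric constant (kPa/degC), Rn: net
    # radiation, G: heat storage change (both MJ m^-2 d^-1); alpha = 1.26.
    def priestley_taylor_mm_per_day(t_air_c, rn, g, alpha=1.26, gamma=0.066):
        es = 0.6108 * np.exp(17.27 * t_air_c / (t_air_c + 237.3))   # kPa
        delta = 4098.0 * es / (t_air_c + 237.3) ** 2                # kPa/degC
        lambda_v = 2.45                 # latent heat of vaporization, MJ/kg
        return alpha * (delta / (delta + gamma)) * (rn - g) / lambda_v

    # hypothetical summer day: 20 degC, Rn = 12, G = 1 MJ m^-2 d^-1
    print(round(priestley_taylor_mm_per_day(20.0, 12.0, 1.0), 2), "mm/day")
    ```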

  11. Design and Development of Lateral Flight Director

    NASA Technical Reports Server (NTRS)

    Kudlinski, Kim E.; Ragsdale, William A.

    1999-01-01

    The control law currently used for the flight director in the Boeing 737 simulator is inadequate when large localizer deviations occur near the middle marker. Eight different control laws are investigated. A heuristic method is used to design control laws that meet specific performance criteria. The design of each is described in detail. Several tests were performed and compared with the current control law for the flight director. The goal was to design a flight-director control law that can be used with large localizer deviations near the middle marker, which could be caused by winds or wake turbulence, without increasing its level of complexity.

  12. Density Large Deviations for Multidimensional Stochastic Hyperbolic Conservation Laws

    NASA Astrophysics Data System (ADS)

    Barré, J.; Bernardin, C.; Chetrite, R.

    2018-02-01

    We investigate the density large deviation function for a multidimensional conservation law in the vanishing viscosity limit, when the probability concentrates on weak solutions of a hyperbolic conservation law. When the mobility and diffusivity matrices are proportional, i.e. an Einstein-like relation is satisfied, the problem has been solved in Bellettini and Mariani (Bull Greek Math Soc 57:31-45, 2010). When this proportionality does not hold, we compute explicitly the large deviation function for a step-like density profile, and we show that the associated optimal current has a nontrivial structure. We also derive a lower bound for the large deviation function, valid for a more general weak solution, and leave the general large deviation function upper bound as a conjecture.

  13. A Priori Subgrid Analysis of Temporal Mixing Layers with Evaporating Droplets

    NASA Technical Reports Server (NTRS)

    Okongo, Nora; Bellan, Josette

    1999-01-01

    Subgrid analysis of a transitional temporal mixing layer with evaporating droplets has been performed using three sets of results from a Direct Numerical Simulation (DNS) database, with Reynolds numbers (based on initial vorticity thickness) as large as 600 and with droplet mass loadings as large as 0.5. In the DNS, the gas phase is computed using an Eulerian formulation, with Lagrangian droplet tracking. The Large Eddy Simulation (LES) equations corresponding to the DNS are first derived, and key assumptions made in deriving them are then confirmed by computing the relevant terms using the DNS database. Since LES of this flow requires the computation of unfiltered gas-phase variables at droplet locations from filtered gas-phase variables at the grid points, it is proposed to model these by assuming the gas-phase variables to be the sum of the filtered variables and a correction based on the filtered standard deviation; this correction is then computed from the Subgrid Scale (SGS) standard deviation. This model predicts the unfiltered variables at droplet locations considerably better than simply interpolating the filtered variables. Three methods are investigated for modeling the SGS standard deviation: the Smagorinsky approach, the Gradient model, and the Scale-Similarity formulation. When the proportionality constant inherent in the SGS models is properly calculated, the Gradient and Scale-Similarity methods give results in excellent agreement with the DNS.

  14. From the Law of Large Numbers to Large Deviation Theory in Statistical Physics: An Introduction

    NASA Astrophysics Data System (ADS)

    Cecconi, Fabio; Cencini, Massimo; Puglisi, Andrea; Vergni, Davide; Vulpiani, Angelo

    This contribution aims at introducing the topics of this book. We start with a brief historical excursion on the developments from the law of large numbers to the central limit theorem and large deviations theory. The same topics are then presented using the language of probability theory. Finally, some applications of large deviations theory in physics are briefly discussed through examples taken from statistical mechanics, dynamical and disordered systems.

  15. A method for age-matched OCT angiography deviation mapping in the assessment of disease- related changes to the radial peripapillary capillaries.

    PubMed

    Pinhas, Alexander; Linderman, Rachel; Mo, Shelley; Krawitz, Brian D; Geyman, Lawrence S; Carroll, Joseph; Rosen, Richard B; Chui, Toco Y

    2018-01-01

    To present a method for age-matched deviation mapping in the assessment of disease-related changes to the radial peripapillary capillaries (RPCs). We reviewed 4.5x4.5mm en face peripapillary OCT-A scans of 133 healthy control eyes (133 subjects, mean 41.5 yrs, range 11-82 yrs) and 4 eyes with distinct retinal pathologies, obtained using spectral-domain optical coherence tomography angiography. Statistical analysis was performed to evaluate the impact of age on RPC perfusion densities. RPC density group mean and standard deviation maps were generated for each decade of life. Deviation maps were created for the diseased eyes based on these maps. Large peripapillary vessel (LPV; noncapillary vessel) perfusion density was also studied for impact of age. Average healthy RPC density was 42.5±1.47%. ANOVA and pairwise Tukey-Kramer tests showed that RPC density in the ≥60yr group was significantly lower compared to RPC density in all younger decades of life (p<0.01). Average healthy LPV density was 21.5±3.07%. Linear regression models indicated that LPV density decreased with age, however ANOVA and pairwise Tukey-Kramer tests did not reach statistical significance. Deviation mapping enabled us to quantitatively and visually elucidate the significance of RPC density changes in disease. It is important to consider changes that occur with aging when analyzing RPC and LPV density changes in disease. RPC density, coupled with age-matched deviation mapping techniques, represents a potentially clinically useful method in detecting changes to peripapillary perfusion in disease.
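
    A minimal sketch of age-matched deviation mapping as described: z-score a patient map against the normative mean and SD maps for the matching decade and flag pixels beyond a threshold (maps, threshold and the simulated dropout below are hypothetical; the normative values echo the healthy RPC density reported above):

    ```python
    import numpy as np

    # Age-matched deviation mapping: z-score a patient's perfusion-density
    # map against normative mean/SD maps for the matching decade; flag pixels
    # more than 2 SD below the mean. Normative values echo the healthy RPC
    # density reported above (42.5 +/- 1.47%); the focal loss is simulated.
    def deviation_map(patient, norm_mean, norm_sd, z_thresh=-2.0):
        z = (patient - norm_mean) / norm_sd
        return z, z < z_thresh

    rng = np.random.default_rng(7)
    shape = (64, 64)
    norm_mean = np.full(shape, 42.5)
    norm_sd = np.full(shape, 1.47)
    patient = norm_mean + rng.normal(0.0, 1.47, shape)
    patient[20:30, 20:30] -= 8.0          # simulated focal capillary loss
    z, abnormal = deviation_map(patient, norm_mean, norm_sd)
    print("abnormal pixel fraction:", round(abnormal.mean(), 4))
    ```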

  16. Large deviations in the presence of cooperativity and slow dynamics

    NASA Astrophysics Data System (ADS)

    Whitelam, Stephen

    2018-06-01

    We study simple models of intermittency, involving switching between two states, within the dynamical large-deviation formalism. Singularities appear in the formalism when switching is cooperative or when its basic time scale diverges. In the first case the unbiased trajectory distribution undergoes a symmetry breaking, leading to a change in shape of the large-deviation rate function for a particular dynamical observable. In the second case the symmetry of the unbiased trajectory distribution remains unbroken. Comparison of these models suggests that singularities of the dynamical large-deviation formalism can signal the dynamical equivalent of an equilibrium phase transition but do not necessarily do so.

  17. Efficiency and large deviations in time-asymmetric stochastic heat engines

    DOE PAGES

    Gingrich, Todd R.; Rotskoff, Grant M.; Vaikuntanathan, Suriyanarayanan; ...

    2014-10-24

    In a stochastic heat engine driven by a cyclic non-equilibrium protocol, fluctuations in work and heat give rise to a fluctuating efficiency. Using computer simulations and tools from large deviation theory, we have examined these fluctuations in detail for a model two-state engine. We find in general that the form of efficiency probability distributions is similar to those described by Verley et al (2014 Nat. Commun. 5 4721), in particular featuring a local minimum in the long-time limit. In contrast to the time-symmetric engine protocols studied previously, however, this minimum need not occur at the value characteristic of a reversible Carnot engine. Furthermore, while the local minimum may reside at the global minimum of a large deviation rate function, it does not generally correspond to the least likely efficiency measured over finite time. Lastly, we introduce a general approximation for the finite-time efficiency distribution, P(η), based on large deviation statistics of work and heat, that remains very accurate even when P(η) deviates significantly from its large deviation form.

  18. Computation of rare transitions in the barotropic quasi-geostrophic equations

    NASA Astrophysics Data System (ADS)

    Laurie, Jason; Bouchet, Freddy

    2015-01-01

    We investigate the theoretical and numerical computation of rare transitions in simple geophysical turbulent models. We consider the barotropic quasi-geostrophic and two-dimensional Navier-Stokes equations in regimes where bistability between two coexisting large-scale attractors exists. By means of large deviations and instanton theory, with the use of an Onsager-Machlup path-integral formalism for the transition probability, we show how one can directly compute the most probable transition path between two coexisting attractors, analytically in an equilibrium (Langevin) framework and numerically otherwise. We adapt a class of numerical optimization algorithms known as minimum action methods to simple geophysical turbulent models. We show that by numerically minimizing an appropriate action functional in a large deviation limit, one can predict the most likely transition path for a rare transition between two states. By considering examples where theoretical predictions can be made, we show that the minimum action method successfully predicts the most likely transition path. Finally, we discuss the application and extension of such numerical optimization schemes to the computation of rare transitions observed in direct numerical simulations and experiments and to other, more complex, turbulent systems.

  19. Finite-Size Scaling of a First-Order Dynamical Phase Transition: Adaptive Population Dynamics and an Effective Model

    NASA Astrophysics Data System (ADS)

    Nemoto, Takahiro; Jack, Robert L.; Lecomte, Vivien

    2017-03-01

    We analyze large deviations of the time-averaged activity in the one-dimensional Fredrickson-Andersen model, both numerically and analytically. The model exhibits a dynamical phase transition, which appears as a singularity in the large deviation function. We analyze the finite-size scaling of this phase transition numerically, by generalizing an existing cloning algorithm to include a multicanonical feedback control: this significantly improves the computational efficiency. Motivated by these numerical results, we formulate an effective theory for the model in the vicinity of the phase transition, which accounts quantitatively for the observed behavior. We discuss potential applications of the numerical method and the effective theory in a range of more general contexts.

  20. River gradient anomalies reveal recent tectonic movements when assuming an exponential gradient decrease along a river course

    NASA Astrophysics Data System (ADS)

    Žibret, Gorazd; Žibret, Lea

    2017-03-01

    High-resolution digital models, combined with GIS or other terrain-modelling software, allow many new possibilities in geoscience. In this paper we develop, describe and test a novel method, the GLA method, to detect active tectonic uplift or subsidence along river courses. It is a modification of Hack's SL-index method designed to overcome the disadvantages of the latter. The core assumption of the GLA method is that over geological time river profiles quickly adjust to follow an exponential decrease in elevation along the river course. Any large deviation can be attributed to active tectonic movement, or to disturbances in erosion/sedimentation processes caused by an anthropogenic structure (e.g. an artificial dam). During the testing phase, the locations of identified deviations were compared to the locations of faults identified on a 1:100,000 geological map. Results show that higher-magnitude deviations are found within a maximum radius of 200 m from a fault, and the majority of detected deviations within a maximum radius of 600 m from faults or thrusts. However, these results are not the best that could be obtained, because the geological map that was used (the only one available for the area) is not of the appropriate scale and was therefore not precise enough. Comparison of deviation magnitudes against PSInSAR measurements of vertical displacements in the vicinity revealed that, in spite of the very few suitable points available, a good correlation between the two independent methods was obtained (R2 = 0.68 for the E research area and R2 = 0.69 for the W research area). The GLA method was applied to three test sites where previous studies have shown active tectonic movements. It shows that deviations occur at the intersections between active faults and river courses; the method also correctly detected active uplift, attributed to an increased sedimentation rate above an artificial hydropower dam, and an increased erosion rate below it. The method gives promising results, although it needs to be tested in other locations around the world.
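
    A hedged sketch of the core GLA assumption: fit an exponential decay along the river course, then flag residuals beyond a threshold as candidate anomalies (the profile below is synthetic and the published method's details differ):

    ```python
    import numpy as np

    # GLA-flavoured anomaly screen: assume an exponential decay of the river
    # gradient with downstream distance, fit ln(gradient) linearly, and flag
    # residuals beyond 2 sigma as candidate anomalies. Profile is synthetic.
    rng = np.random.default_rng(8)
    d = np.linspace(0.0, 100e3, 400)                    # downstream distance, m
    grad = 0.02 * np.exp(-d / 40e3) * np.exp(rng.normal(0, 0.1, d.size))
    grad[200:210] *= 2.5                                # synthetic uplift signal

    slope, intercept = np.polyfit(d, np.log(grad), 1)   # ln g = ln g0 - k*d
    resid = np.log(grad) - (slope * d + intercept)
    flagged = np.flatnonzero(np.abs(resid) > 2 * resid.std())
    print("flagged indices:", flagged)
    ```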

  1. Hurricane track forecast cones from fluctuations

    PubMed Central

    Meuel, T.; Prado, G.; Seychelles, F.; Bessafi, M.; Kellay, H.

    2012-01-01

    Trajectories of tropical cyclones may show large deviations from predicted tracks leading to uncertainty as to their landfall location for example. Prediction schemes usually render this uncertainty by showing track forecast cones representing the most probable region for the location of a cyclone during a period of time. By using the statistical properties of these deviations, we propose a simple method to predict possible corridors for the future trajectory of a cyclone. Examples of this scheme are implemented for hurricane Ike and hurricane Jimena. The corridors include the future trajectory up to at least 50 h before landfall. The cones proposed here shed new light on known track forecast cones as they link them directly to the statistics of these deviations. PMID:22701776
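
    A minimal sketch of building a corridor from deviation statistics as proposed above: at each lead time, take a quantile of historical cross-track deviations as the corridor half-width (the error statistics below are synthetic placeholders):

    ```python
    import numpy as np

    # Corridor from deviation statistics: at each lead time, a quantile of
    # historical |cross-track deviation| gives the corridor half-width.
    rng = np.random.default_rng(9)
    lead_h = np.arange(6, 73, 6)                         # lead times, hours
    errors = rng.normal(0, 30, (500, lead_h.size)) * np.sqrt(lead_h / 6)  # km

    half_width = np.quantile(np.abs(errors), 0.9, axis=0)   # 90% corridor
    for t, w in zip(lead_h, half_width):
        print(f"{t:3d} h  +/- {w:6.1f} km")
    ```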

  2. Simulation of reflecting surface deviations of centimeter-band parabolic space radiotelescope (SRT) with the large-size mirror

    NASA Astrophysics Data System (ADS)

    Kotik, A.; Usyukin, V.; Vinogradov, I.; Arkhipov, M.

    2017-11-01

    The realization of astrophysical research requires the development of high-sensitivity centimeter-band parabolic space radiotelescopes (SRT) with large mirrors. Structurally, an SRT with a mirror larger than 10 m can be realized as a deployable rigid structure. Mesh structures of such size do not provide the reflecting-surface accuracy necessary for centimeter-band observations. Such a telescope with a 10 m diameter mirror is currently being developed in Russia within the "SPECTR-R" program. The external dimensions of the telescope exceed the size of existing thermal-vacuum chambers used to verify the SRT reflecting-surface accuracy under the action of space environment factors. That is why numerical simulation turns out to be the basis required to accept the chosen designs. Such modeling should be based on experimental characterization of the basic structural materials and elements of the future reflector. In this article, computational modeling of the reflecting-surface deviations of a centimeter-band large-sized deployable space reflector during its orbital functioning is considered. The analysis of the factors that determine the deviations, both deterministic (temperature fields) and non-deterministic (telescope manufacturing and installation faults; deformations caused by the behavior of composite materials in space), is carried out. A finite-element model and a set of methods are developed that allow computational modeling of the reflecting-surface deviations caused by the influence of all factors, taking into account deviation correction by the spacecraft orientation system. The results of modeling for two modes of functioning (orientation toward the Sun) of the SRT are presented.

  3. Variance reduction for Fokker–Planck based particle Monte Carlo schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorji, M. Hossein, E-mail: gorjih@ifd.mavt.ethz.ch; Andric, Nemanja; Jenny, Patrick

    Recently, Fokker–Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1–3]. In this paper, variance reduction for particle Monte Carlo simulations based on the Fokker–Planck model is considered. First, deviational schemes are derived and reviewed, and it is shown that these methods are not appropriate for practical Fokker–Planck based rarefied gas flow simulations: the deviational schemes considered in this study lead either to instabilities in the case of two-weight methods or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we developed a novel scheme based on correlated stochastic processes. The main idea is to synthesize an additional stochastic process with a known solution, which is solved simultaneously with the main one. By correlating the two processes, the statistical errors can be reduced dramatically, especially for low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette and lid-driven cavity flows were considered. For these test cases, it could be demonstrated that variance reduction based on parallel processes is very robust and effective.
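
    The correlated-process idea can be illustrated as a control variate: an auxiliary linear process with a known mean is driven by the same noise as the main process, and their correlation is used to cancel statistical error. The drifts, coefficients, and the optimal-coefficient estimate below are illustrative assumptions, not the scheme of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_paths, n_steps, dt = 20_000, 200, 0.01

    # Main process: nonlinear drift (no closed-form mean assumed here).
    # Auxiliary process: linear OU with known mean, driven by the SAME noise.
    x = np.ones(n_paths)
    y = np.ones(n_paths)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)   # shared Brownian increments
        x += -(x + 0.1 * x**3) * dt + 0.5 * dw
        y += -y * dt + 0.5 * dw

    known_mean_y = np.exp(-n_steps * dt)             # E[y_t] = y_0 e^{-t} for linear OU
    c = np.cov(x, y)[0, 1] / np.var(y)               # optimal control-variate coefficient
    plain = x.mean()
    controlled = (x - c * (y - known_mean_y)).mean()
    print(f"plain     : {plain:.5f} +/- {x.std() / np.sqrt(n_paths):.5f}")
    print(f"controlled: {controlled:.5f} +/- {(x - c * y).std() / np.sqrt(n_paths):.5f}")
    ```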

  4. Analysis of iodinated haloacetic acids in drinking water by reversed-phase liquid chromatography/electrospray ionization/tandem mass spectrometry with large volume direct aqueous injection.

    PubMed

    Li, Yongtao; Whitaker, Joshua S; McCarty, Christina L

    2012-07-06

    A large volume direct aqueous injection method was developed for the analysis of iodinated haloacetic acids in drinking water by using reversed-phase liquid chromatography/electrospray ionization/tandem mass spectrometry in the negative ion mode. Both the external and internal standard calibration methods were studied for the analysis of monoiodoacetic acid, chloroiodoacetic acid, bromoiodoacetic acid, and diiodoacetic acid in drinking water. The use of a divert valve technique for the mobile phase solvent delay, along with isotopically labeled analogs used as internal standards, effectively reduced and compensated for the ionization suppression typically caused by coexisting common inorganic anions. Under the optimized method conditions, the mean absolute and relative recoveries resulting from the replicate fortified deionized water and chlorinated drinking water analyses were 83-107% with a relative standard deviation of 0.7-11.7% and 84-111% with a relative standard deviation of 0.8-12.1%, respectively. The method detection limits resulting from the external and internal standard calibrations, based on seven fortified deionized water replicates, were 0.7-2.3 ng/L and 0.5-1.9 ng/L, respectively. Copyright © 2012 Elsevier B.V. All rights reserved.

  5. Optimal Operation and Management for Smart Grid Subsumed High Penetration of Renewable Energy, Electric Vehicle, and Battery Energy Storage System

    NASA Astrophysics Data System (ADS)

    Shigenobu, Ryuto; Noorzad, Ahmad Samim; Muarapaz, Cirio; Yona, Atsushi; Senjyu, Tomonobu

    2016-04-01

    Distributed generators (DG) and renewable energy sources have been attracting special attention in distribution systems all over the world. Renewable energies such as photovoltaic (PV) and wind turbine generators are considered green energy. However, a large amount of DG penetration causes voltage deviation beyond the statutory range and reverse power flow at interconnection points in the distribution system. If excessive voltage deviation occurs, consumers' electric devices might break, and reverse power flow also has a negative impact on the transmission system. Thus, mass interconnection of DGs adversely affects both the utility and the customer. Previous research has therefore proposed reactive power control using inverters attached to DGs to prevent voltage deviations, and battery energy storage systems (BESS) to resolve reverse power flow; managing DGs and BESSs in this way also makes it possible to supply high-quality power. This paper proposes a method to maintain voltage, active power, and reactive power flow at interconnection points through cooperative control of PVs, house BESSs, EVs, large BESSs, and existing voltage control devices. The approach not only protects the distribution system but also reduces distribution losses and manages control devices effectively. These control objectives are formulated as an optimization problem that is solved using the Particle Swarm Optimization (PSO) algorithm, and a modified scheduling method is proposed to improve the convergence probability of the scheduling scheme. The effectiveness of the proposed method is verified by case study results and numerical simulations in MATLAB®.
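
    For concreteness, a minimal global-best PSO of the kind that could drive such a scheduling problem is sketched below; the toy voltage-deviation objective, the linear sensitivity matrix, and all coefficients are invented for illustration and are not from the paper.

    ```python
    import numpy as np

    def pso(objective, dim, n_particles=30, iters=200, bounds=(-1.0, 1.0), seed=0):
        """Minimal particle swarm optimizer (global-best topology)."""
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        x = rng.uniform(lo, hi, (n_particles, dim))   # positions (control set-points)
        v = np.zeros_like(x)                          # velocities
        pbest = x.copy()
        pbest_f = np.apply_along_axis(objective, 1, x)
        g = pbest[pbest_f.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            f = np.apply_along_axis(objective, 1, x)
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            g = pbest[pbest_f.argmin()].copy()
        return g, pbest_f.min()

    # Toy objective: squared voltage deviation from 1.0 p.u. at 5 nodes, with a
    # made-up linear sensitivity of node voltages to two control set-points.
    S = np.array([[0.05, 0.02], [0.04, 0.03], [0.03, 0.04], [0.02, 0.05], [0.01, 0.06]])
    v0 = np.array([1.04, 1.03, 1.01, 0.98, 0.97])     # uncontrolled voltages, p.u.
    objective = lambda u: np.sum((v0 + S @ u - 1.0) ** 2)
    u_opt, f_opt = pso(objective, dim=2)
    print(u_opt, f_opt)
    ```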

  6. A Priori Subgrid Scale Modeling for a Droplet Laden Temporal Mixing Layer

    NASA Technical Reports Server (NTRS)

    Okongo, Nora; Bellan, Josette

    2000-01-01

    Subgrid analysis of a transitional temporal mixing layer with evaporating droplets has been performed using a direct numerical simulation (DNS) database. The DNS is for a Reynolds number (based on initial vorticity thickness) of 600, with a droplet mass loading of 0.2. The gas phase is computed using an Eulerian formulation, with Lagrangian droplet tracking. Since Large Eddy Simulation (LES) of this flow requires the computation of unfiltered gas-phase variables at droplet locations from filtered gas-phase variables at the grid points, it is proposed to model these by assuming the gas-phase variables to be given by the filtered variables plus a correction based on the filtered standard deviation, which can be computed from the subgrid-scale (SGS) standard deviation. This model predicts unfiltered variables at droplet locations better than simply interpolating the filtered variables. Three methods are investigated for modeling the SGS standard deviation: Smagorinsky, gradient, and scale-similarity. When properly calibrated, the gradient and scale-similarity methods give results in excellent agreement with the DNS.

  7. Electrostatic Solvation Free Energy of Amino Acid Side Chain Analogs: Implications for the Validity of Electrostatic Linear Response in Water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Bin; Pettitt, Bernard M.

    Electrostatic free energies of solvation for 15 neutral amino acid side chain analogs are computed. We compare three methods of varying computational complexity and accuracy: free energy simulations, Poisson-Boltzmann (PB), and the linear response approximation (LRA), using the AMBER, CHARMM, and OPLSAA force fields. We find that deviations from simulation start at low charges for the solutes. The approximate PB and LRA methods overestimate the electrostatic solvation free energies for most of the molecules studied here. These deviations are remarkably systematic. The variations among force fields are almost as large as the variations found among methods. Our study confirms that the success of the approximate methods for electrostatic solvation free energies comes from their ability to evaluate free energy differences accurately.

  8. Lunar brightness temperature from Microwave Radiometers data of Chang'E-1 and Chang'E-2

    NASA Astrophysics Data System (ADS)

    Feng, J.-Q.; Su, Y.; Zheng, L.; Liu, J.-J.

    2011-10-01

    Both Chinese lunar orbiters, Chang'E-1 and Chang'E-2, carried Microwave Radiometers (MRM) to measure the brightness temperature of the Moon. Based on the different characteristics of the two MRMs, modified brightness-temperature algorithms and specific ground calibration parameters were proposed, and the corresponding lunar global brightness temperature maps were produced. To analyze the data distributions of these maps, a normalization method was applied to the data series. The second-channel data with large deviations were rectified, and the causes of the deviations were analyzed.

  9. Large incidence angle and defocus influence cat's eye retro-reflector

    NASA Astrophysics Data System (ADS)

    Zhang, Lai-xian; Sun, Hua-yan; Zhao, Yan-zhong; Yang, Ji-guang; Zheng, Yong-hui

    2014-11-01

    A cat's eye lens retro-reflects a laser beam exactly opposite to the direction of the incident beam; this "cat's eye effect" makes rapid acquisition, tracking, and pointing in free-space optical communication possible. Studying how this effect behaves in a cat's eye retro-reflector at large incidence angles is therefore useful. This paper analyzes how the incidence angle and focal shift affect the effective receiving area, the retro-reflected beam divergence angle, the central deviation of the cat's eye retro-reflector at large incidence angles, and the cat's eye effect factor, using a geometrical-optics method, and presents the corresponding analytic expressions. Finally, numerical simulation was performed to verify the correctness of the analysis. The results show that the effective receiving area of the cat's eye retro-reflector is mainly affected by the incidence angle when the focal shift is positive, and decreases rapidly as the incidence angle increases. The retro-reflected beam divergence and central deviation are mainly affected by the focal shift, and within the effective receiving area the central deviation is smaller than the beam divergence most of the time, meaning the incident beam can be received and retro-reflected to the other terminal most of the time. The cat's eye effect factor gain is affected by both the incidence angle and the focal shift.

  10. Spectrophotometric Determination of Phenolic Antioxidants in the Presence of Thiols and Proteins.

    PubMed

    Avan, Aslı Neslihan; Demirci Çekiç, Sema; Uzunboy, Seda; Apak, Reşat

    2016-08-12

    Development of easy, practical, and low-cost spectrophotometric methods is required for the selective determination of phenolic antioxidants in the presence of other similar substances. As electron transfer (ET)-based total antioxidant capacity (TAC) assays generally measure the reducing ability of antioxidant compounds, thiols and phenols cannot be differentiated since they are both responsive to the probe reagent. In this study, three of the most common TAC determination methods, namely cupric ion reducing antioxidant capacity (CUPRAC), 2,2'-azinobis(3-ethylbenzothiazoline-6-sulfonic acid) diammonium salt/trolox equivalent antioxidant capacity (ABTS/TEAC), and ferric reducing antioxidant power (FRAP), were tested for the assay of phenolics in the presence of selected thiol and protein compounds. Although the FRAP method is almost non-responsive to thiol compounds individually, surprising overoxidations with large positive deviations from additivity were observed when using this method for (phenols + thiols) mixtures. Among the tested TAC methods, CUPRAC gave the most additive results for all studied (phenol + thiol) and (phenol + protein) mixtures with minimal relative error. As ABTS/TEAC and FRAP methods gave small and large deviations, respectively, from additivity of absorbances arising from these components in mixtures, mercury(II) compounds were added to stabilize the thiol components in the form of Hg(II)-thiol complexes so as to enable selective spectrophotometric determination of phenolic components. This error compensation was most efficient for the FRAP method in testing (thiols + phenols) mixtures.

  11. Cumulants and large deviations of the current through non-equilibrium steady states

    NASA Astrophysics Data System (ADS)

    Bodineau, Thierry; Derrida, Bernard

    2007-06-01

    Using a generalisation of detailed balance for systems maintained out of equilibrium by contact with 2 reservoirs at unequal temperatures or at unequal densities, one can recover the fluctuation theorem for the large deviation function of the current. For large diffusive systems, we show how the large deviation function of the current can be computed using a simple additivity principle. The validity of this additivity principle and the occurrence of phase transitions are discussed in the framework of the macroscopic fluctuation theory. To cite this article: T. Bodineau, B. Derrida, C. R. Physique 8 (2007).
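
    Schematically, in the conventions of macroscopic fluctuation theory (notation assumed here: D(ρ) the diffusivity, σ(ρ) the mobility, ρ_a and ρ_b the boundary densities), the additivity principle reduces the current large deviation function to a static variational problem:

    ```latex
    % Additivity principle (schematic): the optimal density profile is
    % time-independent, so the path problem collapses to a profile problem.
    \mathrm{Prob}\!\left(\tfrac{Q_t}{t} \simeq j\right) \asymp e^{-t\,\Phi(j)},
    \qquad
    \Phi(j) = \min_{\rho(x)} \int_0^1
      \frac{\bigl[\,j + D(\rho(x))\,\rho'(x)\,\bigr]^2}{2\,\sigma(\rho(x))}\, dx,
    \qquad \rho(0)=\rho_a,\ \rho(1)=\rho_b .
    ```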

  12. Beyond δ: Tailoring marked statistics to reveal modified gravity

    NASA Astrophysics Data System (ADS)

    Valogiannis, Georgios; Bean, Rachel

    2018-01-01

    Models which attempt to explain the accelerated expansion of the universe through large-scale modifications to General Relativity (GR) must satisfy the stringent experimental constraints on GR in the solar system. Viable candidates invoke a “screening” mechanism that dynamically suppresses deviations in high-density environments, making their overall detection challenging even for ambitious future large-scale structure surveys. We present methods to efficiently simulate the non-linear properties of such theories, and consider how a series of statistics that reweight the density field to accentuate deviations from GR can be applied to enhance the overall signal-to-noise ratio in differentiating the models from GR. Our results demonstrate that the cosmic density field can yield additional, invaluable cosmological information, beyond the simple density power spectrum, that will enable surveys to more confidently discriminate between modified gravity models and ΛCDM.
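
    One concrete choice of reweighting is a mark that up-weights underdense regions, where screening is weakest and deviations from GR survive. The functional form below follows the mark proposed by M. White (2016); its use here, the parameter values, and the lognormal toy field are assumptions for illustration.

    ```python
    import numpy as np

    def mark(delta, delta_s=0.6, p=1.0):
        """Up-weight underdense regions, where screened theories deviate most
        from GR:  m(delta) = [(1 + delta_s) / (1 + delta_s + delta)]**p."""
        return ((1.0 + delta_s) / (1.0 + delta_s + delta)) ** p

    # Toy overdensity field on a grid; the marked field re-weights each cell
    # before computing clustering statistics (e.g. a marked power spectrum).
    rng = np.random.default_rng(2)
    delta = rng.lognormal(mean=0.0, sigma=0.5, size=(64, 64, 64)) - 1.0
    marked = mark(delta) * (1.0 + delta)
    print("mean mark:", mark(delta).mean())
    ```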

  13. An efficient predictor-corrector-based dynamic mesh method for multi-block structured grid with extremely large deformation and its applications

    NASA Astrophysics Data System (ADS)

    Guo, Tongqing; Chen, Hao; Lu, Zhiliang

    2018-05-01

    Aiming at extremely large deformations, a novel predictor-corrector-based dynamic mesh method for multi-block structured grids is proposed. In this work, dynamic mesh generation is completed in three steps. First, some typical dynamic positions are selected and high-quality multi-block grids with the same topology are generated at those positions. Then, the Lagrange interpolation method is adopted to predict the dynamic mesh at any dynamic position. Finally, a rapid elastic deforming technique is used to correct the small deviation between the interpolated geometric configuration and the actual instantaneous one. Compared with traditional methods, the results demonstrate that the present method shows stronger deformation ability and higher dynamic mesh quality.
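
    A hedged sketch of the prediction step (Lagrange interpolation of node coordinates among pre-generated meshes with identical topology) is shown below; the elastic correction step is omitted and all inputs are synthetic.

    ```python
    import numpy as np

    def lagrange_predict(ref_params, ref_meshes, t):
        """Predict mesh node coordinates at parameter t by Lagrange
        interpolation among meshes (same topology) generated at reference
        parameter values."""
        pred = np.zeros_like(ref_meshes[0])
        for i, (ti, xi) in enumerate(zip(ref_params, ref_meshes)):
            w = 1.0
            for j, tj in enumerate(ref_params):
                if j != i:
                    w *= (t - tj) / (ti - tj)   # Lagrange basis polynomial l_i(t)
            pred += w * xi
        return pred

    # Three reference meshes (n_nodes x 3 coordinate arrays) at deformation
    # parameters 0, 0.5, 1 -- e.g. pitching angles of a wing.
    nodes = np.random.default_rng(3).random((1000, 3))
    meshes = [nodes, nodes + [0.0, 0.1, 0.0], nodes + [0.0, 0.4, 0.0]]
    x_pred = lagrange_predict([0.0, 0.5, 1.0], meshes, t=0.75)
    ```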

  14. Heterogeneity-induced large deviations in activity and (in some cases) entropy production

    NASA Astrophysics Data System (ADS)

    Gingrich, Todd R.; Vaikuntanathan, Suriyanarayanan; Geissler, Phillip L.

    2014-10-01

    We solve a simple model that supports a dynamic phase transition and show conditions for the existence of the transition. Using methods of large deviation theory we analytically compute the probability distribution for activity and entropy production rates of the trajectories on a large ring with a single heterogeneous link. The corresponding joint rate function demonstrates two dynamical phases—one localized and the other delocalized, but the marginal rate functions do not always exhibit the underlying transition. Symmetries in dynamic order parameters influence the observation of a transition, such that distributions for certain dynamic order parameters need not reveal an underlying dynamical bistability. Solution of our model system furthermore yields the form of the effective Markov transition matrices that generate dynamics in which the two dynamical phases are at coexistence. We discuss the implications of the transition for the response of bacterial cells to antibiotic treatment, arguing that even simple models of a cell cycle lacking an explicit bistability in configuration space will exhibit a bistability of dynamical phases.
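
    The quantities involved can also be computed numerically for a finite ring, which may help fix ideas (this is a generic tilted-generator calculation, not the authors' analytic solution): the scaled cumulant generating function for the activity is the top eigenvalue of the generator with off-diagonal rates multiplied by exp(-s), and the rate function follows by Legendre transform.

    ```python
    import numpy as np

    def scgf_activity(rates, s):
        """SCGF theta(s) for the jump count K: <exp(-s K)> ~ exp(t * theta(s)),
        with theta(s) the top eigenvalue of the generator whose off-diagonal
        rates are tilted by exp(-s)."""
        R = rates.copy()
        np.fill_diagonal(R, 0.0)
        tilted = R * np.exp(-s) - np.diag(R.sum(axis=1))
        return np.linalg.eigvals(tilted).real.max()

    # Ring of L sites with unit hopping rates and one slow (heterogeneous) link
    L, weak = 50, 0.05
    R = np.zeros((L, L))
    for i in range(L):
        R[i, (i + 1) % L] = R[(i + 1) % L, i] = 1.0
    R[0, 1] = R[1, 0] = weak

    print("theta(0) =", scgf_activity(R, 0.0))   # ~0 for a proper generator
    s = np.linspace(-1.0, 1.0, 201)
    theta = np.array([scgf_activity(R, si) for si in s])
    k = np.linspace(0.5, 3.5, 7)                 # activity rates of interest
    rate_fn = [float(np.max(-s * ki - theta)) for ki in k]   # I(k) = sup_s[-sk - theta]
    ```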

  15. A New Control Paradigm for Stochastic Differential Equations

    NASA Astrophysics Data System (ADS)

    Schmid, Matthias J. A.

    This study presents a novel comprehensive approach to the control of dynamic systems under uncertainty governed by stochastic differential equations (SDEs). Large Deviations (LD) techniques are employed to arrive at a control law for a large class of nonlinear systems minimizing sample path deviations. Thereby, a paradigm shift is suggested from point-in-time to sample path statistics on function spaces. A suitable formal control framework which leverages embedded Freidlin-Wentzell theory is proposed and described in detail. This includes the precise definition of the control objective and comprises an accurate discussion of the adaptation of the Freidlin-Wentzell theorem to the particular situation. The new control design is enabled by the transformation of an ill-posed control objective into a well-conditioned sequential optimization problem. A direct numerical solution process is presented using quadratic programming, but the emphasis is on the development of a closed-form expression reflecting the asymptotic deviation probability of a particular nominal path. This is identified as the key factor in the success of the new paradigm. An approach employing the second variation and the differential curvature of the effective action is suggested for small deviation channels leading to the Jacobi field of the rate function and the subsequently introduced Jacobi field performance measure. This closed-form solution is utilized in combination with the supplied parametrization of the objective space. For the first time, this allows for an LD based control design applicable to a large class of nonlinear systems. Thus, Minimum Large Deviations (MLD) control is effectively established in a comprehensive structured framework. The construction of the new paradigm is completed by an optimality proof for the Jacobi field performance measure, an interpretive discussion, and a suggestion for efficient implementation. The potential of the new approach is exhibited by its extension to scalar systems subject to state-dependent noise and to systems of higher order. The suggested control paradigm is further advanced when a sequential application of MLD control is considered. This technique yields a nominal path corresponding to the minimum total deviation probability on the entire time domain. It is demonstrated that this sequential optimization concept can be unified in a single objective function which is revealed to be the Jacobi field performance index on the entire domain subject to an endpoint deviation. The emerging closed-form term replaces the previously required nested optimization and, thus, results in a highly efficient application-ready control design. This effectively substantiates Minimum Path Deviation (MPD) control. The proposed control paradigm allows the specific problem of stochastic cost control to be addressed as a special case. This new technique is employed within this study for the stochastic cost problem giving rise to Cost Constrained MPD (CCMPD) as well as to Minimum Quadratic Cost Deviation (MQCD) control. An exemplary treatment of a generic scalar nonlinear system subject to quadratic costs is performed for MQCD control to demonstrate the elementary expandability of the new control paradigm. This work concludes with a numerical evaluation of both MPD and CCMPD control for three exemplary benchmark problems. Numerical issues associated with the simulation of SDEs are briefly discussed and illustrated. The numerical examples furnish proof of the successful design. 
This study is complemented by a thorough review of statistical control methods, stochastic processes, Large Deviations techniques and the Freidlin-Wentzell theory, providing a comprehensive, self-contained account. The presentation of the mathematical tools and concepts is of a unique character, specifically addressing an engineering audience.

  16. Spectral theory of extreme statistics in birth-death systems

    NASA Astrophysics Data System (ADS)

    Meerson, Baruch

    2008-03-01

    Statistics of rare events, or large deviations, in chemical reactions and systems of birth-death type have attracted a great deal of interest in many areas of science including cell biochemistry, astrochemistry, epidemiology, population biology, etc. Large deviations become of vital importance when the discrete (non-continuum) nature of a population of ``particles'' (molecules, bacteria, cells, animals or even humans) and the stochastic character of interactions can drive the population to extinction. I will briefly review the novel spectral method [1-3] for calculating the extreme statistics of a broad class of birth-death processes and reactions involving a single species. The spectral method combines the probability generating function formalism with the Sturm-Liouville theory of linear differential operators. It involves a controlled perturbative treatment based on a natural large parameter of the problem: the average number of particles/individuals in a stationary or metastable state. For extinction (first passage) problems the method yields accurate results for the extinction statistics and for the quasi-stationary probability distribution, including the tails, of metastable states. I will demonstrate the power of the method on the example of a branching and annihilation reaction, A → 2A, A → ∅, representative of a rather general class of processes. [1] M. Assaf and B. Meerson, Phys. Rev. Lett. 97, 200602 (2006). [2] M. Assaf and B. Meerson, Phys. Rev. E 74, 041115 (2006). [3] M. Assaf and B. Meerson, Phys. Rev. E 75, 031122 (2007).

  17. Retrieval of Aerosol Optical Properties from Ground-Based Remote Sensing Measurements: Aerosol Asymmetry Factor and Single Scattering Albedo

    NASA Astrophysics Data System (ADS)

    Qie, L.; Li, Z.; Li, L.; Li, K.; Li, D.; Xu, H.

    2018-04-01

    The Devaux-Vermeulen-Li (DVL) method is a simple approach to retrieving aerosol optical parameters from Sun-sky radiance measurements. Building on previous work retrieving aerosol single scattering albedo (SSA) and the scattering phase function, the DVL method was modified here to derive the aerosol asymmetry factor (g). To assess the algorithm's performance under various atmospheric aerosol conditions, retrievals from AERONET observations were implemented and the results compared with AERONET official products. The comparison shows that both the DVL SSA and g are well correlated with those of AERONET. The RMSD and the absolute value of the MBD between the SSAs are 0.025 and 0.015 respectively, well below the AERONET declared SSA uncertainty of 0.03 for all wavelengths. For the asymmetry factor g, the RMSDs are smaller than 0.02 and the absolute values of the MBDs smaller than 0.01 at the 675, 870 and 1020 nm bands. Considering several factors likely to affect retrieval quality (the aerosol optical depth (AOD), the solar zenith angle, the sky residual error, the sphericity proportion, and the Ångström exponent), the deviations in SSA and g between the two algorithms were calculated over varying value intervals. Both the SSA and g deviations were found to decrease with AOD and solar zenith angle and to increase with sky residual error; they show no clear sensitivity to the sphericity proportion or the Ångström exponent. This indicates that the DVL algorithm is applicable to both large non-spherical particles and spherical particles, and that the DVL results are suitable for evaluating the direct radiative effects of different aerosol types.

  18. Big data driven cycle time parallel prediction for production planning in wafer manufacturing

    NASA Astrophysics Data System (ADS)

    Wang, Junliang; Yang, Jungang; Zhang, Jie; Wang, Xiaoxi; Zhang, Wenjun Chris

    2018-07-01

    Cycle time forecasting (CTF) is one of the most crucial issues for production planning to maintain high delivery reliability in semiconductor wafer fabrication systems (SWFS). This paper proposes a novel data-intensive cycle time (CT) prediction system with parallel computing to rapidly forecast the CT of wafer lots with large datasets. First, a density-peak-based radial basis function network (DP-RBFN) is designed to forecast the CT from the diverse and agglomerative CT data. Second, a network learning method based on a clustering technique is proposed to determine the density peaks. Third, a parallel computing approach for network training is proposed in order to speed up the training process with large-scale CT data. Finally, an experiment on an SWFS is presented, which demonstrates that the proposed CTF system can not only speed up the training process of the model but also outperform radial basis function network, back-propagation network, and multivariate-regression-based CTF methods in terms of the mean absolute deviation and standard deviation.

  19. Estimation of the lower flammability limit of organic compounds as a function of temperature.

    PubMed

    Rowley, J R; Rowley, R L; Wilding, W V

    2011-02-15

    A new method of estimating the lower flammability limit (LFL) of general organic compounds is presented. The LFL is predicted at 298 K for gases and the lower temperature limit for solids and liquids from structural contributions and the ideal gas heat of formation of the fuel. The average absolute deviation from more than 500 experimental data points is 10.7%. In a previous study, the widely used modified Burgess-Wheeler law was shown to underestimate the effect of temperature on the lower flammability limit when determined in a large-diameter vessel. An improved version of the modified Burgess-Wheeler law is presented that represents the temperature dependence of LFL data determined in large-diameter vessels more accurately. When the LFL is estimated at increased temperatures using a combination of this model and the proposed structural-contribution method, an average absolute deviation of 3.3% is returned when compared with 65 data points for 17 organic compounds determined in an ASHRAE-style apparatus. Copyright © 2010 Elsevier B.V. All rights reserved.

  20. Data assimilation in the low noise regime

    NASA Astrophysics Data System (ADS)

    Weare, J.; Vanden-Eijnden, E.

    2012-12-01

    On-line data assimilation techniques such as ensemble Kalman filters and particle filters tend to lose accuracy dramatically when presented with an unlikely observation. Such an observation may be caused by an unusually large measurement error or may reflect a rare fluctuation in the dynamics of the system. Over a long enough span of time it becomes likely that one or several of these events will occur. In some cases they are signatures of the most interesting features of the underlying system, and their prediction becomes the primary focus of the data assimilation procedure. The Kuroshio, or Black Current, that runs along the eastern coast of Japan is an example of just such a system. It undergoes infrequent but dramatic changes of state between a small meander, during which the current remains close to the coast of Japan, and a large meander, during which the current bulges away from the coast. Because of the important role that the Kuroshio plays in distributing heat and salinity in the surrounding region, prediction of these transitions is of acute interest. Here we focus on a regime in which both the stochastic forcing on the system and the observational noise are small. In this setting large deviation theory can be used to understand why standard filtering methods fail and to guide the design of more effective data assimilation techniques. Motivated by our large deviations analysis, we propose several data assimilation strategies capable of efficiently handling rare events such as the transitions of the Kuroshio. These techniques are tested on a model of the Kuroshio and shown to perform much better than standard filtering methods. [Figure: a sequence of observations (circles) taken directly from one of the Kuroshio model's transition events from the small meander to the large meander; two new algorithms (Algorithms 3 and 4) motivated by the large deviations analysis are compared with a standard particle filter and an ensemble Kalman filter, with the parameters of each algorithm chosen so that their costs are comparable. The particle filter and the ensemble Kalman filter fail to track the transition accurately, while Algorithms 3 and 4 maintain accuracy, and smaller-scale resolution, throughout the transition.]

  1. SU-E-J-32: Dosimetric Evaluation Based On Pre-Treatment Cone Beam CT for Spine Stereotactic Body Radiotherapy: Does Region of Interest Focus Matter?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magnelli, A; Xia, P

    2015-06-15

    Purpose: Spine stereotactic body radiotherapy requires very conformal dose distributions and precise delivery. Prior to treatment, a KV cone-beam CT (KV-CBCT) is registered to the planning CT to provide image-guided positional corrections, which depend on the selection of the region of interest (ROI) because of imperfect patient positioning and anatomical deformation. Our objective is to determine the dosimetric impact of ROI selection. Methods: Twelve patients were selected for this study, with treatment regions ranging from the C-spine to the T-spine. For each patient, the KV-CBCT was registered to the planning CT three times using distinct ROIs: one encompassing the entire patient, a large ROI containing large bony anatomy, and a small target-focused ROI. Each registered CBCT volume, saved as an aligned dataset, was then sent to the planning system. The treated plan was applied to each dataset and the dose recalculated. The tumor dose coverage (percentage of target volume receiving the prescription dose), maximum point dose to 0.03 cc of the spinal cord, and dose to 10% of the spinal cord volume (V10) for each alignment were compared to the original plan. Results: The average magnitude of tumor coverage deviation was 3.9%±5.8% with the external contour, 1.5%±1.1% with the large ROI, and 1.3%±1.1% with the small ROI. Spinal cord V10 deviation from plan was 6.6%±6.6% with the external contour, 3.5%±3.1% with the large ROI, and 1.2%±1.0% with the small ROI. Spinal cord maximum point dose deviation from plan was 12.2%±13.3% with the external contour, 8.5%±8.4% with the large ROI, and 3.7%±2.8% with the small ROI. Conclusion: A small ROI focused on the target results in the smallest deviation from the planned dose to target and cord, although rotations at large distances from the targets were observed. It is recommended that image fusion during CBCT focus narrowly on the target volume to minimize dosimetric error. Improvements in patient setup may further reduce residual errors.

  2. Effect of stress on energy flux deviation of ultrasonic waves in GR/EP composites

    NASA Technical Reports Server (NTRS)

    Prosser, William H.; Kriz, R. D.; Fitting, Dale W.

    1990-01-01

    Ultrasonic waves suffer energy flux deviation in graphite/epoxy because of the large anisotropy. The angle of deviation is a function of the elastic coefficients. For nonlinear solids, these coefficients, and thus the angle of deviation, are a function of stress. Acoustoelastic theory was used to model the effect of stress on flux deviation for unidirectional T300/5208 using previously measured elastic coefficients. Computations were made for uniaxial stress along the x3 axis (fiber axis) and the x1 axis, for waves propagating in the x1x3 plane. These results predict a shift as large as three degrees for the quasi-transverse wave. The shift in energy flux offers a new nondestructive technique for evaluating stress in composites.

  3. Spectrophotometric Determination of Phenolic Antioxidants in the Presence of Thiols and Proteins

    PubMed Central

    Avan, Aslı Neslihan; Demirci Çekiç, Sema; Uzunboy, Seda; Apak, Reşat

    2016-01-01

    Development of easy, practical, and low-cost spectrophotometric methods is required for the selective determination of phenolic antioxidants in the presence of other similar substances. As electron transfer (ET)-based total antioxidant capacity (TAC) assays generally measure the reducing ability of antioxidant compounds, thiols and phenols cannot be differentiated since they are both responsive to the probe reagent. In this study, three of the most common TAC determination methods, namely cupric ion reducing antioxidant capacity (CUPRAC), 2,2′-azinobis(3-ethylbenzothiazoline-6-sulfonic acid) diammonium salt/trolox equivalent antioxidant capacity (ABTS/TEAC), and ferric reducing antioxidant power (FRAP), were tested for the assay of phenolics in the presence of selected thiol and protein compounds. Although the FRAP method is almost non-responsive to thiol compounds individually, surprising overoxidations with large positive deviations from additivity were observed when using this method for (phenols + thiols) mixtures. Among the tested TAC methods, CUPRAC gave the most additive results for all studied (phenol + thiol) and (phenol + protein) mixtures with minimal relative error. As ABTS/TEAC and FRAP methods gave small and large deviations, respectively, from additivity of absorbances arising from these components in mixtures, mercury(II) compounds were added to stabilize the thiol components in the form of Hg(II)-thiol complexes so as to enable selective spectrophotometric determination of phenolic components. This error compensation was most efficient for the FRAP method in testing (thiols + phenols) mixtures. PMID:27529232

  4. Large deviation theory for the kinetics and energetics of turnover of enzyme catalysis in a chemiostatic flow.

    PubMed

    Das, Biswajit; Gangopadhyay, Gautam

    2018-05-07

    In the framework of large deviation theory, we have characterized nonequilibrium turnover statistics of enzyme catalysis in a chemiostatic flow with externally controllable parameters, like substrate injection rate and mechanical force. In the kinetics of the process, we have shown the fluctuation theorems in terms of the symmetry of the scaled cumulant generating function (SCGF) in the transient and steady state regime and a similar symmetry rule is reflected in a large deviation rate function (LDRF) as a property of the dissipation rate through boundaries. Large deviation theory also gives the thermodynamic force of a nonequilibrium steady state, as is usually recorded experimentally by a single molecule technique, which plays a key role responsible for the dynamical symmetry of the SCGF and LDRF. Using some special properties of the Legendre transformation, here, we have provided a relation between the fluctuations of fluxes and dissipation rates, and among them, the fluctuation of the turnover rate is routinely estimated but the fluctuation in the dissipation rate is yet to be characterized for small systems. Such an enzymatic reaction flow system can be a very good testing ground to systematically understand the rare events from the large deviation theory which is beyond fluctuation theorem and central limit theorem.

  5. Large deviation theory for the kinetics and energetics of turnover of enzyme catalysis in a chemiostatic flow

    NASA Astrophysics Data System (ADS)

    Das, Biswajit; Gangopadhyay, Gautam

    2018-05-01

    In the framework of large deviation theory, we have characterized nonequilibrium turnover statistics of enzyme catalysis in a chemiostatic flow with externally controllable parameters, like substrate injection rate and mechanical force. In the kinetics of the process, we have shown the fluctuation theorems in terms of the symmetry of the scaled cumulant generating function (SCGF) in the transient and steady state regime and a similar symmetry rule is reflected in a large deviation rate function (LDRF) as a property of the dissipation rate through boundaries. Large deviation theory also gives the thermodynamic force of a nonequilibrium steady state, as is usually recorded experimentally by a single molecule technique, which plays a key role responsible for the dynamical symmetry of the SCGF and LDRF. Using some special properties of the Legendre transformation, here, we have provided a relation between the fluctuations of fluxes and dissipation rates, and among them, the fluctuation of the turnover rate is routinely estimated but the fluctuation in the dissipation rate is yet to be characterized for small systems. Such an enzymatic reaction flow system can be a very good testing ground to systematically understand the rare events from the large deviation theory which is beyond fluctuation theorem and central limit theorem.

  6. Interior micro-CT with an offset detector

    PubMed Central

    Sharma, Kriti Sen; Gong, Hao; Ghasemalizadeh, Omid; Yu, Hengyong; Wang, Ge; Cao, Guohua

    2014-01-01

    Purpose: The size of the field-of-view (FOV) of a micro-computed tomography (CT) system can be increased by offsetting the detector. The increased FOV is beneficial in many applications. All prior investigations, however, have focused on the case in which the increased FOV after offset-detector acquisition fully covers the transaxial extent of the object. Here, the authors studied a new problem in which the FOV of a micro-CT system, although increased after offset-detector acquisition, still covers only an interior region-of-interest (ROI) within the object. Methods: An interior-ROI-oriented micro-CT scan with an offset detector poses a difficult reconstruction problem, caused by both detector offset and projection truncation. Using projection completion techniques, the authors first extended three previous reconstruction methods from offset-detector micro-CT to offset-detector interior micro-CT. The authors then proposed a novel method which combines two of the extended methods using a frequency split technique. The authors tested the four methods with phantom simulations at 9.4%, 18.8%, 28.2%, and 37.6% detector offset. The authors also applied these methods to physical phantom datasets acquired at the same amounts of detector offset from a customized micro-CT system. Results: When the detector offset was small, all reconstruction methods showed good image quality. At large detector offset, the three extended methods gave either visible shading artifacts or high deviation of pixel values, while the authors’ proposed method demonstrated no visible artifacts and minimal deviation of pixel values in both the numerical simulations and physical experiments. Conclusions: For interior micro-CT with an offset detector, the three extended reconstruction methods perform well at a small detector offset but show strong artifacts at a large detector offset. When the detector offset is large, the authors’ proposed reconstruction method can outperform the three extended methods by suppressing artifacts and maintaining pixel values. PMID:24877826

  7. Fighter agility metrics, research, and test

    NASA Technical Reports Server (NTRS)

    Liefer, Randall K.; Valasek, John; Eggold, David P.

    1990-01-01

    Proposed new metrics to assess fighter aircraft agility are collected and analyzed. A framework for classification of these new agility metrics is developed and applied. A complete set of transient agility metrics is evaluated with a high-fidelity, nonlinear F-18 simulation provided by the NASA Dryden Flight Research Center. Test techniques and data reduction methods are proposed. A method of providing cuing information to the pilot during flight test is discussed. The sensitivity of longitudinal and lateral agility metrics to deviations from the pilot cues is studied in detail. The metrics are shown to be largely insensitive to reasonable deviations from the nominal test pilot commands. Instrumentation required to quantify agility via flight test is also considered. With one exception, each of the proposed new metrics may be measured with instrumentation currently available. Simulation documentation and user instructions are provided in an appendix.

  8. Implicit Incompressible SPH.

    PubMed

    Ihmsen, Markus; Cornelis, Jens; Solenthaler, Barbara; Horvath, Christopher; Teschner, Matthias

    2013-07-25

    We propose a novel formulation of the projection method for Smoothed Particle Hydrodynamics (SPH). We combine a symmetric SPH pressure force and an SPH discretization of the continuity equation to obtain a discretized form of the pressure Poisson equation (PPE). In contrast to previous projection schemes, our system does consider the actual computation of the pressure force. This incorporation improves the convergence rate of the solver. Furthermore, we propose to compute the density deviation based on velocities instead of positions as this formulation improves the robustness of the time-integration scheme. We show that our novel formulation outperforms previous projection schemes and state-of-the-art SPH methods. Large time steps and small density deviations of down to 0.01% can be handled in typical scenarios. The practical relevance of the approach is illustrated by scenarios with up to 40 million SPH particles.
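
    A minimal sketch of the velocity-based density deviation (the SPH continuity equation evaluated with current velocities) is given below; the neighbor lists and kernel gradients are assumed precomputed, and the clamp to positive (compressive) deviations is an assumption in the spirit of incompressible SPH pressure solvers, not code from the paper.

    ```python
    import numpy as np

    def predicted_density_deviation(m, v, neighbors, gradW, dt):
        """Density deviation predicted from velocities via the SPH continuity
        equation:  drho_i/dt = sum_j m_j (v_i - v_j) . gradW_ij.
        neighbors[i] lists neighbor indices of particle i; gradW[i][n] is the
        (precomputed) kernel gradient to its n-th neighbor."""
        err = np.zeros(len(v))
        for i in range(len(v)):
            drho = 0.0
            for n, j in enumerate(neighbors[i]):
                drho += m[j] * np.dot(v[i] - v[j], gradW[i][n])
            err[i] = max(0.0, dt * drho)   # correct only compression
        return err
    ```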

  9. Implicit incompressible SPH.

    PubMed

    Ihmsen, Markus; Cornelis, Jens; Solenthaler, Barbara; Horvath, Christopher; Teschner, Matthias

    2014-03-01

    We propose a novel formulation of the projection method for Smoothed Particle Hydrodynamics (SPH). We combine a symmetric SPH pressure force and an SPH discretization of the continuity equation to obtain a discretized form of the pressure Poisson equation (PPE). In contrast to previous projection schemes, our system does consider the actual computation of the pressure force. This incorporation improves the convergence rate of the solver. Furthermore, we propose to compute the density deviation based on velocities instead of positions as this formulation improves the robustness of the time-integration scheme. We show that our novel formulation outperforms previous projection schemes and state-of-the-art SPH methods. Large time steps and small density deviations of down to 0.01 percent can be handled in typical scenarios. The practical relevance of the approach is illustrated by scenarios with up to 40 million SPH particles.

  10. Orientational alignment in cavity quantum electrodynamics

    NASA Astrophysics Data System (ADS)

    Keeling, Jonathan; Kirton, Peter G.

    2018-05-01

    We consider the orientational alignment of dipoles due to strong matter-light coupling for a nonvanishing density of excitations. We compare various approaches to this problem in the limit of large numbers of emitters and show that direct Monte Carlo integration, mean-field theory, and large deviation methods match exactly in this limit. All three results show that orientational alignment develops in the presence of a macroscopically occupied polariton mode and that the dipoles asymptotically approach perfect alignment in the limit of high density or low temperature.

  11. Wigner time-delay distribution in chaotic cavities and freezing transition.

    PubMed

    Texier, Christophe; Majumdar, Satya N

    2013-06-21

    Using the joint distribution for proper time delays of a chaotic cavity derived by Brouwer, Frahm, and Beenakker [Phys. Rev. Lett. 78, 4737 (1997)], we obtain, in the limit of a large number of channels N, the large deviation function for the distribution of the Wigner time delay (the sum of proper times) by a Coulomb gas method. We show that the existence of a power-law tail originates from narrow resonance contributions, related to a (second-order) freezing transition in the Coulomb gas.

  12. Large Deviations and Quasipotential for Finite State Mean Field Interacting Particle Systems

    DTIC Science & Technology

    2014-05-01

    The conclusion then follows by applying Lemma 4.4.2. We employ the Gauss-Seidel iterative method for our numerical experiments, using the nearest-neighborhood structure described in Section 4.4.2. In the iteration, the value is prescribed for x ∈ Bh and set to M for x ∈ Sh\Bh, where M ∈ (V, ∞) is a very large number, so that the iteration (4.5.1) converges quickly. For simplicity, we restrict our...
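
    For reference, a generic Gauss-Seidel solver of the kind named in the excerpt is sketched below; this illustrates the method itself and is not the report's solver.

    ```python
    import numpy as np

    def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=10_000):
        """Solve A x = b by Gauss-Seidel sweeps (converges e.g. for strictly
        diagonally dominant A); updated components are used immediately."""
        n = len(b)
        x = np.zeros(n) if x0 is None else x0.astype(float).copy()
        for _ in range(max_iter):
            x_old = x.copy()
            for i in range(n):
                sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                x[i] = (b[i] - sigma) / A[i, i]
            if np.linalg.norm(x - x_old, np.inf) < tol:
                break
        return x

    A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
    b = np.array([1.0, 2.0, 3.0])
    print(gauss_seidel(A, b), np.linalg.solve(A, b))   # the two should agree
    ```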

  13. GPU-Q-J, a fast method for calculating root mean square deviation (RMSD) after optimal superposition

    PubMed Central

    2011-01-01

    Background Calculation of the root mean square deviation (RMSD) between the atomic coordinates of two optimally superposed structures is a basic component of structural comparison techniques. We describe a quaternion-based method, GPU-Q-J, that is stable with single-precision calculations and suitable for graphics processing units (GPUs). The application was implemented on an ATI 4770 graphics card in C/C++ and Brook+ in Linux, where it was 260 to 760 times faster than existing unoptimized CPU methods. Source code is available from the Compbio website http://software.compbio.washington.edu/misc/downloads/st_gpu_fit/ or from the author LHH. Findings The Nutritious Rice for the World Project (NRW) on World Community Grid predicted, de novo, the structures of over 62,000 small proteins and protein domains, returning a total of 10 billion candidate structures. Clustering ensembles of structures on this scale requires calculation of large similarity matrices consisting of RMSDs between each pair of structures in the set. As a real-world test, we calculated the matrices for 6 different ensembles from NRW. The GPU method was 260 times faster than the fastest existing CPU-based method and over 500 times faster than the method that had been previously used. Conclusions GPU-Q-J is a significant advance over previous CPU methods. It relieves a major bottleneck in the clustering of large numbers of structures for NRW. It also has applications in structure comparison methods that involve multiple superposition and RMSD determination steps, particularly when such methods are applied on a proteome- and genome-wide scale. PMID:21453553
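
    As a CPU reference for what such a kernel computes, here is a hedged sketch using the SVD-based Kabsch algorithm, which yields the same minimal RMSD as quaternion formulations such as GPU-Q-J; the quaternion machinery itself is not reproduced here.

    ```python
    import numpy as np

    def kabsch_rmsd(P, Q):
        """RMSD between two (N, 3) coordinate sets after optimal superposition,
        via the SVD-based Kabsch algorithm."""
        P = P - P.mean(axis=0)                       # center both structures
        Q = Q - Q.mean(axis=0)
        U, S, Vt = np.linalg.svd(P.T @ Q)
        d = np.sign(np.linalg.det(U @ Vt))           # avoid improper rotation
        R = U @ np.diag([1.0, 1.0, d]) @ Vt          # optimal rotation
        return np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1)))

    rng = np.random.default_rng(4)
    P = rng.random((100, 3))
    theta = 0.3                                      # rotate P by a known rotation
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
    print(kabsch_rmsd(P, P @ Rz.T))                  # ~0 up to round-off
    ```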

  14. Effect of Stress on Energy Flux Deviation of Ultrasonic Waves in Ultrasonic Waves in GR/EP Composites

    NASA Technical Reports Server (NTRS)

    Prosser, William H.; Kriz, R. D.; Fitting, Dale W.

    1990-01-01

    Ultrasonic waves suffer energy flux deviation in graphite/epoxy because of the large anisotropy. The angle of deviation is a function of the elastic coefficients. For nonlinear solids, these coefficients, and thus the angle of deviation, are a function of stress. Acoustoelastic theory was used to model the effect of stress on flux deviation for unidirectional T300/5208 using previously measured elastic coefficients. Computations were made for uniaxial stress along the x3 axis (fiber axis) and the x1 axis, for waves propagating in the x1x3 plane. These results predict a shift as large as three degrees for the quasi-transverse wave. The shift in energy flux offers a new nondestructive technique for evaluating stress in composites.

  15. 78 FR 6232 - Energy Conservation Program: Test Procedures for Conventional Cooking Products With Induction...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-30

    ... Measured surface unit efficiencies with deviations and confidence intervals (excerpt):

        Surface unit    Technology       Unit    Efficiency (%)    Deviation (%)    Interval (%)
        Large A         Electric coil    1       69.79             1.59             1.97
        Large A         Electric coil    1       64.52             0.87             1.08
        Large A         Electric coil    1       79.81             1.66             2.06
        B               Electric         1       61.81             2.83             3.52

  16. A hybrid method with deviational particles for spatial inhomogeneous plasma

    NASA Astrophysics Data System (ADS)

    Yan, Bokai

    2016-03-01

    In this work we propose a Hybrid method with Deviational Particles (HDP) for a plasma modeled by the inhomogeneous Vlasov-Poisson-Landau system. We split the distribution into a Maxwellian part evolved by a grid based fluid solver and a deviation part simulated by numerical particles. These particles, named deviational particles, could be both positive and negative. We combine the Monte Carlo method proposed in [31], a Particle in Cell method and a Macro-Micro decomposition method [3] to design an efficient hybrid method. Furthermore, coarse particles are employed to accelerate the simulation. A particle resampling technique on both deviational particles and coarse particles is also investigated and improved. This method is applicable in all regimes and significantly more efficient compared to a PIC-DSMC method near the fluid regime.
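
    Schematically, the splitting at the heart of such deviational methods can be written as follows; the notation is assumed here for illustration and is not taken from the paper.

    ```latex
    % Schematic splitting used by deviational particle methods: the Maxwellian
    % carries the bulk on a grid, signed particles carry only the deviation.
    f(\mathbf{x},\mathbf{v},t)
      = \mathcal{M}_{\rho,\mathbf{u},T}(\mathbf{v})
      + f^{d}(\mathbf{x},\mathbf{v},t),
    \qquad
    f^{d} \approx \sum_{k} s_k\, w\,
      \delta(\mathbf{x}-\mathbf{x}_k)\,\delta(\mathbf{v}-\mathbf{v}_k),
    \quad s_k = \pm 1 .
    ```

    Near the fluid regime f^d is small, so few particles are needed, which is the source of the efficiency gain over a plain PIC-DSMC treatment.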

  17. Radiotherapy quality assurance report from children's oncology group AHOD0031

    PubMed Central

    Dharmarajan, Kavita V.; Friedman, Debra L.; FitzGerald, T.J.; McCarten, Kathleen M.; Constine, Louis S.; Chen, Lu; Kessel, Sandy K.; Iandoli, Matt; Laurie, Fran; Schwartz, Cindy L.; Wolden, Suzanne L.

    2016-01-01

    Purpose A phase III trial assessing response-based therapy in intermediate-risk Hodgkin lymphoma mandated real-time central review of involved-field radiotherapy (IFRT) and imaging records by a centralized review center to maximize protocol compliance. We report the impact of centralized radiotherapy review upon protocol compliance. Methods Review of simulation films, port films, and dosimetry records was required pre-treatment and after treatment completion. Records were reviewed by study-affiliated or review-center-affiliated radiation oncologists. A 6–10% deviation from protocol-specified dose was scored as “minor”; >10% was “major”. A volume deviation was scored as “minor” if margins were less than specified, or “major” if fields transected disease-bearing areas. Interventional review and final compliance review scores were assigned to each radiotherapy case and compared. Results Of 1712 patients enrolled, 1173 underwent IFRT at 256 institutions in 7 countries. An interventional review was performed in 88% and a final review in 98%. Overall, minor and major deviations were found in 12% and 6%, respectively. Among the cases for which ≥ 1 pre-IFRT modification was requested by QARC and subsequently made by the treating institution, 100% were made compliant on final review. In contrast, among the cases for which ≥ 1 modification was requested but not made by the treating institution, 10% were deemed compliant on final review. Conclusion In a large trial with complex treatment pathways and heterogeneous radiotherapy fields, central review was performed in a large percentage of cases pre-IFRT and identified frequent potential deviations in a timely manner. When suggested modifications were performed by the institutions, deviations were almost eliminated. PMID:25670539

  18. Motion-robust intensity-modulated proton therapy for distal esophageal cancer.

    PubMed

    Yu, Jen; Zhang, Xiaodong; Liao, Li; Li, Heng; Zhu, Ronald; Park, Peter C; Sahoo, Narayan; Gillin, Michael; Li, Yupeng; Chang, Joe Y; Komaki, Ritsuko; Lin, Steven H

    2016-03-01

    To develop methods for evaluation and mitigation of the dosimetric impact of respiratory and diaphragmatic motion during free breathing in treatment of distal esophageal cancers using intensity-modulated proton therapy (IMPT). This was a retrospective study of 11 patients with distal esophageal cancer. For each patient, four-dimensional computed tomography (4D CT) data were acquired, and a nominal dose was calculated on the average phase of the 4D CT. The changes in water equivalent thickness (ΔWET) to cover the treatment volume from the peak of inspiration to the valley of expiration were calculated for a full range of beam angle rotation. Two IMPT plans were calculated: one at beam angles corresponding to small ΔWET and one at beam angles corresponding to large ΔWET. Four patients were selected for the calculation of 4D-robustness-optimized IMPT plans because of the large motion-induced dose errors generated in conventional IMPT. To quantitatively evaluate motion-induced dose deviation, the authors calculated the lowest dose received by 95% of the internal clinical target volume (D95) for the nominal dose, the D95 calculated on the maximum inhale and exhale phases of the 4D CT (D_CT0 and D_CT50), the 4D composite dose, and the 4D dynamic dose for a single fraction. The dose deviation increased with the average ΔWET of the implemented beams, ΔWET_ave. When ΔWET_ave was less than 5 mm, the dose error was less than 1 cobalt gray equivalent based on D_CT0 and D_CT50. The dose deviation determined on the basis of D_CT0 and D_CT50 was proportionally larger than that determined on the basis of the 4D composite dose. The 4D-robustness-optimized IMPT plans notably reduced the overall dose deviation over multiple fractions and the dose deviation caused by the interplay effect in a single fraction. In IMPT for distal esophageal cancer, ΔWET analysis can be used to select the beam angles that are least affected by respiratory and diaphragmatic motion. To further reduce dose deviation, 4D-robustness optimization can be implemented for IMPT planning. Calculation of D_CT0 and D_CT50 is a conservative method to estimate motion-induced dose errors.

  19. Analysis of Different Fragmentation Strategies on a Variety of Large Peptides: Implementation of a Low Level of Theory in Fragment-Based Methods Can Be a Crucial Factor.

    PubMed

    Saha, Arjun; Raghavachari, Krishnan

    2015-05-12

    We have investigated the performance of two classes of fragmentation methods developed in our group (Molecules-in-Molecules (MIM) and Many-Overlapping-Body (MOB) expansion), to reproduce the unfragmented MP2 energies on a test set composed of 10 small to large biomolecules. They have also been assessed to recover the relative energies of different motifs of the acetyl(ala)18NH2 system. Performance of different bond-cutting environments and the use of Hartree-Fock and different density functionals (as a low level of theory) in conjunction with the fragmentation strategies have been analyzed. Our investigation shows that while a low level of theory (for recovering long-range interactions) may not be necessary for small peptides, it provides a very effective strategy to accurately reproduce the total and relative energies of larger peptides such as the different motifs of the acetyl(ala)18NH2 system. Employing M06-2X as the low level of theory, the calculated mean total energy deviation (maximum deviation) in the total MP2 energies for the 10 molecules in the test set at MIM(d=3.5Å), MIM(η=9), and MOB(d=5Å) are 1.16 (2.31), 0.72 (1.87), and 0.43 (2.02) kcal/mol, respectively. The excellent performance suggests that such fragment-based methods should be of general use for the computation of accurate energies of large biomolecular systems.
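
    The general shape of such two-layer fragment-based energies can be written schematically as follows; the notation is assumed for illustration and is not the paper's exact working equation.

    ```latex
    % Schematic two-layer fragment energy: high level (e.g. MP2) on fragments,
    % low level (e.g. HF or M06-2X) on the whole molecule to restore the
    % long-range interactions that fragmentation discards.
    E \approx E^{\mathrm{low}}(\mathrm{whole})
      + \sum_{i} c_i \left[ E^{\mathrm{high}}(F_i) - E^{\mathrm{low}}(F_i) \right],
    ```

    where the F_i are (possibly overlapping) fragments and the c_i are inclusion-exclusion coefficients that prevent double counting of the overlaps.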

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, A.; Davis, A.; University of Wisconsin-Madison, Madison, WI 53706

    CCFE perform Monte Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep-penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore, some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance, but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep-penetration effects. (authors)
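
    A schematic of the underlying weight-window logic, with a cap on the splitting multiplicity standing in for the dynamic 'de-optimisation' described above, is sketched below; this is an illustration of the idea, not MCNP's implementation.

    ```python
    import numpy as np

    def apply_weight_window(weight, w_low, w_high, max_split=10,
                            rng=np.random.default_rng()):
        """Return the list of particle weights after the window check.
        Splitting is capped: a huge arriving weight is split at most
        `max_split` ways, trading variance reduction for shorter histories."""
        if weight > w_high:
            n = min(int(np.ceil(weight / w_high)), max_split)
            return [weight / n] * n              # split into n copies
        if weight < w_low:
            # Russian roulette: survive with probability weight / w_low,
            # continuing at weight w_low (unbiased on average)
            return [w_low] if rng.random() < weight / w_low else []
        return [weight]

    print(apply_weight_window(250.0, 0.5, 2.0))  # capped at 10 splits
    print(apply_weight_window(0.05, 0.5, 2.0))   # rouletted
    ```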

  1. Dispersion in Rectangular Networks: Effective Diffusivity and Large-Deviation Rate Function

    NASA Astrophysics Data System (ADS)

    Tzella, Alexandra; Vanneste, Jacques

    2016-09-01

    The dispersion of a diffusive scalar in a fluid flowing through a network has many applications, including to biological flows, porous media, water supply, and urban pollution. Motivated by this, we develop a large-deviation theory that predicts the evolution of the concentration of a scalar released in a rectangular network in the limit of large time t ≫ 1. This theory provides an approximation for the concentration that remains valid for large distances from the center of mass, specifically for distances up to O(t) and thus much beyond the O(t^{1/2}) range where a standard Gaussian approximation holds. A byproduct of the approach is a closed-form expression for the effective diffusivity tensor that governs this Gaussian approximation. Monte Carlo simulations of Brownian particles confirm the large-deviation results and demonstrate their effectiveness in describing the scalar distribution when t is only moderately large.

  2. A General Conditional Large Deviation Principle

    DOE PAGES

    La Cour, Brian R.; Schieve, William C.

    2015-07-18

    Given a sequence of Borel probability measures on a Hausdorff space which satisfy a large deviation principle (LDP), we consider the corresponding sequence of measures formed by conditioning on a set B. If the large deviation rate function I is good and effectively continuous, and the conditioning set has the properties that (1) $\overline{B^{\circ}} = \overline{B}$ and (2) $I(x) < \infty$ for all $x \in \overline{B}$, then the sequence of conditional measures satisfies an LDP with the good, effectively continuous rate function $I_B$, where $I_B(x) = I(x) - \inf I(B)$ if $x \in \overline{B}$ and $I_B(x) = \infty$ otherwise.

  3. Extraction of Coastlines with Fuzzy Approach Using SENTINEL-1 SAR Image

    NASA Astrophysics Data System (ADS)

    Demir, N.; Kaynarca, M.; Oy, S.

    2016-06-01

    Coastlines are important features for water resources, sea products, energy resources, etc. Coastlines change dynamically, so automated methods are necessary for analysing and detecting changes along them. In this study, a Sentinel-1 C-band SAR image has been used to extract the coastline with a fuzzy logic approach. The SAR image has VH polarisation and 10 × 10 m spatial resolution, and covers a 57 km² area in the south-east of Puerto Rico. Radiometric calibration is applied to reduce atmospheric and orbit errors, and a speckle filter is used to reduce noise. The image is then terrain-corrected using the SRTM digital surface model. Classification of a SAR image is a challenging task because SAR and optical sensors have very different properties; even between different bands of a SAR sensor, the images look very different, so classifying a SAR image with traditional unsupervised methods is difficult. In this study, a fuzzy approach has been applied to distinguish coastal pixels from land-surface pixels. The mean, median, and standard deviation are calculated for use as parameters in the fuzzy approach. The mean-standard-deviation (MS) Large membership function is used because the large numbers of land and ocean pixels dominate the SAR image with large mean and standard deviation values. The pixel values are multiplied by 1000 to simplify the calculations. The mean is calculated as 23 and the standard deviation as 12 for the whole image. The multiplier parameters are selected as a: 0.58, b: 0.05 to maximize the land-surface membership. The result is evaluated using airborne LIDAR data, for the areas where a LIDAR dataset is available, and secondly against a manually digitized coastline. Laser points below 0.5 m are classified as ocean points. The 3D alpha-shapes algorithm is used to detect coastline points from the LIDAR data. Minimum distances are calculated between the LIDAR coastline points and the extracted coastline. The statistics of these distances are as follows: the mean is 5.82 m, the standard deviation is 5.83 m, and the median is 4.08 m. Secondly, the extracted coastline is evaluated against lines manually created on the SAR image. Both lines are converted to dense points at 1 m intervals, and the closest distances are calculated between the points of the extracted and the manually created coastlines. The mean is 5.23 m, the standard deviation is 4.52 m, and the median is 4.13 m. The evaluation values are within the accuracy of the SAR data used for both quality-assessment approaches.

  4. Large Deviations: Advanced Probability for Undergrads

    ERIC Educational Resources Information Center

    Rolls, David A.

    2007-01-01

    In the branch of probability called "large deviations," rates of convergence (e.g. of the sample mean) are considered. The theory makes use of the moment generating function. So, particularly for sums of independent and identically distributed random variables, the theory can be made accessible to senior undergraduates after a first course in…

  5. Moderate deviations-based importance sampling for stochastic recursive equations

    DOE PAGES

    Dupuis, Paul; Johnson, Dane

    2017-11-17

    Subsolutions to the Hamilton–Jacobi–Bellman equation associated with a moderate deviations approximation are used to design importance sampling changes of measure for stochastic recursive equations. Analogous to what has been done for large deviations subsolution-based importance sampling, these schemes are shown to be asymptotically optimal under the moderate deviations scaling. We present various implementations and numerical results to contrast their performance, and also discuss the circumstances under which a moderate deviation scaling might be appropriate.

  6. Moderate deviations-based importance sampling for stochastic recursive equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dupuis, Paul; Johnson, Dane

    Subsolutions to the Hamilton–Jacobi–Bellman equation associated with a moderate deviations approximation are used to design importance sampling changes of measure for stochastic recursive equations. Analogous to what has been done for large deviations subsolution-based importance sampling, these schemes are shown to be asymptotically optimal under the moderate deviations scaling. We present various implementations and numerical results to contrast their performance, and also discuss the circumstances under which a moderate deviation scaling might be appropriate.

  7. Incorporating Active Runway Crossings in Airport Departure Scheduling

    NASA Technical Reports Server (NTRS)

    Gupta, Gautam; Malik, Waqar; Jung, Yoon C.

    2010-01-01

    A mixed integer linear program is presented for deterministically scheduling departure and arrival aircraft at airport runways. This method addresses different schemes of managing the departure queuing area by treating it as first-in-first-out queues or as a simple parking area where any available aircraft can take off irrespective of its relative sequence with others. In addition, this method explicitly considers separation criteria between successive aircraft and also incorporates an optional prioritization scheme using time windows. Multiple objectives pertaining to throughput and system delay are used independently. Results indicate improvement over a basic first-come-first-served rule in both system delay and throughput. Minimizing system delay results in small deviations from optimal throughput, whereas maximizing throughput results in large deviations in system delay. Enhancements for computational efficiency are also presented in the form of reformulating certain constraints and defining additional inequalities for better bounds.

  8. Reducing the standard deviation in multiple-assay experiments where the variation matters but the absolute value does not.

    PubMed

    Echenique-Robba, Pablo; Nelo-Bazán, María Alejandra; Carrodeguas, José A

    2013-01-01

    When the value of a quantity x for a number of systems (cells, molecules, people, chunks of metal, DNA vectors, and so on) is measured and the aim is to replicate the whole set again in different trials or assays, scientists often obtain quite different measurements despite their efforts at a near-equal design. As a consequence, some systems' averages present standard deviations that are too large to render statistically significant results. This work presents a novel correction method of very low mathematical and numerical complexity that can reduce the standard deviation of such results and increase their statistical significance. Two conditions must be met: the inter-system variations of x matter while its absolute value does not, and a similar tendency in the values of x must be present in the different assays (in other words, the results of different assays must present a high linear correlation). We demonstrate the improvements this method offers with a cell biology experiment, but it can be applied to any problem that conforms to the described structure and requirements, in any quantitative scientific field that deals with data subject to uncertainty.

  9. A Genetic Algorithm for Flow Shop Scheduling with Assembly Operations to Minimize Makespan

    NASA Astrophysics Data System (ADS)

    Bhongade, A. S.; Khodke, P. M.

    2014-04-01

    Manufacturing systems in which several parts are processed through machining workstations and later assembled to form final products are common. Though scheduling of such problems is solved using heuristics, available solution approaches can handle only moderately sized problems because of the large computation time required. In this work, a scheduling approach is developed for such flow-shop manufacturing systems with machining workstations followed by assembly workstations. The initial schedule is generated using the disjunctive method, and a genetic algorithm (GA) is then applied to generate schedules for large problems. GA is found to give near-optimal solutions based on the deviation of makespan from the lower bound. The lower bound of the makespan of such problems is estimated, and the percent deviation of makespan from the lower bound is used as a performance measure to evaluate the schedules. Computational experiments are conducted on problems developed using a fractional factorial orthogonal array, varying the number of parts per product, the number of products, and the number of workstations (ranging up to 1,520 operations). A statistical analysis indicated the significance of all three factors considered. It is concluded that the GA method can obtain optimal makespan.

  10. Evaluation of Large-scale Data to Detect Irregularity in Payment for Medical Services. An Extended Use of Benford's Law.

    PubMed

    Park, Junghyun A; Kim, Minki; Yoon, Seokjoon

    2016-05-17

    Sophisticated anti-fraud systems for the healthcare sector have been built on several statistical methods. Although existing methods can detect fraud in the healthcare sector, these algorithms consume considerable time and cost and lack a theoretical basis for handling large-scale data. Based on mathematical theory, this study proposes a new approach to using Benford's Law: we closely examine individual-level data to identify specific fees for in-depth analysis. We extended the mathematical theory to demonstrate the manner in which large-scale data conform to Benford's Law. Then, we empirically tested its applicability using actual large-scale healthcare data from Korea's Health Insurance Review and Assessment (HIRA) National Patient Sample (NPS). For Benford's Law, we considered the mean absolute deviation (MAD) formula to test the large-scale data. We conducted our study on 32 diseases, comprising 25 representative diseases and 7 DRG-regulated diseases. We performed an empirical test on the 25 diseases, showing the applicability of Benford's Law to large-scale data in the healthcare industry. For the seven DRG-regulated diseases, we examined the individual-level data to identify specific fees for an in-depth analysis. Among the eight categories of medical costs, we assessed the strength of certain irregularities based on the details of each DRG-regulated disease. Using the degree of abnormality, we propose priority actions to be taken by government health departments and private insurance institutions to bring unnecessary medical expenses under control. However, detecting deviations from Benford's Law requires relatively high contamination ratios at conventional significance levels.
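
    As a concrete illustration of the MAD test for Benford conformity mentioned in the abstract, here is a minimal sketch; the fee data are hypothetical and the study's actual screening procedure is more elaborate.

```python
import numpy as np

BENFORD = np.log10(1.0 + 1.0 / np.arange(1, 10))  # expected P(d), d = 1..9

def first_digits(x):
    """First significant digit of each positive value."""
    x = np.asarray(x, dtype=float)
    exp = np.floor(np.log10(x))
    return (x / 10.0 ** exp).astype(int)

def benford_mad(x):
    """Mean absolute deviation between observed and Benford first-digit
    frequencies (the MAD statistic referred to in the abstract)."""
    d = first_digits(x)
    observed = np.bincount(d, minlength=10)[1:10] / d.size
    return np.mean(np.abs(observed - BENFORD))

# Hypothetical fee data: lognormal amounts roughly follow Benford's Law.
rng = np.random.default_rng(1)
fees = rng.lognormal(mean=10.0, sigma=2.0, size=100_000)
print(f"MAD = {benford_mad(fees):.5f}")  # small MAD indicates conformity
```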

  11. Estimate of standard deviation for a log-transformed variable using arithmetic means and standard deviations.

    PubMed

    Quan, Hui; Zhang, Ji

    2003-09-15

    Analyses of study variables are frequently based on log transformations. To calculate the power for detecting the between-treatment difference in the log scale, we need an estimate of the standard deviation of the log-transformed variable. However, in many situations a literature search only provides the arithmetic means and the corresponding standard deviations. Without individual log-transformed data to directly calculate the sample standard deviation, we need alternative methods to estimate it. This paper presents methods for estimating and constructing confidence intervals for the standard deviation of a log-transformed variable given the mean and standard deviation of the untransformed variable. It also presents methods for estimating the standard deviation of change from baseline in the log scale given the means and standard deviations of the untransformed baseline value, on-treatment value and change from baseline. Simulations and examples are provided to assess the performance of these estimates. Copyright 2003 John Wiley & Sons, Ltd.
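
    For the common special case where the untransformed variable is assumed to be lognormal, the standard deviation of the log-transformed variable follows in closed form from the arithmetic mean m and standard deviation s. A minimal sketch under that lognormality assumption; the paper's full method also covers change from baseline and confidence intervals.

```python
import math

def log_scale_sd(m, s):
    """SD of ln(X) when X is lognormal with arithmetic mean m and SD s:
    Var[ln X] = ln(1 + (s/m)^2)."""
    return math.sqrt(math.log(1.0 + (s / m) ** 2))

# Example: arithmetic mean 50, SD 25 -> SD on the log scale.
print(round(log_scale_sd(50.0, 25.0), 4))  # ~0.4724
```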

  12. Research on volume metrology method of large vertical energy storage tank based on internal electro-optical distance-ranging method

    NASA Astrophysics Data System (ADS)

    Hao, Huadong; Shi, Haolei; Yi, Pengju; Liu, Ying; Li, Cunjun; Li, Shuguang

    2018-01-01

    A volume metrology method based on the internal electro-optical distance-ranging method is established for large vertical energy-storage tanks. After analyzing the mathematical model for vertical tank volume calculation, the key processing algorithms for the point-cloud data, such as gross-error elimination, filtering, streamlining, and radius calculation, are studied. The volume at different liquid levels is calculated automatically by computing the cross-sectional area in the horizontal direction and integrating along the vertical direction. To design the comparison system, a vertical tank with a nominal capacity of 20,000 m³ is selected as the research object; the method is shown to have good repeatability and reproducibility. Using the conventional capacity-measurement method as a reference, the relative deviation of the calculated volume is less than 0.1%, meeting the measurement requirements and demonstrating the feasibility and effectiveness of the method.
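
    The core volume computation, integrating horizontal cross-sectional areas along the vertical axis, can be sketched as follows; the radii-per-height data here are hypothetical, and the paper's point-cloud processing, gross-error elimination, and filtering steps are omitted.

```python
import numpy as np

def tank_volume(heights, radii):
    """Volume up to each fill height by trapezoidal integration of the
    horizontal cross-sectional area A(h) = pi * r(h)^2."""
    areas = np.pi * np.asarray(radii) ** 2
    dh = np.diff(heights)
    slabs = 0.5 * (areas[1:] + areas[:-1]) * dh   # trapezoid per slab
    return np.concatenate(([0.0], np.cumsum(slabs)))

heights = np.linspace(0.0, 20.0, 201)         # m, hypothetical tank
radii = 17.8 + 0.02 * np.sin(heights)         # m, slight wall deviation
volumes = tank_volume(heights, radii)
print(f"capacity at 20 m fill: {volumes[-1]:.0f} m^3")  # ~20,000 m^3
```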

  13. iATTRACT: simultaneous global and local interface optimization for protein-protein docking refinement.

    PubMed

    Schindler, Christina E M; de Vries, Sjoerd J; Zacharias, Martin

    2015-02-01

    Protein-protein interactions are abundant in the cell, but to date structural data for a large number of complexes are lacking. Computational docking methods can complement experiments by providing structural models of complexes based on the structures of the individual partners. A major challenge for docking success is accounting for protein flexibility; in particular, interface residues undergo significant conformational changes upon binding. This limits the performance of docking methods that keep partner structures rigid or allow only limited flexibility. A new docking refinement approach, iATTRACT, has been developed which combines simultaneous full interface flexibility and rigid-body optimizations during docking energy minimization. It employs an atomistic molecular mechanics force field for intermolecular interface interactions and a structure-based force field for intramolecular contributions. The approach was systematically evaluated on a large protein-protein docking benchmark, starting from an enriched decoy set of rigidly docked protein-protein complexes deviating by up to 15 Å from the native structure at the interface. Large improvements in sampling and slight but significant improvements in scoring/discrimination of near-native docking solutions were observed. Complexes with initial deviations at the interface of up to 5.5 Å were refined to significantly better agreement with the native structure. Improvements in the fraction of native contacts were especially favorable, yielding increases of up to 70%. © 2014 Wiley Periodicals, Inc.

  14. Deviation of landmarks in accordance with methods of establishing reference planes in three-dimensional facial CT evaluation.

    PubMed

    Yoon, Kaeng Won; Yoon, Suk-Ja; Kang, Byung-Cheol; Kim, Young-Hee; Kook, Min Suk; Lee, Jae-Seo; Palomo, Juan Martin

    2014-09-01

    This study aimed to investigate the deviation of landmarks from horizontal or midsagittal reference planes according to the methods of establishing reference planes. Computed tomography (CT) scans of 18 patients who received orthodontic and orthognathic surgical treatment were reviewed. Each CT scan was reconstructed by three methods for establishing three orthogonal reference planes (namely, the horizontal, midsagittal, and coronal reference planes). The horizontal (bilateral porions and bilateral orbitales) and midsagittal (crista galli, nasion, prechiasmatic point, opisthion, and anterior nasal spine) landmarks were identified on each CT scan. Vertical deviation of the horizontal landmarks and horizontal deviation of the midsagittal landmarks were measured. The porion and orbitale, which were not involved in establishing the horizontal reference plane, were found to deviate vertically from the horizontal reference plane in the three methods. The midsagittal landmarks, which were not used for the midsagittal reference plane, deviated horizontally from the midsagittal reference plane in the three methods. In a three-dimensional facial analysis, the vertical and horizontal deviations of the landmarks from the horizontal and midsagittal reference planes could vary depending on the methods of establishing reference planes.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qu, H; Yu, N; Qi, P

    Purpose: In commercial secondary dose calculation systems, an average effective depth is used to calculate the monitor units (MUs) for an arc beam from volumetric modulated arc therapy (VMAT) plans. Typically, an arithmetic mean of the effective depths (AMED) of a VMAT arc beam is used, which may result in a large MU discrepancy from that of the primary treatment planning system. This study demonstrates that using a dose-weighted mean effective depth (DWED) can improve the accuracy of MU calculation for secondary MU verification. Methods: In-house scripts were written in the primary treatment planning system (TPS) to first convert a VMAT arc beam to a series of static step-and-shoot beams (every 4 degrees). The computed dose and effective depth of each static beam were then used to obtain the dose-weighted mean effective depth (DWED) for the VMAT beam. The DWED was used for the secondary MU calculation for VMAT plans. Six lung SBRT VMAT plans, eight head-and-neck VMAT plans, and ten prostate VMAT plans that had > 5% MU deviations (failed MU verification) using the AMED method were recalculated with the DWED. For comparison, the same number of VMAT plans that had < 5% MU deviations (passed MU verification) using the AMED method were also reevaluated with the dose-weighted mean effective depth method. Results: For plans that passed MU verification, the mean and standard deviation of MU differences between the TPS and the secondary calculation program were 2.2% ± 1.5% for the AMED and 2.1% ± 1.7% for the DWED method. For the failed plans, the mean and standard deviation of MU differences between the TPS and the secondary calculation program were 9.9% ± 4.7% and 4.7% ± 2.6%, respectively. Conclusion: The dose-weighted mean effective depth improved MU calculation accuracy and can be used for pre-treatment MU verification of VMAT plans.
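
    A minimal sketch of the dose-weighted mean effective depth, assuming the arc has already been converted to static beams with per-beam doses D_i and effective depths d_i (the input arrays below are hypothetical):

```python
import numpy as np

def mean_effective_depth(depths, doses=None):
    """Arithmetic mean effective depth (AMED) when doses is None,
    otherwise the dose-weighted mean effective depth (DWED):
        DWED = sum_i(D_i * d_i) / sum_i(D_i)."""
    depths = np.asarray(depths, dtype=float)
    if doses is None:
        return depths.mean()
    doses = np.asarray(doses, dtype=float)
    return np.sum(doses * depths) / np.sum(doses)

# Hypothetical VMAT arc sampled every 4 degrees (90 static beams):
depths = np.linspace(60.0, 140.0, 90)                       # mm
doses = np.exp(-0.5 * ((np.arange(90) - 30) / 10.0) ** 2)   # dose per beam
print(f"AMED = {mean_effective_depth(depths):.1f} mm")
print(f"DWED = {mean_effective_depth(depths, doses):.1f} mm")
```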

  16. Large Deviations for Nonlocal Stochastic Neural Fields

    PubMed Central

    2014-01-01

    We study the effect of additive noise on integro-differential neural field equations. In particular, we analyze an Amari-type model driven by a Q-Wiener process, and focus on noise-induced transitions and escape. We argue that proving a sharp Kramers’ law for neural fields poses substantial difficulties, but that one may transfer techniques from stochastic partial differential equations to establish a large deviation principle (LDP). Then we demonstrate that an efficient finite-dimensional approximation of the stochastic neural field equation can be achieved using a Galerkin method and that the resulting finite-dimensional rate function for the LDP can have a multiscale structure in certain cases. These results form the starting point for an efficient practical computation of the LDP. Our approach also provides the technical basis for further rigorous study of noise-induced transitions in neural fields based on Galerkin approximations. Mathematics Subject Classification (2000): 60F10, 60H15, 65M60, 92C20. PMID:24742297

  17. Reliable detection of fluence anomalies in EPID-based IMRT pretreatment quality assurance using pixel intensity deviations

    PubMed Central

    Gordon, J. J.; Gardner, J. K.; Wang, S.; Siebers, J. V.

    2012-01-01

    Purpose: This work uses repeat images of intensity modulated radiation therapy (IMRT) fields to quantify fluence anomalies (i.e., delivery errors) that can be reliably detected in electronic portal images used for IMRT pretreatment quality assurance. Methods: Repeat images of 11 clinical IMRT fields are acquired on a Varian Trilogy linear accelerator at energies of 6 MV and 18 MV. Acquired images are corrected for output variations and registered to minimize the impact of linear accelerator and electronic portal imaging device (EPID) positioning deviations. Detection studies are performed in which rectangular anomalies of various sizes are inserted into the images. The performance of detection strategies based on pixel intensity deviations (PIDs) and gamma indices is evaluated using receiver operating characteristic analysis. Results: Residual differences between registered images are due to interfraction positional deviations of jaws and multileaf collimator leaves, plus imager noise. Positional deviations produce large intensity differences that degrade anomaly detection. Gradient effects are suppressed in PIDs using gradient scaling. Background noise is suppressed using median filtering. In the majority of images, PID-based detection strategies can reliably detect fluence anomalies of ≥5% in ∼1 mm2 areas and ≥2% in ∼20 mm2 areas. Conclusions: The ability to detect small dose differences (≤2%) depends strongly on the level of background noise. This in turn depends on the accuracy of image registration, the quality of the reference image, and field properties. The longer term aim of this work is to develop accurate and reliable methods of detecting IMRT delivery errors and variations. The ability to resolve small anomalies will allow the accuracy of advanced treatment techniques, such as image guided, adaptive, and arc therapies, to be quantified. PMID:22894421
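
    A minimal sketch of a PID-based anomaly flag, assuming a registered reference image: gradient scaling suppresses edge effects and median filtering suppresses background noise, as described above. The scaling constant, threshold, and test field are hypothetical, not the paper's calibrated values.

```python
import numpy as np
from scipy.ndimage import median_filter

def flag_fluence_anomalies(measured, reference, k=5.0, threshold=0.02):
    """Pixel intensity deviations (PIDs) relative to a registered
    reference image. Gradients of the reference scale down PIDs near
    field edges; a median filter suppresses noise. Returns a boolean
    map of pixels whose relative deviation exceeds `threshold`."""
    gy, gx = np.gradient(reference)
    grad = np.hypot(gx, gy)
    pid = (measured - reference) / (reference + 1e-6)
    pid /= 1.0 + k * grad / (reference.max() + 1e-6)  # gradient scaling
    pid = median_filter(pid, size=3)                  # noise suppression
    return np.abs(pid) > threshold

# Hypothetical 5% anomaly inserted into a smooth reference field:
ref = np.outer(np.hanning(256), np.hanning(256)) + 0.1
meas = ref.copy()
meas[100:110, 100:110] *= 1.05
print(flag_fluence_anomalies(meas, ref)[100:110, 100:110].mean())
```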

  18. Effects of expected-value information and display format on recognition of aircraft subsystem abnormalities

    NASA Technical Reports Server (NTRS)

    Palmer, Michael T.; Abbott, Kathy H.

    1994-01-01

    This study identifies improved methods to present system parameter information for detecting abnormal conditions and to identify system status. Two workstation experiments were conducted. The first experiment determined if including expected-value-range information in traditional parameter display formats affected subject performance. The second experiment determined if using a nontraditional parameter display format, which presented relative deviation from expected value, was better than traditional formats with expected-value ranges included. The inclusion of expected-value-range information onto traditional parameter formats was found to have essentially no effect. However, subjective results indicated support for including this information. The nontraditional column deviation parameter display format resulted in significantly fewer errors compared with traditional formats with expected-value-ranges included. In addition, error rates for the column deviation parameter display format remained stable as the scenario complexity increased, whereas error rates for the traditional parameter display formats with expected-value ranges increased. Subjective results also indicated that the subjects preferred this new format and thought that their performance was better with it. The column deviation parameter display format is recommended for display applications that require rapid recognition of out-of-tolerance conditions, especially for a large number of parameters.

  19. Topology Trivialization and Large Deviations for the Minimum in the Simplest Random Optimization

    NASA Astrophysics Data System (ADS)

    Fyodorov, Yan V.; Le Doussal, Pierre

    2014-01-01

    Finding the global minimum of a cost function given by the sum of a quadratic and a linear form in N real variables over the (N-1)-dimensional sphere is one of the simplest, yet paradigmatic problems in Optimization Theory, known as the "trust region subproblem" or "constrained least squares problem". When both terms in the cost function are random, this amounts to studying the ground state energy of the simplest spherical spin glass in a random magnetic field. We first identify and study two distinct large-N scaling regimes in which the linear term (magnetic field) leads to a gradual topology trivialization, i.e. a reduction in the total number N_tot of critical (stationary) points in the cost function landscape. In the first regime N_tot remains of the order N and the cost function (energy) generically has two almost degenerate minima with Tracy-Widom (TW) statistics. In the second regime the number of critical points is of the order of unity, with a finite probability for a single minimum. In that case the mean total number of extrema (minima and maxima) of the cost function is given by the Laplace transform of the TW density, and the distribution of the global minimum energy is expected to take a universal scaling form generalizing the TW law. Though the full form of that distribution is not yet known to us, one of its far tails can be inferred from large deviation theory for the global minimum. In the rest of the paper we show how to use the replica method to obtain the probability density of the minimum energy in the large-deviation approximation by finding both the rate function and the leading pre-exponential factor.

  20. Testing the equivalence principle on cosmological scales

    NASA Astrophysics Data System (ADS)

    Bonvin, Camille; Fleury, Pierre

    2018-05-01

    The equivalence principle, which is one of the main pillars of general relativity, is very well tested in the Solar system; however, its validity is more uncertain on cosmological scales, or when dark matter is concerned. This article shows that relativistic effects in the large-scale structure can be used to directly test whether dark matter satisfies Euler's equation, i.e. whether its free fall is characterised by geodesic motion, just like baryons and light. After having proposed a general parametrisation for deviations from Euler's equation, we perform Fisher-matrix forecasts for future surveys like DESI and the SKA, and show that such deviations can be constrained with a precision of order 10%. Deviations from Euler's equation cannot be tested directly with standard methods like redshift-space distortions and gravitational lensing, since these observables are not sensitive to the time component of the metric. Our analysis shows therefore that relativistic effects bring new and complementary constraints to alternative theories of gravity.

  1. An adaptive beamforming method for ultrasound imaging based on the mean-to-standard-deviation factor.

    PubMed

    Wang, Yuanguo; Zheng, Chichao; Peng, Hu; Chen, Qiang

    2018-06-12

    Beamforming performance has a large impact on image quality in ultrasound imaging. Previously, several adaptive weighting factors, including the coherence factor (CF) and the generalized coherence factor (GCF), have been proposed to improve image resolution and contrast. In this paper, we propose a new adaptive weighting factor for ultrasound imaging, called the signal mean-to-standard-deviation factor (SMSF). SMSF is defined as the mean-to-standard-deviation ratio of the aperture data and is used to weight the output of the delay-and-sum (DAS) beamformer before image formation. Moreover, we develop a robust SMSF (RSMSF) by extending the SMSF to the spatial frequency domain using an altered spectrum of the aperture data. In addition, a square neighborhood average is applied to the RSMSF to offer a smoother square-neighborhood RSMSF (SN-RSMSF) value. We compared our methods with DAS, CF, and GCF using simulated and experimental synthetic aperture data sets. The quantitative results show that SMSF results in an 82% lower full width at half-maximum (FWHM) but a 12% lower contrast ratio (CR) compared with CF. Moreover, the SN-RSMSF leads to 15% and 10% improvements, on average, in FWHM and CR compared with GCF while maintaining speckle quality. This demonstrates that the proposed methods can effectively improve image resolution and contrast. Copyright © 2018 Elsevier B.V. All rights reserved.
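
    A minimal sketch of the SMSF weighting for a single image pixel, following the definition in the abstract; the aperture data are hypothetical, the absolute value on the mean is a practical choice of ours, and the frequency-domain RSMSF/SN-RSMSF variants are omitted.

```python
import numpy as np

def smsf_weighted_das(aperture_data, eps=1e-10):
    """Delay-and-sum output weighted by the signal mean-to-standard-
    deviation factor (SMSF) of the already-delayed aperture data for
    one pixel: w = |mean| / (std + eps)."""
    m = aperture_data.mean()
    w = abs(m) / (aperture_data.std() + eps)
    return w * m

# Coherent echo across 64 elements vs incoherent clutter:
rng = np.random.default_rng(2)
coherent = 1.0 + 0.1 * rng.standard_normal(64)
clutter = rng.standard_normal(64)
print(smsf_weighted_das(coherent))  # large: signal preserved
print(smsf_weighted_das(clutter))   # near zero: clutter suppressed
```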

  2. The large deviation function for entropy production: the optimal trajectory and the role of fluctuations

    NASA Astrophysics Data System (ADS)

    Speck, Thomas; Engel, Andreas; Seifert, Udo

    2012-12-01

    We study the large deviation function for the entropy production rate in two driven one-dimensional systems: the asymmetric random walk on a discrete lattice and Brownian motion in a continuous periodic potential. We compare two approaches: using the Donsker-Varadhan theory and using the Freidlin-Wentzell theory. We show that the wings of the large deviation function are dominated by a single optimal trajectory: either in the forward direction (positive rate) or in the backward direction (negative rate). The joining of the two branches at zero entropy production implies a non-differentiability and thus the appearance of a ‘kink’. However, around zero entropy production, many trajectories contribute and thus the ‘kink’ is smeared out.

  3. Locality and nonlocality of classical restrictions of quantum spin systems with applications to quantum large deviations and entanglement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Roeck, W.; Maes, C.; Schütz, M.

    2015-02-15

    We study the projection onto classical spins starting from quantum equilibria. We show Gibbsianness or quasi-locality of the resulting classical spin system for a class of gapped quantum systems at low temperatures, including quantum ground states. A consequence of Gibbsianness is the validity of a large deviation principle in the quantum system, which is known and here recovered in regimes of high temperature or for thermal states in one dimension. On the other hand, we give an example of a quantum ground state with strong nonlocality in the classical restriction, giving rise to what we call measurement-induced entanglement and still satisfying a large deviation principle.

  4. Hoeffding Type Inequalities and their Applications in Statistics and Operations Research

    NASA Astrophysics Data System (ADS)

    Daras, Tryfon

    2007-09-01

    Large deviation theory is the branch of probability theory that deals with rare events. Sometimes these events can be described by a sum of random variables that deviates from its mean by more than a "normal" amount. A precise calculation of the probabilities of such events turns out to be crucial in a variety of different contexts (e.g. in probability theory, statistics, operations research, statistical physics, financial mathematics, etc.). Recent applications of the theory deal with random walks in random environments, interacting diffusions, heat conduction, and polymer chains [1]. In this paper we prove an inequality of exponential type, namely theorem 2.1, which gives a large deviation upper bound for a specific sequence of random variables. Inequalities of this type have many applications in combinatorics [2]. The inequality generalizes results of this type already proven for symmetric probability measures. As consequences of the inequality we obtain: (a) large deviation upper bounds for exchangeable Bernoulli sequences of random variables, generalizing results proven for independent and identically distributed Bernoulli sequences, and (b) a general form of Bernstein's inequality. We compare the inequality with large deviation results already proven by the author and examine its advantages. Finally, using the inequality, we solve one of the basic problems of operations research (the bin packing problem) in the case of exchangeable random variables.
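
    For reference, the classical Hoeffding inequality that results of this type generalize: for independent random variables $X_i \in [a_i, b_i]$ with $S_n = \sum_{i=1}^n X_i$,

```latex
P\bigl(S_n - \mathbb{E}[S_n] \ge t\bigr)
  \le \exp\!\left(-\frac{2t^2}{\sum_{i=1}^{n}(b_i - a_i)^2}\right),
  \qquad t > 0.
```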

  5. Upper mantle anisotropy from long-period P polarization

    NASA Astrophysics Data System (ADS)

    Schulte-Pelkum, Vera; Masters, Guy; Shearer, Peter M.

    2001-10-01

    We introduce a method to infer upper mantle azimuthal anisotropy from the polarization, i.e., the direction of particle motion, of teleseismic long-period P onsets. The horizontal polarization of the initial P particle motion can deviate by >10° from the great circle azimuth from station to source despite a high degree of linearity of motion. Recent global isotropic three-dimensional mantle models predict effects that are an order of magnitude smaller than our observations. Stations within regional distances of each other show consistent azimuthal deviation patterns, while the deviations seem to be independent of source depth and near-source structure. We demonstrate that despite this receiver-side spatial coherence, our polarization data cannot be fit by a large-scale joint inversion for whole mantle structure. However, they can be reproduced by azimuthal anisotropy in the upper mantle and crust. Modeling with an anisotropic reflectivity code provides bounds on the magnitude and depth range of the anisotropy manifested in our data. Our method senses anisotropy within one wavelength (250 km) under the receiver. We compare our inferred fast directions of anisotropy to those obtained from Pn travel times and SKS splitting. The results of the comparison are consistent with azimuthal anisotropy situated in the uppermost mantle, with SKS results deviating from Pn and Ppol in some regions with probable additional deeper anisotropy. Generally, our fast directions are consistent with anisotropic alignment due to lithospheric deformation in tectonically active regions and to absolute plate motion in shield areas. Our data provide valuable additional constraints in regions where discrepancies between results from different methods exist since the effect we observe is local rather than cumulative as in the case of travel time anisotropy and shear wave splitting. Additionally, our measurements allow us to identify stations with incorrectly oriented horizontal components.

  6. Minimization of deviations of gear real tooth surfaces determined by coordinate measurements

    NASA Technical Reports Server (NTRS)

    Litvin, F. L.; Kuan, C.; Wang, J.-C.; Handschuh, R. F.; Masseth, J.; Maruyama, N.

    1992-01-01

    The deviations of a gear's real tooth surface from the theoretical surface are determined by coordinate measurements at a grid on the surface. A method was developed to transform the deviations from Cartesian coordinates to those along the normal at the measurement locations. Equations are derived that relate the first-order deviations to adjustments of the manufacturing machine-tool settings. The deviations of the entire surface are minimized. The minimization is achieved by applying the least-squares method to an overdetermined system of linear equations. The proposed method is illustrated with a numerical example for a hypoid gear and pinion.
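
    The minimization step, solving an overdetermined linear system relating normal deviations to machine-tool setting corrections in the least-squares sense, can be sketched as follows; the sensitivity matrix here is random, purely for illustration.

```python
import numpy as np

# A: sensitivity of the normal deviation at each grid point to each
# machine-tool setting (m points x k settings); b: measured deviations.
rng = np.random.default_rng(3)
m, k = 45, 6                      # e.g. a 5x9 measurement grid, 6 settings
A = rng.standard_normal((m, k))
true_corr = np.array([0.5, -0.2, 0.1, 0.0, 0.3, -0.1])
b = A @ true_corr + 0.01 * rng.standard_normal(m)

# Least-squares corrections minimizing the sum of squared deviations:
corrections, residual, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(corrections, 3))
```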

  7. Beyond multi-fractals: surrogate time series and fields

    NASA Astrophysics Data System (ADS)

    Venema, V.; Simmer, C.

    2007-12-01

    Most natural complex systems are characterised by variability on a large range of temporal and spatial scales. The two main methodologies to generate such structures are Fourier/FARIMA-based algorithms and multifractal methods. The former is restricted to Gaussian data, whereas the latter requires the structure to be self-similar. This work presents so-called surrogate data as an alternative that works with any (empirical) distribution and power spectrum. The best-known surrogate algorithm is the iterative amplitude adjusted Fourier transform (IAAFT) algorithm. We have studied six different geophysical time series (two clouds, runoff of a small and a large river, temperature, and rain) and their surrogates. The power spectra, and consequently the second-order structure functions, were replicated accurately. Even the fourth-order structure function was reproduced more accurately by the surrogates than would be possible with a fractal method, because the measured structure deviated too strongly from fractal scaling. Only in the case of the daily rain sums could a fractal method have been more accurate. Just like Fourier and multifractal methods, the current surrogates are not able to model the asymmetric increment distributions observed for runoff, i.e., they cannot reproduce nonlinear dynamical processes that are asymmetric in time. Furthermore, we have found differences in the structure functions on small scales. Surrogate methods are especially valuable for empirical studies, because the generated time series and fields mimic measured variables accurately. Our main application is radiative transfer through structured clouds. Like many geophysical fields, clouds can only be sampled sparsely, e.g. with in-situ airborne instruments; however, for radiative transfer calculations we need full 3-dimensional cloud fields. A first study relating the measured properties of cloud droplets to the radiative properties of the cloud field by generating surrogate cloud fields yielded good results within the measurement error. As a further test of the suitability of surrogate clouds for radiative transfer, we compare the radiative properties of model cloud fields of sparse cumulus and stratocumulus with those of their surrogate fields. The bias and root mean square error in various radiative properties are small, and the deviations in the radiances and irradiances are not statistically significant, i.e. these deviations can be attributed to the Monte Carlo noise of the radiative transfer calculations. We compared these results with the optical properties of synthetic clouds that have either the correct distribution (but no spatial correlations) or the correct power spectrum (but a Gaussian distribution); these clouds did show statistically significant deviations. For more information see: http://www.meteo.uni-bonn.de/venema/themes/surrogates/
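
    A minimal IAAFT sketch for a single time series: alternately impose the target power spectrum and the target amplitude distribution until the ordering stabilizes. The fixed iteration count and convergence handling here are simplistic compared with production implementations.

```python
import numpy as np

def iaaft(x, n_iter=100, seed=0):
    """Iterative amplitude adjusted Fourier transform surrogate:
    preserves the empirical distribution of x exactly and matches
    its power spectrum approximately."""
    rng = np.random.default_rng(seed)
    sorted_x = np.sort(x)                  # target amplitude distribution
    target_amp = np.abs(np.fft.rfft(x))    # target Fourier amplitudes
    s = rng.permutation(x)                 # random initial shuffle
    for _ in range(n_iter):
        # 1) impose the power spectrum, keeping the current phases
        phases = np.angle(np.fft.rfft(s))
        s = np.fft.irfft(target_amp * np.exp(1j * phases), n=x.size)
        # 2) impose the amplitude distribution by rank-order remapping
        s = sorted_x[np.argsort(np.argsort(s))]
    return s

x = np.cumsum(np.random.default_rng(4).standard_normal(1024))  # red noise
s = iaaft(x)
print(np.allclose(np.sort(s), np.sort(x)))  # True: identical distribution
```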

  8. Robust regression for large-scale neuroimaging studies.

    PubMed

    Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand

    2015-05-01

    Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide an accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms using as an example an imaging genetics study with 392 subjects. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Rogue waves and large deviations in deep sea.

    PubMed

    Dematteis, Giovanni; Grafke, Tobias; Vanden-Eijnden, Eric

    2018-01-30

    The appearance of rogue waves in deep sea is investigated by using the modified nonlinear Schrödinger (MNLS) equation in one spatial dimension with random initial conditions that are assumed to be normally distributed, with a spectrum approximating realistic conditions of a unidirectional sea state. It is shown that one can use the incomplete information contained in this spectrum as prior and supplement this information with the MNLS dynamics to reliably estimate the probability distribution of the sea surface elevation far in the tail at later times. Our results indicate that rogue waves occur when the system hits unlikely pockets of wave configurations that trigger large disturbances of the surface height. The rogue wave precursors in these pockets are wave patterns of regular height, but with a very specific shape that is identified explicitly, thereby allowing for early detection. The method proposed here combines Monte Carlo sampling with tools from large deviations theory that reduce the calculation of the most likely rogue wave precursors to an optimization problem that can be solved efficiently. This approach is transferable to other problems in which the system's governing equations contain random initial conditions and/or parameters.

  10. Accuracy improvement in laser stripe extraction for large-scale triangulation scanning measurement system

    NASA Astrophysics Data System (ADS)

    Zhang, Yang; Liu, Wei; Li, Xiaodong; Yang, Fan; Gao, Peng; Jia, Zhenyuan

    2015-10-01

    Large-scale triangulation scanning measurement systems are widely used to measure the three-dimensional profile of large-scale components and parts. The accuracy and speed of the laser stripe center extraction are essential for guaranteeing the accuracy and efficiency of the measuring system. However, in the process of large-scale measurement, multiple factors can cause deviation of the laser stripe center, including the spatial light intensity distribution, material reflectivity characteristics, and spatial transmission characteristics. A center extraction method is proposed for improving the accuracy of the laser stripe center extraction based on image evaluation of Gaussian fitting structural similarity and analysis of the multiple source factors. First, according to the features of the gray distribution of the laser stripe, evaluation of the Gaussian fitting structural similarity is estimated to provide a threshold value for center compensation. Then using the relationships between the gray distribution of the laser stripe and the multiple source factors, a compensation method of center extraction is presented. Finally, measurement experiments for a large-scale aviation composite component are carried out. The experimental results for this specific implementation verify the feasibility of the proposed center extraction method and the improved accuracy for large-scale triangulation scanning measurements.

  11. Large-deviation probabilities for correlated Gaussian processes and intermittent dynamical systems

    NASA Astrophysics Data System (ADS)

    Massah, Mozhdeh; Nicol, Matthew; Kantz, Holger

    2018-05-01

    In its classical version, the theory of large deviations makes quantitative statements about the probability of outliers when estimating time averages, if time series data are identically independently distributed. We study large-deviation probabilities (LDPs) for time averages in short- and long-range correlated Gaussian processes and show that long-range correlations lead to subexponential decay of LDPs. A particular deterministic intermittent map can, depending on a control parameter, also generate long-range correlated time series. We illustrate numerically, in agreement with the mathematical literature, that this type of intermittency leads to a power law decay of LDPs. The power law decay holds irrespective of whether the correlation time is finite or infinite, and hence irrespective of whether the central limit theorem applies or not.

  12. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context.

    PubMed

    Martinez, Josue G; Carroll, Raymond J; Müller, Samuel; Sampson, Joshua N; Chatterjee, Nilanjan

    2011-11-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso.

  13. Estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean.

    PubMed

    Schillaci, Michael A; Schillaci, Mario E

    2009-02-01

    The use of small sample sizes in human and primate evolutionary research is commonplace. Estimating how well small samples represent the underlying population, however, is not commonplace. Because the accuracy of determinations of taxonomy, phylogeny, and evolutionary process is dependent upon how well the study sample represents the population of interest, characterizing the uncertainty, or potential error, associated with analyses of small sample sizes is essential. We present a method for estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean using small (n < 10) or very small (n ≤ 5) sample sizes. This method can be used by researchers to determine post hoc the probability that their sample is a meaningful approximation of the population parameter. We tested the method using a large craniometric data set commonly used by researchers in the field. Given our results, we suggest that sample estimates of the population mean can be reasonable and meaningful even when based on small, and perhaps even very small, sample sizes.
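
    Under a normality assumption, the probability in question has a simple closed form: the sample mean has standard error σ/√n, so P(|x̄ − μ| ≤ fσ) = 2Φ(f√n) − 1. A minimal sketch of this idea, which is not necessarily the authors' exact formulation:

```python
from math import erf, sqrt

def prob_within_fraction(f, n):
    """P(|sample mean - true mean| <= f * sigma) for a normal population
    with n independent observations: 2*Phi(f*sqrt(n)) - 1."""
    z = f * sqrt(n)
    phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal CDF
    return 2.0 * phi - 1.0

for n in (3, 5, 10):
    print(n, round(prob_within_fraction(0.5, n), 3))
# e.g. with n = 5 the mean lies within 0.5*sigma of mu ~73.6% of the time
```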

  14. Endometrioid adenocarcinoma of the uterus with a minimal deviation invasive pattern.

    PubMed

    Landry, D; Mai, K T; Senterman, M K; Perkins, D G; Yazdi, H M; Veinot, J P; Thomas, J

    2003-01-01

    Minimal deviation adenocarcinoma of endometrioid type is a rare pathological entity. We describe a variant of typical endometrioid adenocarcinoma associated with minimal deviation adenocarcinoma of endometrioid type. One 'pilot' case of minimal deviation adenocarcinoma of endometrioid type associated with typical endometrioid adenocarcinoma was encountered at our institution in 2001. A second case of the same type was received in consultation. We reviewed 168 consecutive hysterectomy specimens diagnosed with 'endometrioid adenocarcinoma' specifically to identify areas of minimal deviation adenocarcinoma of endometrioid type. Immunohistochemistry was done with the following antibodies: MIB1, p53, oestrogen receptor (ER), progesterone receptor (PR), cytokeratin 7 (CK7), cytokeratin 20 (CK20), carcinoembryonic antigen (CEA), and vimentin (VIM). Four additional cases of minimal deviation adenocarcinoma of endometrioid type were identified. All six cases of minimal deviation adenocarcinoma of endometrioid type were associated with superficial endometrioid adenocarcinoma. In two cases with a large amount of minimal deviation adenocarcinoma of endometrioid type, the cervix was involved. The immunoprofile of two representative cases was ER+, PR+, CK7+, CK20-, CEA-, VIM+. MIB1 immunostaining of four cases revealed little proliferative activity of the minimal deviation adenocarcinoma of endometrioid type glandular cells (0-1%) compared with the associated 'typical' endometrioid adenocarcinoma (20-30%). The same four cases showed no p53 immunostaining in minimal deviation adenocarcinoma of endometrioid type compared with a range of positive staining in the associated endometrioid adenocarcinoma. Minimal deviation adenocarcinoma of endometrioid type more often develops as a result of differentiation from typical endometrioid adenocarcinoma than de novo. Due to its deceptively benign microscopic appearance, minimal deviation adenocarcinoma of endometrioid type may be overlooked and may lead to incorrect assessment of tumour depth and pathological stage. There was a tendency for tumours with a large amount of minimal deviation adenocarcinoma of endometrioid type to invade the cervix.

  15. Evidence for single top-quark production in the s-channel in proton–proton collisions at √s = 8TeV with the ATLAS detector using the Matrix Element Method

    DOE PAGES

    Aad, G.

    2016-03-08

    This Letter presents evidence for single top-quark production in the s-channel using proton–proton collisions at a centre-of-mass energy of 8 TeV with the ATLAS detector at the CERN Large Hadron Collider. The analysis is performed on events containing one isolated electron or muon, large missing transverse momentum, and exactly two b-tagged jets in the final state. The analysed data set corresponds to an integrated luminosity of 20.3 fb⁻¹. The signal is extracted using a maximum-likelihood fit of a discriminant which is based on the matrix element method and optimized to separate single-top-quark s-channel events from the main background contributions, which are top-quark pair production and W boson production in association with heavy-flavour jets. The measurement leads to an observed signal significance of 3.2 standard deviations and a measured cross-section of σ_s = 4.8 ± 0.8 (stat.) +1.6/−1.3 (syst.) pb, which is consistent with the Standard Model expectation. The expected significance for the analysis is 3.9 standard deviations.

  16. Sampling rare fluctuations of discrete-time Markov chains

    NASA Astrophysics Data System (ADS)

    Whitelam, Stephen

    2018-03-01

    We describe a simple method that can be used to sample the rare fluctuations of discrete-time Markov chains. We focus on the case of Markov chains with well-defined steady-state measures, and derive expressions for the large-deviation rate functions (and upper bounds on such functions) for dynamical quantities extensive in the length of the Markov chain. We illustrate the method using a series of simple examples, and use it to study the fluctuations of a lattice-based model of active matter that can undergo motility-induced phase separation.
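
    The rate-function machinery for discrete-time Markov chains can be illustrated on a two-state chain: the scaled cumulant generating function (SCGF) of a time-extensive observable is the log of the largest eigenvalue of the tilted transition matrix, and the rate function follows by Legendre transform. A minimal sketch of this standard construction, not the paper's sampling method itself, which estimates these quantities from simulated trajectories:

```python
import numpy as np

# Two-state chain; observable b(x) = 1 in state 1, 0 in state 0.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
b = np.array([0.0, 1.0])

def scgf(s):
    """theta(s) = ln(largest eigenvalue) of the tilted matrix
    P_s[x, y] = P[x, y] * exp(s * b(y))."""
    tilted = P * np.exp(s * b)[None, :]
    return np.log(np.max(np.abs(np.linalg.eigvals(tilted))))

# Rate function I(a) = max_s [s*a - theta(s)], numerical Legendre transform:
ss = np.linspace(-5.0, 5.0, 2001)
thetas = np.array([scgf(s) for s in ss])
for a in (0.2, 1.0 / 3.0, 0.6):
    print(round(a, 3), round(np.max(ss * a - thetas), 4))
# I(a) vanishes at the typical value a = 1/3, the stationary
# occupation of state 1 for this chain.
```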

  17. Sampling rare fluctuations of discrete-time Markov chains.

    PubMed

    Whitelam, Stephen

    2018-03-01

    We describe a simple method that can be used to sample the rare fluctuations of discrete-time Markov chains. We focus on the case of Markov chains with well-defined steady-state measures, and derive expressions for the large-deviation rate functions (and upper bounds on such functions) for dynamical quantities extensive in the length of the Markov chain. We illustrate the method using a series of simple examples, and use it to study the fluctuations of a lattice-based model of active matter that can undergo motility-induced phase separation.

  18. Numerical solutions for patterns statistics on Markov chains.

    PubMed

    Nuel, Gregory

    2006-01-01

    We propose here a review of the methods available to compute pattern statistics on text generated by a Markov source. Theoretical as well as numerical aspects are detailed for a wide range of techniques (exact, Gaussian, large deviations, binomial, and compound Poisson). The SPatt package (Statistics for Pattern, free software available at http://stat.genopole.cnrs.fr/spatt) implementing all these methods is then used to compare all these approaches in terms of computational time and reliability in the most complete pattern statistics benchmark available at the present time.

  19. Adaptive Gain-based Stable Power Smoothing of a DFIG

    DOE PAGES

    Muljadi, Eduard; Lee, Hyewon; Hwang, Min; ...

    2017-11-01

    In a power system with high wind penetration, the output power fluctuation of a large-scale wind turbine generator (WTG) caused by varying wind speed increases the maximum frequency deviation, an important metric for assessing the quality of electricity, because of the reduced system inertia. This paper proposes a stable power-smoothing scheme for a doubly-fed induction generator (DFIG) that can suppress the maximum frequency deviation, particularly in a power system with high wind penetration. To do this, the proposed scheme employs an additional control loop relying on the system frequency deviation that operates in combination with the maximum power point tracking control loop. To improve the power-smoothing capability while guaranteeing stable operation of the DFIG, the gain of the additional loop is modified with the rotor speed and frequency deviation: the gain is set high if the rotor speed and/or frequency deviation is large. Simulation results based on the IEEE 14-bus system demonstrate that the proposed scheme significantly lessens the output power fluctuation of a WTG under various scenarios by modifying the gain with the rotor speed and frequency deviation, and thereby it can regulate the frequency deviation within a narrow range.
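
    A minimal sketch of the adaptive-gain idea described above: the supplementary power command scales with both the frequency deviation and the rotor speed, so the loop pushes harder when more kinetic energy is available and when the deviation is large. The gain shape and all constants below are hypothetical, not the paper's tuned control law.

```python
def smoothing_power(rotor_speed, freq_dev,
                    w_min=0.7, w_max=1.25, k_max=40.0):
    """Supplementary active-power command (p.u.) added to the MPPT
    reference: the gain grows with the rotor-speed headroom and with
    the magnitude of the frequency deviation (Hz)."""
    headroom = max(0.0, (rotor_speed - w_min) / (w_max - w_min))
    gain = k_max * headroom * min(1.0, abs(freq_dev) / 0.2)
    return -gain * freq_dev   # oppose the deviation

# Larger deviation and higher rotor speed -> stronger smoothing action:
print(smoothing_power(1.2, -0.15))   # high speed, large dip: big boost
print(smoothing_power(0.75, -0.05))  # low speed, small dip: gentle
```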

  20. Adaptive Gain-based Stable Power Smoothing of a DFIG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muljadi, Eduard; Lee, Hyewon; Hwang, Min

    In a power system with high wind penetration, the output power fluctuation of a large-scale wind turbine generator (WTG) caused by varying wind speed increases the maximum frequency deviation, an important metric for assessing the quality of electricity, because of the reduced system inertia. This paper proposes a stable power-smoothing scheme for a doubly-fed induction generator (DFIG) that can suppress the maximum frequency deviation, particularly in a power system with high wind penetration. To do this, the proposed scheme employs an additional control loop relying on the system frequency deviation that operates in combination with the maximum power point tracking control loop. To improve the power-smoothing capability while guaranteeing stable operation of the DFIG, the gain of the additional loop is modified with the rotor speed and frequency deviation: the gain is set high if the rotor speed and/or frequency deviation is large. Simulation results based on the IEEE 14-bus system demonstrate that the proposed scheme significantly lessens the output power fluctuation of a WTG under various scenarios by modifying the gain with the rotor speed and frequency deviation, and thereby it can regulate the frequency deviation within a narrow range.

  1. The power grid AGC frequency bias coefficient online identification method based on wide area information

    NASA Astrophysics Data System (ADS)

    Wang, Zian; Li, Shiguang; Yu, Ting

    2015-12-01

    This paper proposes an online identification method for the regional frequency deviation coefficient, based on an analysis of the AGC adjustment response mechanism of interconnected grids and on the real-time operating state of generators measured through PMUs. It analyzes how to optimize the regional frequency deviation coefficient under the actual operating state of the power system, achieving more accurate and efficient automatic generation control. The validity of the online identification method is verified by building a long-term frequency control simulation model of a two-region interconnected power system.

  2. [A new kinematics method of determining the elbow rotation axis and evaluation of its feasibility].

    PubMed

    Han, W; Song, J; Wang, G Z; Ding, H; Li, G S; Gong, M Q; Jiang, X Y; Wang, M Y

    2016-04-18

    To study a new positioning method for the rotation axis of elbow external fixation, and to evaluate its feasibility. Four normal adult volunteers and six Sawbone elbow models were included in this experiment. The kinematic data of five elbow flexions were collected with an optical positioning system. The rotation axes of the elbow joints were fitted by the least squares method, and the kinematic data and fitting results were displayed visually. From the fitting results, the average moving planes and rotation axes were calculated, yielding the rotation axes of the new kinematic method. Using standard clinical methods, the entrance and exit points of the rotation axes of the six Sawbone elbow models were located under X-ray, and Kirschner wires were placed as representatives of the rotation axes using the traditional positioning method. Then the entrance point deviation, the exit point deviation and the angle deviation of the two kinds of located rotation axes were compared. For the volunteers, the indicators representing the circularity and coplanarity of each volunteer's elbow flexion movement trajectory were both about 1 mm. All distance deviations of the moving axes from the average moving rotation axes of the volunteers were less than 3 mm, and all angle deviations were less than 5°. For the six Sawbone models, the average entrance point deviations, exit point deviations and angle deviations between the two rotation axes determined by the two locating methods were respectively 1.6972 mm, 1.8383 mm and 1.3217°. All deviations were very small and within an acceptable range for clinical practice. The values representing the circularity and coplanarity of the volunteers' single-curvature elbow movement trajectories are very small, showing that the elbow single-curvature movement can be regarded as an approximately fixed-axis movement. The new method can match the traditional method in accuracy and can make up for the deficiencies of the traditional fixed-axis method.

  3. Merge measuring mesh for complex surface parts

    NASA Astrophysics Data System (ADS)

    Ye, Jianhua; Gao, Chenghui; Zeng, Shoujin; Xu, Mingsan

    2018-04-01

    Because most parts self-occlude and scanner range is limited, it is difficult to scan an entire part in a single pass, so multiple measured meshes need to be merged to model the part. In this paper, a new merge method is presented. First, a grid voxelization method eliminates most of the non-overlapping regions, and an overlap-triangle retrieval method based on mesh topology is proposed to improve efficiency. Then, to remove overlap triangles with large deviations, deletion by overlap distance is discussed. After that, the paper puts forward a new method of merging meshes by registering and combining mesh boundary points. Experimental analysis shows the suggested methods to be effective.

  4. Volumetric segmentation of ADC maps and utility of standard deviation as measure of tumor heterogeneity in soft tissue tumors.

    PubMed

    Singer, Adam D; Pattany, Pradip M; Fayad, Laura M; Tresley, Jonathan; Subhawong, Ty K

    2016-01-01

    Determine interobserver concordance of semiautomated three-dimensional volumetric and two-dimensional manual measurements of apparent diffusion coefficient (ADC) values in soft tissue masses (STMs) and explore standard deviation (SD) as a measure of tumor ADC heterogeneity. Concordance correlation coefficients for mean ADC increased with more extensive sampling. Agreement on the SD of tumor ADC values was better for large regions of interest and multislice methods. Correlation between mean and SD ADC was low, suggesting that these parameters are relatively independent. Mean ADC of STMs can be determined by volumetric quantification with high interobserver agreement. STM heterogeneity merits further investigation as a potential imaging biomarker that complements other functional magnetic resonance imaging parameters.

  5. Far-ultraviolet refractive index of optical materials for solar blind channel (SBC) filters for the HST advanced camera for surveys (ACS)

    NASA Astrophysics Data System (ADS)

    Leviton, Douglas B.; Madison, Timothy J.; Petrone, Peter

    1998-10-01

    Refractive index measurements using the minimum deviation method have been carried out for prisms of a variety of far ultraviolet optical materials used in the manufacture of Solar Blind Channel (SBC) filters for the HST Advanced Camera for Surveys (ACS). Some of the materials measured are gaining popularity in a variety of high technology applications including high power excimer lasers and advanced microlithography optics operating in a wavelength region where high quality knowledge of optical material properties is sparse yet critical. Our measurements are of unusually high accuracy and precision for this wavelength region owing to advanced instrumentation in the large vacuum chamber of the Diffraction Grating Evaluation Facility (DGEF) at Goddard Space Flight Center (GSFC) used to implement a minimum deviation method refractometer. Index values for CaF2, BaF2, LiF, and far ultraviolet grades of synthetic sapphire and synthetic fused silica are reported and compared with values from the literature.
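
    For reference, the minimum deviation method recovers the refractive index from the prism apex angle A and the measured minimum deviation angle D_min through the standard textbook relation (general optics, not a formula specific to this paper):

        n(\lambda) = \frac{\sin\left[\left(A + D_{\min}(\lambda)\right)/2\right]}{\sin\left(A/2\right)}

    so the accuracy of n is set directly by the angular accuracy of the refractometer, which is why the advanced instrumentation in the DGEF matters.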

  6. Finding new pathway-specific regulators by clustering method using threshold standard deviation based on DNA chip data of Streptomyces coelicolor.

    PubMed

    Yang, Yung-Hun; Kim, Ji-Nu; Song, Eunjung; Kim, Eunjung; Oh, Min-Kyu; Kim, Byung-Gee

    2008-09-01

    In order to identify the regulators involved in antibiotic production or time-specific cellular events, the messenger ribonucleic acid (mRNA) expression data of the two gene clusters, the actinorhodin (ACT) and undecylprodigiosin (RED) biosynthetic genes, were clustered with known mRNA expression data of regulators from S. coelicolor using a filtering method based on standard deviation and clustering analysis. The result identified five regulators, including two well-known regulators, namely SCO3579 (WlbA) and SCO6722 (SsgD). Using overexpression and deletion of the regulator genes, we were able to identify two regulators, i.e., SCO0608 and SCO6808, playing roles as repressors in antibiotic production and sporulation. This approach can easily be applied to mapping out new regulators related to any target gene clusters of interest showing characteristic expression patterns. The result can also be used to provide insightful information on the selection rules among a large number of regulators.
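
    A minimal sketch of this filter-then-cluster workflow is given below, assuming a genes-by-timepoints expression matrix; the percentile threshold, the synthetic data, and the linkage settings are illustrative assumptions, not the authors' parameters.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        rng = np.random.default_rng(0)
        expr = rng.normal(size=(500, 12))      # stand-in for DNA chip time series

        sd = expr.std(axis=1)
        threshold = np.percentile(sd, 90)      # keep only highly varying genes
        varying = expr[sd > threshold]

        # Cluster the retained profiles by correlation distance, so regulators
        # whose expression tracks the ACT/RED clusters fall into the same group.
        Z = linkage(varying, method='average', metric='correlation')
        labels = fcluster(Z, t=5, criterion='maxclust')
        print(varying.shape, np.bincount(labels))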

  7. Predictor symbology in computer-generated pictorial displays

    NASA Technical Reports Server (NTRS)

    Grunwald, A. J.

    1981-01-01

    The display under investigation is a tunnel display for four-dimensional commercial aircraft approaches to landing under instrument flight rules. It is investigated whether more complex predictive information, such as a three-dimensional perspective vehicle symbol predicting the future vehicle position as well as future vehicle attitude angles, contributes to a better system response, and suitable predictor laws for the predictor motions are formulated. Methods for utilizing the predictor symbol in controlling the forward velocity of the aircraft in four-dimensional approaches are investigated. The simulator tests show that the complex perspective vehicle symbol yields improved damping in the lateral response compared to a flat two-dimensional predictor cross, but generally yields larger vertical deviations. Methods of using the predictor symbol to control the forward velocity of the vehicle are shown to be effective. The tunnel display with superimposed perspective vehicle symbol yields very satisfactory results and pilot acceptance in lateral control, but is found to be unsatisfactory in vertical control, as a result of too-large vertical path-angle deviations.

  8. Random matrix approach to cross correlations in financial data

    NASA Astrophysics Data System (ADS)

    Plerou, Vasiliki; Gopikrishnan, Parameswaran; Rosenow, Bernd; Amaral, Luís A.; Guhr, Thomas; Stanley, H. Eugene

    2002-06-01

    We analyze cross correlations between price fluctuations of different stocks using methods of random matrix theory (RMT). Using two large databases, we calculate cross-correlation matrices C of returns constructed from (i) 30-min returns of 1000 US stocks for the 2-yr period 1994-1995, (ii) 30-min returns of 881 US stocks for the 2-yr period 1996-1997, and (iii) 1-day returns of 422 US stocks for the 35-yr period 1962-1996. We test the statistics of the eigenvalues λi of C against a "null hypothesis" - a random correlation matrix constructed from mutually uncorrelated time series. We find that a majority of the eigenvalues of C fall within the RMT bounds [λ-,λ+] for the eigenvalues of random correlation matrices. We test the eigenvalues of C within the RMT bound for universal properties of random matrices and find good agreement with the results for the Gaussian orthogonal ensemble of random matrices, implying a large degree of randomness in the measured cross-correlation coefficients. Further, we find that the distribution of eigenvector components for the eigenvectors corresponding to the eigenvalues outside the RMT bound display systematic deviations from the RMT prediction. In addition, we find that these "deviating eigenvectors" are stable in time. We analyze the components of the deviating eigenvectors and find that the largest eigenvalue corresponds to an influence common to all stocks. Our analysis of the remaining deviating eigenvectors shows distinct groups, whose identities correspond to conventionally identified business sectors. Finally, we discuss applications to the construction of portfolios of stocks that have a stable ratio of risk to return.
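
    The RMT bounds referred to above are the Marchenko-Pastur edges for the correlation matrix of N mutually uncorrelated series of length T. A short sketch with synthetic i.i.d. returns (an assumption standing in for the stock databases above):

        import numpy as np

        N, T = 200, 1000                      # number of stocks, number of returns
        returns = np.random.default_rng(1).normal(size=(T, N))
        C = np.corrcoef(returns, rowvar=False)
        eigs = np.linalg.eigvalsh(C)

        Q = T / N                             # records per series
        lam_min = (1 - np.sqrt(1 / Q)) ** 2   # Marchenko-Pastur lower edge
        lam_max = (1 + np.sqrt(1 / Q)) ** 2   # Marchenko-Pastur upper edge
        outside = eigs[(eigs < lam_min) | (eigs > lam_max)]
        print(f"bounds [{lam_min:.3f}, {lam_max:.3f}], "
              f"{outside.size} deviating eigenvalues")

    For real returns, eigenvalues found outside these bounds carry the market-wide and sector information described in the abstract.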

  9. Practical Bias Correction in Aerial Surveys of Large Mammals: Validation of Hybrid Double-Observer with Sightability Method against Known Abundance of Feral Horse (Equus caballus) Populations

    PubMed Central

    2016-01-01

    Reliably estimating wildlife abundance is fundamental to effective management. Aerial surveys are one of the only spatially robust tools for estimating large mammal populations, but statistical sampling methods are required to address detection biases that affect accuracy and precision of the estimates. Although various methods for correcting aerial survey bias are employed on large mammal species around the world, these have rarely been rigorously validated. Several populations of feral horses (Equus caballus) in the western United States have been intensively studied, resulting in identification of all unique individuals. This provided a rare opportunity to test aerial survey bias correction on populations of known abundance. We hypothesized that a hybrid method combining simultaneous double-observer and sightability bias correction techniques would accurately estimate abundance. We validated this integrated technique on populations of known size and also on a pair of surveys before and after a known number was removed. Our analysis identified several covariates across the surveys that explained and corrected biases in the estimates. All six tests on known populations produced estimates with deviations from the known value ranging from -8.5% to +13.7% and <0.7 standard errors. Precision varied widely, from 6.1% CV to 25.0% CV. In contrast, the pair of surveys conducted around a known management removal produced an estimated change in population between the surveys that was significantly larger than the known reduction. Although the deviation was only 9.1%, the precision estimate (CV = 1.6%) may have been artificially low. It was apparent that use of a helicopter in those surveys perturbed the horses, introducing detection error and heterogeneity in a manner that could not be corrected by our statistical models. Our results validate the hybrid method, highlight its potentially broad applicability, identify some limitations, and provide insight and guidance for improving survey designs. PMID:27139732

  10. Practical Bias Correction in Aerial Surveys of Large Mammals: Validation of Hybrid Double-Observer with Sightability Method against Known Abundance of Feral Horse (Equus caballus) Populations.

    PubMed

    Lubow, Bruce C; Ransom, Jason I

    2016-01-01

    Reliably estimating wildlife abundance is fundamental to effective management. Aerial surveys are one of the only spatially robust tools for estimating large mammal populations, but statistical sampling methods are required to address detection biases that affect accuracy and precision of the estimates. Although various methods for correcting aerial survey bias are employed on large mammal species around the world, these have rarely been rigorously validated. Several populations of feral horses (Equus caballus) in the western United States have been intensively studied, resulting in identification of all unique individuals. This provided a rare opportunity to test aerial survey bias correction on populations of known abundance. We hypothesized that a hybrid method combining simultaneous double-observer and sightability bias correction techniques would accurately estimate abundance. We validated this integrated technique on populations of known size and also on a pair of surveys before and after a known number was removed. Our analysis identified several covariates across the surveys that explained and corrected biases in the estimates. All six tests on known populations produced estimates with deviations from the known value ranging from -8.5% to +13.7% and <0.7 standard errors. Precision varied widely, from 6.1% CV to 25.0% CV. In contrast, the pair of surveys conducted around a known management removal produced an estimated change in population between the surveys that was significantly larger than the known reduction. Although the deviation was only 9.1%, the precision estimate (CV = 1.6%) may have been artificially low. It was apparent that use of a helicopter in those surveys perturbed the horses, introducing detection error and heterogeneity in a manner that could not be corrected by our statistical models. Our results validate the hybrid method, highlight its potentially broad applicability, identify some limitations, and provide insight and guidance for improving survey designs.

  11. Power-Smoothing Scheme of a DFIG Using the Adaptive Gain Depending on the Rotor Speed and Frequency Deviation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Hyewon; Hwang, Min; Muljadi, Eduard

    In an electric power grid that has a high penetration level of wind, the power fluctuation of a large-scale wind power plant (WPP) caused by varying wind speeds deteriorates the system frequency regulation. This paper proposes a power-smoothing scheme of a doubly-fed induction generator (DFIG) that significantly mitigates the system frequency fluctuation while preventing over-deceleration of the rotor speed. The proposed scheme employs an additional control loop relying on the system frequency deviation that operates in combination with the maximum power point tracking control loop. To improve the power-smoothing capability while preventing over-deceleration of the rotor speed, the gain of the additional loop is modified with the rotor speed and frequency deviation. The gain is set to be high if the rotor speed and/or frequency deviation is large. In conclusion, the simulation results based on the IEEE 14-bus system clearly demonstrate that the proposed scheme significantly lessens the output power fluctuation of a WPP under various scenarios by modifying the gain with the rotor speed and frequency deviation, and thereby it can regulate the frequency deviation within a narrow range.

  12. Power-Smoothing Scheme of a DFIG Using the Adaptive Gain Depending on the Rotor Speed and Frequency Deviation

    DOE PAGES

    Lee, Hyewon; Hwang, Min; Muljadi, Eduard; ...

    2017-04-18

    In an electric power grid that has a high penetration level of wind, the power fluctuation of a large-scale wind power plant (WPP) caused by varying wind speeds deteriorates the system frequency regulation. This paper proposes a power-smoothing scheme of a doubly-fed induction generator (DFIG) that significantly mitigates the system frequency fluctuation while preventing over-deceleration of the rotor speed. The proposed scheme employs an additional control loop relying on the system frequency deviation that operates in combination with the maximum power point tracking control loop. To improve the power-smoothing capability while preventing over-deceleration of the rotor speed, the gain of the additional loop is modified with the rotor speed and frequency deviation. The gain is set to be high if the rotor speed and/or frequency deviation is large. In conclusion, the simulation results based on the IEEE 14-bus system clearly demonstrate that the proposed scheme significantly lessens the output power fluctuation of a WPP under various scenarios by modifying the gain with the rotor speed and frequency deviation, and thereby it can regulate the frequency deviation within a narrow range.

  13. A study of respiration-correlated cone-beam CT scans to correct target positioning errors in radiotherapy of thoracic cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santoro, J. P.; McNamara, J.; Yorke, E.

    2012-10-15

    Purpose: There is increasingly widespread usage of cone-beam CT (CBCT) for guiding radiation treatment in advanced-stage lung tumors, but difficulties associated with daily CBCT in conventionally fractionated treatments include imaging dose to the patient, increased workload and longer treatment times. Respiration-correlated cone-beam CT (RC-CBCT) can improve localization accuracy in mobile lung tumors, but further increases the time and workload for conventionally fractionated treatments. This study investigates whether RC-CBCT-guided correction of systematic tumor deviations in standard fractionated lung tumor radiation treatments is more effective than 2D image-based correction of skeletal deviations alone. A second study goal compares respiration-correlated vs respiration-averaged images for determining tumor deviations. Methods: Eleven stage II-IV nonsmall cell lung cancer patients are enrolled in an IRB-approved prospective off-line protocol using RC-CBCT guidance to correct for systematic errors in GTV position. Patients receive a respiration-correlated planning CT (RCCT) at simulation, daily kilovoltage RC-CBCT scans during the first week of treatment and weekly scans thereafter. Four types of correction methods are compared: (1) systematic error in gross tumor volume (GTV) position, (2) systematic error in skeletal anatomy, (3) daily skeletal corrections, and (4) weekly skeletal corrections. The comparison is in terms of a weighted average of the residual GTV deviations measured from the RC-CBCT scans, representing the estimated residual deviation over the treatment course. In the second study goal, GTV deviations computed from matching RCCT and RC-CBCT are compared to deviations computed from matching respiration-averaged images consisting of a CBCT reconstructed using all projections and an average-intensity-projection CT computed from the RCCT. Results: Of the eleven patients in the GTV-based systematic correction protocol, two required no correction, seven required a single correction, one required two corrections, and one required three corrections. Mean residual GTV deviation (3D distance) following GTV-based systematic correction (mean ± 1 standard deviation 4.8 ± 1.5 mm) is significantly lower than for systematic skeletal-based (6.5 ± 2.9 mm, p = 0.015) and weekly skeletal-based correction (7.2 ± 3.0 mm, p = 0.001), but is not significantly lower than daily skeletal-based correction (5.4 ± 2.6 mm, p = 0.34). In two cases, first-day CBCT images reveal tumor changes (one showing tumor growth, the other showing large tumor displacement) that are not readily observed in radiographs. Differences in computed GTV deviations between respiration-correlated and respiration-averaged images are 0.2 ± 1.8 mm in the superior-inferior direction and of similar magnitude in the other directions. Conclusions: An off-line protocol to correct GTV-based systematic error in locally advanced lung tumor cases can be effective at reducing tumor deviations, although the findings need confirmation with larger patient statistics. In some cases, a single cone-beam CT can be useful for assessing tumor changes early in treatment, if more than a few days elapse between simulation and the start of treatment. Tumor deviations measured with respiration-averaged CT and CBCT images are consistent with those measured with respiration-correlated images; the respiration-averaged method is more easily implemented in the clinic.

  14. Analysis of using the tongue deviation angle as a warning sign of a stroke

    PubMed Central

    2012-01-01

    Background The symptom of tongue deviation is observed in a stroke or transient ischemic attack. Nevertheless, there is much room for the interpretation of the tongue deviation test. The crucial factor is the lack of an effective quantification method of tongue deviation. If we can quantify the features of the tongue deviation and scientifically verify the relationship between the deviation angle and a stroke, the information provided by the tongue will be helpful in recognizing a warning of a stroke. Methods In this study, a quantification method of the tongue deviation angle was proposed for the first time to characterize stroke patients. We captured the tongue images of stroke patients (15 males and 10 females, ranging between 55 and 82 years of age); transient ischemic attack (TIA) patients (16 males and 9 females, ranging between 53 and 79 years of age); and normal subjects (14 males and 11 females, ranging between 52 and 80 years of age) to analyze whether the method is effective. In addition, we used the receiver operating characteristic curve (ROC) for the sensitivity analysis, and determined the threshold value of the tongue deviation angle for the warning sign of a stroke. Results The means and standard deviations of the tongue deviation angles of the stroke, TIA, and normal groups were: 6.9 ± 3.1, 4.9 ± 2.1 and 1.4 ± 0.8 degrees, respectively. Analyzed by the unpaired Student’s t-test, the p-value between the stroke group and the TIA group was 0.015 (>0.01), indicating no significant difference in the tongue deviation angle. The p-values between the stroke group and the normal group, as well as between the TIA group and the normal group, were both less than 0.01. These results show the significant differences in the tongue deviation angle between the patient groups (stroke and TIA patients) and the normal group. These results also imply that the tongue deviation angle can effectively identify the patient group (stroke and TIA patients) and the normal group. With respect to the visual examination, 40% and 32% of stroke patients, 24% and 16% of TIA patients, and 4% and 0% of normal subjects were found to have tongue deviations when physicians “A” and “B” examined them. This variation shows the necessity of a quantification method in a clinical setting. In the receiver operating characteristic curve (ROC), the Area Under Curve (AUC = 0.96) indicates good discrimination. A tongue deviation angle greater than the optimum threshold value (3.2°) predicts a risk of stroke. Conclusions In summary, we developed an effective quantification method to characterize the tongue deviation angle, and we confirmed the feasibility of recognizing the tongue deviation angle as an early warning sign of an impending stroke. PMID:22908956

  15. Improvement of registration accuracy in accelerated partial breast irradiation using the point-based rigid-body registration algorithm for patients with implanted fiducial markers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inoue, Minoru; Yoshimura, Michio, E-mail: myossy@kuhp.kyoto-u.ac.jp; Sato, Sayaka

    2015-04-15

    Purpose: To investigate image-registration errors when using fiducial markers with a manual method and the point-based rigid-body registration (PRBR) algorithm in accelerated partial breast irradiation (APBI) patients, with accompanying fiducial deviations. Methods: Twenty-two consecutive patients were enrolled in a prospective trial examining 10-fraction APBI. Titanium clips were implanted intraoperatively around the seroma in all patients. For image-registration, the positions of the clips in daily kV x-ray images were matched to those in the planning digitally reconstructed radiographs. Fiducial and gravity registration errors (FREs and GREs, respectively), representing resulting misalignments of the edge and center of the target, respectively, were compared between the manual and algorithm-based methods. Results: In total, 218 fractions were evaluated. Although the mean FRE/GRE values for the manual and algorithm-based methods were within 3 mm (2.3/1.7 and 1.3/0.4 mm, respectively), the percentages of fractions where FRE/GRE exceeded 3 mm using the manual and algorithm-based methods were 18.8%/7.3% and 0%/0%, respectively. Manual registration resulted in 18.6% of patients with fractions of FRE/GRE exceeding 5 mm. The patients with larger clip deviation had significantly more fractions showing large FRE/GRE using manual registration. Conclusions: For image-registration using fiducial markers in APBI, the manual registration results in more fractions with considerable registration error due to loss of fiducial objectivity resulting from their deviation. The authors recommend the PRBR algorithm as a safe and effective strategy for accurate, image-guided registration and PTV margin reduction.

  16. A deviation display method for visualising data in mobile gamma-ray spectrometry.

    PubMed

    Kock, Peder; Finck, Robert R; Nilsson, Jonas M C; Ostlund, Karl; Samuelsson, Christer

    2010-09-01

    A real time visualisation method, to be used in mobile gamma-spectrometric search operations using standard detector systems, is presented. The new method, called deviation display, uses a modified waterfall display to present relative changes in spectral data over energy and time. Using unshielded 137Cs and 241Am point sources and different natural background environments, the behaviour of the deviation displays is demonstrated and analysed for two standard detector types (NaI(Tl) and HPGe). The deviation display enhances positive significant changes while suppressing the natural background fluctuations. After an initialization time of about 10 min this technique leads to a homogeneous display dominated by the background colour, where even small changes in spectral data are easy to discover. As this paper shows, the deviation display method works well for all tested gamma energies and natural background radiation levels and with both tested detector systems.

  17. Quantifying the heterogeneity of the tectonic stress field using borehole data

    USGS Publications Warehouse

    Schoenball, Martin; Davatzes, Nicholas C.

    2017-01-01

    The heterogeneity of the tectonic stress field is a fundamental property which influences earthquake dynamics and subsurface engineering. Self-similar scaling of stress heterogeneities is frequently assumed to explain characteristics of earthquakes such as the magnitude-frequency relation. However, observational evidence for such scaling of the stress field heterogeneity is scarce. We analyze the local stress orientations using image logs of two closely spaced boreholes in the Coso Geothermal Field with sub-vertical and deviated trajectories, respectively, each spanning about 2 km in depth. Both the mean and the standard deviation of stress orientation indicators (borehole breakouts, drilling-induced fractures and petal-centerline fractures) determined from each borehole agree to the limit of the resolution of our method, although measurements at specific depths may not. We find that the standard deviation in these boreholes strongly depends on the interval length analyzed, generally increasing up to a wellbore log length of about 600 m and constant for longer intervals. We find the same behavior in global data from the World Stress Map. This suggests that the standard deviation of stress indicators characterizes the heterogeneity of the tectonic stress field rather than the quality of the stress measurement. A large standard deviation of a stress measurement might be an expression of strong crustal heterogeneity rather than of an unreliable stress determination. Robust characterization of stress heterogeneity requires logs that sample stress indicators along a representative sample volume of at least 1 km.
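
    One hedged sketch of how such a standard deviation can be computed: stress orientation indicators are 180°-periodic (axial data), so a common convention is to double the angles before applying circular statistics. The sample azimuths below are invented; this is not the authors' processing pipeline.

        import numpy as np

        def axial_circular_std_deg(azimuths_deg):
            """Circular standard deviation for 180deg-periodic orientation data."""
            theta = 2.0 * np.deg2rad(azimuths_deg)    # fold axial data onto 360deg
            R = np.abs(np.mean(np.exp(1j * theta)))   # mean resultant length
            return np.rad2deg(np.sqrt(-2.0 * np.log(R)) / 2.0)

        # Breakout azimuths clustered near N-S; 175deg is equivalent to -5deg,
        # which naive (linear) statistics would treat as a huge outlier.
        breakout_azimuths = np.array([12.0, 8.0, 175.0, 20.0, 5.0])
        print(axial_circular_std_deg(breakout_azimuths))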

  18. Large-aperture space optical system testing based on the scanning Hartmann.

    PubMed

    Wei, Haisong; Yan, Feng; Chen, Xindong; Zhang, Hao; Cheng, Qiang; Xue, Donglin; Zeng, Xuefeng; Zhang, Xuejun

    2017-03-10

    Based on the Hartmann testing principle, this paper proposes a novel image quality testing technology which applies to a large-aperture space optical system. Compared with the traditional testing method through a large-aperture collimator, the scanning Hartmann testing technology has great advantages due to its simple structure, low cost, and ability to perform wavefront measurement of an optical system. The basic testing principle of the scanning Hartmann testing technology, data processing method, and simulation process are presented in this paper. Certain simulation results are also given to verify the feasibility of this technology. Furthermore, a measuring system is developed to conduct a wavefront measurement experiment for a 200 mm aperture optical system. The small deviation (6.3%) of root mean square (RMS) between experimental results and interferometric results indicates that the testing system can measure low-order aberration correctly, which means that the scanning Hartmann testing technology has the ability to test the imaging quality of a large-aperture space optical system.

  19. First-Principles Momentum Dependent Local Ansatz Approach to the Momentum Distribution Function in Iron-Group Transition Metals

    NASA Astrophysics Data System (ADS)

    Kakehashi, Yoshiro; Chandra, Sumal

    2017-03-01

    The momentum distribution function (MDF) bands of iron-group transition metals from Sc to Cu have been investigated on the basis of the first-principles momentum dependent local ansatz wavefunction method. It is found that the MDF for d electrons show a strong momentum dependence and a large deviation from the Fermi-Dirac distribution function along high-symmetry lines of the first Brillouin zone, while the sp electrons behave as independent electrons. In particular, the deviation in bcc Fe (fcc Ni) is shown to be enhanced by the narrow eg (t2g) bands with flat dispersion in the vicinity of the Fermi level. Mass enhancement factors (MEF) calculated from the jump on the Fermi surface are also shown to be momentum dependent. Large mass enhancements of Mn and Fe are found to be caused by spin fluctuations due to d electrons, while that for Ni is mainly caused by charge fluctuations. Calculated MEF are consistent with electronic specific heat data as well as recent angle resolved photoemission spectroscopy data.

  20. Deconstructing the Essential Elements of Bat Flight

    NASA Astrophysics Data System (ADS)

    Tafti, Danesh; Viswanath, Kamal; Krishnamurthy, Nagendra

    2013-11-01

    There are over 1000 bat species worldwide with a wide range of wing morphologies. Bat wing motion is characterized by an active adaptive three-dimensional highly deformable wing surface which is distinctive in its complex kinematics facilitated by the skeletal and skin membrane manipulation, large deviations from the stroke plane, and large wing cambers. In this study we use measured wing kinematics of a fruit bat in a straight line climbing path to study the fluid dynamics and the forces generated by the wing using an Immersed Boundary Method. This is followed by a proper orthogonal decomposition to investigate the dimensional complexity as well as the key kinematic modes used by the bat during a representative flapping cycle. It is shown that the complex wing motion of the fruit bat can mostly be broken down into canonical descriptors of wing motion such as translation, rotation, out of stroke deviation, and cambering, which the bat uses with great efficacy to generate lift and thrust. Research supported through a grant from the Army Research Office (ARO). Bat wing kinematics were provided by Dr. Kenny Breuer, Brown University.

  1. Weibull Modulus Estimated by the Non-linear Least Squares Method: A Solution to Deviation Occurring in Traditional Weibull Estimation

    NASA Astrophysics Data System (ADS)

    Li, T.; Griffiths, W. D.; Chen, J.

    2017-11-01

    The Maximum Likelihood method and the Linear Least Squares (LLS) method have been widely used to estimate Weibull parameters for reliability of brittle and metal materials. In the last 30 years, many researchers focused on the bias of Weibull modulus estimation, and some improvements have been achieved, especially in the case of the LLS method. However, there is a shortcoming in these methods for a specific type of data, where the lower tail deviates dramatically from the well-known linear fit in a classic LLS Weibull analysis. This deviation can be commonly found from the measured properties of materials, and previous applications of the LLS method on this kind of dataset present an unreliable linear regression. This deviation was previously thought to be due to physical flaws (i.e., defects) contained in materials. However, this paper demonstrates that this deviation can also be caused by the linear transformation of the Weibull function, occurring in the traditional LLS method. Accordingly, it may not be appropriate to carry out a Weibull analysis according to the linearized Weibull function, and the Non-linear Least Squares method (Non-LS) is instead recommended for the Weibull modulus estimation of casting properties.
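
    A minimal sketch of the contrast drawn above: the same Weibull sample is fitted by the linearized LLS transform and by non-linear least squares on the untransformed CDF. The median-rank plotting positions and the synthetic strength data are assumptions for illustration.

        import numpy as np
        from scipy.optimize import curve_fit

        def weibull_cdf(x, m, eta):
            return 1.0 - np.exp(-(x / eta) ** m)

        x = np.sort(np.array([210., 235., 250., 262., 275.,
                              290., 305., 330., 360., 410.]))   # e.g. UTS, MPa
        n = len(x)
        F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)  # median-rank positions

        # Traditional LLS: fit ln(-ln(1-F)) = m*ln(x) - m*ln(eta)
        slope, intercept = np.polyfit(np.log(x), np.log(-np.log(1.0 - F)), 1)
        m_lls, eta_lls = slope, np.exp(-intercept / slope)

        # Non-linear least squares on the untransformed CDF, which does not
        # inflate the weight of the lower tail the way the log-log transform does.
        (m_nls, eta_nls), _ = curve_fit(weibull_cdf, x, F, p0=(m_lls, eta_lls))
        print(f"LLS: m={m_lls:.2f}, eta={eta_lls:.1f}; "
              f"NLS: m={m_nls:.2f}, eta={eta_nls:.1f}")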

  2. Large deviations and mixing for dissipative PDEs with unbounded random kicks

    NASA Astrophysics Data System (ADS)

    Jakšić, V.; Nersesyan, V.; Pillet, C.-A.; Shirikyan, A.

    2018-02-01

    We study the problem of exponential mixing and large deviations for discrete-time Markov processes associated with a class of random dynamical systems. Under some dissipativity and regularisation hypotheses for the underlying deterministic dynamics and a non-degeneracy condition for the driving random force, we discuss the existence and uniqueness of a stationary measure and its exponential stability in the Kantorovich-Wasserstein metric. We next turn to the large deviations principle (LDP) and establish its validity for the occupation measures of the Markov processes in question. The proof is based on Kifer’s criterion for non-compact spaces, a result on large-time asymptotics for generalised Markov semigroup, and a coupling argument. These tools combined together constitute a new approach to LDP for infinite-dimensional processes without strong Feller property in a non-compact space. The results obtained can be applied to the two-dimensional Navier-Stokes system in a bounded domain and to the complex Ginzburg-Landau equation.

  3. Transition-state optimization by the free energy gradient method: Application to aqueous-phase Menshutkin reaction between ammonia and methyl chloride

    NASA Astrophysics Data System (ADS)

    Hirao, Hajime; Nagae, Yukihiko; Nagaoka, Masataka

    2001-11-01

    The transition state (TS) for the Menshutkin reaction H3N + CH3Cl → H3NCH3+ + Cl− in aqueous solution was located on the free energy surface (FES) by the free energy gradient (FEG) method. The solute-solvent system was described by a hybrid quantum mechanical and molecular mechanical (QM/MM) method. The reaction path in water was found to deviate largely from that in the gas phase. It was concluded that, in such a reaction involving charge separation, TS structure optimization on an FES is indispensable for obtaining valid information about a TS in solution.

  4. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range.

    PubMed

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-12-19

    In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spread sheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different situations.
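
    A hedged sketch of the range- and IQR-based estimators in the spirit of this paper follows; the constants reflect the Wan et al. formulas as commonly cited, and should be verified against the paper (and its summary spreadsheet) before use in an actual meta-analysis.

        import numpy as np
        from scipy.stats import norm

        def mean_sd_from_min_med_max(a, m, b, n):
            """Scenario with minimum a, median m, maximum b, sample size n."""
            mean = (a + 2.0 * m + b) / 4.0
            sd = (b - a) / (2.0 * norm.ppf((n - 0.375) / (n + 0.25)))
            return mean, sd

        def sd_from_iqr(q1, q3, n):
            """Scenario where the first and third quartiles are reported."""
            return (q3 - q1) / (2.0 * norm.ppf((0.75 * n - 0.125) / (n + 0.25)))

        print(mean_sd_from_min_med_max(a=10, m=25, b=52, n=40))
        print(sd_from_iqr(q1=18, q3=33, n=40))

    Note how the sample size n enters through the normal quantile, which is precisely the improvement over range/4-style rules criticized above.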

  5. Regional flow duration curves: Geostatistical techniques versus multivariate regression

    USGS Publications Warehouse

    Pugliese, Alessio; Farmer, William H.; Castellarin, Attilio; Archfield, Stacey A.; Vogel, Richard M.

    2016-01-01

    A period-of-record flow duration curve (FDC) represents the relationship between the magnitude and frequency of daily streamflows. Prediction of FDCs is of great importance for locations characterized by sparse or missing streamflow observations. We present a detailed comparison of two methods which are capable of predicting an FDC at ungauged basins: (1) an adaptation of the geostatistical method, Top-kriging, employing a linear weighted average of dimensionless empirical FDCs, standardised with a reference streamflow value; and (2) regional multiple linear regression of streamflow quantiles, perhaps the most common method for the prediction of FDCs at ungauged sites. In particular, Top-kriging relies on a metric for expressing the similarity between catchments computed as the negative deviation of the FDC from a reference streamflow value, which we termed total negative deviation (TND). Comparisons of these two methods are made in 182 largely unregulated river catchments in the southeastern U.S. using a three-fold cross-validation algorithm. Our results reveal that the two methods perform similarly throughout flow-regimes, with average Nash-Sutcliffe Efficiencies of 0.566 and 0.662 (0.883 and 0.829 on log-transformed quantiles) for the geostatistical and the linear regression models, respectively. The differences in the reproduction of FDCs occurred mostly for low flows with exceedance probability (i.e., duration) above 0.98.
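
    For concreteness, the Nash-Sutcliffe Efficiency used as the comparison score above can be computed as below; the example quantile arrays are invented.

        import numpy as np

        def nse(observed, simulated):
            """Nash-Sutcliffe Efficiency: 1 is perfect, 0 matches the mean."""
            observed, simulated = np.asarray(observed), np.asarray(simulated)
            return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
                (observed - observed.mean()) ** 2)

        obs = np.array([120.0, 65.0, 30.0, 12.0, 4.0])   # streamflow quantiles
        sim = np.array([110.0, 70.0, 28.0, 10.0, 5.0])
        # Raw and log-transformed scores, mirroring the two figures quoted above
        print(nse(obs, sim), nse(np.log(obs), np.log(sim)))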

  6. Convex hulls of random walks in higher dimensions: A large-deviation study

    NASA Astrophysics Data System (ADS)

    Schawe, Hendrik; Hartmann, Alexander K.; Majumdar, Satya N.

    2017-12-01

    The distributions of the hypervolume V and surface ∂V of convex hulls of (multiple) random walks in higher dimensions are determined numerically, resolving probabilities far smaller than P = 10^-1000 in order to estimate large deviation properties. For arbitrary dimensions and large walk lengths T, we suggest a scaling behavior of the distribution with the length of the walk T similar to the two-dimensional case, as well as the behavior of the distributions in the tails. We underpin both with numerical data in d = 3 and d = 4 dimensions. Further, we confirm the analytically known means of those distributions and calculate their variances for large T.
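
    The direct-sampling part of such a study is straightforward to reproduce; a sketch in d = 3 follows, with Gaussian steps as an assumption. The large-deviation machinery (importance sampling over walk ensembles) needed to reach probabilities near 10^-1000 is not shown.

        import numpy as np
        from scipy.spatial import ConvexHull

        rng = np.random.default_rng(42)
        T, d, samples = 1000, 3, 200
        volumes = []
        for _ in range(samples):
            steps = rng.normal(size=(T, d))     # Gaussian step vectors
            walk = np.cumsum(steps, axis=0)     # random walk positions
            hull = ConvexHull(walk)
            volumes.append(hull.volume)         # in 3D, .volume is V, .area is dV

        volumes = np.array(volumes)
        print(volumes.mean(), volumes.std())    # simple-sampling estimate only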

  7. SU-E-T-469: A Practical Approach for the Determination of Small Field Output Factors Using Published Monte Carlo Derived Correction Factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calderon, E; Siergiej, D

    2014-06-01

    Purpose: Output factor determination for small fields (less than 20 mm) presents significant challenges due to ion chamber volume averaging and diode over-response. Measured output factor values between detectors are known to have large deviations as field sizes are decreased. No set standard to resolve this difference in measurement exists. We observed differences between measured output factors of up to 14% using two different detectors. Published Monte Carlo derived correction factors were used to address this challenge and decrease the output factor deviation between detectors. Methods: Output factors for Elekta's linac-based stereotactic cone system were measured using the EDGE detector (Sun Nuclear) and the A16 ion chamber (Standard Imaging). Measurement conditions were 100 cm SSD (source to surface distance) and 1.5 cm depth. Output factors were first normalized to a 10.4 cm × 10.4 cm field size using a daisy-chaining technique to minimize the dependence of field size on detector response. An equation expressing the relation between published Monte Carlo correction factors as a function of field size for each detector was derived. The measured output factors were then multiplied by the calculated correction factors. EBT3 gafchromic film dosimetry was used to independently validate the corrected output factors. Results: Without correction, the deviation in output factors between the EDGE and A16 detectors ranged from 1.3 to 14.8%, depending on cone size. After applying the calculated correction factors, this deviation fell to 0 to 3.4%. Output factors determined with film agree within 3.5% of the corrected output factors. Conclusion: We present a practical approach to applying published Monte Carlo derived correction factors to measured small field output factors for the EDGE and A16 detectors. Using this method, we were able to decrease the percent deviation between both detectors from 14.8% to 3.4% agreement.

  8. Work fluctuations for a Brownian particle between two thermostats

    NASA Astrophysics Data System (ADS)

    Visco, Paolo

    2006-06-01

    We explicitly determine the large deviation function of the energy flow of a Brownian particle coupled to two heat baths at different temperatures. This toy model, initially introduced by Derrida and Brunet (2005, Einstein aujourd'hui (Les Ulis: EDP Sciences)), not only allows us to sort out the influence of initial conditions on large deviation functions but also allows us to pinpoint various restrictions bearing upon the range of validity of the Fluctuation Relation.

  9. 40 CFR 61.207 - Radium-226 sampling and measurement procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... B, Method 114. (3) Calculate the mean, x1, and the standard deviation, s1, of the n1 radium-226... owner or operator of a phosphogypsum stack shall report the mean, standard deviation, 95th percentile..., Method 114. (4) Recalculate the mean and standard deviation of the entire set of n2 radium-226...

  10. Real Time Search Algorithm for Observation Outliers During Monitoring Engineering Constructions

    NASA Astrophysics Data System (ADS)

    Latos, Dorota; Kolanowski, Bogdan; Pachelski, Wojciech; Sołoducha, Ryszard

    2017-12-01

    Real time monitoring of engineering structures in case of an emergency or disaster requires collecting a large amount of data to be processed by specific analytical techniques. A quick and accurate assessment of the state of the object is crucial for a successful rescue action. One of the more significant methods for evaluating large sets of data, collected either during a specified interval of time or permanently, is time series analysis. Presented in this paper is a search algorithm for those time series elements which deviate from their values expected during monitoring. Quick and proper detection of observations indicating anomalous behavior of the structure allows a variety of preventive actions to be taken. The mathematical formulae used in the algorithm provide maximal sensitivity for detecting even minimal changes in the object's behavior. The sensitivity analyses were conducted for the moving average algorithm as well as for the Douglas-Peucker algorithm used in the generalization of linear objects in GIS. In addition to determining the size of deviations from the average, the so-called Hausdorff distance was used. The simulations carried out and the verification against laboratory survey data showed that the approach provides sufficient sensitivity for automatic real time analysis of large amounts of data obtained from different and various sensors (total stations, leveling, cameras, radar).
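
    A minimal sketch of the moving-average variant analyzed above; the window length, the 3-sigma flagging rule and the synthetic signal are illustrative assumptions, not the paper's tuned parameters.

        import numpy as np

        def flag_outliers(x, window=20, k=3.0):
            """Flag observations deviating strongly from a centered moving average."""
            x = np.asarray(x, dtype=float)
            kernel = np.ones(window) / window
            ma = np.convolve(x, kernel, mode='same')   # centered moving average
            resid = x - ma
            return np.abs(resid) > k * resid.std()     # True where deviation is large

        t = np.arange(500)
        signal = 0.01 * t + np.random.default_rng(7).normal(0, 0.5, size=500)
        signal[300] += 6.0                             # injected anomalous reading
        print(np.flatnonzero(flag_outliers(signal)))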

  11. Deviations from Newton's law in supersymmetric large extra dimensions

    NASA Astrophysics Data System (ADS)

    Callin, P.; Burgess, C. P.

    2006-09-01

    Deviations from Newton's inverse-squared law at the micron length scale are smoking-gun signals for models containing supersymmetric large extra dimensions (SLEDs), which have been proposed as approaches for resolving the cosmological constant problem. Just like their non-supersymmetric counterparts, SLED models predict gravity to deviate from the inverse-square law because of the advent of new dimensions at sub-millimeter scales. However SLED models differ from their non-supersymmetric counterparts in three important ways: (i) the size of the extra dimensions is fixed by the observed value of the dark energy density, making it impossible to shorten the range over which new deviations from Newton's law must be seen; (ii) supersymmetry predicts there to be more fields in the extra dimensions than just gravity, implying different types of couplings to matter and the possibility of repulsive as well as attractive interactions; and (iii) the same mechanism which is purported to keep the cosmological constant naturally small also keeps the extra-dimensional moduli effectively massless, leading to deviations from general relativity in the far infrared of the scalar-tensor form. We here explore the deviations from Newton's law which are predicted over micron distances, and show the ways in which they differ and resemble those in the non-supersymmetric case.
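
    Searches of this kind are conventionally reported against a Yukawa-type parametrization of the potential (a convention of the short-range gravity literature, not a formula quoted from this paper):

        V(r) = -\frac{G\, m_1 m_2}{r}\left[1 + \alpha\, e^{-r/\lambda}\right]

    where \alpha sets the strength of the new interaction relative to gravity (of either sign, matching the repulsive as well as attractive couplings mentioned above) and \lambda its range.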

  12. Analysis of using the tongue deviation angle as a warning sign of a stroke.

    PubMed

    Wei, Ching-Chuan; Huang, Shu-Wen; Hsu, Sheng-Lin; Chen, Hsing-Chung; Chen, Jong-Shin; Liang, Hsinying

    2012-08-21

    The symptom of tongue deviation is observed in a stroke or transient ischemic attack. Nevertheless, there is much room for the interpretation of the tongue deviation test. The crucial factor is the lack of an effective quantification method of tongue deviation. If we can quantify the features of the tongue deviation and scientifically verify the relationship between the deviation angle and a stroke, the information provided by the tongue will be helpful in recognizing a warning of a stroke. In this study, a quantification method of the tongue deviation angle was proposed for the first time to characterize stroke patients. We captured the tongue images of stroke patients (15 males and 10 females, ranging between 55 and 82 years of age); transient ischemic attack (TIA) patients (16 males and 9 females, ranging between 53 and 79 years of age); and normal subjects (14 males and 11 females, ranging between 52 and 80 years of age) to analyze whether the method is effective. In addition, we used the receiver operating characteristic curve (ROC) for the sensitivity analysis, and determined the threshold value of the tongue deviation angle for the warning sign of a stroke. The means and standard deviations of the tongue deviation angles of the stroke, TIA, and normal groups were: 6.9 ± 3.1, 4.9 ± 2.1 and 1.4 ± 0.8 degrees, respectively. Analyzed by the unpaired Student's t-test, the p-value between the stroke group and the TIA group was 0.015 (>0.01), indicating no significant difference in the tongue deviation angle. The p-values between the stroke group and the normal group, as well as between the TIA group and the normal group, were both less than 0.01. These results show the significant differences in the tongue deviation angle between the patient groups (stroke and TIA patients) and the normal group. These results also imply that the tongue deviation angle can effectively identify the patient group (stroke and TIA patients) and the normal group. With respect to the visual examination, 40% and 32% of stroke patients, 24% and 16% of TIA patients, and 4% and 0% of normal subjects were found to have tongue deviations when physicians "A" and "B" examined them. This variation shows the necessity of a quantification method in a clinical setting. In the receiver operating characteristic curve (ROC), the Area Under Curve (AUC = 0.96) indicates good discrimination. A tongue deviation angle greater than the optimum threshold value (3.2°) predicts a risk of stroke. In summary, we developed an effective quantification method to characterize the tongue deviation angle, and we confirmed the feasibility of recognizing the tongue deviation angle as an early warning sign of an impending stroke.
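
    A sketch of the ROC step using synthetic angles drawn from the reported group statistics (6.9 ± 3.1° for patients, 1.4 ± 0.8° for normals); under these assumptions the Youden-index threshold lands close to the reported 3.2°, though the exact value varies with the random sample.

        import numpy as np
        from sklearn.metrics import roc_curve, auc

        rng = np.random.default_rng(0)
        patients = rng.normal(6.9, 3.1, 50)    # stroke + TIA deviation angles (deg)
        normals = rng.normal(1.4, 0.8, 25)
        angles = np.concatenate([patients, normals])
        labels = np.concatenate([np.ones(50), np.zeros(25)])

        fpr, tpr, thresholds = roc_curve(labels, angles)
        print("AUC:", auc(fpr, tpr))
        # Youden index J = TPR - FPR picks the threshold with best discrimination
        print("Youden threshold:", thresholds[np.argmax(tpr - fpr)])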

  13. Electron-beam conditioning by Thomson scattering.

    PubMed

    Schroeder, C B; Esarey, E; Leemans, W P

    2004-11-05

    A method is proposed for conditioning electron beams via Thomson scattering. The conditioning provides a quadratic correlation between the electron energy deviation and the betatron amplitude of the electrons, which results in enhanced gain in free-electron lasers. Quantum effects imply conditioning must occur at high laser fluence and moderate electron energy. Conditioning of x-ray free-electron lasers should be achievable with present laser technology, leading to significant size and cost reductions of these large-scale facilities.

  14. Complexity analysis based on generalized deviation for financial markets

    NASA Astrophysics Data System (ADS)

    Li, Chao; Shang, Pengjian

    2018-03-01

    In this paper, a new modified method, complexity analysis based on generalized deviation, is proposed as a measure for investigating the correlation between past price and future volatility in financial time series. In comparison with the earlier retarded volatility model, the new approach is both simple and computationally efficient. The method based on the generalized deviation function offers a thorough way of quantifying the rules of the financial market. The robustness of this method is verified by numerical experiments with both artificial and financial time series. Results show that the generalized deviation complexity analysis method not only identifies the volatility of financial time series, but also provides a comprehensive way of distinguishing the different characteristics of stock indices and individual stocks. Exponential functions can be used to successfully fit the volatility curves and quantify the changes of complexity for stock market data. We then study the influence of the negative domain of the deviation coefficient and the differences between volatile periods and calm periods. After analyzing the experimental model, we find that the generalized deviation model has definite advantages in exploring the relationship between historical returns and future volatility.

  15. MO-F-CAMPUS-T-03: Data Driven Approaches for Determination of Treatment Table Tolerance Values for Record and Verification Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, N; DiCostanzo, D; Fullenkamp, M

    2015-06-15

    Purpose: To determine appropriate couch tolerance values for modern radiotherapy linac R&V systems with indexed patient setup. Methods: Treatment table tolerance values have been the most difficult to lower, due to many factors including variations in patient positioning and differences in table tops between machines. We recently installed nine linacs with similar tables and started indexing every patient in our clinic. In this study we queried our R&V database and analyzed the deviation of couch position values from the values acquired at verification simulation for all patients treated with indexed positioning. Means and standard deviations of daily setup deviations were computed in the longitudinal, lateral and vertical directions for 343 patient plans. The mean, median and standard error of the standard deviations across the whole patient population, and for some disease sites, were computed to determine tolerance values. Results: The plot of our couch deviation values showed a Gaussian distribution, with some small deviations corresponding to setup uncertainties on non-imaging days and SRS/SRT/SBRT patients, as well as some large deviations which were spot checked and found to correspond to indexing errors that were overridden. Setting our tolerance values based on the median + 1 standard error resulted in tolerance values of 1 cm lateral and longitudinal, and 0.5 cm vertical, for all non-SRS/SRT/SBRT cases. Re-analyzing the data, we found that about 92% of the treated fractions would be within these tolerance values (ignoring the mis-indexed patients). We also analyzed data for disease site based subpopulations and found no difference in the tolerance values that needed to be used. Conclusion: With the use of automation, auto-setup and other workflow efficiency tools being introduced into the radiotherapy workflow, it is very essential to set table tolerances that allow safe treatments, but flag setup errors that need to be reassessed before treatment.
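
    The tolerance rule described above (median of the per-plan setup standard deviations plus one standard error) can be sketched as follows; the deviation matrix is invented stand-in data for the queried R&V couch offsets, and the standard-error definition used here is an assumption.

        import numpy as np

        rng = np.random.default_rng(3)
        # One axis of couch offsets: 343 plans x 25 fractions, in cm (synthetic)
        deviations = rng.normal(0, 0.4, size=(343, 25))

        per_plan_sd = deviations.std(axis=1)             # setup SD for each plan
        median_sd = np.median(per_plan_sd)
        std_err = per_plan_sd.std(ddof=1) / np.sqrt(per_plan_sd.size)
        tolerance = median_sd + std_err                  # median + 1 standard error
        print(f"suggested couch tolerance: {tolerance:.2f} cm")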

  16. SU-F-J-29: Dosimetric Effect of Image Registration ROI Size and Focus in Automated CBCT Registration for Spine SBRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magnelli, A; Smith, A; Chao, S

    2016-06-15

    Purpose: Spinal stereotactic body radiotherapy (SBRT) involves highly conformal dose distributions and steep dose gradients due to the proximity of the spinal cord to the treatment volume. To achieve the planning goals while limiting the spinal cord dose, patients are setup using kV cone-beam CT (kV-CBCT) with 6 degree corrections. The kV-CBCT registration with the reference CT is dependent on a user selected region of interest (ROI). The objective of this work is to determine the dosimetric impact of ROI selection. Methods: Twenty patients were selected for this study. For each patient, the kV-CBCT was registered to the reference CT using three ROIs including: 1) the external body, 2) a large anatomic region, and 3) a small region focused in the target volume. Following each registration, the aligned CBCTs and contours were input to the treatment planning system for dose evaluation. The minimum dose, dose to 99% and 90% of the tumor volume (D99%, D90%), dose to 0.03cc and the dose to 10% of the spinal cord subvolume (V10Gy) were compared to the planned values. Results: The average deviations in the tumor minimum dose were 2.68%±1.7%, 4.6%±4.0%, 14.82%±9.9% for small, large and the external ROIs, respectively. The average deviations in tumor D99% were 1.15%±0.7%, 3.18%±1.7%, 10.0%±6.6%, respectively. The average deviations in tumor D90% were 1.00%±0.96%, 1.14%±1.05%, 3.19%±4.77% respectively. The average deviations in the maximum dose to the spinal cord were 2.80%±2.56%, 7.58%±8.28%, 13.35%±13.14%, respectively. The average deviation in V10Gy to the spinal cord were 1.69%±0.88%, 1.98%±2.79%, 2.71%±5.63%. Conclusion: When using automated registration algorithms for CBCT-Reference alignment, a small target-focused ROI results in the least dosimetric deviation from the plan. It is recommended to focus narrowly on the target volume to keep the spinal cord dose below tolerance.

  17. A Correlational Study of Scoliosis and Trunk Balance in Adult Patients with Mandibular Deviation

    PubMed Central

    Yang, Yang; Wang, Na; Wang, Wenyong; Ding, Yin; Sun, Shiyao

    2013-01-01

    Previous studies have confirmed that patients with mandibular deviation often have abnormal morphology of their cervical vertebrae. However, the relationship between mandibular deviation, scoliosis, and trunk balance has not been studied. Currently, mandibular deviation is usually treated as a single pathology, which leads to poor clinical efficacy. We investigated the relationship of spine coronal morphology and trunk balance in adult patients with mandibular deviation, and compared the findings to those in healthy volunteers. Thirty-five adult patients with skeletal mandibular deviation and 10 healthy volunteers underwent anterior X-ray imaging of the head and posteroanterior X-ray imaging of the spine. Landmarks and lines were drawn and measured on these films. The axis distance method was used to measure the degree of scoliosis and the balance angle method was used to measure trunk balance. The relationship of mandibular deviation, spine coronal morphology and trunk balance was evaluated with the Pearson correlation method. The spine coronal morphology of patients with mandibular deviation demonstrated an “S” type curve, while a straight line parallel with the gravity line was found in the control group (significant difference, p<0.01). The trunk balance of patients with mandibular deviation was disturbed (imbalance angle >1°), while the control group had a normal trunk balance (imbalance angle <1°). There was a significant difference between the two groups (p<0.01). The degree of scoliosis and shoulder imbalance correlated with the degree of mandibular deviation, and presented a linear trend. The direction of mandibular deviation was the same as that of the lateral bending of the thoracolumbar vertebrae, which was opposite to the direction of lateral bending of the cervical vertebrae. Our study shows that the degree of mandibular deviation has a high correlation with the degree of scoliosis and trunk imbalance; all three deformities should be clinically evaluated in the management of mandibular deviation. PMID:23555836

  18. The gravitational-optical methods for examination of the hypothesis about galaxies and antigalaxies in the Universe

    NASA Astrophysics Data System (ADS)

    Gribov, I. A.; Trigger, S. A.

    2018-01-01

    Optical-gravitational methods for distinguishing photons from antiphotons (i.e., galaxies emitting photons from antigalaxies emitting antiphotons) within the proposed hypothesis of a totally gravitationally neutral (TGN) Universe are considered. These methods extend the earlier proposed gravitationally neutral Universe concept, now including gravitational neutrality of the vacuum. This concept contains (i) enlarged unbroken baryon-like, charge, parity, time and full ±M_gr gravitational symmetries between all massive elementary particles and antiparticles, including (ia) ordinary matter (OM) and ordinary antimatter (OAM) and (ib) dark matter (DM) and dark antimatter (DAM), and (ii) the resulting gravitational repulsion between equally represented (OM+DM) galactic and (OAM+DAM) antigalactic clusters, which spatially isolates them and prevents their mutual annihilation in the large-scale TGN Universe. Gravitational balance is assumed not only between the positive and negative gravitational masses of elementary particles and antiparticles, but also among all massless fields of quantum field theory (QFT), including the opposite gravitational properties of photons and antiphotons, thus realizing a totally gravitationally neutral vacuum in QFT. Photons and antiphotons could be distinguished optically-gravitationally if one could observe a massive deviating OM star, or a deviating (OM+DM) galaxy from our galactic group, moving fast enough on the celestial sphere and crossing the line of sight towards spatially separated far-remote galactic clusters (with visible OM markers, emitting photons) or an antigalactic cluster (with visible OAM markers, emitting antiphotons). The deviations and gravitational microlensing, with temporarily increased or decreased brightness of their OM and OAM rays, would be opposite in the two cases, indicating the galaxies and antigalaxies in the Universe.

  19. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context

    PubMed Central

    Martinez, Josue G.; Carroll, Raymond J.; Müller, Samuel; Sampson, Joshua N.; Chatterjee, Nilanjan

    2012-01-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso. PMID:22347720
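
    The reported run-to-run variability is easy to reproduce in miniature. A hedged sketch with scikit-learn's LassoCV standing in for SCAD/the Adaptive Lasso (scikit-learn ships no oracle penalties), on a simulated sparse, weak-signal design; every number below is an assumption for illustration.

    ```python
    import numpy as np
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(1)
    n, p = 200, 500
    X = rng.standard_normal((n, p))
    beta = np.zeros(p)
    beta[:5] = 0.25                      # sparse truth with weak signals
    y = X @ beta + rng.standard_normal(n)

    # Repeat 10-fold CV with different random fold splits and count how many
    # variables the tuned model selects each time.
    counts = []
    for seed in range(20):
        labels = np.random.default_rng(seed).permutation(n) % 10
        folds = [(np.where(labels != k)[0], np.where(labels == k)[0])
                 for k in range(10)]
        model = LassoCV(cv=folds).fit(X, y)
        counts.append(int(np.sum(model.coef_ != 0)))
    print("selected-variable counts across CV runs:", counts)
    ```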

  20. Method comparison for the determination of hydraulic conductivity [Methodenvergleich zur Bestimmung der hydraulischen Durchlässigkeit]

    NASA Astrophysics Data System (ADS)

    Storz, Katharina; Steger, Hagen; Wagner, Valentin; Bayer, Peter; Blum, Philipp

    2017-06-01

    Knowing the hydraulic conductivity (K) is a precondition for understanding groundwater flow processes in the subsurface. Numerous laboratory and field methods for the determination of hydraulic conductivity exist, which can lead to significantly different results. In order to quantify the variability of these various methods, the hydraulic conductivity was examined for an industrial silica sand (Dorsilit) using four different methods: (1) grain-size analysis, (2) the Kozeny-Carman approach, (3) permeameter tests and (4) flow-rate experiments in a large-scale tank. Due to the large volume of the artificially built aquifer, the tank experiment results are assumed to be the most representative. Hydraulic conductivity values derived from permeameter tests show only minor deviation, while results of the empirically evaluated grain-size analysis are about one order of magnitude higher and show large variance. The latter was confirmed by the analysis of several methods for the determination of K-values found in the literature; we therefore generally question the suitability of grain-size analyses and strongly recommend the use of permeameter tests.
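
    For reference, the Kozeny-Carman approach (method 2 above) reduces to a one-line formula; a short Python sketch with assumed values for a medium quartz sand, not the Dorsilit measurements.

    ```python
    # Kozeny-Carman estimate of hydraulic conductivity K (m/s) from porosity
    # and an effective grain diameter; all input values are assumed.
    rho, g, mu = 998.0, 9.81, 1.0e-3   # water: density (kg/m3), g, viscosity (Pa s)
    n = 0.38                           # porosity
    d_e = 0.5e-3                       # effective grain diameter (m)

    K = (rho * g / mu) * (n**3 / (1.0 - n)**2) * d_e**2 / 180.0
    print(f"K = {K:.2e} m/s")          # ~1e-3 m/s, plausible for medium sand
    ```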

  1. Simulation-based estimation of mean and standard deviation for meta-analysis via Approximate Bayesian Computation (ABC).

    PubMed

    Kwon, Deukwoo; Reis, Isildinha M

    2015-08-12

    When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC performs well. However, the Wan et al. method is best for estimating standard deviation under normal distribution. In the estimation of the mean, our ABC method is best regardless of assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics such as the posterior mean and 95 % credible interval when Bayesian analysis has been employed.
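
    The ABC idea fits in a few lines: draw (mu, sigma) from a prior, simulate a sample of the study's size, and keep draws whose simulated summaries land near the reported ones. A minimal sketch; the priors, tolerance, and reported values are all invented.

    ```python
    import numpy as np

    reported = np.array([12.0, 4.0, 20.0])   # median, min, max (invented)
    n = 50                                   # sample size of the study

    rng = np.random.default_rng(2)
    accepted = []
    for _ in range(100_000):
        mu, sigma = rng.uniform(5, 25), rng.uniform(0.5, 10)  # flat priors
        x = rng.normal(mu, sigma, n)
        sim = np.array([np.median(x), x.min(), x.max()])
        if np.linalg.norm(sim - reported) < 2.0:              # ABC tolerance
            accepted.append((mu, sigma))

    mu_hat, sd_hat = np.mean(accepted, axis=0)
    print(f"{len(accepted)} accepted: mean ~ {mu_hat:.2f}, SD ~ {sd_hat:.2f}")
    ```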

  2. A novel multi-segment path analysis based on a heterogeneous velocity model for the localization of acoustic emission sources in complex propagation media.

    PubMed

    Gollob, Stephan; Kocur, Georg Karl; Schumacher, Thomas; Mhamdi, Lassaad; Vogel, Thomas

    2017-02-01

    In acoustic emission analysis, common source location algorithms assume, independently of the nature of the propagation medium, a straight (shortest) wave path between the source and the sensors. For heterogeneous media such as concrete, the wave travels in complex paths due to the interaction with the dissimilar material contents and with the possible geometrical and material irregularities present in these media. For instance, cracks and large air voids present in concrete influence significantly the way the wave travels, by causing wave path deviations. Neglecting these deviations by assuming straight paths can introduce significant errors to the source location results. In this paper, a novel source localization method called FastWay is proposed. It accounts, contrary to most available shortest path-based methods, for the different effects of material discontinuities (cracks and voids). FastWay, based on a heterogeneous velocity model, uses the fastest rather than the shortest travel paths between the source and each sensor. The method was evaluated both numerically and experimentally and the results from both evaluation tests show that, in general, FastWay was able to locate sources of acoustic emissions more accurately and reliably than the traditional source localization methods. Copyright © 2016 Elsevier B.V. All rights reserved.
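
    The fastest-path idea behind FastWay can be illustrated with a plain Dijkstra search on a grid whose cells carry heterogeneous wave speeds, so the first-arrival path bends around slow inclusions. This is a generic stand-in, not the paper's algorithm; the geometry and speeds are assumed.

    ```python
    import heapq
    import numpy as np

    speed = np.full((60, 60), 4000.0)   # m/s, homogeneous concrete (assumed)
    speed[20:40, 25:30] = 500.0         # slow inclusion standing in for a void
    h = 0.01                            # cell size (m)

    def travel_times(src):
        """First-arrival times from src to every cell, via Dijkstra."""
        t = np.full(speed.shape, np.inf)
        t[src] = 0.0
        pq = [(0.0, src)]
        while pq:
            d, (i, j) = heapq.heappop(pq)
            if d > t[i, j]:
                continue
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < 60 and 0 <= nj < 60:
                    nd = d + h / (0.5 * (speed[i, j] + speed[ni, nj]))
                    if nd < t[ni, nj]:
                        t[ni, nj] = nd
                        heapq.heappush(pq, (nd, (ni, nj)))
        return t

    t = travel_times((5, 5))
    print(f"fastest arrival at (55, 55): {t[55, 55] * 1e6:.1f} us")
    ```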

  3. A Simple Accurate Alternative to the Minimum-Deviation Method for the Determination of the Refractive Index of a Prism.

    ERIC Educational Resources Information Center

    Waldenstrom, S.; Naqvi, K. Razi

    1978-01-01

    Proposes an alternative to the classical minimum-deviation method for determining the refractive index of a prism. This new "fixed angle of incidence method" may find applications in research. (Author/GA)
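
    For context, the classical minimum-deviation relation that such alternatives aim to replace is a one-liner; a sketch with an assumed 60° prism.

    ```python
    import math

    def refractive_index(apex_deg, min_dev_deg):
        """Classical relation n = sin((A + D_min)/2) / sin(A/2)."""
        A, D = math.radians(apex_deg), math.radians(min_dev_deg)
        return math.sin((A + D) / 2) / math.sin(A / 2)

    # ~1.52, i.e. crown glass, for a 60 deg prism deviating 38.9 deg at minimum
    print(f"n = {refractive_index(60.0, 38.9):.3f}")
    ```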

  4. Welding deviation detection algorithm based on extremum of molten pool image contour

    NASA Astrophysics Data System (ADS)

    Zou, Yong; Jiang, Lipei; Li, Yunhua; Xue, Long; Huang, Junfen; Huang, Jiqiang

    2016-01-01

    Welding deviation detection is the basis of robotic tracking welding, but on-line real-time measurement of welding deviation is still not well solved by existing methods. Gas metal arc welding (GMAW) molten pool images contain abundant information that is very important for the control of welding seam tracking. The physical meaning of the curvature extrema of the molten pool contour is revealed by studying the molten pool images: the points carrying the deviation information, the welding wire center and the molten tip center, are the maximum and a local maximum of the contour curvature, and the horizontal welding deviation is the position difference between these two extremum points. A new method of weld deviation detection is presented, comprising preprocessing of the molten pool images, extraction and segmentation of the contours, location of the contour extremum points, and calculation of the welding deviation. Extracting the contours is the premise, segmenting the contour lines is the foundation, and locating the contour extremum points is the key. The contour images can be extracted with a discrete dyadic wavelet transform and divided into two sub-contours, for the welding wire and the molten tip respectively. The curvature at each point of the two sub-contour lines is calculated with an approximate multi-point curvature formula for plane curves, and the two curvature extremum points are the features needed for the welding deviation calculation. Tests and analyses show that the maximum error of the obtained on-line welding deviation is 2 pixels (0.16 mm), and the algorithm is stable enough to meet the real-time control requirements of pipeline welding at speeds below 500 mm/min. The method can be applied to on-line automatic welding deviation detection.
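
    A toy version of the curvature-extremum idea, with an invented contour standing in for a real molten pool image and a plain finite-difference curvature in place of the paper's multi-point formula.

    ```python
    import numpy as np

    def curvature(x, y):
        """Curvature of a plane curve sampled at points (x, y)."""
        dx, dy = np.gradient(x), np.gradient(y)
        ddx, ddy = np.gradient(dx), np.gradient(dy)
        return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

    # Invented contour: a tall bump (welding wire) and a flatter bump (molten
    # tip); x is in pixels.
    x = np.linspace(0, 200, 400)
    y = 40 * np.exp(-((x - 80) / 12) ** 2) + 25 * np.exp(-((x - 130) / 15) ** 2)

    k = np.abs(curvature(x, y))
    wire = x[np.argmax(k)]                   # global curvature maximum
    k[np.abs(x - wire) < 25] = 0             # mask it out, then take the next
    tip = x[np.argmax(k)]                    # local curvature maximum
    print(f"horizontal welding deviation ~ {abs(wire - tip):.1f} px")
    ```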

  5. Sentiment analysis of feature ranking methods for classification accuracy

    NASA Astrophysics Data System (ADS)

    Joseph, Shashank; Mugauri, Calvin; Sumathy, S.

    2017-11-01

    Text pre-processing and feature selection are important and critical steps in text mining. Text pre-processing of large datasets is a difficult task, as unstructured raw data must be converted into a structured format. Traditional methods of processing and weighting took much time and were less accurate. To overcome this challenge, feature ranking techniques have been devised. A feature set from text pre-processing is fed as input to feature selection. Feature selection helps improve text classification accuracy. Of the three feature selection categories available, the filter category is the focus here. Five feature ranking methods, namely document frequency, standard deviation, information gain, chi-square, and weighted log-likelihood ratio, are analyzed.
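
    Of the five, chi-square ranking is the quickest to demonstrate; a minimal scikit-learn sketch on an invented four-document corpus.

    ```python
    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.feature_selection import chi2

    docs = ["great phone loved it", "awful battery truly bad",
            "great screen great price", "bad support awful experience"]
    labels = [1, 0, 1, 0]                 # invented sentiment classes

    vec = CountVectorizer()
    X = vec.fit_transform(docs)
    scores, _ = chi2(X, labels)           # chi-square score per term

    for word, s in sorted(zip(vec.get_feature_names_out(), scores),
                          key=lambda t: -t[1]):
        print(f"{word:12s} chi2 = {s:.2f}")
    ```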

  6. More reliable inference for the dissimilarity index of segregation

    PubMed Central

    Allen, Rebecca; Burgess, Simon; Davidson, Russell; Windmeijer, Frank

    2015-01-01

    Summary The most widely used measure of segregation is the so-called dissimilarity index. It is now well understood that this measure also reflects randomness in the allocation of individuals to units (i.e. it measures deviations from evenness, not deviations from randomness). This leads to potentially large values of the segregation index when unit sizes and/or minority proportions are small, even if there is no underlying systematic segregation. Our response to this is to produce adjustments to the index, based on an underlying statistical model. We specify the assignment problem in a very general way, with differences in conditional assignment probabilities underlying the resulting segregation. From this, we derive a likelihood ratio test for the presence of any systematic segregation, and bias adjustments to the dissimilarity index. We further develop the asymptotic distribution theory for testing hypotheses concerning the magnitude of the segregation index and show that the use of bootstrap methods can improve the size and power properties of test procedures considerably. We illustrate these methods by comparing dissimilarity indices across school districts in England to measure social segregation. PMID:27774035
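
    The index itself is simple, and a quick simulation shows the randomness bias the authors adjust for; a sketch with invented unit sizes and a 10% minority share allocated entirely at random.

    ```python
    import numpy as np

    def dissimilarity_index(minority, majority):
        """D = 0.5 * sum_i |m_i / M - n_i / N| over units i."""
        m = np.asarray(minority, float)
        n = np.asarray(majority, float)
        return 0.5 * np.sum(np.abs(m / m.sum() - n / n.sum()))

    # Random allocation with no systematic segregation still yields D >> 0
    # when units are small and the minority proportion is low.
    rng = np.random.default_rng(3)
    total = rng.integers(20, 60, size=40)       # 40 small units
    minority = rng.binomial(total, 0.1)         # 10% minority share
    D = dissimilarity_index(minority, total - minority)
    print(f"D under pure randomness: {D:.3f}")
    ```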

  7. Proximity Effect Correction by Pattern Modified Stencil Mask in Large-Field Projection Electron-Beam Lithography

    NASA Astrophysics Data System (ADS)

    Kobinata, Hideo; Yamashita, Hiroshi; Nomura, Eiichi; Nakajima, Ken; Kuroki, Yukinori

    1998-12-01

    A new method for proximity effect correction, suitable for large-field electron-beam (EB) projection lithography with high accelerating voltage, such as SCALPEL and PREVAIL in the case where a stencil mask is used, is discussed. In this lithography a large field is exposed with the same dose, and thus the dose modification method used in the variable-shaped beam and cell projection methods cannot be applied. In this study, we report on the development of a new proximity effect correction method which uses a pattern-modified stencil mask suitable for high accelerating voltage and large-field EB projection lithography. In order to obtain the mask bias value, we investigated the linewidth reduction due to the proximity effect in the peripheral memory cell area, and found that it could be expressed by a simple function, with all correction parameters easily determined from the mask pattern data alone. The proximity effect for the peripheral array pattern could also be corrected by considering the pattern density. The calculated linewidth deviation was 3% or less for a 0.07-µm-L/S memory cell pattern and 5% or less for a 0.14-µm-line and 0.42-µm-space peripheral array pattern, simultaneously.

  8. Efficient characterisation of large deviations using population dynamics

    NASA Astrophysics Data System (ADS)

    Brewer, Tobias; Clark, Stephen R.; Bradford, Russell; Jack, Robert L.

    2018-05-01

    We consider population dynamics as implemented by the cloning algorithm for analysis of large deviations of time-averaged quantities. We use the simple symmetric exclusion process with periodic boundary conditions as a prototypical example and investigate the convergence of the results with respect to the algorithmic parameters, focussing on the dynamical phase transition between homogeneous and inhomogeneous states, where convergence is relatively difficult to achieve. We discuss how the performance of the algorithm can be optimised, and how it can be efficiently exploited on parallel computing platforms.
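
    The flavor of the cloning approach can be conveyed on a far simpler system than the exclusion process: estimating the scaled cumulant generating function of the time-averaged current of a biased coin-flip walker, where the exact answer is available as a check. A degenerate but checkable sketch; all parameters are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def scgf_estimate(s, p=0.6, walkers=10_000, steps=1_000):
        """Population estimate of psi(s) = lim (1/T) log E[exp(s J_T)]."""
        log_mean_w = 0.0
        for _ in range(steps):
            jumps = np.where(rng.random(walkers) < p, 1, -1)
            w = np.exp(s * jumps)            # per-step cloning weights
            log_mean_w += np.log(w.mean())
            # A full cloning run would now resample walkers proportionally
            # to w; for this memoryless walker the resampled population is
            # statistically unchanged, so only the bookkeeping above matters.
        return log_mean_w / steps

    s = 0.3
    exact = np.log(0.6 * np.exp(s) + 0.4 * np.exp(-s))
    print(f"cloning: {scgf_estimate(s):.4f}   exact: {exact:.4f}")
    ```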

  9. Precision analysis for standard deviation measurements of immobile single fluorescent molecule images.

    PubMed

    DeSantis, Michael C; DeCenzo, Shawn H; Li, Je-Luen; Wang, Y M

    2010-03-29

    Standard deviation measurements of intensity profiles of stationary single fluorescent molecules are useful for studying axial localization, molecular orientation, and a fluorescence imaging system's spatial resolution. Here we report on the analysis of the precision of standard deviation measurements of intensity profiles of single fluorescent molecules imaged using an EMCCD camera. We have developed an analytical expression for the standard deviation measurement error of a single image, which is a function of the total number of detected photons, the background photon noise, and the camera pixel size. The theoretical results agree well with the experimental, simulation, and numerical integration results. Using this expression, we show that single-molecule standard deviation measurements offer nanometer precision for a large range of experimental parameters.

  10. A new digitized reverse correction method for hypoid gears based on a one-dimensional probe

    NASA Astrophysics Data System (ADS)

    Li, Tianxing; Li, Jubo; Deng, Xiaozhong; Yang, Jianjun; Li, Genggeng; Ma, Wensuo

    2017-12-01

    In order to improve the tooth surface geometric accuracy and transmission quality of hypoid gears, a new digitized reverse correction method is proposed based on the measurement data from a one-dimensional probe. The minimization of tooth surface geometrical deviations is realized from the perspective of mathematical analysis and reverse engineering. Combining the analysis of complex tooth surface generation principles and the measurement mechanism of one-dimensional probes, the mathematical relationship between the theoretical designed tooth surface, the actual machined tooth surface and the deviation tooth surface is established, the mapping relation between machine-tool settings and tooth surface deviations is derived, and the essential connection between the accurate calculation of tooth surface deviations and the reverse correction method of machine-tool settings is revealed. Furthermore, a reverse correction model of machine-tool settings is built, a reverse correction strategy is planned, and the minimization of tooth surface deviations is achieved by means of the method of numerical iterative reverse solution. On this basis, a digitized reverse correction system for hypoid gears is developed by the organic combination of numerical control generation, accurate measurement, computer numerical processing, and digitized correction. Finally, the correctness and practicability of the digitized reverse correction method are proved through a reverse correction experiment. The experimental results show that the tooth surface geometric deviations meet the engineering requirements after two trial cuts and one correction.

  11. Prediction of the Possibility a Right-Turn Driving Behavior at Intersection Leads to an Accident by Detecting Deviation of the Situation from Usual when the Behavior is Observed

    NASA Astrophysics Data System (ADS)

    Hayashi, Toshinori; Yamada, Keiichi

    Deviation of driving behavior from usual could be a sign of human error that increases the risk of traffic accidents. This paper proposes a novel method for predicting the possibility that a driving behavior leads to an accident, using information on the driving behavior and the situation. A previous work proposed predicting this possibility by detecting the deviation of the driving behavior from the usual behavior in that situation. In contrast, the method proposed in this paper predicts the possibility by detecting the deviation of the situation from the usual one when the behavior is observed. An advantage of the proposed method is that the number of required models is independent of the variety of situations. The method was applied to the problem of predicting accidents caused by right-turn driving behavior at an intersection, and its performance was evaluated by experiments on a driving simulator.

  12. MISFITS: evaluating the goodness of fit between a phylogenetic model and an alignment.

    PubMed

    Nguyen, Minh Anh Thi; Klaere, Steffen; von Haeseler, Arndt

    2011-01-01

    As models of sequence evolution become more and more complicated, many criteria for model selection have been proposed, and tools are available to select the best model for an alignment under a particular criterion. However, in many instances the selected model fails to explain the data adequately as reflected by large deviations between observed pattern frequencies and the corresponding expectation. We present MISFITS, an approach to evaluate the goodness of fit (http://www.cibiv.at/software/misfits). MISFITS introduces a minimum number of "extra substitutions" on the inferred tree to provide a biologically motivated explanation why the alignment may deviate from expectation. These extra substitutions plus the evolutionary model then fully explain the alignment. We illustrate the method on several examples and then give a survey about the goodness of fit of the selected models to the alignments in the PANDIT database.

  13. Impact of Penetration Wind Turbines on Transient Stability in Sulbagsel Electrical Interconnection System

    NASA Astrophysics Data System (ADS)

    Nurtrimarini Karim, Andi; Mawar Said, Sri; Chaerah Gunadin, Indar; Darusman B, Mustadir

    2018-03-01

    This paper presents a rotor angle analysis of transient disturbances occurring when wind turbines enter the southern Sulawesi electrical interconnection system (Sulbagsel), both without and with the addition of a power system stabilizer (PSS) control device. The time domain simulation (TDS) method is used to analyze the rotor angle deviation (δ) and rotor angle velocity (ω). A total of 44 buses, 47 lines, 6 transformers, 15 generators and 34 loads were modeled for analysis after the inclusion of large-scale wind turbines in the Sidrap and Jeneponto areas. The simulation and computation results show that adding PSS devices allows transient disturbances caused by the wind turbines entering the Sulbagsel electrical system to be damped, improving the rotor angle deviation (δ) and rotor angle velocity (ω) and thus helping the system continue operating at a new equilibrium point.

  14. Identifying resonance frequency deviations for high order nano-wire ring resonator filters based on a coupling strength variation

    NASA Astrophysics Data System (ADS)

    Park, Sahnggi; Kim, Kap-Joong; Kim, Duk-Jun; Kim, Gyungock

    2009-02-01

    Third-order ring resonators are designed and their resonance frequency deviations are analyzed experimentally, the devices being fabricated with E-beam lithography and ICP etching in a CMOS nano-fabrication laboratory. We developed a reliable method to experimentally identify and reduce the deviation of each ring resonance frequency before the fabrication process is complete. The identified deviations can be minimized in the manner presented in this paper. This method is expected to provide a significant clue for making high-order multi-channel ring resonators.

  15. ERP correlates of unexpected word forms in a picture–word study of infants and adults

    PubMed Central

    Duta, M.D.; Styles, S.J.; Plunkett, K.

    2012-01-01

    We tested 14-month-olds and adults in an event-related potentials (ERPs) study in which pictures of familiar objects generated expectations about upcoming word forms. Expected word forms labelled the picture (word condition), while unexpected word forms mismatched by either a small deviation in word medial vowel height (mispronunciation condition) or a large deviation from the onset of the first speech segment (pseudoword condition). Both infants and adults showed sensitivity to both types of unexpected word form. Adults showed a chain of discrete effects: positivity over the N1 wave, negativity over the P2 wave (PMN effect) and negativity over the N2 wave (N400 effect). Infants showed a similar pattern, including a robust effect similar to the adult P2 effect. These observations were underpinned by a novel visualisation method which shows the dynamics of the ERP within bands of the scalp over time. The results demonstrate shared processing mechanisms across development, as even subtle deviations from expected word forms were indexed in both age groups by a reduction in the amplitude of characteristic waves in the early auditory evoked potential. PMID:22483072

  16. Lower Current Large Deviations for Zero-Range Processes on a Ring

    NASA Astrophysics Data System (ADS)

    Chleboun, Paul; Grosskinsky, Stefan; Pizzoferrato, Andrea

    2017-04-01

    We study lower large deviations for the current of totally asymmetric zero-range processes on a ring with concave current-density relation. We use an approach by Jensen and Varadhan which has previously been applied to exclusion processes, to realize current fluctuations by travelling wave density profiles corresponding to non-entropic weak solutions of the hyperbolic scaling limit of the process. We further establish a dynamic transition, where large deviations of the current below a certain value are no longer typically attained by non-entropic weak solutions, but by condensed profiles, where a non-zero fraction of all the particles accumulates on a single fixed lattice site. This leads to a general characterization of the rate function, which is illustrated by providing detailed results for four generic examples of jump rates, including constant rates, decreasing rates, unbounded sublinear rates and asymptotically linear rates. Our results on the dynamic transition are supported by numerical simulations using a cloning algorithm.

  17. The most likely voltage path and large deviations approximations for integrate-and-fire neurons.

    PubMed

    Paninski, Liam

    2006-08-01

    We develop theory and numerical methods for computing the most likely subthreshold voltage path of a noisy integrate-and-fire (IF) neuron, given observations of the neuron's superthreshold spiking activity. This optimal voltage path satisfies a second-order ordinary differential (Euler-Lagrange) equation which may be solved analytically in a number of special cases, and which may be solved numerically in general via a simple "shooting" algorithm. Our results are applicable for both linear and nonlinear subthreshold dynamics, and in certain cases may be extended to correlated subthreshold noise sources. We also show how this optimal voltage may be used to obtain approximations to (1) the likelihood that an IF cell with a given set of parameters was responsible for the observed spike train; and (2) the instantaneous firing rate and interspike interval distribution of a given noisy IF cell. The latter probability approximations are based on the classical Freidlin-Wentzell theory of large deviations principles for stochastic differential equations. We close by comparing this most likely voltage path to the true observed subthreshold voltage trace in a case when intracellular voltage recordings are available in vitro.
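
    The shooting idea can be sketched on a linear stand-in for the Euler-Lagrange boundary value problem: integrate from reset with a trial initial slope and adjust the slope until the path reaches threshold at the observed spike time. The dynamics below (V'' = V, unit threshold) are an assumed toy, not the paper's model.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    T = 2.0   # time of the observed spike (assumed)

    def endpoint(v0prime):
        """Mismatch at threshold for a path launched with slope v0prime."""
        sol = solve_ivp(lambda t, y: [y[1], y[0]], (0.0, T),
                        [0.0, v0prime], dense_output=True, rtol=1e-8)
        return sol.sol(T)[0] - 1.0

    slope = brentq(endpoint, 0.0, 5.0)   # root: path hits V(T) = 1 exactly
    print(f"shooting slope {slope:.4f}, exact {1 / np.sinh(T):.4f}")
    ```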

  18. Variant Alleles, Triallelic Patterns, and Point Mutations Observed in Nuclear Short Tandem Repeat Typing of Populations in Bosnia and Serbia

    PubMed Central

    Huel, René L. M.; Bašić, Lara; Madacki-Todorović, Kamelija; Smajlović, Lejla; Eminović, Izet; Berbić, Irfan; Miloš, Ana; Parsons, Thomas J.

    2007-01-01

    Aim To present a compendium of off-ladder alleles and other genotyping irregularities relating to rare/unexpected population genetic variation, observed in a large short tandem repeat (STR) database from Bosnia and Serbia. Methods DNA was extracted from blood stain cards relating to reference samples from a population of 32 800 individuals from Bosnia and Serbia, and typed using Promega’s PowerPlex®16 STR kit. Results Thirty-one distinct off-ladder alleles were observed in 10 of the 15 STR loci amplified with the PowerPlex®16 STR kit. Of these 31 alleles, 3 have not been previously reported. Furthermore, 16 instances of triallelic patterns were observed in 9 of the 15 loci. Primer binding site mismatches that affected amplification were observed in two loci, D5S818 and D8S1179. Conclusion Instances of deviations from the manufacturer’s allelic ladders should be expected, and caution should be taken to properly designate the correct alleles in large DNA databases. Particular care should be taken in kinship matching or paternity cases, as incorrect designation of any of these deviations from allelic ladders could lead to false exclusions. PMID:17696304

  19. Beam deviation method as a diagnostic tool for the plasma focus.

    PubMed

    Schmidt, H; Rückle, B

    1978-04-15

    The application of an optical method for density measurements in cylindrical plasmas is described. The angular deviation of a probing light beam sent through a plasma is proportional to the maximum of the density in the plasma column. The deviation does not depend on the plasma dimensions; however, it is influenced to a certain degree by the density profile. The method is successfully applied to the investigation of a dense plasma focus with a time resolution of 2 nsec and a spatial resolution (in axial direction) of 2 mm.

  20. SU-C-207B-04: Automated Segmentation of Pectoral Muscle in MR Images of Dense Breasts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verburg, E; Waard, SN de; Veldhuis, WB

    Purpose: To develop and evaluate a fully automated method for segmentation of the pectoral muscle boundary in magnetic resonance imaging (MRI) of dense breasts. Methods: Segmentation of the pectoral muscle is an important part of automatic breast image analysis methods. Current methods for segmenting the pectoral muscle in breast MRI have difficulties delineating the muscle border correctly in breasts with a large proportion of fibroglandular tissue (i.e., dense breasts). Hence, an automated method based on dynamic programming was developed, incorporating heuristics aimed at shape, location and gradient features. To assess the method, the pectoral muscle was segmented in 91 randomly selected participants (mean age 56.6 years, range 49.5–75.2 years) from a large MRI screening trial in women with dense breasts (ACR BI-RADS category 4). Each MR dataset consisted of 178 or 179 T1-weighted images with voxel size 0.64 × 0.64 × 1.00 mm3. All images (n=16,287) were reviewed and scored by a radiologist. In contrast to volume overlap coefficients, such as DICE, the radiologist detected deviations in the segmented muscle border and determined whether the result would impact the ability to accurately determine the volume of fibroglandular tissue and the detection of breast lesions. Results: According to the radiologist’s scores, 95.5% of the slices did not mask breast tissue in such a way that it could affect the detection of breast lesions or volume measurements. In 13.1% of the slices a deviation in the segmented muscle border was present which would not impact breast lesion detection. In 70 datasets (78%) at least 95% of the slices were segmented in such a way that detection of breast lesions would not be affected, and in 60 datasets (66%) this was 100%. Conclusion: Dynamic programming with dedicated heuristics shows promising potential to segment the pectoral muscle in women with dense breasts.

  1. Anomaly detection in reconstructed quantum states using a machine-learning technique

    NASA Astrophysics Data System (ADS)

    Hara, Satoshi; Ono, Takafumi; Okamoto, Ryo; Washio, Takashi; Takeuchi, Shigeki

    2014-02-01

    The accurate detection of small deviations in given density matrices is important for quantum information processing. Here we propose a method based on the concept of data mining. We demonstrate that the proposed method can more accurately detect small erroneous deviations in reconstructed density matrices, which contain intrinsic fluctuations due to the limited number of samples, than a naive method of checking the trace distance from the average of the given density matrices. This method has the potential to be a key tool in broad areas of physics where the detection of small deviations of quantum states reconstructed using a limited number of samples is essential.
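
    The naive baseline mentioned above, trace distance from the ensemble average, takes only a few lines; a sketch with invented noisy qubit states, one of which carries a genuine error.

    ```python
    import numpy as np

    def trace_distance(a, b):
        """T(a, b) = 0.5 * ||a - b||_1 for Hermitian matrices."""
        return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(a - b)))

    rng = np.random.default_rng(5)

    def noisy_state(p):
        """Qubit state near diag(p, 1-p) with small statistical noise."""
        rho = np.diag([p, 1.0 - p]).astype(complex)
        h = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
        rho = rho + 0.01 * (h + h.conj().T)
        return rho / np.trace(rho).real

    states = [noisy_state(0.95) for _ in range(50)] + [noisy_state(0.80)]
    avg = sum(states) / len(states)
    d = [trace_distance(s, avg) for s in states]
    print("most deviant state index:", int(np.argmax(d)))   # the 0.80 state
    ```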

  2. Probabilistic Analysis and Density Parameter Estimation Within Nessus

    NASA Astrophysics Data System (ADS)

    Godines, Cody R.; Manteufel, Randall D.

    2002-12-01

    This NASA educational grant has the goal of promoting probabilistic analysis methods to undergraduate and graduate UTSA engineering students. Two undergraduate-level and one graduate-level course were offered at UTSA providing a large number of students exposure to and experience in probabilistic techniques. The grant provided two research engineers from Southwest Research Institute the opportunity to teach these courses at UTSA, thereby exposing a large number of students to practical applications of probabilistic methods and state-of-the-art computational methods. In classroom activities, students were introduced to the NESSUS computer program, which embodies many algorithms in probabilistic simulation and reliability analysis. Because the NESSUS program is used at UTSA in both student research projects and selected courses, a student version of a NESSUS manual has been revised and improved, with additional example problems being added to expand the scope of the example application problems. This report documents two research accomplishments in the integration of a new sampling algorithm into NESSUS and in the testing of the new algorithm. The new Latin Hypercube Sampling (LHS) subroutines use the latest NESSUS input file format and specific files for writing output. The LHS subroutines are called out early in the program so that no unnecessary calculations are performed. Proper correlation between sets of multidimensional coordinates can be obtained by using NESSUS' LHS capabilities. Finally, two types of correlation are written to the appropriate output file. The program enhancement was tested by repeatedly estimating the mean, standard deviation, and 99th percentile of four different responses using Monte Carlo (MC) and LHS. These test cases, put forth by the Society of Automotive Engineers, are used to compare probabilistic methods. For all test cases, it is shown that LHS has a lower estimation error than MC when used to estimate the mean, standard deviation, and 99th percentile of the four responses at the 50 percent confidence level and using the same number of response evaluations for each method. In addition, LHS requires fewer calculations than MC in order to be 99.7 percent confident that a single mean, standard deviation, or 99th percentile estimate will be within at most 3 percent of the true value of each parameter. Again, this is shown for all of the test cases studied. For that reason it can be said that NESSUS is an important reliability tool that has a variety of sound probabilistic methods a user can employ; furthermore, the newest LHS module is a valuable new enhancement of the program.
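
    The LHS-versus-MC comparison reproduces in miniature with SciPy's qmc module; the four-input response below is an invented stand-in for the SAE test cases.

    ```python
    import numpy as np
    from scipy.stats import qmc

    def response(u):                     # u in [0, 1)^4 -> nonlinear output
        x = -np.log(1.0 - u)             # transform to exponential variates
        return x[:, 0] + 2 * x[:, 1] * x[:, 2] + np.sqrt(x[:, 3])

    rng = np.random.default_rng(6)
    n, trials = 128, 200
    mc_means, lhs_means = [], []
    for t in range(trials):
        mc_means.append(response(rng.random((n, 4))).mean())
        lhs = qmc.LatinHypercube(d=4, seed=t).random(n)
        lhs_means.append(response(lhs).mean())

    # LHS stratification should shrink the spread of the mean estimates.
    print(f"std of MC mean estimates : {np.std(mc_means):.4f}")
    print(f"std of LHS mean estimates: {np.std(lhs_means):.4f}")
    ```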

  4. Research on frequency control strategy of interconnected region based on fuzzy PID

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Li, Chunlan

    2018-05-01

    In order to improve the frequency control performance of an interconnected power grid, and to overcome the poor robustness and slow adjustment of traditional regulation, this paper puts forward a frequency control method based on fuzzy PID. The method takes the frequency deviation and tie-line deviation of each area as the control objectives, takes the regional frequency deviation and its rate of change as inputs, and uses fuzzy mathematics theory to adjust the PID control parameters online. A regional frequency control model of complementary hydro-thermal power generation is established in MATLAB, the regional frequency control strategy is given, and three control modes (TBC-FTC, FTC-FTC, FFC-FTC) are simulated and analyzed. The simulation and experimental results show that this method has better control performance than traditional regional frequency regulation.

  5. Prediction of peak response values of structures with and without TMD subjected to random pedestrian flows

    NASA Astrophysics Data System (ADS)

    Lievens, Klaus; Van Nimmen, Katrien; Lombaert, Geert; De Roeck, Guido; Van den Broeck, Peter

    2016-09-01

    In civil engineering and architecture, the availability of high-strength materials and advanced calculation techniques enables the construction of slender footbridges that are generally highly sensitive to human-induced excitation. Due to the inherently random character of human-induced walking loads, variability in the pedestrian characteristics must be considered in the response simulation. To assess the vibration serviceability of the footbridge, the statistics of the stochastic dynamic response are evaluated by considering the instantaneous peak responses in a time range. A large number of time windows are therefore needed to calculate the mean value and standard deviation of the instantaneous peak values. An alternative method evaluates these statistics from the standard deviation of the response and a characteristic frequency, as proposed in wind engineering applications. In this paper, the accuracy of this method is evaluated for human-induced vibrations. The methods are first compared for a group of pedestrians crossing a lightly damped footbridge; small differences in the instantaneous peak value were found with the method using second-order statistics. Afterwards, a TMD tuned to reduce the peak acceleration to a comfort value was added to the structure. The comparison between both methods is made and the accuracy is verified. The TMD parameters are found to be sufficiently well tuned, and good agreement between the two methods is found for the estimation of the instantaneous peak response of a strongly damped structure.
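
    The wind-engineering shortcut referred to above is the peak-factor formula, which predicts the expected extreme from the response standard deviation and a characteristic frequency. A sketch checking Davenport's factor against simulated narrow-band responses; the footbridge mode and all other values are assumed.

    ```python
    import numpy as np
    from scipy.signal import lfilter

    def peak_factor(nu, T):
        """Davenport peak factor for a zero-mean Gaussian process."""
        b = np.sqrt(2.0 * np.log(nu * T))
        return b + 0.5772 / b

    # Narrow-band toy response: white noise through a lightly damped 2 Hz
    # resonator, discretized as an AR(2) filter (pole placement).
    fs, T, f0, zeta = 200.0, 600.0, 2.0, 0.02
    w0, dt = 2 * np.pi * f0, 1.0 / fs
    r = np.exp(-zeta * w0 * dt)
    a = [1.0, -2 * r * np.cos(w0 * dt * np.sqrt(1 - zeta**2)), r * r]

    rng = np.random.default_rng(7)
    ratios = []
    for _ in range(30):
        x = lfilter([1.0], a, rng.standard_normal(int(T * fs)))
        predicted = peak_factor(f0, T) * x.std()   # nu ~ resonance frequency
        ratios.append(np.abs(x).max() / predicted)
    print(f"simulated peak / predicted peak: {np.mean(ratios):.2f}")
    ```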

  6. Accuracy of various impression materials and methods for two implant systems: An effect size study.

    PubMed

    Schmidt, Alexander; Häussling, Teresa; Rehmann, Peter; Schaaf, Heidrun; Wöstmann, Bernd

    2018-04-01

    An accurate impression is required for implant treatment. The aim of this in vitro study was to determine the effect size of the impression material/method, implant system and implant angulation on impression transfer precision. An upper jaw model with three BEGO and three Straumann implants (angulations 0°, 15°, 20°) in the left and right maxilla was used as a reference model. One polyether (Impregum Penta) and two polyvinyl siloxanes (Flexitime Monophase/Aquasil Ultra Monophase) were examined with two impression techniques (open and closed tray). A total of 60 impressions were made. A coordinate measurement machine was used to measure the target variables for 3D shift, implant axis inclination and implant axis rotation. All the data were subjected to a four-way ANOVA. The effect size (partial eta-squared, ηp²) was reported. The impression material had a significant influence on the 3D shift and the implant axis inclination deviation (p-values=.000), and both factors had very large effect sizes (3D shift ηp²=.599; implant axis inclination ηp²=.298). Impressions made with polyvinyl siloxane exhibited the highest transfer precision. When the angulation of the implants was larger, more deviations occurred in the implant axis rotational deviation. The implant systems and impression methods showed partially significant variations (p-values=.001-.639) but only very small effect sizes (ηp²=.001-.031). The impression material had the greatest effect size on accuracy in terms of the 3D shift and the implant axis inclination. For multiunit restorations with nonparallel implants, polyvinyl siloxane materials should be considered. In addition, the effect size of a multivariate investigation should be reported. Copyright © 2017 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.

  7. Loss aversion, large deviation preferences and optimal portfolio weights for some classes of return processes

    NASA Astrophysics Data System (ADS)

    Duffy, Ken; Lobunets, Olena; Suhov, Yuri

    2007-05-01

    We propose a model of a loss averse investor who aims to maximize his expected wealth under certain constraints. The constraints are that he avoids, with high probability, incurring an (suitably defined) unacceptable loss. The methodology employed comes from the theory of large deviations. We explore a number of fundamental properties of the model and illustrate its desirable features. We demonstrate its utility by analyzing assets that follow some commonly used financial return processes: Fractional Brownian Motion, Jump Diffusion, Variance Gamma and Truncated Lévy.

  8. Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains

    NASA Astrophysics Data System (ADS)

    Cofré, Rodrigo; Maldonado, Cesar

    2018-01-01

    We consider the maximum entropy Markov chain inference approach to characterize the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. We review large deviations techniques useful in this context to describe properties of accuracy and convergence in terms of sampling size. We use these results to study the statistical fluctuation of correlations, distinguishability and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is explicitly obtained for maximum entropy models of relevance in this field.

  9. Gait analysis in children with cerebral palsy.

    PubMed

    Armand, Stéphane; Decoulon, Geraldo; Bonnefoy-Mazure, Alice

    2016-12-01

    Cerebral palsy (CP) children present complex and heterogeneous motor disorders that cause gait deviations. Clinical gait analysis (CGA) is needed to identify, understand and support the management of gait deviations in CP. CGA assesses a large amount of quantitative data concerning patients' gait characteristics, such as video, kinematics, kinetics, electromyography and plantar pressure data. Common gait deviations in CP can be grouped into the gait patterns of spastic hemiplegia (drop foot, equinus with different knee positions) and spastic diplegia (true equinus, jump, apparent equinus and crouch) to facilitate communication. However, gait deviations in CP tend to be a continuum of deviations rather than well delineated groups. To interpret CGA, it is necessary to link gait deviations to clinical impairments and to distinguish primary gait deviations from compensatory strategies. CGA does not tell us how to treat a CP patient, but can provide objective identification of gait deviations and further the understanding of gait deviations. Numerous treatment options are available to manage gait deviations in CP. Generally, treatments strive to limit secondary deformations, re-establish the lever arm function and preserve muscle strength. Additional roles of CGA are to better understand the effects of treatments on gait deviations. Cite this article: Armand S, Decoulon G, Bonnefoy-Mazure A. Gait analysis in children with cerebral palsy. EFORT Open Rev 2016;1:448-460. DOI: 10.1302/2058-5241.1.000052.

  10. MetaMQAP: a meta-server for the quality assessment of protein models.

    PubMed

    Pawlowski, Marcin; Gajda, Michal J; Matlak, Ryszard; Bujnicki, Janusz M

    2008-09-29

    Computational models of protein structure are usually inaccurate and exhibit significant deviations from the true structure. The utility of models depends on the degree of these deviations. A number of predictive methods have been developed to discriminate between the globally incorrect and approximately correct models. However, only a few methods predict correctness of different parts of computational models. Several Model Quality Assessment Programs (MQAPs) have been developed to detect local inaccuracies in unrefined crystallographic models, but it is not known if they are useful for computational models, which usually exhibit different and much more severe errors. The ability to identify local errors in models was tested for eight MQAPs: VERIFY3D, PROSA, BALA, ANOLEA, PROVE, TUNE, REFINER, PROQRES on 8251 models from the CASP-5 and CASP-6 experiments, by calculating the Spearman's rank correlation coefficients between per-residue scores of these methods and local deviations between C-alpha atoms in the models vs. experimental structures. As a reference, we calculated the value of correlation between the local deviations and trivial features that can be calculated for each residue directly from the models, i.e. solvent accessibility, depth in the structure, and the number of local and non-local neighbours. We found that absolute correlations of scores returned by the MQAPs and local deviations were poor for all methods. In addition, scores of PROQRES and several other MQAPs strongly correlate with 'trivial' features. Therefore, we developed MetaMQAP, a meta-predictor based on a multivariate regression model, which uses scores of the above-mentioned methods, but in which trivial parameters are controlled. MetaMQAP predicts the absolute deviation (in ångströms) of individual C-alpha atoms between the model and the unknown true structure as well as global deviations (expressed as root mean square deviation and GDT_TS scores). Local model accuracy predicted by MetaMQAP shows an impressive correlation coefficient of 0.7 with true deviations from native structures, a significant improvement over all constituent primary MQAP scores. The global MetaMQAP score is correlated with model GDT_TS on the level of 0.89. Finally, we compared our method with the MQAPs that scored best in the 7th edition of CASP, using CASP7 server models (not included in the MetaMQAP training set) as the test data. In our benchmark, MetaMQAP is outperformed only by PCONS6 and method QA_556, methods that require comparison of multiple alternative models and score each of them depending on its similarity to other models. MetaMQAP is, however, the best among methods capable of evaluating just single models. We implemented MetaMQAP as a web server available for free use by all academic users at the URL https://genesilico.pl/toolkit/

  11. SU-E-J-22: A Feasibility Study On KV-Based Whole Breast Radiation Patient Setup

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Q; Zhang, M; Yue, N

    Purpose: In-room kilovoltage x-ray (kV) imaging provides higher contrast than megavoltage (MV) imaging, with faster acquisition than on-board cone-beam computed tomography (CBCT), thus improving patient setup accuracy and efficiency. In this study we evaluated the clinical feasibility of utilizing kV imaging for whole breast radiation patient setup. Methods: For six breast cancer patients with whole breast treatment plans using two opposed tangential fields, MV-based patient setup was conducted by aligning patient markers with in-room lasers and MV portal images. Beam's-eye-view kV images were acquired using the Varian OBI system after the setup process. In-house software was developed to transfer MLC block information overlaid onto the kV images to show the field shape for verification. The kV-based patient digital shift was derived by rigid registration between the kV image and the digitally reconstructed radiograph (DRR) to align the bony structures. The digital shift between kV-based and MV-based setup was defined as the setup deviation. Results: Six sets of kV images were acquired for the breast patients. The mean setup deviation was 2.3 mm, 2.2 mm and 1.8 mm in the anterior-posterior, superior-inferior and left-right directions, respectively. The average setup deviation magnitude was 4.3±1.7 mm for the six patients. Patients with large breasts had larger setup deviations (4.4–6.2 mm). There was no strong correlation between the MV-based shift and the setup deviation. Conclusion: A preliminary clinical workflow for kV-based whole breast radiation setup was established and tested. We observed setup deviations of magnitude below 5 mm. With the benefits of higher contrast and MLC blocks overlaid on the images for treatment field verification, it is feasible to use kV imaging for breast patient setup.

  12. Modeling protein conformational changes by iterative fitting of distance constraints using reoriented normal modes.

    PubMed

    Zheng, Wenjun; Brooks, Bernard R

    2006-06-15

    Recently we have developed a normal-modes-based algorithm that predicts the direction of protein conformational changes given the initial state crystal structure together with a small number of pairwise distance constraints for the end state. Here we significantly extend this method to accurately model both the direction and amplitude of protein conformational changes. The new protocol implements a multistep search in the conformational space that is driven by iteratively minimizing the error of fitting the given distance constraints and simultaneously enforcing the restraint of low elastic energy. At each step, an incremental structural displacement is computed as a linear combination of the lowest 10 normal modes derived from an elastic network model, whose eigenvectors are reoriented to correct for the distortions caused by the structural displacements in the previous steps. We test this method on a list of 16 pairs of protein structures for which relatively large conformational changes are observed (root mean square deviation >3 angstroms), using up to 10 pairwise distance constraints selected by a fluctuation analysis of the initial state structures. This method has achieved near-optimal performance in almost all cases, and in many cases the final structural models lie within a root mean square deviation of 1–2 angstroms from the native end state structures.

  13. Image characterization metrics for muon tomography

    NASA Astrophysics Data System (ADS)

    Luo, Weidong; Lehovich, Andre; Anashkin, Edward; Bai, Chuanyong; Kindem, Joel; Sossong, Michael; Steiger, Matt

    2014-05-01

    Muon tomography uses naturally occurring cosmic rays to detect nuclear threats in containers. Currently there are no systematic image characterization metrics for muon tomography. We propose a set of image characterization methods to quantify the imaging performance of muon tomography. These methods include tests of spatial resolution, uniformity, contrast, signal-to-noise ratio (SNR) and vertical smearing. Simulated phantom data and analysis methods were developed to evaluate metric applicability. Spatial resolution was determined as the FWHM of the point spread functions along the X, Y and Z axes for 2.5 cm tungsten cubes. Uniformity was measured by drawing a volume of interest (VOI) within a large water phantom and defined as the standard deviation of voxel values divided by the mean voxel value. Contrast was defined as the peak signals of a set of tungsten cubes divided by the mean voxel value of the water background. SNR was defined as the peak signals of the cubes divided by the standard deviation (noise) of the water background. Vertical smearing, i.e. vertical thickness blurring along the zenith axis for a set of 2 cm thick tungsten plates, was defined as the FWHM of the vertical spread function for the plate. These image metrics provided a useful tool to quantify the basic imaging properties for muon tomography.
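
    A minimal sketch of how a few of these metrics could be computed from a reconstructed volume; the toy array stands in for real muon-tomography data and the ROI choices are arbitrary.

    ```python
    import numpy as np

    def fwhm(profile, spacing=1.0):
        """Full width at half maximum of a 1-D spread profile."""
        above = np.where(profile >= profile.max() / 2.0)[0]
        return (above[-1] - above[0]) * spacing

    rng = np.random.default_rng(8)
    vol = rng.normal(1.0, 0.05, (64, 64, 64))   # water background (arb. units)
    vol[30:34, 30:34, 30:34] += 4.0             # bright tungsten-like cube

    bg = vol[:16, :16, :16]                     # background ROI
    print(f"uniformity: {bg.std() / bg.mean():.3f}")
    print(f"contrast  : {vol.max() / bg.mean():.1f}")
    print(f"SNR       : {vol.max() / bg.std():.1f}")

    profile = np.clip(vol[32, 32, :] - bg.mean(), 0.0, None)
    print(f"FWHM along z: {fwhm(profile):.1f} voxels")
    ```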

  14. Integral equation methods for computing likelihoods and their derivatives in the stochastic integrate-and-fire model.

    PubMed

    Paninski, Liam; Haith, Adrian; Szirtes, Gabor

    2008-02-01

    We recently introduced likelihood-based methods for fitting stochastic integrate-and-fire models to spike train data. The key component of this method involves the likelihood that the model will emit a spike at a given time t. Computing this likelihood is equivalent to computing a Markov first passage time density (the probability that the model voltage crosses threshold for the first time at time t). Here we detail an improved method for computing this likelihood, based on solving a certain integral equation. This integral equation method has several advantages over the techniques discussed in our previous work: in particular, the new method has fewer free parameters and is easily differentiable (for gradient computations). The new method is also easily adaptable for the case in which the model conductance, not just the input current, is time-varying. Finally, we describe how to incorporate large deviations approximations to very small likelihoods.

  15. Non-invasive body temperature measurement of wild chimpanzees using fecal temperature decline.

    PubMed

    Jensen, Siv Aina; Mundry, Roger; Nunn, Charles L; Boesch, Christophe; Leendertz, Fabian H

    2009-04-01

    New methods are required to increase our understanding of pathologic processes in wild mammals. We developed a noninvasive field method to estimate the body temperature of wild living chimpanzees habituated to humans, based on statistically fitting temperature decline of feces after defecation. The method was established with the use of control measures of human rectal temperature and subsequent changes in fecal temperature over time. The method was then applied to temperature data collected from wild chimpanzee feces. In humans, we found good correspondence between the temperature estimated by the method and the actual rectal temperature that was measured (maximum deviation 0.22 °C). The method was successfully applied and the average estimated temperature of the chimpanzees was 37.2 °C. This simple-to-use field method reliably estimates the body temperature of wild chimpanzees and probably also other large mammals.
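
    One simple way to picture the approach is to fit a Newtonian cooling curve to serial fecal temperature readings and extrapolate back to the moment of defecation, as sketched below with made-up data; the exponential model is an assumption and may differ from the statistical fit used in the study.

      import numpy as np
      from scipy.optimize import curve_fit

      def cooling(t, T_env, T0, k):
          # Newtonian cooling toward ambient temperature T_env
          return T_env + (T0 - T_env) * np.exp(-k * t)

      t = np.array([1.0, 2.0, 3.0, 5.0, 8.0, 12.0])        # minutes after defecation
      T = np.array([36.1, 35.4, 34.8, 33.7, 32.3, 30.9])   # degrees C (made-up data)

      (T_env, T0, k), _ = curve_fit(cooling, t, T, p0=(25.0, 37.0, 0.1))
      print(f"estimated body temperature ~ {T0:.1f} C")    # intercept at t = 0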

  16. SU-F-I-47: Optimizing Protocols for Image Quality and Dose in Abdominal CT of Large Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, L; Yester, M

    Purpose: Newer CT scanners are able to use scout views to adjust mA throughout the scan in order to achieve a given noise level. However, given constraints of radiologist preferences for kVp and rotation time, it may not be possible to achieve an acceptable noise level for large patients. A study was initiated to determine for which patients kVp and/or rotation time should be changed in order to achieve acceptable image quality. Methods: Patient scans were reviewed on two new Emergency Department scanners (Philips iCT) to identify patients over a large range of sizes. These iCTs were set with a limit of 500 mA to safeguard against a failure that might cause a CT scan to be (incorrectly) obtained at too-high mA. Scout views of these scans were assessed for both AP and LAT patient width and AP and LAT standard deviation in an ROI over the liver. Effective diameter and the product of the scout standard deviations over the liver were both studied as possible metrics for identifying patients who would need kVp and/or rotation time changed. The mA used for the liver in the CT was compared to these metrics for those patients whose CT scans showed acceptable image quality. Results: Both effective diameter and the product of the scout standard deviations over the liver result in similar predictions for which patients will require the kVp and/or rotation time to be changed to achieve an optimal combination of image quality and dose. Conclusion: We describe two mechanisms by which CT technologists can determine, from scout characteristics, what kVp, mA limit, and rotation time to use when DoseRight with our physicians' preferred kVp and rotation time will not yield adequate image quality.

  17. Mixed Model Methods for Genomic Prediction and Variance Component Estimation of Additive and Dominance Effects Using SNP Markers

    PubMed Central

    Da, Yang; Wang, Chunkao; Wang, Shengwen; Hu, Guo

    2014-01-01

    We established a genomic model of quantitative trait with genomic additive and dominance relationships that parallels the traditional quantitative genetics model, which partitions a genotypic value as breeding value plus dominance deviation and calculates additive and dominance relationships using pedigree information. Based on this genomic model, two sets of computationally complementary but mathematically identical mixed model methods were developed for genomic best linear unbiased prediction (GBLUP) and genomic restricted maximum likelihood estimation (GREML) of additive and dominance effects using SNP markers. These two sets are referred to as the CE and QM sets, where the CE set was designed for large numbers of markers and the QM set was designed for large numbers of individuals. GBLUP and associated accuracy formulations for individuals in training and validation data sets were derived for breeding values, dominance deviations and genotypic values. A simulation study showed that GREML and GBLUP generally were able to capture small additive and dominance effects that each accounted for 0.00005-0.0003 of the phenotypic variance and that GREML was able to differentiate true additive and dominance heritability levels. GBLUP of the total genetic value as the summation of additive and dominance effects had higher prediction accuracy than either additive or dominance GBLUP, causal variants had the highest accuracy of GREML and GBLUP, and predicted accuracies were in agreement with observed accuracies. Genomic additive and dominance relationship matrices using SNP markers were consistent with theoretical expectations. The GREML and GBLUP methods can be an effective tool for assessing the type and magnitude of genetic effects affecting a phenotype and for predicting the total genetic value at the whole genome level. PMID:24498162

  18. Mixed model methods for genomic prediction and variance component estimation of additive and dominance effects using SNP markers.

    PubMed

    Da, Yang; Wang, Chunkao; Wang, Shengwen; Hu, Guo

    2014-01-01

    We established a genomic model of quantitative trait with genomic additive and dominance relationships that parallels the traditional quantitative genetics model, which partitions a genotypic value as breeding value plus dominance deviation and calculates additive and dominance relationships using pedigree information. Based on this genomic model, two sets of computationally complementary but mathematically identical mixed model methods were developed for genomic best linear unbiased prediction (GBLUP) and genomic restricted maximum likelihood estimation (GREML) of additive and dominance effects using SNP markers. These two sets are referred to as the CE and QM sets, where the CE set was designed for large numbers of markers and the QM set was designed for large numbers of individuals. GBLUP and associated accuracy formulations for individuals in training and validation data sets were derived for breeding values, dominance deviations and genotypic values. Simulation study showed that GREML and GBLUP generally were able to capture small additive and dominance effects that each accounted for 0.00005-0.0003 of the phenotypic variance and GREML was able to differentiate true additive and dominance heritability levels. GBLUP of the total genetic value as the summation of additive and dominance effects had higher prediction accuracy than either additive or dominance GBLUP, causal variants had the highest accuracy of GREML and GBLUP, and predicted accuracies were in agreement with observed accuracies. Genomic additive and dominance relationship matrices using SNP markers were consistent with theoretical expectations. The GREML and GBLUP methods can be an effective tool for assessing the type and magnitude of genetic effects affecting a phenotype and for predicting the total genetic value at the whole genome level.

  19. SU-E-T-546: Use of Implant Volume for Quality Assurance of Low Dose Rate Brachytherapy Treatment Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilkinson, D; Kolar, M

    Purpose: To analyze the application of implant volume (V100) data as a method for a global check of low dose rate (LDR) brachytherapy plans. Methods: Treatment plans for 335 consecutive patients undergoing permanent seed implants for prostate cancer and for 113 patients treated with plaque therapy for ocular melanoma were analyzed. Plaques used were 54 COMS (10 to 20 mm, notched and regular) and 59 Eye Physics EP917s with variable loading. Plots of treatment time × implanted activity per unit dose versus V100^0.667 were made. V100 values were obtained using dose volume histograms calculated by the treatment planning systems (Variseed 8.02 and Plaque Simulator 5.4). Four different physicists were involved in planning the prostate seed cases; two physicists for the eye plaques. Results: Since the time and dose for the prostate cases did not vary, a plot of implanted activity vs V100^0.667 was made. A linear fit with no intercept had an r² = 0.978; more than 94% of the actual activities fell within 5% of the activities calculated from the linear fit. The greatest deviations were in cases where the implant volumes were large (>100 cc). Both COMS and EP917 plaque linear fits were good (r² = 0.967 and 0.957); the largest deviations were seen for large volumes. Conclusions: The method outlined here is effective for checking planning consistency and quality assurance of two types of LDR brachytherapy treatment plans (temporary and permanent). A spreadsheet for the calculations enables a quick check of the plan in situations where time is short (e.g. OR-based prostate planning).
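
    A minimal sketch of this global check, assuming placeholder plan data: fit implanted activity against V100^0.667 with zero intercept and flag plans that deviate from the fit by more than 5%.

      import numpy as np

      v100 = np.array([28.0, 35.2, 41.7, 55.3])       # cc, from the DVH (placeholders)
      activity = np.array([520., 610., 688., 830.])   # implanted activity (placeholders)

      x = v100 ** 0.667
      slope = (x @ activity) / (x @ x)        # least-squares fit through the origin
      outliers = np.abs(activity - slope * x) / (slope * x) > 0.05
      print(slope, outliers)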

  20. Explicit Computations of Instantons and Large Deviations in Beta-Plane Turbulence

    NASA Astrophysics Data System (ADS)

    Laurie, J.; Bouchet, F.; Zaboronski, O.

    2012-12-01

    We use a path integral formalism and instanton theory in order to make explicit analytical predictions about large deviations and rare events in beta-plane turbulence. The path integral formalism is a concise way to get large deviation results in dynamical systems forced by random noise. In the simplest cases, it leads to the same results as the Freidlin-Wentzell theory, but it has a wider range of applicability. This approach is, however, usually extremely limited due to the complexity of the theoretical problems. As a consequence it provides explicit results in a fairly limited number of models, often extremely simple ones with only a few degrees of freedom. Few exceptions exist outside the realm of equilibrium statistical physics. We will show that the barotropic model of beta-plane turbulence is one of these non-equilibrium exceptions. We describe sets of explicit solutions to the instanton equation, and precise derivations of the action functional (or large deviation rate function). The reason why such exact computations are possible is related to the existence of hidden symmetries and conservation laws for the instanton dynamics. We outline several applications of this approach. For instance, we compute explicitly the very low probability of observing flows with an energy much larger or smaller than the typical one. Moreover, we consider regimes for which the system has multiple attractors (corresponding to different numbers of alternating jets), and discuss the computation of transition probabilities between two such attractors. These extremely rare events are of the utmost importance as the dynamics undergo qualitative macroscopic changes during such transitions.

  1. Comparative analysis of the processing accuracy of high strength metal sheets by AWJ, laser and plasma

    NASA Astrophysics Data System (ADS)

    Radu, M. C.; Schnakovszky, C.; Herghelegiu, E.; Tampu, N. C.; Zichil, V.

    2016-08-01

    Experimental tests were carried out on two high-strength steel materials (Ramor 400 and Ramor 550). Quantification of the dimensional accuracy was achieved by measuring the deviations of some geometric parameters of the part (two lengths and two radii). It was found that in the case of Ramor 400 steel, at the jet inlet, the deviations of the part radii are quite small for all three analysed processes; for the linear dimensions, in contrast, the deviations are small only in the case of laser cutting. At the jet outlet, the deviations increased slightly compared to those obtained at the jet inlet for both materials and for all three processes. For Ramor 550 steel, at the jet inlet the deviations of the part radii are very small in the case of AWJ and laser cutting but larger in the case of plasma cutting. At the jet outlet, the deviations of the part radii are very small for all processes; for the linear dimensions, very small deviations were obtained only in the case of laser processing, the other two processes leading to very large deviations.

  2. Study on the radial vibration and acoustic field of an isotropic circular ring radiator.

    PubMed

    Lin, Shuyu; Xu, Long

    2012-01-01

    Based on the exact analytical theory, the radial vibration of an isotropic circular ring is studied and its electro-mechanical equivalent circuit is obtained. By means of the equivalent circuit model, the resonance frequency equation is derived, and the relationship between the radial resonance frequency, the radial displacement amplitude magnification, the geometrical dimensions and the material properties is analyzed. For comparison, a numerical method is used to simulate the radial vibration of isotropic circular rings. The resonance frequency and the radial vibrational displacement distribution are obtained, and the radial radiation acoustic field of the circular ring in radial vibration is simulated. It is shown that the radial resonance frequencies from the analytical and numerical methods are in good agreement when the height is much less than the radius. When the height becomes large relative to the radius, the frequency deviation between the two methods becomes large. The reason is that the exact analytical theory is limited to thin circular rings whose height is much less than the radius. Copyright © 2011 Elsevier B.V. All rights reserved.

  3. Large Fluctuations for Spatial Diffusion of Cold Atoms

    NASA Astrophysics Data System (ADS)

    Aghion, Erez; Kessler, David A.; Barkai, Eli

    2017-06-01

    We use a new approach to study the large fluctuations of a heavy-tailed system, where the standard large-deviations principle does not apply. Large-deviations theory deals with tails of probability distributions and the rare events of random processes, for example, spreading packets of particles. Mathematically, it concerns the exponential falloff of the density of thin-tailed systems. Here we investigate the spatial density P_t(x) of laser-cooled atoms, where at intermediate length scales the shape is fat tailed. We focus on the rare events beyond this range, which dominate important statistical properties of the system. Through a novel friction mechanism induced by the laser fields, the density is explored with the recently proposed non-normalized infinite-covariant density approach. The small and large fluctuations give rise to a bifractal nature of the spreading packet. We derive general relations which extend our theory to a class of systems with multifractal moments.

  4. Spectral Radiance of a Large-Area Integrating Sphere Source

    PubMed Central

    Walker, James H.; Thompson, Ambler

    1995-01-01

    The radiance and irradiance calibration of large field-of-view scanning and imaging radiometers for remote sensing and surveillance applications has resulted in the development of novel calibration techniques. One of these techniques is the employment of large-area integrating sphere sources as radiance or irradiance secondary standards. To assist the National Aeronautics and Space Administration's space-based ozone measurement program, the spectral radiance of a commercially available large-area internally illuminated integrating sphere source was characterized in the wavelength region from 230 nm to 400 nm at the National Institute of Standards and Technology. Spectral radiance determinations and spatial mappings of the source indicate that carefully designed large-area integrating sphere sources can be measured with a 1 % to 2 % expanded uncertainty (two standard deviation estimate) in the near ultraviolet with spatial nonuniformities of 0.6 % or smaller across a 20 cm diameter exit aperture. A method is proposed for the calculation of the final radiance uncertainties of the source which includes the field of view of the instrument being calibrated. PMID:29151725

  5. A fast, automated, polynomial-based cosmic ray spike-removal method for the high-throughput processing of Raman spectra.

    PubMed

    Schulze, H Georg; Turner, Robin F B

    2013-04-01

    Raman spectra often contain undesirable, randomly positioned, intense, narrow-bandwidth, positive, unidirectional spectral features generated when cosmic rays strike charge-coupled device cameras. These must be removed prior to analysis, but doing so manually is not feasible for large data sets. We developed a quick, simple, effective, semi-automated procedure to remove cosmic ray spikes from spectral data sets that contain large numbers of relatively homogenous spectra. Although some inhomogeneous spectral data sets can be accommodated--it requires replacing excessively modified spectra with the originals and removing their spikes with a median filter instead--caution is advised when processing such data sets. In addition, the technique is suitable for interpolating missing spectra or replacing aberrant spectra with good spectral estimates. The method is applied to baseline-flattened spectra and relies on fitting a third-order (or higher) polynomial through all the spectra at every wavenumber. Pixel intensities in excess of a threshold of 3× the noise standard deviation above the fit are reduced to the threshold level. Because only two parameters (with readily specified default values) might require further adjustment, the method is easily implemented for semi-automated processing of large spectral sets.
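
    A bare-bones numpy version of the spike filter described above might look as follows; it fits a cubic through the stack of spectra at every wavenumber and clips intensities more than 3 noise standard deviations above the fit, with a single global noise estimate standing in as an assumption for whatever local estimate the authors use.

      import numpy as np

      def despike(spectra, order=3, nsigma=3.0):
          # spectra: (n_spectra, n_wavenumbers) baseline-flattened spectra
          n = spectra.shape[0]
          idx = np.arange(n)
          coeffs = np.polyfit(idx, spectra, order)   # one polynomial per wavenumber
          fit = np.vander(idx, order + 1) @ coeffs
          noise_std = (spectra - fit).std()          # global noise estimate
          threshold = fit + nsigma * noise_std
          out = spectra.copy()
          out[out > threshold] = threshold[out > threshold]
          return out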

  6. Large-deviation joint statistics of the finite-time Lyapunov spectrum in isotropic turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Perry L., E-mail: pjohns86@jhu.edu; Meneveau, Charles

    2015-08-15

    One of the hallmarks of turbulent flows is the chaotic behavior of fluid particle paths with exponentially growing separation among them while their distance does not exceed the viscous range. The maximal (positive) Lyapunov exponent represents the average strength of the exponential growth rate, while fluctuations in the rate of growth are characterized by the finite-time Lyapunov exponents (FTLEs). In the last decade or so, the notion of Lagrangian coherent structures (which are often computed using FTLEs) has gained attention as a tool for visualizing coherent trajectory patterns in a flow and distinguishing regions of the flow with different mixing properties. A quantitative statistical characterization of FTLEs can be accomplished using the statistical theory of large deviations, based on the so-called Cramér function. To obtain the Cramér function from data, we use both the method based on measuring moments and measuring histograms and introduce a finite-size correction to the histogram-based method. We generalize the existing univariate formalism to the joint distributions of the two FTLEs needed to fully specify the Lyapunov spectrum in 3D flows. The joint Cramér function of turbulence is measured from two direct numerical simulation datasets of isotropic turbulence. Results are compared with joint statistics of FTLEs computed using only the symmetric part of the velocity gradient tensor, as well as with joint statistics of instantaneous strain-rate eigenvalues. When using only the strain contribution of the velocity gradient, the maximal FTLE nearly doubles in magnitude, highlighting the role of rotation in de-correlating the fluid deformations along particle paths. We also extend the large-deviation theory to study the statistics of the ratio of FTLEs. The most likely ratio of the FTLEs λ₁ : λ₂ : λ₃ is shown to be about 4:1:−5, compared to about 8:3:−11 when using only the strain-rate tensor for calculating fluid volume deformations. The results serve to characterize the fundamental statistical and geometric structure of turbulence at small scales including cumulative, time integrated effects. These are important for deformable particles such as droplets and polymers advected by turbulence.
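
    The histogram route to the Cramér function can be sketched in a few lines: since P_t(γ) ~ exp(-t S(γ)) for an FTLE γ measured over time t, the rate function follows from the log-histogram up to an additive constant. The samples below are synthetic stand-ins for FTLEs measured from simulation data, and the finite-size correction introduced in the paper is omitted.

      import numpy as np

      rng = np.random.default_rng(0)
      t_int = 10.0
      ftle = rng.normal(0.15, 0.05, size=100_000)    # placeholder FTLE samples

      hist, edges = np.histogram(ftle, bins=60, density=True)
      centers = 0.5 * (edges[1:] + edges[:-1])
      valid = hist > 0
      S = -np.log(hist[valid]) / t_int               # Cramér function estimate
      S -= S.min()                                   # fix the additive constant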

  7. Estimating the effects of harmonic voltage fluctuations on the temperature rise of squirrel-cage motors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Emanuel, A.E.

    1991-03-01

    This article presents a preliminary analysis of the effect of randomly varying harmonic voltages on the temperature rise of squirrel-cage motors. The stochastic process of random variations of harmonic voltages is defined by means of simple statistics (mean, standard deviation, type of distribution). Computational models based on a first-order approximation of the motor losses and on the Monte Carlo method yield results which prove that equipment with a large thermal time constant is capable of withstanding, for a short period of time, distortions larger than THD = 5%.

  8. Improving the distinguishable cluster results: spin-component scaling

    NASA Astrophysics Data System (ADS)

    Kats, Daniel

    2018-06-01

    The spin-component scaling is employed in the energy evaluation to improve the distinguishable cluster approach. SCS-DCSD reaction energies reproduce reference values with a root-mean-squared deviation well below 1 kcal/mol, the interaction energies are three to five times more accurate than DCSD, and molecular systems with a large amount of static electron correlation are still described reasonably well. SCS-DCSD represents a pragmatic approach to achieve chemical accuracy with a simple method without triples, which can also be applied to multi-configurational molecular systems.

  9. Application of importance sampling to the computation of large deviations in nonequilibrium processes.

    PubMed

    Kundu, Anupam; Sabhapandit, Sanjib; Dhar, Abhishek

    2011-03-01

    We present an algorithm for finding the probabilities of rare events in nonequilibrium processes. The algorithm consists of evolving the system with a modified dynamics for which the required event occurs more frequently. By keeping track of the relative weight of phase-space trajectories generated by the modified and the original dynamics, one can obtain the required probabilities. The algorithm is tested on two model systems of steady-state particle and heat transport, where we find a huge improvement over direct simulation methods.
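
    The reweighting idea can be illustrated with a toy estimate of the probability that an unbiased random walk ends above a high level, sampled under a biased dynamics; the model and numbers are illustrative, not the transport systems studied in the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      n_steps, n_traj, level = 100, 100_000, 40
      p_orig, p_mod = 0.5, 0.7        # original vs modified up-step probability

      ups = (rng.random((n_traj, n_steps)) < p_mod).sum(axis=1)
      position = 2 * ups - n_steps
      # relative weight of each trajectory: original dynamics / modified dynamics
      log_w = (ups * np.log(p_orig / p_mod)
               + (n_steps - ups) * np.log((1 - p_orig) / (1 - p_mod)))
      p_rare = np.mean(np.exp(log_w) * (position >= level))
      print(p_rare)   # direct simulation would need ~1/p_rare trajectories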

  10. Use of Standard Deviations as Predictors in Models Using Large-Scale International Data Sets

    ERIC Educational Resources Information Center

    Austin, Bruce; French, Brian; Adesope, Olusola; Gotch, Chad

    2017-01-01

    Measures of variability are successfully used in predictive modeling in research areas outside of education. This study examined how standard deviations can be used to address research questions not easily addressed using traditional measures such as group means based on index variables. Student survey data were obtained from the Organisation for…

  11. Development of QuEChERS-based extraction and liquid chromatography-tandem mass spectrometry method for quantifying flumethasone residues in beef muscle.

    PubMed

    Park, Ki Hun; Choi, Jeong-Heui; Abd El-Aty, A M; Cho, Soon-Kil; Park, Jong-Hyouk; Kwon, Ki Sung; Park, Hee Ra; Kim, Hyung Soo; Shin, Ho-Chul; Kim, Mi Ra; Shim, Jae-Han

    2012-12-01

    A rapid, specific, and sensitive method based on liquid chromatography-electrospray ionization tandem mass spectrometry (LC-ESI-MS/MS) in the positive ion mode using multiple reaction monitoring (MRM) was developed and validated to quantify flumethasone residues in beef muscle. Methods were compared between the original as well as the EN quick, easy, cheap, effective, rugged, and safe (QuEChERS)-based extraction. Good linearity was achieved at concentration levels of 5-30 μg/kg. Estimated recovery rates at spiking levels of 5 and 10 μg/kg ranged from 72.1 to 84.6%, with relative standard deviations (RSDs)<7%. The results of the inter-day study, which was performed by fortifying beef muscle samples (n=18) on 3 separate days, showed an accuracy of 93.4-94.4%. The precision (expressed as relative standard deviation values) for the inter-day variation at two levels of fortification (10 and 20 μg/kg) was 1.9-5.2%. The limit of detection (LOD) and limit of quantitation (LOQ) were 1.7 and 5 μg/kg, at signal-to-noise ratios (S/Ns) of 3 and 10, respectively. The method was successfully applied to analyze real samples obtained from large markets throughout the Korean Peninsula. The method proved to be sensitive and reliable and, thus, rendered an appropriate means for residue analysis studies. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Uncertainties of Mayak urine data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Guthrie; Vostrotin, Vadim; Vvdensky, Vladimir

    2008-01-01

    For internal dose calculations for the Mayak worker epidemiological study, quantitative estimates of the uncertainty of the urine measurements are necessary. Some of the data consist of measurements of 24-h urine excretion on successive days (e.g. 3 or 4 days). In a recent publication, dose calculations were done where the uncertainty of the urine measurements was estimated starting from the statistical standard deviation of these replicate measurements. This approach is straightforward and accurate when the number of replicate measurements is large; however, a Monte Carlo study showed it to be problematic for the actual number of replicate measurements (median from 3 to 4). Also, it is sometimes important to characterize the uncertainty of a single urine measurement. Therefore this alternate method has been developed. A method of parameterizing the uncertainty of Mayak urine bioassay measurements is described. The Poisson lognormal model is assumed and data from 63 cases (1099 urine measurements in all) are used to empirically determine the lognormal normalization uncertainty, given the measurement uncertainties obtained from count quantities. The natural logarithm of the geometric standard deviation of the normalization uncertainty is found to be in the range 0.31 to 0.35, including a measurement component estimated to be 0.2.

  13. Program helps quickly calculate deviated well path

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, M.P.

    1993-11-22

    A BASIC computer program quickly calculates the angle and measured depth of a simple directional well given only the true vertical depth and total displacement of the target. Many petroleum engineers and geologists need a quick, easy method to calculate the angle and measured depth necessary to reach a target in a proposed deviated well bore. Too many of the existing programs are large and require much input data. The drilling literature is full of equations and methods to calculate the course of well paths from surveys taken after a well is drilled. Very little information, however, covers how to calculate well bore trajectories for proposed wells from limited data. Furthermore, many of the equations are quite complex and difficult to use. A figure lists a computer program with the equations to calculate the well bore trajectory necessary to reach a given displacement and true vertical depth (TVD) for a simple build plan. It can be run on an IBM-compatible computer with MS-DOS version 5 or higher, QBasic, or any BASIC that does not require line numbers. The QBasic 4.5 compiler will also run the program. The equations are based on conventional geometry and trigonometry.
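
    In the zero-build limit (a straight slant hole from surface), the geometry reduces to plane trigonometry, as in the Python lines below with made-up target values; the published program handles a simple build plan with a kickoff and build section, which adds an arc to this calculation.

      import math

      def slant_hole(tvd, displacement):
          angle = math.degrees(math.atan2(displacement, tvd))   # hole angle
          md = math.hypot(tvd, displacement)                    # measured depth
          return angle, md

      angle, md = slant_hole(tvd=8000.0, displacement=3000.0)   # ft, made-up target
      print(f"hold angle {angle:.1f} deg, measured depth {md:.0f} ft")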

  14. [Study on freshness evaluation of ice-stored large yellow croaker (Pseudosciaena crocea) using near infrared spectroscopy].

    PubMed

    Liu, Yuan; Chen, Wei-Hua; Hou, Qiao-Juan; Wang, Xi-Chang; Dong, Ruo-Yan; Wu, Hao

    2014-04-01

    Near infrared spectroscopy (NIR) was used in this experiment to evaluate the freshness of ice-stored large yellow croaker (Pseudosciaena crocea) during different storage periods, with TVB-N used as the freshness index. By comparing the correlation coefficients and standard deviations of the calibration and validation sets of models built with different pretreatment methods (used singly and in combination), different modeling methods and different wavelength regions, the best TVB-N models of ice-stored large yellow croaker sold in the market were established to predict the freshness quickly. The best-performing model was established by using normalization by closure (Ncl) with 1st derivative (Dbl) and normalization to unit length (Nle) with 1st derivative as the pretreatment methods and partial least squares (PLS) as the modeling method, combined with the wavelength regions of 5000-7144 and 7404-10000 cm(-1). The calibration model gave a correlation coefficient of 0.992, with a standard error of calibration of 1.045, and the validation model gave a correlation coefficient of 0.999, with a standard error of prediction of 0.990. This experiment combined several pretreatment methods and chose the best wavelength region, with good results. The approach has good application prospects for freshness detection and quality evaluation of large yellow croaker in the market.
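
    The modeling step can be outlined with scikit-learn's PLS regression applied to derivative-pretreated spectra, as below; the arrays are random placeholders, and the published model additionally combines normalization pretreatments and restricts the fit to the selected wavenumber regions.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      X = np.random.rand(60, 1500)      # NIR spectra (samples x data points)
      y = np.random.rand(60) * 30       # TVB-N reference values

      X_d1 = np.gradient(X, axis=1)     # simple first-derivative pretreatment
      pls = PLSRegression(n_components=8).fit(X_d1, y)
      r = np.corrcoef(pls.predict(X_d1).ravel(), y)[0, 1]
      print(f"calibration correlation coefficient: {r:.3f}")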

  15. Large-deviation properties of Brownian motion with dry friction.

    PubMed

    Chen, Yaming; Just, Wolfram

    2014-10-01

    We investigate piecewise-linear stochastic models with regard to the probability distribution of functionals of the stochastic processes, a question that occurs frequently in large deviation theory. The functionals that we are looking into in detail are related to the time a stochastic process spends at a phase space point or in a phase space region, as well as to the motion with inertia. For a Langevin equation with discontinuous drift, we extend the so-called backward Fokker-Planck technique for non-negative support functionals to arbitrary support functionals, to derive explicit expressions for the moments of the functional. Explicit solutions for the moments and for the distribution of the so-called local time, the occupation time, and the displacement are derived for the Brownian motion with dry friction, including quantitative measures to characterize deviation from Gaussian behavior in the asymptotic long time limit.

  16. A sequential quadratic programming algorithm using an incomplete solution of the subproblem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murray, W.; Prieto, F.J.

    1993-05-01

    We analyze sequential quadratic programming (SQP) methods to solve nonlinear constrained optimization problems that are more flexible in their definition than standard SQP methods. The type of flexibility introduced is motivated by the necessity to deviate from the standard approach when solving large problems. Specifically we no longer require a minimizer of the QP subproblem to be determined or particular Lagrange multiplier estimates to be used. Our main focus is on an SQP algorithm that uses a particular augmented Lagrangian merit function. New results are derived for this algorithm under weaker conditions than previously assumed; in particular, it is not assumed that the iterates lie on a compact set.

  17. Generation of the pitch moment during the controlled flight after takeoff of fruitflies.

    PubMed

    Chen, Mao Wei; Wu, Jiang Hao; Sun, Mao

    2017-01-01

    In the present paper, the controlled flight of fruitflies after voluntary takeoff is studied. Wing and body kinematics of the insects after takeoff are measured using high-speed video techniques, and the aerodynamic force and moment are calculated by the computational fluid dynamics method based on the measured data. How the control moments are generated is analyzed by correlating the computed moments with the wing kinematics. A fruitfly has a large pitch-up angular velocity owing to the takeoff jump, and the fly controls its body attitude by producing pitching moments. It is found that the pitching moment is produced by changes in both the aerodynamic force and the moment arm. The change in the aerodynamic force is mainly due to the change in angle of attack. The change in the moment arm is mainly due to the change in the mean stroke angle and deviation angle, and the deviation angle plays a more important role than the mean stroke angle in changing the moment arm (note that a change in deviation angle implies variation in the position of the aerodynamic stroke plane with respect to the anatomical stroke plane). This is unlike the case of fruitflies correcting pitch perturbations in steady free flight, where they produce pitching moment mainly by changes in mean stroke angle.

  18. Large deviation approach to the generalized random energy model

    NASA Astrophysics Data System (ADS)

    Dorlas, T. C.; Dukes, W. M. B.

    2002-05-01

    The generalized random energy model is a generalization of the random energy model introduced by Derrida to mimic the ultrametric structure of the Parisi solution of the Sherrington-Kirkpatrick model of a spin glass. It was solved exactly in two special cases by Derrida and Gardner. A complete solution for the thermodynamics in the general case was given by Capocaccia et al. Here we use large deviation theory to analyse the model in a very straightforward way. We also show that the variational expression for the free energy can be evaluated easily using the Cauchy-Schwarz inequality.

  19. Large Deviations in Weakly Interacting Boundary Driven Lattice Gases

    NASA Astrophysics Data System (ADS)

    van Wijland, Frédéric; Rácz, Zoltán

    2005-01-01

    One-dimensional, boundary-driven lattice gases with local interactions are studied in the weakly interacting limit. The density profiles and the correlation functions are calculated to first order in the interaction strength for zero-range and short-range processes differing only in the specifics of the detailed-balance dynamics. Furthermore, the effective free-energy (large-deviation function) and the integrated current distribution are also found to this order. From the former, we find that the boundary drive generates long-range correlations only for the short-range dynamics while the latter provides support to an additivity principle recently proposed by Bodineau and Derrida.

  20. Current fluctuations in periodically driven systems

    NASA Astrophysics Data System (ADS)

    Barato, Andre C.; Chetrite, Raphael

    2018-05-01

    Small nonequilibrium systems driven by an external periodic protocol can be described by Markov processes with time-periodic transition rates. In general, current fluctuations in such small systems are large and may play a crucial role. We develop a theoretical formalism to evaluate the rate of such large deviations in periodically driven systems. We show that the scaled cumulant generating function that characterizes current fluctuations is given by a maximal Floquet exponent. Comparing deterministic protocols with stochastic protocols, we show that, with respect to large deviations, systems driven by a stochastic protocol with an infinitely large number of jumps are equivalent to systems driven by deterministic protocols. Our results are illustrated with three case studies: a two-state model for a heat engine, a three-state model for a molecular pump, and a biased random walk with a time-periodic affinity.
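
    The Floquet recipe admits a compact numerical sketch for a two-state model with a time-periodic rate: tilt the generator with a counting field s on one transition, compose the one-period propagator from short-time matrix exponentials, and take (1/T) log of its dominant eigenvalue. The rates below are arbitrary illustrative choices, not the paper's case studies.

      import numpy as np
      from scipy.linalg import expm

      def scgf(s, T=1.0, n_slices=200):
          dt = T / n_slices
          U = np.eye(2)
          for m in range(n_slices):
              t = (m + 0.5) * dt
              k01 = 1.0 + 0.5 * np.sin(2 * np.pi * t / T)   # periodic rate 0 -> 1
              k10 = 2.0
              # tilted generator: jumps 0 -> 1 are counted with weight e^s
              W = np.array([[-k01, k10],
                            [k01 * np.exp(s), -k10]])
              U = expm(W * dt) @ U
          return np.log(max(np.linalg.eigvals(U).real)) / T

      print(scgf(0.0))   # ~ 0 at s = 0, since probability is conserved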

  1. [Comparisons of manual and automatic refractometry with subjective results].

    PubMed

    Wübbolt, I S; von Alven, S; Hülssner, O; Erb, C

    2006-11-01

    Refractometry is very important in everyday clinical practice. The aim of this study was to compare the precision of three objective methods of refractometry with subjective dioptometry (phoropter) and to identify the objective method with the smallest deviation from the subjective refractometry results. The objective methods/instruments used were retinoscopy, the Prism Refractometer PR 60 (Rodenstock) and the Auto Refractometer RM-A 7000 (Topcon). The results of monocular dioptometry (sphere, cylinder and axis) of each objective method were compared to the results of the subjective method. The examination was carried out on 178 eyes, which were divided into 3 age-related groups: 6-12 years (103 eyes), 13-18 years (38 eyes) and older than 18 years (37 eyes). All measurements were made in cycloplegia. The smallest standard deviation of the measurement error was found for the Auto Refractometer RM-A 7000. Both the PR 60 and retinoscopy had a clearly higher standard deviation. Furthermore, the RM-A 7000 showed a significant bias in the measurement error in three of the nine comparisons, and retinoscopy in four. The Auto Refractometer provides measurements with the smallest deviation from the subjective method, although it has to be taken into account that the measurements of the sphere have an average deviation of +0.2 dpt. In comparison to retinoscopy, the examination of children with the RM-A 7000 is difficult. An advantage of the Auto Refractometer is its fast and easy handling, so that measurements can be performed by medical staff.

  2. SU-G-TeP2-04: Comprehensive Machine Isocenter Evaluation with Separation of Gantry, Collimator, and Table Variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hancock, S; Clements, C; Hyer, D

    2016-06-15

    Purpose: To develop and demonstrate application of a method that characterizes deviation of linac x-ray beams from the centroid of the volumetric radiation isocenter as a function of gantry, collimator, and table variables. Methods: A set of Winston-Lutz ball-bearing images was used to determine the gantry radiation isocenter as the midrange of deviation values resulting from gantry and collimator rotation. Also determined were displacement of the table axis from gantry isocenter and recommended table axis adjustment. The method, previously reported, has been extended to include the effect of collimator walkout by obtaining measurements with 0 and 180 degree collimator rotation for each gantry angle. Twelve images were used to characterize the volumetric isocenter for the full range of available gantry, collimator, and table rotations. Results: Three Varian True Beam, two Elekta Infinity and four Versa HD linacs at five institutions were tested using identical methodology. Varian linacs exhibited substantially less deviation due to head sag than Elekta linacs (0.4 mm vs. 1.2 mm on average). One linac from each manufacturer had additional isocenter deviation of 0.3 to 0.4 mm due to jaw instability with gantry and collimator rotation. For all linacs, the achievable isocenter tolerance was dependent on adjustment of collimator position offset, transverse position steering, and alignment of the table axis with gantry isocenter, facilitated by these test results. The pattern and magnitude of table axis wobble vs. table angle was reproducible and unique to each machine. Conclusion: This new method provides a comprehensive set of isocenter deviation values including all variables. It effectively facilitates minimization of deviation between beam center and target (ball-bearing) position. This method was used to quantify the effect of jaw instability on isocenter deviation and to identify the offending jaw. The test is suitable for incorporation into a routine machine QA program. Software development was performed by Radiological Imaging Technology, Inc.

  3. The biologic error in gestational length related to the use of the first day of last menstrual period as a proxy for the start of pregnancy.

    PubMed

    Nakling, Jakob; Buhaug, Harald; Backe, Bjorn

    2005-10-01

    In a large unselected population of normal spontaneous pregnancies, to estimate the biologic variation of the interval from the first day of the last menstrual period to the start of pregnancy and the biologic variation of gestational length to delivery, and to estimate the random error of routine ultrasound assessment of gestational age in the mid-second trimester. Cohort study of 11,238 singleton pregnancies with spontaneous onset of labour and reliable last menstrual period. The day of delivery was predicted with two independent methods: according to the rule of Nägele and based on ultrasound examination in gestational weeks 17-19. For both methods, the mean difference between the observed and predicted day of delivery was calculated. The variances of the differences were combined to estimate the variances of the two partitions of pregnancy. The biologic variation of the time from the last menstrual period to the start of pregnancy was estimated to be 7.0 days (standard deviation), and the standard deviation of the time to spontaneous delivery was estimated to be 12.4 days. The estimate of the standard deviation of the random error of ultrasound-assessed foetal age was 5.2 days. Even when the last menstrual period is reliable, the biologic variation of the time from the last menstrual period to the real start of pregnancy is substantial and must be taken into account. Reliable information about the first day of the last menstrual period is not equivalent to reliable information about the start of pregnancy.
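
    The quoted numbers obey simple variance addition, which the following arithmetic sketch makes explicit; treating the LMP-to-conception offset and the gestational length as independent components is an assumption read off the abstract, not a result stated there.

      import math

      sd_offset, sd_length, sd_us = 7.0, 12.4, 5.2    # days, from the abstract
      # Naegele prediction error combines conception-offset and length variation
      print(math.sqrt(sd_offset**2 + sd_length**2))   # ~14.2 days
      # the ultrasound route instead combines its dating error with length variation
      print(math.sqrt(sd_us**2 + sd_length**2))       # ~13.4 days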

  4. Non-equilibrium phase transition in mesoscopic biochemical systems: from stochastic to nonlinear dynamics and beyond

    PubMed Central

    Ge, Hao; Qian, Hong

    2011-01-01

    A theory for a non-equilibrium phase transition in a driven biochemical network is presented. The theory is based on the chemical master equation (CME) formulation of mesoscopic biochemical reactions and the mathematical method of large deviations. The large deviations theory provides an analytical tool connecting the macroscopic multi-stability of an open chemical system with the multi-scale dynamics of its mesoscopic counterpart. It shows a corresponding non-equilibrium phase transition among multiple stochastic attractors. As an example, in the canonical phosphorylation–dephosphorylation system with feedback that exhibits bistability, we show that the non-equilibrium steady-state (NESS) phase transition has all the characteristics of a classic equilibrium phase transition: Maxwell construction, a discontinuous first derivative of the ‘free energy function’, Lee–Yang zeros of a generating function and a critical point that matches the cusp in nonlinear bifurcation theory. For the biochemical system, the mathematical analysis suggests three distinct timescales and corresponding levels of description: (i) molecular signalling, (ii) biochemical network nonlinear dynamics, and (iii) cellular evolution. For finite mesoscopic systems such as a cell, motions associated with (i) and (iii) are stochastic while that with (ii) is deterministic. Both (ii) and (iii) are emergent properties of a dynamic biochemical network. PMID:20466813

  5. Robustness and cognition in stabilization problem of dynamical systems based on asymptotic methods

    NASA Astrophysics Data System (ADS)

    Dubovik, S. A.; Kabanov, A. A.

    2017-01-01

    The problem of synthesis of stabilizing systems based on the principles of cognitive (logical-dynamic) control for mobile objects used under uncertain conditions is considered. This direction in control theory is based on the principles of guaranteeing robust synthesis focused on worst-case scenarios of the controlled process. The guaranteeing approach can ensure functioning of the system with the required quality and reliability only under sufficiently small disturbances and in the absence of large deviations from some regular features of the controlled process. The main tool for the analysis of large deviations and the prediction of critical states here is the action functional. After the forecast is built, the choice of anti-crisis control is a supervisory control problem that optimizes the control system in normal mode and prevents escape of the controlled process into critical states. An essential aspect of the approach presented here is the presence of two-level (logical-dynamic) control: the input data are used not only for generating the synthesized feedback (local robust synthesis) in advance (off-line), but also to make decisions about the current (on-line) quality of stabilization in the global sense. An example of using the presented approach for the development of a ship tilting prediction system is considered.

  6. Oxygen Distributions—Evaluation of Computational Methods, Using a Stochastic Model for Large Tumour Vasculature, to Elucidate the Importance of Considering a Complete Vascular Network

    PubMed Central

    Bernhardt, Peter

    2016-01-01

    Purpose To develop a general model that utilises a stochastic method to generate a vessel tree based on experimental data, and an associated irregular, macroscopic tumour. These will be used to evaluate two different methods for computing oxygen distribution. Methods A vessel tree structure, and an associated tumour of 127 cm3, were generated, using a stochastic method and Bresenham’s line algorithm to develop trees on two different scales and fusing them together. The vessel dimensions were adjusted through convolution and thresholding and each vessel voxel was assigned an oxygen value. Diffusion and consumption were modelled using a Green’s function approach together with Michaelis-Menten kinetics. The computations were performed using a combined tree method (CTM) and an individual tree method (ITM). Five tumour sub-sections were compared, to evaluate the methods. Results The oxygen distributions of the same tissue samples, using different methods of computation, were considerably less similar (root mean square deviation, RMSD≈0.02) than the distributions of different samples using CTM (0.001< RMSD<0.01). The deviations of ITM from CTM increase with lower oxygen values, resulting in ITM severely underestimating the level of hypoxia in the tumour. Kolmogorov-Smirnov (KS) tests showed that millimetre-scale samples may not represent the whole. Conclusions The stochastic model managed to capture the heterogeneous nature of hypoxic fractions and, even though the simplified computation did not considerably alter the oxygen distribution, it led to an evident underestimation of tumour hypoxia, and thereby radioresistance. For a trustworthy computation of tumour oxygenation, the interaction between adjacent microvessel trees must not be neglected, which is why evaluation should be made at high resolution using the CTM, applied to the entire tumour. PMID:27861529

  7. Multiple sensor fault diagnosis for dynamic processes.

    PubMed

    Li, Cheng-Chih; Jeng, Jyh-Cheng

    2010-10-01

    Modern industrial plants are usually large scaled and contain a great number of sensors. Sensor fault diagnosis is crucial and necessary for process safety and optimal operation. This paper proposes a systematic approach to detect, isolate and identify multiple sensor faults for multivariate dynamic systems. The current work first defines deviation vectors for sensor observations, and further defines and derives the basic sensor fault matrix (BSFM), consisting of the normalized basic fault vectors, by several different methods. By projecting a process deviation vector onto the space spanned by the BSFM, this research uses a vector of the resulting weights in each direction for multiple sensor fault diagnosis. This study also proposes a novel monitoring index and derives the corresponding sensor fault detectability. The study also utilizes that vector to isolate and identify multiple sensor faults, and discusses the isolatability and identifiability. Simulation examples and comparison with two conventional PCA-based contribution plots are presented to demonstrate the effectiveness of the proposed methodology. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
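
    The projection step can be written in a few lines of Python: the observed deviation vector is regressed onto the columns of the basic sensor fault matrix and the resulting weights point to the faulty sensors. The BSFM, deviation vector, and threshold below are placeholders, not the paper's derivations.

      import numpy as np

      n_sensors = 5
      bsfm = np.eye(n_sensors)          # placeholder normalized fault vectors
      deviation = np.array([0.1, 2.7, -0.2, 0.05, 1.9])   # observed deviations

      weights, *_ = np.linalg.lstsq(bsfm, deviation, rcond=None)
      faulty = np.flatnonzero(np.abs(weights) > 1.0)      # ad hoc threshold
      print(f"suspected faulty sensors: {faulty}, weights: {weights.round(2)}")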

  8. On-Track Testing as a Validation Method of Computational Fluid Dynamic Simulations of a Formula SAE Vehicle

    NASA Astrophysics Data System (ADS)

    Weingart, Robert

    This thesis is about the validation of a computational fluid dynamics simulation of a ground vehicle by means of a low-budget coast-down test. The vehicle is built to the standards of the 2014 Formula SAE rules. It is equipped with large wings at the front and rear of the car; the vertical loads on the tires are measured by specifically calibrated shock potentiometers. The coast-down test was performed on a runway of a local airport and is used to determine vehicle-specific coefficients such as drag, downforce, aerodynamic balance, and rolling resistance for different aerodynamic setups. The test results are then compared to the respective simulated results. The drag deviates by about 5% between the simulated and measured results, while the downforce deviates by up to 18%. Moreover, a sensitivity analysis of inlet velocities, ride heights, and pitch angles was performed with the help of the computational simulation.

  9. Taming Many-Parameter BSM Models with Bayesian Neural Networks

    NASA Astrophysics Data System (ADS)

    Kuchera, M. P.; Karbo, A.; Prosper, H. B.; Sanchez, A.; Taylor, J. Z.

    2017-09-01

    The search for physics Beyond the Standard Model (BSM) is a major focus of large-scale high energy physics experiments. One method is to look for specific deviations from the Standard Model that are predicted by BSM models. In cases where the model has a large number of free parameters, standard search methods become intractable due to computation time. This talk presents results using Bayesian Neural Networks, a supervised machine learning method, to enable the study of higher-dimensional models. The popular phenomenological Minimal Supersymmetric Standard Model was studied as an example of the feasibility and usefulness of this method. Graphics Processing Units (GPUs) are used to expedite the calculations. Cross-section predictions for 13 TeV proton collisions will be presented. My participation in the Conference Experience for Undergraduates (CEU) in 2004-2006 exposed me to the national and global significance of cutting-edge research. At the 2005 CEU, I presented work from the previous summer's SULI internship at Lawrence Berkeley Laboratory, where I learned to program while working on the Majorana Project. That work inspired me to follow a similar research path, which led me to my current work on computational methods applied to BSM physics.

  10. Cosmological implications of a large complete quasar sample.

    PubMed

    Segal, I E; Nicoll, J F

    1998-04-28

    Objective and reproducible determinations of the probabilistic significance levels of the deviations between theoretical cosmological prediction and direct model-independent observation are made for the Large Bright Quasar Sample [Foltz, C., Chaffee, F. H., Hewett, P. C., MacAlpine, G. M., Turnshek, D. A., et al. (1987) Astron. J. 94, 1423-1460]. The Expanding Universe model as represented by the Friedman-Lemaitre cosmology with parameters q₀ = 0, Λ = 0, denoted as C1, and chronometric cosmology (no relevant adjustable parameters), denoted as C2, are the cosmologies considered. The mean and the dispersion of the apparent magnitudes and the slope of the apparent magnitude-redshift relation are the directly observed statistics predicted. The C1 predictions of these cosmology-independent quantities are deviant by as much as 11σ from direct observation; none of the C2 predictions deviate by >2σ. The C1 deviations may be reconciled with theory by the hypothesis of quasar "evolution," which, however, appears incapable of being substantiated through direct observation. The excellent quantitative agreement of the C1 deviations with those predicted by C2 without adjustable parameters for the results of analysis predicated on C1 indicates that the evolution hypothesis may well be a theoretical artifact.

  11. Estimating the brain pathological age of Alzheimer’s disease patients from MR image data based on the separability distance criterion

    NASA Astrophysics Data System (ADS)

    Li, Yongming; Li, Fan; Wang, Pin; Zhu, Xueru; Liu, Shujun; Qiu, Mingguo; Zhang, Jingna; Zeng, Xiaoping

    2016-10-01

    Traditional age estimation methods are based on the same idea that uses the real age as the training label. However, these methods ignore that there is a deviation between the real age and the brain age due to accelerated brain aging. This paper considers this deviation and searches for it by maximizing the separability distance value rather than by minimizing the difference between the estimated brain age and the real age. Firstly, set the search range of the deviation as the deviation candidates according to prior knowledge. Secondly, use support vector regression (SVR) as the age estimation model to minimize the difference between the estimated age and the real age plus deviation rather than the real age itself. Thirdly, design the fitness function based on the separability distance criterion. Fourthly, conduct age estimation on the validation dataset using the trained age estimation model, put the estimated age into the fitness function, and obtain the fitness value of the deviation candidate. Fifthly, repeat the iteration until all the deviation candidates are involved and the optimal deviation with the maximum fitness value is obtained. The real age plus the optimal deviation is taken as the brain pathological age. The experimental results showed that the separability was clearly improved. For normal control-Alzheimer’s disease (NC-AD), normal control-mild cognitive impairment (NC-MCI), and MCI-AD, the average improvements were 0.178 (35.11%), 0.033 (14.47%), and 0.017 (39.53%), respectively. For NC-MCI-AD, the average improvement was 0.2287 (64.22%). The estimated brain pathological age could not only be more helpful to the classification of AD but also more precisely reflect accelerated brain aging. In conclusion, this paper offers a new method for brain age estimation that can distinguish different states of AD and can better reflect the extent of accelerated aging.
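
    The search can be condensed into a loop like the one below, where each candidate deviation shifts the patients' training labels, an SVR is refitted, and the candidate maximizing a separability score is kept; the score used here is a crude stand-in for the paper's separability distance criterion, and all inputs are assumed numpy arrays (with 0/1 patient indicators).

      import numpy as np
      from sklearn.svm import SVR

      def separability(est_patient, est_control):
          gap = abs(est_patient.mean() - est_control.mean())
          return gap / (est_patient.std() + est_control.std() + 1e-9)

      def search_deviation(X_tr, age_tr, is_pat_tr, X_va, is_pat_va, candidates):
          best_dev, best_score = None, -np.inf
          for dev in candidates:
              y = age_tr + dev * is_pat_tr          # shift patient labels only
              est = SVR().fit(X_tr, y).predict(X_va)
              score = separability(est[is_pat_va == 1], est[is_pat_va == 0])
              if score > best_score:
                  best_dev, best_score = dev, score
          return best_dev, best_score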

  12. Adaptive variational mode decomposition method for signal processing based on mode characteristic

    NASA Astrophysics Data System (ADS)

    Lian, Jijian; Liu, Zhuo; Wang, Haijun; Dong, Xiaofeng

    2018-07-01

    Variational mode decomposition is a completely non-recursive decomposition model, where all the modes are extracted concurrently. However, the model requires a preset mode number, which limits the adaptability of the method, since a large error in the preset mode number will cause modes to be discarded or mixed. Hence, a method called Adaptive Variational Mode Decomposition (AVMD) is proposed to automatically determine the mode number based on the characteristics of the intrinsic mode functions. The method was used to analyze simulated signals and measured signals from a hydropower plant. Comparisons were also conducted against VMD, EMD and EWT to evaluate the performance. The results indicate that the proposed method has strong adaptability and is robust to noise. It can determine the mode number appropriately, without mode mixing, even when the signal frequencies are relatively close.
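
    Schematically, the adaptive selection can be wrapped around any VMD routine as below; `vmd` is a hypothetical decomposition function mapping (signal, K) to an array of K modes, and the residual-energy stopping rule is a placeholder for the paper's intrinsic-mode-characteristic criterion.

      import numpy as np

      def adaptive_vmd(signal, vmd, k_max=12, tol=1e-3):
          for k in range(1, k_max + 1):
              modes = vmd(signal, k)                  # hypothetical VMD call
              residual = signal - np.sum(modes, axis=0)
              if np.sum(residual**2) / np.sum(signal**2) < tol:
                  break                               # mode number is adequate
          return k, modes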

  13. Extended-range high-resolution dynamical downscaling over a continental-scale spatial domain with atmospheric and surface nudging

    NASA Astrophysics Data System (ADS)

    Husain, S. Z.; Separovic, L.; Yu, W.; Fernig, D.

    2014-12-01

    Extended-range high-resolution mesoscale simulations with limited-area atmospheric models when applied to downscale regional analysis fields over large spatial domains can provide valuable information for many applications including the weather-dependent renewable energy industry. Long-term simulations over a continental-scale spatial domain, however, require mechanisms to control the large-scale deviations in the high-resolution simulated fields from the coarse-resolution driving fields. As enforcement of the lateral boundary conditions is insufficient to restrict such deviations, large scales in the simulated high-resolution meteorological fields are therefore spectrally nudged toward the driving fields. Different spectral nudging approaches, including the appropriate nudging length scales as well as the vertical profiles and temporal relaxations for nudging, have been investigated to propose an optimal nudging strategy. Impacts of time-varying nudging and generation of hourly analysis estimates are explored to circumvent problems arising from the coarse temporal resolution of the regional analysis fields. Although controlling the evolution of the atmospheric large scales generally improves the outputs of high-resolution mesoscale simulations within the surface layer, the prognostically evolving surface fields can nevertheless deviate from their expected values leading to significant inaccuracies in the predicted surface layer meteorology. A forcing strategy based on grid nudging of the different surface fields, including surface temperature, soil moisture, and snow conditions, toward their expected values obtained from a high-resolution offline surface scheme is therefore proposed to limit any considerable deviation. Finally, wind speed and temperature at wind turbine hub height predicted by different spectrally nudged extended-range simulations are compared against observations to demonstrate possible improvements achievable using higher spatiotemporal resolution.

  14. Oxygen Distributions-Evaluation of Computational Methods, Using a Stochastic Model for Large Tumour Vasculature, to Elucidate the Importance of Considering a Complete Vascular Network.

    PubMed

    Lagerlöf, Jakob H; Bernhardt, Peter

    2016-01-01

    To develop a general model that uses a stochastic method to generate a vessel tree, based on experimental data, together with an associated irregular, macroscopic tumour, and to use these to evaluate two different methods for computing the oxygen distribution. A vessel tree structure and an associated tumour of 127 cm3 were generated using a stochastic method and Bresenham's line algorithm to develop trees on two different scales, which were then fused together. The vessel dimensions were adjusted through convolution and thresholding, and each vessel voxel was assigned an oxygen value. Diffusion and consumption were modelled using a Green's function approach together with Michaelis-Menten kinetics. The computations were performed using a combined tree method (CTM) and an individual tree method (ITM), and five tumour sub-sections were compared to evaluate the methods. The oxygen distributions of the same tissue samples computed with the different methods were considerably less similar (root mean square deviation, RMSD ≈ 0.02) than the distributions of different samples computed with the CTM (0.001 < RMSD < 0.01). The deviations of the ITM from the CTM increase at lower oxygen values, so the ITM severely underestimates the level of hypoxia in the tumour. Kolmogorov-Smirnov (KS) tests showed that millimetre-scale samples may not represent the whole. The stochastic model captured the heterogeneous nature of hypoxic fractions and, even though the simplified computation did not considerably alter the overall oxygen distribution, it led to an evident underestimation of tumour hypoxia, and thereby of radioresistance. For a trustworthy computation of tumour oxygenation, the interaction between adjacent microvessel trees must not be neglected, which is why the evaluation should be performed at high resolution with the CTM applied to the entire tumour.
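
    A sketch of the Green's-function superposition step alone, with point sources at vessel voxels summed via the free-space diffusion kernel q/(4πDr). The iteration that couples Michaelis-Menten consumption back into the sources is omitted, and all parameter values are illustrative assumptions.

    ```python
    # Illustrative Green's-function superposition for an oxygen field.
    import numpy as np

    def oxygen_field(grid_points, vessel_points, source_strengths, D=2.0e-9):
        """grid_points: (N, 3); vessel_points: (M, 3); source_strengths: (M,)."""
        field = np.zeros(len(grid_points))
        for p, q in zip(vessel_points, source_strengths):
            r = np.linalg.norm(grid_points - p, axis=1)
            # point-source kernel; clip r to avoid the singularity at the source
            field += q / (4.0 * np.pi * D * np.maximum(r, 1e-6))
        return field
    ```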

  15. Exploring the limitations of the Hantzsch method used for quantification of hydroxyl radicals in systems of relevance for interfacial radiation chemistry

    NASA Astrophysics Data System (ADS)

    Yang, Miao; Soroka, Inna; Jonsson, Mats

    2017-01-01

    In the presence of Tris or methanol, hydroxyl radicals in systems of relevance for interfacial radiation chemistry can be quantified indirectly via the Hantzsch method by determining the amount of the scavenging product formaldehyde that is formed. In this work, the influence of H2O2 on the Hantzsch method using acetoacetanilide (AAA) as the derivatization reagent is studied. The experiments show that the measured CH2O concentration deviates from the actual concentration in the presence of H2O2 and that the deviation increases with increasing [H2O2]0/[CH2O]0. The deviation is negative, i.e., the measured formaldehyde concentration is lower than the actual concentration. This leads to an underestimation of the hydroxyl radical production in systems containing significant amounts of H2O2. The main reason for the deviation is found to be three coupled equilibria involving H2O2, CH2O and the derivative produced in the Hantzsch method.

  16. Using an external gating signal to estimate noise in PET with an emphasis on tracer avid tumors

    NASA Astrophysics Data System (ADS)

    Schmidtlein, C. R.; Beattie, B. J.; Bailey, D. L.; Akhurst, T. J.; Wang, W.; Gönen, M.; Kirov, A. S.; Humm, J. L.

    2010-10-01

    The purpose of this study is to establish and validate a methodology for estimating the standard deviation of voxels with large activity concentrations within a PET image using replicate imaging that is immediately available for use in the clinic. To do this, ensembles of voxels in the averaged replicate images were compared to the corresponding ensembles in images derived from summed sinograms. In addition, the replicate imaging noise estimate was compared to a noise estimate based on an ensemble of voxels within a region. To make this comparison two phantoms were used. The first phantom was a seven-chamber phantom constructed of 1 liter plastic bottles. Each chamber of this phantom was filled with a different activity concentration relative to the lowest activity concentration with ratios of 1:1, 1:1, 2:1, 2:1, 4:1, 8:1 and 16:1. The second phantom was a GE Well-Counter phantom. These phantoms were imaged and reconstructed on a GE DSTE PET/CT scanner with 2D and 3D reprojection filtered backprojection (FBP), and with 2D- and 3D-ordered subset expectation maximization (OSEM). A series of tests were applied to the resulting images that showed that the region and replicate imaging methods for estimating standard deviation were equivalent for backprojection reconstructions. Furthermore, the noise properties of the FBP algorithms allowed scaling the replicate estimates of the standard deviation by a factor of 1/√N, where N is the number of replicate images, to obtain the standard deviation of the full data image. This was not the case for OSEM image reconstruction. Due to nonlinearity of the OSEM algorithm, the noise is shown to be both position and activity concentration dependent in such a way that no simple scaling factor can be used to extrapolate noise as a function of counts. The use of the Well-Counter phantom contributed to the development of a heuristic extrapolation of the noise as a function of radius in FBP. In addition, the signal-to-noise ratio for high uptake objects was confirmed to be higher with backprojection image reconstruction methods. These techniques were applied to several patient data sets acquired in either 2D or 3D mode, with 18F (FLT and FDG). Images of the standard deviation and signal-to-noise ratios were constructed and the standard deviations of the tumors' uptake were determined. Finally, a radial noise extrapolation relationship deduced in this paper was applied to patient data.
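
    A tiny synthetic check of the quoted replicate-scaling result for linear (FBP-like) reconstructions: averaging N independent replicates reduces the voxel standard deviation by 1/√N, so the replicate estimate can be rescaled to the full-data image. Gaussian noise stands in for real data here.

    ```python
    # Numerical check of the 1/sqrt(N) replicate scaling (synthetic data).
    import numpy as np

    rng = np.random.default_rng(0)
    N = 10                                                   # number of replicate images
    replicates = rng.normal(100.0, 5.0, size=(N, 64, 64))    # voxel noise sigma = 5
    mean_image = replicates.mean(axis=0)
    sigma_replicate = replicates.std(axis=0, ddof=1).mean()  # ~5, per-replicate noise
    sigma_full = sigma_replicate / np.sqrt(N)                # ~1.6, full-data estimate
    print(sigma_replicate, sigma_full, mean_image.std())     # mean_image.std() ~ 5/sqrt(N)
    ```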

  17. Detection and quantification system for monitoring instruments

    DOEpatents

    Dzenitis, John M [Danville, CA; Hertzog, Claudia K [Houston, TX; Makarewicz, Anthony J [Livermore, CA; Henderer, Bruce D [Livermore, CA; Riot, Vincent J [Oakland, CA

    2008-08-12

    A method of detecting real events by obtaining a set of recent signal results, calculating measures of the noise or variation based on the set of recent signal results, calculating an expected baseline value based on the set of recent signal results, determining sample deviation, calculating an allowable deviation by multiplying the sample deviation by a threshold factor, setting an alarm threshold from the baseline value plus or minus the allowable deviation, and determining whether the signal results exceed the alarm threshold.
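
    The claim maps directly onto a short rolling-window routine. The statistics chosen below (median baseline, sample standard deviation) and the threshold factor are illustrative assumptions rather than the patented embodiment.

    ```python
    # Hedged sketch of the detection logic over a window of recent results.
    import numpy as np

    def exceeds_alarm(recent_signals, new_value, threshold_factor=3.0):
        baseline = np.median(recent_signals)             # expected baseline value
        sample_dev = np.std(recent_signals, ddof=1)      # measure of noise/variation
        allowable = threshold_factor * sample_dev        # allowable deviation
        upper = baseline + allowable                     # alarm thresholds
        lower = baseline - allowable
        return new_value > upper or new_value < lower
    ```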

  18. Random errors in interferometry with the least-squares method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Qi

    2011-01-20

    This investigation analyzes random errors in interferometric surface profilers using the least-squares method when random noises are present. Two types of random noise are considered here: intensity noise and position noise. Two formulas have been derived for estimating the standard deviations of the surface height measurements: one is for estimating the standard deviation when only intensity noise is present, and the other is for estimating the standard deviation when only position noise is present. Measurements on simulated noisy interferometric data have been performed, and standard deviations of the simulated measurements have been compared with those theoretically derived. The relationships have also been discussed between random error and the wavelength of the light source and between random error and the amplitude of the interference fringe.
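
    For the intensity-noise case, the least-squares phase estimate from uniformly spaced phase steps has a closed form, and the standard deviation of the recovered height can be checked by Monte Carlo. The fringe parameters below are illustrative assumptions.

    ```python
    # Monte-Carlo sketch: least-squares phase from phase-shifted frames with
    # additive intensity noise, and the resulting surface-height std.
    import numpy as np

    rng = np.random.default_rng(1)
    lam, A, B, phi_true = 633e-9, 1.0, 0.5, 0.7        # wavelength, fringe params
    N_steps, sigma_I, trials = 8, 0.01, 5000
    delta = 2 * np.pi * np.arange(N_steps) / N_steps   # uniform phase steps

    phis = np.empty(trials)
    for t in range(trials):
        I = A + B * np.cos(phi_true + delta) + rng.normal(0, sigma_I, N_steps)
        # closed-form least-squares solution for uniformly spaced steps
        phis[t] = np.arctan2(-(I * np.sin(delta)).sum(), (I * np.cos(delta)).sum())

    height_std = np.std(phis) * lam / (4 * np.pi)      # height std (reflection setup)
    print(height_std)  # compare with sqrt(2/N_steps) * sigma_I / B * lam / (4*pi)
    ```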

  19. Scaling Deviations for Neutrino Reactions in Asymptotically Free Field Theories

    DOE R&D Accomplishments Database

    Wilczek, F. A.; Zee, A.; Treiman, S. B.

    1974-11-01

    Several aspects of deep inelastic neutrino scattering are discussed in the framework of asymptotically free field theories. We first consider the growth behavior of the total cross sections at large energies. Because of the deviations from strict scaling which are characteristic of such theories the growth need not be linear. However, upper and lower bounds are established which rather closely bracket a linear growth. We next consider in more detail the expected pattern of scaling deviation for the structure functions and, correspondingly, for the differential cross sections. The analysis here is based on certain speculative assumptions. The focus is on qualitative effects of scaling breakdown as they may show up in the X and y distributions. The last section of the paper deals with deviations from the Callan-Gross relation.

  20. Do Practical Standard Coupled Cluster Calculations Agree Better than Kohn–Sham Calculations with Currently Available Functionals When Compared to the Best Available Experimental Data for Dissociation Energies of Bonds to 3d Transition Metals?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Xuefei; Zhang, Wenjing; Tang, Mingsheng

    2015-05-12

    Coupled-cluster (CC) methods have been extensively used as the high-level approach in quantum electronic structure theory to predict various properties of molecules when experimental results are unavailable. It is often assumed that CC methods, if they include at least up to connected-triple-excitation quasiperturbative corrections to a full treatment of single and double excitations (in particular, CCSD(T)), and a very large basis set, are more accurate than Kohn–Sham (KS) density functional theory (DFT). In the present work, we tested and compared the performance of standard CC and KS methods on bond energy calculations of 20 3d transition metal-containing diatomic molecules against the most reliable experimental data available, as collected in a database called 3dMLBE20. It is found that, although the CCSD(T) and higher-level CC methods have mean unsigned deviations from experiment that are smaller than those of most exchange-correlation functionals for metal–ligand bond energies of transition metals, the improvement is less than one standard deviation of the mean unsigned deviation. Furthermore, on average, almost half of the 42 exchange-correlation functionals that we tested are closer to experiment than CCSD(T) with the same extended basis set for the same molecule. The results show that, when both relativistic and core–valence correlation effects are considered, even the very high-level (expensive) CC method with single, double, triple, and perturbative quadruple cluster operators, namely, CCSDT(2)Q, averaged over 20 bond energies, gives a mean unsigned deviation MUD(20) = 4.7 kcal/mol when one correlates only valence, 3p, and 3s electrons of transition metals and only valence electrons of ligands, or 4.6 kcal/mol when one correlates all core electrons except for 1s shells of transition metals, S, and Cl; and that is similar to some good xc functionals (e.g., B97-1 (MUD(20) = 4.5 kcal/mol) and PW6B95 (MUD(20) = 4.9 kcal/mol)) when the same basis set is used. We found that, for both coupled cluster calculations and KS calculations, the T1 diagnostics correlate the errors better than either the M diagnostics or the B1 DFT-based diagnostics. The potential use of practical standard CC methods as a benchmark theory is further confounded by the finding that CC and DFT methods usually have different signs of the error. We conclude that the available experimental data do not provide a justification for using conventional single-reference CC theory calculations to validate or test xc functionals for systems involving 3d transition metals.

  1. Experimental investigations on airborne gravimetry based on compressed sensing.

    PubMed

    Yang, Yapeng; Wu, Meiping; Wang, Jinling; Zhang, Kaidong; Cao, Juliang; Cai, Shaokun

    2014-03-18

    Gravity surveys are an important research topic in geophysics and geodynamics. This paper investigates a method for high accuracy large scale gravity anomaly data reconstruction. Based on the airborne gravimetry technology, a flight test was carried out in China with the strap-down airborne gravimeter (SGA-WZ) developed by the Laboratory of Inertial Technology of the National University of Defense Technology. Taking into account the sparsity of airborne gravimetry by the discrete Fourier transform (DFT), this paper proposes a method for gravity anomaly data reconstruction using the theory of compressed sensing (CS). The gravity anomaly data reconstruction is an ill-posed inverse problem, which can be transformed into a sparse optimization problem. This paper uses the zero-norm as the objective function and presents a greedy algorithm called Orthogonal Matching Pursuit (OMP) to solve the corresponding minimization problem. The test results have revealed that the compressed sampling rate is approximately 14%, the standard deviation of the reconstruction error by OMP is 0.03 mGal and the signal-to-noise ratio (SNR) is 56.48 dB. In contrast, the standard deviation of the reconstruction error by the existing nearest-interpolation method (NIPM) is 0.15 mGal and the SNR is 42.29 dB. These results have shown that the OMP algorithm can reconstruct the gravity anomaly data with higher accuracy and fewer measurements.
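
    A compact sketch of the greedy solver named above. The measurement matrix, sparsity level, and data are illustrative; the paper's DFT sparsity basis and airborne sampling pattern are not reproduced.

    ```python
    # Minimal Orthogonal Matching Pursuit (OMP) sketch for sparse recovery.
    import numpy as np

    def omp(Phi, y, n_nonzero):
        residual, support = y.copy(), []
        x = np.zeros(Phi.shape[1])
        for _ in range(n_nonzero):
            # pick the column most correlated with the current residual
            support.append(int(np.argmax(np.abs(Phi.T @ residual))))
            # least-squares fit on the selected support, then update the residual
            coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            residual = y - Phi[:, support] @ coef
        x[support] = coef
        return x
    ```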

  2. Experimental Investigations on Airborne Gravimetry Based on Compressed Sensing

    PubMed Central

    Yang, Yapeng; Wu, Meiping; Wang, Jinling; Zhang, Kaidong; Cao, Juliang; Cai, Shaokun

    2014-01-01

    Gravity surveys are an important research topic in geophysics and geodynamics. This paper investigates a method for high accuracy large scale gravity anomaly data reconstruction. Based on the airborne gravimetry technology, a flight test was carried out in China with the strap-down airborne gravimeter (SGA-WZ) developed by the Laboratory of Inertial Technology of the National University of Defense Technology. Taking into account the sparsity of airborne gravimetry by the discrete Fourier transform (DFT), this paper proposes a method for gravity anomaly data reconstruction using the theory of compressed sensing (CS). The gravity anomaly data reconstruction is an ill-posed inverse problem, which can be transformed into a sparse optimization problem. This paper uses the zero-norm as the objective function and presents a greedy algorithm called Orthogonal Matching Pursuit (OMP) to solve the corresponding minimization problem. The test results have revealed that the compressed sampling rate is approximately 14%, the standard deviation of the reconstruction error by OMP is 0.03 mGal and the signal-to-noise ratio (SNR) is 56.48 dB. In contrast, the standard deviation of the reconstruction error by the existing nearest-interpolation method (NIPM) is 0.15 mGal and the SNR is 42.29 dB. These results have shown that the OMP algorithm can reconstruct the gravity anomaly data with higher accuracy and fewer measurements. PMID:24647125

  3. The Stokes-Einstein relation at moderate Schmidt number.

    PubMed

    Balboa Usabiaga, Florencio; Xie, Xiaoyi; Delgado-Buscalioni, Rafael; Donev, Aleksandar

    2013-12-07

    The Stokes-Einstein relation for the self-diffusion coefficient of a spherical particle suspended in an incompressible fluid is an asymptotic result in the limit of large Schmidt number, that is, when momentum diffuses much faster than the particle. When the Schmidt number is moderate, which happens in most particle methods for hydrodynamics, deviations from the Stokes-Einstein prediction are expected. We study these corrections computationally using a recently developed minimally resolved method for coupling particles to an incompressible fluctuating fluid in both two and three dimensions. We find that for moderate Schmidt numbers the diffusion coefficient is reduced relative to the Stokes-Einstein prediction by an amount inversely proportional to the Schmidt number in both two and three dimensions. We find, however, that the Einstein formula is obeyed at all Schmidt numbers, consistent with linear response theory. The mismatch arises because thermal fluctuations affect the drag coefficient for a particle due to the nonlinear nature of the fluid-particle coupling. The numerical data are in good agreement with an approximate self-consistent theory, which can be used to estimate finite-Schmidt number corrections in a variety of methods. Our results indicate that the corrections to the Stokes-Einstein formula come primarily from the fact that the particle itself diffuses together with the momentum. Our study separates effects coming from corrections to no-slip hydrodynamics from those of finite separation of time scales, allowing for a better understanding of widely observed deviations from the Stokes-Einstein prediction in particle methods such as molecular dynamics.

  4. Complexation of Cd, Ni, and Zn by DOC in polluted groundwater: A comparison of approaches using resin exchange, aquifer material sorption, and computer speciation models (WHAM and MINTEQA2)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christensen, J.B.; Christensen, T.H.

    1999-11-01

    Complexation of cadmium (Cd), nickel (Ni), and zinc (Zn) by dissolved organic carbon (DOC) in leachate-polluted groundwater was measured using a resin equilibrium method and an aquifer material sorption technique. The first method is commonly used in complexation studies, while the second method better represents aquifer conditions. The two approaches gave similar results. Metal-DOC complexation was measured over a range of DOC concentrations using the resin equilibrium method, and the results were compared to simulations made by two speciation models containing default databases on metal-DOC complexes (WHAM and MINTEQA2). The WHAM model gave reasonable estimates of Cd and Ni complexation by DOC for both leachate-polluted groundwater samples. The estimated effect of complexation differed less than 50% from the experimental values, corresponding to a deviation in the activity of the free metal ion of a factor of 2.5. The effect of DOC complexation for Zn was largely overestimated by the WHAM model, and it was found that using a binding constant of 1.7 instead of the default value of 1.3 would improve the fit between the simulations and experimental data. The MINTEQA2 model gave reasonable predictions of the complexation of Cd and Zn by DOC, whereas deviations in the estimated activity of the free Ni²⁺ ion as compared to experimental results are up to a factor of 5.

  5. SU-F-T-285: Evaluation of a Patient DVH-Based IMRT QA System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhen, H; Redler, G; Chu, J

    2016-06-15

    Purpose: To evaluate the clinical performance of a patient DVH-based QA system for prostate VMAT QA. Methods: Mobius3D (M3D) is QA software with an independent beam model and dose engine. The MobiusFX (MFX) add-on predicts patient dose using treatment machine log files. We commissioned the Mobius beam model in two steps. First, the stock beam model was customized using machine commissioning data, then verified against the TPS with 12 simple phantom plans and 7 clinical 3D plans. Second, the dosimetric leaf gap (DLG) in the Mobius model was fine-tuned for VMAT treatment based on ion chamber measurements for 6 clinical VMAT plans. Upon successful commissioning, we retrospectively performed IMRT QA for 12 VMAT plans with the Mobius system as well as the ArcCHECK-3DVH system. Selected patient DVH values (PTV D95, D50; bladder D2cc, Dmean; rectum D2cc) were compared between TPS, M3D, MFX, and 3DVH. Results: During the first commissioning step, TPS and M3D calculated target Dmean for 3D plans agreed within 0.7%±0.7%, with 3D gamma passing rates of 98%±2%. In the second commissioning step, the Mobius DLG was adjusted by 1.2 mm from the stock value, reducing the average difference between the MFX calculation and the ion chamber measurement from 3.2% to 0.1%. In retrospective prostate VMAT QA, 5 of 60 MFX-calculated DVH values deviated by more than 5% from the TPS. One large deviation at a high dose level was identified as a potential QA failure. This echoes the 3DVH QA result, which identified 2 instances of large DVH deviation for the same structure. For all DVHs evaluated, M3D and MFX show a high level of agreement (0.1%±0.2%), indicating that the observed deviation likely stems from beam modelling differences rather than delivery errors. Conclusion: The Mobius system provides a viable solution for DVH-based VMAT QA, with the capability of separating TPS and delivery errors.

  6. TRASYS form factor matrix normalization

    NASA Technical Reports Server (NTRS)

    Tsuyuki, Glenn T.

    1992-01-01

    A method has been developed for adjusting a TRASYS enclosure form factor matrix to unity. This approach is not limited to closed geometries and, in fact, is primarily intended for use with open geometries. The purpose of the approach is to prevent overly optimistic form factors to space. In this method, nodal form factor sums are calculated to within 0.05 of unity using TRASYS (although deviations as large as 0.10 may be acceptable), and a process is then employed to distribute the difference amongst the nodes. A specific example has been analyzed with this method, and a comparison was performed with a standard approach for calculating radiation conductors. In this comparison, hot and cold case temperatures were determined. Exterior nodes exhibited temperature differences as large as 7 C and 3 C for the hot and cold cases, respectively, when compared with the standard approach, while interior nodes demonstrated temperature differences from 0 C to 5 C. These results indicate that temperature predictions can be artificially biased if the form factor computation error is lumped into the individual form factors to space.
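
    One plausible reading of the adjustment is a row-wise redistribution of the residual in proportion to the existing entries, with a guard at the 0.10 acceptability limit. This is a hedged sketch, not the TRASYS procedure itself.

    ```python
    # Illustrative row-sum normalization of a form factor matrix.
    import numpy as np

    def normalize_form_factors(F, tol=0.10):
        F = np.asarray(F, dtype=float).copy()
        sums = F.sum(axis=1)                     # nodal form factor sums
        if np.any(np.abs(sums - 1.0) > tol):
            raise ValueError("row-sum deviation exceeds the acceptable limit")
        # distribute each row's difference over the row in proportion to entries
        return F / sums[:, None]
    ```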

  7. Galaxy And Mass Assembly (GAMA): AUTOZ spectral redshift measurements, confidence and errors

    NASA Astrophysics Data System (ADS)

    Baldry, I. K.; Alpaslan, M.; Bauer, A. E.; Bland-Hawthorn, J.; Brough, S.; Cluver, M. E.; Croom, S. M.; Davies, L. J. M.; Driver, S. P.; Gunawardhana, M. L. P.; Holwerda, B. W.; Hopkins, A. M.; Kelvin, L. S.; Liske, J.; López-Sánchez, Á. R.; Loveday, J.; Norberg, P.; Peacock, J.; Robotham, A. S. G.; Taylor, E. N.

    2014-07-01

    The Galaxy And Mass Assembly (GAMA) survey has obtained spectra of over 230 000 targets using the Anglo-Australian Telescope. To homogenize the redshift measurements and improve the reliability, a fully automatic redshift code was developed (AUTOZ). The measurements were made using a cross-correlation method for both the absorption- and the emission-line spectra. Large deviations in the high-pass-filtered spectra are partially clipped in order to be robust against uncorrected artefacts and to reduce the weight given to single-line matches. A single figure of merit (FOM) was developed that puts all template matches on to a similar confidence scale. The redshift confidence as a function of the FOM was fitted with a tanh function using a maximum likelihood method applied to repeat observations of targets. The method could be adapted to provide robust automatic redshifts for other large galaxy redshift surveys. For the GAMA survey, there was a substantial improvement in the reliability of assigned redshifts and in the lowering of redshift uncertainties with a median velocity uncertainty of 33 km s-1.
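
    The calibration step can be sketched as a Bernoulli maximum-likelihood fit of a tanh curve to repeat-observation agreement outcomes. The parameterization and starting values below are assumptions, not the AUTOZ code.

    ```python
    # Hedged sketch: fit p(FOM) = 0.5*(1 + tanh((FOM - a)/b)) by maximum likelihood.
    import numpy as np
    from scipy.optimize import minimize

    def fit_tanh_confidence(fom, agree):
        """fom: (N,) figures of merit; agree: (N,) 0/1 repeat-agreement flags."""
        def nll(params):
            a, b = params
            p = np.clip(0.5 * (1 + np.tanh((fom - a) / b)), 1e-9, 1 - 1e-9)
            return -np.sum(agree * np.log(p) + (1 - agree) * np.log(1 - p))
        res = minimize(nll, x0=[5.0, 1.0], method="Nelder-Mead")
        return res.x   # fitted (a, b)
    ```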

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eken, Tuna; Mayeda, Kevin; Hofstetter, Abraham; Gok, Rengin; Orgulu, Gonca; Turkelli, Niyazi

    A recently developed coda magnitude methodology was applied to selected broadband stations in Turkey for the purpose of testing the coda method in a large, laterally complex region. As found in other, albeit smaller, regions, coda envelope amplitude measurements are significantly less variable than distance-corrected direct wave measurements (i.e., Lg and surface waves), by roughly a factor of 3 to 4. Despite strong lateral crustal heterogeneity in Turkey, it was found that the region could be adequately modeled assuming a simple 1-D, radially symmetric path correction. After calibrating the stations ISP, ISKB and MALT for local and regional distances, single-station moment-magnitude estimates (M_W) derived from the coda spectra were in excellent agreement with those determined from multistation waveform modeling inversions, exhibiting a data standard deviation of 0.17. Though the calibration was validated using large events, the results of the calibration will extend M_W estimates to significantly smaller events that could not otherwise be waveform modeled. The successful application of the method is remarkable considering the significant lateral complexity of Turkey and the simple assumptions used in the coda method.

  9. On the Geometry of Chemical Reaction Networks: Lyapunov Function and Large Deviations

    NASA Astrophysics Data System (ADS)

    Agazzi, A.; Dembo, A.; Eckmann, J.-P.

    2018-04-01

    In an earlier paper, we proved the validity of large deviations theory for the particle approximation of quite general chemical reaction networks. In this paper, we extend its scope and present a more geometric insight into the mechanism of that proof, exploiting the notion of the spherical image of the reaction polytope. This allows us to view the asymptotic behavior of the vector field describing the mass-action dynamics of chemical reactions as the result of an interaction between the faces of this polytope in different dimensions. We also illustrate some local aspects of the problem in a discussion of Wentzell-Freidlin theory, together with some examples.

  10. Accuracy of computer-aided design models of the jaws produced using ultra-low MDCT doses and ASIR and MBIR.

    PubMed

    Al-Ekrish, Asma'a A; Alfadda, Sara A; Ameen, Wadea; Hörmann, Romed; Puelacher, Wolfgang; Widmann, Gerlig

    2018-06-16

    To compare the surface of computer-aided design (CAD) models of the maxilla produced using ultra-low MDCT doses combined with filtered backprojection (FBP), adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR) reconstruction techniques with that produced from a standard dose/FBP protocol. A cadaveric completely edentulous maxilla was imaged using a standard dose protocol (CTDIvol: 29.4 mGy) and FBP, in addition to 5 low dose test protocols (LD1-5) (CTDIvol: 4.19, 2.64, 0.99, 0.53, and 0.29 mGy) reconstructed with FBP, ASIR 50, ASIR 100, and MBIR. A CAD model from each test protocol was superimposed onto the reference model using the 'Best Fit Alignment' function. Differences between the test and reference models were analyzed as maximum and mean deviations, and root-mean-square of the deviations, and color-coded models were obtained which demonstrated the location, magnitude and direction of the deviations. Based upon the magnitude, size, and distribution of areas of deviations, CAD models from the following protocols were comparable to the reference model: FBP/LD1; ASIR 50/LD1 and LD2; ASIR 100/LD1, LD2, and LD3; MBIR/LD1. The following protocols demonstrated deviations mostly between 1-2 mm or under 1 mm but over large areas, and so their effect on surgical guide accuracy is questionable: FBP/LD2; MBIR/LD2, LD3, LD4, and LD5. The following protocols demonstrated large deviations over large areas and therefore were not comparable to the reference model: FBP/LD3, LD4, and LD5; ASIR 50/LD3, LD4, and LD5; ASIR 100/LD4, and LD5. When MDCT is used for CAD models of the jaws, dose reductions of 86% may be possible with FBP, 91% with ASIR 50, and 97% with ASIR 100. Analysis of the stability and accuracy of CAD/CAM surgical guides as directly related to the jaws is needed to confirm the results.

  11. Relationships between sudden weather changes in summer and mortality in the Czech Republic, 1986-2005

    NASA Astrophysics Data System (ADS)

    Plavcová, Eva; Kyselý, Jan

    2010-09-01

    The study examines the relationship between sudden changes in weather conditions in summer, represented by (1) sudden air temperature changes, (2) sudden atmospheric pressure changes, and (3) passages of strong atmospheric fronts; and variations in daily mortality in the population of the Czech Republic. The events are selected from data covering 1986-2005 and compared with the database of daily excess all-cause mortality for the whole population and persons aged 70 years and above. Relative deviations of mortality, i.e., ratios of the excess mortality to the expected number of deaths, were averaged over the selected events for days D-2 (2 days before a change) up to D+7 (7 days after), and their statistical significance was tested by means of the Monte Carlo method. We find that the periods around weather changes are associated with pronounced patterns in mortality: a significant increase in mortality is found after large temperature increases and on days of large pressure drops; a decrease in mortality (partly due to a harvesting effect) occurs after large temperature drops, pressure increases, and passages of strong cold fronts. The relationship to variations in excess mortality is better expressed for sudden air temperature/pressure changes than for passages of atmospheric fronts. The mortality effects are usually more pronounced in the age group 70 years and above. The impacts associated with large negative changes of pressure are statistically independent of the effects of temperature; the corresponding dummy variable is found to be a significant predictor in the ARIMA model for relative deviations of mortality. This suggests that sudden weather changes should be tested also in time series models for predicting excess mortality as they may enhance their performance.

  12. EXACTRAC x-ray and beam isocenters-what's the difference?

    PubMed

    Tideman Arp, Dennis; Carl, Jesper

    2012-03-01

    To evaluate the geometric accuracy of the isocenter of an image-guidance system, as implemented in the exactrac system from brainlab, relative to the linear accelerator radiation isocenter. Subsequently to correct the x-ray isocenter of the exactrac system for any geometric discrepancies between the two isocenters. Five Varian linear accelerators all equipped with electronic imaging devices and exactrac with robotics from brainlab were evaluated. A commercially available Winston-Lutz phantom and an in-house made adjustable base were used in the setup. The electronic portal imaging device of the linear accelerators was used to acquire MV-images at various gantry angles. Stereoscopic pairs of x-ray images were acquired using the exactrac system. The deviation between the position of the external laser isocenter and the exactrac isocenter was evaluated using the commercial software of the exactrac system. In-house produced software was used to analyze the MV-images and evaluate the deviation between the external laser isocenter and the radiation isocenter of the linear accelerator. Subsequently, the deviation between the radiation isocenter and the isocenter of the exactrac system was calculated. A new method of calibrating the isocenter of the exactrac system was applied to reduce the deviations between the radiation isocenter and the exactrac isocenter. To evaluate the geometric accuracy a 3D deviation vector was calculated for each relative isocenter position. The 3D deviation between the external laser isocenter and the isocenter of the exactrac system varied from 0.21 to 0.42 mm. The 3D deviation between the external laser isocenter and the linac radiation isocenter ranged from 0.37 to 0.83 mm. The 3D deviation between the radiation isocenter and the isocenter of the exactrac system ranged from 0.31 to 1.07 mm. Using the new method of calibrating the exactrac isocenter the 3D deviation of one linac was reduced from 0.90 to 0.23 mm. The results were complicated due to routine maintenance of the linac, including laser calibration. It was necessary to repeat the measurements in order to perform the calibration of the exactrac isocenter. The deviations between the linac radiation isocenter and the exactrac isocenter were of an order that may have clinical relevance. An alternative method of calibrating the isocenter of the exactrac system was applied and reduced the deviations between the two isocenters.

  13. Large deviations and portfolio optimization

    NASA Astrophysics Data System (ADS)

    Sornette, Didier

    Risk control and optimal diversification constitute a major focus in the finance and insurance industries as well as, more or less consciously, in our everyday life. We present a discussion of the characterization of risks and of the optimization of portfolios that starts from a simple illustrative model and ends by a general functional integral formulation. A major item is that risk, usually thought of as one-dimensional in the conventional mean-variance approach, has to be addressed by the full distribution of losses. Furthermore, the time-horizon of the investment is shown to play a major role. We show the importance of accounting for large fluctuations and use the theory of Cramér for large deviations in this context. We first treat a simple model with a single risky asset that exemplifies the distinction between the average return and the typical return and the role of large deviations in multiplicative processes, and the different optimal strategies for the investors depending on their size. We then analyze the case of assets whose price variations are distributed according to exponential laws, a situation that is found to describe daily price variations reasonably well. Several portfolio optimization strategies are presented that aim at controlling large risks. We end by extending the standard mean-variance portfolio optimization theory, first within the quasi-Gaussian approximation and then using a general formulation for non-Gaussian correlated assets in terms of the formalism of functional integrals developed in the field theory of critical phenomena.

  14. Influence of particle size distribution on nanopowder cold compaction processes

    NASA Astrophysics Data System (ADS)

    Boltachev, G.; Volkov, N.; Lukyashin, K.; Markov, V.; Chingina, E.

    2017-06-01

    Nanopowder uniform and uniaxial cold compaction processes are simulated by a 2D granular dynamics method. The particle interactions include, in addition to well-known contact laws, dispersive attraction forces and the possibility of interparticle solid-bridge formation, both of which are of large importance for nanopowders. Different model systems are investigated: monosized systems with particle diameters of 10, 20 and 30 nm; bidisperse systems with different contents of small (diameter 10 nm) and large (30 nm) particles; and polydisperse systems corresponding to a log-normal size distribution law of varying width. A non-monotone dependence of compact density on powder content is revealed in the bidisperse systems. The deviations of compact density in polydisperse systems from the density of the corresponding monosized system are found to be minor, less than 1 per cent.

  15. Spatial organization of chromatin domains and compartments in single chromosomes

    NASA Astrophysics Data System (ADS)

    Wang, Siyuan; Su, Jun-Han; Beliveau, Brian; Bintu, Bogdan; Moffitt, Jeffrey; Wu, Chao-Ting; Zhuang, Xiaowei

    The spatial organization of chromatin critically affects genome function. Recent chromosome-conformation-capture studies have revealed topologically associating domains (TADs) as a conserved feature of chromatin organization, but how TADs are spatially organized in individual chromosomes remains unknown. Here, we developed an imaging method for mapping the spatial positions of numerous genomic regions along individual chromosomes and traced the positions of TADs in human interphase autosomes and X chromosomes. We observed that chromosome folding deviates from the ideal fractal-globule model at large length scales and that TADs are largely organized into two compartments spatially arranged in a polarized manner in individual chromosomes. Active and inactive X chromosomes adopt different folding and compartmentalization configurations. These results suggest that the spatial organization of chromatin domains can change in response to regulation.

  16. Large scale structure formation of the normal branch in the DGP brane world model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Yong-Seon

    2008-06-15

    In this paper, we study the large scale structure formation of the normal branch in the DGP model (Dvali, Gabadadze, and Porrati brane world model) by applying the scaling method developed by Sawicki, Song, and Hu for solving the coupled perturbed equations of motion on-brane and off-brane. There is a detectable departure of the perturbed gravitational potential from the cold dark matter model with vacuum energy even at the minimal deviation of the effective equation of state w_eff below -1. The modified perturbed gravitational potential weakens the integrated Sachs-Wolfe effect, which is strengthened in the self-accelerating branch of the DGP model. Additionally, we discuss the validity of the scaling solution in the de Sitter limit at late times.

  17. Offshore fatigue design turbulence

    NASA Astrophysics Data System (ADS)

    Larsen, Gunner C.

    2001-07-01

    Fatigue damage on wind turbines is mainly caused by stochastic loading originating from turbulence. While onshore sites display large differences in terrain topology, and thereby also in turbulence conditions, offshore sites are far more homogeneous, as the majority of them are likely to be associated with shallow water areas. However, despite this fact, specific recommendations on offshore turbulence intensities, applicable for fatigue design purposes, are lacking in the present IEC code. This article presents specific guidelines for such loading. These guidelines are based on the statistical analysis of a large number of wind data originating from two Danish shallow water offshore sites. The turbulence standard deviation depends on the mean wind speed, upstream conditions, measuring height and thermal convection. Defining a population of turbulence standard deviations, at a given measuring position, uniquely by the mean wind speed, variations in upstream conditions and atmospheric stability will appear as variability of the turbulence standard deviation. Distributions of such turbulence standard deviations, conditioned on the mean wind speed, are quantified by fitting the measured data to logarithmic Gaussian distributions. By combining a simple heuristic load model with the parametrized conditional probability density functions of the turbulence standard deviations, an empirical offshore design turbulence intensity is determined. For pure stochastic loading (as associated with standstill situations), the design turbulence intensity yields a fatigue damage equal to the average fatigue damage caused by the distributed turbulence intensity. If the stochastic loading is combined with a periodic deterministic loading (as in the normal operating situation), the proposed design turbulence intensity is shown to be conservative.
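
    The statistical core, fitting a logarithmic Gaussian to turbulence standard deviations within a mean-wind-speed bin and reading off a high quantile as an effective design value, can be sketched as follows. The 90th-percentile choice is an illustrative assumption, not the article's calibrated recipe.

    ```python
    # Hedged sketch: lognormal fit of sigma_u within one wind-speed bin.
    import numpy as np
    from scipy.stats import norm

    def design_sigma_u(sigma_u_samples, quantile=0.90):
        log_s = np.log(sigma_u_samples)
        mu, sd = log_s.mean(), log_s.std(ddof=1)            # logarithmic Gaussian fit
        return float(np.exp(mu + norm.ppf(quantile) * sd))  # design value for the bin

    # design turbulence intensity for the bin: design_sigma_u(samples) / U_mean
    ```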

  18. Ranking and validation of spallation models for isotopic production cross sections of heavy residua

    NASA Astrophysics Data System (ADS)

    Sharma, Sushil K.; Kamys, Bogusław; Goldenbaum, Frank; Filges, Detlef

    2017-07-01

    The production cross sections of isotopically identified residual nuclei from spallation reactions induced by 136Xe projectiles at 500A MeV on a hydrogen target were analyzed in a two-step model. The first stage of the reaction was described by the INCL4.6 model of an intranuclear cascade of nucleon-nucleon and pion-nucleon collisions, whereas the second stage was analyzed by means of four different models: ABLA07, GEM2, GEMINI++ and SMM. The quality of the data description was judged quantitatively using two statistical deviation factors, the H-factor and the M-factor. It was found that the present analysis leads to a different ranking of models as compared to that obtained from a qualitative inspection of the data reproduction. The disagreement was caused by the sensitivity of the deviation factors to large statistical errors present in some of the data. A new deviation factor, the A-factor, was proposed that is not sensitive to the statistical errors of the cross sections. The quantitative ranking of models performed using the A-factor agreed well with the qualitative analysis of the data. It was concluded that using deviation factors weighted by statistical errors may lead to erroneous conclusions when the data cover a large range of values. The quality of data reproduction by the theoretical models is discussed. Some systematic deviations of the theoretical predictions from the experimental results are observed.
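
    For concreteness, the sketch below shows one common error-weighted deviation factor and an error-insensitive ratio-based alternative. The paper's exact A-factor definition is not reproduced here; the second function is only a stand-in with the stated property.

    ```python
    # Illustrative deviation factors; neither is claimed to be the paper's exact form.
    import numpy as np

    def h_factor(calc, exp, exp_err):
        # weighted by statistical errors: points with large quoted errors are
        # down-weighted, which can distort rankings when errors vary strongly
        return np.sqrt(np.mean(((calc - exp) / exp_err) ** 2))

    def a_factor_like(calc, exp):
        # insensitive to quoted statistical errors by construction
        return np.mean(np.abs(calc - exp) / (calc + exp))
    ```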

  19. Observations of Precipitation Size and Fall Speed Characteristics within Coexisting Rain and Wet Snow

    NASA Technical Reports Server (NTRS)

    Yuter, Sandra E.; Kingsmill, David E.; Nance, Louisa B.; Loeffler-Mang, Martin

    2006-01-01

    Ground-based measurements of particle size and fall speed distributions using a Particle Size and Velocity (PARSIVEL) disdrometer are compared among samples obtained in mixed precipitation (rain and wet snow) and rain in the Oregon Cascade Mountains and in dry snow in the Rocky Mountains of Colorado. Coexisting rain and snow particles are distinguished using a classification method based on their size and fall speed properties. The bimodal distribution of the particles' joint fall speed-size characteristics at air temperatures from 0.5 to 0 C suggests that wet-snow particles quickly make a transition to rain once melting has progressed sufficiently. As air temperatures increase to 1.5 C, the reduction in the number of very large aggregates with a diameter > 10 mm coincides with the appearance of rain particles larger than 6 mm. In this setting, very large raindrops appear to be the result of aggregates melting with minimal breakup rather than formation by coalescence. In contrast to dry snow and rain, the fall speed for wet snow has a much weaker correlation between increasing size and increasing fall speed. Wet snow has a larger standard deviation of fall speed (120%-230% relative to dry snow) for a given particle size. The average fall speed for observed wet-snow particles with a diameter greater than or equal to 2.4 mm is 2 m/s with a standard deviation of 0.8 m/s. The large standard deviation is likely related to the coexistence of particles of similar physical size with different percentages of melting. These results suggest that different particle sizes are not required for aggregation, since wet-snow particles of the same size can have different fall speeds. Given the large standard deviation of fall speeds in wet snow, the collision efficiency for wet snow is likely larger than that of dry snow. For particle sizes between 1 and 10 mm in diameter within mixed precipitation, rain constituted 1% of the particles by volume within the isothermal layer at 0 C and 4% of the particles by volume for the region just below the isothermal layer where air temperatures rise from 0 to 0.5 C. As air temperatures increased above 0.5 C, the relative proportions of rain versus snow particles shift dramatically and raindrops become dominant. The value of 0.5 C for the sharp transition in volume fraction from snow to rain is slightly lower than the range from 1.1 to 1.7 C often used in hydrological models.

  20. Analysis of variances of quasirapidities in collisions of gold nuclei with track-emulsion nuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gulamov, K. G.; Zhokhova, S. I.; Lugovoi, V. V., E-mail: lugovoi@uzsci.net

    2012-08-15

    A new method of an analysis of variances was developed for studying n-particle correlations of quasirapidities in nucleus-nucleus collisions for a large constant number n of particles. Formulas that generalize the results of the respective analysis to various values of n were derived. Calculations on the basis of simple models indicate that the method is applicable, at least for n >= 100. Quasirapidity correlations statistically significant at a level of 36 standard deviations were discovered in collisions between gold nuclei and track-emulsion nuclei at an energy of 10.6 GeV per nucleon. The experimental data obtained in our present study are contrasted against the theory of nucleus-nucleus collisions.

  1. Comparison of MRI segmentation techniques for measuring liver cyst volumes in autosomal dominant polycystic kidney disease.

    PubMed

    Farooq, Zerwa; Behzadi, Ashkan Heshmatzadeh; Blumenfeld, Jon D; Zhao, Yize; Prince, Martin R

    To compare MRI segmentation methods for measuring liver cyst volumes in autosomal dominant polycystic kidney disease (ADPKD). Liver cyst volumes in 42 ADPKD patients were measured using region growing, thresholding and cyst diameter techniques. Manual segmentation was the reference standard. Root mean square deviation was 113, 155, and 500 for cyst diameter, thresholding and region growing, respectively. Thresholding error for cyst volumes below 500 ml was 550% vs 17% for cyst volumes above 500 ml (p<0.001). For measuring the volume of a small number of cysts, the cyst diameter and manual segmentation methods are recommended. For severe disease with numerous, large hepatic cysts, thresholding is an acceptable alternative. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Can we obtain the coefficient of restitution from the sound of a bouncing ball?

    NASA Astrophysics Data System (ADS)

    Heckel, Michael; Glielmo, Aldo; Gunkelmann, Nina; Pöschel, Thorsten

    2016-03-01

    The coefficient of restitution may be determined from the sound signal emitted by a sphere bouncing repeatedly off the ground. Although there is a large number of publications exploiting this method, so far, there is no quantitative discussion of the error related to this type of measurement. Analyzing the main error sources, we find that even tiny deviations of the shape from the perfect sphere may lead to substantial errors that dominate the overall error of the measurement. Therefore, we come to the conclusion that the well-established method to measure the coefficient of restitution through the emitted sound is applicable only for the case of nearly perfect spheres. For larger falling height, air drag may lead to considerable error, too.
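
    The relation behind the acoustic method: under free flight the interval between successive impacts is proportional to the rebound speed, so the coefficient of restitution is the ratio of successive inter-impact intervals extracted from the recording. The impact times below are illustrative.

    ```python
    # e_n = t_{n+1} / t_n, with t_n the n-th inter-impact interval.
    import numpy as np

    impact_times = np.array([0.000, 0.452, 0.814, 1.104, 1.336])  # s, from the audio
    intervals = np.diff(impact_times)
    e_estimates = intervals[1:] / intervals[:-1]   # per-bounce restitution estimates
    print(e_estimates, e_estimates.mean())         # ~0.80 for these numbers
    ```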

  3. Grain-Boundary Resistance in Copper Interconnects: From an Atomistic Model to a Neural Network

    NASA Astrophysics Data System (ADS)

    Valencia, Daniel; Wilson, Evan; Jiang, Zhengping; Valencia-Zapata, Gustavo A.; Wang, Kuang-Chung; Klimeck, Gerhard; Povolotskyi, Michael

    2018-04-01

    Orientation effects on the specific resistance of copper grain boundaries are studied systematically with two different atomistic tight-binding methods. A methodology is developed to model the specific resistance of grain boundaries in the ballistic limit using the embedded atom model, tight- binding methods, and nonequilibrium Green's functions. The methodology is validated against first-principles calculations for thin films with a single coincident grain boundary, with 6.4% deviation in the specific resistance. A statistical ensemble of 600 large, random structures with grains is studied. For structures with three grains, it is found that the distribution of specific resistances is close to normal. Finally, a compact model for grain-boundary-specific resistance is constructed based on a neural network.

  4. Forty-five degree cutting septoplasty.

    PubMed

    Hsiao, Yen-Chang; Chang, Chun-Shin; Chuang, Shiow-Shuh; Kolios, Georgios; Abdelrahman, Mohamed

    2016-01-01

    The crooked nose represents a challenge for rhinoplasty surgeons, and many methods have been proposed for its management; however, there is no ideal method of treatment. Accordingly, a 45° cutting septoplasty technique is proposed, which involves a 45° cut at the junction of the L-shaped strut and repositioning of the strut to achieve a straight septum. From October 2010 to September 2014, 43 patients underwent the 45° cutting septoplasty technique. There were 28 men and 15 women, with ages ranging from 20 to 58 years (mean, 33 years). Standardized photographs were obtained at every visit, and established photogrammetric parameters were used to describe the degree of correction: correction rate = (preoperative total deviation - postoperative residual deviation)/preoperative total deviation × 100%. The mean follow-up period for all patients was 12.3 months. The mean preoperative deviation was 64.3° and the mean postoperative deviation was 2.7°; the overall correction rate was 95.8%. One patient experienced composite implant deviation two weeks postoperatively and underwent revision rhinoplasty. There were no infections, hematomas or postoperative bleeding. Based on the clinical observation of all patients during the follow-up period, the 45° cutting septoplasty technique was shown to be effective for the treatment of the crooked nose.

  5. Fluctuating hydrodynamics, current fluctuations, and hyperuniformity in boundary-driven open quantum chains

    NASA Astrophysics Data System (ADS)

    Carollo, Federico; Garrahan, Juan P.; Lesanovsky, Igor; Pérez-Espigares, Carlos

    2017-11-01

    We consider a class of either fermionic or bosonic noninteracting open quantum chains driven by dissipative interactions at the boundaries and study the interplay of coherent transport and dissipative processes, such as bulk dephasing and diffusion. Starting from the microscopic formulation, we show that the dynamics on large scales can be described in terms of fluctuating hydrodynamics. This is an important simplification as it allows us to apply the methods of macroscopic fluctuation theory to compute the large deviation (LD) statistics of time-integrated currents. In particular, this permits us to show that fermionic open chains display a third-order dynamical phase transition in LD functions. We show that this transition is manifested in a singular change in the structure of trajectories: while typical trajectories are diffusive, rare trajectories associated with atypical currents are ballistic and hyperuniform in their spatial structure. We confirm these results by numerically simulating ensembles of rare trajectories via the cloning method, and by exact numerical diagonalization of the microscopic quantum generator.
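
    The cloning method mentioned above can be illustrated on a much simpler system, a biased random walker, for which the scaled cumulant generating function (SCGF) of the time-integrated current is known exactly. This is a pedagogical sketch, not the paper's quantum trajectory code.

    ```python
    # Population-dynamics (cloning) estimate of the SCGF theta(s) of a current.
    import numpy as np

    rng = np.random.default_rng(2)

    def scgf_cloning(s, p_right=0.6, n_clones=2000, n_steps=2000):
        pos = np.zeros(n_clones)                     # clone states (positions)
        log_growth = 0.0
        for _ in range(n_steps):
            jumps = np.where(rng.random(n_clones) < p_right, 1, -1)
            pos += jumps
            weights = np.exp(s * jumps)              # bias on the current increment
            log_growth += np.log(weights.mean())
            # clone/prune trajectories in proportion to their weights
            idx = rng.choice(n_clones, n_clones, p=weights / weights.sum())
            pos = pos[idx]
        return log_growth / n_steps                  # SCGF estimate theta(s)

    # exact check for this walker: theta(s) = log(p*e^s + (1-p)*e^(-s))
    ```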

  6. Fluctuating hydrodynamics, current fluctuations, and hyperuniformity in boundary-driven open quantum chains.

    PubMed

    Carollo, Federico; Garrahan, Juan P; Lesanovsky, Igor; Pérez-Espigares, Carlos

    2017-11-01

    We consider a class of either fermionic or bosonic noninteracting open quantum chains driven by dissipative interactions at the boundaries and study the interplay of coherent transport and dissipative processes, such as bulk dephasing and diffusion. Starting from the microscopic formulation, we show that the dynamics on large scales can be described in terms of fluctuating hydrodynamics. This is an important simplification as it allows us to apply the methods of macroscopic fluctuation theory to compute the large deviation (LD) statistics of time-integrated currents. In particular, this permits us to show that fermionic open chains display a third-order dynamical phase transition in LD functions. We show that this transition is manifested in a singular change in the structure of trajectories: while typical trajectories are diffusive, rare trajectories associated with atypical currents are ballistic and hyperuniform in their spatial structure. We confirm these results by numerically simulating ensembles of rare trajectories via the cloning method, and by exact numerical diagonalization of the microscopic quantum generator.

  7. The extraction of spot signal in Shack-Hartmann wavefront sensor based on sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Yanyan; Xu, Wentao; Chen, Suting; Ge, Junxiang; Wan, Fayu

    2016-07-01

    Several techniques have been used with Shack-Hartmann wavefront sensors to determine the local wavefront gradient across each lenslet. However, the centroid error of a Shack-Hartmann wavefront sensor is relatively large because of the skylight background and detector noise. In this paper, we introduce a new method based on sparse representation to extract the target signal from the background and the noise. First, an overcomplete dictionary of the spot signal is constructed based on a two-dimensional Gaussian model. Then the Shack-Hartmann image is divided into sub-blocks, and the coefficients of each block are computed in the overcomplete dictionary. Since the coefficients of the noise and the target differ greatly, the target is extracted by setting a threshold on the coefficients. Experimental results show that the target is well extracted and that the deviation, RMS and PV of the centroid are all smaller than those obtained with the threshold-subtraction method.
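
    A hedged sketch of the pipeline: an overcomplete dictionary of shifted 2-D Gaussian atoms is built for one sub-block, sparse coefficients are computed (scikit-learn's OMP is used here as a stand-in solver), and small coefficients are thresholded away. Sizes and the threshold are assumptions.

    ```python
    # Illustrative Gaussian-dictionary spot extraction for one sub-aperture block.
    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    def gaussian_atom(n, cx, cy, sigma=1.5):
        y, x = np.mgrid[0:n, 0:n]
        g = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
        return (g / np.linalg.norm(g)).ravel()

    def extract_spot(block, n_atoms_per_axis=16):
        n = block.shape[0]
        centers = np.linspace(0, n - 1, n_atoms_per_axis)
        D = np.stack([gaussian_atom(n, cx, cy)
                      for cx in centers for cy in centers], axis=1)   # dictionary
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3).fit(D, block.ravel())
        coef = omp.coef_
        coef[np.abs(coef) < 0.5 * np.abs(coef).max()] = 0.0   # drop small terms
        return (D @ coef).reshape(n, n)   # reconstructed spot, background suppressed
    ```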

  8. Effect of photogrammetric reading error on slope-frequency distributions. [obtained from Apollo 17 mission

    NASA Technical Reports Server (NTRS)

    Moore, H. J.; Wu, S. C.

    1973-01-01

    The effect of reading error on two hypothetical slope-frequency distributions and two slope-frequency distributions from actual lunar data is examined in order to ensure that these errors do not cause excessive overestimates of algebraic standard deviations for the slope-frequency distributions. The errors introduced are insignificant when the reading error is small and the slope length is large. A method for correcting the errors in slope-frequency distributions is presented and applied to 11 distributions obtained from Apollo 15, 16, and 17 panoramic camera photographs and Apollo 16 metric camera photographs.

  9. Numerical simulation of flow and mass transfer for large KDP crystal growth via solution-jet method

    NASA Astrophysics Data System (ADS)

    Yin, Huawei; Li, Mingwei; Hu, Zhitao; Zhou, Chuan; Li, Zhiwei

    2018-06-01

    A novel technique of growing large crystals of potassium dihydrogen phosphate (KDP) named solution-jet method is proposed. The aim is to increase supersaturation on the pyramidal face, especially for crystal surface regions close to the rotation axis. The fluid flow and surface supersaturation distribution of crystals grown under different conditions were computed using the finite-volume method. Results indicate that the time-averaged supersaturation of the pyramidal face in the proposed method significantly increases and the supersaturation difference from the crystal center to edge clearly decreases compared with the rotating-crystal method. With increased jet velocity, supersaturation on the pyramidal face steadily increases. Rotation rate considerably affects the magnitude and distribution of the prismatic surface supersaturation. With increased crystal size, the mean value of surface supersaturation averaged over the pyramid gradually decreases; conversely, standard deviation increases, which is detrimental to crystal growth. Moreover, the significant roles played by natural and forced convection in the process of mass transport are discussed. Results show that further increased jet velocity to 0.6 m/s renders negligible the effects of natural convection around the pyramid. The simulation for step propagation indicates that solution-jet method can promote a steady step migration and enhance surface morphology stability, which can improve the crystal quality.

  10. Thin Disk Accretion in the Magnetically-Arrested State

    NASA Astrophysics Data System (ADS)

    Avara, Mark J.; McKinney, Jonathan; Reynolds, Christopher S.

    2016-01-01

    Shakura-Sunyaev thin disk theory is fundamental to black hole astrophysics. Though applications of the theory are widespread and provide powerful tools for explaining observations, such as Soltan's argument using quasar power, broadened iron line measurements, continuum fitting, and, recently, reverberation mapping, a significant large-scale magnetic field causes substantial deviations from standard thin disk behavior. We have used fully 3D general relativistic MHD simulations with cooling to explore the thin (H/R ~ 0.1) magnetically arrested disk (MAD) state and quantify these deviations. This work demonstrates that accumulation of large-scale magnetic flux into the MAD state is possible, and it extends prior numerical studies of thicker disks, allowing us to measure how jet power scales with the disk state and providing a natural explanation of phenomena like jet quenching in the high-soft state of X-ray binaries. We have also simulated thin MAD disks with a misaligned black hole spin axis in order to understand further deviations from thin disk theory that may significantly affect observations.

  11. Self-optimizing Pitch Control for Large Scale Wind Turbine Based on ADRC

    NASA Astrophysics Data System (ADS)

    Xia, Anjun; Hu, Guoqing; Li, Zheng; Huang, Dongxiao; Wang, Fengxiang

    2018-01-01

    Since a wind turbine is a complex, nonlinear, strongly coupled system, traditional PI control can hardly achieve good control performance. A self-optimizing pitch control method based on active-disturbance-rejection control (ADRC) theory is proposed in this paper. A linear model of the wind turbine is derived by linearizing the aerodynamic torque equation, and the dynamic response of the wind turbine is transformed into a first-order linear system. An expert system is designed to optimize the amplification coefficient according to the pitch rate and the speed deviation. The purpose of the proposed control method is to regulate the amplification coefficient automatically and keep the variations of pitch rate and rotor speed within proper ranges. Simulation results show that the proposed pitch control method effectively modifies the amplification coefficient when it is unsuitable and keeps the variations of pitch rate and rotor speed within proper ranges.

  12. Method development for the analysis of N-nitrosodimethylamine and other N-nitrosamines in drinking water at low nanogram/liter concentrations using solid-phase extraction and gas chromatography with chemical ionization tandem mass spectrometry.

    PubMed

    Munch, Jean W; Bassett, Margarita V

    2006-01-01

    N-nitrosodimethylamine (NDMA) is a probable human carcinogen of concern that has been identified as a drinking water contaminant. U.S. Environmental Protection Agency Method 521 has been developed for the analysis of NDMA and 6 additional N-nitrosamines in drinking water at low ng/L concentrations. The method uses solid-phase extraction with coconut charcoal as the sorbent and dichloromethane as the eluent to concentrate 0.50 L water samples to 1 mL. The extracts are analyzed by gas chromatography-chemical ionization tandem mass spectrometry using large-volume injection. Method performance was evaluated in 2 laboratories. Typical analyte recoveries of 87-104% were demonstrated for fortified reagent water samples, and recoveries of 77-106% were demonstrated for fortified drinking water samples. All relative standard deviations on replicate analyses were < 11%.

  13. Couch height–based patient setup for abdominal radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohira, Shingo; Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Suita; Ueda, Yoshihiro

    2016-04-01

    There are 2 methods commonly used for patient positioning in the anterior-posterior (A-P) direction: one is the skin mark patient setup method (SMPS) and the other is the couch height–based patient setup method (CHPS). This study compared the setup accuracy of these 2 methods for abdominal radiation therapy. The enrollment for this study comprised 23 patients with pancreatic cancer. For treatments (539 sessions), patients were set up by using isocenter skin marks, and thereafter the treatment couch was shifted so that the distance between the isocenter and the upper side of the treatment couch was equal to that indicated on the computed tomographic (CT) image. Setup deviation in the A-P direction for CHPS was measured by matching the spine of the digitally reconstructed radiograph (DRR) of a lateral beam at simulation with that of the corresponding time-integrated electronic portal image. For SMPS with no correction (SMPS/NC), setup deviation was calculated based on the couch-level difference between SMPS and CHPS. SMPS/NC was corrected using 2 off-line correction protocols: no action level (SMPS/NAL) and extended NAL (SMPS/eNAL) protocols. Margins to compensate for deviations were calculated using the Stroom formula. A-P deviation > 5 mm was observed in 17% of SMPS/NC, 4% of SMPS/NAL, and 4% of SMPS/eNAL sessions but only in one CHPS session. For SMPS/NC, 7 patients (30%) showed deviations at an increasing rate of > 0.1 mm/fraction, but for CHPS, no such trend was observed. The standard deviations (SDs) of systematic error (Σ) were 2.6, 1.4, 0.6, and 0.8 mm and the root mean squares of random error (σ) were 2.1, 2.6, 2.7, and 0.9 mm for SMPS/NC, SMPS/NAL, SMPS/eNAL, and CHPS, respectively. Margins to compensate for the deviations were wide for SMPS/NC (6.7 mm), smaller for SMPS/NAL (4.6 mm) and SMPS/eNAL (3.1 mm), and smallest for CHPS (2.2 mm). Achieving better setup with smaller margins, CHPS appears to be a reproducible method for abdominal patient setup.
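
    As a worked check on the margins quoted above, here is a minimal sketch assuming the commonly cited form of the Stroom formula, margin = 2Σ + 0.7σ (Σ the SD of systematic error, σ the RMS of random error); it reproduces the reported margins from the reported errors:

    ```python
    # Stroom margin recipe (assumed form: 2*Sigma + 0.7*sigma), applied to
    # the systematic (Sigma) and random (sigma) errors reported above, in mm.
    setups = {
        "SMPS/NC":   (2.6, 2.1),
        "SMPS/NAL":  (1.4, 2.6),
        "SMPS/eNAL": (0.6, 2.7),
        "CHPS":      (0.8, 0.9),
    }
    for name, (Sigma, sigma) in setups.items():
        print(f"{name}: {2 * Sigma + 0.7 * sigma:.1f} mm")  # 6.7, 4.6, 3.1, 2.2
    ```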

  14. Forecasting Error Calculation with Mean Absolute Deviation and Mean Absolute Percentage Error

    NASA Astrophysics Data System (ADS)

    Khair, Ummul; Fahmi, Hasanul; Hakim, Sarudin Al; Rahim, Robbi

    2017-12-01

    Prediction using a forecasting method is one of the most important activities for an organization. Selecting an appropriate forecasting method matters, but quantifying a method's percentage error matters even more if decision makers are to act on the forecasts. Using the Mean Absolute Deviation (MAD) and the Mean Absolute Percentage Error (MAPE) to calculate the error of the least-squares method resulted in a percentage error of 9.77%, and it was decided that the least-squares method is suitable for the time-series and trend data considered.
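
    Since the record leans on these two error metrics, here is a minimal sketch of their textbook definitions applied to a least-squares trend line; the demand series below is hypothetical:

    ```python
    import numpy as np

    def mad(actual, forecast):
        """Mean Absolute Deviation: the average magnitude of forecast errors."""
        actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
        return np.mean(np.abs(actual - forecast))

    def mape(actual, forecast):
        """Mean Absolute Percentage Error, in percent of the actual values."""
        actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
        return 100.0 * np.mean(np.abs((actual - forecast) / actual))

    # Least-squares (trend-line) forecast of a hypothetical demand series
    t = np.arange(1, 9)
    y = np.array([12.0, 14.0, 13.0, 17.0, 18.0, 20.0, 19.0, 23.0])
    slope, intercept = np.polyfit(t, y, 1)
    y_hat = slope * t + intercept
    print(mad(y, y_hat), mape(y, y_hat))
    ```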

  15. Center of pressure based segment inertial parameters validation

    PubMed Central

    Rezzoug, Nasser; Gorce, Philippe; Isableu, Brice; Venture, Gentiane

    2017-01-01

    By proposing efficient methods for estimating Body Segment Inertial Parameters (BSIPs) and validating them with a force plate, it is possible to improve the inverse dynamics computations that are necessary in multiple research areas. To date, a variety of studies have been conducted to improve BSIP estimation, but to our knowledge a real validation has never been completely successful. In this paper, we propose a validation method using both kinematic parameters and kinetic parameters (contact forces), gathered from an optical motion capture system and a force plate, respectively. To compare BSIPs, we used the measured contact forces (force plate) as the ground truth and reconstructed the displacements of the center of pressure (COP) using inverse dynamics from two different estimation techniques. Only minor differences were seen when comparing the estimated segment masses. Their influence on the COP computation, however, is large, and the results show very distinguishable patterns of the COP movements. Improving BSIP techniques is crucial, as deviations in the estimates can result in large errors. This method could be used as a tool to validate BSIP estimation techniques. An advantage of this approach is that it facilitates the comparison between BSIP estimation methods and, more specifically, it shows the accuracy of those parameters. PMID:28662090
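
    For readers unfamiliar with the ground-truth side of this comparison, a minimal sketch of the textbook force-plate COP formula (taking the plate surface as z = 0; this is not the paper's implementation):

    ```python
    import numpy as np

    def cop_from_force_plate(F, M):
        """Center of pressure from the ground reaction force F = (Fx, Fy, Fz)
        and the moment M = (Mx, My, Mz) about the plate origin, with the
        plate surface taken as z = 0 (textbook formula, not the paper's code)."""
        Fx, Fy, Fz = F
        Mx, My, Mz = M
        return np.array([-My / Fz, Mx / Fz])    # (COPx, COPy)

    # hypothetical force-plate sample: ~700 N vertical load
    print(cop_from_force_plate((5.0, -3.0, 700.0), (20.0, -15.0, 1.0)))
    ```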

  16. Analysis of Androgenic Steroids in Environmental Waters by Large-volume Injection Liquid Chromatography Tandem Mass Spectrometry

    PubMed Central

    Backe, Will J.; Ort, Christoph; Brewer, Alex J.; Field, Jennifer A.

    2014-01-01

    A new method was developed for the analysis of natural and synthetic androgenic steroids and their selected metabolites in aquatic environmental matrices using direct large-volume injection (LVI) high performance liquid chromatography (HPLC) tandem mass spectrometry (MS/MS). Method accuracy ranged from 88 to 108% for analytes with well-matched internal standards. Precision, quantified by relative standard deviation (RSD), was less than 12%. Detection limits for the method ranged from 1.2 to 360 ng/L. The method was demonstrated on a series of 1-hr composite wastewater influent samples collected over a day with the purpose of assessing temporal profiles of androgen loads in wastewater. Testosterone, androstenedione, boldenone, and nandrolone were detected in the sample series at concentrations up to 290 ng/L and loads up to 535 mg. Boldenone, a synthetic androgen, had a temporal profile that was strongly correlated to testosterone, a natural human androgen, suggesting its source may be endogenous. An analysis of the sample particulate fraction revealed detectable amounts of sorbed testosterone and androstenedione. Androstenedione sorbed to the particulate fraction accounted for an estimated five to seven percent of the total androstenedione mass. PMID:21391574

  17. Analysis of androgenic steroids in environmental waters by large-volume injection liquid chromatography tandem mass spectrometry.

    PubMed

    Backe, Will J; Ort, Christoph; Brewer, Alex J; Field, Jennifer A

    2011-04-01

    A new method was developed for the analysis of natural and synthetic androgenic steroids and their selected metabolites in aquatic environmental matrixes using direct large-volume injection (LVI) high-performance liquid chromatography (HPLC) tandem mass spectrometry (MS/MS). Method accuracy ranged from 87.6 to 108% for analytes with well-matched internal standards. Precision, quantified by relative standard deviation (RSD), was less than 12%. Detection limits for the method ranged from 1.2 to 360 ng/L. The method was demonstrated on a series of 1 h composite wastewater influent samples collected over a day with the purpose of assessing temporal profiles of androgen loads in wastewater. Testosterone, androstenedione, boldenone, and nandrolone were detected in the sample series at concentrations up to 290 ng/L and loads up to 535 mg/h. Boldenone, a synthetic androgen, had a temporal profile that was strongly correlated to testosterone, a natural human androgen, suggesting its source may be endogenous. An analysis of the sample particulate fraction revealed detectable amounts of sorbed testosterone and androstenedione. Androstenedione sorbed to the particulate fraction accounted for an estimated 5 to 7% of the total androstenedione mass.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daniel, Scott F.; Linder, Eric V.; Lawrence Berkeley National Laboratory, Berkeley, California

    Deviations from general relativity, such as could be responsible for the cosmic acceleration, would influence the growth of large-scale structure and the deflection of light by that structure. We clarify the relations between several different model-independent approaches to deviations from general relativity appearing in the literature, devising a translation table. We examine current constraints on such deviations, using weak gravitational lensing data of the CFHTLS and COSMOS surveys, cosmic microwave background radiation data of WMAP5, and supernova distance data of Union2. A Markov chain Monte Carlo likelihood analysis of the parameters over various redshift ranges yields consistency with general relativity at the 95% confidence level.

  19. [A new method to test vertical ocular deviations using perilimbal light reflexes].

    PubMed

    Breyer, Armin; Rütsche, Adrian; Gampe, Elisabeth; Mojon, Daniel S

    2003-03-01

    To develop a new diagnostic technique to determine vertical ocular deviations when the center of the pupil is covered by swollen eyelids in up- and downgaze. In upgaze (downgaze), the reflex of a diagnostic lamp held at about 50 cm distance from the patient is observed on the lower (upper) limbus. In the case of an asymmetric reflex, prisms are used to obtain symmetrical reflexes. The amount of prism indicates the size of the vertical misalignment. In five healthy volunteers, the angles of vertical changes of gaze position were plotted against the prism strength needed to recenter the perilimbal reflex. There was a linear correlation between the amount of upgaze change in degrees and the strength of the compensating prisms in degrees. The same linear correlation was found in downgaze. For both, the correlation coefficient was r = 0.98 ± 0.01. In upgaze the slope of the average regression line was 0.55 ± 2.3 degrees; in downgaze, -4.1 ± 0.8 degrees. A prism of 1 degree corresponds in upgaze to a vertical deviation of about 1.3 ± 0.14 degrees and in downgaze to a deviation of about 1.1 ± 0.07 degrees. These results demonstrate that the perilimbal light reflex test is suitable for measuring simulated vertical ocular deviations. Therefore, the test may also be used in patients with vertical deviations who cannot be measured with classical methods. The method is more exact for measurements in upgaze.

  20. Toward High-Level Theoretical Studies of Large Biodiesel Molecules: An ONIOM [QCISD(T)/CBS:DFT] Study of the Reactions between Unsaturated Methyl Esters (CnH2n-1COOCH3) and Hydrogen Radical.

    PubMed

    Zhang, Lidong; Meng, Qinghui; Chi, Yicheng; Zhang, Peng

    2018-05-31

    A two-layer ONIOM[QCISD(T)/CBS:DFT] method was proposed for high-level single-point energy calculations of large biodiesel molecules and was validated for the hydrogen abstraction reactions of unsaturated methyl esters that are important components of real biodiesel. The reactions under investigation include all the reactions on the potential energy surface of CnH2n-1COOCH3 (n = 2-5, 17) + H, including hydrogen abstraction, hydrogen addition, isomerization (intramolecular hydrogen shift), and β-scission reactions. By virtue of the introduced concept of a chemically active center, a unified specification of the chemically active portion for the ONIOM (our own n-layered integrated molecular orbital and molecular mechanics) method was proposed to account for the additional influence of the C=C double bond. The energy barriers and heats of reaction predicted by the ONIOM method are in very good agreement with those obtained using the widely accepted high-level QCISD(T)/CBS theory, with computational deviations of less than 0.15 kcal/mol for almost all the reaction pathways under investigation. The method provides a computationally accurate and affordable approach for combustion chemists to the high-level theoretical chemical kinetics of large biodiesel molecules.
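
    For reference, the standard two-layer ONIOM extrapolation that such a scheme instantiates (here with QCISD(T)/CBS as the high level on the chemically active model system and DFT as the low level on the full ester) can be written as:

    ```latex
    E_{\mathrm{ONIOM}} = E_{\mathrm{DFT}}(\text{real})
                       + E_{\mathrm{QCISD(T)/CBS}}(\text{model})
                       - E_{\mathrm{DFT}}(\text{model})
    ```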

  1. A New Online Calibration Method Based on Lord's Bias-Correction.

    PubMed

    He, Yinhong; Chen, Ping; Li, Yong; Zhang, Shumei

    2017-09-01

    Online calibration techniques have been widely employed to calibrate new items due to their advantages. Method A is the simplest online calibration method and has recently attracted much attention from researchers. However, a key assumption of Method A is that it treats the person-parameter estimates θ̂_s (obtained by maximum likelihood estimation [MLE]) as their true values θ_s; thus, the deviation of the estimated θ̂_s from the true values might yield inaccurate item calibration when the deviation is nonignorable. To improve the performance of Method A, a new method, MLE-LBCI-Method A, is proposed. This new method combines a modified Lord's bias-correction method (named maximum likelihood estimation-Lord's bias-correction with iteration [MLE-LBCI]) with the original Method A in an effort to correct the deviation of θ̂_s, which may adversely affect item calibration precision. Two simulation studies were carried out to explore the performance of both MLE-LBCI and MLE-LBCI-Method A under several scenarios. Simulation results showed that MLE-LBCI could make a significant improvement over the ML ability estimates, and MLE-LBCI-Method A did outperform Method A in almost all experimental conditions.
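
    To make the assumption behind Method A concrete, here is a hedged sketch (not the authors' code): with θ̂_s fixed and treated as the true abilities, calibrating a new 2PL item by maximum likelihood reduces to an unpenalized logistic regression of the responses on θ̂_s. The item parameters and data below are hypothetical.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Sketch of the core of Method A: fix the examinees' ability estimates
    # theta_hat, treat them as the true thetas, and estimate a new 2PL
    # item's parameters by ML, i.e. an (effectively) unpenalized logistic
    # regression of responses on theta_hat.
    rng = np.random.default_rng(1)
    theta_hat = rng.normal(size=2000)          # abilities treated as known
    a_true, b_true = 1.2, 0.3                  # hypothetical new-item parameters
    p = 1.0 / (1.0 + np.exp(-a_true * (theta_hat - b_true)))
    y = (rng.random(2000) < p).astype(int)     # simulated item responses

    fit = LogisticRegression(C=1e6)            # huge C ~ maximum likelihood
    fit.fit(theta_hat.reshape(-1, 1), y)
    a_hat = fit.coef_[0, 0]
    b_hat = -fit.intercept_[0] / a_hat
    print(a_hat, b_hat)  # near (1.2, 0.3); bias grows as theta_hat gets noisier
    ```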

  2. Investigation of converging and collimated beam instrument geometry on specular gloss measurements

    NASA Astrophysics Data System (ADS)

    Zwinkels, Joanne C.; Côté, Éric; Morgan, John

    2018-02-01

    Specular gloss is an important appearance property of a wide variety of manufactured goods. Depending upon the application (e.g., paints, paper, or ceramics), different instrument designs and measurement geometries are specified in standard test methods. For a given specular angle, these instrument designs can be broadly classified as converging beam (TAPPI method) and collimated beam (DIN method). In recent comparisons of specular gloss measurements using different glossmeters, very large standard deviations have been reported, well exceeding the manufacturers' claims. In this paper, we investigate the effect of instrument beam geometry on gloss measurements. Our results indicate that this difference in beam geometry can produce gloss differences of the magnitude reported in these comparisons, which highlights the importance of educating the user community in best measurement practices and of obtaining appropriate traceability for their glossmeters.

  3. Rotor Position Sensorless Control and Its Parameter Sensitivity of Permanent Magnet Motor Based on Model Reference Adaptive System

    NASA Astrophysics Data System (ADS)

    Ohara, Masaki; Noguchi, Toshihiko

    This paper describes a new method for rotor position sensorless control of a surface permanent magnet synchronous motor based on a model reference adaptive system (MRAS). This method features the MRAS in a current control loop to estimate the rotor speed and position by using only current sensors. This method, like almost all conventional methods, incorporates a mathematical model of the motor, which consists of parameters such as winding resistances, inductances, and an induced voltage constant. Hence, the important thing is to investigate how deviations of these parameters affect the estimated rotor position. First, this paper proposes a structure for the sensorless control applied in the current control loop. Next, it proves the stability of the proposed method when motor parameters deviate from their nominal values, and derives the relationship between the estimated position and the parameter deviations in the steady state. Finally, some experimental results are presented to show the performance and effectiveness of the proposed method.

  4. Evaluation of an attenuation correction method for PET/MR imaging of the head based on substitute CT images.

    PubMed

    Larsson, Anne; Johansson, Adam; Axelsson, Jan; Nyholm, Tufve; Asklund, Thomas; Riklund, Katrine; Karlsson, Mikael

    2013-02-01

    The aim of this study was to evaluate MR-based attenuation correction of PET emission data of the head, based on a previously described technique that calculates substitute CT (sCT) images from a set of MR images. Images from eight patients, examined with 18F-FLT PET/CT and MRI, were included. sCT images were calculated, co-registered to the corresponding CT images, and transferred to the PET/CT scanner for reconstruction. The new reconstructions were then compared with the originals. The effect of replacing bone with soft tissue in the sCT images was also evaluated. The average relative difference between the sCT-corrected PET images and the CT-corrected PET images was 1.6% for the head and 1.9% for the brain. The average standard deviation of the relative differences within the head was relatively high, at 13.2%, primarily because of large differences in the nasal septa region. For the brain, the average standard deviation was lower, at 4.1%. The global average difference in the head when replacing bone with soft tissue was 11%. The method presented here is highly accurate, but high-precision quantitative imaging of the nasal septa region is not yet possible.

  5. Springback effects during single point incremental forming: Optimization of the tool path

    NASA Astrophysics Data System (ADS)

    Giraud-Moreau, Laurence; Belchior, Jérémy; Lafon, Pascal; Lotoing, Lionel; Cherouat, Abel; Courtielle, Eric; Guines, Dominique; Maurine, Patrick

    2018-05-01

    Incremental sheet forming is an emerging process for manufacturing sheet metal parts. This process is more flexible than conventional ones and well suited for small-batch production or prototyping. During the process, the sheet metal blank is clamped by a blank-holder and a small smooth-end hemispherical tool moves along a user-specified path to deform the sheet incrementally. Classical three-axis CNC milling machines, dedicated structures, or serial robots can be used to perform the forming operation. Whatever the machine considered, large deviations between the theoretical shape and the real shape can be observed after the part is unclamped. These deviations are due both to the lack of stiffness of the machine and to residual stresses in the part at the end of the forming stage. In this paper, an optimization strategy for the tool path is proposed in order to minimize the elastic springback induced by residual stresses after unclamping. A finite element model of the SPIF process allowing the shape of the formed part to be predicted with good accuracy is defined. This model, based on appropriate assumptions, leads to calculation times that remain compatible with an optimization procedure. The proposed optimization method is based on an iterative correction of the tool path, as sketched below. The efficiency of the method is shown by an improvement of the final shape.
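
    A generic sketch of such an iterative tool-path correction loop (a mirror-compensation style update; the gain, the toy springback model, and the function names are illustrative assumptions, not the paper's FE model):

    ```python
    import numpy as np

    def correct_tool_path(path, simulate_shape, target, gain=0.8, n_iter=5):
        """Shift each tool-path point against the predicted deviation until
        the simulated (sprung-back) part matches the target shape."""
        path = np.asarray(path, float).copy()
        for _ in range(n_iter):
            deviation = simulate_shape(path) - target  # predicted shape error
            path -= gain * deviation                   # push the tool the other way
        return path

    target = np.linspace(0.0, 10.0, 6)        # desired depths along a section (mm)
    springback = lambda p: 0.92 * p           # toy stand-in for the FE prediction
    print(correct_tool_path(target, springback, target))
    ```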

  6. Automatic estimation of voice onset time for word-initial stops by applying random forest to onset detection.

    PubMed

    Lin, Chi-Yueh; Wang, Hsiao-Chuan

    2011-07-01

    The voice onset time (VOT) of a stop consonant is the interval between its burst onset and voicing onset. Among a variety of research topics on VOT, one that has been studied for years is how VOTs can be measured efficiently. Manual annotation is feasible, but it becomes a time-consuming task when the corpus is large. This paper proposes an automatic VOT estimation method based on an onset detection algorithm. First, a forced alignment is applied to identify the locations of stop consonants. Then a random-forest-based onset detector searches each stop segment for its burst and voicing onsets to estimate the VOT. The proposed onset detection can detect the onsets in an efficient and accurate manner with only a small amount of training data. The evaluation data extracted from the TIMIT corpus were 2344 words with a word-initial stop. The experimental results showed that 83.4% of the estimates deviate by less than 10 ms from their manually labeled values, and 96.5% deviate by less than 20 ms. Factors that influence the proposed estimation method, such as the place of articulation, the voicing of the stop consonant, and the quality of the succeeding vowel, were also investigated. © 2011 Acoustical Society of America.
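
    A hedged sketch of the two-stage idea, with placeholder frame features and labels standing in for the paper's acoustic features:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Sketch: a random forest scores each frame of a stop segment; the
    # first frame whose score crosses a threshold is taken as an onset,
    # and VOT is the time between the burst onset and the voicing onset.
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(5000, 8))           # placeholder frame features
    y_train = (X_train[:, 0] > 1.0).astype(int)    # placeholder onset labels
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)

    def first_onset_frame(frames, threshold=0.5):
        scores = clf.predict_proba(frames)[:, 1]
        hits = np.nonzero(scores > threshold)[0]
        return int(hits[0]) if hits.size else None   # None: no onset found

    frame_ms = 1.0                                   # assumed frame step
    burst = first_onset_frame(rng.normal(size=(60, 8)))
    voicing = first_onset_frame(rng.normal(size=(60, 8)))
    if burst is not None and voicing is not None:
        vot_ms = (voicing - burst) * frame_ms        # VOT = voicing - burst
    ```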

  7. Time reversal for localization of sources of infrasound signals in a windy stratified atmosphere.

    PubMed

    Lonzaga, Joel B

    2016-06-01

    Time reversal is used for localizing sources of recorded infrasound signals propagating in a windy, stratified atmosphere. Due to the convective effect of the background flow, the back-azimuths of the recorded signals can be substantially different from the source back-azimuth, posing a significant difficulty in source localization. The back-propagated signals are characterized by negative group velocities from which the source back-azimuth and source-to-receiver (STR) distance can be estimated using the apparent back-azimuths and trace velocities of the signals. The method is applied to several distinct infrasound arrivals recorded by two arrays in the Netherlands. The infrasound signals were generated by the Buncefield oil depot explosion in the U.K. in December 2005. Analyses show that the method can be used to substantially enhance estimates of the source back-azimuth and the STR distance. In one of the arrays, for instance, the deviations between the measured back-azimuths of the signals and the known source back-azimuth are quite large (-1° to -7°), whereas the deviations between the predicted and known source back-azimuths are small with an absolute mean value of <1°. Furthermore, the predicted STR distance is off only by <5% of the known STR distance.

  8. Cosmological implications of a large complete quasar sample

    PubMed Central

    Segal, I. E.; Nicoll, J. F.

    1998-01-01

    Objective and reproducible determinations of the probabilistic significance levels of the deviations between theoretical cosmological prediction and direct model-independent observation are made for the Large Bright Quasar Sample [Foltz, C., Chaffee, F. H., Hewett, P. C., MacAlpine, G. M., Turnshek, D. A., et al. (1987) Astron. J. 94, 1423-1460]. The cosmologies considered are the Expanding Universe model, as represented by the Friedmann-Lemaître cosmology with parameters q0 = 0, Λ = 0, denoted C1, and chronometric cosmology (no relevant adjustable parameters), denoted C2. The mean and the dispersion of the apparent magnitudes and the slope of the apparent magnitude-redshift relation are the directly observed statistics predicted. The C1 predictions of these cosmology-independent quantities deviate by as much as 11σ from direct observation; none of the C2 predictions deviate by more than 2σ. The C1 deviations may be reconciled with theory by the hypothesis of quasar "evolution," which, however, appears incapable of being substantiated through direct observation. The excellent quantitative agreement of the C1 deviations with those predicted by C2 without adjustable parameters for the results of analysis predicated on C1 indicates that the evolution hypothesis may well be a theoretical artifact. PMID:9560182

  9. Divine proportions in attractive and nonattractive faces.

    PubMed

    Pancherz, Hans; Knapp, Verena; Erbe, Christina; Heiss, Anja Melina

    2010-01-01

    To test Ricketts' 1982 hypothesis that facial beauty is measurable, by comparing attractive and nonattractive faces of females and males with respect to the presence of the divine proportions. Frontal-view facial photos of 90 cover models (50 females, 40 males) from famous fashion magazines and of 34 attractive (29 females, 5 males) and 34 nonattractive (13 females, 21 males) persons selected from a group of former orthodontic patients were analyzed in this study. Based on Ricketts' method, five transverse and seven vertical facial reference distances were measured and compared with the corresponding calculated divine distances expressed in phi relationships (φ = 1.618). Furthermore, transverse and vertical facial disproportion indices were created. For both the models and the patients, all the reference distances varied considerably from the respective divine values. The average deviations ranged from 0.3% to 7.8% in the female groups of models and attractive patients, with no difference between them. In the male groups of models and attractive patients, the average deviations ranged from 0.2% to 11.2%. When comparing attractive and nonattractive female, as well as male, patients, deviations from the divine values for all variables were larger in the nonattractive sample. Attractive individuals have facial proportions closer to the divine values than nonattractive ones. In accordance with the hypothesis of Ricketts, facial beauty is measurable to some degree. COPYRIGHT © 2009 BY QUINTESSENCE PUBLISHING CO, INC.

  10. I-125 seed calibration using the SeedSelectron® afterloader: a practical solution to fulfill AAPM-ESTRO recommendations

    PubMed Central

    Perez-Calatayud, Jose; Richart, Jose; Guirado, Damián; Pérez-García, Jordi; Rodríguez, Silvia; Santos, Manuel

    2012-01-01

    Purpose: SeedSelectron® v1.26b (Nucletron BV, The Netherlands) is an afterloader system used in prostate interstitial permanent brachytherapy with I-125 selectSeed seeds. It contains a diode array to assay all implanted seeds. Only one or two seeds can be extracted during the surgical procedure and assayed using a well chamber to check the manufacturer's air-kerma strength (S_K) and to calibrate the diode array. Therefore, it is not feasible to assay 5-10% of the seeds as required by the AAPM-ESTRO. In this study, we present a practical solution for SeedSelectron® users to fulfill the AAPM-ESTRO recommendations. Material and methods: The method is based on: a) the SourceCheck® well ionization chamber (PTW, Germany) provided with a PTW insert; b) n = 10 selectSeed seeds from the same batch and class as the seeds for the implant; c) the Nucletron insert to accommodate the n = 10 seeds in the SourceCheck® and to measure their averaged S_K. Results for 56 implants have been studied, comparing the S_K value from the manufacturer with the one obtained with the n = 10 seeds using the Nucletron insert prior to the implant and with the S_K of just one seed measured with the PTW insert during the implant. Results: We found S_K deviations for individual seeds of up to 7.8%. However, in the majority of cases S_K is in agreement with the manufacturer's value. With the proposed method using the Nucletron insert, the large deviations of S_K are reduced, and for the 56 implants studied no deviation outside the range of the class was found. Conclusions: The new Nucletron insert and the proposed procedure allow the S_K of the n = 10 seeds to be evaluated prior to the implant, fulfilling the AAPM-ESTRO recommendations. The procedure has been adopted by Nucletron and can be extended to SeedSelectron® users on request. PMID:23346136

  11. The energy balance experiment EBEX-2000. Part II: Intercomparison of eddy-covariance sensors and post-field data processing methods

    NASA Astrophysics Data System (ADS)

    Mauder, Matthias; Oncley, Steven P.; Vogt, Roland; Weidinger, Tamas; Ribeiro, Luis; Bernhofer, Christian; Foken, Thomas; Kohsiek, Wim; de Bruin, Henk A. R.; Liu, Heping

    2007-04-01

    The eddy-covariance method is the primary way of measuring turbulent fluxes directly. Many investigators have found that these flux measurements often do not satisfy a fundamental criterion: closure of the surface energy balance. This study investigates to what extent the eddy-covariance measurement technology can be held responsible for this deficiency, in particular the effects of the instrumentation or of the post-field data processing. To this end, current eddy-covariance sensors and several post-field data processing methods were compared. The differences in methodology resulted in deviations of 10% for the sensible heat flux and of 15% for the latent heat flux for an averaging time of 30 min. These disparities were mostly due to different sensor separation corrections and a linear detrending of the data. The impact of different instrumentation on the resulting heat flux estimates was significantly higher. Large deviations from the reference system of up to 50% were found for some sensor combinations. However, very good measurement quality was found for a CSAT3 sonic together with a KH20 krypton hygrometer and also for a UW sonic together with a KH20. If these systems are well calibrated and maintained, an accuracy of better than 5% can be achieved for 30-min values of sensible and latent heat flux measurements. The results from the sonic anemometers Gill Solent-HS, ATI-K, Metek USA-1, and R.M. Young 81000 showed larger deviations from the reference system. The LI-COR LI-7500 open-path H2O/CO2 gas analyser in the test was one of the first serial numbers of this sensor type and had technical problems regarding direct solar radiation sensitivity and signal delay. These problems are known by the manufacturer and improvements of the sensor have since been made.

  12. The impact of the fabrication method on the three-dimensional accuracy of an implant surgery template.

    PubMed

    Matta, Ragai-Edward; Bergauer, Bastian; Adler, Werner; Wichmann, Manfred; Nickenig, Hans-Joachim

    2017-06-01

    The use of a surgical template is a well-established method in advanced implantology. In addition to conventional fabrication, computer-aided design and computer-aided manufacturing (CAD/CAM) work-flow provides an opportunity to engineer implant drilling templates via a three-dimensional printer. In order to transfer the virtual planning to the oral situation, a highly accurate surgical guide is needed. The aim of this study was to evaluate the impact of the fabrication method on the three-dimensional accuracy. The same virtual planning based on a scanned plaster model was used to fabricate a conventional thermo-formed and a three-dimensional printed surgical guide for each of 13 patients (single tooth implants). Both templates were acquired individually on the respective plaster model using an optical industrial white-light scanner (ATOS II, GOM mbh, Braunschweig, Germany), and the virtual datasets were superimposed. Using the three-dimensional geometry of the implant sleeve, the deviation between both surgical guides was evaluated. The mean discrepancy of the angle was 3.479° (standard deviation, 1.904°) based on data from 13 patients. Concerning the three-dimensional position of the implant sleeve, the highest deviation was in the Z-axis at 0.594 mm. The mean deviation of the Euclidian distance, dxyz, was 0.864 mm. Although the two different fabrication methods delivered statistically significantly different templates, the deviations ranged within a decimillimeter span. Both methods are appropriate for clinical use. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  13. Soiling of building envelope surfaces and its effect on solar reflectance – Part III: Interlaboratory study of an accelerated aging method for roofing materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sleiman, Mohamad; Chen, Sharon; Gilbert, Haley E.

    A laboratory method to simulate natural exposure of roofing materials has been reported in a companion article. Here in the current article, we describe the results of an international, nine-participant interlaboratory study (ILS) conducted in accordance with ASTM Standard E691-09 to establish the precision and reproducibility of this protocol. The accelerated soiling and weathering method was applied four times by each laboratory to replicate coupons of 12 products representing a wide variety of roofing categories (single-ply membrane, factory-applied coating (on metal), bare metal, field-applied coating, asphalt shingle, modified-bitumen cap sheet, clay tile, and concrete tile). Participants reported initial and laboratory-aged values of solar reflectance and thermal emittance. Measured solar reflectances were consistent within and across eight of the nine participating laboratories. Measured thermal emittances reported by six participants exhibited comparable consistency. For solar reflectance, the accelerated aging method is both repeatable and reproducible within an acceptable range of standard deviations: the repeatability standard deviation sr ranged from 0.008 to 0.015 (relative standard deviation of 1.2–2.1%) and the reproducibility standard deviation sR ranged from 0.022 to 0.036 (relative standard deviation of 3.2–5.8%). The ILS confirmed that the accelerated aging method can be reproduced by multiple independent laboratories with acceptable precision. In conclusion, this study supports the adoption of the accelerated aging practice to speed the evaluation and performance rating of new cool roofing materials.
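
    For context, a minimal sketch of ASTM E691-style precision statistics for a balanced design (the solar-reflectance numbers below are hypothetical, not the study's data):

    ```python
    import numpy as np

    def e691_precision(data):
        """Repeatability (sr) and reproducibility (sR) standard deviations
        from a labs x replicates array, following the ASTM E691 definitions
        for a balanced design (a sketch, not the study's actual worksheet)."""
        data = np.asarray(data, float)
        p, n = data.shape
        sr2 = np.mean(np.var(data, axis=1, ddof=1))               # within-lab
        sL2 = max(np.var(data.mean(axis=1), ddof=1) - sr2 / n, 0.0)  # between-lab
        return np.sqrt(sr2), np.sqrt(sr2 + sL2)

    # hypothetical solar-reflectance results: 4 labs x 4 replicate coupons
    sr, sR = e691_precision([[0.61, 0.62, 0.60, 0.61],
                             [0.60, 0.61, 0.61, 0.60],
                             [0.63, 0.62, 0.63, 0.62],
                             [0.61, 0.60, 0.62, 0.61]])
    print(round(sr, 3), round(sR, 3))
    ```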

  14. Visual field progression in glaucoma: total versus pattern deviation analyses.

    PubMed

    Artes, Paul H; Nicolela, Marcelo T; LeBlanc, Raymond P; Chauhan, Balwantray C

    2005-12-01

    To compare visual field progression with total and pattern deviation analyses in a prospective longitudinal study of patients with glaucoma and healthy control subjects. A group of 101 patients with glaucoma (168 eyes) with early to moderately advanced visual field loss at baseline (average mean deviation [MD], -3.9 dB) and no clinical evidence of media opacity were selected from a prospective longitudinal study on visual field progression in glaucoma. Patients were examined with static automated perimetry at 6-month intervals for a median follow-up of 9 years. At each test location, change was established with event and trend analyses of total and pattern deviation. The event analyses compared each follow-up test to a baseline obtained by averaging the first two tests, and visual field progression was defined as deterioration beyond the 5th percentile of test-retest variability at three test locations, observed on three consecutive tests. The trend analyses were based on point-wise linear regression, and visual field progression was defined as statistically significant deterioration (P < 5%) worse than -1 dB/year at three locations, confirmed by independently omitting the last and the penultimate observation. The incidence and time-to-progression were compared between total and pattern deviation analyses. To estimate the specificity of the progression analyses, identical criteria were applied to visual fields obtained in 102 healthy control subjects, and the rate of visual field improvement was established in both the patients with glaucoma and the healthy control subjects. With both event and trend methods, pattern deviation analyses classified approximately 15% fewer eyes as having progressed than did the total deviation analyses. In eyes classified as progressing by both methods, total deviation analyses tended to detect progression earlier than pattern deviation analyses. A comparison of the changes observed in MD and in the visual fields' general height (estimated by the 85th percentile of the total deviation values) confirmed that change in the glaucomatous eyes almost always comprised a diffuse component. Pattern deviation analyses may therefore underestimate the true amount of glaucomatous visual field progression, particularly when there is no clinical evidence of increasing media opacity. Clinicians should have access to both total and pattern deviation analyses to make informed decisions on visual field progression in glaucoma.
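
    A minimal sketch of the relationship discussed here, using the abstract's own estimate of general height (the 85th percentile of the total deviation values); the example field is hypothetical:

    ```python
    import numpy as np

    def pattern_deviation(total_dev_db):
        """Pattern deviation = total deviation minus the field's general
        height, so purely diffuse loss does not register as localized change."""
        general_height = np.percentile(total_dev_db, 85)
        return np.asarray(total_dev_db, float) - general_height

    field = np.array([-2.0, -1.5, -6.0, -3.0, -1.0, -8.0, -2.5, -1.2])  # dB
    print(pattern_deviation(field))  # diffuse component removed
    ```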

  15. [The highest proportion of tobacco materials in the blend analysis using PPF projection method for the near-infrared spectrum and Monte Carlo method].

    PubMed

    Mi, Jin-Rui; Ma, Xiang; Zhang, Ya-Juan; Wang, Yi; Wen, Ya-Dong; Zhao, Long-Lian; Li, Jun-Hui; Zhang, Lu-Da

    2011-04-01

    The present paper builds a model, based on the Monte Carlo method, of the projection of a tobacco blend. The model has two parts: the projection points of the tobacco materials, whose coordinates are calculated by the PPF (projection based on principal component and Fisher criterion) projection method from the tobacco near-infrared spectra; and the projection point of the tobacco blend, which is produced as a linear combination of the projection point coordinates of the tobacco materials. To analyze how far the projection points deviate from their initial levels, the Monte Carlo method is introduced to simulate the differences and changes of the raw-material projections. The results indicate two major factors affecting the relative deviation: the highest proportion of tobacco materials in the blend (when this proportion is too high, the deviation becomes difficult to control) and the number of materials (when this number is too small, the deviation is likewise difficult to control). This conclusion is close to the principle of actual formula design: use more materials, each at a lower proportion. Finally, the paper derives theoretical upper limits for the proportions given different numbers of materials. It also has important reference value for blends of other agricultural products.

  16. Finite-Time and -Size Scalings in the Evaluation of Large Deviation Functions. Numerical Analysis in Continuous Time

    NASA Astrophysics Data System (ADS)

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study by simulating a large number of copies of the system, which are subjected to a selection rule that favors the rare trajectories of interest. However, such algorithms are plagued by finite-simulation-time and finite-population-size effects that can render their use delicate. Using the continuous-time cloning algorithm, we analyze the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of the rare trajectories. We use these scalings to propose a numerical approach that allows the infinite-time and infinite-size limits of these estimators to be extracted.
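
    As an illustration of the population-dynamics estimator being analyzed, here is a discrete-time caricature of the cloning algorithm for a two-state Markov chain (the paper works in continuous time; the transition probabilities and population sizes here are arbitrary choices). Its bias at finite n_copies and n_steps is precisely the effect the scaling analysis targets:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Two-state Markov chain; we bias the number of 0 -> 1 jumps.
    P = np.array([[0.9, 0.1],
                  [0.2, 0.8]])

    def scgf_estimate(s, n_copies=2000, n_steps=2000):
        """Population-dynamics (cloning) estimate of the scaled cumulant
        generating function psi(s) = lim_T (1/T) ln E[exp(s * A_T)],
        where A_T counts 0 -> 1 jumps along the trajectory."""
        states = np.zeros(n_copies, dtype=int)
        log_growth = 0.0
        for _ in range(n_steps):
            new_states = (rng.random(n_copies) < P[states, 1]).astype(int)
            jumps = (states == 0) & (new_states == 1)
            w = np.exp(s * jumps)                    # trajectory weights
            log_growth += np.log(w.mean())
            # selection: resample the population in proportion to the weights
            idx = rng.choice(n_copies, size=n_copies, p=w / w.sum())
            states = new_states[idx]
        return log_growth / n_steps

    print(scgf_estimate(0.0))   # ~0 by construction
    print(scgf_estimate(0.5))   # biased toward jump-rich trajectories
    ```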

  17. Comparison of a novel fixation device with standard suturing methods for spinal cord stimulators.

    PubMed

    Bowman, Richard G; Caraway, David; Bentley, Ishmael

    2013-01-01

    Spinal cord stimulation is a well-established treatment for chronic neuropathic pain of the trunk or limbs. Currently, the standard method of fixation is to affix the leads of the neuromodulation device to soft tissue, fascia, or ligament by manually tying general suture. A novel semiautomated device is proposed that may be advantageous over the current standard. Comparison testing in an excised caprine spine and a simulated benchtop model was performed. Three tests were performed: 1) perpendicular pull from the fascia of the caprine spine; 2) axial pull from the fascia of the caprine spine; and 3) axial pull from Mylar film. Six samples of each configuration were tested for each scenario. Standard 2-0 Ethibond was compared with a novel semiautomated device (Anulex fiXate). Upon completion of testing, statistical analysis was performed for each scenario. For perpendicular pull in the caprine spine, the failure load for standard suture was 8.95 lbs with a standard deviation of 1.39, whereas for fiXate the load was 15.93 lbs with a standard deviation of 2.09. For axial pull in the caprine spine, the failure load for standard suture was 6.79 lbs with a standard deviation of 1.55, whereas for fiXate the load was 12.31 lbs with a standard deviation of 4.26. For axial pull in Mylar film, the failure load for standard suture was 10.87 lbs with a standard deviation of 1.56, whereas for fiXate the load was 19.54 lbs with a standard deviation of 2.24. These data suggest that the novel semiautomated device offers a method of fixation that may be used in lieu of standard suturing methods for securing neuromodulation devices, and in fact may provide more secure fixation than standard suturing. © 2012 International Neuromodulation Society.
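
    The abstract does not say which statistical test was applied; as one plausible reconstruction from the reported summary statistics (n = 6 per group), a Welch t-test can be run directly from the means and standard deviations:

    ```python
    from scipy.stats import ttest_ind_from_stats

    # Perpendicular pull from caprine fascia: suture vs. fiXate, n = 6 each.
    # A Welch t-test is a hypothetical choice; the paper does not name its test.
    t, p = ttest_ind_from_stats(mean1=8.95, std1=1.39, nobs1=6,
                                mean2=15.93, std2=2.09, nobs2=6,
                                equal_var=False)
    print(t, p)
    ```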

  18. Measuring (subglacial) bedform orientation, length, and longitudinal asymmetry - Method assessment.

    PubMed

    Jorge, Marco G; Brennand, Tracy A

    2017-01-01

    Geospatial analysis software provides a range of tools that can be used to measure landform morphometry. Often, a metric can be computed with different techniques that may give different results. This study is an assessment of 5 different methods for measuring longitudinal, or streamlined, subglacial bedform morphometry: orientation, length and longitudinal asymmetry, all of which require defining a longitudinal axis. The methods use the standard deviational ellipse (not previously applied in this context), the longest straight line fitting inside the bedform footprint (2 approaches), the minimum-size footprint-bounding rectangle, and Euler's approximation. We assess how well these methods replicate morphometric data derived from a manually mapped (visually interpreted) longitudinal axis, which, though subjective, is the most typically used reference. A dataset of 100 subglacial bedforms covering the size and shape range of those in the Puget Lowland, Washington, USA is used. For bedforms with elongation > 5, deviations from the reference values are negligible for all methods but Euler's approximation (length). For bedforms with elongation < 5, most methods had small mean absolute error (MAE) and median absolute deviation (MAD) for all morphometrics and thus can be confidently used to characterize the central tendencies of their distributions. However, some methods are better than others. The least precise methods are the ones based on the longest straight line and Euler's approximation; using these for statistical dispersion analysis is discouraged. Because the standard deviational ellipse method is relatively shape invariant and closely replicates the reference values, it is the recommended method. Speculatively, this study may also apply to negative-relief, and fluvial and aeolian bedforms.
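
    A sketch of the recommended standard-deviational-ellipse approach, reduced to its core (the principal axis of the footprint points' covariance); the synthetic footprint and the exact axis-length convention are assumptions, not the paper's code:

    ```python
    import numpy as np

    def sde_orientation_length(xy):
        """Longitudinal axis of a bedform footprint via the standard
        deviational ellipse: orientation (degrees, mod 180) of the major
        covariance axis, and the footprint's extent along that axis."""
        xy = np.asarray(xy, float)
        centered = xy - xy.mean(axis=0)
        evals, evecs = np.linalg.eigh(np.cov(centered.T))
        major = evecs[:, np.argmax(evals)]              # major-axis direction
        angle = np.degrees(np.arctan2(major[1], major[0])) % 180.0
        length = np.ptp(centered @ major)               # extent along the axis
        return angle, length

    # elongated synthetic footprint rotated ~30 degrees
    rng = np.random.default_rng(0)
    pts = rng.normal(size=(500, 2)) * [5.0, 1.0]
    rot = np.radians(30.0)
    R = np.array([[np.cos(rot), -np.sin(rot)], [np.sin(rot), np.cos(rot)]])
    print(sde_orientation_length(pts @ R.T))
    ```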

  19. Time-resolved dosimetry using a pinpoint ionization chamber as quality assurance for IMRT and VMAT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Louwe, Robert J. W., E-mail: rob.louwe@ccdbh.org.nz; Satherley, Thomas; Day, Rebecca A.

    Purpose: To develop a method to verify the dose delivery in relation to the individual control points of intensity modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT) using an ionization chamber. In addition to more effective problem solving during patient-specific quality assurance (QA), the aim is to eventually map out the limitations in the treatment chain and enable a targeted improvement of the treatment technique in an efficient way. Methods: Pretreatment verification was carried out for 255 treatment plans that included a broad range of treatment indications in two departments using the equipment of different vendors. In-house developed software was used to enable calculation of the dose delivery for the individual beamlets in the treatment planning system (TPS), for data acquisition, and for analysis of the data. The observed deviations were related to various delivery and measurement parameters such as gantry angle, field size, and the position of the detector with respect to the field edge to distinguish between error sources. Results: The average deviation of the integral fraction dose during pretreatment verification of the planning target volume dose was −2.1% ± 2.2% (1 SD), −1.7% ± 1.7% (1 SD), and 0.0% ± 1.3% (1 SD) for IMRT at the Radboud University Medical Center (RUMC), VMAT (RUMC), and VMAT at the Wellington Blood and Cancer Centre, respectively. Verification of the dose to organs at risk gave very similar results but was generally subject to a larger measurement uncertainty due to the position of the detector at a high dose gradient. The observed deviations could be related to limitations of the TPS beam models, attenuation of the treatment couch, as well as measurement errors. The apparent systematic error of about −2% in the average deviation of the integral fraction dose in the RUMC results could be explained by the limitations of the TPS beam model in the calculation of the beam penumbra. Conclusions: This study showed that time-resolved dosimetry using an ionization chamber is feasible and can be largely automated, which limits the required additional time compared to integrated dose measurements. It provides a unique QA method which enables identification and quantification of the contribution of various error sources during IMRT and VMAT delivery.

  20. Inclusive Search for a Highly Boosted Higgs Boson Decaying to a Bottom Quark-Antiquark Pair

    NASA Astrophysics Data System (ADS)

    Sirunyan, A. M.; Tumasyan, A.; Adam, W.; Ambrogi, F.; Asilar, E.; Bergauer, T.; Brandstetter, J.; Brondolin, E.; Dragicevic, M.; Erö, J.; Flechl, M.; Friedl, M.; Frühwirth, R.; Ghete, V. M.; Grossmann, J.; Hrubec, J.; Jeitler, M.; König, A.; Krammer, N.; Krätschmer, I.; Liko, D.; Madlener, T.; Mikulec, I.; Pree, E.; Rad, N.; Rohringer, H.; Schieck, J.; Schöfbeck, R.; Spanring, M.; Spitzbart, D.; Waltenberger, W.; Wittmann, J.; Wulz, C.-E.; Zarucki, M.; Chekhovsky, V.; Dydyshka, Y.; Suarez Gonzalez, J.; De Wolf, E. A.; Di Croce, D.; Janssen, X.; Lauwers, J.; Van De Klundert, M.; Van Haevermaet, H.; Van Mechelen, P.; Van Remortel, N.; Abu Zeid, S.; Blekman, F.; D'Hondt, J.; De Bruyn, I.; De Clercq, J.; Deroover, K.; Flouris, G.; Lontkovskyi, D.; Lowette, S.; Moortgat, S.; Moreels, L.; Python, Q.; Skovpen, K.; Tavernier, S.; Van Doninck, W.; Van Mulders, P.; Van Parijs, I.; Beghin, D.; Brun, H.; Clerbaux, B.; De Lentdecker, G.; Delannoy, H.; Dorney, B.; Fasanella, G.; Favart, L.; Goldouzian, R.; Grebenyuk, A.; Karapostoli, G.; Lenzi, T.; Luetic, J.; Maerschalk, T.; Marinov, A.; Randle-conde, A.; Seva, T.; Starling, E.; Vander Velde, C.; Vanlaer, P.; Vannerom, D.; Yonamine, R.; Zenoni, F.; Zhang, F.; Cimmino, A.; Cornelis, T.; Dobur, D.; Fagot, A.; Gul, M.; Khvastunov, I.; Poyraz, D.; Roskas, C.; Salva, S.; Tytgat, M.; Verbeke, W.; Zaganidis, N.; Bakhshiansohi, H.; Bondu, O.; Brochet, S.; Bruno, G.; Caputo, C.; Caudron, A.; David, P.; De Visscher, S.; Delaere, C.; Delcourt, M.; Francois, B.; Giammanco, A.; Komm, M.; Krintiras, G.; Lemaitre, V.; Magitteri, A.; Mertens, A.; Musich, M.; Piotrzkowski, K.; Quertenmont, L.; Saggio, A.; Vidal Marono, M.; Wertz, S.; Zobec, J.; Beliy, N.; Aldá Júnior, W. L.; Alves, F. L.; Alves, G. A.; Brito, L.; Correa Martins Junior, M.; Hensel, C.; Moraes, A.; Pol, M. E.; Rebello Teles, P.; Belchior Batista Das Chagas, E.; Carvalho, W.; Chinellato, J.; Coelho, E.; Da Costa, E. M.; Da Silveira, G. G.; De Jesus Damiao, D.; Fonseca De Souza, S.; Huertas Guativa, L. M.; Malbouisson, H.; Melo De Almeida, M.; Mora Herrera, C.; Mundim, L.; Nogima, H.; Sanchez Rosas, L. J.; Santoro, A.; Sznajder, A.; Thiel, M.; Tonelli Manganote, E. J.; Torres Da Silva De Araujo, F.; Vilela Pereira, A.; Ahuja, S.; Bernardes, C. A.; Fernandez Perez Tomei, T. R.; Gregores, E. M.; Mercadante, P. G.; Novaes, S. F.; Padula, Sandra S.; Romero Abad, D.; Ruiz Vargas, J. C.; Aleksandrov, A.; Hadjiiska, R.; Iaydjiev, P.; Misheva, M.; Rodozov, M.; Shopova, M.; Sultanov, G.; Dimitrov, A.; Glushkov, I.; Litov, L.; Pavlov, B.; Petkov, P.; Fang, W.; Gao, X.; Yuan, L.; Ahmad, M.; Bian, J. G.; Chen, G. M.; Chen, H. S.; Chen, M.; Chen, Y.; Jiang, C. H.; Leggat, D.; Liao, H.; Liu, Z.; Romeo, F.; Shaheen, S. M.; Spiezia, A.; Tao, J.; Wang, C.; Wang, Z.; Yazgan, E.; Zhang, H.; Zhang, S.; Zhao, J.; Ban, Y.; Chen, G.; Li, Q.; Liu, S.; Mao, Y.; Qian, S. J.; Wang, D.; Xu, Z.; Avila, C.; Cabrera, A.; Chaparro Sierra, L. F.; Florez, C.; González Hernández, C. F.; Ruiz Alvarez, J. D.; Courbon, B.; Godinovic, N.; Lelas, D.; Puljak, I.; Ribeiro Cipriano, P. M.; Sculac, T.; Antunovic, Z.; Kovac, M.; Brigljevic, V.; Ferencek, D.; Kadija, K.; Mesic, B.; Starodumov, A.; Susa, T.; Ather, M. W.; Attikis, A.; Mavromanolakis, G.; Mousa, J.; Nicolaou, C.; Ptochos, F.; Razis, P. A.; Rykaczewski, H.; Finger, M.; Finger, M.; Carrera Jarrin, E.; Assran, Y.; Mahmoud, M. A.; Mahrous, A.; Dewanjee, R. 
K.; Kadastik, M.; Perrini, L.; Raidal, M.; Tiko, A.; Veelken, C.; Eerola, P.; Kirschenmann, H.; Pekkanen, J.; Voutilainen, M.; Järvinen, T.; Karimäki, V.; Kinnunen, R.; Lampén, T.; Lassila-Perini, K.; Lehti, S.; Lindén, T.; Luukka, P.; Tuominen, E.; Tuominiemi, J.; Talvitie, J.; Tuuva, T.; Besancon, M.; Couderc, F.; Dejardin, M.; Denegri, D.; Faure, J. L.; Ferri, F.; Ganjour, S.; Ghosh, S.; Givernaud, A.; Gras, P.; Hamel de Monchenault, G.; Jarry, P.; Kucher, I.; Leloup, C.; Locci, E.; Machet, M.; Malcles, J.; Negro, G.; Rander, J.; Rosowsky, A.; Sahin, M. Ö.; Titov, M.; Abdulsalam, A.; Amendola, C.; Antropov, I.; Baffioni, S.; Beaudette, F.; Busson, P.; Cadamuro, L.; Charlot, C.; Granier de Cassagnac, R.; Jo, M.; Lisniak, S.; Lobanov, A.; Martin Blanco, J.; Nguyen, M.; Ochando, C.; Ortona, G.; Paganini, P.; Pigard, P.; Salerno, R.; Sauvan, J. B.; Sirois, Y.; Stahl Leiton, A. G.; Strebler, T.; Yilmaz, Y.; Zabi, A.; Zghiche, A.; Agram, J.-L.; Andrea, J.; Bloch, D.; Brom, J.-M.; Buttignol, M.; Chabert, E. C.; Chanon, N.; Collard, C.; Conte, E.; Coubez, X.; Fontaine, J.-C.; Gelé, D.; Goerlach, U.; Jansová, M.; Le Bihan, A.-C.; Tonon, N.; Van Hove, P.; Gadrat, S.; Beauceron, S.; Bernet, C.; Boudoul, G.; Chierici, R.; Contardo, D.; Depasse, P.; El Mamouni, H.; Fay, J.; Finco, L.; Gascon, S.; Gouzevitch, M.; Grenier, G.; Ille, B.; Lagarde, F.; Laktineh, I. B.; Lethuillier, M.; Mirabito, L.; Pequegnot, A. L.; Perries, S.; Popov, A.; Sordini, V.; Vander Donckt, M.; Viret, S.; Khvedelidze, A.; Tsamalaidze, Z.; Autermann, C.; Feld, L.; Kiesel, M. K.; Klein, K.; Lipinski, M.; Preuten, M.; Schomakers, C.; Schulz, J.; Verlage, T.; Zhukov, V.; Albert, A.; Dietz-Laursonn, E.; Duchardt, D.; Endres, M.; Erdmann, M.; Erdweg, S.; Esch, T.; Fischer, R.; Güth, A.; Hamer, M.; Hebbeker, T.; Heidemann, C.; Hoepfner, K.; Knutzen, S.; Merschmeyer, M.; Meyer, A.; Millet, P.; Mukherjee, S.; Pook, T.; Radziej, M.; Reithler, H.; Rieger, M.; Scheuch, F.; Teyssier, D.; Thüer, S.; Flügge, G.; Kargoll, B.; Kress, T.; Künsken, A.; Lingemann, J.; Müller, T.; Nehrkorn, A.; Nowack, A.; Pistone, C.; Pooth, O.; Stahl, A.; Aldaya Martin, M.; Arndt, T.; Asawatangtrakuldee, C.; Beernaert, K.; Behnke, O.; Behrens, U.; Bermúdez Martínez, A.; Bin Anuar, A. A.; Borras, K.; Botta, V.; Campbell, A.; Connor, P.; Contreras-Campana, C.; Costanza, F.; Diez Pardos, C.; Eckerlin, G.; Eckstein, D.; Eichhorn, T.; Eren, E.; Gallo, E.; Garay Garcia, J.; Geiser, A.; Gizhko, A.; Grados Luyando, J. M.; Grohsjean, A.; Gunnellini, P.; Guthoff, M.; Harb, A.; Hauk, J.; Hempel, M.; Jung, H.; Kalogeropoulos, A.; Kasemann, M.; Keaveney, J.; Kleinwort, C.; Korol, I.; Krücker, D.; Lange, W.; Lelek, A.; Lenz, T.; Leonard, J.; Lipka, K.; Lohmann, W.; Mankel, R.; Melzer-Pellmann, I.-A.; Meyer, A. B.; Mittag, G.; Mnich, J.; Mussgiller, A.; Ntomari, E.; Pitzl, D.; Raspereza, A.; Roland, B.; Savitskyi, M.; Saxena, P.; Shevchenko, R.; Spannagel, S.; Stefaniuk, N.; Van Onsem, G. P.; Walsh, R.; Wen, Y.; Wichmann, K.; Wissing, C.; Zenaiev, O.; Aggleton, R.; Bein, S.; Blobel, V.; Centis Vignali, M.; Dreyer, T.; Garutti, E.; Gonzalez, D.; Haller, J.; Hinzmann, A.; Hoffmann, M.; Karavdina, A.; Klanner, R.; Kogler, R.; Kovalchuk, N.; Kurz, S.; Lapsien, T.; Marchesini, I.; Marconi, D.; Meyer, M.; Niedziela, M.; Nowatschin, D.; Pantaleo, F.; Peiffer, T.; Perieanu, A.; Scharf, C.; Schleper, P.; Schmidt, A.; Schumann, S.; Schwandt, J.; Sonneveld, J.; Stadie, H.; Steinbrück, G.; Stober, F. 
M.; Stöver, M.; Tholen, H.; Troendle, D.; Usai, E.; Vanelderen, L.; Vanhoefer, A.; Vormwald, B.; Akbiyik, M.; Barth, C.; Baur, S.; Butz, E.; Caspart, R.; Chwalek, T.; Colombo, F.; De Boer, W.; Dierlamm, A.; Freund, B.; Friese, R.; Giffels, M.; Haitz, D.; Harrendorf, M. A.; Hartmann, F.; Heindl, S. M.; Husemann, U.; Kassel, F.; Kudella, S.; Mildner, H.; Mozer, M. U.; Müller, Th.; Plagge, M.; Quast, G.; Rabbertz, K.; Schröder, M.; Shvetsov, I.; Sieber, G.; Simonis, H. J.; Ulrich, R.; Wayand, S.; Weber, M.; Weiler, T.; Williamson, S.; Wöhrmann, C.; Wolf, R.; Anagnostou, G.; Daskalakis, G.; Geralis, T.; Giakoumopoulou, V. A.; Kyriakis, A.; Loukas, D.; Topsis-Giotis, I.; Karathanasis, G.; Kesisoglou, S.; Panagiotou, A.; Saoulidou, N.; Kousouris, K.; Evangelou, I.; Foudas, C.; Kokkas, P.; Mallios, S.; Manthos, N.; Papadopoulos, I.; Paradas, E.; Strologas, J.; Triantis, F. A.; Csanad, M.; Filipovic, N.; Pasztor, G.; Surányi, O.; Veres, G. I.; Bencze, G.; Hajdu, C.; Horvath, D.; Hunyadi, Á.; Sikler, F.; Veszpremi, V.; Zsigmond, A. J.; Beni, N.; Czellar, S.; Karancsi, J.; Makovec, A.; Molnar, J.; Szillasi, Z.; Bartók, M.; Raics, P.; Trocsanyi, Z. L.; Ujvari, B.; Choudhury, S.; Komaragiri, J. R.; Bahinipati, S.; Bhowmik, S.; Mal, P.; Mandal, K.; Nayak, A.; Sahoo, D. K.; Sahoo, N.; Swain, S. K.; Bansal, S.; Beri, S. B.; Bhatnagar, V.; Chawla, R.; Dhingra, N.; Kalsi, A. K.; Kaur, A.; Kaur, M.; Kaur, S.; Kumar, R.; Kumari, P.; Mehta, A.; Singh, J. B.; Walia, G.; Kumar, Ashok; Shah, Aashaq; Bhardwaj, A.; Chauhan, S.; Choudhary, B. C.; Garg, R. B.; Keshri, S.; Kumar, A.; Malhotra, S.; Naimuddin, M.; Ranjan, K.; Sharma, R.; Bhardwaj, R.; Bhattacharya, R.; Bhattacharya, S.; Bhawandeep, U.; Dey, S.; Dutt, S.; Dutta, S.; Ghosh, S.; Majumdar, N.; Modak, A.; Mondal, K.; Mukhopadhyay, S.; Nandan, S.; Purohit, A.; Roy, A.; Roy, D.; Roy Chowdhury, S.; Sarkar, S.; Sharan, M.; Thakur, S.; Behera, P. K.; Chudasama, R.; Dutta, D.; Jha, V.; Kumar, V.; Mohanty, A. K.; Netrakanti, P. K.; Pant, L. M.; Shukla, P.; Topkar, A.; Aziz, T.; Dugad, S.; Mahakud, B.; Mitra, S.; Mohanty, G. B.; Sur, N.; Sutar, B.; Banerjee, S.; Bhattacharya, S.; Chatterjee, S.; Das, P.; Guchait, M.; Jain, Sa.; Kumar, S.; Maity, M.; Majumder, G.; Mazumdar, K.; Sarkar, T.; Wickramage, N.; Chauhan, S.; Dube, S.; Hegde, V.; Kapoor, A.; Kothekar, K.; Pandey, S.; Rane, A.; Sharma, S.; Chenarani, S.; Eskandari Tadavani, E.; Etesami, S. M.; Khakzad, M.; Mohammadi Najafabadi, M.; Naseri, M.; Paktinat Mehdiabadi, S.; Rezaei Hosseinabadi, F.; Safarzadeh, B.; Zeinali, M.; Felcini, M.; Grunewald, M.; Abbrescia, M.; Calabria, C.; Colaleo, A.; Creanza, D.; Cristella, L.; De Filippis, N.; De Palma, M.; Errico, F.; Fiore, L.; Iaselli, G.; Lezki, S.; Maggi, G.; Maggi, M.; Miniello, G.; My, S.; Nuzzo, S.; Pompili, A.; Pugliese, G.; Radogna, R.; Ranieri, A.; Selvaggi, G.; Sharma, A.; Silvestris, L.; Venditti, R.; Verwilligen, P.; Abbiendi, G.; Battilana, C.; Bonacorsi, D.; Borgonovi, L.; Braibant-Giacomelli, S.; Campanini, R.; Capiluppi, P.; Castro, A.; Cavallo, F. R.; Chhibra, S. S.; Codispoti, G.; Cuffiani, M.; Dallavalle, G. M.; Fabbri, F.; Fanfani, A.; Fasanella, D.; Giacomelli, P.; Grandi, C.; Guiducci, L.; Marcellini, S.; Masetti, G.; Montanari, A.; Navarria, F. L.; Perrotta, A.; Rossi, A. M.; Rovelli, T.; Siroli, G. 
P.; Tosi, N.; Albergo, S.; Costa, S.; Di Mattia, A.; Giordano, F.; Potenza, R.; Tricomi, A.; Tuve, C.; Barbagli, G.; Chatterjee, K.; Ciulli, V.; Civinini, C.; D'Alessandro, R.; Focardi, E.; Lenzi, P.; Meschini, M.; Paoletti, S.; Russo, L.; Sguazzoni, G.; Strom, D.; Viliani, L.; Benussi, L.; Bianco, S.; Fabbri, F.; Piccolo, D.; Primavera, F.; Calvelli, V.; Ferro, F.; Robutti, E.; Tosi, S.; Benaglia, A.; Brianza, L.; Brivio, F.; Ciriolo, V.; Dinardo, M. E.; Fiorendi, S.; Gennai, S.; Ghezzi, A.; Govoni, P.; Malberti, M.; Malvezzi, S.; Manzoni, R. A.; Menasce, D.; Moroni, L.; Paganoni, M.; Pauwels, K.; Pedrini, D.; Pigazzini, S.; Ragazzi, S.; Redaelli, N.; Tabarelli de Fatis, T.; Buontempo, S.; Cavallo, N.; Di Guida, S.; Fabozzi, F.; Fienga, F.; Iorio, A. O. M.; Khan, W. A.; Lista, L.; Meola, S.; Paolucci, P.; Sciacca, C.; Thyssen, F.; Azzi, P.; Bacchetta, N.; Benato, L.; Biasotto, M.; Bisello, D.; Boletti, A.; Carlin, R.; Carvalho Antunes De Oliveira, A.; Checchia, P.; Dall'Osso, M.; De Castro Manzano, P.; Dorigo, T.; Gasparini, U.; Gozzelino, A.; Lacaprara, S.; Lujan, P.; Margoni, M.; Meneguzzo, A. T.; Pozzobon, N.; Ronchese, P.; Rossin, R.; Simonetto, F.; Torassa, E.; Ventura, S.; Zanetti, M.; Zotto, P.; Braghieri, A.; Magnani, A.; Montagna, P.; Ratti, S. P.; Re, V.; Ressegotti, M.; Riccardi, C.; Salvini, P.; Vai, I.; Vitulo, P.; Alunni Solestizi, L.; Biasini, M.; Bilei, G. M.; Cecchi, C.; Ciangottini, D.; Fanò, L.; Lariccia, P.; Leonardi, R.; Manoni, E.; Mantovani, G.; Mariani, V.; Menichelli, M.; Rossi, A.; Santocchia, A.; Spiga, D.; Androsov, K.; Azzurri, P.; Bagliesi, G.; Boccali, T.; Borrello, L.; Castaldi, R.; Ciocci, M. A.; Dell'Orso, R.; Fedi, G.; Giannini, L.; Giassi, A.; Grippo, M. T.; Ligabue, F.; Lomtadze, T.; Manca, E.; Mandorli, G.; Martini, L.; Messineo, A.; Palla, F.; Rizzi, A.; Savoy-Navarro, A.; Spagnolo, P.; Tenchini, R.; Tonelli, G.; Venturi, A.; Verdini, P. G.; Barone, L.; Cavallari, F.; Cipriani, M.; Daci, N.; Del Re, D.; Di Marco, E.; Diemoz, M.; Gelli, S.; Longo, E.; Margaroli, F.; Marzocchi, B.; Meridiani, P.; Organtini, G.; Paramatti, R.; Preiato, F.; Rahatlou, S.; Rovelli, C.; Santanastasio, F.; Amapane, N.; Arcidiacono, R.; Argiro, S.; Arneodo, M.; Bartosik, N.; Bellan, R.; Biino, C.; Cartiglia, N.; Cenna, F.; Costa, M.; Covarelli, R.; Degano, A.; Demaria, N.; Kiani, B.; Mariotti, C.; Maselli, S.; Migliore, E.; Monaco, V.; Monteil, E.; Monteno, M.; Obertino, M. M.; Pacher, L.; Pastrone, N.; Pelliccioni, M.; Pinna Angioni, G. L.; Ravera, F.; Romero, A.; Ruspa, M.; Sacchi, R.; Shchelina, K.; Sola, V.; Solano, A.; Staiano, A.; Traczyk, P.; Belforte, S.; Casarsa, M.; Cossutti, F.; Della Ricca, G.; Zanetti, A.; Kim, D. H.; Kim, G. N.; Kim, M. S.; Lee, J.; Lee, S.; Lee, S. W.; Moon, C. S.; Oh, Y. D.; Sekmen, S.; Son, D. C.; Yang, Y. C.; Lee, A.; Kim, H.; Moon, D. H.; Oh, G.; Brochero Cifuentes, J. A.; Goh, J.; Kim, T. J.; Cho, S.; Choi, S.; Go, Y.; Gyun, D.; Ha, S.; Hong, B.; Jo, Y.; Kim, Y.; Lee, K.; Lee, K. S.; Lee, S.; Lim, J.; Park, S. K.; Roh, Y.; Almond, J.; Kim, J.; Kim, J. S.; Lee, H.; Lee, K.; Nam, K.; Oh, S. B.; Radburn-Smith, B. C.; Seo, S. h.; Yang, U. K.; Yoo, H. D.; Yu, G. B.; Choi, M.; Kim, H.; Kim, J. H.; Lee, J. S. H.; Park, I. C.; Choi, Y.; Hwang, C.; Lee, J.; Yu, I.; Dudenas, V.; Juodagalvis, A.; Vaitkus, J.; Ahmed, I.; Ibrahim, Z. A.; Md Ali, M. A. B.; Mohamad Idris, F.; Wan Abdullah, W. A. T.; Yusli, M. N.; Zolkapli, Z.; Reyes-Almanza, R.; Ramirez-Sanchez, G.; Duran-Osuna, M. 
C.; Castilla-Valdez, H.; De La Cruz-Burelo, E.; Heredia-De La Cruz, I.; Rabadan-Trejo, R. I.; Lopez-Fernandez, R.; Mejia Guisao, J.; Sanchez-Hernandez, A.; Carrillo Moreno, S.; Oropeza Barrera, C.; Vazquez Valencia, F.; Pedraza, I.; Salazar Ibarguen, H. A.; Uribe Estrada, C.; Morelos Pineda, A.; Krofcheck, D.; Butler, P. H.; Ahmad, A.; Ahmad, M.; Hassan, Q.; Hoorani, H. R.; Saddique, A.; Shah, M. A.; Shoaib, M.; Waqas, M.; Bialkowska, H.; Bluj, M.; Boimska, B.; Frueboes, T.; Górski, M.; Kazana, M.; Nawrocki, K.; Szleper, M.; Zalewski, P.; Bunkowski, K.; Byszuk, A.; Doroba, K.; Kalinowski, A.; Konecki, M.; Krolikowski, J.; Misiura, M.; Olszewski, M.; Pyskir, A.; Walczak, M.; Bargassa, P.; Beirão Da Cruz E Silva, C.; Di Francesco, A.; Faccioli, P.; Galinhas, B.; Gallinaro, M.; Hollar, J.; Leonardo, N.; Lloret Iglesias, L.; Nemallapudi, M. V.; Seixas, J.; Strong, G.; Toldaiev, O.; Vadruccio, D.; Varela, J.; Afanasiev, S.; Bunin, P.; Gavrilenko, M.; Golutvin, I.; Gorbunov, I.; Kamenev, A.; Karjavin, V.; Lanev, A.; Malakhov, A.; Matveev, V.; Palichik, V.; Perelygin, V.; Shmatov, S.; Shulha, S.; Skatchkov, N.; Smirnov, V.; Voytishin, N.; Zarubin, A.; Ivanov, Y.; Kim, V.; Kuznetsova, E.; Levchenko, P.; Murzin, V.; Oreshkin, V.; Smirnov, I.; Sulimov, V.; Uvarov, L.; Vavilov, S.; Vorobyev, A.; Andreev, Yu.; Dermenev, A.; Gninenko, S.; Golubev, N.; Karneyeu, A.; Kirsanov, M.; Krasnikov, N.; Pashenkov, A.; Tlisov, D.; Toropin, A.; Epshteyn, V.; Gavrilov, V.; Lychkovskaya, N.; Popov, V.; Pozdnyakov, I.; Safronov, G.; Spiridonov, A.; Stepennov, A.; Toms, M.; Vlasov, E.; Zhokin, A.; Aushev, T.; Bylinkin, A.; Chistov, R.; Danilov, M.; Parygin, P.; Philippov, D.; Polikarpov, S.; Tarkovskii, E.; Andreev, V.; Azarkin, M.; Dremin, I.; Kirakosyan, M.; Terkulov, A.; Baskakov, A.; Belyaev, A.; Boos, E.; Dubinin, M.; Dudko, L.; Ershov, A.; Gribushin, A.; Klyukhin, V.; Kodolova, O.; Lokhtin, I.; Miagkov, I.; Obraztsov, S.; Petrushanko, S.; Savrin, V.; Snigirev, A.; Blinov, V.; Skovpen, Y.; Shtol, D.; Azhgirey, I.; Bayshev, I.; Bitioukov, S.; Elumakhov, D.; Kachanov, V.; Kalinin, A.; Konstantinov, D.; Mandrik, P.; Petrov, V.; Ryutin, R.; Sobol, A.; Troshin, S.; Tyurin, N.; Uzunian, A.; Volkov, A.; Adzic, P.; Cirkovic, P.; Devetak, D.; Dordevic, M.; Milosevic, J.; Rekovic, V.; Alcaraz Maestre, J.; Barrio Luna, M.; Cerrada, M.; Colino, N.; De La Cruz, B.; Delgado Peris, A.; Escalante Del Valle, A.; Fernandez Bedoya, C.; Fernández Ramos, J. P.; Flix, J.; Fouz, M. C.; Garcia-Abia, P.; Gonzalez Lopez, O.; Goy Lopez, S.; Hernandez, J. M.; Josa, M. I.; Moran, D.; Pérez-Calero Yzquierdo, A.; Puerta Pelayo, J.; Quintario Olmeda, A.; Redondo, I.; Romero, L.; Soares, M. S.; Álvarez Fernández, A.; Albajar, C.; de Trocóniz, J. F.; Missiroli, M.; Cuevas, J.; Erice, C.; Fernandez Menendez, J.; Gonzalez Caballero, I.; González Fernández, J. R.; Palencia Cortezon, E.; Sanchez Cruz, S.; Vischia, P.; Vizan Garcia, J. M.; Cabrillo, I. J.; Calderon, A.; Chazin Quero, B.; Curras, E.; Duarte Campderros, J.; Fernandez, M.; Garcia-Ferrero, J.; Gomez, G.; Lopez Virto, A.; Marco, J.; Martinez Rivero, C.; Martinez Ruiz del Arbol, P.; Matorras, F.; Piedra Gomez, J.; Rodrigo, T.; Ruiz-Jimeno, A.; Scodellaro, L.; Trevisani, N.; Vila, I.; Vilar Cortabitarte, R.; Abbaneo, D.; Akgun, B.; Auffray, E.; Baillon, P.; Ball, A. 
H.; Barney, D.; Bianco, M.; Bloch, P.; Bocci, A.; Botta, C.; Camporesi, T.; Castello, R.; Cepeda, M.; Cerminara, G.; Chapon, E.; Chen, Y.; d'Enterria, D.; Dabrowski, A.; Daponte, V.; David, A.; De Gruttola, M.; De Roeck, A.; Deelen, N.; Dobson, M.; du Pree, T.; Dünser, M.; Dupont, N.; Elliott-Peisert, A.; Everaerts, P.; Fallavollita, F.; Franzoni, G.; Fulcher, J.; Funk, W.; Gigi, D.; Gilbert, A.; Gill, K.; Glege, F.; Gulhan, D.; Harris, P.; Hegeman, J.; Innocente, V.; Jafari, A.; Janot, P.; Karacheban, O.; Kieseler, J.; Knünz, V.; Kornmayer, A.; Kortelainen, M. J.; Krammer, M.; Lange, C.; Lecoq, P.; Lourenço, C.; Lucchini, M. T.; Malgeri, L.; Mannelli, M.; Martelli, A.; Meijers, F.; Merlin, J. A.; Mersi, S.; Meschi, E.; Milenovic, P.; Moortgat, F.; Mulders, M.; Neugebauer, H.; Ngadiuba, J.; Orfanelli, S.; Orsini, L.; Pape, L.; Perez, E.; Peruzzi, M.; Petrilli, A.; Petrucciani, G.; Pfeiffer, A.; Pierini, M.; Rabady, D.; Racz, A.; Reis, T.; Rolandi, G.; Rovere, M.; Sakulin, H.; Schäfer, C.; Schwick, C.; Seidel, M.; Selvaggi, M.; Sharma, A.; Silva, P.; Sphicas, P.; Stakia, A.; Steggemann, J.; Stoye, M.; Tosi, M.; Treille, D.; Triossi, A.; Tsirou, A.; Veckalns, V.; Verweij, M.; Zeuner, W. D.; Bertl, W.; Caminada, L.; Deiters, K.; Erdmann, W.; Horisberger, R.; Ingram, Q.; Kaestli, H. C.; Kotlinski, D.; Langenegger, U.; Rohe, T.; Wiederkehr, S. A.; Backhaus, M.; Bäni, L.; Berger, P.; Bianchini, L.; Casal, B.; Dissertori, G.; Dittmar, M.; Donegà, M.; Dorfer, C.; Grab, C.; Heidegger, C.; Hits, D.; Hoss, J.; Kasieczka, G.; Klijnsma, T.; Lustermann, W.; Mangano, B.; Marionneau, M.; Meinhard, M. T.; Meister, D.; Micheli, F.; Musella, P.; Nessi-Tedaldi, F.; Pandolfi, F.; Pata, J.; Pauss, F.; Perrin, G.; Perrozzi, L.; Quittnat, M.; Reichmann, M.; Sanz Becerra, D. A.; Schönenberger, M.; Shchutska, L.; Tavolaro, V. R.; Theofilatos, K.; Vesterbacka Olsson, M. L.; Wallny, R.; Zhu, D. H.; Aarrestad, T. K.; Amsler, C.; Canelli, M. F.; De Cosa, A.; Del Burgo, R.; Donato, S.; Galloni, C.; Hreus, T.; Kilminster, B.; Pinna, D.; Rauco, G.; Robmann, P.; Salerno, D.; Schweiger, K.; Seitz, C.; Takahashi, Y.; Zucchetta, A.; Candelise, V.; Doan, T. H.; Jain, Sh.; Khurana, R.; Kuo, C. M.; Lin, W.; Pozdnyakov, A.; Yu, S. S.; Kumar, Arun; Chang, P.; Chao, Y.; Chen, K. F.; Chen, P. H.; Fiori, F.; Hou, W.-S.; Hsiung, Y.; Liu, Y. F.; Lu, R.-S.; Paganis, E.; Psallidas, A.; Steen, A.; Tsai, J. f.; Asavapibhop, B.; Kovitanggoon, K.; Singh, G.; Srimanobhas, N.; Boran, F.; Cerci, S.; Damarseckin, S.; Demiroglu, Z. S.; Dozen, C.; Dumanoglu, I.; Girgis, S.; Gokbulut, G.; Guler, Y.; Hos, I.; Kangal, E. E.; Kara, O.; Kayis Topaksu, A.; Kiminsu, U.; Oglakci, M.; Onengut, G.; Ozdemir, K.; Sunar Cerci, D.; Tali, B.; Turkcapar, S.; Zorbakir, I. S.; Zorbilmez, C.; Bilin, B.; Karapinar, G.; Ocalan, K.; Yalvac, M.; Zeyrek, M.; Gülmez, E.; Kaya, M.; Kaya, O.; Tekten, S.; Yetkin, E. A.; Agaras, M. N.; Atay, S.; Cakir, A.; Cankocak, K.; Grynyov, B.; Levchuk, L.; Ball, F.; Beck, L.; Brooke, J. J.; Burns, D.; Clement, E.; Cussans, D.; Davignon, O.; Flacher, H.; Goldstein, J.; Heath, G. P.; Heath, H. F.; Jacob, J.; Kreczko, L.; Newbold, D. M.; Paramesvaran, S.; Sakuma, T.; Seif El Nasr-storey, S.; Smith, D.; Smith, V. J.; Bell, K. W.; Belyaev, A.; Brew, C.; Brown, R. M.; Calligaris, L.; Cieri, D.; Cockerill, D. J. A.; Coughlan, J. A.; Harder, K.; Harper, S.; Olaiya, E.; Petyt, D.; Shepherd-Themistocleous, C. H.; Thea, A.; Tomalin, I. 
R.; Williams, T.; Auzinger, G.; Bainbridge, R.; Borg, J.; Breeze, S.; Buchmuller, O.; Bundock, A.; Casasso, S.; Citron, M.; Colling, D.; Corpe, L.; Dauncey, P.; Davies, G.; De Wit, A.; Della Negra, M.; Di Maria, R.; Elwood, A.; Haddad, Y.; Hall, G.; Iles, G.; James, T.; Lane, R.; Laner, C.; Lyons, L.; Magnan, A.-M.; Malik, S.; Mastrolorenzo, L.; Matsushita, T.; Nash, J.; Nikitenko, A.; Palladino, V.; Pesaresi, M.; Raymond, D. M.; Richards, A.; Rose, A.; Scott, E.; Seez, C.; Shtipliyski, A.; Summers, S.; Tapper, A.; Uchida, K.; Vazquez Acosta, M.; Virdee, T.; Wardle, N.; Winterbottom, D.; Wright, J.; Zenz, S. C.; Cole, J. E.; Hobson, P. R.; Khan, A.; Kyberd, P.; Reid, I. D.; Symonds, P.; Teodorescu, L.; Turner, M.; Zahid, S.; Borzou, A.; Call, K.; Dittmann, J.; Hatakeyama, K.; Liu, H.; Pastika, N.; Smith, C.; Bartek, R.; Dominguez, A.; Buccilli, A.; Cooper, S. I.; Henderson, C.; Rumerio, P.; West, C.; Arcaro, D.; Avetisyan, A.; Bose, T.; Gastler, D.; Rankin, D.; Richardson, C.; Rohlf, J.; Sulak, L.; Zou, D.; Benelli, G.; Cutts, D.; Garabedian, A.; Hadley, M.; Hakala, J.; Heintz, U.; Hogan, J. M.; Kwok, K. H. M.; Laird, E.; Landsberg, G.; Lee, J.; Mao, Z.; Narain, M.; Pazzini, J.; Piperov, S.; Sagir, S.; Syarif, R.; Yu, D.; Band, R.; Brainerd, C.; Burns, D.; Calderon De La Barca Sanchez, M.; Chertok, M.; Conway, J.; Conway, R.; Cox, P. T.; Erbacher, R.; Flores, C.; Funk, G.; Gardner, M.; Ko, W.; Lander, R.; Mclean, C.; Mulhearn, M.; Pellett, D.; Pilot, J.; Shalhout, S.; Shi, M.; Smith, J.; Stolp, D.; Tos, K.; Tripathi, M.; Wang, Z.; Bachtis, M.; Bravo, C.; Cousins, R.; Dasgupta, A.; Florent, A.; Hauser, J.; Ignatenko, M.; Mccoll, N.; Regnard, S.; Saltzberg, D.; Schnaible, C.; Valuev, V.; Bouvier, E.; Burt, K.; Clare, R.; Ellison, J.; Gary, J. W.; Ghiasi Shirazi, S. M. A.; Hanson, G.; Heilman, J.; Kennedy, E.; Lacroix, F.; Long, O. R.; Olmedo Negrete, M.; Paneva, M. I.; Si, W.; Wang, L.; Wei, H.; Wimpenny, S.; Yates, B. R.; Branson, J. G.; Cittolin, S.; Derdzinski, M.; Gerosa, R.; Gilbert, D.; Hashemi, B.; Holzner, A.; Klein, D.; Kole, G.; Krutelyov, V.; Letts, J.; Macneill, I.; Masciovecchio, M.; Olivito, D.; Padhi, S.; Pieri, M.; Sani, M.; Sharma, V.; Simon, S.; Tadel, M.; Vartak, A.; Wasserbaech, S.; Wood, J.; Würthwein, F.; Yagil, A.; Zevi Della Porta, G.; Amin, N.; Bhandari, R.; Bradmiller-Feld, J.; Campagnari, C.; Dishaw, A.; Dutta, V.; Franco Sevilla, M.; George, C.; Golf, F.; Gouskos, L.; Gran, J.; Heller, R.; Incandela, J.; Mullin, S. D.; Ovcharova, A.; Qu, H.; Richman, J.; Stuart, D.; Suarez, I.; Yoo, J.; Anderson, D.; Bendavid, J.; Bornheim, A.; Lawhorn, J. M.; Newman, H. B.; Nguyen, T.; Pena, C.; Spiropulu, M.; Vlimant, J. R.; Xie, S.; Zhang, Z.; Zhu, R. Y.; Andrews, M. B.; Ferguson, T.; Mudholkar, T.; Paulini, M.; Russ, J.; Sun, M.; Vogel, H.; Vorobiev, I.; Weinberg, M.; Cumalat, J. P.; Ford, W. T.; Jensen, F.; Johnson, A.; Krohn, M.; Leontsinis, S.; Mulholland, T.; Stenson, K.; Wagner, S. R.; Alexander, J.; Chaves, J.; Chu, J.; Dittmer, S.; Mcdermott, K.; Mirman, N.; Patterson, J. R.; Quach, D.; Rinkevicius, A.; Ryd, A.; Skinnari, L.; Soffi, L.; Tan, S. M.; Tao, Z.; Thom, J.; Tucker, J.; Wittich, P.; Zientek, M.; Abdullin, S.; Albrow, M.; Alyari, M.; Apollinari, G.; Apresyan, A.; Apyan, A.; Banerjee, S.; Bauerdick, L. A. T.; Beretvas, A.; Berryhill, J.; Bhat, P. C.; Bolla, G.; Burkett, K.; Butler, J. N.; Canepa, A.; Cerati, G. B.; Cheung, H. W. K.; Chlebana, F.; Cremonesi, M.; Duarte, J.; Elvira, V. 
D.; Freeman, J.; Gecse, Z.; Gottschalk, E.; Gray, L.; Green, D.; Grünendahl, S.; Gutsche, O.; Harris, R. M.; Hasegawa, S.; Hirschauer, J.; Hu, Z.; Jayatilaka, B.; Jindariani, S.; Johnson, M.; Joshi, U.; Klima, B.; Kreis, B.; Lammel, S.; Lincoln, D.; Lipton, R.; Liu, M.; Liu, T.; Lopes De Sá, R.; Lykken, J.; Maeshima, K.; Magini, N.; Marraffino, J. M.; Mason, D.; McBride, P.; Merkel, P.; Mrenna, S.; Nahn, S.; O'Dell, V.; Pedro, K.; Prokofyev, O.; Rakness, G.; Ristori, L.; Schneider, B.; Sexton-Kennedy, E.; Soha, A.; Spalding, W. J.; Spiegel, L.; Stoynev, S.; Strait, J.; Strobbe, N.; Taylor, L.; Tkaczyk, S.; Tran, N. V.; Uplegger, L.; Vaandering, E. W.; Vernieri, C.; Verzocchi, M.; Vidal, R.; Wang, M.; Weber, H. A.; Whitbeck, A.; Acosta, D.; Avery, P.; Bortignon, P.; Bourilkov, D.; Brinkerhoff, A.; Carnes, A.; Carver, M.; Curry, D.; Field, R. D.; Furic, I. K.; Gleyzer, S. V.; Joshi, B. M.; Konigsberg, J.; Korytov, A.; Kotov, K.; Ma, P.; Matchev, K.; Mei, H.; Mitselmakher, G.; Rank, D.; Shi, K.; Sperka, D.; Terentyev, N.; Thomas, L.; Wang, J.; Wang, S.; Yelton, J.; Joshi, Y. R.; Linn, S.; Markowitz, P.; Rodriguez, J. L.; Ackert, A.; Adams, T.; Askew, A.; Hagopian, S.; Hagopian, V.; Johnson, K. F.; Kolberg, T.; Martinez, G.; Perry, T.; Prosper, H.; Saha, A.; Santra, A.; Sharma, V.; Yohay, R.; Baarmand, M. M.; Bhopatkar, V.; Colafranceschi, S.; Hohlmann, M.; Noonan, D.; Roy, T.; Yumiceva, F.; Adams, M. R.; Apanasevich, L.; Berry, D.; Betts, R. R.; Cavanaugh, R.; Chen, X.; Evdokimov, O.; Gerber, C. E.; Hangal, D. A.; Hofman, D. J.; Jung, K.; Kamin, J.; Sandoval Gonzalez, I. D.; Tonjes, M. B.; Trauger, H.; Varelas, N.; Wang, H.; Wu, Z.; Zhang, J.; Bilki, B.; Clarida, W.; Dilsiz, K.; Durgut, S.; Gandrajula, R. P.; Haytmyradov, M.; Khristenko, V.; Merlo, J.-P.; Mermerkaya, H.; Mestvirishvili, A.; Moeller, A.; Nachtman, J.; Ogul, H.; Onel, Y.; Ozok, F.; Penzo, A.; Snyder, C.; Tiras, E.; Wetzel, J.; Yi, K.; Blumenfeld, B.; Cocoros, A.; Eminizer, N.; Fehling, D.; Feng, L.; Gritsan, A. V.; Maksimovic, P.; Mantilla, C.; Roskes, J.; Sarica, U.; Swartz, M.; Xiao, M.; You, C.; Al-bataineh, A.; Baringer, P.; Bean, A.; Boren, S.; Bowen, J.; Castle, J.; Khalil, S.; Kropivnitskaya, A.; Majumder, D.; Mcbrayer, W.; Murray, M.; Royon, C.; Sanders, S.; Schmitz, E.; Tapia Takaki, J. D.; Wang, Q.; Ivanov, A.; Kaadze, K.; Maravin, Y.; Mohammadi, A.; Saini, L. K.; Skhirtladze, N.; Toda, S.; Rebassoo, F.; Wright, D.; Anelli, C.; Baden, A.; Baron, O.; Belloni, A.; Calvert, B.; Eno, S. C.; Feng, Y.; Ferraioli, C.; Hadley, N. J.; Jabeen, S.; Jeng, G. Y.; Kellogg, R. G.; Kunkle, J.; Mignerey, A. C.; Ricci-Tam, F.; Shin, Y. H.; Skuja, A.; Tonwar, S. C.; Abercrombie, D.; Allen, B.; Azzolini, V.; Barbieri, R.; Baty, A.; Bi, R.; Brandt, S.; Busza, W.; Cali, I. A.; D'Alfonso, M.; Demiragli, Z.; Gomez Ceballos, G.; Goncharov, M.; Hsu, D.; Hu, M.; Iiyama, Y.; Innocenti, G. M.; Klute, M.; Kovalskyi, D.; Lai, Y. S.; Lee, Y.-J.; Levin, A.; Luckey, P. D.; Maier, B.; Marini, A. C.; Mcginn, C.; Mironov, C.; Narayanan, S.; Niu, X.; Paus, C.; Roland, C.; Roland, G.; Salfeld-Nebgen, J.; Stephans, G. S. F.; Tatar, K.; Velicanu, D.; Wang, J.; Wang, T. W.; Wyslouch, B.; Benvenuti, A. C.; Chatterjee, R. M.; Evans, A.; Hansen, P.; Hiltbrand, J.; Kalafut, S.; Kubota, Y.; Lesko, Z.; Mans, J.; Nourbakhsh, S.; Ruckstuhl, N.; Rusack, R.; Turkewitz, J.; Wadud, M. A.; Acosta, J. G.; Oliveros, S.; Avdeeva, E.; Bloom, K.; Claes, D. R.; Fangmeier, C.; Gonzalez Suarez, R.; Kamalieddin, R.; Kravchenko, I.; Monroy, J.; Siado, J. E.; Snow, G. 
R.; Stieger, B.; Dolen, J.; Godshalk, A.; Harrington, C.; Iashvili, I.; Nguyen, D.; Parker, A.; Rappoccio, S.; Roozbahani, B.; Alverson, G.; Barberis, E.; Hortiangtham, A.; Massironi, A.; Morse, D. M.; Orimoto, T.; Teixeira De Lima, R.; Trocino, D.; Wood, D.; Bhattacharya, S.; Charaf, O.; Hahn, K. A.; Mucia, N.; Odell, N.; Pollack, B.; Schmitt, M. H.; Sung, K.; Trovato, M.; Velasco, M.; Dev, N.; Hildreth, M.; Hurtado Anampa, K.; Jessop, C.; Karmgard, D. J.; Kellams, N.; Lannon, K.; Loukas, N.; Marinelli, N.; Meng, F.; Mueller, C.; Musienko, Y.; Planer, M.; Reinsvold, A.; Ruchti, R.; Smith, G.; Taroni, S.; Wayne, M.; Wolf, M.; Woodard, A.; Alimena, J.; Antonelli, L.; Bylsma, B.; Durkin, L. S.; Flowers, S.; Francis, B.; Hart, A.; Hill, C.; Ji, W.; Liu, B.; Luo, W.; Puigh, D.; Winer, B. L.; Wulsin, H. W.; Cooperstein, S.; Driga, O.; Elmer, P.; Hardenbrook, J.; Hebda, P.; Higginbotham, S.; Lange, D.; Luo, J.; Marlow, D.; Mei, K.; Ojalvo, I.; Olsen, J.; Palmer, C.; Piroué, P.; Stickland, D.; Tully, C.; Malik, S.; Norberg, S.; Barker, A.; Barnes, V. E.; Das, S.; Folgueras, S.; Gutay, L.; Jha, M. K.; Jones, M.; Jung, A. W.; Khatiwada, A.; Miller, D. H.; Neumeister, N.; Peng, C. C.; Qiu, H.; Schulte, J. F.; Sun, J.; Wang, F.; Xie, W.; Cheng, T.; Parashar, N.; Stupak, J.; Adair, A.; Chen, Z.; Ecklund, K. M.; Freed, S.; Geurts, F. J. M.; Guilbaud, M.; Kilpatrick, M.; Li, W.; Michlin, B.; Northup, M.; Padley, B. P.; Roberts, J.; Rorie, J.; Shi, W.; Tu, Z.; Zabel, J.; Zhang, A.; Bodek, A.; de Barbaro, P.; Demina, R.; Duh, Y. t.; Ferbel, T.; Galanti, M.; Garcia-Bellido, A.; Han, J.; Hindrichs, O.; Khukhunaishvili, A.; Lo, K. H.; Tan, P.; Verzetti, M.; Ciesielski, R.; Goulianos, K.; Mesropian, C.; Agapitos, A.; Chou, J. P.; Gershtein, Y.; Gómez Espinosa, T. A.; Halkiadakis, E.; Heindl, M.; Hughes, E.; Kaplan, S.; Kunnawalkam Elayavalli, R.; Kyriacou, S.; Lath, A.; Montalvo, R.; Nash, K.; Osherson, M.; Saka, H.; Salur, S.; Schnetzer, S.; Sheffield, D.; Somalwar, S.; Stone, R.; Thomas, S.; Thomassen, P.; Walker, M.; Delannoy, A. G.; Foerster, M.; Heideman, J.; Riley, G.; Rose, K.; Spanier, S.; Thapa, K.; Bouhali, O.; Castaneda Hernandez, A.; Celik, A.; Dalchenko, M.; De Mattia, M.; Delgado, A.; Dildick, S.; Eusebi, R.; Gilmore, J.; Huang, T.; Kamon, T.; Mueller, R.; Pakhotin, Y.; Patel, R.; Perloff, A.; Perniè, L.; Rathjens, D.; Safonov, A.; Tatarinov, A.; Ulmer, K. A.; Akchurin, N.; Damgov, J.; De Guio, F.; Dudero, P. R.; Faulkner, J.; Gurpinar, E.; Kunori, S.; Lamichhane, K.; Lee, S. W.; Libeiro, T.; Peltola, T.; Undleeb, S.; Volobouev, I.; Wang, Z.; Greene, S.; Gurrola, A.; Janjam, R.; Johns, W.; Maguire, C.; Melo, A.; Ni, H.; Padeken, K.; Sheldon, P.; Tuo, S.; Velkovska, J.; Xu, Q.; Arenton, M. W.; Barria, P.; Cox, B.; Hirosky, R.; Joyce, M.; Ledovskoy, A.; Li, H.; Neu, C.; Sinthuprasith, T.; Wang, Y.; Wolfe, E.; Xia, F.; Harr, R.; Karchin, P. E.; Poudyal, N.; Sturdy, J.; Thapa, P.; Zaleski, S.; Brodski, M.; Buchanan, J.; Caillol, C.; Dasu, S.; Dodd, L.; Duric, S.; Gomber, B.; Grothe, M.; Herndon, M.; Hervé, A.; Hussain, U.; Klabbers, P.; Lanaro, A.; Levine, A.; Long, K.; Loveless, R.; Polese, G.; Ruggles, T.; Savin, A.; Smith, N.; Smith, W. H.; Taylor, D.; Woods, N.; CMS Collaboration

    2018-02-01

    An inclusive search for the standard model Higgs boson (H) produced with large transverse momentum (pT) and decaying to a bottom quark-antiquark pair (bb̄) is performed using a data set of pp collisions at √s = 13 TeV collected with the CMS experiment at the LHC. The data sample corresponds to an integrated luminosity of 35.9 fb⁻¹. A highly Lorentz-boosted Higgs boson decaying to bb̄ is reconstructed as a single, large-radius jet, and it is identified using jet substructure and dedicated b tagging techniques. The method is validated with Z → bb̄ decays. The Z → bb̄ process is observed for the first time in the single-jet topology with a local significance of 5.1 standard deviations (5.8 expected). For a Higgs boson mass of 125 GeV, an excess of events above the expected background is observed (expected) with a local significance of 1.5 (0.7) standard deviations. The measured cross section times branching fraction for production via gluon fusion of H → bb̄ with reconstructed pT > 450 GeV and in the pseudorapidity range -2.5 < η < 2.5 is 74 ± 48 (stat) +17/−10 (syst) fb, which is consistent within uncertainties with the standard model prediction.
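
    The local significances quoted here are one-sided Gaussian tail probabilities expressed in standard deviations. A minimal sketch of the standard conversion (not part of the record; SciPy is assumed):

        # z sigma (one-sided) <-> p-value, e.g. for the 5.1 sigma Z->bb observation
        from scipy.stats import norm

        def p_value(z):
            return norm.sf(z)        # upper-tail probability of a standard normal

        def significance(p):
            return norm.isf(p)       # inverse survival function

        print(p_value(5.1))          # ~1.7e-7
        print(p_value(1.5))          # ~6.7e-2
        print(significance(1.7e-7))  # recovers ~5.1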

  1. Spectral relative standard deviation: a practical benchmark in metabolomics.

    PubMed

    Parsons, Helen M; Ekman, Drew R; Collette, Timothy W; Viant, Mark R

    2009-03-01

    Metabolomics datasets, by definition, comprise measurements of large numbers of metabolites. Both technical (analytical) and biological factors induce variation within these measurements that is not consistent across all metabolites. Consequently, criteria that draw on all of the detected metabolites are required to assess the reproducibility of metabolomics datasets. Here we calculate spectrum-wide relative standard deviations (RSDs; also termed the coefficient of variation, CV) for ten metabolomics datasets, spanning a variety of sample types from mammals, fish, invertebrates and a cell line, and display them succinctly as boxplots. We demonstrate multiple applications of spectral RSDs for characterising technical as well as inter-individual biological variation: for optimising metabolite extractions, comparing analytical techniques, investigating matrix effects, and comparing biofluids and tissue extracts from single and multiple species for optimising experimental design. Technical variation within metabolomics datasets, recorded using one- and two-dimensional NMR and mass spectrometry, ranges from 1.6 to 20.6% (reported as the median spectral RSD). Inter-individual biological variation is typically larger, ranging from as low as 7.2% for tissue extracts from laboratory-housed rats to 58.4% for fish plasma. In addition, for some of the datasets we confirm that the spectral RSD values are largely invariant across different spectral processing methods, such as baseline correction, normalisation and binning resolution. In conclusion, we propose spectral RSDs, and the median values contained herein, as practical benchmarks for metabolomics studies.
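
    The benchmark statistic itself is straightforward to compute; a minimal sketch, assuming replicate spectra arranged as a samples-by-features intensity matrix (synthetic data stand in for real spectra):

        import numpy as np

        def spectral_rsd(X):
            """Per-feature relative standard deviation (%) across replicates."""
            return 100.0 * X.std(axis=0, ddof=1) / X.mean(axis=0)

        rng = np.random.default_rng(0)
        X = rng.lognormal(mean=2.0, sigma=0.1, size=(10, 500))  # 10 replicates, 500 features
        rsd = spectral_rsd(X)
        print(np.median(rsd))  # the median spectral RSD is the proposed benchmark
        # a boxplot of rsd gives the succinct display described in the abstract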

  2. Thrust vector control of upper stage with a gimbaled thruster during orbit transfer

    NASA Astrophysics Data System (ADS)

    Wang, Zhaohui; Jia, Yinghong; Jin, Lei; Duan, Jiajia

    2016-10-01

    In launching multiple satellites with one vehicle, the main thruster provided by the upper stage is mounted on a two-axis gimbal. During orbit transfer, the thrust vector of this gimbaled thruster (GT) should ideally pass through the mass center of the upper stage and align with the command direction to provide the orbit transfer impulse. In practice, however, this is difficult to achieve. Deviations of the thrust vector from the command direction result in large velocity errors, and deviations of the thrust vector from the upper stage mass center produce large disturbance torques. This paper discusses the thrust vector control (TVC) of the upper stage during orbit transfer. First, the accurate nonlinear coupled kinematic and dynamic equations of the upper stage body, the two-axis gimbal and the GT are derived by treating the upper stage as a multi-body system. Then, a thrust vector control system is proposed that combines a special attitude control of the upper stage with rotation of the thruster gimbal: the gimbal control makes the thrust vector pass through the upper stage mass center, while the attitude control tracks the desired attitude that aligns the thrust vector with the command direction. Finally, the validity of the proposed method is verified through numerical simulations.
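
    Both error sources have a compact geometric form: a thrust misalignment angle produces a velocity-direction error, and an offset of the thrust line of action from the mass center produces the torque τ = r × F. A minimal sketch with illustrative numbers (a small-angle toy calculation, not the paper's multi-body model):

        import numpy as np

        F = 400.0                                # N, illustrative thrust level
        dp, dy = np.radians([1.0, -0.5])         # small pitch/yaw gimbal deviations
        thrust = F * np.array([dy, -dp, 1.0])    # small-angle direction, +z nominal

        # Gimbal pivot position relative to the mass center (m); a lateral
        # offset of a few cm at this thrust level already yields N*m torques.
        r = np.array([0.02, 0.01, -1.5])
        print(np.cross(r, thrust))               # disturbance torque, N*m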

  3. Inclusive Search for a Highly Boosted Higgs Boson Decaying to a Bottom Quark-Antiquark Pair.

    PubMed

    Sirunyan, A M; Tumasyan, A; Adam, W; Ambrogi, F; Asilar, E; Bergauer, T; Brandstetter, J; Brondolin, E; Dragicevic, M; Erö, J; Flechl, M; Friedl, M; Frühwirth, R; Ghete, V M; Grossmann, J; Hrubec, J; Jeitler, M; König, A; Krammer, N; Krätschmer, I; Liko, D; Madlener, T; Mikulec, I; Pree, E; Rad, N; Rohringer, H; Schieck, J; Schöfbeck, R; Spanring, M; Spitzbart, D; Waltenberger, W; Wittmann, J; Wulz, C-E; Zarucki, M; Chekhovsky, V; Dydyshka, Y; Suarez Gonzalez, J; De Wolf, E A; Di Croce, D; Janssen, X; Lauwers, J; Van De Klundert, M; Van Haevermaet, H; Van Mechelen, P; Van Remortel, N; Abu Zeid, S; Blekman, F; D'Hondt, J; De Bruyn, I; De Clercq, J; Deroover, K; Flouris, G; Lontkovskyi, D; Lowette, S; Moortgat, S; Moreels, L; Python, Q; Skovpen, K; Tavernier, S; Van Doninck, W; Van Mulders, P; Van Parijs, I; Beghin, D; Brun, H; Clerbaux, B; De Lentdecker, G; Delannoy, H; Dorney, B; Fasanella, G; Favart, L; Goldouzian, R; Grebenyuk, A; Karapostoli, G; Lenzi, T; Luetic, J; Maerschalk, T; Marinov, A; Randle-Conde, A; Seva, T; Starling, E; Vander Velde, C; Vanlaer, P; Vannerom, D; Yonamine, R; Zenoni, F; Zhang, F; Cimmino, A; Cornelis, T; Dobur, D; Fagot, A; Gul, M; Khvastunov, I; Poyraz, D; Roskas, C; Salva, S; Tytgat, M; Verbeke, W; Zaganidis, N; Bakhshiansohi, H; Bondu, O; Brochet, S; Bruno, G; Caputo, C; Caudron, A; David, P; De Visscher, S; Delaere, C; Delcourt, M; Francois, B; Giammanco, A; Komm, M; Krintiras, G; Lemaitre, V; Magitteri, A; Mertens, A; Musich, M; Piotrzkowski, K; Quertenmont, L; Saggio, A; Vidal Marono, M; Wertz, S; Zobec, J; Beliy, N; Aldá Júnior, W L; Alves, F L; Alves, G A; Brito, L; Correa Martins Junior, M; Hensel, C; Moraes, A; Pol, M E; Rebello Teles, P; Belchior Batista Das Chagas, E; Carvalho, W; Chinellato, J; Coelho, E; Da Costa, E M; Da Silveira, G G; De Jesus Damiao, D; Fonseca De Souza, S; Huertas Guativa, L M; Malbouisson, H; Melo De Almeida, M; Mora Herrera, C; Mundim, L; Nogima, H; Sanchez Rosas, L J; Santoro, A; Sznajder, A; Thiel, M; Tonelli Manganote, E J; Torres Da Silva De Araujo, F; Vilela Pereira, A; Ahuja, S; Bernardes, C A; Fernandez Perez Tomei, T R; Gregores, E M; Mercadante, P G; Novaes, S F; Padula, Sandra S; Romero Abad, D; Ruiz Vargas, J C; Aleksandrov, A; Hadjiiska, R; Iaydjiev, P; Misheva, M; Rodozov, M; Shopova, M; Sultanov, G; Dimitrov, A; Glushkov, I; Litov, L; Pavlov, B; Petkov, P; Fang, W; Gao, X; Yuan, L; Ahmad, M; Bian, J G; Chen, G M; Chen, H S; Chen, M; Chen, Y; Jiang, C H; Leggat, D; Liao, H; Liu, Z; Romeo, F; Shaheen, S M; Spiezia, A; Tao, J; Wang, C; Wang, Z; Yazgan, E; Zhang, H; Zhang, S; Zhao, J; Ban, Y; Chen, G; Li, Q; Liu, S; Mao, Y; Qian, S J; Wang, D; Xu, Z; Avila, C; Cabrera, A; Chaparro Sierra, L F; Florez, C; González Hernández, C F; Ruiz Alvarez, J D; Courbon, B; Godinovic, N; Lelas, D; Puljak, I; Ribeiro Cipriano, P M; Sculac, T; Antunovic, Z; Kovac, M; Brigljevic, V; Ferencek, D; Kadija, K; Mesic, B; Starodumov, A; Susa, T; Ather, M W; Attikis, A; Mavromanolakis, G; Mousa, J; Nicolaou, C; Ptochos, F; Razis, P A; Rykaczewski, H; Finger, M; Finger, M; Carrera Jarrin, E; Assran, Y; Mahmoud, M A; Mahrous, A; Dewanjee, R K; Kadastik, M; Perrini, L; Raidal, M; Tiko, A; Veelken, C; Eerola, P; Kirschenmann, H; Pekkanen, J; Voutilainen, M; Järvinen, T; Karimäki, V; Kinnunen, R; Lampén, T; Lassila-Perini, K; Lehti, S; Lindén, T; Luukka, P; Tuominen, E; Tuominiemi, J; Talvitie, J; Tuuva, T; Besancon, M; Couderc, F; Dejardin, M; Denegri, D; Faure, J L; Ferri, F; Ganjour, S; Ghosh, S; Givernaud, A; Gras, P; Hamel de 
Monchenault, G; Jarry, P; Kucher, I; Leloup, C; Locci, E; Machet, M; Malcles, J; Negro, G; Rander, J; Rosowsky, A; Sahin, M Ö; Titov, M; Abdulsalam, A; Amendola, C; Antropov, I; Baffioni, S; Beaudette, F; Busson, P; Cadamuro, L; Charlot, C; Granier de Cassagnac, R; Jo, M; Lisniak, S; Lobanov, A; Martin Blanco, J; Nguyen, M; Ochando, C; Ortona, G; Paganini, P; Pigard, P; Salerno, R; Sauvan, J B; Sirois, Y; Stahl Leiton, A G; Strebler, T; Yilmaz, Y; Zabi, A; Zghiche, A; Agram, J-L; Andrea, J; Bloch, D; Brom, J-M; Buttignol, M; Chabert, E C; Chanon, N; Collard, C; Conte, E; Coubez, X; Fontaine, J-C; Gelé, D; Goerlach, U; Jansová, M; Le Bihan, A-C; Tonon, N; Van Hove, P; Gadrat, S; Beauceron, S; Bernet, C; Boudoul, G; Chierici, R; Contardo, D; Depasse, P; El Mamouni, H; Fay, J; Finco, L; Gascon, S; Gouzevitch, M; Grenier, G; Ille, B; Lagarde, F; Laktineh, I B; Lethuillier, M; Mirabito, L; Pequegnot, A L; Perries, S; Popov, A; Sordini, V; Vander Donckt, M; Viret, S; Khvedelidze, A; Tsamalaidze, Z; Autermann, C; Feld, L; Kiesel, M K; Klein, K; Lipinski, M; Preuten, M; Schomakers, C; Schulz, J; Verlage, T; Zhukov, V; Albert, A; Dietz-Laursonn, E; Duchardt, D; Endres, M; Erdmann, M; Erdweg, S; Esch, T; Fischer, R; Güth, A; Hamer, M; Hebbeker, T; Heidemann, C; Hoepfner, K; Knutzen, S; Merschmeyer, M; Meyer, A; Millet, P; Mukherjee, S; Pook, T; Radziej, M; Reithler, H; Rieger, M; Scheuch, F; Teyssier, D; Thüer, S; Flügge, G; Kargoll, B; Kress, T; Künsken, A; Lingemann, J; Müller, T; Nehrkorn, A; Nowack, A; Pistone, C; Pooth, O; Stahl, A; Aldaya Martin, M; Arndt, T; Asawatangtrakuldee, C; Beernaert, K; Behnke, O; Behrens, U; Bermúdez Martínez, A; Bin Anuar, A A; Borras, K; Botta, V; Campbell, A; Connor, P; Contreras-Campana, C; Costanza, F; Diez Pardos, C; Eckerlin, G; Eckstein, D; Eichhorn, T; Eren, E; Gallo, E; Garay Garcia, J; Geiser, A; Gizhko, A; Grados Luyando, J M; Grohsjean, A; Gunnellini, P; Guthoff, M; Harb, A; Hauk, J; Hempel, M; Jung, H; Kalogeropoulos, A; Kasemann, M; Keaveney, J; Kleinwort, C; Korol, I; Krücker, D; Lange, W; Lelek, A; Lenz, T; Leonard, J; Lipka, K; Lohmann, W; Mankel, R; Melzer-Pellmann, I-A; Meyer, A B; Mittag, G; Mnich, J; Mussgiller, A; Ntomari, E; Pitzl, D; Raspereza, A; Roland, B; Savitskyi, M; Saxena, P; Shevchenko, R; Spannagel, S; Stefaniuk, N; Van Onsem, G P; Walsh, R; Wen, Y; Wichmann, K; Wissing, C; Zenaiev, O; Aggleton, R; Bein, S; Blobel, V; Centis Vignali, M; Dreyer, T; Garutti, E; Gonzalez, D; Haller, J; Hinzmann, A; Hoffmann, M; Karavdina, A; Klanner, R; Kogler, R; Kovalchuk, N; Kurz, S; Lapsien, T; Marchesini, I; Marconi, D; Meyer, M; Niedziela, M; Nowatschin, D; Pantaleo, F; Peiffer, T; Perieanu, A; Scharf, C; Schleper, P; Schmidt, A; Schumann, S; Schwandt, J; Sonneveld, J; Stadie, H; Steinbrück, G; Stober, F M; Stöver, M; Tholen, H; Troendle, D; Usai, E; Vanelderen, L; Vanhoefer, A; Vormwald, B; Akbiyik, M; Barth, C; Baur, S; Butz, E; Caspart, R; Chwalek, T; Colombo, F; De Boer, W; Dierlamm, A; Freund, B; Friese, R; Giffels, M; Haitz, D; Harrendorf, M A; Hartmann, F; Heindl, S M; Husemann, U; Kassel, F; Kudella, S; Mildner, H; Mozer, M U; Müller, Th; Plagge, M; Quast, G; Rabbertz, K; Schröder, M; Shvetsov, I; Sieber, G; Simonis, H J; Ulrich, R; Wayand, S; Weber, M; Weiler, T; Williamson, S; Wöhrmann, C; Wolf, R; Anagnostou, G; Daskalakis, G; Geralis, T; Giakoumopoulou, V A; Kyriakis, A; Loukas, D; Topsis-Giotis, I; Karathanasis, G; Kesisoglou, S; Panagiotou, A; Saoulidou, N; Kousouris, K; Evangelou, I; Foudas, C; Kokkas, P; Mallios, S; Manthos, N; 
Papadopoulos, I; Paradas, E; Strologas, J; Triantis, F A; Csanad, M; Filipovic, N; Pasztor, G; Surányi, O; Veres, G I; Bencze, G; Hajdu, C; Horvath, D; Hunyadi, Á; Sikler, F; Veszpremi, V; Zsigmond, A J; Beni, N; Czellar, S; Karancsi, J; Makovec, A; Molnar, J; Szillasi, Z; Bartók, M; Raics, P; Trocsanyi, Z L; Ujvari, B; Choudhury, S; Komaragiri, J R; Bahinipati, S; Bhowmik, S; Mal, P; Mandal, K; Nayak, A; Sahoo, D K; Sahoo, N; Swain, S K; Bansal, S; Beri, S B; Bhatnagar, V; Chawla, R; Dhingra, N; Kalsi, A K; Kaur, A; Kaur, M; Kaur, S; Kumar, R; Kumari, P; Mehta, A; Singh, J B; Walia, G; Kumar, Ashok; Shah, Aashaq; Bhardwaj, A; Chauhan, S; Choudhary, B C; Garg, R B; Keshri, S; Kumar, A; Malhotra, S; Naimuddin, M; Ranjan, K; Sharma, R; Bhardwaj, R; Bhattacharya, R; Bhattacharya, S; Bhawandeep, U; Dey, S; Dutt, S; Dutta, S; Ghosh, S; Majumdar, N; Modak, A; Mondal, K; Mukhopadhyay, S; Nandan, S; Purohit, A; Roy, A; Roy, D; Roy Chowdhury, S; Sarkar, S; Sharan, M; Thakur, S; Behera, P K; Chudasama, R; Dutta, D; Jha, V; Kumar, V; Mohanty, A K; Netrakanti, P K; Pant, L M; Shukla, P; Topkar, A; Aziz, T; Dugad, S; Mahakud, B; Mitra, S; Mohanty, G B; Sur, N; Sutar, B; Banerjee, S; Bhattacharya, S; Chatterjee, S; Das, P; Guchait, M; Jain, Sa; Kumar, S; Maity, M; Majumder, G; Mazumdar, K; Sarkar, T; Wickramage, N; Chauhan, S; Dube, S; Hegde, V; Kapoor, A; Kothekar, K; Pandey, S; Rane, A; Sharma, S; Chenarani, S; Eskandari Tadavani, E; Etesami, S M; Khakzad, M; Mohammadi Najafabadi, M; Naseri, M; Paktinat Mehdiabadi, S; Rezaei Hosseinabadi, F; Safarzadeh, B; Zeinali, M; Felcini, M; Grunewald, M; Abbrescia, M; Calabria, C; Colaleo, A; Creanza, D; Cristella, L; De Filippis, N; De Palma, M; Errico, F; Fiore, L; Iaselli, G; Lezki, S; Maggi, G; Maggi, M; Miniello, G; My, S; Nuzzo, S; Pompili, A; Pugliese, G; Radogna, R; Ranieri, A; Selvaggi, G; Sharma, A; Silvestris, L; Venditti, R; Verwilligen, P; Abbiendi, G; Battilana, C; Bonacorsi, D; Borgonovi, L; Braibant-Giacomelli, S; Campanini, R; Capiluppi, P; Castro, A; Cavallo, F R; Chhibra, S S; Codispoti, G; Cuffiani, M; Dallavalle, G M; Fabbri, F; Fanfani, A; Fasanella, D; Giacomelli, P; Grandi, C; Guiducci, L; Marcellini, S; Masetti, G; Montanari, A; Navarria, F L; Perrotta, A; Rossi, A M; Rovelli, T; Siroli, G P; Tosi, N; Albergo, S; Costa, S; Di Mattia, A; Giordano, F; Potenza, R; Tricomi, A; Tuve, C; Barbagli, G; Chatterjee, K; Ciulli, V; Civinini, C; D'Alessandro, R; Focardi, E; Lenzi, P; Meschini, M; Paoletti, S; Russo, L; Sguazzoni, G; Strom, D; Viliani, L; Benussi, L; Bianco, S; Fabbri, F; Piccolo, D; Primavera, F; Calvelli, V; Ferro, F; Robutti, E; Tosi, S; Benaglia, A; Brianza, L; Brivio, F; Ciriolo, V; Dinardo, M E; Fiorendi, S; Gennai, S; Ghezzi, A; Govoni, P; Malberti, M; Malvezzi, S; Manzoni, R A; Menasce, D; Moroni, L; Paganoni, M; Pauwels, K; Pedrini, D; Pigazzini, S; Ragazzi, S; Redaelli, N; Tabarelli de Fatis, T; Buontempo, S; Cavallo, N; Di Guida, S; Fabozzi, F; Fienga, F; Iorio, A O M; Khan, W A; Lista, L; Meola, S; Paolucci, P; Sciacca, C; Thyssen, F; Azzi, P; Bacchetta, N; Benato, L; Biasotto, M; Bisello, D; Boletti, A; Carlin, R; Carvalho Antunes De Oliveira, A; Checchia, P; Dall'Osso, M; De Castro Manzano, P; Dorigo, T; Gasparini, U; Gozzelino, A; Lacaprara, S; Lujan, P; Margoni, M; Meneguzzo, A T; Pozzobon, N; Ronchese, P; Rossin, R; Simonetto, F; Torassa, E; Ventura, S; Zanetti, M; Zotto, P; Braghieri, A; Magnani, A; Montagna, P; Ratti, S P; Re, V; Ressegotti, M; Riccardi, C; Salvini, P; Vai, I; Vitulo, P; Alunni Solestizi, L; 
Biasini, M; Bilei, G M; Cecchi, C; Ciangottini, D; Fanò, L; Lariccia, P; Leonardi, R; Manoni, E; Mantovani, G; Mariani, V; Menichelli, M; Rossi, A; Santocchia, A; Spiga, D; Androsov, K; Azzurri, P; Bagliesi, G; Boccali, T; Borrello, L; Castaldi, R; Ciocci, M A; Dell'Orso, R; Fedi, G; Giannini, L; Giassi, A; Grippo, M T; Ligabue, F; Lomtadze, T; Manca, E; Mandorli, G; Martini, L; Messineo, A; Palla, F; Rizzi, A; Savoy-Navarro, A; Spagnolo, P; Tenchini, R; Tonelli, G; Venturi, A; Verdini, P G; Barone, L; Cavallari, F; Cipriani, M; Daci, N; Del Re, D; Di Marco, E; Diemoz, M; Gelli, S; Longo, E; Margaroli, F; Marzocchi, B; Meridiani, P; Organtini, G; Paramatti, R; Preiato, F; Rahatlou, S; Rovelli, C; Santanastasio, F; Amapane, N; Arcidiacono, R; Argiro, S; Arneodo, M; Bartosik, N; Bellan, R; Biino, C; Cartiglia, N; Cenna, F; Costa, M; Covarelli, R; Degano, A; Demaria, N; Kiani, B; Mariotti, C; Maselli, S; Migliore, E; Monaco, V; Monteil, E; Monteno, M; Obertino, M M; Pacher, L; Pastrone, N; Pelliccioni, M; Pinna Angioni, G L; Ravera, F; Romero, A; Ruspa, M; Sacchi, R; Shchelina, K; Sola, V; Solano, A; Staiano, A; Traczyk, P; Belforte, S; Casarsa, M; Cossutti, F; Della Ricca, G; Zanetti, A; Kim, D H; Kim, G N; Kim, M S; Lee, J; Lee, S; Lee, S W; Moon, C S; Oh, Y D; Sekmen, S; Son, D C; Yang, Y C; Lee, A; Kim, H; Moon, D H; Oh, G; Brochero Cifuentes, J A; Goh, J; Kim, T J; Cho, S; Choi, S; Go, Y; Gyun, D; Ha, S; Hong, B; Jo, Y; Kim, Y; Lee, K; Lee, K S; Lee, S; Lim, J; Park, S K; Roh, Y; Almond, J; Kim, J; Kim, J S; Lee, H; Lee, K; Nam, K; Oh, S B; Radburn-Smith, B C; Seo, S H; Yang, U K; Yoo, H D; Yu, G B; Choi, M; Kim, H; Kim, J H; Lee, J S H; Park, I C; Choi, Y; Hwang, C; Lee, J; Yu, I; Dudenas, V; Juodagalvis, A; Vaitkus, J; Ahmed, I; Ibrahim, Z A; Md Ali, M A B; Mohamad Idris, F; Wan Abdullah, W A T; Yusli, M N; Zolkapli, Z; Reyes-Almanza, R; Ramirez-Sanchez, G; Duran-Osuna, M C; Castilla-Valdez, H; De La Cruz-Burelo, E; Heredia-De La Cruz, I; Rabadan-Trejo, R I; Lopez-Fernandez, R; Mejia Guisao, J; Sanchez-Hernandez, A; Carrillo Moreno, S; Oropeza Barrera, C; Vazquez Valencia, F; Pedraza, I; Salazar Ibarguen, H A; Uribe Estrada, C; Morelos Pineda, A; Krofcheck, D; Butler, P H; Ahmad, A; Ahmad, M; Hassan, Q; Hoorani, H R; Saddique, A; Shah, M A; Shoaib, M; Waqas, M; Bialkowska, H; Bluj, M; Boimska, B; Frueboes, T; Górski, M; Kazana, M; Nawrocki, K; Szleper, M; Zalewski, P; Bunkowski, K; Byszuk, A; Doroba, K; Kalinowski, A; Konecki, M; Krolikowski, J; Misiura, M; Olszewski, M; Pyskir, A; Walczak, M; Bargassa, P; Beirão Da Cruz E Silva, C; Di Francesco, A; Faccioli, P; Galinhas, B; Gallinaro, M; Hollar, J; Leonardo, N; Lloret Iglesias, L; Nemallapudi, M V; Seixas, J; Strong, G; Toldaiev, O; Vadruccio, D; Varela, J; Afanasiev, S; Bunin, P; Gavrilenko, M; Golutvin, I; Gorbunov, I; Kamenev, A; Karjavin, V; Lanev, A; Malakhov, A; Matveev, V; Palichik, V; Perelygin, V; Shmatov, S; Shulha, S; Skatchkov, N; Smirnov, V; Voytishin, N; Zarubin, A; Ivanov, Y; Kim, V; Kuznetsova, E; Levchenko, P; Murzin, V; Oreshkin, V; Smirnov, I; Sulimov, V; Uvarov, L; Vavilov, S; Vorobyev, A; Andreev, Yu; Dermenev, A; Gninenko, S; Golubev, N; Karneyeu, A; Kirsanov, M; Krasnikov, N; Pashenkov, A; Tlisov, D; Toropin, A; Epshteyn, V; Gavrilov, V; Lychkovskaya, N; Popov, V; Pozdnyakov, I; Safronov, G; Spiridonov, A; Stepennov, A; Toms, M; Vlasov, E; Zhokin, A; Aushev, T; Bylinkin, A; Chistov, R; Danilov, M; Parygin, P; Philippov, D; Polikarpov, S; Tarkovskii, E; Andreev, V; Azarkin, M; Dremin, I; Kirakosyan, M; Terkulov, 
A; Baskakov, A; Belyaev, A; Boos, E; Dubinin, M; Dudko, L; Ershov, A; Gribushin, A; Klyukhin, V; Kodolova, O; Lokhtin, I; Miagkov, I; Obraztsov, S; Petrushanko, S; Savrin, V; Snigirev, A; Blinov, V; Skovpen, Y; Shtol, D; Azhgirey, I; Bayshev, I; Bitioukov, S; Elumakhov, D; Kachanov, V; Kalinin, A; Konstantinov, D; Mandrik, P; Petrov, V; Ryutin, R; Sobol, A; Troshin, S; Tyurin, N; Uzunian, A; Volkov, A; Adzic, P; Cirkovic, P; Devetak, D; Dordevic, M; Milosevic, J; Rekovic, V; Alcaraz Maestre, J; Barrio Luna, M; Cerrada, M; Colino, N; De La Cruz, B; Delgado Peris, A; Escalante Del Valle, A; Fernandez Bedoya, C; Fernández Ramos, J P; Flix, J; Fouz, M C; Garcia-Abia, P; Gonzalez Lopez, O; Goy Lopez, S; Hernandez, J M; Josa, M I; Moran, D; Pérez-Calero Yzquierdo, A; Puerta Pelayo, J; Quintario Olmeda, A; Redondo, I; Romero, L; Soares, M S; Álvarez Fernández, A; Albajar, C; de Trocóniz, J F; Missiroli, M; Cuevas, J; Erice, C; Fernandez Menendez, J; Gonzalez Caballero, I; González Fernández, J R; Palencia Cortezon, E; Sanchez Cruz, S; Vischia, P; Vizan Garcia, J M; Cabrillo, I J; Calderon, A; Chazin Quero, B; Curras, E; Duarte Campderros, J; Fernandez, M; Garcia-Ferrero, J; Gomez, G; Lopez Virto, A; Marco, J; Martinez Rivero, C; Martinez Ruiz Del Arbol, P; Matorras, F; Piedra Gomez, J; Rodrigo, T; Ruiz-Jimeno, A; Scodellaro, L; Trevisani, N; Vila, I; Vilar Cortabitarte, R; Abbaneo, D; Akgun, B; Auffray, E; Baillon, P; Ball, A H; Barney, D; Bianco, M; Bloch, P; Bocci, A; Botta, C; Camporesi, T; Castello, R; Cepeda, M; Cerminara, G; Chapon, E; Chen, Y; d'Enterria, D; Dabrowski, A; Daponte, V; David, A; De Gruttola, M; De Roeck, A; Deelen, N; Dobson, M; du Pree, T; Dünser, M; Dupont, N; Elliott-Peisert, A; Everaerts, P; Fallavollita, F; Franzoni, G; Fulcher, J; Funk, W; Gigi, D; Gilbert, A; Gill, K; Glege, F; Gulhan, D; Harris, P; Hegeman, J; Innocente, V; Jafari, A; Janot, P; Karacheban, O; Kieseler, J; Knünz, V; Kornmayer, A; Kortelainen, M J; Krammer, M; Lange, C; Lecoq, P; Lourenço, C; Lucchini, M T; Malgeri, L; Mannelli, M; Martelli, A; Meijers, F; Merlin, J A; Mersi, S; Meschi, E; Milenovic, P; Moortgat, F; Mulders, M; Neugebauer, H; Ngadiuba, J; Orfanelli, S; Orsini, L; Pape, L; Perez, E; Peruzzi, M; Petrilli, A; Petrucciani, G; Pfeiffer, A; Pierini, M; Rabady, D; Racz, A; Reis, T; Rolandi, G; Rovere, M; Sakulin, H; Schäfer, C; Schwick, C; Seidel, M; Selvaggi, M; Sharma, A; Silva, P; Sphicas, P; Stakia, A; Steggemann, J; Stoye, M; Tosi, M; Treille, D; Triossi, A; Tsirou, A; Veckalns, V; Verweij, M; Zeuner, W D; Bertl, W; Caminada, L; Deiters, K; Erdmann, W; Horisberger, R; Ingram, Q; Kaestli, H C; Kotlinski, D; Langenegger, U; Rohe, T; Wiederkehr, S A; Backhaus, M; Bäni, L; Berger, P; Bianchini, L; Casal, B; Dissertori, G; Dittmar, M; Donegà, M; Dorfer, C; Grab, C; Heidegger, C; Hits, D; Hoss, J; Kasieczka, G; Klijnsma, T; Lustermann, W; Mangano, B; Marionneau, M; Meinhard, M T; Meister, D; Micheli, F; Musella, P; Nessi-Tedaldi, F; Pandolfi, F; Pata, J; Pauss, F; Perrin, G; Perrozzi, L; Quittnat, M; Reichmann, M; Sanz Becerra, D A; Schönenberger, M; Shchutska, L; Tavolaro, V R; Theofilatos, K; Vesterbacka Olsson, M L; Wallny, R; Zhu, D H; Aarrestad, T K; Amsler, C; Canelli, M F; De Cosa, A; Del Burgo, R; Donato, S; Galloni, C; Hreus, T; Kilminster, B; Pinna, D; Rauco, G; Robmann, P; Salerno, D; Schweiger, K; Seitz, C; Takahashi, Y; Zucchetta, A; Candelise, V; Doan, T H; Jain, Sh; Khurana, R; Kuo, C M; Lin, W; Pozdnyakov, A; Yu, S S; Kumar, Arun; Chang, P; Chao, Y; Chen, K F; Chen, P H; Fiori, 
F; Hou, W-S; Hsiung, Y; Liu, Y F; Lu, R-S; Paganis, E; Psallidas, A; Steen, A; Tsai, J F; Asavapibhop, B; Kovitanggoon, K; Singh, G; Srimanobhas, N; Boran, F; Cerci, S; Damarseckin, S; Demiroglu, Z S; Dozen, C; Dumanoglu, I; Girgis, S; Gokbulut, G; Guler, Y; Hos, I; Kangal, E E; Kara, O; Kayis Topaksu, A; Kiminsu, U; Oglakci, M; Onengut, G; Ozdemir, K; Sunar Cerci, D; Tali, B; Turkcapar, S; Zorbakir, I S; Zorbilmez, C; Bilin, B; Karapinar, G; Ocalan, K; Yalvac, M; Zeyrek, M; Gülmez, E; Kaya, M; Kaya, O; Tekten, S; Yetkin, E A; Agaras, M N; Atay, S; Cakir, A; Cankocak, K; Grynyov, B; Levchuk, L; Ball, F; Beck, L; Brooke, J J; Burns, D; Clement, E; Cussans, D; Davignon, O; Flacher, H; Goldstein, J; Heath, G P; Heath, H F; Jacob, J; Kreczko, L; Newbold, D M; Paramesvaran, S; Sakuma, T; Seif El Nasr-Storey, S; Smith, D; Smith, V J; Bell, K W; Belyaev, A; Brew, C; Brown, R M; Calligaris, L; Cieri, D; Cockerill, D J A; Coughlan, J A; Harder, K; Harper, S; Olaiya, E; Petyt, D; Shepherd-Themistocleous, C H; Thea, A; Tomalin, I R; Williams, T; Auzinger, G; Bainbridge, R; Borg, J; Breeze, S; Buchmuller, O; Bundock, A; Casasso, S; Citron, M; Colling, D; Corpe, L; Dauncey, P; Davies, G; De Wit, A; Della Negra, M; Di Maria, R; Elwood, A; Haddad, Y; Hall, G; Iles, G; James, T; Lane, R; Laner, C; Lyons, L; Magnan, A-M; Malik, S; Mastrolorenzo, L; Matsushita, T; Nash, J; Nikitenko, A; Palladino, V; Pesaresi, M; Raymond, D M; Richards, A; Rose, A; Scott, E; Seez, C; Shtipliyski, A; Summers, S; Tapper, A; Uchida, K; Vazquez Acosta, M; Virdee, T; Wardle, N; Winterbottom, D; Wright, J; Zenz, S C; Cole, J E; Hobson, P R; Khan, A; Kyberd, P; Reid, I D; Symonds, P; Teodorescu, L; Turner, M; Zahid, S; Borzou, A; Call, K; Dittmann, J; Hatakeyama, K; Liu, H; Pastika, N; Smith, C; Bartek, R; Dominguez, A; Buccilli, A; Cooper, S I; Henderson, C; Rumerio, P; West, C; Arcaro, D; Avetisyan, A; Bose, T; Gastler, D; Rankin, D; Richardson, C; Rohlf, J; Sulak, L; Zou, D; Benelli, G; Cutts, D; Garabedian, A; Hadley, M; Hakala, J; Heintz, U; Hogan, J M; Kwok, K H M; Laird, E; Landsberg, G; Lee, J; Mao, Z; Narain, M; Pazzini, J; Piperov, S; Sagir, S; Syarif, R; Yu, D; Band, R; Brainerd, C; Burns, D; Calderon De La Barca Sanchez, M; Chertok, M; Conway, J; Conway, R; Cox, P T; Erbacher, R; Flores, C; Funk, G; Gardner, M; Ko, W; Lander, R; Mclean, C; Mulhearn, M; Pellett, D; Pilot, J; Shalhout, S; Shi, M; Smith, J; Stolp, D; Tos, K; Tripathi, M; Wang, Z; Bachtis, M; Bravo, C; Cousins, R; Dasgupta, A; Florent, A; Hauser, J; Ignatenko, M; Mccoll, N; Regnard, S; Saltzberg, D; Schnaible, C; Valuev, V; Bouvier, E; Burt, K; Clare, R; Ellison, J; Gary, J W; Ghiasi Shirazi, S M A; Hanson, G; Heilman, J; Kennedy, E; Lacroix, F; Long, O R; Olmedo Negrete, M; Paneva, M I; Si, W; Wang, L; Wei, H; Wimpenny, S; Yates, B R; Branson, J G; Cittolin, S; Derdzinski, M; Gerosa, R; Gilbert, D; Hashemi, B; Holzner, A; Klein, D; Kole, G; Krutelyov, V; Letts, J; Macneill, I; Masciovecchio, M; Olivito, D; Padhi, S; Pieri, M; Sani, M; Sharma, V; Simon, S; Tadel, M; Vartak, A; Wasserbaech, S; Wood, J; Würthwein, F; Yagil, A; Zevi Della Porta, G; Amin, N; Bhandari, R; Bradmiller-Feld, J; Campagnari, C; Dishaw, A; Dutta, V; Franco Sevilla, M; George, C; Golf, F; Gouskos, L; Gran, J; Heller, R; Incandela, J; Mullin, S D; Ovcharova, A; Qu, H; Richman, J; Stuart, D; Suarez, I; Yoo, J; Anderson, D; Bendavid, J; Bornheim, A; Lawhorn, J M; Newman, H B; Nguyen, T; Pena, C; Spiropulu, M; Vlimant, J R; Xie, S; Zhang, Z; Zhu, R Y; Andrews, M B; Ferguson, T; Mudholkar, 
T; Paulini, M; Russ, J; Sun, M; Vogel, H; Vorobiev, I; Weinberg, M; Cumalat, J P; Ford, W T; Jensen, F; Johnson, A; Krohn, M; Leontsinis, S; Mulholland, T; Stenson, K; Wagner, S R; Alexander, J; Chaves, J; Chu, J; Dittmer, S; Mcdermott, K; Mirman, N; Patterson, J R; Quach, D; Rinkevicius, A; Ryd, A; Skinnari, L; Soffi, L; Tan, S M; Tao, Z; Thom, J; Tucker, J; Wittich, P; Zientek, M; Abdullin, S; Albrow, M; Alyari, M; Apollinari, G; Apresyan, A; Apyan, A; Banerjee, S; Bauerdick, L A T; Beretvas, A; Berryhill, J; Bhat, P C; Bolla, G; Burkett, K; Butler, J N; Canepa, A; Cerati, G B; Cheung, H W K; Chlebana, F; Cremonesi, M; Duarte, J; Elvira, V D; Freeman, J; Gecse, Z; Gottschalk, E; Gray, L; Green, D; Grünendahl, S; Gutsche, O; Harris, R M; Hasegawa, S; Hirschauer, J; Hu, Z; Jayatilaka, B; Jindariani, S; Johnson, M; Joshi, U; Klima, B; Kreis, B; Lammel, S; Lincoln, D; Lipton, R; Liu, M; Liu, T; Lopes De Sá, R; Lykken, J; Maeshima, K; Magini, N; Marraffino, J M; Mason, D; McBride, P; Merkel, P; Mrenna, S; Nahn, S; O'Dell, V; Pedro, K; Prokofyev, O; Rakness, G; Ristori, L; Schneider, B; Sexton-Kennedy, E; Soha, A; Spalding, W J; Spiegel, L; Stoynev, S; Strait, J; Strobbe, N; Taylor, L; Tkaczyk, S; Tran, N V; Uplegger, L; Vaandering, E W; Vernieri, C; Verzocchi, M; Vidal, R; Wang, M; Weber, H A; Whitbeck, A; Acosta, D; Avery, P; Bortignon, P; Bourilkov, D; Brinkerhoff, A; Carnes, A; Carver, M; Curry, D; Field, R D; Furic, I K; Gleyzer, S V; Joshi, B M; Konigsberg, J; Korytov, A; Kotov, K; Ma, P; Matchev, K; Mei, H; Mitselmakher, G; Rank, D; Shi, K; Sperka, D; Terentyev, N; Thomas, L; Wang, J; Wang, S; Yelton, J; Joshi, Y R; Linn, S; Markowitz, P; Rodriguez, J L; Ackert, A; Adams, T; Askew, A; Hagopian, S; Hagopian, V; Johnson, K F; Kolberg, T; Martinez, G; Perry, T; Prosper, H; Saha, A; Santra, A; Sharma, V; Yohay, R; Baarmand, M M; Bhopatkar, V; Colafranceschi, S; Hohlmann, M; Noonan, D; Roy, T; Yumiceva, F; Adams, M R; Apanasevich, L; Berry, D; Betts, R R; Cavanaugh, R; Chen, X; Evdokimov, O; Gerber, C E; Hangal, D A; Hofman, D J; Jung, K; Kamin, J; Sandoval Gonzalez, I D; Tonjes, M B; Trauger, H; Varelas, N; Wang, H; Wu, Z; Zhang, J; Bilki, B; Clarida, W; Dilsiz, K; Durgut, S; Gandrajula, R P; Haytmyradov, M; Khristenko, V; Merlo, J-P; Mermerkaya, H; Mestvirishvili, A; Moeller, A; Nachtman, J; Ogul, H; Onel, Y; Ozok, F; Penzo, A; Snyder, C; Tiras, E; Wetzel, J; Yi, K; Blumenfeld, B; Cocoros, A; Eminizer, N; Fehling, D; Feng, L; Gritsan, A V; Maksimovic, P; Mantilla, C; Roskes, J; Sarica, U; Swartz, M; Xiao, M; You, C; Al-Bataineh, A; Baringer, P; Bean, A; Boren, S; Bowen, J; Castle, J; Khalil, S; Kropivnitskaya, A; Majumder, D; Mcbrayer, W; Murray, M; Royon, C; Sanders, S; Schmitz, E; Tapia Takaki, J D; Wang, Q; Ivanov, A; Kaadze, K; Maravin, Y; Mohammadi, A; Saini, L K; Skhirtladze, N; Toda, S; Rebassoo, F; Wright, D; Anelli, C; Baden, A; Baron, O; Belloni, A; Calvert, B; Eno, S C; Feng, Y; Ferraioli, C; Hadley, N J; Jabeen, S; Jeng, G Y; Kellogg, R G; Kunkle, J; Mignerey, A C; Ricci-Tam, F; Shin, Y H; Skuja, A; Tonwar, S C; Abercrombie, D; Allen, B; Azzolini, V; Barbieri, R; Baty, A; Bi, R; Brandt, S; Busza, W; Cali, I A; D'Alfonso, M; Demiragli, Z; Gomez Ceballos, G; Goncharov, M; Hsu, D; Hu, M; Iiyama, Y; Innocenti, G M; Klute, M; Kovalskyi, D; Lai, Y S; Lee, Y-J; Levin, A; Luckey, P D; Maier, B; Marini, A C; Mcginn, C; Mironov, C; Narayanan, S; Niu, X; Paus, C; Roland, C; Roland, G; Salfeld-Nebgen, J; Stephans, G S F; Tatar, K; Velicanu, D; Wang, J; Wang, T W; Wyslouch, B; Benvenuti, A 
C; Chatterjee, R M; Evans, A; Hansen, P; Hiltbrand, J; Kalafut, S; Kubota, Y; Lesko, Z; Mans, J; Nourbakhsh, S; Ruckstuhl, N; Rusack, R; Turkewitz, J; Wadud, M A; Acosta, J G; Oliveros, S; Avdeeva, E; Bloom, K; Claes, D R; Fangmeier, C; Gonzalez Suarez, R; Kamalieddin, R; Kravchenko, I; Monroy, J; Siado, J E; Snow, G R; Stieger, B; Dolen, J; Godshalk, A; Harrington, C; Iashvili, I; Nguyen, D; Parker, A; Rappoccio, S; Roozbahani, B; Alverson, G; Barberis, E; Hortiangtham, A; Massironi, A; Morse, D M; Orimoto, T; Teixeira De Lima, R; Trocino, D; Wood, D; Bhattacharya, S; Charaf, O; Hahn, K A; Mucia, N; Odell, N; Pollack, B; Schmitt, M H; Sung, K; Trovato, M; Velasco, M; Dev, N; Hildreth, M; Hurtado Anampa, K; Jessop, C; Karmgard, D J; Kellams, N; Lannon, K; Loukas, N; Marinelli, N; Meng, F; Mueller, C; Musienko, Y; Planer, M; Reinsvold, A; Ruchti, R; Smith, G; Taroni, S; Wayne, M; Wolf, M; Woodard, A; Alimena, J; Antonelli, L; Bylsma, B; Durkin, L S; Flowers, S; Francis, B; Hart, A; Hill, C; Ji, W; Liu, B; Luo, W; Puigh, D; Winer, B L; Wulsin, H W; Cooperstein, S; Driga, O; Elmer, P; Hardenbrook, J; Hebda, P; Higginbotham, S; Lange, D; Luo, J; Marlow, D; Mei, K; Ojalvo, I; Olsen, J; Palmer, C; Piroué, P; Stickland, D; Tully, C; Malik, S; Norberg, S; Barker, A; Barnes, V E; Das, S; Folgueras, S; Gutay, L; Jha, M K; Jones, M; Jung, A W; Khatiwada, A; Miller, D H; Neumeister, N; Peng, C C; Qiu, H; Schulte, J F; Sun, J; Wang, F; Xie, W; Cheng, T; Parashar, N; Stupak, J; Adair, A; Chen, Z; Ecklund, K M; Freed, S; Geurts, F J M; Guilbaud, M; Kilpatrick, M; Li, W; Michlin, B; Northup, M; Padley, B P; Roberts, J; Rorie, J; Shi, W; Tu, Z; Zabel, J; Zhang, A; Bodek, A; de Barbaro, P; Demina, R; Duh, Y T; Ferbel, T; Galanti, M; Garcia-Bellido, A; Han, J; Hindrichs, O; Khukhunaishvili, A; Lo, K H; Tan, P; Verzetti, M; Ciesielski, R; Goulianos, K; Mesropian, C; Agapitos, A; Chou, J P; Gershtein, Y; Gómez Espinosa, T A; Halkiadakis, E; Heindl, M; Hughes, E; Kaplan, S; Kunnawalkam Elayavalli, R; Kyriacou, S; Lath, A; Montalvo, R; Nash, K; Osherson, M; Saka, H; Salur, S; Schnetzer, S; Sheffield, D; Somalwar, S; Stone, R; Thomas, S; Thomassen, P; Walker, M; Delannoy, A G; Foerster, M; Heideman, J; Riley, G; Rose, K; Spanier, S; Thapa, K; Bouhali, O; Castaneda Hernandez, A; Celik, A; Dalchenko, M; De Mattia, M; Delgado, A; Dildick, S; Eusebi, R; Gilmore, J; Huang, T; Kamon, T; Mueller, R; Pakhotin, Y; Patel, R; Perloff, A; Perniè, L; Rathjens, D; Safonov, A; Tatarinov, A; Ulmer, K A; Akchurin, N; Damgov, J; De Guio, F; Dudero, P R; Faulkner, J; Gurpinar, E; Kunori, S; Lamichhane, K; Lee, S W; Libeiro, T; Peltola, T; Undleeb, S; Volobouev, I; Wang, Z; Greene, S; Gurrola, A; Janjam, R; Johns, W; Maguire, C; Melo, A; Ni, H; Padeken, K; Sheldon, P; Tuo, S; Velkovska, J; Xu, Q; Arenton, M W; Barria, P; Cox, B; Hirosky, R; Joyce, M; Ledovskoy, A; Li, H; Neu, C; Sinthuprasith, T; Wang, Y; Wolfe, E; Xia, F; Harr, R; Karchin, P E; Poudyal, N; Sturdy, J; Thapa, P; Zaleski, S; Brodski, M; Buchanan, J; Caillol, C; Dasu, S; Dodd, L; Duric, S; Gomber, B; Grothe, M; Herndon, M; Hervé, A; Hussain, U; Klabbers, P; Lanaro, A; Levine, A; Long, K; Loveless, R; Polese, G; Ruggles, T; Savin, A; Smith, N; Smith, W H; Taylor, D; Woods, N

    2018-02-16

    An inclusive search for the standard model Higgs boson (H) produced with large transverse momentum (pT) and decaying to a bottom quark-antiquark pair (bb̄) is performed using a data set of pp collisions at √s = 13 TeV collected with the CMS experiment at the LHC. The data sample corresponds to an integrated luminosity of 35.9 fb⁻¹. A highly Lorentz-boosted Higgs boson decaying to bb̄ is reconstructed as a single, large-radius jet, and it is identified using jet substructure and dedicated b tagging techniques. The method is validated with Z → bb̄ decays. The Z → bb̄ process is observed for the first time in the single-jet topology with a local significance of 5.1 standard deviations (5.8 expected). For a Higgs boson mass of 125 GeV, an excess of events above the expected background is observed (expected) with a local significance of 1.5 (0.7) standard deviations. The measured cross section times branching fraction for production via gluon fusion of H → bb̄ with reconstructed pT > 450 GeV and in the pseudorapidity range -2.5 < η < 2.5 is 74 ± 48 (stat) +17/−10 (syst) fb, which is consistent within uncertainties with the standard model prediction.

  4. Evaluation of methods for measuring particulate matter emissions from gas turbines.

    PubMed

    Petzold, Andreas; Marsh, Richard; Johnson, Mark; Miller, Michael; Sevcenco, Yura; Delhaye, David; Ibrahim, Amir; Williams, Paul; Bauer, Heidi; Crayford, Andrew; Bachalo, William D; Raper, David

    2011-04-15

    The project SAMPLE evaluated methods for measuring particle properties in the exhaust of aircraft engines, with a view to developing standardized operating procedures for particulate matter measurement in the aviation industry. Filter-based off-line mass methods included gravimetry and chemical analysis of carbonaceous species by combustion methods. Online mass methods were based on light absorption measurements or on size distributions obtained with an electrical mobility analyzer. Number concentrations were determined using different condensation particle counters (CPCs). Total mass from filter-based methods balanced gravimetric mass to within 8%. Carbonaceous matter accounted for 70% of the gravimetric mass, while the remaining 30% was attributed to hydrated sulfate and noncarbonaceous organic matter. Online methods were closely correlated over the entire range of emission levels studied in the tests. Elemental carbon from combustion methods and black carbon from optical methods deviated by at most 5% in mass for low to medium emission levels, whereas at high emission levels a systematic deviation between online and filter-based methods was found, attributed to sampling effects. CPC-based instruments proved highly reproducible for number concentration measurements, with a maximum inter-instrument standard deviation of 7.5%.
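
    The counter-reproducibility figure is an ordinary inter-instrument spread (a relative standard deviation, since it is quoted in percent); a minimal sketch with illustrative readings (not SAMPLE data):

        import numpy as np

        # Simultaneous CPC number concentrations, particles per cm^3 (made up)
        cpc = np.array([1.02e6, 0.97e6, 1.05e6, 0.99e6])
        rsd = 100.0 * cpc.std(ddof=1) / cpc.mean()
        print(f"inter-instrument RSD: {rsd:.1f}%")   # study benchmark: <= 7.5%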

  5. On the variability of the Priestley-Taylor coefficient over water bodies

    NASA Astrophysics Data System (ADS)

    Assouline, Shmuel; Li, Dan; Tyler, Scott; Tanny, Josef; Cohen, Shabtai; Bou-Zeid, Elie; Parlange, Marc; Katul, Gabriel G.

    2016-01-01

    Deviations of the Priestley-Taylor (PT) coefficient αPT from its accepted value of 1.26 are analyzed over large lakes, reservoirs, and wetlands where stomatal or soil controls are minimal or absent. The data sets feature wide variations in water body size and climatic conditions. Neither surface temperature nor sensible heat flux variations alone, which proved successful in characterizing αPT variations over some crops, explain the measured deviations in αPT over water. It is shown that the relative transport efficiency of turbulent heat and water vapor is key to explaining variations in αPT over water surfaces, thereby offering a new perspective on the concept of minimal advection or entrainment introduced by PT. Methods are then developed and tested that allow αPT to be determined from low-frequency sampling (i.e., 0.1 Hz), making them usable with standard meteorological sensors that filter some but not all turbulent fluctuations. Using approximations to the Gram determinant inequality, the relative transport efficiency is derived as a function of the correlation coefficient between temperature and water vapor concentration fluctuations (RTq). The proposed approach reasonably explains the measured deviations from the conventional αPT = 1.26 value even when RTq is determined from air temperature and water vapor concentration time series that are Gaussian-filtered and subsampled to a cutoff frequency of 0.1 Hz. Because RTq deviations from unity over water bodies are often associated with advection and/or entrainment, linkages between αPT and RTq offer both a diagnostic approach to assess their significance and a prognostic approach to correct the 1.26 value when using routine meteorological measurements of temperature and humidity.
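
    αPT is defined through the Priestley-Taylor relation LE = αPT · [Δ/(Δ + γ)] · (Rn − G). A minimal sketch for estimating it from flux measurements (the standard formulation with Tetens constants; the values are illustrative, not from these data sets):

        import numpy as np

        def pt_alpha(LE, Rn, G, T_c, gamma=0.066):
            """Invert LE = alpha * Delta/(Delta+gamma) * (Rn - G) for alpha.
            Fluxes in W m^-2; Delta, gamma in kPa K^-1; T_c in deg C."""
            es = 0.6108 * np.exp(17.27 * T_c / (T_c + 237.3))   # kPa, Tetens curve
            delta = 4098.0 * es / (T_c + 237.3) ** 2            # slope, kPa K^-1
            return LE * (delta + gamma) / (delta * (Rn - G))

        print(pt_alpha(LE=300.0, Rn=450.0, G=20.0, T_c=25.0))   # ~0.94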

  6. Rare events in networks with internal and external noise

    NASA Astrophysics Data System (ADS)

    Hindes, J.; Schwartz, I. B.

    2017-12-01

    We study rare events in networks with both internal and external noise, and develop a general formalism for analyzing rare events that combines pair-quenched techniques and large-deviation theory. The probability distribution, shape, and time scale of rare events are considered in detail for extinction in the Susceptible-Infected-Susceptible model as an illustration. We find that when both types of noise are present, there is a crossover region as the network size is increased, where the probability exponent for large deviations no longer increases linearly with the network size. We demonstrate that the form of the crossover depends on whether the endemic state is localized near the epidemic threshold or not.

  7. Analysis of plant hormones by microemulsion electrokinetic capillary chromatography coupled with on-line large volume sample stacking.

    PubMed

    Chen, Zongbao; Lin, Zian; Zhang, Lin; Cai, Yan; Zhang, Lan

    2012-04-07

    A novel method of microemulsion electrokinetic capillary chromatography (MEEKC) coupled with on-line large volume sample stacking was developed for the analysis of six plant hormones, including indole-3-acetic acid, indole-3-butyric acid, indole-3-propionic acid, 1-naphthaleneacetic acid, abscisic acid and salicylic acid. Baseline separation of the six plant hormones was achieved within 10 min by using a microemulsion background electrolyte containing 97.2% (w/w) 10 mM borate buffer at pH 9.2, 1.0% (w/w) ethyl acetate as oil droplets, 0.6% (w/w) sodium dodecyl sulphate as surfactant and 1.2% (w/w) 1-butanol as cosurfactant. In addition, an on-line concentration method based on a large volume sample stacking technique and multiple wavelength detection was adopted to improve the detection sensitivity in order to determine trace-level hormones in real samples. The optimized method provided an approximately 50-100-fold increase in detection sensitivity compared with MEEKC alone, and the detection limits (S/N = 3) were between 0.005 and 0.02 μg mL(-1). The proposed method is simple, rapid and sensitive and could be applied to the determination of the six plant hormones in spiked water samples and tobacco leaves, and of 1-naphthylacetic acid in leaf fertilizer. The recoveries ranged from 76.0% to 119.1%, and good reproducibility was obtained, with relative standard deviations (RSDs) less than 6.6%.

  8. Diagnostic accuracy of referral criteria for head circumference to detect hydrocephalus in the first year of life.

    PubMed

    van Dommelen, Paula; Deurloo, Jacqueline A; Gooskens, Rob H; Verkerk, Paul H

    2015-04-01

    Increased head circumference is often the first and main sign leading to the diagnosis of hydrocephalus. Our aim is to investigate the diagnostic accuracy of referral criteria for head circumference to detect hydrocephalus in the first year of life. A reference group with longitudinal head circumference data (n = 1938) was obtained from the Social Medical Survey of Children Attending Child Health Clinics study. The case group comprised infants with hydrocephalus treated in a tertiary pediatric hospital who had not already been detected during pregnancy (n = 125). Head circumference data were available for 43 patients. Head circumference data were standardized according to gestational age-specific references. Sensitivity and specificity of a very large head circumference (>2.5 standard deviations on the growth chart) were, respectively, 72.1% (95% confidence interval [CI]: 56.3-84.7) and 97.1% (95% CI: 96.2-97.8). These figures were, respectively, 74.4% (95% CI: 58.8-86.5) and 93.0% (95% CI: 91.8-94.1) for a large head circumference (>2.0 standard deviations), and 76.7% (95% CI: 61.4-88.2) and 96.5% (95% CI: 95.6-97.3) for a very large head circumference and/or very large (>2.5 standard deviations) progressive growth of head circumference. A very large head circumference and/or very large progressive growth of head circumference shows the best diagnostic accuracy for detecting hydrocephalus at an early stage. Gestational age-specific growth charts are recommended. Further improvements may be possible by taking into account parental head circumference. Copyright © 2015 Elsevier Inc. All rights reserved.
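
    For readers reproducing such figures, sensitivity and specificity with their confidence intervals follow directly from the two-by-two counts. The sketch below uses hypothetical counts and simple Wald intervals; the study's exact counts and interval method may differ.

        import math

        def prop_ci(k, n, z=1.96):
            """Proportion with a Wald 95% confidence interval."""
            p = k / n
            half = z * math.sqrt(p * (1.0 - p) / n)
            return p, max(0.0, p - half), min(1.0, p + half)

        def diagnostic_accuracy(tp, fn, tn, fp):
            """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)."""
            return {"sensitivity": prop_ci(tp, tp + fn),
                    "specificity": prop_ci(tn, tn + fp)}

        # hypothetical counts for a '>2.5 SD head circumference' criterion
        for name, (p, lo, hi) in diagnostic_accuracy(tp=31, fn=12, tn=1882, fp=56).items():
            print(f"{name}: {100*p:.1f}% (95% CI: {100*lo:.1f}-{100*hi:.1f})")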

  9. Distribution of velocities and acceleration for a particle in Brownian correlated disorder: Inertial case

    NASA Astrophysics Data System (ADS)

    Le Doussal, Pierre; Petković, Aleksandra; Wiese, Kay Jörg

    2012-06-01

    We study the motion of an elastic object driven in a disordered environment in the presence of both dissipation and inertia. We consider random forces with the statistics of random walks and reduce the problem to a single degree of freedom. It is the extension of the mean-field Alessandro-Beatrice-Bertotti-Montorsi (ABBM) model in the presence of an inertial mass m. While the ABBM model can be solved exactly, its extension to inertia exhibits complicated history dependence due to oscillations and backward motion. The characteristic scales for avalanche motion are studied from numerics and qualitative arguments. To make analytical progress, we consider two variants which coincide with the original model whenever the particle moves only forward. Using a combination of analytical and numerical methods together with simulations, we characterize the distributions of instantaneous acceleration and velocity, and compare them in these three models. We show that for large driving velocity, all three models share the same large-deviation function for positive velocities, which is obtained analytically for small and large m, as well as for m=6/25. The effect of small additional thermal and quantum fluctuations can be treated within an approximate method.
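
    A minimal numerical sketch of inertial ABBM-type dynamics is easy to set up: integrate m·dv/dt = -η·v + k·(v_d·t - u) + F(u), with F a Brownian random force in the coordinate u. Everything below (parameter values, grid-based force construction, Euler integration) is an illustrative assumption, not the paper's scheme.

        import numpy as np

        rng = np.random.default_rng(1)

        # Brownian random force F(u): precompute on a grid, interpolate.
        du = 1e-3
        u_grid = np.arange(-5.0, 150.0, du)
        F_grid = np.cumsum(rng.standard_normal(u_grid.size)) * np.sqrt(du)

        def F(u):
            return np.interp(u, u_grid, F_grid)   # clamps outside the grid

        # m*dv/dt = -eta*v + k*(v_d*t - u) + F(u)
        m, eta, k, v_d = 0.1, 1.0, 0.05, 1.0
        dt, steps = 1e-3, 100_000
        u, v = 0.0, 0.0
        vs = np.empty(steps)
        for i in range(steps):
            a = (-eta * v + k * (v_d * i * dt - u) + F(u)) / m
            v += a * dt                 # explicit Euler; inertia permits v < 0
            u += v * dt
            vs[i] = v
        v_tail = vs[steps // 2:]        # discard the transient
        print("mean velocity:", v_tail.mean(), " P(v<0):", (v_tail < 0).mean())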

  10. Two-point method uncertainty during control and measurement of cylindrical element diameters

    NASA Astrophysics Data System (ADS)

    Glukhov, V. I.; Shalay, V. V.; Radev, H.

    2018-04-01

    The article is devoted to the urgent problem of the reliability of measurements of the geometric specifications of technical products. The purpose of the article is to improve the quality of control of the linear sizes of parts by the two-point measurement method. The task is to investigate methodical extended uncertainties in measuring the linear sizes of cylindrical elements. The investigation method is geometric modeling of the shape and location deviations of the element surfaces in a rectangular coordinate system. The studies were carried out for elements of various service use, taking into account their informativeness, corresponding to the kinematic pair classes in theoretical mechanics and the number of constrained degrees of freedom in the datum element function. Cylindrical elements with informativeness of 4, 2, 1 and 0 (zero) were investigated. The uncertainties of two-point measurements were estimated by comparing the results of linear dimension measurements with the maximum and minimum functional diameters of the element material. Methodical uncertainty arises when cylindrical elements with maximum informativeness have shape deviations of the cut and curvature types, and it arises in measuring the element's average size for all types of shape deviations. The two-point measurement method cannot take into account the location deviations of a dimensional element, so its use for elements with informativeness less than the maximum creates unacceptable methodical uncertainties in measurements of the maximum, minimum and average linear dimensions. Similar methodical uncertainties also exist in the arbitration control of the linear dimensions of cylindrical elements by limiting two-point gauges.

  11. Protocol deviations before and after IV tPA in community hospitals

    PubMed Central

    Adelman, Eric E.; Scott, Phillip A.; Skolarus, Lesli E.; Fox, Allison K.; Frederiksen, Shirley M.; Meurer, William J.

    2015-01-01

    Background Protocol deviations before and after tPA treatment for ischemic stroke are common. It is unclear whether patient or hospital factors predict protocol deviations. We examined predictors of protocol deviations and the effects of protocol violations on symptomatic intracerebral hemorrhage. Methods We used data from the INSTINCT trial, a cluster-randomized, controlled trial evaluating the efficacy of a barrier assessment and educational intervention to increase appropriate tPA use in 24 Michigan community hospitals, to review tPA treatments between 2007 and 2010. Protocol violations were defined as deviations from the standard tPA protocol, both before and after treatment. Multilevel logistic regression models were fitted to determine whether patient and hospital variables were associated with pre-treatment or post-treatment protocol deviations. Results During the study, 557 patients (mean age 70; 52% male; median NIHSS 12) were treated with tPA. Protocol deviations occurred in 233 (42%) patients: 16% had pre-treatment deviations, 35% had post-treatment deviations, and 9% had both. The most common protocol deviations included elevated post-treatment blood pressure, antithrombotic agent use within 24 hours of treatment, and elevated pre-treatment blood pressure. Protocol deviations were not associated with symptomatic intracerebral hemorrhage, stroke severity, or hospital factors. Older age was associated with lower odds of pre-treatment protocol deviations (adjusted OR 0.52; 95% confidence interval 0.30-0.92). Pre-treatment deviations were associated with post-treatment deviations (adjusted OR 3.20; 95% confidence interval 1.91-5.35). Conclusions Protocol deviations were not associated with symptomatic intracerebral hemorrhage. Aside from age, patient and hospital factors were not associated with protocol deviations. PMID:26419527

  12. Methods of analysis by the U.S. Geological Survey National Water Quality Laboratory; determination of chromium in water by graphite furnace atomic absorption spectrophotometry

    USGS Publications Warehouse

    McLain, B.J.

    1993-01-01

    Graphite furnace atomic absorption spectrophotometry is a sensitive, precise, and accurate method for the determination of chromium in natural water samples. The detection limit for this analytical method is 0.4 microg/L with a working linear limit of 25.0 microg/L. The precision at the detection limit ranges from 20 to 57 percent relative standard deviation (RSD), improving to 4.6 percent RSD for concentrations above 3 microg/L. The accuracy of this method was determined for a variety of reference standards that were representative of the analytical range. The results were within the established standard deviations. Samples were spiked with known concentrations of chromium, with recoveries ranging from 84 to 122 percent. In addition, a comparison of data between graphite furnace atomic absorption spectrophotometry and direct-current plasma atomic emission spectrometry showed suitable agreement between the two methods, with an average deviation of +/- 2.0 microg/L throughout the analytical range.

  13. Particle Orbit Analysis in the Finite Beta Plasma of the Large Helical Device using Real Coordinates

    NASA Astrophysics Data System (ADS)

    Seki, Ryousuke; Matsumoto, Yutaka; Suzuki, Yasuhiro; Watanabe, Kiyomasa; Itagaki, Masafumi

    High-energy particles in a finite beta plasma of the Large Helical Device (LHD) are numerically traced in a real coordinate system. We investigate particle orbits by changing the beta value and/or the magnetic field strength. No significant difference is found in the particle orbit classifications between the vacuum magnetic field and the finite beta plasma cases. The deviation of a banana orbit from the flux surfaces strongly depends on the beta value, although the deviation of the orbit of a passing particle is independent of the beta value. In addition, the deviation of the orbit of the passing particle, rather than that of the banana-orbit particles, depends on the magnetic field strength. We also examine the effect of re-entering particles, which repeatedly pass in and out of the last closed flux surface, in the finite beta plasma of the LHD. It is found that the number of re-entering particles in the finite beta plasma is larger than that in the vacuum magnetic field. As a result, the role of re-entering particles in the finite beta plasma of the LHD is more important than that in the vacuum magnetic field, and the effect of the charge-exchange reaction on particle confinement in the finite beta plasma is large.

  14. Not a Copernican observer: biased peculiar velocity statistics in the local Universe

    NASA Astrophysics Data System (ADS)

    Hellwing, Wojciech A.; Nusser, Adi; Feix, Martin; Bilicki, Maciej

    2017-05-01

    We assess the effect of the local large-scale structure on the estimation of two-point statistics of the observed radial peculiar velocities of galaxies. A large N-body simulation is used to examine these statistics from the perspective of random observers as well as 'Local Group-like' observers conditioned to reside in an environment resembling the observed Universe within 20 Mpc. The local environment systematically distorts the shape and amplitude of velocity statistics with respect to ensemble-averaged measurements made by a Copernican (random) observer. The Virgo cluster has the most significant impact, introducing large systematic deviations in all the statistics. For a simple 'top-hat' selection function, an idealized survey extending to ˜160 h-1 Mpc or deeper is needed to completely mitigate the effects of the local environment. Using shallower catalogues leads to systematic deviations of the order of 50-200 per cent depending on the scale considered. For a flat redshift distribution similar to the one of the CosmicFlows-3 survey, the deviations are even more prominent in both the shape and amplitude at all separations considered (≲100 h-1 Mpc). Conclusions based on statistics calculated without taking into account the impact of the local environment should be revisited.

  15. Multi-Observation Continuous Density Hidden Markov Models for Anomaly Detection in Full Motion Video

    DTIC Science & Technology

    2012-06-01

    [Front-matter fragments only; the full report is not reproduced here. Recoverable items: a method for measuring angular movement versus average direction of movement; a method for calculating Angular Deviation, Θ; an HMM produced by K-Means learning. Glossary: Angular Deviation: a random variable, the difference in heading (in degrees) from the overall direction of movement over the sequence; S: speed.]

  16. Analysis of change orders in geotechnical engineering work at INDOT.

    DOT National Transportation Integrated Search

    2011-01-01

    Change orders represent a cost to the State and to taxpayers that is real and often extremely large, because contractors tend to charge very large amounts for any additional work that deviates from the work that was originally planned. Therefore, ef...

  17. Investigation of real tissue water equivalent path lengths using an efficient dose extinction method

    NASA Astrophysics Data System (ADS)

    Zhang, Rongxiao; Baer, Esther; Jee, Kyung-Wook; Sharp, Gregory C.; Flanz, Jay; Lu, Hsiao-Ming

    2017-07-01

    For proton therapy, an accurate conversion of CT HU to relative stopping power (RSP) is essential. Validation of the conversion based on real tissue samples is more direct than the current practice based solely on tissue substitutes and can potentially address variations over the population. Based on a novel dose extinction method, we measured water equivalent path lengths (WEPL) of animal tissue samples to evaluate the accuracy of the CT HU to RSP conversion and potential variations over a population. A broad proton beam delivered a spread out Bragg peak to the samples sandwiched between a water tank and a 2D ion-chamber detector. WEPLs of the samples were determined from the transmission dose profiles measured as a function of the water level in the tank. Tissue substitute inserts and Lucite blocks with known WEPLs were used to validate the accuracy. A large number of real tissue samples were measured. Variations of WEPL over different batches of tissue samples were also investigated. The measured WEPLs were compared with those computed from CT scans with the stoichiometric calibration method. WEPLs were determined within ±0.5% percentage deviation (% std/mean) and ±0.5% error for most of the tissue substitute inserts and the calibration blocks. For biological tissue samples, percentage deviations were within ±0.3%. No considerable difference (<1%) in WEPL was observed for the same type of tissue from different sources. The differences between measured WEPLs and those calculated from CT were within 1%, except for some bony tissues. Depending on the sample size, each dose extinction measurement took around 5 min to produce ~1000 WEPL values to be compared with calculations. This dose extinction system measures WEPL efficiently and accurately, which allows the validation of CT HU to RSP conversions based on WEPLs measured for a large number of samples and real tissues.
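
    The core of the analysis is extracting a WEPL from the dose-versus-water-level curve. Below is a minimal sketch of one plausible reduction, using an idealized sigmoidal falloff and a 50%-of-plateau crossing; both are assumptions, and the paper's actual fitting procedure may differ.

        import numpy as np

        def level_at_half_dose(levels, dose):
            """Water level at which the transmitted dose falls through 50%
            of its plateau value (linear interpolation across the falloff)."""
            d = dose / dose[:5].mean()          # normalize to the plateau
            idx = np.nonzero(d < 0.5)[0][0]     # first point below 50%
            x0, x1 = levels[idx - 1], levels[idx]
            y0, y1 = d[idx - 1], d[idx]
            return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)

        # hypothetical dose-vs-water-level scans, with and without the sample
        levels = np.linspace(0.0, 30.0, 301)    # cm of water
        def falloff(L50):
            return 1.0 / (1.0 + np.exp((levels - L50) / 0.15))
        dose_open = falloff(25.0)               # beam through water only
        dose_sample = falloff(21.8)             # sample shifts the falloff upstream

        wepl = level_at_half_dose(levels, dose_open) - level_at_half_dose(levels, dose_sample)
        print(f"sample WEPL = {wepl:.2f} cm of water")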

  18. Path-integral methods for analyzing the effects of fluctuations in stochastic hybrid neural networks.

    PubMed

    Bressloff, Paul C

    2015-01-01

    We consider applications of path-integral methods to the analysis of a stochastic hybrid model representing a network of synaptically coupled spiking neuronal populations. The state of each local population is described in terms of two stochastic variables, a continuous synaptic variable and a discrete activity variable. The synaptic variables evolve according to piecewise-deterministic dynamics describing, at the population level, synapses driven by spiking activity. The dynamical equations for the synaptic currents are only valid between jumps in spiking activity, and the latter are described by a jump Markov process whose transition rates depend on the synaptic variables. We assume a separation of time scales between fast spiking dynamics with time constant [Formula: see text] and slower synaptic dynamics with time constant τ. This naturally introduces a small positive parameter [Formula: see text], which can be used to develop various asymptotic expansions of the corresponding path-integral representation of the stochastic dynamics. First, we derive a variational principle for maximum-likelihood paths of escape from a metastable state (large deviations in the small noise limit [Formula: see text]). We then show how the path integral provides an efficient method for obtaining a diffusion approximation of the hybrid system for small ϵ. The resulting Langevin equation can be used to analyze the effects of fluctuations within the basin of attraction of a metastable state, that is, ignoring the effects of large deviations. We illustrate this by using the Langevin approximation to analyze the effects of intrinsic noise on pattern formation in a spatially structured hybrid network. In particular, we show how noise enlarges the parameter regime over which patterns occur, in an analogous fashion to PDEs. Finally, we carry out a [Formula: see text]-loop expansion of the path integral, and use this to derive corrections to voltage-based mean-field equations, analogous to the modified activity-based equations generated from a neural master equation.

  19. Finite-Difference Modeling of Seismic Wave Scattering in 3D Heterogeneous Media: Generation of Tangential Motion from an Explosion Source

    NASA Astrophysics Data System (ADS)

    Hirakawa, E. T.; Pitarka, A.; Mellors, R. J.

    2015-12-01

    One challenging task in explosion seismology is the development of physical models for explaining the generation of S-waves during underground explosions. Pitarka et al. (2015) used finite difference simulations of SPE-3 (part of the Source Physics Experiment, SPE, an ongoing series of underground chemical explosions at the Nevada National Security Site) and found that while a large component of shear motion was generated directly at the source, additional scattering from heterogeneous velocity structure and topography is necessary to better match the data. Large-scale features in the velocity model used in the SPE simulations are well constrained; however, small-scale heterogeneity is poorly constrained. In our study we used a stochastic representation of small-scale variability in order to produce additional high-frequency scattering. Two methods for generating the distributions of random scatterers are tested. The first works in the spatial domain, essentially smoothing a set of random numbers over an ellipsoidal volume using a Gaussian weighting function. The second consists of filtering a set of random numbers in the wavenumber domain to obtain a set of heterogeneities with a desired statistical distribution (Frankel and Clayton, 1986). This method is capable of generating distributions with either Gaussian or von Karman autocorrelation functions. The key parameters that affect scattering are the correlation length, the standard deviation of velocity for the heterogeneities, and the Hurst exponent, which is only present in the von Karman media. Overall, we find that shorter correlation lengths as well as higher standard deviations result in increased tangential motion in the frequency band of interest (0-10 Hz). This occurs partially through S-wave refraction, but mostly through P-S and Rg-S wave conversions. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
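
    The wavenumber-domain construction is straightforward to sketch: filter white noise with the square root of the target power spectrum and transform back. The 2D snippet below is an illustrative reduction of the Frankel and Clayton (1986) approach; the spectral normalizations and parameter values are assumptions, sidestepped here by rescaling the field to a target standard deviation.

        import numpy as np

        def random_field_2d(n, dx, a, sigma, hurst=None, seed=0):
            """2D random velocity perturbation via wavenumber-domain
            filtering. a: correlation length; sigma: target standard
            deviation; hurst=None gives a Gaussian autocorrelation,
            otherwise von Karman with that Hurst exponent."""
            rng = np.random.default_rng(seed)
            kx = np.fft.fftfreq(n, dx) * 2.0 * np.pi
            k = np.hypot(*np.meshgrid(kx, kx, indexing="ij"))
            if hurst is None:                        # Gaussian spectrum
                P = np.exp(-(k * a) ** 2 / 4.0)
            else:                                    # von Karman spectrum
                P = (1.0 + (k * a) ** 2) ** -(hurst + 1.0)
            spec = np.sqrt(P) * np.fft.fft2(rng.standard_normal((n, n)))
            field = np.real(np.fft.ifft2(spec))
            return field * (sigma / field.std())     # rescale to target sigma

        # 5% velocity heterogeneity, 500 m correlation length, 100 m grid
        dv = random_field_2d(n=256, dx=100.0, a=500.0, sigma=0.05, hurst=0.3)
        print("std of perturbation:", dv.std())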

  20. In vivo dosimetry for external photon treatments of head and neck cancers by diodes and TLDS.

    PubMed

    Tung, C J; Wang, H C; Lo, S H; Wu, J M; Wang, C J

    2004-01-01

    In vivo dosimetry was implemented for treatments of head and neck cancers in large fields. Diode and thermoluminescence dosemeter (TLD) measurements were carried out for linear accelerators with 6 MV photon beams. ESTRO in vivo dosimetry protocols were followed in the determination of midline doses from measurements of entrance and exit doses. For the fields monitored by diodes, the maximum absolute deviation of measured midline doses from planned target doses was 8%, with a mean of -1.0% and a standard deviation of 2.7%. If planned target doses were calculated using radiological water equivalent thicknesses rather than patient geometric thicknesses, the maximum absolute deviation dropped to 4%, with a mean of 0.7% and a standard deviation of 1.8%. For in vivo dosimetry monitored by TLDs, the shift in mean dose remained small, but the statistical precision became poor.

  1. Visual space under free viewing conditions.

    PubMed

    Doumen, Michelle J A; Kappers, Astrid M L; Koenderink, Jan J

    2005-10-01

    Most research on visual space has been done under restricted viewing conditions and in reduced environments. In our experiments, observers performed an exocentric pointing task, a collinearity task, and a parallelity task in an entirely visible room. We varied the relative distances between the objects and the observer and the separation angle between the two objects. We were able to compare our data directly with data from experiments in an environment with less monocular depth information present. We expected that in a richer environment and under less restrictive viewing conditions, the settings would deviate less from the veridical settings. However, large systematic deviations from veridical settings were found for all three tasks. The structure of these deviations was task dependent, and the structure and the deviations themselves were comparable to those obtained under more restricted circumstances. Thus, the additional information was not used effectively by the observers.

  2. Astronaut mass measurement using linear acceleration method and the effect of body non-rigidity

    NASA Astrophysics Data System (ADS)

    Yan, Hui; Li, LuMing; Hu, ChunHua; Chen, Hao; Hao, HongWei

    2011-04-01

    An astronaut's body mass is an essential factor in health monitoring in space. The latest mass measurement device for the International Space Station (ISS) employs a linear acceleration method. The principle of this method is that the device generates a constant pulling force, and the astronaut is accelerated on a parallelogram motion guide which rotates at a large radius to achieve a nearly linear trajectory. The acceleration is calculated by regression analysis of the displacement-versus-time trajectory, and the body mass is calculated using the formula m = F/a. However, in actual flight the device is so unstable that the deviation between runs can be 6-7 kg. This paper considers body non-rigidity to be the major cause of error and instability and analyzes its effects from different aspects. Body non-rigidity makes the acceleration of the center of mass (C.M.) oscillate and lag behind the point where the force is applied. Actual acceleration curves showed that the overall effect of body non-rigidity is an oscillation at about 7 Hz and a deviation of about 25%. To enhance body rigidity, better body restraints were introduced and a prototype based on the linear acceleration method was built. Measurement experiments were carried out on the ground on an air table. Three human subjects weighing 60-70 kg were measured. The average variance was 0.04 kg and the average measurement error was 0.4%. This study will provide a reference for future development of China's own mass measurement device.
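
    The m = F/a reduction is a one-line regression. The sketch below fits a quadratic to a synthetic displacement record that includes an illustrative 7 Hz body oscillation of the kind described above; all numbers are made up for the demonstration.

        import numpy as np

        def mass_from_trajectory(t, s, F):
            """Estimate body mass from a constant-force run, m = F / a, with
            a obtained by quadratic regression of displacement vs. time:
            s(t) = s0 + v0*t + 0.5*a*t**2."""
            coeffs = np.polyfit(t, s, 2)     # [0.5*a, v0, s0]
            a = 2.0 * coeffs[0]
            return F / a

        # synthetic run: 60 kg subject, 30 N pull, plus 7 Hz body oscillation
        rng = np.random.default_rng(3)
        t = np.linspace(0.0, 2.0, 400)
        m_true, F = 60.0, 30.0
        s = (0.5 * (F / m_true) * t**2
             + 0.002 * np.sin(2 * np.pi * 7.0 * t)       # non-rigidity term
             + 0.0005 * rng.standard_normal(t.size))     # measurement noise
        print(f"estimated mass: {mass_from_trajectory(t, s, F):.2f} kg")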

  3. New method to determine the total carbonyl functional group content in extractable particulate organic matter by tandem mass spectrometry.

    PubMed

    Dron, J; Zheng, W; Marchand, N; Wortham, H

    2008-08-01

    A functional group analysis method was developed to determine the quantitative content of carbonyl functional groups in atmospheric particulate organic matter (POM) using constant neutral loss scanning-tandem mass spectrometry (CNLS-MS/MS). The neutral loss method consists of monitoring the loss of a neutral fragment produced by the fragmentation of a precursor ion in a collision cell. The only ions detected are the daughter ions resulting from the loss of the neutral fragment under study. Scanning the loss of a neutral fragment characteristic of a functional group thus enables the selective detection of the compounds bearing that chemical function within a complex mixture. The selective detection of carbonyl functional groups was achieved after derivatization with pentafluorophenylhydrazine (PFPH) by monitoring the neutral loss of C(6)F(5)N (181 amu), which is characteristic of a large panel of derivatized carbonyl compounds. The method was tested on 25 reference mixtures of different composition, all containing 24 carbonyl compounds at randomly determined concentrations. The repeatability and calibration tests were satisfactory, with a relative standard deviation below 5% and a linear range between 0.01 and 0.65 mM with a calculated detection limit of 0.0035 mM. Also, the relative deviation induced by changing the composition of the mixture while keeping the total concentration of carbonyl functional groups constant was less than 20%. These reliability experiments demonstrate the high robustness of the developed procedure for accurate carbonyl functional group measurement, which was applied to atmospheric POM samples. Copyright (c) 2008 John Wiley & Sons, Ltd.

  4. A study of the 3D radiative transfer effect in cloudy atmospheres

    NASA Astrophysics Data System (ADS)

    Okata, M.; Teruyuki, N.; Suzuki, K.

    2015-12-01

    Evaluation of the effect of clouds in the atmosphere is a significant problem in Earth's radiation budget studies, given the large uncertainties in cloud microphysics and optical properties. In this situation, we still need more investigation of 3D cloud radiative transfer problems using not only models but also satellite observational data. For this purpose, we have developed a 3D Monte Carlo radiative transfer code that implements various functions compatible with the OpenCLASTR R-Star radiation code for radiance and flux computation, i.e., forward and backward tracing routines, a non-linear k-distribution parameterization (Sekiguchi and Nakajima, 2008) for broadband solar flux calculation, and the DM-method for flux and the TMS-method for upward radiance (Nakajima and Tanaka, 1998). We also developed a Minimum cloud Information Deviation Profiling Method (MIDPM) for constructing a 3D cloud field from MODIS/AQUA and CPR/CloudSat data. We then selected a best-matched radar reflectivity factor profile from the library for each off-nadir MODIS pixel where no CPR profile is available, by minimizing the deviation between library MODIS parameters and those at the pixel. In this study, we used three cloud microphysical parameters as key parameters for the MIDPM, i.e., effective particle radius, cloud optical thickness and cloud-top temperature, and estimated the 3D cloud radiation budget. We examined the discrepancies between satellite-observed and model-simulated radiances and the patterns of the three cloud microphysical parameters, in order to study the effects of cloud optical and microphysical properties on the radiation budget of cloud-laden atmospheres.
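
    The MIDPM selection step amounts to a nearest-neighbor search in the space of the three key parameters. A minimal sketch follows, with made-up library data and an assumed per-parameter standardization; the paper's exact deviation metric is not reproduced here.

        import numpy as np

        def midpm_match(lib_params, lib_profiles, pixel_params):
            """Pick the library radar-reflectivity profile whose three
            MODIS-style parameters (effective radius, optical thickness,
            cloud-top temperature) deviate least from the pixel's."""
            sd = lib_params.std(axis=0)                 # scale-free deviation
            d2 = (((lib_params - pixel_params) / sd) ** 2).sum(axis=1)
            return lib_profiles[np.argmin(d2)]

        # hypothetical library: 1000 CPR profiles, 125 range bins each
        rng = np.random.default_rng(4)
        lib_params = rng.uniform([2.0, 0.1, 210.0], [40.0, 100.0, 290.0], (1000, 3))
        lib_profiles = rng.standard_normal((1000, 125))
        pixel = np.array([12.0, 35.0, 255.0])           # r_eff (um), tau, T_top (K)
        print("matched profile shape:", midpm_match(lib_params, lib_profiles, pixel).shape)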

  5. Effects of temperature and precipitation variability on the risk of violence in sub-Saharan Africa, 1980–2012

    PubMed Central

    O’Loughlin, John; Linke, Andrew M.; Witmer, Frank D. W.

    2014-01-01

    Ongoing debates in the academic community and in the public policy arena continue without clear resolution about the significance of global climate change for the risk of increased conflict. Sub-Saharan Africa is generally agreed to be the region most vulnerable to such climate impacts. Using a large database of conflict events and detailed climatological data covering the period 1980–2012, we apply a multilevel modeling technique that allows for a more nuanced understanding of a climate–conflict link than has been seen heretofore. In the aggregate, high temperature extremes are associated with more conflict; however, different types of conflict and different subregions do not show a consistent relationship with temperature deviations. Precipitation deviations, both high and low, are generally not significant. The location and timing of violence are influenced less by climate anomalies (temperature or precipitation variations from normal) than by key political, economic, and geographic factors. We find important distinctions in the relationship between temperature extremes and conflict by using multiple methods of analysis and by exploiting our time-series cross-sectional dataset for disaggregated analyses. PMID:25385621

  6. Model based rib-cage unfolding for trauma CT

    NASA Astrophysics Data System (ADS)

    von Berg, Jens; Klinder, Tobias; Lorenz, Cristian

    2018-03-01

    A CT rib-cage unfolding method is proposed that does not require determining rib centerlines; instead, it determines the visceral cavity surface by model-based segmentation. Image intensities are sampled across this surface, which is flattened using a model-based 3D thin-plate-spline registration. An average rib centerline model projected onto this surface serves as a reference system for the registration. The flattening registration is designed so that ribs similar to the centerline model are mapped onto parallel lines, preserving their relative length. Ribs deviating from this model accordingly appear to deviate from straight parallel ribs in the unfolded view. As the mapping is continuous, details in the intercostal space and those adjacent to the ribs are also rendered well. The most beneficial application area is trauma CT, where fast detection of rib fractures is a crucial task. Specifically in trauma, automatic rib centerline detection may not be guaranteed due to fractures and dislocations. Application by visual assessment on the large public LIDC database of lung CT proved the general feasibility of this early work.

  7. Off-design computer code for calculating the aerodynamic performance of axial-flow fans and compressors

    NASA Technical Reports Server (NTRS)

    Schmidt, James F.

    1995-01-01

    An off-design axial-flow compressor code is presented and is available from COSMIC for predicting the aerodynamic performance maps of fans and compressors. Steady axisymmetric flow is assumed, and the aerodynamic solution reduces to solving the two-dimensional flow field in the meridional plane. A streamline curvature method is used for calculating this flow field outside the blade rows. This code allows for bleed flows, and the first five stators can be reset for each rotational speed, capabilities which are necessary for large multistage compressors. The accuracy of the off-design performance predictions depends upon the validity of the flow loss and deviation correlation models. These empirical correlations for the flow loss and deviation are used to model real flow effects, and the off-design code will compute through small reverse-flow regions. The input to this off-design code is fully described, and a user's example case for a two-stage fan is included with complete input and output data sets. Also, a comparison of the off-design code predictions with experimental data is included, which generally shows good agreement.

  8. High-precision temperature control and stabilization using a cryocooler.

    PubMed

    Hasegawa, Yasuhiro; Nakamura, Daiki; Murata, Masayuki; Yamamoto, Hiroya; Komine, Takashi

    2010-09-01

    We describe a method for precisely controlling temperature using a Gifford-McMahon (GM) cryocooler that involves inserting fiber-reinforced-plastic dampers into a conventional cryosystem. Temperature fluctuations in a GM cryocooler without a large heat bath or a stainless-steel damper at 4.2 K are typically of the order of 200 mK. It is particularly difficult to control the temperature of a GM cryocooler at low temperatures. The fiber-reinforced-plastic dampers enabled us to dramatically reduce temperature fluctuations at low temperatures. A standard deviation of the temperature fluctuations of 0.21 mK could be achieved when the temperature was controlled at 4.2000 K using a feedback temperature control system with two heaters. Adding the dampers increased the minimum achievable temperature from 3.2 to 3.3 K. Precise temperature control between 4.2000 and 300.000 K was attained using the GM cryocooler, and the standard deviation of the temperature fluctuations was less than 1.2 mK even at 300 K. This technique makes it possible to control and stabilize the temperature using a GM cryocooler.
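
    The feedback loop itself can be sketched as a simple proportional-integral controller holding the stage against the periodic disturbance of the cooling cycle. The toy thermal model, gains, and disturbance below are illustrative assumptions, not the actual controller described above.

        import numpy as np

        def pi_hold(setpoint, t_end=600.0, dt=0.1, kp=5.0, ki=0.8):
            """Toy PI heater loop holding a cold stage at `setpoint` (K)
            against a periodic cooling-cycle disturbance. Gains and the
            one-box thermal model are illustrative only."""
            T, integ = setpoint - 0.05, 0.0
            trace = []
            for i in range(int(t_end / dt)):
                err = setpoint - T
                integ += err * dt
                heater = max(0.0, kp * err + ki * integ)       # heater drive
                cooling = 1.0 + 0.2 * np.sin(2 * np.pi * 1.2 * i * dt)
                T += dt * 0.01 * (heater - cooling)            # toy thermal response
                trace.append(T)
            settled = np.array(trace[len(trace) // 2:])
            return settled.std()

        print("temperature std after settling (K):", pi_hold(4.2000))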

  9. Analysis of the stress field and strain rate in Zagros-Makran transition zone

    NASA Astrophysics Data System (ADS)

    Ghorbani Rostam, Ghasem; Pakzad, Mehrdad; Mirzaei, Noorbakhsh; Sakhaei, Seyed Reza

    2018-01-01

    Transition boundary between Zagros continental collision and Makran oceanic-continental subduction can be specified by two wide limits: (a) Oman Line is the seismicity boundary with a sizeable reduction in seismicity rate from Zagros in the west to Makran in the east; and (b) the Zendan-Minab-Palami (ZMP) fault system is believed to be a prominent tectonic boundary. The purpose of this paper is to analyze the stress field in the Zagros-Makran transition zone by the iterative joint inversion method developed by Vavrycuk (Geophysical Journal International 199:69-77, 2014). The results suggest a rather uniform pattern of the stress field around these two boundaries. We compare the results with the strain rates obtained from the Global Positioning System (GPS) network stations. In most cases, the velocity vectors show a relatively good agreement with the stress field except for the Bandar Abbas (BABS) station which displays a relatively large deviation between the stress field and the strain vector. This deviation probably reflects a specific location of the BABS station being in the transition zone between Zagros continental collision and Makran subduction zones.

  10. Truncated Linear Statistics Associated with the Eigenvalues of Random Matrices II. Partial Sums over Proper Time Delays for Chaotic Quantum Dots

    NASA Astrophysics Data System (ADS)

    Grabsch, Aurélien; Majumdar, Satya N.; Texier, Christophe

    2017-06-01

    Invariant ensembles of random matrices are characterized by the distribution of their eigenvalues {λ_1, …, λ_N}. We study the distribution of truncated linear statistics of the form L̃ = ∑_{i=1}^{p} f(λ_i) with p

  11. Influence of occlusal plane inclination and mandibular deviation on esthetics

    PubMed Central

    Corte, Cristiane Cherobini Dalla; da Silveira, Bruno Lopes; Marquezan, Mariana

    2015-01-01

    Objective: The aim of this study was to assess the degree of perception of occlusal plane inclination and mandibular deviation in facial esthetics, as assessed by laypeople, dentists and orthodontists. Methods: A woman with 5.88° of occlusal plane inclination and 5.54 mm of mandibular deviation was selected and, based on her original photograph, four new images were created correcting the deviations and creating more symmetric faces and smiles. Examiners assessed the images by means of a questionnaire. Their opinions were compared by qualitative and quantitative analyses. Results: A total of 45 laypeople, 27 dentists and 31 orthodontists filled out the questionnaires. All groups were able to perceive the asymmetry; however, orthodontists were more sensitive, identifying asymmetries starting from 4.32° of occlusal plane inclination and 4.155 mm of mandibular deviation (p < 0.05). The other categories of evaluators identified asymmetries and assigned significantly lower grades starting from 5.88° of occlusal plane inclination and 5.54 mm of mandibular deviation (p < 0.05). Conclusion: Occlusal plane inclination and mandibular deviation were perceived by all groups, but orthodontists presented higher perception of deviations. PMID:26560821

  12. Individual case photogrammetric calibration of the Hirschberg Ratio (HR) for corneal light reflection test strabometry.

    PubMed

    Romano, Paul E

    2006-01-01

    The HR (prism diopters [PD] per mm of corneal light reflection test [CLRT] asymmetry for strabometry) varies in humans from 14 to 24 PD/mm but is totally unpredictable. Photogrammetric calibration of the HR in each individual case yields acceptable strabometric precision and accuracy. Take three flash photos of the patient: one with the preferred eye fixating straight ahead, one with the deviating eye fixating straight ahead, and one with the deviating eye fixating at approximately (±5-10 PD) the strabismic angle on a metric rule one meter away from the camera lens (where 1 cm = 1 PD). On these three photos, make four precise measurements of the position of the CLR with reference to the limbus: in the deviating eye fixating straight ahead and fixating at the angle of deviation. Divide the change in the angle of fixation by the mm difference in CLR location to determine the HR for this patient at this angle. Then determine the CLR position in both the deviating eye and the fixing eye in the straight-ahead primary-position picture. Apply the calculated calibrated HR to the asymmetry of the CLRs in primary position to determine the true strabismic deviation. This imaging method ensures accurate Hirschberg CLRT strabometry in each case, determining the deviation in "free space", under conditions of normal binocular viewing, uncontaminated by the artifacts or inaccuracies of other conventional strabometric methods or devices. So performed, the Hirschberg CLRT is the gold standard of strabometry.
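
    The arithmetic is compact enough to sketch; the numbers below are hypothetical and simply follow the recipe above (calibrate PD-per-mm from a known fixation change, then apply it to the primary-position CLR asymmetry).

        def calibrated_hirschberg(angle_change_pd, clr_shift_mm, clr_asymmetry_mm):
            """Per-patient Hirschberg Ratio calibration (sketch).

            angle_change_pd  : known change in fixation angle (prism diopters)
            clr_shift_mm     : measured CLR shift in the deviating eye between
                               the two fixation photos (mm)
            clr_asymmetry_mm : CLR asymmetry between the two eyes in primary
                               position (mm)
            """
            hr = angle_change_pd / clr_shift_mm      # PD per mm, this patient
            return hr, hr * clr_asymmetry_mm         # calibrated deviation (PD)

        # hypothetical measurements: a 10 PD fixation change moved the CLR 0.55 mm
        hr, deviation = calibrated_hirschberg(10.0, 0.55, 1.2)
        print(f"HR = {hr:.1f} PD/mm, deviation = {deviation:.1f} PD")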

  13. Comparison of different methods for the in situ measurement of forest litter moisture content

    NASA Astrophysics Data System (ADS)

    Schunk, C.; Ruth, B.; Leuchner, M.; Wastl, C.; Menzel, A.

    2015-06-01

    Dead fine fuel (e.g. litter) moisture content is an important parameter for both forest fire and ecological applications, as it is related to ignitability and fire behavior as well as soil respiration. However, the comprehensive literature review in this paper shows that no easy-to-use method for automated measurement is available. This study investigates the applicability of four different sensor types (permittivity and electrical resistance measuring principles) for this measurement. Comparisons were made to manual gravimetric reference measurements carried out almost daily for one fire season, and overall agreement was good (highly significant correlations with 0.792 ≤ r ≤ 0.947). Standard deviations within sensor types were linearly correlated with daily sensor mean values; however, above a certain threshold they became irregular, which may be linked to exceedance of the working ranges. Thus, measurements with irregular standard deviations were considered unusable, and calibrations of all individual sensors were compared for usable periods. A large drift in the relationship between sensor raw values and litter moisture became obvious from one drought period to the next. This drift may be related to installation effects or to settling and decomposition of the litter layer throughout the fire season. Because of the drift and the in situ calibration necessary, the methods presented here cannot be recommended for monitoring purposes. However, they may be interesting for scientific studies in which some manual fuel moisture measurements are made anyway. Additionally, a number of potential methodological improvements are suggested.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    St James, S; Bloch, C; Saini, J

    Purpose: Proton pencil beam scanning is used clinically across the United States. There are no current guidelines on tolerances for daily QA specific to pencil beam scanning, specifically related to individual spot properties (spot width). Using a stochastic method to determine tolerances has the potential to optimize tolerances on individual spots and decrease the number of false-positive failures in daily QA. Individual and global spot tolerances were evaluated. Methods: As part of daily QA for proton pencil beam scanning, a field of 16 spots (corresponding to 8 energies) is measured using an array of ion chambers (Matrixx, IBA). Each individual spot is fit to two Gaussian functions (x, y). The spot widths (σ) in x and y are recorded (32 parameters). Results from the daily QA were retrospectively analyzed for 100 days of data. The deviations of the spot widths were histogrammed and fit to a Gaussian function. The stochastic spot tolerance was taken to be the mean ± 3σ. Using these results, tolerances were developed and tested against known deviations in spot width. Results: The individual spot tolerances derived with the stochastic method decreased in 30/32 instances. Using the previous tolerances (±20% width), the daily QA would have detected 0/20 days of the deviation. Using a tolerance of any 6 spots failing the stochastic tolerance, 18/20 days of the deviation would have been detected. Conclusion: Using a stochastic method we have been able to decrease daily tolerances on the spot widths for 30/32 spot widths measured. The stochastic tolerances can lead to detection of deviations that previously would have been picked up on monthly QA and missed by daily QA. This method could easily be extended to the evaluation of other QA parameters in proton spot scanning.
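
    A sketch of the stochastic tolerance construction follows, with made-up QA history standing in for the 100 days of measurements; the per-spot band is mean ± 3 SD, and a field is flagged when several spots fall outside their bands (6 is used here, following the global criterion above).

        import numpy as np

        def stochastic_tolerances(widths, n_sigma=3.0):
            """Per-spot tolerance bands from historical daily-QA data:
            mean +/- n_sigma standard deviations of each spot width."""
            mu = widths.mean(axis=0)
            sd = widths.std(axis=0, ddof=1)
            return mu - n_sigma * sd, mu + n_sigma * sd

        # hypothetical history: 100 days x 32 spot-width parameters (16 spots, x & y)
        rng = np.random.default_rng(5)
        nominal = rng.uniform(3.0, 7.0, 32)            # nominal widths (mm)
        history = nominal + 0.05 * nominal * rng.standard_normal((100, 32))
        lo, hi = stochastic_tolerances(history)

        today = nominal + 0.05 * nominal * rng.standard_normal(32)
        failures = int(np.sum((today < lo) | (today > hi)))
        print("spots out of tolerance today:", failures, "(flag the field if 6 or more)")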

  15. Improving IQ measurement in intellectual disabilities using true deviation from population norms

    PubMed Central

    2014-01-01

    Background Intellectual disability (ID) is characterized by global cognitive deficits, yet the very IQ tests used to assess ID have limited range and precision in this population, especially for more impaired individuals. Methods We describe the development and validation of a method of raw z-score transformation (based on general population norms) that ameliorates floor effects and improves the precision of IQ measurement in ID, using the Stanford Binet 5 (SB5) in fragile X syndrome (FXS; n = 106), the leading inherited cause of ID, and in individuals with idiopathic autism spectrum disorder (ASD; n = 205). We compared the distributional characteristics and Q-Q plots of the standardized scores with those of the deviation z-scores. Additionally, we examined the relationship between both scoring methods and multiple criterion measures. Results We found evidence that substantial and meaningful variation in cognitive ability on standardized IQ tests among individuals with ID is lost when converting raw scores to standardized scaled, index and IQ scores. Use of the deviation z-score method rectifies this problem and accounts for significant additional variance in criterion validation measures, above and beyond the usual IQ scores. Additionally, individual- and group-level cognitive strengths and weaknesses are recovered using deviation scores. Conclusion Traditional methods for generating IQ scores in lower-functioning individuals with ID are inaccurate and inadequate, leading to erroneously flat profiles. However, assessment of cognitive abilities is substantially improved by measuring true deviation in performance from standardization sample norms. This work has important implications for standardized test development, clinical assessment, and research for which IQ is an important measure of interest in individuals with neurodevelopmental disorders and other forms of cognitive impairment. PMID:26491488
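
    The transformation itself is elementary: express each raw score as a distance from the general-population standardization mean in population SD units, rather than mapping it through floor-limited scaled-score tables. The norms and raw scores below are invented for illustration.

        def deviation_z(raw, norm_mean, norm_sd):
            """Deviation z-score: distance of a raw subtest score from the
            general-population standardization mean, in population SD units.
            Unlike floor-limited scaled scores, z keeps resolving
            differences far below the mean."""
            return (raw - norm_mean) / norm_sd

        # hypothetical raw scores for two low-functioning examinees; both
        # would floor at the same minimum scaled score on the usual tables
        norm_mean, norm_sd = 24.0, 6.0      # illustrative population norms
        for raw in (4.0, 10.0):
            print(f"raw {raw:4.1f} -> z = {deviation_z(raw, norm_mean, norm_sd):+.2f}")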

  16. Application of Statistical Methods of Rain Rate Estimation to Data From The TRMM Precipitation Radar

    NASA Technical Reports Server (NTRS)

    Meneghini, R.; Jones, J. A.; Iguchi, T.; Okamoto, K.; Liao, L.; Busalacchi, Antonio J. (Technical Monitor)

    2000-01-01

    The TRMM Precipitation Radar is well suited to statistical methods in that the measurements over any given region are sparsely sampled in time. Moreover, the instantaneous rain rate estimates are often of limited accuracy at high rain rates because of attenuation effects and at light rain rates because of receiver sensitivity. For the estimation of the time-averaged rain characteristics over an area, both errors are relevant. By enlarging the space-time region over which the data are collected, the sampling error can be reduced. However, the bias and distortion of the estimated rain distribution generally will remain if estimates at the high and low rain rates are not corrected. In this paper we use the TRMM PR data to investigate the behavior of two statistical methods whose purpose is to estimate the rain rate over large space-time domains. Examination of large-scale rain characteristics provides a useful starting point. The high correlation between the mean and standard deviation of rain rate implies that the conditional distribution of this quantity can be approximated by a one-parameter distribution. This property is used to explore the behavior of the area-time-integral (ATI) methods, where the fractional area above a threshold is related to the mean rain rate. In the usual application of the ATI method a correlation is established between these quantities. However, if a particular form of the rain rate distribution is assumed and if the ratio of the mean to the standard deviation is known, then not only the mean but the full distribution can be extracted from a measurement of the fractional area above a threshold. The second method is an extension of this idea, where the distribution is estimated from data over a range of rain rates chosen in an intermediate range where the effects of attenuation and poor sensitivity can be neglected. The advantage of estimating the distribution itself rather than the mean value is that it yields the fraction of rain contributed by the light and heavy rain rates. This is useful in estimating the fraction of rainfall contributed by the rain rates that go undetected by the radar. The results at high rain rates provide a cross-check on the usual attenuation correction methods that are applied at the highest resolution of the instrument.
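
    The ATI inversion can be illustrated with a lognormal assumption, a common one-parameter choice for the conditional rain rate (assumed here for the sketch; this abstract does not commit to it). With the log-standard deviation fixed by a known mean-to-standard-deviation ratio, the fractional area above a threshold determines the mean:

        import numpy as np
        from scipy.stats import norm

        def mean_rain_from_ati(frac_above, tau, sigma):
            """Invert the fractional area above threshold tau for the
            area-mean rain rate, assuming a conditional lognormal whose log
            standard deviation sigma is known.
            P(R > tau) = Phi((mu - ln tau)/sigma)  =>  mu, then E[R]."""
            mu = np.log(tau) + sigma * norm.ppf(frac_above)
            return np.exp(mu + 0.5 * sigma ** 2)

        # check against a synthetic lognormal rain field
        rng = np.random.default_rng(6)
        mu_true, sigma = 0.5, 1.1
        rain = rng.lognormal(mu_true, sigma, 200_000)
        tau = 5.0                          # mm/h threshold
        frac = (rain > tau).mean()
        print("true mean:", rain.mean())
        print("ATI  mean:", mean_rain_from_ati(frac, tau, sigma))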

  17. Rapidly rotating neutron stars with a massive scalar field—structure and universal relations

    NASA Astrophysics Data System (ADS)

    Doneva, Daniela D.; Yazadjiev, Stoytcho S.

    2016-11-01

    We construct rapidly rotating neutron star models in scalar-tensor theories with a massive scalar field. The fact that the scalar field has nonzero mass leads to very interesting results, since the allowed range of values of the coupling parameters is significantly broadened. Deviations from pure general relativity can be very large for values of the parameters that are in agreement with the observations. We find that rapid rotation can magnify the differences several times compared to the static case. The universal relations between the normalized moment of inertia and quadrupole moment are also investigated for both the slowly and rapidly rotating cases. The results show that these relations are still EOS-independent to a large extent, and the deviations from pure general relativity can be large. This places the massive scalar-tensor theories amongst the few alternative theories of gravity that can be tested via the universal I-Love-Q relations.

  18. Optimized method for the quantification of pyruvic acid in onions by microplate reader and confirmation by high resolution mass spectra.

    PubMed

    Metrani, Rita; Jayaprakasha, G K; Patil, Bhimanagouda S

    2018-03-01

    The present study describes a rapid microplate method to determine pyruvic acid content in different varieties of onions. Onion juice was treated with 2,4-dinitrophenylhydrazine to obtain the hydrazone, which was further treated with potassium hydroxide to give a stable colored complex. The stability of the potassium complex was extended up to two hours, and the structures of the hydrazones were confirmed by LC-MS for the first time. The method was optimized by testing different bases and acids with varying concentrations of dinitrophenylhydrazine to obtain a stable color, and the results were comparable to those of the developed method. Repeatability and precision showed <9% relative standard deviation. Moreover, sweet onion juice was stored for four weeks at different temperatures to assess stability; the pyruvate remained stable at all temperatures except 25°C. Thus, the developed method has good potential for determining pungency in a large number of onions in a short time using a minimal amount of reagents. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Error simulation of paired-comparison-based scaling methods

    NASA Astrophysics Data System (ADS)

    Cui, Chengwu

    2000-12-01

    Subjective image quality measurement usually resorts to psychophysical scaling. However, it is difficult to evaluate the inherent precision of these scaling methods. Without knowing the potential errors of the measurement, subsequent use of the data can be misleading. In this paper, the errors on scaled values derived from paired-comparison-based scaling methods are simulated with a randomly introduced proportion of choice errors that follow the binomial distribution. Simulation results are given for various combinations of the number of stimuli and the sampling size. The errors are presented in the form of the average standard deviation of the scaled values and can be fitted reasonably well with an empirical equation that can be used for scaling error estimation and measurement design. The simulation shows that paired-comparison-based scaling methods can have large errors on the derived scaled values when the sampling size and the number of stimuli are small. Examples are also given to show the potential errors on actually scaled values of color image prints as measured by the method of paired comparison.
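
    A simulation of this kind is short to set up. The sketch below assumes Thurstone Case V scaling (one plausible paired-comparison model; the paper's exact scaling procedure is not specified in this abstract), draws binomial choice counts, and reports the average standard deviation of the recovered scale values:

        import numpy as np
        from scipy.stats import norm

        def thurstone_scale(P):
            """Thurstone Case V: scale values are column means of the
            z-scored choice-proportion matrix."""
            Z = norm.ppf(np.clip(P, 0.01, 0.99))   # clip to avoid infinite z
            return Z.mean(axis=0)

        def avg_scale_sd(true_scale, n_obs, n_rep=500, seed=7):
            """Average std. deviation of recovered scale values when each
            paired comparison is a binomial sample of n_obs judgments."""
            rng = np.random.default_rng(seed)
            k = true_scale.size
            p_true = norm.cdf(true_scale[None, :] - true_scale[:, None])
            est = np.empty((n_rep, k))
            for r in range(n_rep):
                P = rng.binomial(n_obs, p_true) / n_obs
                est[r] = thurstone_scale(P)
            return est.std(axis=0).mean()

        true_scale = np.linspace(0.0, 2.0, 6)      # 6 stimuli
        for n_obs in (10, 30, 100):
            print(f"{n_obs} judgments/pair -> avg scale-value SD: "
                  f"{avg_scale_sd(true_scale, n_obs):.3f}")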

  20. Evaluation of three different validation procedures regarding the accuracy of template-guided implant placement: an in vitro study.

    PubMed

    Vasak, Christoph; Strbac, Georg D; Huber, Christian D; Lettner, Stefan; Gahleitner, André; Zechner, Werner

    2015-02-01

    The study aims to evaluate the accuracy of the NobelGuide™ (Medicim/Nobel Biocare, Göteborg, Sweden) concept while maximally reducing the influence of clinical and surgical parameters. Moreover, the study was to compare and validate two validation procedures against a reference method. Overall, 60 implants were placed in 10 artificial edentulous mandibles according to the NobelGuide™ protocol. For merging the pre- and postoperative DICOM data sets, three different fusion methods (Triple Scan Technique, NobelGuide™ Validation software, and AMIRA® software [VSG - Visualization Sciences Group, Burlington, MA, USA] as reference) were applied. Discrepancies between the virtual and the actual implant positions were measured. The mean deviations measured with AMIRA® were 0.49 mm (implant shoulder), 0.69 mm (implant apex), and 1.98° (implant axis). The Triple Scan Technique as well as the NobelGuide™ Validation software revealed similar deviations compared with the reference method. A significant correlation between angular and apical deviations was seen (r = 0.53; p < .001). A greater implant diameter was associated with greater deviations (p = .03). The Triple Scan Technique as a system-independent validation procedure as well as the NobelGuide™ Validation software are in accordance with the AMIRA® software. The NobelGuide™ system showed similar or smaller spatial and angular deviations compared with others. © 2013 Wiley Periodicals, Inc.

  1. Selection of vegetation indices for mapping the sugarcane condition around the oil and gas field of North West Java Basin, Indonesia

    NASA Astrophysics Data System (ADS)

    Muji Susantoro, Tri; Wikantika, Ketut; Saepuloh, Asep; Handoyo Harsolumakso, Agus

    2018-05-01

    Selection of vegetation indices in plant mapping is needed to provide the best information on plant conditions. The methods used in this research are standard deviation analysis and linear regression. This research aimed to determine the vegetation indices to be used for mapping sugarcane conditions around oil and gas fields. The data used in this study are Landsat 8 OLI/TIRS. The standard deviation analysis of the 23 vegetation indices with 27 samples yielded the six indices with the highest standard deviations, namely GRVI, SR, NLI, SIPI, GEMI and LAI; the standard deviation values are 0.47, 0.43, 0.30, 0.17, 0.16 and 0.13. Regression correlation analysis of the 23 vegetation indices with 280 samples yielded six vegetation indices, namely NDVI, ENDVI, GDVI, VARI, LAI and SIPI, selected on the basis of regression correlations with R2 values not lower than 0.8. The combined analysis of the standard deviation and the regression correlation yielded five vegetation indices, namely NDVI, ENDVI, GDVI, LAI and SIPI. The results of both methods show that combining the two is needed to produce a good analysis of sugarcane conditions. This was verified through field surveys and showed good results for the prediction of microseepages.
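
    The spread-based selection criterion is simple to reproduce. The sketch below computes one representative index (NDVI) from made-up Landsat-style reflectances and ranks candidate indices by their standard deviation over the field samples; the band values and the SR/GRVI formulas used here are illustrative assumptions.

        import numpy as np

        def ndvi(nir, red):
            """Normalized Difference Vegetation Index."""
            return (nir - red) / (nir + red)

        def rank_by_std(index_samples):
            """Rank candidate indices by the standard deviation of their
            values over the field samples (larger spread was the first
            selection criterion above)."""
            return sorted(index_samples.items(),
                          key=lambda kv: np.std(kv[1]), reverse=True)

        # hypothetical surface reflectances at 27 field points
        rng = np.random.default_rng(9)
        red = rng.uniform(0.03, 0.10, 27)
        green = rng.uniform(0.05, 0.12, 27)
        nir = rng.uniform(0.25, 0.55, 27)
        candidates = {
            "NDVI": ndvi(nir, red),
            "SR": nir / red,            # simple ratio
            "GRVI": nir / green,        # green ratio vegetation index
        }
        for name, values in rank_by_std(candidates):
            print(f"{name}: std = {np.std(values):.3f}")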

  2. Improvement of Accuracy in Environmental Dosimetry by TLD Cards Using Three-dimensional Calibration Method.

    PubMed

    HosseiniAliabadi, S J; Hosseini Pooya, S M; Afarideh, H; Mianji, F

    2015-06-01

    The angular dependence of the response of TLD cards may cause deviations from the true value in the results of environmental dosimetry, since TLDs may be exposed to radiation at different angles of incidence from the surrounding area. A 3D arrangement of TLD cards was calibrated isotropically in a standard radiation field to evaluate the improvement in measurement accuracy for environmental dosimetry. Three personal TLD cards were placed orthogonally in a cylindrical holder and calibrated using 1D and 3D calibration methods. The dosimeter was then used simultaneously with a reference instrument in a real radiation field, measuring the accumulated dose within a time interval. The results show that the accuracy of measurement was improved by 6.5% using the 3D calibration factor in comparison with the normal 1D calibration method. This system can be utilized in large-scale environmental monitoring with higher accuracy.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaidheeswaran, Avinash; Shaffer, Franklin; Gopalan, Balaji

    Here, the statistics of fluctuating velocity components are studied in the riser of a closed-loop circulating fluidized bed with fluid catalytic cracking catalyst particles. Our analysis shows distinct similarities as well as deviations compared to existing theories and bench-scale experiments. The study confirms anisotropic and non-Maxwellian distribution of fluctuating velocity components. The velocity distribution functions (VDFs) corresponding to transverse fluctuations exhibit symmetry, and follow a stretched-exponential behavior up to three standard deviations. The form of the transverse VDF is largely determined by interparticle interactions. The tails become more overpopulated with an increase in particle loading. The observed deviations from the Gaussian distribution are represented using the leading order term in the Sonine expansion, which is commonly used to approximate the VDFs in kinetic theory for granular flows. The vertical fluctuating VDFs are asymmetric and the skewness shifts as the wall is approached. In comparison to transverse fluctuations, the vertical VDF is determined by the local hydrodynamics. This is an observation of particle velocity fluctuations in a large-scale system and their quantitative comparison with the Maxwell-Boltzmann statistics.

  4. Persistent stability of a chaotic system

    NASA Astrophysics Data System (ADS)

    Huber, Greg; Pradas, Marc; Pumir, Alain; Wilkinson, Michael

    2018-02-01

    We report that trajectories of a one-dimensional model for inertial particles in a random velocity field can remain stable for a surprisingly long time, despite the fact that the system is chaotic. We provide a detailed quantitative description of this effect by developing the large-deviation theory for fluctuations of the finite-time Lyapunov exponent of this system. Specifically, the determination of the entropy function for the distribution reduces to the analysis of a Schrödinger equation, which is tackled by semi-classical methods. The system has 'generic' instability properties, and we consider the broader implications of our observation of long-term stability in chaotic systems.

  5. Performance of the PARCS Testbed Cesium Fountain Frequency Standard

    NASA Technical Reports Server (NTRS)

    Enzer, Daphna G.; Klipstein, William M.

    2004-01-01

    A cesium fountain frequency standard has been developed as a ground testbed for the PARCS (Primary Atomic Reference Clock in Space) experiment, an experiment intended to fly on the International Space Station. We report on the performance of the fountain and describe some of the implementations motivated in large part by flight considerations, but of relevance for ground fountains. In particular, we report on a new technique for delivering cooling and trapping laser beams to the atom collection region, in which a given beam is recirculated three times, effectively providing much more optical power than traditional configurations. Allan deviations down to 10 have been achieved with this method.

  6. Rosin-enabled ultraclean and damage-free transfer of graphene for large-area flexible organic light-emitting diodes

    PubMed Central

    Zhang, Zhikun; Du, Jinhong; Zhang, Dingdong; Sun, Hengda; Yin, Lichang; Ma, Laipeng; Chen, Jiangshan; Ma, Dongge; Cheng, Hui-Ming; Ren, Wencai

    2017-01-01

    The large polymer particle residue generated during the transfer process of graphene grown by chemical vapour deposition is a critical issue that limits its use in large-area thin-film devices such as organic light-emitting diodes. The available lighting areas of the graphene-based organic light-emitting diodes reported so far are usually <1 cm2. Here we report a transfer method using rosin as a support layer, whose weak interaction with graphene, good solubility and sufficient strength enable ultraclean and damage-free transfer. The transferred graphene has a low surface roughness with an occasional maximum residue height of about 15 nm and a uniform sheet resistance of 560 Ω per square with about 1% deviation over a large area. Such clean, damage-free graphene has produced a four-inch monolithic flexible graphene-based organic light-emitting diode with a high brightness of about 10,000 cd m−2, which can already satisfy the requirements for lighting sources and displays. PMID:28233778

  7. SU-F-J-177: A Novel Image Analysis Technique (center Pixel Method) to Quantify End-To-End Tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wen, N; Chetty, I; Snyder, K

    Purpose: To implement a novel image analysis technique, “center pixel method”, to quantify the end-to-end test accuracy of a frameless, image guided stereotactic radiosurgery system. Methods: The localization accuracy was determined by delivering radiation to an end-to-end prototype phantom. The phantom was scanned with 0.8 mm slice thickness. The treatment isocenter was placed at the center of the phantom. In the treatment room, CBCT images of the phantom (kVp=77, mAs=1022, slice thickness 1 mm) were acquired and registered to the reference CT images. 6D couch corrections were applied based on the registration results. Electronic Portal Imaging Device (EPID)-based Winston Lutz (WL) tests were performed to quantify the targeting errors of the system at 15 combinations of gantry, collimator and couch positions. The images were analyzed using two different methods. a) The classic method: the deviation was calculated by measuring the radial distance between the center of the central BB and the full width at half maximum of the radiation field. b) The center pixel method: since the imager projection offset from the treatment isocenter was known from the IsoCal calibration, the deviation was determined between the center of the BB and the central pixel of the imager panel. Results: Using the automatic registration method to localize the phantom and the classic method of measuring the deviation of the BB center, the mean and standard deviation of the radial distance were 0.44 ± 0.25, 0.47 ± 0.26, and 0.43 ± 0.13 mm for the jaw-, MLC- and cone-defined field sizes, respectively. When the center pixel method was used, the mean and standard deviation were 0.32 ± 0.18, 0.32 ± 0.17, and 0.32 ± 0.19 mm, respectively. Conclusion: Our results demonstrated that the center pixel method accurately analyzes the WL images to evaluate the targeting accuracy of the radiosurgery system. The work was supported by a Research Scholar Grant, RSG-15-137-01-CCE from the American Cancer Society.

  8. A large deviations principle for stochastic flows of viscous fluids

    NASA Astrophysics Data System (ADS)

    Cipriano, Fernanda; Costa, Tiago

    2018-04-01

    We study the well-posedness of a stochastic differential equation on the two-dimensional torus T², driven by an infinite dimensional Wiener process with drift in the Sobolev space L²(0, T; H¹(T²)). The solution corresponds to a stochastic Lagrangian flow in the sense of DiPerna-Lions. By taking into account that the motion of a viscous incompressible fluid on the torus can be described through a suitable stochastic differential equation of the previous type, we study the inviscid limit. By establishing a large deviations principle, we show that, as the viscosity goes to zero, the Lagrangian stochastic Navier-Stokes flow approaches the deterministic Lagrangian Euler flow with an exponential rate function.

  9. On Deviations between Observed and Theoretically Estimated Values on Additivity-Law Failures

    NASA Astrophysics Data System (ADS)

    Nayatani, Yoshinobu; Sobagaki, Hiroaki

    The authors have reported in previous studies that the average observed results are about half of the corresponding predictions in experiments with large additivity-law failures. One of the reasons for these deviations is studied and clarified here using the original observed data on additivity-law failures from the Nakano experiment. The observations and their analyses clarified that it is essentially difficult to obtain good agreement between the average observed results and the corresponding theoretical predictions in experiments with large additivity-law failures. This is caused by a kind of unavoidable psychological pressure on the subjects who participated in the experiments. We should be satisfied with agreement in trend between them.

  10. Exact Large-Deviation Statistics for a Nonequilibrium Quantum Spin Chain

    NASA Astrophysics Data System (ADS)

    Žnidarič, Marko

    2014-01-01

    We consider a one-dimensional XX spin chain in a nonequilibrium setting with a Lindblad-type boundary driving. By calculating the large-deviation rate function in the thermodynamic limit, a generalization of the free energy to a nonequilibrium setting, we obtain the complete distribution of the current, including closed expressions for lower-order cumulants. We also identify two phase-transition-like behaviors: in the thermodynamic limit the current probability distribution becomes discontinuous, and at maximal driving the range of possible current values changes discontinuously. In the thermodynamic limit the current has a finite upper and lower bound. We also explicitly confirm the nonequilibrium fluctuation relation and show that the current distribution is the same under the mapping of the coupling strength Γ→1/Γ.

  11. Quality assurance of proton beams using a multilayer ionization chamber system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dhanesar, Sandeep; Sahoo, Narayan; Kerr, Matthew

    2013-09-15

    Purpose: The measurement of percentage depth-dose (PDD) distributions for the quality assurance of clinical proton beams is most commonly performed with a computerized water tank dosimetry system with an ionization chamber, commonly referred to as a water tank. Although the accuracy and reproducibility of this method are well established, it can be time-consuming if a large number of measurements are required. In this work the authors evaluate the linearity, reproducibility, sensitivity to field size, accuracy, and time-savings of another system: the Zebra, a multilayer ionization chamber system. Methods: The Zebra, consisting of 180 parallel-plate ionization chambers with 2 mm resolution, was used to measure depth-dose distributions. The measurements were performed for scattered and scanned proton pencil beams of multiple energies delivered by the Hitachi PROBEAT synchrotron-based delivery system. For scattered beams, the Zebra-measured depth-dose distributions were compared with those measured with the water tank. The principal descriptors extracted for comparisons were: range, the depth of the distal 90% dose; spread-out Bragg peak (SOBP) length, the region between the proximal 95% and distal 90% dose; and distal-dose fall off (DDF), the region between the distal 80% and 20% dose. For scanned beams, the Zebra-measured ranges were compared with those acquired using a Bragg peak chamber during commissioning. Results: The Zebra demonstrated better than 1% reproducibility and monitor unit linearity. The response of the Zebra was found to be sensitive to radiation field sizes greater than 12.5 × 12.5 cm; hence, the measurements used to determine accuracy were performed using a field size of 10 × 10 cm. For the scattered proton beams, PDD distributions showed 1.5% agreement within the SOBP, and 3.8% outside. Range values agreed within −0.1 ± 0.4 mm, with a maximum deviation of 1.2 mm. SOBP length values agreed within 0 ± 2 mm, with a maximum deviation of 6 mm. DDF values agreed within 0.3 ± 0.1 mm, with a maximum deviation of 0.6 mm. For the scanned proton pencil beams, Zebra and Bragg peak chamber range values demonstrated agreement of 0.0 ± 0.3 mm with a maximum deviation of 1.3 mm. The setup and measurement times for the Zebra were, respectively, 3 and 20 times shorter than for the water tank measurements. Conclusions: Our investigation shows that the Zebra can be useful not only for fast but also for accurate measurements of the depth-dose distributions of both scattered and scanned proton beams. The analysis of a large set of measurements shows that the commonly assessed beam quality parameters obtained with the Zebra are within the acceptable variations specified by the manufacturer for our delivery system.

  12. Iterative algorithm for reconstructing rotationally asymmetric surface deviation with pixel-level spatial resolution

    NASA Astrophysics Data System (ADS)

    Quan, Haiyang; Wu, Fan; Hou, Xi

    2015-10-01

    A new method for reconstructing rotationally asymmetric surface deviation with pixel-level spatial resolution is proposed. It is based on a basic iterative scheme and accelerates the Gauss-Seidel method by introducing an acceleration parameter. This modified successive over-relaxation (SOR) method is effective for solving the rotationally asymmetric components with pixel-level spatial resolution, without the use of a fitting procedure. Compared to the Jacobi and Gauss-Seidel methods, the modified SOR method with an optimal relaxation factor converges much faster and saves computational cost and memory without reducing accuracy. This has been confirmed by real experimental results.
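
    As a minimal illustration of the relaxation idea described above (a generic SOR solver for Ax = b, not the authors' surface-reconstruction code; the matrix and parameters are illustrative), note that omega = 1 recovers the Gauss-Seidel method, while a well-chosen omega > 1 accelerates convergence:

        import numpy as np

        def sor_solve(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
            """Successive over-relaxation for Ax = b (A assumed suitable, e.g. SPD)."""
            n = len(b)
            x = np.zeros(n)
            for _ in range(max_iter):
                x_old = x.copy()
                for i in range(n):
                    sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                    gs_update = (b[i] - sigma) / A[i, i]
                    # omega = 1 is plain Gauss-Seidel; omega > 1 over-relaxes.
                    x[i] = (1 - omega) * x_old[i] + omega * gs_update
                if np.linalg.norm(x - x_old) < tol:
                    break
            return x

        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        b = np.array([1.0, 2.0])
        print(sor_solve(A, b))  # converges to the exact solution of this small SPD system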

  13. Some limit theorems for ratios of order statistics from uniform random variables.

    PubMed

    Xu, Shou-Fang; Miao, Yu

    2017-01-01

    In this paper, we study the ratios of order statistics based on samples drawn from a uniform distribution and establish some limit properties, such as the almost sure central limit theorem, the large deviation principle, the Marcinkiewicz-Zygmund law of large numbers and complete convergence.

  14. Toward unbiased determination of the redshift evolution of Lyman-alpha forest clouds

    NASA Technical Reports Server (NTRS)

    Lu, Limin; Zuo, Lin

    1994-01-01

    The possibility of using D(sub A), the mean depression of a quasar spectrum due to Ly-alpha forest absorption, to study the number density evolution of the Ly-alpha forest clouds is examined in some detail. Current D(sub A) measurements are made against a continuum that is a power-law extrapolation from the continuum longward of Ly-alpha emission. Compared to the line-counting approach, the D(sub A)-method has the advantage that the D(sub A) measurements are not affected by line-blending effects. However, we find using low-redshift quasar spectra obtained with the Hubble Space Telescope (HST), where the true continuum in the Ly-alpha forest can be estimated fairly reliably because of the much lower density of the Ly-alpha forest lines, that the extrapolated continuum often deviates systematically from the true continuum in the forest region. Such systematic continuum errors introduce large errors in the D(sub A) measurements. The current D(sub A) measurements may also be significantly biased by the possible presence of the Gunn-Peterson absorption. We propose a modification to the existing D(sub A)-method, namely, to measure D(sub A) against a locally established continuum in the Ly-alpha forest. Under conditions that the quasar spectrum has good resolution and S/N to allow for a reliable estimate of the local continuum in the Ly-alpha forest, the modified D(sub A) measurements should be largely free of the systematic uncertainties suffered by the existing D(sub A) measurements. We also introduce a formalism based on the work of Zuo (1993) to simplify the application of the D(sub A)-method(s) to real data. We discuss the merits and limitations of the modified D(sub A)-method, and conclude that it is a useful alternative. Our findings that the extrapolated continuum from longward of Ly-alpha emission often deviates systematically from the true continuum in the Ly-alpha forest present a major problem in the study of the Gunn-Peterson absorption.

  15. Evaluation of a Wipe Surface Sample Method for Collection of Bacillus Spores from Nonporous Surfaces▿

    PubMed Central

    Brown, Gary S.; Betty, Rita G.; Brockmann, John E.; Lucero, Daniel A.; Souza, Caroline A.; Walsh, Kathryn S.; Boucher, Raymond M.; Tezak, Mathew; Wilson, Mollye C.; Rudolph, Todd

    2007-01-01

    Polyester-rayon blend wipes were evaluated for efficiency of extraction and recovery of powdered Bacillus atrophaeus spores from stainless steel and painted wallboard surfaces. Method limits of detection were also estimated for both surfaces. The observed mean efficiency of polyester-rayon blend wipe recovery from stainless steel was 0.35 with a standard deviation of ±0.12, and for painted wallboard it was 0.29 with a standard deviation of ±0.15. Evaluation of a sonication extraction method for the polyester-rayon blend wipes produced a mean extraction efficiency of 0.93 with a standard deviation of ±0.09. Wipe recovery quantitative limits of detection were estimated at 90 CFU per unit of stainless steel sample area and 105 CFU per unit of painted wallboard sample area. The method recovery efficiency and limits of detection established in this work provide useful guidance for the planning of incident response environmental sampling following the release of a biological agent such as Bacillus anthracis. PMID:17122390

  16. Evaluation of a wipe surface sample method for collection of Bacillus spores from nonporous surfaces.

    PubMed

    Brown, Gary S; Betty, Rita G; Brockmann, John E; Lucero, Daniel A; Souza, Caroline A; Walsh, Kathryn S; Boucher, Raymond M; Tezak, Mathew; Wilson, Mollye C; Rudolph, Todd

    2007-02-01

    Polyester-rayon blend wipes were evaluated for efficiency of extraction and recovery of powdered Bacillus atrophaeus spores from stainless steel and painted wallboard surfaces. Method limits of detection were also estimated for both surfaces. The observed mean efficiency of polyester-rayon blend wipe recovery from stainless steel was 0.35 with a standard deviation of ±0.12, and for painted wallboard it was 0.29 with a standard deviation of ±0.15. Evaluation of a sonication extraction method for the polyester-rayon blend wipes produced a mean extraction efficiency of 0.93 with a standard deviation of ±0.09. Wipe recovery quantitative limits of detection were estimated at 90 CFU per unit of stainless steel sample area and 105 CFU per unit of painted wallboard sample area. The method recovery efficiency and limits of detection established in this work provide useful guidance for the planning of incident response environmental sampling following the release of a biological agent such as Bacillus anthracis.

  17. Skewness and kurtosis analysis for non-Gaussian distributions

    NASA Astrophysics Data System (ADS)

    Celikoglu, Ahmet; Tirnakli, Ugur

    2018-06-01

    In this paper we address a number of pitfalls regarding the use of kurtosis as a measure of deviations from the Gaussian. We treat kurtosis in both its standard definition and that which arises in q-statistics, namely q-kurtosis. We have recently shown that the relation proposed by Cristelli et al. (2012) between skewness and kurtosis can only be verified for relatively small data sets, independently of the type of statistics chosen; however, it fails for sufficiently large data sets if the fourth moment of the distribution is finite. For infinite fourth moments, kurtosis is not defined as the size of the data set tends to infinity. For distributions with finite fourth moments, the size N of the data set at which the standard kurtosis saturates to a fixed value depends on the deviation of the original distribution from the Gaussian. Nevertheless, using kurtosis as a criterion for deciding which distribution deviates further from the Gaussian can be misleading for small data sets, even for distributions with finite fourth moments. Turning to q-statistics, we find that although the value of q-kurtosis is finite in the range 0 < q < 3, this quantity is not useful for comparing different non-Gaussian distributed data sets unless the appropriate q value, which truly characterizes the data set of interest, is chosen. Finally, we propose a method to determine the correct q value and thereby to compute the q-kurtosis of q-Gaussian distributed data sets.
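
    To illustrate the saturation behavior discussed above (a generic statistical demonstration, not the authors' code), the sketch below estimates the sample excess kurtosis of a Student-t distribution at increasing sample sizes; for ν > 4 the fourth moment is finite and the estimate slowly settles near the theoretical value 6/(ν − 4), with the required N growing as ν approaches 4:

        import numpy as np
        from scipy.stats import kurtosis, t

        rng = np.random.default_rng(0)
        nu = 6.0                    # degrees of freedom; fourth moment finite for nu > 4
        theory = 6.0 / (nu - 4.0)   # theoretical excess kurtosis of the Student-t

        for n in [10**2, 10**3, 10**4, 10**5, 10**6]:
            sample = t.rvs(nu, size=n, random_state=rng)
            print(f"N={n:>8d}  sample kurtosis={kurtosis(sample):6.2f}  theory={theory:.2f}")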

  18. Uncertainty Quantification of GEOS-5 L-band Radiative Transfer Model Parameters Using Bayesian Inference and SMOS Observations

    NASA Technical Reports Server (NTRS)

    DeLannoy, Gabrielle J. M.; Reichle, Rolf H.; Vrugt, Jasper A.

    2013-01-01

    Uncertainties in L-band (1.4 GHz) radiative transfer modeling (RTM) affect the simulation of brightness temperatures (Tb) over land and the inversion of satellite-observed Tb into soil moisture retrievals. In particular, accurate estimates of the microwave soil roughness, vegetation opacity and scattering albedo for large-scale applications are difficult to obtain from field studies and often lack an uncertainty estimate. Here, a Markov Chain Monte Carlo (MCMC) simulation method is used to determine satellite-scale estimates of RTM parameters and their posterior uncertainty by minimizing the misfit between long-term averages and standard deviations of simulated and observed Tb at a range of incidence angles, at horizontal and vertical polarization, and for morning and evening overpasses. Tb simulations are generated with the Goddard Earth Observing System (GEOS-5) and confronted with Tb observations from the Soil Moisture Ocean Salinity (SMOS) mission. The MCMC algorithm suggests that the relative uncertainty of the RTM parameter estimates is typically less than 25% of the maximum a posteriori density (MAP) parameter value. Furthermore, the actual root-mean-square differences in long-term Tb averages and standard deviations are found consistent with the respective estimated total simulation and observation error standard deviations of m = 3.1 K and s = 2.4 K. It is also shown that the MAP parameter values estimated through MCMC simulation are in close agreement with those obtained with Particle Swarm Optimization (PSO).

  19. Efficient and Unbiased Sampling of Biomolecular Systems in the Canonical Ensemble: A Review of Self-Guided Langevin Dynamics

    PubMed Central

    Wu, Xiongwu; Damjanovic, Ana; Brooks, Bernard R.

    2013-01-01

    This review provides a comprehensive description of the self-guided Langevin dynamics (SGLD) and the self-guided molecular dynamics (SGMD) methods and their applications. Example systems are included to provide guidance on optimal application of these methods in simulation studies. SGMD/SGLD has an enhanced ability to overcome energy barriers and accelerate rare events to affordable time scales. It has been demonstrated that with moderate parameters, SGLD can routinely cross energy barriers of 20 kT at the rate at which molecular dynamics (MD) or Langevin dynamics (LD) crosses 10 kT barriers. The core of these methods is the use of local averages of forces and momenta in a direct manner that can preserve the canonical ensemble. The use of such local averages results in methods where low frequency motion “borrows” energy from high frequency degrees of freedom when a barrier is approached and then returns that excess energy after the barrier is crossed. This self-guiding effect also results in accelerated diffusion that enhances conformational sampling efficiency. The resulting ensemble with SGLD deviates in a small way from the canonical ensemble, and that deviation can be corrected with either an on-the-fly or a post-processing reweighting procedure that provides an excellent canonical ensemble for systems with a limited number of accelerated degrees of freedom. Since reweighting procedures are generally not size extensive, a newer method, SGLDfp, uses local averages of both momenta and forces to preserve the ensemble without reweighting. The SGLDfp approach is size extensive and can be used to accelerate low frequency motion in large systems, or in systems with explicit solvent where solvent diffusion is also to be enhanced. Since these methods are direct and straightforward, they can be used in conjunction with many other sampling methods or free energy methods by simply replacing the integration of degrees of freedom that are normally sampled by MD or LD. PMID:23913991
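
    The following toy sketch conveys the core mechanism named above in one dimension (our own simplified reading, not the authors' SGMD/SGLD implementation; the guiding factor lambda_g, the averaging time t_avg and the harmonic force are illustrative): maintain a running local average of the momentum and add a guiding force proportional to it on top of ordinary Langevin dynamics.

        import numpy as np

        rng = np.random.default_rng(1)

        def sgld_step(x, p, p_avg, dt, mass=1.0, gamma=1.0, kT=1.0,
                      t_avg=10.0, lambda_g=0.2, force=lambda x: -x):
            """One Euler step of a toy self-guided Langevin scheme (illustrative only)."""
            w = dt / t_avg
            p_avg = (1.0 - w) * p_avg + w * p      # local (exponential) momentum average
            noise = np.sqrt(2.0 * gamma * mass * kT * dt) * rng.standard_normal()
            # Langevin force plus a guiding force proportional to the local average.
            p = p + dt * (force(x) - gamma * p + lambda_g * p_avg) + noise
            x = x + dt * p / mass
            return x, p, p_avg

        x, p, p_avg = 1.0, 0.0, 0.0
        for _ in range(10_000):
            x, p, p_avg = sgld_step(x, p, p_avg, dt=0.01)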

  20. Estimation of genetic variance for macro- and micro-environmental sensitivity using double hierarchical generalized linear models

    PubMed Central

    2013-01-01

    Background Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature) and called macro-environmental or unknown and called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate bias and precision of resulting estimates of genetic parameters and to develop and evaluate use of Akaike’s information criterion using h-likelihood to select the best fitting model. Methods We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for residual variance to estimate genetic variance for micro-environmental sensitivity using a double hierarchical generalized linear model in ASReml. Akaike’s information criterion was constructed as model selection criterion using approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate bias and precision of estimated genetic parameters. Results Designs with 100 sires, each with at least 100 offspring, are required to have standard deviations of estimated variances lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically, no bias was observed for estimates of any of the parameters. Using Akaike’s information criterion the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for micro- and macro-environmental sensitivities existed. Conclusion The algorithm and model selection criterion presented here can contribute to better understand genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires each with 100 offspring. PMID:23827014

  1. Determination of real machine-tool settings and minimization of real surface deviation by computerized inspection

    NASA Technical Reports Server (NTRS)

    Litvin, Faydor L.; Kuan, Chihping; Zhang, YI

    1991-01-01

    A numerical method is developed for the minimization of deviations of real tooth surfaces from the theoretical ones. The deviations are caused by errors of manufacturing, errors of installment of machine-tool settings and distortion of surfaces by heat-treatment. The deviations are determined by coordinate measurements of gear tooth surfaces. The minimization of deviations is based on the proper correction of the initially applied machine-tool settings. The contents of the accomplished research project cover the following topics: (1) description of the principle of coordinate measurements of gear tooth surfaces; (2) derivation of the theoretical tooth surfaces (with examples of surfaces of hypoid gears and references for spiral bevel gears); (3) determination of the reference point and the grid; (4) determination of the deviations of real tooth surfaces at the points of the grid; and (5) determination of the required corrections of machine-tool settings for minimization of deviations. The procedure for minimization of deviations is based on the numerical solution of an overdetermined system of n linear equations in m unknowns (m ≪ n), where n is the number of points of measurements and m is the number of parameters of applied machine-tool settings to be corrected. The developed approach is illustrated with numerical examples.
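
    The correction step in item (5) is an overdetermined linear least-squares problem; the sketch below (illustrative names and random data, not the authors' code) solves for the m setting corrections that minimize the n measured deviations in the least-squares sense, assuming a known sensitivity matrix relating unit setting changes to surface deviations at the grid points.

        import numpy as np

        rng = np.random.default_rng(2)
        n, m = 45, 6                 # n measured grid points, m machine-tool settings (m << n)
        S = rng.normal(size=(n, m))  # sensitivity matrix: d(deviation_i)/d(setting_j), assumed known
        d = rng.normal(size=n)       # measured deviations of the real surface at the grid points

        # Corrections dq minimize ||S @ dq + d||, i.e. the residual deviations after correction.
        dq, *_ = np.linalg.lstsq(S, -d, rcond=None)
        print("setting corrections:", dq)
        print("rms deviation before/after:",
              np.sqrt(np.mean(d**2)), np.sqrt(np.mean((S @ dq + d)**2)))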

  2. Non-contact measurement of helicopter device position in wind tunnels with the use of optical videogrammetry method

    NASA Astrophysics Data System (ADS)

    Kuruliuk, K. A.; Kulesh, V. P.

    2016-10-01

    An optical videogrammetry method using one digital camera was developed for non-contact measurements of geometric shape parameters, position and motion of models and structural elements of aircraft in experimental aerodynamics. Tests using this method were conducted to measure the six components (three linear and three angular) of the real position of a helicopter device in wind tunnel flow. The distance between the camera and the test object was 15 meters. It was shown in practice that, under aerodynamic experiment conditions, the instrumental measurement error (standard deviation) for angular and linear displacements of the helicopter device does not exceed 0.02° and 0.3 mm, respectively. Analysis of the results shows that at minimum rotor thrust the deviations are systematic and generally within ±0.2°. Angular deviations grow with increasing rotor thrust.

  3. 48 CFR 1.401 - Definition.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 1 2013-10-01 2013-10-01 false Definition. 1.401 Section... ACQUISITION REGULATIONS SYSTEM Deviations from the FAR 1.401 Definition. Deviation means any one or... definition in 2.101), contract clause (see definition in 2.101), method, or practice of conducting...

  4. 48 CFR 1.401 - Definition.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false Definition. 1.401 Section... ACQUISITION REGULATIONS SYSTEM Deviations from the FAR 1.401 Definition. Deviation means any one or... definition in 2.101), contract clause (see definition in 2.101), method, or practice of conducting...

  5. 48 CFR 1.401 - Definition.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 1 2012-10-01 2012-10-01 false Definition. 1.401 Section... ACQUISITION REGULATIONS SYSTEM Deviations from the FAR 1.401 Definition. Deviation means any one or... definition in 2.101), contract clause (see definition in 2.101), method, or practice of conducting...

  6. 48 CFR 1.401 - Definition.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 1 2014-10-01 2014-10-01 false Definition. 1.401 Section... ACQUISITION REGULATIONS SYSTEM Deviations from the FAR 1.401 Definition. Deviation means any one or... definition in 2.101), contract clause (see definition in 2.101), method, or practice of conducting...

  7. Clustering biomolecular complexes by residue contacts similarity.

    PubMed

    Rodrigues, João P G L M; Trellet, Mikaël; Schmitz, Christophe; Kastritis, Panagiotis; Karaca, Ezgi; Melquiond, Adrien S J; Bonvin, Alexandre M J J

    2012-07-01

    Inaccuracies in computational molecular modeling methods are often counterweighed by brute-force generation of a plethora of putative solutions. These are then typically sieved via structural clustering based on similarity measures such as the root mean square deviation (RMSD) of atomic positions. Albeit widely used, these measures suffer from several theoretical and technical limitations (e.g., choice of regions for fitting) that impair their application in multicomponent systems (N > 2), large-scale studies (e.g., interactomes), and other time-critical scenarios. We present here a simple similarity measure for structural clustering based on atomic contacts--the fraction of common contacts--and compare it with the most used similarity measure of the protein docking community--interface backbone RMSD. We show that this method produces very compact clusters in remarkably short time when applied to a collection of binary and multicomponent protein-protein and protein-DNA complexes. Furthermore, it allows easy clustering of similar conformations of multicomponent symmetrical assemblies in which chain permutations can occur. Simple contact-based metrics should be applicable to other structural biology clustering problems, in particular for time-critical or large-scale endeavors. Copyright © 2012 Wiley Periodicals, Inc.
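
    As a compact sketch of the contact-based idea (our own minimal reading with an illustrative cutoff and toy data, not necessarily the authors' exact definition), the snippet below derives residue-residue contact sets from coordinates with a distance cutoff and scores two conformations by the fraction of contacts they share:

        import numpy as np

        def contact_set(coords, cutoff=5.0):
            """Residue pairs (i < j) whose representative atoms lie within the cutoff."""
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            i, j = np.where(np.triu(d < cutoff, k=1))
            return set(zip(i.tolist(), j.tolist()))

        def fraction_common_contacts(coords_a, coords_b, cutoff=5.0):
            a, b = contact_set(coords_a, cutoff), contact_set(coords_b, cutoff)
            return len(a & b) / max(len(a), 1)   # fraction of A's contacts also present in B

        rng = np.random.default_rng(3)
        ref = rng.normal(scale=10.0, size=(50, 3))              # toy "structure", one point per residue
        perturbed = ref + rng.normal(scale=0.5, size=ref.shape)
        print(fraction_common_contacts(ref, perturbed))         # close to 1 for similar conformations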

  8. Automatic variance analysis of multistage care pathways.

    PubMed

    Li, Xiang; Liu, Haifeng; Zhang, Shilei; Mei, Jing; Xie, Guotong; Yu, Yiqin; Li, Jing; Lakshmanan, Geetika T

    2014-01-01

    A care pathway (CP) is a standardized process that consists of multiple care stages, clinical activities and their relations, aimed at ensuring and enhancing the quality of care. However, actual care may deviate from the planned CP, and analysis of these deviations can help clinicians refine the CP and reduce medical errors. In this paper, we propose a CP variance analysis method to automatically identify the deviations between actual patient traces in electronic medical records (EMR) and a multistage CP. As the care stage information is usually unavailable in EMR, we first align every trace with the CP using a hidden Markov model. From the aligned traces, we report three types of deviations for every care stage: additional activities, absent activities and violated constraints, which are identified by using the techniques of temporal logic and binomial tests. The method has been applied to a CP for the management of congestive heart failure and real world EMR, providing meaningful evidence for the further improvement of care quality.

  9. Robust optimization of the billet for isothermal local loading transitional region of a Ti-alloy rib-web component based on dual-response surface method

    NASA Astrophysics Data System (ADS)

    Wei, Ke; Fan, Xiaoguang; Zhan, Mei; Meng, Miao

    2018-03-01

    Billet optimization can greatly improve the forming quality of the transitional region in the isothermal local loading forming (ILLF) of large-scale Ti-alloy rib-web components. However, the final quality of the transitional region may be deteriorated by uncontrollable factors, such as the manufacturing tolerance of the preforming billet, fluctuation of the stroke length, and the friction factor. Thus, a dual-response surface method (RSM)-based robust optimization of the billet was proposed to address the uncontrollable factors in the transitional region of the ILLF. Given that die underfilling and the folding defect are two key factors that influence the forming quality of the transitional region, minimizing the mean and standard deviation of the die underfilling rate and avoiding the folding defect were defined as the objective function and constraint condition in the robust optimization. Then, a cross array design was constructed, and a dual-RSM model was established for the mean and standard deviation of the die underfilling rate by considering the size parameters of the billet and the uncontrollable factors. Subsequently, an optimum solution was derived to achieve the robust optimization of the billet. A case study on robust optimization was conducted. Good results were attained for improving the die filling and avoiding the folding defect, suggesting that the robust optimization of the billet in the transitional region of the ILLF is efficient and reliable.

  10. Estimation of Radiative Efficiency of Chemicals with Potentially Significant Global Warming Potential

    EPA Pesticide Factsheets

    The set of chemical substances in commerce that may have significant global warming potential (GWP) is not well defined. Although there are currently over 200 chemicals with high GWP reported by the Intergovernmental Panel on Climate Change, World Meteorological Organization, or Environmental Protection Agency, there may be hundreds of additional chemicals that also have significant GWP. Evaluation of various approaches to estimate radiative efficiency (RE) and atmospheric lifetime will help to refine GWP estimates for compounds where no measured IR spectrum is available. This study compares values of RE calculated using computational chemistry techniques for 235 chemical compounds against the best available values. It is important to assess the reliability of the underlying computational methods for computing RE to understand the sources of deviations from the best available values. Computed vibrational frequency data are used to estimate RE values using several Pinnock-type models. The values derived using these models are found to be in reasonable agreement with reported RE values (though significant improvement is obtained through scaling). The effect of varying the computational method and basis set used to calculate the frequency data is also discussed. It is found that the vibrational intensities have a strong dependence on basis set and are largely responsible for differences in computed values of RE in this study. Deviations of

  11. Control of friction at the nanoscale

    DOEpatents

    Barhen, Jacob; Braiman, Yehuda Y.; Protopopescu, Vladimir

    2010-04-06

    Methods and apparatus are described for control of friction at the nanoscale. A method of controlling frictional dynamics of a plurality of particles using non-Lipschitzian control includes determining an attribute of the plurality of particles; calculating an attribute deviation by subtracting the attribute of the plurality of particles from a target attribute; calculating a non-Lipschitzian feedback control term by raising the attribute deviation to a fractionary power ξ = (2m+1)/(2n+1), where n = 1, 2, 3, … and m = 0, 1, 2, 3, …, with m strictly less than n, and then multiplying by a control amplitude; and imposing the non-Lipschitzian feedback control term globally on each of the plurality of particles; the imposing causes a subsequent magnitude of the attribute deviation to be reduced.
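
    A minimal sketch of the control law as described (illustrative toy dynamics and gains, not the patented apparatus): because ξ = (2m+1)/(2n+1) is a ratio of odd integers, the power preserves the sign of the deviation, which the sign-magnitude form below implements for floating-point arithmetic.

        import numpy as np

        def non_lipschitz_term(attribute, target, amplitude=0.5, m=0, n=1):
            """Feedback term: amplitude * (target - attribute)**xi with xi = (2m+1)/(2n+1)."""
            assert m < n
            xi = (2 * m + 1) / (2 * n + 1)                     # e.g. m=0, n=1 gives xi = 1/3
            d = target - attribute                             # attribute deviation
            return amplitude * np.sign(d) * np.abs(d) ** xi    # odd-rational power keeps sign(d)

        # Toy global control: drive the mean velocity of a particle ensemble to a target.
        rng = np.random.default_rng(4)
        v = rng.normal(size=100)                 # toy attribute: particle velocities
        for _ in range(200):
            u = non_lipschitz_term(v.mean(), target=1.0)
            v += 0.1 * u                         # same term imposed globally on all particles
        print(v.mean())                          # approaches the target attribute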

  12. Identifying large scale structures at 1 AU using fluctuations and wavelets

    NASA Astrophysics Data System (ADS)

    Niembro, T.; Lara, A.

    2016-12-01

    The solar wind (SW) is inhomogeneous and is dominated by two types of flows: one quasi-stationary and one related to large scale transients (such as coronal mass ejections and co-rotating interaction regions). The SW inhomogeneities can be studied as fluctuations characterized by a wide range of length and time scales. We are interested in the characteristic fluctuations caused by large scale transient events. To do so, we define the vector space F with the normalized moving monthly/annual deviations as the orthogonal basis. Then, we compute in this space the norm of the fluctuations of the solar wind parameters (velocity, magnetic field, density and temperature), using WIND data from August 1992 to August 2015. This norm gives important information about the presence of a large structure disturbance in the solar wind, and by applying a wavelet transform to the norm, we are able to determine, without subjectivity, the duration of the compression regions of these large transient structures and, even more, to identify whether the structure corresponds to a single event or a complex (merged) one. With this method we have automatically detected most of the events identified and published by other authors.
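
    A toy sketch of the fluctuation norm described above (illustrative synthetic series and window length; the published normalization and the subsequent wavelet step are not reproduced here): each parameter is reduced to its normalized moving deviation, and the norm of the resulting vector flags large-scale transient structures.

        import numpy as np

        def normalized_moving_deviation(x, window):
            """(x - trailing moving mean) / trailing moving std."""
            out = np.full(len(x), np.nan)
            for k in range(window, len(x)):
                w = x[k - window:k]
                out[k] = (x[k] - w.mean()) / w.std()
            return out

        rng = np.random.default_rng(5)
        n_samples = 5000                                 # e.g. hourly samples
        series = {name: rng.normal(size=n_samples) for name in ["V", "B", "n", "T"]}
        window = 24 * 27                                 # roughly one solar rotation, a "monthly" window

        devs = np.array([normalized_moving_deviation(s, window) for s in series.values()])
        norm = np.sqrt(np.nansum(devs**2, axis=0))       # norm of the fluctuation vector in F
        print(np.nanmax(norm))                           # peaks would flag transient structures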

  13. Multi-focus image fusion based on area-based standard deviation in dual tree contourlet transform domain

    NASA Astrophysics Data System (ADS)

    Dong, Min; Dong, Chenghui; Guo, Miao; Wang, Zhe; Mu, Xiaomin

    2018-04-01

    Multiresolution-based methods, such as the wavelet and Contourlet transforms, are usually used for image fusion. This work presents a new image fusion framework utilizing the area-based standard deviation in the dual tree Contourlet transform domain. Firstly, the pre-registered source images are decomposed with the dual tree Contourlet transform, yielding low-pass and high-pass coefficients. Then, the low-pass bands are fused by weighted averaging based on the area standard deviation rather than the simple "averaging" rule, while the high-pass bands are merged with the "max-absolute" fusion rule. Finally, the modified low-pass and high-pass coefficients are used to reconstruct the final fused image. The major advantage of the proposed fusion method over conventional fusion is the approximate shift invariance and multidirectional selectivity of the dual tree Contourlet transform. The proposed method is compared with wavelet- and Contourlet-based methods and other state-of-the-art methods on commonly used multi-focus images. Experiments demonstrate that the proposed fusion framework is feasible and effective, and that it performs better in both subjective and objective evaluation.
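
    A minimal sketch of the two fusion rules as described (window size and names illustrative; the dual tree Contourlet decomposition itself is omitted): the low-pass bands are blended with weights from the area-based (local) standard deviation, and the high-pass bands follow the max-absolute rule.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def local_std(band, size=7):
            """Area-based standard deviation over a size x size neighborhood."""
            mean = uniform_filter(band, size)
            mean_sq = uniform_filter(band**2, size)
            return np.sqrt(np.maximum(mean_sq - mean**2, 0.0))

        def fuse_lowpass(low_a, low_b, size=7, eps=1e-12):
            """Weighted average of two low-pass bands; weights from local activity."""
            sa, sb = local_std(low_a, size), local_std(low_b, size)
            wa = sa / (sa + sb + eps)
            return wa * low_a + (1.0 - wa) * low_b

        def fuse_highpass(high_a, high_b):
            """Max-absolute rule for the high-pass (detail) bands."""
            return np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)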

  14. A centrifugation-based physicochemical characterization method for the interaction between proteins and nanoparticles

    NASA Astrophysics Data System (ADS)

    Bekdemir, Ahmet; Stellacci, Francesco

    2016-10-01

    Nanomedicine requires in-depth knowledge of nanoparticle-protein interactions. These interactions are typically studied with methods limited to large or fluorescently labelled nanoparticles, as they rely on scattering or fluorescence-correlation signals. Here, we have developed a method based on analytical ultracentrifugation (AUC) as an absorbance-based, label-free tool to determine the dissociation constant (KD), stoichiometry (Nmax), and Hill coefficient (n) for the association of bovine serum albumin (BSA) with gold nanoparticles. Absorption at 520 nm in AUC renders the measurements insensitive to unbound and aggregated proteins. Measurements remain accurate and do not become more challenging for small (sub-10 nm) nanoparticles. In AUC, frictional ratio analysis allows for the qualitative assessment of the shape of the analyte. The data suggest that small-nanoparticle/protein complexes significantly deviate from a spherical shape even at maximum coverage. We believe that this method could become one of the established approaches for the characterization of the interaction of (small) nanoparticles with proteins.
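
    As a sketch of how KD, Nmax and n might be extracted from binding data of this kind (a generic Hill-model fit on toy numbers under our own assumptions, not the authors' AUC analysis pipeline):

        import numpy as np
        from scipy.optimize import curve_fit

        def hill(c, kd, nmax, n):
            """Proteins bound per nanoparticle vs free protein concentration c."""
            return nmax * c**n / (kd**n + c**n)

        # Toy data: concentrations and apparent number of proteins bound per particle.
        c = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
        rng = np.random.default_rng(6)
        bound = hill(c, kd=5.0, nmax=20.0, n=1.3) + rng.normal(0, 0.4, c.size)

        (kd, nmax, n), _ = curve_fit(hill, c, bound, p0=[1.0, 10.0, 1.0])
        print(f"KD={kd:.2f}, Nmax={nmax:.1f}, Hill n={n:.2f}")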

  15. Optimization methods of pulse-to-pulse alignment using femtosecond pulse laser based on temporal coherence function for practical distance measurement

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Yang, Linghui; Guo, Yin; Lin, Jiarui; Cui, Pengfei; Zhu, Jigui

    2018-02-01

    An interferometer technique based on the temporal coherence function of femtosecond pulses is demonstrated for practical distance measurement. Here, the pulse-to-pulse alignment is analyzed for large-delay distance measurement. Firstly, a temporal coherence function model between two femtosecond pulses is developed in the time domain for the dispersive unbalanced Michelson interferometer. Then, according to this model, the fringe analysis and the envelope extraction process are discussed. Meanwhile, optimization methods of pulse-to-pulse alignment for practical long distance measurement are presented. The order of the curve fitting and the selection of points for envelope extraction are analyzed. Furthermore, an averaging method based on the symmetry of the coherence function is demonstrated. Finally, the performance of the proposed methods is evaluated in the absolute distance measurement of 20 μm with a path length difference of 9 m. The improvement of the standard deviation in the experimental results shows that these approaches have potential for practical distance measurement.

  16. Sample size calculation in economic evaluations.

    PubMed

    Al, M J; van Hout, B A; Michel, B C; Rutten, F F

    1998-06-01

    A simulation method is presented for sample size calculation in economic evaluations. As input the method requires: the expected difference and variance of costs and effects, their correlation, the significance level (alpha) and power of the testing method, and the maximum acceptable ratio of incremental effectiveness to incremental costs. The method is illustrated with data from two trials. The first compares primary coronary angioplasty with streptokinase in the treatment of acute myocardial infarction; in the second trial, lansoprazole is compared with omeprazole in the treatment of reflux oesophagitis. These case studies show how the various parameters influence the sample size. Given the large number of parameters that have to be specified in advance, the lack of knowledge about costs and their standard deviation, and the difficulty of specifying the maximum acceptable ratio of incremental effectiveness to incremental costs, the conclusion of the study is that from a technical point of view it is possible to perform a sample size calculation for an economic evaluation, but one should wonder how useful it is.
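
    In the spirit of the method described (our own illustrative formulation via net monetary benefit at a willingness-to-pay threshold; the paper's exact decision rule and inputs may differ), a minimal simulation finds the per-arm sample size at which the desired power is reached:

        import numpy as np
        from scipy import stats

        def power_at_n(n, d_eff=0.1, d_cost=500.0, sd_eff=0.5, sd_cost=2000.0,
                       rho=0.3, wtp=10_000.0, alpha=0.05, n_sim=2000, seed=7):
            """Estimated power to show positive net monetary benefit with n per arm."""
            rng = np.random.default_rng(seed)
            cov = np.array([[sd_eff**2, rho * sd_eff * sd_cost],
                            [rho * sd_eff * sd_cost, sd_cost**2]]) * (2.0 / n)
            draws = rng.multivariate_normal([d_eff, d_cost], cov, size=n_sim)
            nmb = wtp * draws[:, 0] - draws[:, 1]      # net monetary benefit estimates
            se = np.sqrt(wtp**2 * cov[0, 0] - 2 * wtp * cov[0, 1] + cov[1, 1])
            return np.mean(nmb / se > stats.norm.ppf(1 - alpha))   # one-sided test

        for n in [100, 200, 400, 800]:
            print(n, power_at_n(n))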

  17. Improvement of Accuracy in Environmental Dosimetry by TLD Cards Using Three-dimensional Calibration Method

    PubMed Central

    HosseiniAliabadi, S. J.; Hosseini Pooya, S. M.; Afarideh, H.; Mianji, F.

    2015-01-01

    Introduction: The angular dependence of TLD card response may cause the results of environmental dosimetry to deviate from their true values, since TLDs may be exposed to radiation from the surrounding area at different angles of incidence. Objective: A 3D setting of TLD cards has been calibrated isotropically in a standard radiation field to evaluate the improvement in measurement accuracy for environmental dosimetry. Method: Three personal TLD cards were rectangularly placed in a cylindrical holder and calibrated using 1D and 3D calibration methods. Then, the dosimeter was used simultaneously with a reference instrument in a real radiation field, measuring the accumulated dose within a time interval. Result: The results show that the accuracy of measurement was improved by 6.5% using the 3D calibration factor in comparison with the normal 1D calibration method. Conclusion: This system can be utilized in large-scale environmental monitoring with a higher accuracy. PMID:26157729

  18. Blind decomposition of Herschel-HIFI spectral maps of the NGC 7023 nebula

    NASA Astrophysics Data System (ADS)

    Berné, O.; Joblin, C.; Deville, Y.; Pilleri, P.; Pety, J.; Teyssier, D.; Gerin, M.; Fuente, A.

    2012-12-01

    Large spatial-spectral surveys are more and more common in astronomy. This calls for new methods to analyze such mega- to giga-pixel data cubes. In this paper we present a method to decompose such observations into a limited and comprehensive set of components; the original data can then be interpreted in terms of linear combinations of these components. The method uses non-negative matrix factorization (NMF) to extract latent spectral end-members in the data. The number of needed end-members is estimated based on the level of noise in the data. A Monte-Carlo scheme is adopted to estimate the optimal end-members and their standard deviations. Finally, the maps of linear coefficients are reconstructed using non-negative least squares. We apply this method to a set of hyperspectral data of the NGC 7023 nebula, obtained recently with the HIFI instrument onboard the Herschel space observatory, and provide a first interpretation of the results in terms of the 3-dimensional dynamical structure of the region.
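
    The pipeline described above maps naturally onto standard tools; below is a compact illustration on synthetic data (hypothetical shapes; the Monte-Carlo estimation of end-member uncertainties is omitted) using scikit-learn's NMF for the end-members and non-negative least squares for the coefficient maps:

        import numpy as np
        from sklearn.decomposition import NMF
        from scipy.optimize import nnls

        rng = np.random.default_rng(8)
        n_pix, n_chan, r = 400, 64, 3                    # pixels, spectral channels, end-members
        true_H = np.abs(rng.normal(size=(r, n_chan)))    # latent spectra
        true_W = np.abs(rng.normal(size=(n_pix, r)))     # latent abundances
        cube = true_W @ true_H + 0.01 * rng.random((n_pix, n_chan))   # noisy data cube

        model = NMF(n_components=r, init="nndsvda", max_iter=1000)
        W = model.fit_transform(cube)                    # linear coefficients per pixel
        H = model.components_                            # spectral end-members

        # Re-derive the coefficient maps against fixed end-members with NNLS, as in the text.
        coeffs = np.vstack([nnls(H.T, spectrum)[0] for spectrum in cube])
        print(coeffs.shape)                              # (n_pix, r); reshape to map dimensions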

  19. Voltage collapse in complex power grids

    PubMed Central

    Simpson-Porco, John W.; Dörfler, Florian; Bullo, Francesco

    2016-01-01

    A large-scale power grid's ability to transfer energy from producers to consumers is constrained by both the network structure and the nonlinear physics of power flow. Violations of these constraints have been observed to result in voltage collapse blackouts, where nodal voltages slowly decline before precipitously falling. However, methods to test for voltage collapse are predominantly simulation-based, offering little theoretical insight into how grid structure influences stability margins. For a simplified power flow model, here we derive a closed-form condition under which a power network is safe from voltage collapse. The condition combines the complex structure of the network with the reactive power demands of loads to produce a node-by-node measure of grid stress, a prediction of the largest nodal voltage deviation, and an estimate of the distance to collapse. We extensively test our predictions on large-scale systems, highlighting how our condition can be leveraged to increase grid stability margins. PMID:26887284

  20. Analysis of polycyclic aromatic hydrocarbons in water and beverages using membrane-assisted solvent extraction in combination with large volume injection-gas chromatography-mass spectrometric detection.

    PubMed

    Rodil, Rosario; Schellin, Manuela; Popp, Peter

    2007-09-07

    Membrane-assisted solvent extraction (MASE) in combination with large volume injection-gas chromatography-mass spectrometry (LVI-GC-MS) was applied for the determination of 16 polycyclic aromatic hydrocarbons (PAHs) in aqueous samples. The MASE conditions were optimized to achieve high enrichment of the analytes from aqueous samples, in terms of extraction conditions (shaking speed, extraction temperature and time), extraction solvent, and sample composition (ionic strength, sample pH and presence of organic solvent). Parameters such as the linearity and reproducibility of the procedure were determined. The extraction efficiency was above 65% for all the analytes, and the relative standard deviation (RSD) for five consecutive extractions ranged from 6 to 18%. Under optimized conditions, detection limits at the ng/L level were achieved. The effectiveness of the method was tested by analyzing real samples, such as river water, apple juice, red wine and milk.

  1. Inclusive Search for Boosted Higgs Bosons Using H → bb̄ Decays with the CMS Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vernieri, Caterina

    We present the first search for the standard model Higgs boson (H) produced with large transverse momentum (pT) via gluon fusion and decaying to a bottom quark-antiquark pair (bb̄). The search is performed using a data set of pp collisions at √s = 13 TeV collected with the CMS experiment at the LHC, corresponding to an integrated luminosity of 35.9 fb⁻¹. A highly Lorentz-boosted Higgs boson decaying to bb̄ is reconstructed as a single, large radius jet and is identified using jet substructure and dedicated b tagging techniques. The method is validated with the first observation of the Z → bb̄ process in the single-jet topology, with a local significance of 5.1 standard deviations (5.8 expected).

  2. Estimating extreme stream temperatures by the standard deviate method

    NASA Astrophysics Data System (ADS)

    Bogan, Travis; Othmer, Jonathan; Mohseni, Omid; Stefan, Heinz

    2006-02-01

    It is now widely accepted that global climate warming is taking place on the earth. Among many other effects, a rise in air temperatures is expected to increase stream temperatures. However, due to evaporative cooling, stream temperatures do not increase linearly with increasing air temperatures indefinitely. Within the anticipated bounds of climate warming, extreme stream temperatures may therefore not rise substantially. With this concept in mind, past extreme temperatures measured at 720 USGS stream gauging stations were analyzed by the standard deviate method. In this method the highest stream temperatures are expressed as the mean temperature of a measured partial maximum stream temperature series plus its standard deviation multiplied by a factor KE (the standard deviate). Various KE values were explored; values of KE larger than 8 were found physically unreasonable. It is concluded that the value of KE should be in the range from 7 to 8. A unit error in estimating KE translates into a typical stream temperature error of about 0.5 °C. Using a logistic model for the stream temperature/air temperature relationship, a one degree error in air temperature gives a typical error of 0.16 °C in stream temperature. With a projected error in the enveloping standard deviate dKE = 1.0 (range 0.5-1.5) and an error in projected high air temperature dTa = 2 °C (range 0-4 °C), the total projected stream temperature error is estimated as dTs = 0.8 °C.
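
    The estimator described above is simply T_extreme = mean + KE * sd, with the mean and standard deviation taken from the partial maximum temperature series; a minimal sketch on toy numbers, with KE in the 7-8 range the study recommends:

        import numpy as np

        rng = np.random.default_rng(9)
        # Toy partial series of maximum stream temperatures (deg C), sd near 0.5 as in the abstract.
        partial_maxima = rng.normal(loc=24.0, scale=0.5, size=30)

        mean, sd = partial_maxima.mean(), partial_maxima.std(ddof=1)
        for K_E in (7.0, 8.0):
            print(f"K_E={K_E}: extreme estimate = {mean + K_E * sd:.1f} C")
        # A unit change in K_E shifts the estimate by sd, i.e. roughly 0.5 deg C here.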

  3. Inclusive Search for a Highly Boosted Higgs Boson Decaying to a Bottom Quark-Antiquark Pair

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sirunyan, A. M.; Tumasyan, A.; Adam, W.

    An inclusive search for the standard model Higgs boson (H) produced with large transverse momentum (pT) and decaying to a bottom quark-antiquark pair (bb̄) is performed using a data set of pp collisions at √s = 13 TeV collected with the CMS experiment at the LHC. The data sample corresponds to an integrated luminosity of 35.9 fb⁻¹. A highly Lorentz-boosted Higgs boson decaying to bb̄ is reconstructed as a single, large radius jet and is identified using jet substructure and dedicated b tagging techniques. The method is validated with Z → bb̄ decays. The Z → bb̄ process is observed for the first time in the single-jet topology with a local significance of 5.1 standard deviations (5.8 expected). For a Higgs boson mass of 125 GeV, an excess of events above the expected background is observed with a local significance of 1.5 standard deviations (0.7 expected). The measured cross section times branching fraction for production via gluon fusion of H → bb̄ with pT > 450 GeV and in the pseudorapidity range −2.5 < η < 2.5 is 74 ± 48 (stat) +17/−10 (syst) fb, which is consistent within uncertainties with the standard model prediction.

  4. Comparison of lysimeter based and calculated ASCE reference evapotranspiration in a subhumid climate

    NASA Astrophysics Data System (ADS)

    Nolz, Reinhard; Cepuder, Peter; Eitzinger, Josef

    2016-04-01

    The standardized form of the well-known FAO Penman-Monteith equation, published by the Environmental and Water Resources Institute of the American Society of Civil Engineers (ASCE-EWRI), is recommended as a standard procedure for calculating reference evapotranspiration (ET_ref) and subsequently plant water requirements. Applied and validated under different climatic conditions, it has generally achieved good results compared to other methods. However, several studies have documented deviations between measured and calculated reference evapotranspiration depending on environmental and weather conditions. Therefore, it seems generally advisable to evaluate the model under local environmental conditions. In this study, reference evapotranspiration was determined at a subhumid site in northeastern Austria from 2005 to 2010 using a large weighing lysimeter (ET_lys). The measured data were compared with ET_ref calculations. Daily values differed slightly over the year: ET_ref was generally overestimated at small values, whereas it was rather underestimated when ET was large, which is also supported by other studies. In our case, advection of sensible heat proved to have an impact, but it could not explain the differences exclusively. Obviously, there were also other influences, such as seasonally varying surface resistance or albedo. Generally, the ASCE-EWRI equation for daily time steps performed best under average weather conditions. The outcomes should help to correctly interpret ET_ref data in the region and in similar environments and improve knowledge of the dynamics of the influencing factors causing deviations.

  5. Inclusive Search for a Highly Boosted Higgs Boson Decaying to a Bottom Quark-Antiquark Pair

    DOE PAGES

    Sirunyan, A. M.; Tumasyan, A.; Adam, W.; ...

    2018-02-14

    An inclusive search for the standard model Higgs boson (H) produced with large transverse momentum (pT) and decaying to a bottom quark-antiquark pair (bb̄) is performed using a data set of pp collisions at √s = 13 TeV collected with the CMS experiment at the LHC. The data sample corresponds to an integrated luminosity of 35.9 fb⁻¹. A highly Lorentz-boosted Higgs boson decaying to bb̄ is reconstructed as a single, large radius jet and is identified using jet substructure and dedicated b tagging techniques. The method is validated with Z → bb̄ decays. The Z → bb̄ process is observed for the first time in the single-jet topology with a local significance of 5.1 standard deviations (5.8 expected). For a Higgs boson mass of 125 GeV, an excess of events above the expected background is observed with a local significance of 1.5 standard deviations (0.7 expected). The measured cross section times branching fraction for production via gluon fusion of H → bb̄ with pT > 450 GeV and in the pseudorapidity range −2.5 < η < 2.5 is 74 ± 48 (stat) +17/−10 (syst) fb, which is consistent within uncertainties with the standard model prediction.

  6. A model of curved saccade trajectories: spike rate adaptation in the brainstem as the cause of deviation away.

    PubMed

    Kruijne, Wouter; Van der Stigchel, Stefan; Meeter, Martijn

    2014-03-01

    The trajectory of saccades to a target is often affected whenever there is a distractor in the visual field. Distractors can cause a saccade to deviate towards their location or away from it. The oculomotor mechanisms that produce deviation towards distractors have been thoroughly explored in behavioral, neurophysiological and computational studies. The mechanisms underlying deviation away, on the other hand, remain unclear. Behavioral findings suggest a mechanism of spatially focused, top-down inhibition in a saccade map, and deviation away has become a tool to investigate such inhibition. However, this inhibition hypothesis has little neuroanatomical or neurophysiological support, and recent findings go against it. Here, we propose that deviation away results from an unbalanced saccade drive from the brainstem, caused by spike rate adaptation in brainstem long-lead burst neurons. Adaptation to stimulation in the direction of the distractor results in an unbalanced drive away from it. An existing model of the saccade system was extended with this theory. The resulting model simulates a wide range of findings on saccade trajectories, including findings that have classically been interpreted to support inhibition views. Furthermore, the model replicated the effect of saccade latency on deviation away, but predicted this effect would be absent with large (400 ms) distractor-target onset asynchrony. This prediction was confirmed in an experiment, which demonstrates that the theory both explains classical findings on saccade trajectories and predicts new findings.

  7. Non-specific filtering of beta-distributed data.

    PubMed

    Wang, Xinhui; Laird, Peter W; Hinoue, Toshinori; Groshen, Susan; Siegmund, Kimberly D

    2014-06-19

    Non-specific feature selection is a dimension reduction procedure performed prior to cluster analysis of high dimensional molecular data. Not all measured features are expected to show biological variation, so only the most varying are selected for analysis. In DNA methylation studies, DNA methylation is measured as a proportion, bounded between 0 and 1, with variance a function of the mean. Filtering on standard deviation biases the selection of probes to those with mean values near 0.5. We explore the effect this has on clustering, and develop alternate filter methods that utilize a variance-stabilizing transformation for Beta-distributed data and do not share this bias. We compared results for 11 different non-specific filters on eight Infinium HumanMethylation data sets, selected to span a variety of biological conditions. We found that for data sets having a small fraction of samples showing abnormal methylation of a subset of normally unmethylated CpGs, a characteristic of the CpG island methylator phenotype in cancer, a novel filter statistic that utilized a variance-stabilizing transformation for Beta-distributed data outperformed the common filter of using the standard deviation of the DNA methylation proportion, or its log-transformed M-value, in its ability to detect the cancer subtype in a cluster analysis. However, the standard deviation filter always performed among the best for distinguishing subgroups of normal tissue. The novel filter and the standard deviation filter tended to favour features in different genome contexts; for the same data set, the novel filter always selected more features from CpG island promoters and the standard deviation filter always selected more features from non-CpG island intergenic regions. Interestingly, despite selecting largely non-overlapping sets of features, the two filters did find sample subsets that overlapped for some real data sets. We found two different filter statistics that tended to prioritize features with different characteristics; each performed well for identifying clusters of cancer and non-cancer tissue and for identifying a cancer CpG island hypermethylation phenotype. Since cluster analysis is for discovery, we would suggest trying both filters on any new data sets, evaluating the overlap of features selected and clusters discovered.
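
    As a minimal sketch of the filtering idea described above (assuming methylation beta values in a samples-by-probes matrix; the arcsine square-root transform is a generic variance-stabilizing transformation for proportions, and the paper's exact filter statistic may differ):

    ```python
    import numpy as np

    def top_variable_probes(beta, k=1000):
        """Rank probes by standard deviation after a variance-stabilizing
        transform for proportions, then keep the k most variable.

        beta : (n_samples, n_probes) array of methylation proportions in [0, 1].
        """
        vst = np.arcsin(np.sqrt(beta))      # variance-stabilizing transform
        sd = vst.std(axis=0, ddof=1)        # per-probe variability
        return np.argsort(sd)[::-1][:k]     # indices of the k most variable probes

    # Example: 20 samples x 5000 probes of synthetic beta values
    rng = np.random.default_rng(0)
    beta = rng.beta(2.0, 5.0, size=(20, 5000))
    keep = top_variable_probes(beta, k=100)
    print(beta[:, keep].shape)              # (20, 100)
    ```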

  8. Improving IQ measurement in intellectual disabilities using true deviation from population norms.

    PubMed

    Sansone, Stephanie M; Schneider, Andrea; Bickel, Erika; Berry-Kravis, Elizabeth; Prescott, Christina; Hessl, David

    2014-01-01

    Intellectual disability (ID) is characterized by global cognitive deficits, yet the very IQ tests used to assess ID have limited range and precision in this population, especially for more impaired individuals. We describe the development and validation of a method of raw z-score transformation (based on general population norms) that ameliorates floor effects and improves the precision of IQ measurement in ID, using the Stanford Binet 5 (SB5) in fragile X syndrome (FXS; n = 106), the leading inherited cause of ID, and in individuals with idiopathic autism spectrum disorder (ASD; n = 205). We compared the distributional characteristics and Q-Q plots from the standardized scores with the deviation z-scores. Additionally, we examined the relationship between both scoring methods and multiple criterion measures. We found evidence that substantial and meaningful variation in cognitive ability on standardized IQ tests among individuals with ID is lost when converting raw scores to standardized scaled, index and IQ scores. Use of the deviation z-score method rectifies this problem, and accounts for significant additional variance in criterion validation measures, above and beyond the usual IQ scores. Additionally, individual and group-level cognitive strengths and weaknesses are recovered using deviation scores. Traditional methods for generating IQ scores in lower functioning individuals with ID are inaccurate and inadequate, leading to erroneously flat profiles. However, assessment of cognitive abilities is substantially improved by measuring true deviation in performance from standardization sample norms. This work has important implications for standardized test development, clinical assessment, and research for which IQ is an important measure of interest in individuals with neurodevelopmental disorders and other forms of cognitive impairment.
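
    The deviation z-score itself is a one-line computation; the sketch below uses hypothetical normative values, since the actual SB5 normative tables are not given in the abstract:

    ```python
    def deviation_z(raw_score, norm_mean, norm_sd):
        """Deviation z-score: distance of a raw score from the general-population
        normative mean, in normative standard deviations. Unlike floor-limited
        standard scores, it stays informative far below the norm."""
        return (raw_score - norm_mean) / norm_sd

    # Hypothetical normative values for one subtest at a given age band
    print(deviation_z(raw_score=12, norm_mean=30.0, norm_sd=6.0))   # -3.0
    ```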

  9. Matching OPC and masks on 300-mm lithography tools utilizing variable illumination settings

    NASA Astrophysics Data System (ADS)

    Palitzsch, Katrin; Kubis, Michael; Schroeder, Uwe P.; Schumacher, Karl; Frangen, Andreas

    2004-05-01

    CD control is crucial to maximize product yields on 300 mm wafers. This is particularly true for DRAM front-end lithography layers, like the gate level and the deep trench (capacitor) level. In the DRAM process, large areas of the chip are taken up by array structures, which are difficult to pattern due to aggressive pitch requirements. Consequently, the lithography process is centered such that the array structures are printed on target. Optical proximity correction is applied to print gate level structures in the periphery circuitry on target. Even slight differences in the Zernike terms can cause rather large variations in the proximity curves, resulting in differences in how isolated and semi-isolated lines print on different tools. If the deviations are too large, tool-specific OPC is needed. The same is true for the deep trench level, where the length-to-width ratio of elongated contact-like structures is an important parameter for adjusting the electrical properties of the chip. Again, masks with specific biases for tools with different Zernikes are needed to optimize product yield. Additionally, mask making contributes to the CD variation of the process. Theoretically, the CD deviation caused by an off-centered mask process can easily consume the majority of the CD budget of a lithography process. In practice, masks are very often distributed intelligently among production tools, such that lens and mask effects cancel each other. However, dose adjustment and mask allocation alone may still leave a high CD variation with large systematic contributions. By adjusting the illumination settings, we have successfully implemented a method to reduce CD variation on our advanced processes. In particular, the inner and outer sigma for annular illumination, and the numerical aperture, can be optimized to match mask and stepper properties. This process is shown to overcome slight lens and mask differences effectively. The effects on lithography process windows have to be considered, nonetheless.

  10. Adsorption structures and energetics of molecules on metal surfaces: Bridging experiment and theory

    NASA Astrophysics Data System (ADS)

    Maurer, Reinhard J.; Ruiz, Victor G.; Camarillo-Cisneros, Javier; Liu, Wei; Ferri, Nicola; Reuter, Karsten; Tkatchenko, Alexandre

    2016-05-01

    Adsorption geometry and stability of organic molecules on surfaces are key parameters that determine the observable properties and functions of hybrid inorganic/organic systems (HIOSs). Despite many recent advances in precise experimental characterization and improvements in first-principles electronic structure methods, reliable databases of structures and energetics for large adsorbed molecules are largely missing. In this review, we present such a database for a range of molecules adsorbed on metal single-crystal surfaces. The systems we analyze include noble-gas atoms, conjugated aromatic molecules, carbon nanostructures, and heteroaromatic compounds adsorbed on five different metal surfaces. The overall objective is to establish a diverse benchmark dataset that enables an assessment of current and future electronic structure methods, and motivates further experimental studies that provide ever more reliable data. Specifically, the benchmark structures and energetics from experiment are here compared with the recently developed van der Waals (vdW) inclusive density-functional theory (DFT) method, DFT + vdWsurf. Compared with 23 adsorption heights and 17 adsorption energies from experiment, we find mean absolute deviations of 0.06 Å and 0.16 eV, respectively. This confirms the DFT + vdWsurf method as an accurate and efficient approach to treat HIOSs. A detailed discussion identifies remaining challenges to be addressed in future development of electronic structure methods, for which the benchmark database presented here may serve as an important reference.

  11. Sample Selection for Training Cascade Detectors.

    PubMed

    Vállez, Noelia; Deniz, Oscar; Bueno, Gloria

    2015-01-01

    Automatic detection systems usually require large and representative training datasets in order to obtain good detection and false positive rates. In practice, the positive set often has few samples, while the negative set must represent anything except the object of interest; consequently, the negative set typically contains orders of magnitude more images than the positive set. However, imbalanced training databases lead to biased classifiers. In this paper, we focus our attention on a negative sample selection method to properly balance the training data for cascade detectors. The method is based on selecting the most informative false positive samples generated in one stage to feed the next stage. The results show that the proposed cascade detector with sample selection obtains, on average, a better partial AUC and a smaller standard deviation than the other cascade detectors compared.
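
    A minimal sketch of the selection step, assuming the current stage emits a decision score per candidate window (the thresholds and data handling in the paper differ in detail):

    ```python
    import numpy as np

    def select_informative_negatives(scores, labels, n_keep):
        """From candidate negatives, keep the false positives the current cascade
        stage is most confident about (highest scores): these are the most
        informative samples on which to train the next stage.

        scores : stage decision scores for candidate windows
        labels : 0 for true negatives; windows scored above 0 are false positives
        """
        fp = np.where((labels == 0) & (scores > 0.0))[0]   # false positives
        hardest = fp[np.argsort(scores[fp])[::-1]]         # most confident first
        return hardest[:n_keep]

    rng = np.random.default_rng(1)
    scores = rng.normal(size=1000)
    labels = np.zeros(1000, dtype=int)
    print(select_informative_negatives(scores, labels, n_keep=50).shape)
    ```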

  12. Resonance vibrations in intake and exhaust pipes of in-line engines III : the inlet process of a four-stroke-cycle engine

    NASA Technical Reports Server (NTRS)

    Lutz, O

    1940-01-01

    Using a previously developed method, the boundary processes of four-stroke-cycle engines are set up. The results deviate considerably from those obtained under the assumption that the velocity fluctuation is proportional to the cylinder piston motion. The deviation is smaller near the resonance frequencies. By the method developed, the effect of the resonance vibrations on the volumetric efficiency can be demonstrated.

  13. [Study of building quantitative analysis model for chlorophyll in winter wheat with reflective spectrum using MSC-ANN algorithm].

    PubMed

    Liang, Xue; Ji, Hai-yan; Wang, Peng-xin; Rao, Zhen-hong; Shen, Bing-hui

    2010-01-01

    The multiplicative scatter correction (MSC) preprocessing method was used to effectively reject noise produced in the original spectra by environmental physical factors. The principal components of the near-infrared spectra were then calculated by nonlinear iterative partial least squares (NIPALS) before building the back-propagation artificial neural network (BP-ANN) model, with the number of principal components determined by cross validation. The calculated principal components were used as the inputs of the artificial neural network model, which was used to relate the chlorophyll content of winter wheat to the reflective spectrum and thereby predict it. The correlation coefficient (r) of the calibration set was 0.9604, while the standard deviation (SD) and relative standard deviation (RSD) were 0.187 and 5.18%, respectively. The correlation coefficient (r) of the prediction set was 0.9600, and the SD and RSD were 0.145 and 4.21%, respectively. This means that the MSC-ANN algorithm can effectively reject noise produced in the original spectra by environmental physical factors and establish an accurate model for predicting the chlorophyll content of living leaves, to replace the classical method and meet the needs of fast analysis of agricultural products.

  14. Radar sea reflection for low-e targets

    NASA Astrophysics Data System (ADS)

    Chow, Winston C.; Groves, Gordon W.

    1998-09-01

    Our modeling of radar signal reflection from a wavy sea surface uses a realistic characterization of the large surface features and parameterizes the effect of the small roughness elements. Representation of the reflection coefficient at each point of the sea surface as a function of the specular deviation angle is, to our knowledge, a novel approach. The objective is to achieve enough simplification, while retaining enough fidelity, to obtain a practical multipath model. The specular deviation angle as used in this investigation is defined and explained. Being a function of the sea elevations, which are stochastic in nature, this quantity is also random and has a probability density function. This density function depends on the relative geometry of the antenna and target positions and, together with the beam-broadening effect of the small surface ripples, determines the reflectivity of the sea surface at each point. The probability density function of the specular deviation angle is derived, and its distribution as a function of position on the mean sea surface is described.

  15. Recursive utility in a Markov environment with stochastic growth

    PubMed Central

    Hansen, Lars Peter; Scheinkman, José A.

    2012-01-01

    Recursive utility models that feature investor concerns about the intertemporal composition of risk are used extensively in applied research in macroeconomics and asset pricing. These models represent preferences as the solution to a nonlinear forward-looking difference equation with a terminal condition. In this paper we study infinite-horizon specifications of this difference equation in the context of a Markov environment. We establish a connection between the solution to this equation and to an arguably simpler Perron–Frobenius eigenvalue equation of the type that occurs in the study of large deviations for Markov processes. By exploiting this connection, we establish existence and uniqueness results. Moreover, we explore a substantive link between large deviation bounds for tail events for stochastic consumption growth and preferences induced by recursive utility. PMID:22778428

  16. Shapes of strong shock fronts in an inhomogeneous solar wind

    NASA Technical Reports Server (NTRS)

    Heinemann, M. A.; Siscoe, G. L.

    1974-01-01

    The shapes expected for solar-flare-produced strong shock fronts in the solar wind have been calculated, large-scale variations in the ambient medium being taken into account. It has been shown that for reasonable ambient solar wind conditions the mean and the standard deviation of the east-west shock normal angle are in agreement with experimental observations including shocks of all strengths. The results further suggest that near a high-speed stream it is difficult to distinguish between corotating shocks and flare-associated shocks on the basis of the shock normal alone. Although the calculated shapes are outside the range of validity of the linear approximation, these results indicate that the variations in the ambient solar wind may account for large deviations of shock normals from the radial direction.

  17. Excitation laser energy dependence of surface-enhanced fluorescence showing plasmon-induced ultrafast electronic dynamics in dye molecules

    NASA Astrophysics Data System (ADS)

    Itoh, Tamitake; Yamamoto, Yuko S.; Tamaru, Hiroharu; Biju, Vasudevanpillai; Murase, Norio; Ozaki, Yukihiro

    2013-06-01

    We find unique properties accompanying surface-enhanced fluorescence (SEF) from dye molecules adsorbed on Ag nanoparticle aggregates, which generate surface-enhanced Raman scattering. The properties are observed in the excitation laser energy dependence of SEF after excluding plasmonic spectral modulation in SEF. The unique properties are large blue shifts of the fluorescence spectra, deviation of the anti-Stokes to Stokes SEF intensity ratios from those of normal fluorescence, super-broadening of the Stokes spectra, and a return to the original fluorescence under lower-energy excitation. We show that these properties are induced by electromagnetic enhancement of radiative decay rates exceeding the vibrational relaxation rates within an electronic excited state, which suggests that molecular electronic dynamics in strong plasmonic fields can deviate largely from those in free space.

  18. Large deviation analysis of a simple information engine

    NASA Astrophysics Data System (ADS)

    Maitland, Michael; Grosskinsky, Stefan; Harris, Rosemary J.

    2015-11-01

    Information thermodynamics provides a framework for studying the effect of feedback loops on entropy production. It has enabled the understanding of novel thermodynamic systems such as the information engine, which can be seen as a modern version of "Maxwell's Dæmon," whereby a feedback controller processes information gained by measurements in order to extract work. Here, we analyze a simple model of such an engine that uses feedback control based on measurements to obtain negative entropy production. We focus on the distribution and fluctuations of the information obtained by the feedback controller. Significantly, our model allows an analytic treatment for a two-state system with exact calculation of the large deviation rate function. These results suggest an approximate technique for larger systems, which is corroborated by simulation data.
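
    The exact calculation for a two-state system can be illustrated with the standard tilted-matrix recipe (a generic illustration, not the paper's feedback-controlled engine): the scaled cumulant generating function is the log of the Perron eigenvalue of the tilted transition matrix, and the rate function is its Legendre-Fenchel transform.

    ```python
    import numpy as np

    # Large deviation rate function for the fraction of time a two-state
    # Markov chain spends in state 1 (assumed transition matrix).
    P = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
    g = np.array([0.0, 1.0])        # observable: indicator of being in state 1

    def scgf(s):
        """Scaled cumulant generating function lambda(s): log of the Perron
        (largest-magnitude) eigenvalue of the tilted matrix P_ij * exp(s*g_j)."""
        tilted = P * np.exp(s * g)[None, :]
        return np.log(np.max(np.abs(np.linalg.eigvals(tilted))))

    def rate_function(a, s_grid=np.linspace(-20.0, 20.0, 2001)):
        """Legendre-Fenchel transform I(a) = sup_s [s*a - lambda(s)]."""
        return max(s * a - scgf(s) for s in s_grid)

    # I(a) vanishes at the stationary occupation of state 1 (here 1/3)
    for a in (0.2, 1.0 / 3.0, 0.5):
        print(f"I({a:.3f}) = {rate_function(a):.4f}")
    ```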

  19. Recursive utility in a Markov environment with stochastic growth.

    PubMed

    Hansen, Lars Peter; Scheinkman, José A

    2012-07-24

    Recursive utility models that feature investor concerns about the intertemporal composition of risk are used extensively in applied research in macroeconomics and asset pricing. These models represent preferences as the solution to a nonlinear forward-looking difference equation with a terminal condition. In this paper we study infinite-horizon specifications of this difference equation in the context of a Markov environment. We establish a connection between the solution to this equation and to an arguably simpler Perron-Frobenius eigenvalue equation of the type that occurs in the study of large deviations for Markov processes. By exploiting this connection, we establish existence and uniqueness results. Moreover, we explore a substantive link between large deviation bounds for tail events for stochastic consumption growth and preferences induced by recursive utility.

  20. Comparison of age estimation between 15-25 years using a modified form of Demirjian’s ten stage method and two teeth regression formula

    NASA Astrophysics Data System (ADS)

    Amiroh; Priaminiarti, M.; Syahraini, S. I.

    2017-08-01

    Age estimation of individuals, both dead and living, is important for victim identification and legal certainty. The Demirjian method uses the third molar for age estimation of individuals above 15 years of age. The aim was to compare age estimation between 15 and 25 years using two Demirjian methods. The development stages of the third molars in panoramic radiographs of 50 male and female subjects were assessed by two observers using Demirjian's ten stages and the two teeth regression formula. Reliability was calculated using Cohen's kappa coefficient, and the significance of the observations was obtained from Wilcoxon tests. Deviations of the age estimates were calculated for both methods. The deviation of age estimation with the two teeth regression formula was ±1.090 years; with the ten stages method, it was ±1.191 years. The deviation of age estimation using the two teeth regression formula was thus smaller than with the ten stages method. The age estimations using the two teeth regression formula or the ten stages method are significantly different until the age of 25, but they can be applied up to the age of 22.

  1. Comparison of estimators of standard deviation for hydrologic time series

    USGS Publications Warehouse

    Tasker, Gary D.; Gilroy, Edward J.

    1982-01-01

    Unbiasing factors as a function of serial correlation, ρ, and sample size, n, for the sample standard deviation of a lag-one autoregressive model were generated by random number simulation. Monte Carlo experiments were used to compare the performance of several alternative methods for estimating the standard deviation σ of a lag-one autoregressive model in terms of bias, root mean square error, probability of underestimation, and expected opportunity design loss. Three methods provided estimates of σ which were much less biased, but had greater mean square errors, than the usual estimate $s = \left(\frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2\right)^{1/2}$. The three methods may be briefly characterized as (1) a method using a maximum likelihood estimate of the unbiasing factor, (2) a method using an empirical Bayes estimate of the unbiasing factor, and (3) a robust nonparametric estimate of σ suggested by Quenouille. Because s tends to underestimate σ, its use as an estimate of a model parameter results in a tendency to underdesign. If underdesign losses are considered more serious than overdesign losses, then the choice of one of the less biased methods may be wise.
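
    A sketch of how such unbiasing factors can be generated by simulation (the paper's exact simulation design is not reproduced here):

    ```python
    import numpy as np

    def ar1_series(n, rho, rng):
        """Lag-one autoregressive series with unit marginal variance."""
        x = np.empty(n)
        x[0] = rng.normal()
        for i in range(1, n):
            x[i] = rho * x[i - 1] + rng.normal(scale=np.sqrt(1.0 - rho**2))
        return x

    def unbiasing_factor(n, rho, reps=10000, seed=0):
        """Monte Carlo estimate of the factor by which the usual sample standard
        deviation s must be multiplied to be unbiased for sigma (= 1 here)."""
        rng = np.random.default_rng(seed)
        s_mean = np.mean([ar1_series(n, rho, rng).std(ddof=1) for _ in range(reps)])
        return 1.0 / s_mean

    # Positive serial correlation makes s biased low, so the factor exceeds 1
    print(unbiasing_factor(n=20, rho=0.5))
    ```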

  2. Borehole deviation and correction factor data for selected wells in the eastern Snake River Plain aquifer at and near the Idaho National Laboratory, Idaho

    USGS Publications Warehouse

    Twining, Brian V.

    2016-11-29

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Energy, has maintained a water-level monitoring program at the Idaho National Laboratory (INL) since 1949. The purpose of the program is to systematically measure and report water-level data to assess the eastern Snake River Plain aquifer and long-term changes in groundwater recharge, discharge, movement, and storage. Water-level data are commonly used to generate potentiometric maps and to infer increases and (or) decreases in the regional groundwater system. Well deviation is one component of water-level data that is often overlooked; it results from the well construction and the well not being plumb. Depending on the measured slant angle, where well deviation generally increases linearly with increasing slant angle, well deviation can suggest artificial anomalies in the water table. To remove the effects of well deviation, the USGS INL Project Office applies a correction factor to water-level data when a well deviation survey indicates a change in the reference elevation of greater than or equal to 0.2 ft. Borehole well deviation survey data were considered for 177 wells completed within the eastern Snake River Plain aquifer, but not all wells had deviation survey data available. As of 2016, the USGS INL Project Office database includes: 57 wells with gyroscopic survey data; 100 wells with magnetic deviation survey data; 11 wells with erroneous gyroscopic data that were excluded; and 68 wells with no deviation survey data available. Of the 57 wells with gyroscopic deviation surveys, correction factors for 16 wells ranged from 0.20 to 6.07 ft and inclination angles (SANG) ranged from 1.6 to 16.0 degrees. Of the 100 wells with magnetic deviation surveys, correction factors for 21 wells ranged from 0.20 to 5.78 ft and SANG ranged from 1.0 to 13.8 degrees, not including the wells that did not meet the correction factor criterion of greater than or equal to 0.20 ft. Forty-seven wells had both gyroscopic and magnetic deviation survey data. Datasets for both survey types were compared for the same well to determine whether magnetic survey data were consistent with gyroscopic survey data. Of those 47 wells, 96 percent showed similar correction factor estimates (≤0.20 ft) for both magnetic and gyroscopic well deviation surveys. A linear comparison of correction factor estimates from both survey types for all 47 wells indicates good linear correlation, represented by an r-squared of 0.88. The correction factor difference between the gyroscopic and magnetic surveys for 45 of the 47 wells ranged from 0.00 to 0.18 ft, not including USGS 57 and USGS 125. Wells USGS 57 and USGS 125 show correction factor differences of 2.16 and 0.36 ft, respectively; however, review of the data files suggests erroneous SANG data for both magnetic deviation well surveys. The difference in magnetic and gyroscopic well deviation SANG measurements, for all wells, ranged from 0.0 to 0.9 degrees. These data indicate good agreement between SANG data measured using magnetic deviation survey methods and SANG data measured using gyroscopic deviation survey methods, even for surveys collected years apart.

  3. Quality requirements for veterinary hematology analyzers in small animals-a survey about veterinary experts' requirements and objective evaluation of analyzer performance based on a meta-analysis of method validation studies: bench top hematology analyzer.

    PubMed

    Cook, Andrea M; Moritz, Andreas; Freeman, Kathleen P; Bauer, Natali

    2016-09-01

    Scarce information exists about quality requirements and the objective evaluation of the performance of large veterinary bench top hematology analyzers. The study aimed to compare the observed total error (TEobs), derived from a meta-analysis of published method validation data, to the total allowable error (TEa) for veterinary hematology variables in small animals based on experts' opinions. Ideally, TEobs should be < TEa. An online survey was sent to veterinary experts in clinical pathology and small animal internal medicine, asking for the maximal allowable deviation from a given result for each variable. Percent TEa = (allowable median deviation/clinical threshold) × 100%. Second, TEobs for 3 laser-based bench top hematology analyzers (ADVIA 2120, Sysmex XT2000iV, and CellDyn 3500) was calculated based on method validation studies published between 2005 and 2013 (n = 4). Percent TEobs = 2 × CV (%) + bias (%). The CV was derived from published studies except for the ADVIA 2120 (internal data), and the bias was estimated from the regression equation. A total of 41 veterinary experts (19 diplomates, 8 residents, 10 postgraduate students, 4 anonymous specialists) responded. The proposed range of TEa was wide, but generally ≤20%. TEobs was < TEa for all variables and analyzers except for canine and feline HGB (high bias, low CV) and platelet counts (high bias, high CV). Overall, veterinary bench top analyzers fulfilled the experts' requirements, except for HGB, due to method-related bias, and platelet counts, due to known preanalytic/analytic issues.
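
    The two total-error formulas quoted above are straightforward to apply; the numbers below are hypothetical, chosen only to show the comparison:

    ```python
    def total_error_observed(cv_pct, bias_pct):
        """TEobs (%) = 2 * CV (%) + bias (%), as defined in the abstract."""
        return 2.0 * cv_pct + bias_pct

    def total_allowable_error(allowable_median_deviation, clinical_threshold):
        """TEa (%) = (allowable median deviation / clinical threshold) * 100%."""
        return 100.0 * allowable_median_deviation / clinical_threshold

    # Hypothetical HGB example: CV 1.5%, bias 12%, allowable deviation
    # 1 g/dL at a clinical threshold of 10 g/dL
    te_obs = total_error_observed(cv_pct=1.5, bias_pct=12.0)
    te_a = total_allowable_error(allowable_median_deviation=1.0,
                                 clinical_threshold=10.0)
    print(te_obs, te_a, te_obs < te_a)   # high bias pushes TEobs past TEa
    ```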

  4. An expanded calibration study of the explicitly correlated CCSD(T)-F12b method using large basis set standard CCSD(T) atomization energies.

    PubMed

    Feller, David; Peterson, Kirk A

    2013-08-28

    The effectiveness of the recently developed, explicitly correlated coupled cluster method CCSD(T)-F12b is examined in terms of its ability to reproduce atomization energies derived from complete basis set extrapolations of standard CCSD(T). Most of the standard method findings were obtained with aug-cc-pV7Z or aug-cc-pV8Z basis sets. For a few homonuclear diatomic molecules it was possible to push the basis set to the aug-cc-pV9Z level. F12b calculations were performed with the cc-pVnZ-F12 (n = D, T, Q) basis set sequence and were also extrapolated to the basis set limit using a Schwenke-style, parameterized formula. A systematic bias was observed in the F12b method with the (VTZ-F12/VQZ-F12) basis set combination. This bias resulted in the underestimation of reference values associated with small molecules (valence correlation energies < 0.5 E_h) and an even larger overestimation of atomization energies for bigger systems. Consequently, caution should be exercised in the use of F12b for high accuracy studies. Root mean square and mean absolute deviation error metrics for this basis set combination were comparable to complete basis set values obtained with standard CCSD(T) and the aug-cc-pVDZ through aug-cc-pVQZ basis set sequence. However, the mean signed deviation was an order of magnitude larger. Problems partially due to basis set superposition error were identified with second-row compounds, which resulted in a weak performance for the smaller VDZ-F12/VTZ-F12 combination of basis sets.

  5. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    PubMed

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state of the art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
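
    For illustration only, a bare-bones progressive-sampling loop over a handful of hyper-parameter configurations (the paper couples this idea with Bayesian optimization over both algorithms and hyper-parameter values; scikit-learn and the logistic-regression grid here are assumptions of this sketch):

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=20000, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    configs = [{"C": c} for c in (0.001, 0.01, 0.1, 1.0, 10.0)]
    n = 500
    while len(configs) > 1 and n <= len(X_tr):
        # Evaluate all surviving configurations on the current sample size
        scores = [LogisticRegression(max_iter=1000, **cfg)
                  .fit(X_tr[:n], y_tr[:n]).score(X_te, y_te) for cfg in configs]
        order = np.argsort(scores)[::-1]
        configs = [configs[i] for i in order[: max(1, len(configs) // 2)]]
        n *= 4                              # grow the training sample
    print("selected:", configs[0])
    ```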

  6. On the influence of airfoil deviations on the aerodynamic performance of wind turbine rotors

    NASA Astrophysics Data System (ADS)

    Winstroth, J.; Seume, J. R.

    2016-09-01

    The manufacture of large wind turbine rotor blades is a difficult task that still involves a certain degree of manual labor. Due to the complexity, airfoil deviations between the design airfoils and the manufactured blade are certain to arise. Presently, the understanding of the impact of manufacturing uncertainties on the aerodynamic performance is still incomplete. The present work analyzes the influence of a series of airfoil deviations likely to occur during manufacturing by means of Computational Fluid Dynamics and the aeroelastic code FAST. The average power production of the NREL 5MW wind turbine is used to evaluate the different airfoil deviations. The analyzed deviations include: mold tilt towards the leading and trailing edge, thick bond lines, thick bond lines with cantilever correction, backward-facing steps, and airfoil waviness. The most severe influences are observed for mold tilt towards the leading edge and for thick bond lines. By applying the cantilever correction, the influence of thick bond lines is almost compensated. The influence of airfoil waviness depends strongly on the amplitude and the location along the surface of the airfoil. An increased influence is observed for backward-facing steps once they are high enough to trigger boundary layer transition close to the leading edge.

  7. Models of Lift and Drag Coefficients of Stalled and Unstalled Airfoils in Wind Turbines and Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Spera, David A.

    2008-01-01

    Equations are developed with which to calculate lift and drag coefficients along the spans of torsionally-stiff rotating airfoils of the type used in wind turbine rotors and wind tunnel fans, at angles of attack in both the unstalled and stalled aerodynamic regimes. Explicit adjustments are made for the effects of aspect ratio (length to chord width) and airfoil thickness ratio. Calculated lift and drag parameters are compared to measured parameters for 55 airfoil data sets including 585 test points. Mean deviation was found to be -0.4 percent and standard deviation was 4.8 percent. When the proposed equations were applied to the calculation of power from a stall-controlled wind turbine tested in a NASA wind tunnel, mean deviation from 54 data points was -1.3 percent and standard deviation was 4.0 percent. Pressure-rise calculations for a large wind tunnel fan deviated by 2.7 percent (mean) and 4.4 percent (standard). The assumption that a single set of lift and drag coefficient equations can represent the stalled aerodynamic behavior of a wide variety of airfoils was found to be satisfactory.

  8. Characterization of cardiac quiescence from retrospective cardiac computed tomography using a correlation-based phase-to-phase deviation measure

    PubMed Central

    Wick, Carson A.; McClellan, James H.; Arepalli, Chesnal D.; Auffermann, William F.; Henry, Travis S.; Khosa, Faisal; Coy, Adam M.; Tridandapani, Srini

    2015-01-01

    Purpose: Accurate knowledge of cardiac quiescence is crucial to the performance of many cardiac imaging modalities, including computed tomography coronary angiography (CTCA). To accurately quantify quiescence, a method for detecting the quiescent periods of the heart from retrospective cardiac computed tomography (CT) using a correlation-based, phase-to-phase deviation measure was developed. Methods: Retrospective cardiac CT data were obtained from 20 patients (11 male, 9 female, 33–74 yr) and the left main, left anterior descending, left circumflex, right coronary artery (RCA), and interventricular septum (IVS) were segmented for each phase using a semiautomated technique. Cardiac motion of individual coronary vessels as well as the IVS was calculated using phase-to-phase deviation. As an easily identifiable feature, the IVS was analyzed to assess how well it predicts vessel quiescence. Finally, the diagnostic quality of the reconstructed volumes from the quiescent phases determined using the deviation measure from the vessels in aggregate and the IVS was compared to that from quiescent phases calculated by the CT scanner. Three board-certified radiologists, fellowship-trained in cardiothoracic imaging, graded the diagnostic quality of the reconstructions using a Likert response format: 1 = excellent, 2 = good, 3 = adequate, 4 = nondiagnostic. Results: Systolic and diastolic quiescent periods were identified for each subject from the vessel motion calculated using the phase-to-phase deviation measure. The motion of the IVS was found to be similar to the aggregate vessel (AGG) motion. The diagnostic quality of the coronary vessels for the quiescent phases calculated from the aggregate vessel (PAGG) and IVS (PIVS) deviation signal using the proposed methods was comparable to the quiescent phases calculated by the CT scanner (PCT). The one exception was the RCA, which improved for PAGG for 18 of the 20 subjects when compared to PCT (PCT = 2.48; PAGG = 2.07, p = 0.001). Conclusions: A method for quantifying the motion of specific coronary vessels using a correlation-based, phase-to-phase deviation measure was developed and tested on 20 patients receiving cardiac CT exams. The IVS was found to be a suitable predictor of vessel quiescence. The diagnostic quality of the quiescent phases detected by the proposed methods was comparable to those calculated by the CT scanner. The ability to quantify coronary vessel quiescence from the motion of the IVS can be used to develop new CTCA gating techniques and quantify the resulting potential improvement in CTCA image quality. PMID:25652511

  9. Characterization of cardiac quiescence from retrospective cardiac computed tomography using a correlation-based phase-to-phase deviation measure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wick, Carson A.; McClellan, James H.; Arepalli, Chesnal D.

    2015-02-15

    Purpose: Accurate knowledge of cardiac quiescence is crucial to the performance of many cardiac imaging modalities, including computed tomography coronary angiography (CTCA). To accurately quantify quiescence, a method for detecting the quiescent periods of the heart from retrospective cardiac computed tomography (CT) using a correlation-based, phase-to-phase deviation measure was developed. Methods: Retrospective cardiac CT data were obtained from 20 patients (11 male, 9 female, 33–74 yr) and the left main, left anterior descending, left circumflex, right coronary artery (RCA), and interventricular septum (IVS) were segmented for each phase using a semiautomated technique. Cardiac motion of individual coronary vessels as well as the IVS was calculated using phase-to-phase deviation. As an easily identifiable feature, the IVS was analyzed to assess how well it predicts vessel quiescence. Finally, the diagnostic quality of the reconstructed volumes from the quiescent phases determined using the deviation measure from the vessels in aggregate and the IVS was compared to that from quiescent phases calculated by the CT scanner. Three board-certified radiologists, fellowship-trained in cardiothoracic imaging, graded the diagnostic quality of the reconstructions using a Likert response format: 1 = excellent, 2 = good, 3 = adequate, 4 = nondiagnostic. Results: Systolic and diastolic quiescent periods were identified for each subject from the vessel motion calculated using the phase-to-phase deviation measure. The motion of the IVS was found to be similar to the aggregate vessel (AGG) motion. The diagnostic quality of the coronary vessels for the quiescent phases calculated from the aggregate vessel (PAGG) and IVS (PIVS) deviation signal using the proposed methods was comparable to the quiescent phases calculated by the CT scanner (PCT). The one exception was the RCA, which improved for PAGG for 18 of the 20 subjects when compared to PCT (PCT = 2.48; PAGG = 2.07, p = 0.001). Conclusions: A method for quantifying the motion of specific coronary vessels using a correlation-based, phase-to-phase deviation measure was developed and tested on 20 patients receiving cardiac CT exams. The IVS was found to be a suitable predictor of vessel quiescence. The diagnostic quality of the quiescent phases detected by the proposed methods was comparable to those calculated by the CT scanner. The ability to quantify coronary vessel quiescence from the motion of the IVS can be used to develop new CTCA gating techniques and quantify the resulting potential improvement in CTCA image quality.

  10. Multi-temporal thermal analyses for submarine groundwater discharge (SGD) detection over large spatial scales in the Mediterranean

    NASA Astrophysics Data System (ADS)

    Hennig, Hanna; Mallast, Ulf; Merz, Ralf

    2015-04-01

    Submarine groundwater discharge (SGD) sites act as important pathways for nutrients and contaminants that deteriorate marine ecosystems. In the Mediterranean, it is estimated that 75% of the freshwater input is contributed by karst aquifers. Thermal remote sensing can be used for a pre-screening of potential SGD sites in order to optimize field surveys. Although different platforms (ground-, air- and spaceborne) may serve for thermal remote sensing, the most cost-effective are spaceborne platforms (satellites), which likewise cover the largest spatial scale (>100 km per image). Therefore, an automated and objective approach using thermal satellite images from Landsat 7 and Landsat 8 was applied to localize potential SGD sites on a large spatial scale. The method of Mallast et al. (2014), based on descriptive statistical parameters, specifically the range and the standard deviation, was adapted to the Mediterranean Sea. Since that method was developed for the Dead Sea, where satellite images with cloud cover are rare and no sea level change occurs through tidal cycles, it was essential to adapt it to a region where tidal cycles occur and cloud cover is more frequent. These adaptations include: (1) an automatic and adaptive coastline detection; (2) the inclusion and processing of cloud-covered scenes to enlarge the data basis; (3) the implementation of tidal data in order to analyze low-tide images, as SGD is enhanced during these phases; and (4) a test of the applicability to Landsat 8 images, which will provide data in the future once Landsat 7 stops working. As previously shown, the range method gives more accurate results than the standard deviation. However, its result depends exclusively on two scenes (minimum and maximum) and is largely influenced by outliers. To counteract this drawback, we developed a new approach. Since sea surface temperature (SST) is assumed to be stabilized by groundwater at SGD sites, the slope of a bootstrapped linear model fitted to the sorted SST of each pixel should be less steep than the slope of the surrounding area, resulting in less influence from outliers and an equal weighting of all integrated scenes. Both methods could be used to detect SGD sites in the Mediterranean regardless of the discharge characteristics (diffuse or focused); exceptions are sites with deep emergences. Better results were obtained in bays compared to more exposed sites. Since the range of the SST is mostly influenced by the maximum and minimum of the scenes, the slope approach can be seen as a more representative method that uses all scenes. References: Mallast, U., Gloaguen, R., Friesen, J., Rödiger, T., Geyer, S., Merz, R., Siebert, C., 2014. How to identify groundwater-caused thermal anomalies in lakes based on multi-temporal satellite data in semi-arid regions. Hydrol. Earth Syst. Sci. 18 (7), 2773-2787.

  11. Regional snow-avalanche detection using object-based image analysis of near-infrared aerial imagery

    NASA Astrophysics Data System (ADS)

    Korzeniowska, Karolina; Bühler, Yves; Marty, Mauro; Korup, Oliver

    2017-10-01

    Snow avalanches are destructive mass movements in mountain regions that continue to claim lives and cause infrastructural damage and traffic detours. Given that avalanches often occur in remote and poorly accessible steep terrain, their detection and mapping is laborious and time consuming. Nonetheless, systematic avalanche detection over large areas could help to generate more complete and up-to-date inventories (cadastres) necessary for validating avalanche forecasting and hazard mapping. In this study, we focused on automatically detecting avalanches and classifying them into release zones, tracks, and run-out zones based on 0.25 m near-infrared (NIR) ADS80-SH92 aerial imagery using an object-based image analysis (OBIA) approach. Our algorithm takes into account the brightness, the normalised difference vegetation index (NDVI), the normalised difference water index (NDWI), and its standard deviation (SDNDWI) to distinguish avalanches from other land-surface elements. Using normalised parameters allows applying this method across large areas. We trained the method by analysing the properties of snow avalanches at three 4 km² areas near Davos, Switzerland. We compared the results with manually mapped avalanche polygons and obtained a user's accuracy of >0.9 and a Cohen's kappa of 0.79-0.85. Testing the method for a larger area of 226.3 km², we estimated producer's and user's accuracies of 0.61 and 0.78, respectively, with a Cohen's kappa of 0.67. Detected avalanches that overlapped with reference data by >80% occurred randomly throughout the testing area, showing that our method avoids overfitting. Our method has potential for large-scale avalanche mapping, although further investigations into other regions are desirable to verify the robustness of our selected thresholds and the transferability of the method.
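
    A sketch of the pixel-level features named above, band arithmetic plus a windowed standard deviation (band choices are assumptions here; the paper's object-based segmentation and thresholds are not reproduced):

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def ndvi(nir, red):
        """Normalised difference vegetation index."""
        return (nir - red) / (nir + red + 1e-9)

    def ndwi(green, nir):
        """Normalised difference water index (green/NIR variant assumed)."""
        return (green - nir) / (green + nir + 1e-9)

    def local_std(x, size=9):
        """Windowed standard deviation, an SDNDWI-style texture feature."""
        m = uniform_filter(x, size)
        m2 = uniform_filter(x * x, size)
        return np.sqrt(np.maximum(m2 - m * m, 0.0))

    # Synthetic three-band scene standing in for 0.25 m NIR aerial imagery
    rng = np.random.default_rng(3)
    green, red, nir = rng.uniform(0.2, 0.8, size=(3, 256, 256))
    w = ndwi(green, nir)
    features = np.stack([ndvi(nir, red), w, local_std(w)])
    print(features.shape)   # (3, 256, 256) feature stack for classification
    ```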

  12. Robust Confidence Interval for a Ratio of Standard Deviations

    ERIC Educational Resources Information Center

    Bonett, Douglas G.

    2006-01-01

    Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…

  13. Identifying specific erotic cues in sexual deviations by audiotaped descriptions.

    PubMed Central

    Abel, G G; Blanchard, E B; Barlow, D H; Mavissakalian, M

    1975-01-01

    Using audiotaped descriptions of sexual experiences and a direct measure of penile erection, it is possible to specify more precisely erotic cues in sexual deviates. Results indicated that such cues are highly idiosyncratic. Some tentative conclusions and suggested application for the method are discussed. PMID:1184490

  14. Efficiency of thin magnetically arrested discs around black holes

    NASA Astrophysics Data System (ADS)

    Avara, Mark J.; McKinney, Jonathan C.; Reynolds, Christopher S.

    2016-10-01

    The radiative and jet efficiencies of thin magnetized accretion discs around black holes (BHs) are affected by BH spin and the presence of a magnetic field that, when strong, could lead to large deviations from Novikov-Thorne (NT) thin disc theory. To seek the maximum deviations, we perform general relativistic magnetohydrodynamic simulations of radiatively efficient thin (half-height H to radius R of H/R ≈ 0.10) discs around moderately rotating BHs with a/M = 0.5. First, our simulations, each evolved for more than 70 000 rg/c (gravitational radius rg and speed of light c), show that large-scale magnetic field readily accretes inward even through our thin disc and builds up to the magnetically arrested disc (MAD) state. Secondly, our simulations of thin MADs show the disc achieves a radiative efficiency of ηr ≈ 15 per cent (after estimating photon capture), which is about twice the NT value of ηr ≈ 8 per cent for a/M = 0.5 and gives the same luminosity as an NT disc with a/M ≈ 0.9. Compared to prior simulations with ≲10 per cent deviations, our result of an ≈80 per cent deviation sets a new benchmark. Building on prior work, we are now able to complete an important scaling law which suggests that observed jet quenching in the high-soft state in BH X-ray binaries is consistent with an ever-present MAD state with a weak yet sustained jet.

  15. Evaluating the accuracy and large inaccuracy of two continuous glucose monitoring systems.

    PubMed

    Leelarathna, Lalantha; Nodale, Marianna; Allen, Janet M; Elleri, Daniela; Kumareswaran, Kavita; Haidar, Ahmad; Caldwell, Karen; Wilinska, Malgorzata E; Acerini, Carlo L; Evans, Mark L; Murphy, Helen R; Dunger, David B; Hovorka, Roman

    2013-02-01

    This study evaluated the accuracy and large inaccuracy of the Freestyle Navigator (FSN) (Abbott Diabetes Care, Alameda, CA) and Dexcom SEVEN PLUS (DSP) (Dexcom, Inc., San Diego, CA) continuous glucose monitoring (CGM) systems during closed-loop studies. Paired CGM and plasma glucose values (7,182 data pairs) were collected, every 15-60 min, from 32 adults (36.2±9.3 years) and 20 adolescents (15.3±1.5 years) with type 1 diabetes who participated in closed-loop studies. Levels 1, 2, and 3 of large sensor error, with increasing severity, were defined according to absolute relative deviation greater than or equal to ±40%, ±50%, and ±60% at a reference glucose level of ≥6 mmol/L, or absolute deviation greater than or equal to ±2.4 mmol/L, ±3.0 mmol/L, and ±3.6 mmol/L at a reference glucose level of <6 mmol/L. Median absolute relative deviation was 9.9% for FSN and 12.6% for DSP. Proportions of data points in Zones A and B of Clarke error grid analysis were similar (96.4% for FSN vs. 97.8% for DSP). Large sensor over-reading, which increases the risk of insulin over-delivery and hypoglycemia, occurred two- to threefold more frequently with DSP than FSN (once every 2.5, 4.6, and 10.7 days of FSN use vs. 1.2, 2.0, and 3.7 days of DSP use for Level 1-3 errors, respectively). At Levels 2 and 3, large sensor errors lasting 1 h or longer were absent with FSN but persisted with DSP. FSN and DSP differ substantially in the frequency and duration of large inaccuracy despite only modest differences in conventional measures of numerical and clinical accuracy. Further evaluations are required to confirm that FSN is more suitable for integration into closed-loop delivery systems.
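
    The level definitions quoted above translate directly into code; a minimal sketch:

    ```python
    def large_error_level(cgm, ref):
        """Classify a CGM reading's large-inaccuracy level (0 = none, 1-3 per
        the abstract's thresholds) against a reference glucose in mmol/L."""
        if ref >= 6.0:
            dev = 100.0 * abs(cgm - ref) / ref   # absolute relative deviation, %
            cuts = (40.0, 50.0, 60.0)
        else:
            dev = abs(cgm - ref)                 # absolute deviation, mmol/L
            cuts = (2.4, 3.0, 3.6)
        return sum(dev >= c for c in cuts)       # 0..3

    print(large_error_level(cgm=10.8, ref=7.2))  # 50% relative deviation -> 2
    print(large_error_level(cgm=3.0, ref=5.5))   # 2.5 mmol/L deviation -> 1
    ```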

  16. Method of validating measurement data of a process parameter from a plurality of individual sensor inputs

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1998-01-01

    A method for generating a validated measurement of a process parameter at a point in time by using a plurality of individual sensor inputs from a scan of the sensors at that point in time. The sensor inputs from the scan are stored, and a first validation pass is initiated by computing an initial average of all stored sensor inputs. Each sensor input is deviation checked by comparing the input, within a preset tolerance, against the initial average. If the first deviation check is unsatisfactory, the sensor that produced the unsatisfactory input is flagged as suspect. It is then determined whether at least two of the inputs have not been flagged as suspect and are therefore considered good inputs. If two or more inputs are good, a second validation pass is initiated by computing a second average of all the good sensor inputs and deviation checking the good inputs by comparing each good input, within a preset tolerance, against the second average. If the second deviation check is satisfactory, the second average is displayed as the validated measurement and the suspect sensor is flagged as bad. A validation fault occurs if at least two inputs are not considered good, or if the second deviation check is not satisfactory. In the latter situation, the inputs from all the sensors are compared against the last validated measurement, and the value from the sensor input that deviates least from the last valid measurement is displayed.
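
    The two-pass logic reads almost like pseudocode; a compact sketch (tolerances and values hypothetical):

    ```python
    def validate(inputs, tol, last_valid):
        """Two-pass sensor validation following the method described above.

        inputs     : readings from one scan of redundant sensors
        tol        : preset tolerance for the deviation checks
        last_valid : last validated measurement, used on a validation fault
        Returns (validated_value, fault).
        """
        first_avg = sum(inputs) / len(inputs)
        good = [x for x in inputs if abs(x - first_avg) <= tol]    # pass 1
        if len(good) >= 2:
            second_avg = sum(good) / len(good)                     # pass 2
            if all(abs(x - second_avg) <= tol for x in good):
                return second_avg, False
        # Fault: fall back to the input deviating least from the last valid value
        return min(inputs, key=lambda x: abs(x - last_valid)), True

    # One failed-high sensor is excluded; the other three are averaged
    print(validate([10.1, 10.2, 9.9, 14.0], tol=1.2, last_valid=10.0))
    ```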

  17. Reliable Viscosity Calculation from Equilibrium Molecular Dynamics Simulations: A Time Decomposition Method.

    PubMed

    Zhang, Yong; Otani, Akihito; Maginn, Edward J

    2015-08-11

    Equilibrium molecular dynamics is often used in conjunction with a Green-Kubo integral of the pressure tensor autocorrelation function to compute the shear viscosity of fluids. This approach is computationally expensive and is subject to a large amount of variability because the plateau region of the Green-Kubo integral is difficult to identify unambiguously. Here, we propose a time decomposition approach for computing the shear viscosity using the Green-Kubo formalism. Instead of one long trajectory, multiple independent trajectories are run and the Green-Kubo relation is applied to each trajectory. The averaged running integral as a function of time is fit to a double-exponential function with a weighting function derived from the standard deviation of the running integrals. Such a weighting function minimizes the uncertainty of the estimated shear viscosity and provides an objective means of estimating the viscosity. While the formal Green-Kubo integral requires an integration to infinite time, we suggest an integration cutoff time tcut, which can be determined by the relative values of the running integral and the corresponding standard deviation. This approach for computing the shear viscosity can be easily automated and used in computational screening studies where human judgment and intervention in the data analysis are impractical. The method has been applied to the calculation of the shear viscosity of a relatively low-viscosity liquid, ethanol, and relatively high-viscosity ionic liquid, 1-n-butyl-3-methylimidazolium bis(trifluoromethane-sulfonyl)imide ([BMIM][Tf2N]), over a range of temperatures. These test cases show that the method is robust and yields reproducible and reliable shear viscosity values.
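
    A sketch of the time decomposition idea, with synthetic running integrals standing in for the Green-Kubo integrals of real MD trajectories (the double-exponential form and standard-deviation weighting follow the description above; all numbers are illustrative):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def double_exp(t, a, alpha, tau1, tau2):
        """Double-exponential form for the averaged running Green-Kubo integral."""
        return a * (alpha * (1 - np.exp(-t / tau1))
                    + (1 - alpha) * (1 - np.exp(-t / tau2)))

    # Synthetic stand-in for running integrals from 20 independent trajectories:
    # each saturates toward the true viscosity with noise growing in time.
    rng = np.random.default_rng(2)
    t = np.linspace(0.01, 50.0, 500)
    true = double_exp(t, a=1.0, alpha=0.7, tau1=1.0, tau2=10.0)
    runs = true + rng.normal(scale=0.02 * np.sqrt(t), size=(20, t.size))

    avg = runs.mean(axis=0)
    std = runs.std(axis=0, ddof=1)      # weighting: trust tight early times more

    popt, _ = curve_fit(double_exp, t, avg, p0=(1.0, 0.5, 1.0, 5.0),
                        sigma=std, absolute_sigma=True, maxfev=20000)
    print("estimated plateau (viscosity):", popt[0])
    ```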

  18. Comparison of different methods for the in situ measurement of forest litter moisture content

    NASA Astrophysics Data System (ADS)

    Schunk, C.; Ruth, B.; Leuchner, M.; Wastl, C.; Menzel, A.

    2016-02-01

    Dead fine fuel (e.g., litter) moisture content is an important parameter for both forest fire and ecological applications as it is related to ignitability, fire behavior and soil respiration. Real-time availability of this value would thus be a great benefit to fire risk management and prevention. However, the comprehensive literature review in this paper shows that there is no easy-to-use method for automated measurements available. This study investigates the applicability of four different sensor types (permittivity and electrical resistance measuring principles) for this measurement. Comparisons were made to manual gravimetric reference measurements carried out almost daily for one fire season and overall agreement was good (highly significant correlations with 0.792 ≤ r ≤ 0.947, p < 0.001). Standard deviations within sensor types were linearly correlated to daily sensor mean values; however, above a certain threshold they became irregular, which may be linked to exceedance of the working ranges. Thus, measurements with irregular standard deviations were considered unusable and relationships between gravimetric and automatic measurements of all individual sensors were compared only for useable periods. A large drift in these relationships became obvious from drought to drought period. This drift may be related to installation effects or settling and decomposition of the litter layer throughout the fire season. Because of the drift and the in situ calibration necessary, it cannot be recommended to use the methods presented here for monitoring purposes and thus operational hazard management. However, they may be interesting for scientific studies when some manual fuel moisture measurements are made anyway.

  19. Improved ambiguity resolution for URTK with dynamic atmosphere constraints

    NASA Astrophysics Data System (ADS)

    Tang, Weiming; Liu, Wenjian; Zou, Xuan; Li, Zongnan; Chen, Liang; Deng, Chenlong; Shi, Chuang

    2016-12-01

    A raw-observation processing method with prior knowledge of the ionospheric delay can strengthen ambiguity resolution (AR), but it does not make full use of the relatively long wavelength of the wide-lane (WL) observation. Furthermore, the accuracy of the atmospheric delays calculated from the regional augmentation information differs considerably in quality, while the atmospheric constraint used in current methods is usually set to an empirical value. A proper constraint, which matches the accuracy of the calculated atmospheric delays, can most effectively compensate the residual systematic biases caused by large inter-station distances. Therefore, the standard deviation of the residual atmospheric parameters should be fine-tuned. This paper presents an atmosphere-constrained AR method for undifferenced network RTK (URTK) rovers, whose ambiguities are sequentially fixed according to their wavelengths. Furthermore, this research systematically analyzes the residual atmospheric error and finds that it varies mainly with the positional relationship between the rover and the chosen reference stations. More importantly, its ionospheric part at a given location is also cyclically influenced every day. Therefore, the standard deviation of the residual ionospheric error can be modeled by a daily repeating cosine or other function fitted to the previous day's data, and applied by rovers as a pseudo-observation. With data collected at 29 stations from a continuously operating reference station network in Guangdong Province (GDCORS) in China, the efficiency of the proposed approach is confirmed: it improves the AR success and error rates by 10-20 % compared to the WL-L1-IF approach and achieves much better positioning accuracy.
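
    The daily repeating model for the residual ionospheric standard deviation can be sketched as a simple cosine fit to the previous day's values (synthetic numbers; the real sigmas would come from network residuals):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def daily_cosine(t_hours, a, b, phi):
        """Daily repeating model for the standard deviation of the residual
        ionospheric error: sigma(t) = a + b*cos(2*pi*t/24 - phi)."""
        return a + b * np.cos(2 * np.pi * t_hours / 24.0 - phi)

    # Fit yesterday's per-epoch sigma estimates, then apply the model today
    # as the pseudo-observation weight.
    t = np.linspace(0, 24, 288)
    rng = np.random.default_rng(4)
    sigma_yesterday = daily_cosine(t, 0.02, 0.01, 1.0) + rng.normal(0, 0.002, t.size)
    popt, _ = curve_fit(daily_cosine, t, sigma_yesterday, p0=(0.02, 0.01, 0.0))
    print("predicted sigma at 12:00 today:", daily_cosine(12.0, *popt))
    ```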

  20. Relationship between chin deviation and the position and morphology of the mandible in individuals with a unilateral cleft lip and palate

    PubMed Central

    Kim, Kyung-Seon; Park, Soo-Byung; Kim, Seong-Sik; Kim, Yong-Il

    2013-01-01

    Objective In this study, we aimed to examine the relationship between chin deviation and the positional and morphological features of the mandible and to determine the factors that contributed to chin deviation in individuals with a unilateral cleft lip and palate (UCLP). Methods Cone-beam computed tomography (CBCT) images of 28 adults with UCLP were analyzed in this study. Segmented three-dimensional temporomandibular fossa and mandible images were reconstructed, and angular, linear, and volumetric parameters were measured. Results For all 28 individuals, the chin was found to deviate to the cleft side by 1.59 mm. Moreover, among these 28 individuals, only 7 showed distinct (more than 4 mm) chin deviation, which was toward the cleft side. Compared to the non-cleft side, the mandibular body length, frontal ramal inclination, and vertical position of the condyle were lower and inclination of the temporomandibular fossa was steeper on the cleft side. Furthermore, the differences in inclination of the temporomandibular fossa, mandibular body length, ramus length, and condylar volume ratio (non-deviated/deviated) were positively correlated with chin deviation. Conclusions UCLP individuals show mild chin deviation to the cleft side. Statistical differences were noted in the parameters that represented positional and morphological asymmetries of the mandible and temporomandibular fossa; however, these differences were too small to indicate clinical significance. PMID:24015386

  1. Error compensation of IQ modulator using two-dimensional DFT

    NASA Astrophysics Data System (ADS)

    Ohshima, Takashi; Maesaka, Hirokazu; Matsubara, Shinichi; Otake, Yuji

    2016-06-01

    It is important to precisely set and keep the phase and amplitude of an rf signal in the accelerating cavity of modern accelerators, such as an X-ray Free Electron Laser (XFEL) linac. In these accelerators an acceleration rf signal is generated or detected by an In-phase and Quadrature (IQ) modulator or demodulator. If there are any deviations of the phase and the amplitude from the ideal values, crosstalk between the phase and the amplitude of the output signal of the IQ modulator or the demodulator arises. This causes instability of the feedback controls that simultaneously stabilize both the rf phase and the amplitude. To compensate for such deviations, we developed a novel compensation method using a two-dimensional Discrete Fourier Transform (DFT). Because the observed deviations of the phase and amplitude of an IQ modulator exhibit sinusoidal and polynomial behavior in the phase angle and the amplitude of the rf vector, respectively, a DFT calculation with these basis functions yields a good approximation with a small number of compensation coefficients. It also suppresses high-frequency noise components that arise when measuring the deviation data. These characteristics are advantages over a Look Up Table (LUT) compensation method, which usually demands many compensation elements, such as about 300, that are not easy to manage. We applied the DFT compensation method to the output rf signal of a C-band IQ modulator at SACLA, the XFEL facility in Japan. The amplitude deviation of the IQ modulator after DFT compensation was reduced from a peak of 15.0% to less than 0.2% over an amplitude control range from 0.1 V to 0.9 V (1.0 V full scale) and a phase control range from 0 to 360 degrees. The number of compensation coefficients is 60, which is smaller than that of the LUT method and easy to handle and maintain.
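
    The sketch below illustrates the flavor of such a compensation: a least-squares fit of a measured amplitude-deviation surface to a small basis that is sinusoidal in phase angle and polynomial in amplitude, standing in for the paper's 2D DFT with those basis functions. The deviation surface, grid, and basis sizes are all hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical measured amplitude deviation of an IQ modulator on a grid of
    # set amplitudes r (fraction of full scale) and phase angles theta (rad).
    r = np.linspace(0.1, 0.9, 9)
    theta = np.linspace(0, 2 * np.pi, 36, endpoint=False)
    R, TH = np.meshgrid(r, theta, indexing="ij")
    dev = 0.05 * R * np.cos(TH) + 0.02 * R**2 * np.sin(2 * TH) + rng.normal(0, 1e-3, R.shape)

    # Small separable basis: harmonics in theta (sinusoidal behavior) times
    # powers of r (polynomial behavior); the coefficient count stays small
    # (here 15 columns).
    def design(Rg, THg, n_harm=3, n_pow=3):
        cols = []
        for m in range(n_harm):
            for n in range(n_pow):
                cols.append(np.cos(m * THg).ravel() * Rg.ravel() ** n)
                if m > 0:
                    cols.append(np.sin(m * THg).ravel() * Rg.ravel() ** n)
        return np.column_stack(cols)

    X = design(R, TH)
    coef, *_ = np.linalg.lstsq(X, dev.ravel(), rcond=None)

    # Compensation: subtract the modeled deviation at any commanded (r, theta).
    def modeled_deviation(r_cmd, th_cmd):
        x = design(np.atleast_2d(r_cmd), np.atleast_2d(th_cmd))
        return float(x @ coef)

    print(f"modeled deviation at r=0.5, theta=1.0 rad: {modeled_deviation(0.5, 1.0):+.4f}")
    ```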

  2. The linear sizes tolerances and fits system modernization

    NASA Astrophysics Data System (ADS)

    Glukhov, V. I.; Grinevich, V. A.; Shalay, V. V.

    2018-04-01

    The study addresses the pressing issue of ensuring the quality of technical products during the tolerancing of component parts. The aim of the paper is to develop alternatives for improving the system of linear size tolerances and dimensional fits in the international standard ISO 286-1. The tasks of the work are, first, to classify as linear sizes the additional linear coordinating sizes that determine the location of detail elements and, second, to justify the basic deviation of the tolerance interval for an element's linear size. Geometrical modeling of real detail elements together with analytical and experimental methods is used in the research. It is shown that the linear coordinates are the dimensional basis of the elements' linear sizes. To standardize the accuracy of linear coordinating sizes in all accuracy classes, it is sufficient to select in the standardized tolerance system only one tolerance interval with symmetrical deviations: Js for internal dimensional elements (holes) and js for external elements (shafts). The basic deviation of this coordinating tolerance is the mean zero deviation, which coincides with the nominal value of the coordinating size. The other intervals of the tolerance system are retained for normalizing the accuracy of the elements' linear sizes, with a fundamental change in the basic deviation of all tolerance intervals: the basic deviation becomes the maximum deviation corresponding to the material limit of the element, i.e., EI, the lower deviation, for the sizes of internal elements (holes), and es, the upper deviation, for the sizes of external elements (shafts). It is the maximum-material sizes that are involved in the mating of the dimensional elements of shafts and holes and that determine the type of fit.

  3. System and Method for Outlier Detection via Estimating Clusters

    NASA Technical Reports Server (NTRS)

    Iverson, David J. (Inventor)

    2016-01-01

    An efficient method and system for real-time or offline analysis of multivariate sensor data for use in anomaly detection, fault detection, and system health monitoring is provided. Models automatically derived from training data, typically nominal system data acquired from sensors in normally operating conditions or from detailed simulations, are used to identify unusual, out-of-family data samples (outliers) that indicate possible system failure or degradation. Outliers are determined by analyzing the degree of deviation of current system behavior from the models formed from the nominal system data. The deviation of current system behavior is presented as an easy-to-interpret numerical score along with a measure of the relative contribution of each system parameter to any off-nominal deviation. The techniques described herein may also be used to "clean" the training data.
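
    A minimal sketch of this style of cluster-based outlier scoring (not the patented system's actual algorithm): nominal training data are summarized by k-means cluster centers, and a new sample's deviation score is its normalized distance to the nearest center, with per-parameter contributions. The data, cluster count, and scoring details are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Nominal multivariate sensor data (hypothetical training set, 3 parameters).
    train = rng.normal([10.0, 50.0, 0.2], [0.5, 2.0, 0.05], size=(500, 3))

    # Simple stand-in for the derived model: cluster centers from k-means.
    def kmeans(X, k=8, iters=50):
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean(axis=0)
        return centers

    scale = train.std(axis=0)            # normalize so parameters are comparable
    centers = kmeans(train / scale)

    def deviation_score(sample):
        """Distance to nearest nominal cluster plus per-parameter contributions."""
        d = (sample / scale) - centers
        i = np.argmin((d ** 2).sum(axis=1))
        contrib = d[i] ** 2 / max((d[i] ** 2).sum(), 1e-12)
        return np.sqrt((d[i] ** 2).sum()), contrib

    score, contrib = deviation_score(np.array([12.5, 50.0, 0.2]))  # off-nominal 1st channel
    print(f"deviation score = {score:.2f}; parameter contributions = {np.round(contrib, 2)}")
    ```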

  4. Ku-band radar threshold analysis

    NASA Technical Reports Server (NTRS)

    Weber, C. L.; Polydoros, A.

    1979-01-01

    The statistics of the CFAR threshold for the Ku-band radar were determined. Exact analytical results were developed for both the mean and standard deviation in the designated search mode. The mean value is compared to the results of a previously reported simulation; the analytical results are more optimistic than the simulation results, for which no explanation is offered. The normalized standard deviation is shown to be very sensitive to signal-to-noise ratio (SNR) and very insensitive to the noise correlation present in the range gates of the designated search mode. The substantial variation in the CFAR threshold is dominant at large values of SNR, where the normalized standard deviation is greater than 0.3. Whether this significantly affects the resulting probability of detection deserves additional attention.

  5. [The uncertainty evaluation of analytical results of 27 elements in geological samples by X-ray fluorescence spectrometry].

    PubMed

    Wang, Yi-Ya; Zhan, Xiu-Chun

    2014-04-01

    The uncertainty of analytical results for 165 geological samples measured by polarized energy-dispersive X-ray fluorescence spectrometry (P-EDXRF) was evaluated according to internationally accepted guidelines. One hundred sixty-five pressed pellets of geological samples with similar matrices and reliable reference values were analyzed by P-EDXRF. The samples were divided into several concentration sections covering the concentration range of each component, and the relative uncertainties caused by precision and by accuracy were evaluated for 27 components. For one element in one concentration section, the relative uncertainty caused by precision was calculated from the average relative standard deviation of the different concentration levels in that section, with n = 6 replicate results per level. The relative uncertainty caused by accuracy in one concentration section was evaluated from the relative standard deviation of the relative deviations of the different concentration levels in that section. Following error propagation theory, the precision and accuracy uncertainties were combined into a global uncertainty, which serves as the method uncertainty. This model resolves a series of difficulties in uncertainty evaluation, such as the uncertainties caused by the complex matrix of geological samples, the calibration procedure, standard and unknown samples, matrix correction, overlap correction, sample preparation, instrument condition, and the mathematical model. The uncertainty obtained in this way can be taken as the uncertainty of results for unknown samples of similar matrix within the same concentration section. The evaluation model is a basic statistical method of practical value and can provide a foundation for building subsequent uncertainty evaluation functions. However, the model requires a large number of samples and cannot simply be transferred to sample types with different matrices. We will use this study as a basis for establishing a mathematically sound statistical model applicable to different sample types.
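
    A compact numerical illustration of the combination step (hypothetical replicate data; the paper pools several levels within a concentration section, which is abbreviated here to a single level):

    ```python
    import numpy as np

    # Hypothetical replicate results (n = 6) for one element at one concentration
    # level, plus the certified reference value for that level.
    replicates = np.array([54.1, 53.8, 54.5, 54.0, 54.3, 53.9])   # measured, ug/g
    certified = 54.6                                              # reference value

    # Relative uncertainty from precision: relative standard deviation of replicates.
    u_precision = replicates.std(ddof=1) / replicates.mean()

    # Relative uncertainty from accuracy: relative deviation from the certified
    # value (the paper uses the RSD of relative deviations over several levels
    # in a concentration section; a single level is shown for brevity).
    u_accuracy = abs(replicates.mean() - certified) / certified

    # Error propagation: combine the two components in quadrature.
    u_method = np.sqrt(u_precision**2 + u_accuracy**2)
    print(f"precision {u_precision:.3%}, accuracy {u_accuracy:.3%}, combined {u_method:.3%}")
    ```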

  6. Large-deviation theory for diluted Wishart random matrices

    NASA Astrophysics Data System (ADS)

    Castillo, Isaac Pérez; Metz, Fernando L.

    2018-03-01

    Wishart random matrices with a sparse or diluted structure are ubiquitous in the processing of large datasets, with applications in physics, biology, and economy. In this work, we develop a theory for the eigenvalue fluctuations of diluted Wishart random matrices based on the replica approach of disordered systems. We derive an analytical expression for the cumulant generating function of the number I_N(x) of eigenvalues smaller than x ∈ ℝ⁺, from which all cumulants of I_N(x) and the rate function Ψ_x(k) controlling its large-deviation probability Prob[I_N(x) = kN] ≍ e^{-N Ψ_x(k)} follow. Explicit results for the mean value and the variance of I_N(x), its rate function, and its third cumulant are discussed and thoroughly compared to numerical diagonalization, showing very good agreement. The present work establishes the theoretical framework put forward in a recent letter [Phys. Rev. Lett. 117, 104101 (2016), 10.1103/PhysRevLett.117.104101] as an exact and compelling approach to deal with eigenvalue fluctuations of sparse random matrices.

  7. Rapidly rotating neutron stars with a massive scalar field—structure and universal relations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doneva, Daniela D.; Yazadjiev, Stoytcho S., E-mail: daniela.doneva@uni-tuebingen.de, E-mail: yazad@phys.uni-sofia.bg

    We construct rapidly rotating neutron star models in scalar-tensor theories with a massive scalar field. The fact that the scalar field has nonzero mass leads to very interesting results, since the allowed range of values of the coupling parameters is significantly broadened. Deviations from pure general relativity can be very large for values of the parameters that are in agreement with the observations. We found that rapid rotation can magnify the differences several times compared to the static case. The universal relations between the normalized moment of inertia and quadrupole moment are also investigated for both the slowly and rapidly rotating cases. The results show that these relations are still EOS independent to a large extent, and the deviations from pure general relativity can be large. This places the massive scalar-tensor theories amongst the few alternative theories of gravity that can be tested via the universal I-Love-Q relations.

  8. WKB theory of large deviations in stochastic populations

    NASA Astrophysics Data System (ADS)

    Assaf, Michael; Meerson, Baruch

    2017-06-01

    Stochasticity can play an important role in the dynamics of biologically relevant populations. These span a broad range of scales: from intra-cellular populations of molecules to population of cells and then to groups of plants, animals and people. Large deviations in stochastic population dynamics—such as those determining population extinction, fixation or switching between different states—are presently in a focus of attention of statistical physicists. We review recent progress in applying different variants of dissipative WKB approximation (after Wentzel, Kramers and Brillouin) to this class of problems. The WKB approximation allows one to evaluate the mean time and/or probability of population extinction, fixation and switches resulting from either intrinsic (demographic) noise, or a combination of the demographic noise and environmental variations, deterministic or random. We mostly cover well-mixed populations, single and multiple, but also briefly consider populations on heterogeneous networks and spatial populations. The spatial setting also allows one to study large fluctuations of the speed of biological invasions. Finally, we briefly discuss possible directions of future work.

  9. Statistics of velocity fluctuations of Geldart A particles in a circulating fluidized bed riser

    DOE PAGES

    Vaidheeswaran, Avinash; Shaffer, Franklin; Gopalan, Balaji

    2017-11-21

    Here, the statistics of fluctuating velocity components are studied in the riser of a closed-loop circulating fluidized bed with fluid catalytic cracking catalyst particles. Our analysis shows distinct similarities as well as deviations compared to existing theories and bench-scale experiments. The study confirms anisotropic and non-Maxwellian distribution of fluctuating velocity components. The velocity distribution functions (VDFs) corresponding to transverse fluctuations exhibit symmetry, and follow a stretched-exponential behavior up to three standard deviations. The form of the transverse VDF is largely determined by interparticle interactions. The tails become more overpopulated with an increase in particle loading. The observed deviations from the Gaussian distribution are represented using the leading order term in the Sonine expansion, which is commonly used to approximate the VDFs in kinetic theory for granular flows. The vertical fluctuating VDFs are asymmetric and the skewness shifts as the wall is approached. In comparison to transverse fluctuations, the vertical VDF is determined by the local hydrodynamics. This is an observation of particle velocity fluctuations in a large-scale system and their quantitative comparison with the Maxwell-Boltzmann statistics.

  10. Comparison Of Methods Used In Cartography For The Skeletonisation Of Areal Objects

    NASA Astrophysics Data System (ADS)

    Szombara, Stanisław

    2015-12-01

    The article presents a method for comparing skeletonisation methods for areal objects. The skeleton of an areal object, being its linear representation, is used, among others, in cartographic visualisation. The method allows any skeletonisation methods to be compared in terms of, on the one hand, the deviations of the distances between the skeleton of the object and its border and, on the other, the distortions introduced by skeletonisation. In the article, five methods were compared: Voronoi diagrams, densified Voronoi diagrams, constrained Delaunay triangulation, the Straight Skeleton, and the Medial Axis (Transform). The results of the comparison were presented on the example of several areal objects. The comparison showed that for all the analysed objects the Medial Axis (Transform) gives the smallest distortion and deviation values, which allows us to recommend it.

  11. A study on the measurement of wrist motion range using the iPhone 4 gyroscope application.

    PubMed

    Kim, Tae Seob; Park, David Dae Hwan; Lee, Young Bae; Han, Dong Gil; Shim, Jeong Su; Lee, Young Jig; Kim, Peter Chan Woo

    2014-08-01

    Measuring the range of motion (ROM) of the wrist is an important physical examination conducted in the Department of Hand Surgery for the purpose of evaluation, diagnosis, prognosis, and treatment of patients. The most common method for performing this task is by using a universal goniometer. This study was performed using 52 healthy participants to compare wrist ROM measurement using a universal goniometer and the iPhone 4 Gyroscope application. Participants did not have previous wrist illnesses and their measured values for wrist motion were compared in each direction. Normal values for wrist ROM are 73 degrees of flexion, 71 degrees of extension, 19 degrees of radial deviation, 33 degrees of ulnar deviation, 140 degrees of supination, and 60 degrees of pronation. The average measurement values obtained using the goniometer were 74.2 (5.1) degrees for flexion, 71.1 (4.9) degrees for extension, 19.7 (3.0) degrees for radial deviation, 34.0 (3.7) degrees for ulnar deviation, 140.8 (5.6) degrees for supination, and 61.1 (4.7) degrees for pronation. The average measurement values obtained using the iPhone 4 Gyroscope application were 73.7 (5.5) degrees for flexion, 70.8 (5.1) degrees for extension, 19.5 (3.0) degrees for radial deviation, 33.7 (3.9) degrees for ulnar deviation, 140.4 (5.7) degrees for supination, and 60.8 (4.9) degrees for pronation. The differences between the measurement values by the Gyroscope application and average value were 0.7 degrees for flexion, -0.2 degrees for extension, 0.5 degrees for radial deviation, 0.7 degrees for ulnar deviation, 0.4 degrees for supination, and 0.8 degrees for pronation. The differences in average value were not statistically significant. The authors introduced a new method of measuring the range of wrist motion using the iPhone 4 Gyroscope application that is simpler to use and can be performed by the patient outside a clinical setting.

  12. Wavelength dependence of position angle in polarization standards

    NASA Astrophysics Data System (ADS)

    Dolan, J. F.; Tapia, S.

    1986-08-01

    Eleven of the 15 stars on Serkowski's (1974) list of "Standard Stars with Large Interstellar Polarization" were investigated to determine whether the orientation of the plane of their linear polarization showed any dependence on wavelength. Nine of the eleven stars exhibited a statistically significant wavelength dependence of position angle when measured with an accuracy of ≈0°.1 standard deviation. For the majority of these stars, the effect is caused primarily by intrinsic polarization. The calibration of polarimeter position angles in a celestial coordinate frame must evidently be done at the 0°.1 level of accuracy by using only carefully selected standard stars or by using other astronomical or laboratory methods.

  13. Wavelength dependence of position angle in polarization standards. [of stellar systems

    NASA Technical Reports Server (NTRS)

    Dolan, J. F.; Tapia, S.

    1986-01-01

    Eleven of the 15 stars on Serkowski's (1974) list of 'Standard Stars with Large Interstellar Polarization' were investigated to determine whether the orientation of the plane of their linear polarization showed any dependence on wavelength. Nine of the eleven stars exhibited a statistically significant wavelength dependence of position angle when measured with an accuracy of about 0.1 deg standard deviation. For the majority of these stars, the effect is caused primarily by intrinsic polarization. The calibration of polarimeter position angles in a celestial coordinate frame must evidently be done at the 0.1 deg level of accuracy by using only carefully selected standard stars or by using other astronomical or laboratory methods.

  14. Turbulence

    NASA Astrophysics Data System (ADS)

    Frisch, Uriel

    1996-01-01

    Written five centuries after the first studies of Leonardo da Vinci and half a century after A.N. Kolmogorov's first attempt to predict the properties of flow, this textbook presents a modern account of turbulence, one of the greatest challenges in physics. "Fully developed turbulence" is ubiquitous in both cosmic and natural environments, in engineering applications and in everyday life. Elementary presentations of dynamical systems ideas, probabilistic methods (including the theory of large deviations) and fractal geometry make this a self-contained textbook. This is the first book on turbulence to use modern ideas from chaos and symmetry breaking. The book will appeal to first-year graduate students in mathematics, physics, astrophysics, geosciences and engineering, as well as professional scientists and engineers.

  15. Correlation Between Analog Noise Measurements and the Expected Bit Error Rate of a Digital Signal Propagating Through Passive Components

    NASA Technical Reports Server (NTRS)

    Warner, Joseph D.; Theofylaktos, Onoufrios

    2012-01-01

    A method of determining the bit error rate (BER) of a digital circuit from the measurement of the analog S-parameters of the circuit has been developed. The method is based on the measurement of the noise and the standard deviation of the noise in the S-parameters. Once the standard deviation and the mean of the S-parameters are known, the BER of the circuit can be calculated using the normal Gaussian function.
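
    A minimal numeric sketch of this calculation under the stated Gaussian-noise model (the S-parameter statistics and their mapping onto a binary decision threshold are hypothetical simplifications):

    ```python
    from math import erfc, sqrt

    # Hypothetical S21 statistics from repeated analog sweeps: mean transmission
    # level of the digital "1" eye relative to the decision threshold, and the
    # standard deviation of the analog noise.
    mean_level = 0.40      # V
    noise_std = 0.05       # V

    # For Gaussian noise and a centered decision threshold, the bit error rate
    # is the tail probability of the normal distribution beyond the threshold.
    q = mean_level / noise_std
    ber = 0.5 * erfc(q / sqrt(2))
    print(f"Q = {q:.1f}, estimated BER = {ber:.3e}")
    ```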

  16. Application of Mean of Absolute Deviation Method for the Selection of Best Nonlinear Component Based on Video Encryption

    NASA Astrophysics Data System (ADS)

    Anees, Amir; Khan, Waqar Ahmad; Gondal, Muhammad Asif; Hussain, Iqtadar

    2013-07-01

    The aim of this work is to make use of the mean of absolute deviation (MAD) method for the evaluation process of substitution boxes used in the advanced encryption standard. In this paper, we use the MAD technique to analyze some popular and prevailing substitution boxes used in encryption processes. In particular, MAD is applied to advanced encryption standard (AES), affine power affine (APA), Gray, Lui J., Residue Prime, S8 AES, SKIPJACK, and Xyi substitution boxes.
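
    A toy sketch of a MAD-style evaluation (a guess at the procedure's shape, not the authors' exact metric): substitute a test frame's bytes through an S-box and compute the mean of absolute deviation between plain and substituted pixel values. The random S-box and synthetic frame are placeholders for AES, APA, Gray, etc., and for real video frames.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Hypothetical 8-bit S-box (a random byte permutation stands in for AES/APA/etc.).
    sbox = rng.permutation(256).astype(np.uint8)

    # A synthetic 64x64 8-bit "video frame"; real evaluations would use frames
    # from the target video sequence.
    frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    encrypted = sbox[frame]                      # byte-wise substitution

    # Mean of absolute deviation between plain and substituted pixel values:
    # larger values indicate the S-box scatters pixel intensities more strongly.
    mad = np.abs(frame.astype(int) - encrypted.astype(int)).mean()
    print(f"MAD of this S-box on the test frame: {mad:.1f} (max possible 255)")
    ```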

  17. Synthesis and characteristics of polyarylene ether sulfones

    NASA Technical Reports Server (NTRS)

    Viswanathan, R.; Johnson, B. C.; Ward, T. C.; Mcgrath, J. E.

    1981-01-01

    A method utilizing potassium carbonate/dimethyl acetamide, as base and solvent respectively, was used for the synthesis of several homopolymers and copolymers derived from various bisphenols. It is demonstrated that this method deviates from simple second order kinetics; this deviation being due to the heterogeneous nature of the reaction. Also, it is shown that a liquid induced crystallization process can improve the solvent resistance of these polymers. Finally, a Monte Carlo simulation of the triad distribution of monomers in nonequilibrium copolycondensation is discussed.

  18. Performance evaluations of continuous glucose monitoring systems: precision absolute relative deviation is part of the assessment.

    PubMed

    Obermaier, Karin; Schmelzeisen-Redeker, Günther; Schoemaker, Michael; Klötzer, Hans-Martin; Kirchsteiger, Harald; Eikmeier, Heino; del Re, Luigi

    2013-07-01

    Even though a Clinical and Laboratory Standards Institute proposal exists on the design of studies and performance criteria for continuous glucose monitoring (CGM) systems, it has not yet led to a consistent evaluation of different systems, as no consensus has been reached on the reference method to evaluate them or on acceptance levels. As a consequence, performance assessment of CGM systems tends to be inconclusive, and a comparison of the outcome of different studies is difficult. Published information and available data (as presented in this issue of Journal of Diabetes Science and Technology by Freckmann and coauthors) are used to assess the suitability of several frequently used methods [International Organization for Standardization, continuous glucose error grid analysis, mean absolute relative deviation (MARD), precision absolute relative deviation (PARD)] when assessing performance of CGM systems in terms of accuracy and precision. The combined use of MARD and PARD seems to allow for better characterization of sensor performance. The use of different quantities for calibration and evaluation, e.g., capillary blood using a blood glucose (BG) meter versus venous blood using a laboratory measurement, introduces an additional error source. Using BG values measured in more or less large intervals as the only reference leads to a significant loss of information in comparison with the continuous sensor signal and possibly to an erroneous estimation of sensor performance during swings. Both can be improved using data from two identical CGM sensors worn by the same patient in parallel. Evaluation of CGM performance studies should follow an identical study design, including sufficient swings in glycemia. At least a part of the study participants should wear two identical CGM sensors in parallel. All data available should be used for evaluation, both by MARD and PARD, a good PARD value being a precondition to trust a good MARD value. Results should be analyzed and presented separately for clinically different categories, e.g., hypoglycemia, exercise, or night and day. © 2013 Diabetes Technology Society.
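
    A minimal sketch of the two quantities on hypothetical paired data (conventions vary slightly across studies; the mean of the two sensors is used here as the PARD denominator):

    ```python
    import numpy as np

    # Hypothetical paired data: two identical CGM sensors worn in parallel plus
    # sparse blood glucose (BG) reference values taken at the same times (mg/dL).
    cgm_a = np.array([102, 118, 140, 165, 151, 128, 110])
    cgm_b = np.array([ 98, 121, 133, 172, 147, 125, 104])
    bg_ref = np.array([100, 115, 138, 170, 150, 124, 108])

    # MARD: mean absolute relative deviation of a sensor against the BG reference.
    mard_a = np.mean(np.abs(cgm_a - bg_ref) / bg_ref)

    # PARD: precision absolute relative deviation between two identical sensors.
    pard = np.mean(np.abs(cgm_a - cgm_b) / ((cgm_a + cgm_b) / 2))

    print(f"MARD (sensor A vs BG): {mard_a:.1%}")
    print(f"PARD (sensor A vs B):  {pard:.1%}")
    ```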

  19. Vertical Structure of Heat and Momentum Transport in the Urban Surface Layer

    NASA Astrophysics Data System (ADS)

    Hrisko, J.; Ramamurthy, P.

    2017-12-01

    Vertical transport of heat and momentum during convective periods is investigated in the urban surface layer using eddy covariance measurements at 5 levels. The Obukhov length is used to divide the dataset into distinct stability regimes: weakly unstable, unstable, and very unstable. Our preliminary analysis indicates critical differences in the transport of heat and momentum as the instability increases. In particular, during periods of increased instability the vertical heat flux deviates from surface layer similarity theory. Further analysis of primary quadrant sweeps and ejections also indicates deviations from the theory, suggesting that ejections dominate heat transport during convective periods but contribute equally with sweeps to momentum transport. The transport efficiencies of momentum at all 5 levels decrease uniformly as the instability increases; in stark contrast, the heat transport efficiencies increase non-linearly with instability. Collectively, these results demonstrate the breakdown of similarity theory during convective periods and reaffirm that revised and improved methods for characterizing heat and momentum transport in urban areas are needed. Such improvements could ultimately advance weather prediction and the estimation of scalar transport for urban areas susceptible to weather hazards and large amounts of pollution.

  20. An efficient approach for improving virtual machine placement in cloud computing environment

    NASA Astrophysics Data System (ADS)

    Ghobaei-Arani, Mostafa; Shamsi, Mahboubeh; Rahmanian, Ali A.

    2017-11-01

    The ever-increasing demand for cloud services requires more data centres. Power consumption in data centres is a challenging problem for cloud computing that has not been considered properly by data centre developers. Large data centres in particular struggle with power costs and greenhouse gas production, so power-efficient mechanisms are necessary to mitigate these effects. Virtual machine (VM) placement can be used as an effective method to reduce power consumption in data centres. In this paper, by grouping both virtual and physical machines and taking the maximum absolute deviation into account during VM placement, both the power consumption and the service level agreement (SLA) violations in data centres are reduced. To this end, the best-fit decreasing algorithm is utilised; in simulation it reduces power consumption by about 5% compared to the modified best-fit decreasing algorithm while improving the SLA violation rate by 6%. Finally, learning automata are used to trade off power consumption reduction against the SLA violation percentage.
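
    A minimal best-fit-decreasing sketch for VM placement on hypothetical one-dimensional CPU demands; the paper's grouping step and maximum-absolute-deviation criterion are omitted.

    ```python
    HOST_CAPACITY = 100.0   # hypothetical normalized CPU units per physical machine

    def best_fit_decreasing(vm_demands, capacity=HOST_CAPACITY):
        """Place VMs (sorted by decreasing demand) on the active host whose
        remaining capacity fits each one most tightly; open a host if none fits."""
        hosts = []                                   # remaining capacity per host
        placement = {}
        for vm, demand in sorted(enumerate(vm_demands), key=lambda kv: -kv[1]):
            fits = [(rem, i) for i, rem in enumerate(hosts) if rem >= demand]
            if fits:
                _, i = min(fits)                     # tightest remaining capacity
            else:
                hosts.append(capacity)               # power on a new host
                i = len(hosts) - 1
            hosts[i] -= demand
            placement[vm] = i
        return placement, hosts

    placement, hosts = best_fit_decreasing([45, 30, 70, 20, 25, 60, 10])
    print(f"{len(hosts)} hosts used; placement (vm -> host): {placement}")
    ```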

  1. Truck driver informational overload, fiscal year 1992. Final report, 1 July 1991-30 September 1992

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacAdam, C.C.

    1992-09-01

    The document represents the final project report for a study entitled 'Truck Driver Informational Overload' sponsored by the Motor Vehicle Manufacturers Association through its Motor Truck Research Committee and associated Operations/Performance Panels. As stated in an initial project statement, the objective of the work was to provide guidance for developing methods for measuring driving characteristics during information processing tasks. The report contains results from two basic project activities: (1) a literature review on multiple-task performance and driver information overload, and (2) a description of driving simulator side-task experiments and a discussion of findings from tests conducted with eight subjects. Two key findings from a set of disturbance-input tests conducted with the simulator and the eight test subjects were that: (1) standard deviations of vehicle lateral position and heading (yaw) angle measurements showed the greatest sensitivity to the presence of side-task activities during basic information processing tasks, and (2) the corresponding standard deviations of driver steering activity, vehicle yaw rate, and lateral acceleration measurements were largely insensitive indicators of side-task activity.

  2. Preliminary Estimation of Deoxynivalenol Excretion through a 24 h Pilot Study

    PubMed Central

    Rodríguez-Carrasco, Yelko; Mañes, Jordi; Berrada, Houda; Font, Guillermina

    2015-01-01

    A duplicate diet study was designed to explore the occurrence of 15 Fusarium mycotoxins in the 24 h diet consumed by one volunteer as well as the levels of mycotoxins in his 24 h collected urine. The methodology involved solvent extraction at high ionic strength followed by dispersive solid phase extraction and determination by gas chromatography coupled to tandem mass spectrometry. Satisfactory method performance was achieved: accuracy was in the range of 68%–108%, with intra-day and inter-day relative standard deviations lower than 12% and 15%, respectively. The limits of quantitation ranged from 0.1 to 8 µg/Kg. The matrix effect was evaluated and matrix-matched calibrations were used for quantitation. Only deoxynivalenol (DON) was quantified in both food and urine samples. A total daily DON intake of 49.2 ± 5.6 µg was found, whereas a daily DON excretion of 35.2 ± 4.3 µg was determined. The DON daily intake represented 68.3% of the established DON provisional maximum tolerable daily intake (PMTDI). Valuable preliminary information was obtained regarding DON excretion; it needs to be confirmed in large-scale monitoring studies. PMID:25723325

  3. Conformational landscape of the HIV-V3 hairpin loop from all-atom free-energy simulations

    NASA Astrophysics Data System (ADS)

    Verma, Abhinav; Wenzel, Wolfgang

    2008-03-01

    Small beta hairpins have many distinct biological functions, including their involvement in chemokine and viral receptor recognition. The relevance of structural similarities between different hairpin loops with near homologous sequences is not yet understood, calling for the development of methods for de novo hairpin structure prediction and simulation. De novo folding of beta strands is more difficult than that of helical proteins because of nonlocal hydrogen bonding patterns that connect amino acids that are distant in the amino acid sequence and there is a large variety of possible hydrogen bond patterns. Here we use a greedy version of the basin hopping technique with our free-energy forcefield PFF02 to reproducibly and predictively fold the hairpin structure of a HIV-V3 loop. We performed 20 independent basin hopping runs for 500 cycles corresponding to 7.4×10⁷ energy evaluations each. The lowest energy structure found in the simulation has a backbone root mean square deviation (bRMSD) of only 2.04 Å to the native conformation. The lowest 9 out of the 20 simulations converged to conformations deviating less than 2.5 Å bRMSD from native.

  4. Conformational landscape of the HIV-V3 hairpin loop from all-atom free-energy simulations.

    PubMed

    Verma, Abhinav; Wenzel, Wolfgang

    2008-03-14

    Small beta hairpins have many distinct biological functions, including their involvement in chemokine and viral receptor recognition. The relevance of structural similarities between different hairpin loops with near homologous sequences is not yet understood, calling for the development of methods for de novo hairpin structure prediction and simulation. De novo folding of beta strands is more difficult than that of helical proteins because of nonlocal hydrogen bonding patterns that connect amino acids that are distant in the amino acid sequence and there is a large variety of possible hydrogen bond patterns. Here we use a greedy version of the basin hopping technique with our free-energy forcefield PFF02 to reproducibly and predictively fold the hairpin structure of a HIV-V3 loop. We performed 20 independent basin hopping runs for 500 cycles corresponding to 7.4×10⁷ energy evaluations each. The lowest energy structure found in the simulation has a backbone root mean square deviation (bRMSD) of only 2.04 Å to the native conformation. The lowest 9 out of the 20 simulations converged to conformations deviating less than 2.5 Å bRMSD from native.

  5. Comparison and combination of "direct" and fragment based local correlation methods: Cluster in molecules and domain based local pair natural orbital perturbation and coupled cluster theories

    NASA Astrophysics Data System (ADS)

    Guo, Yang; Becker, Ute; Neese, Frank

    2018-03-01

    Local correlation theories have been developed in two main flavors: (1) "direct" local correlation methods apply local approximations to the canonical equations and (2) fragment based methods reconstruct the correlation energy from a series of smaller calculations on subsystems. The present work serves two purposes. First, we investigate the relative efficiencies of the two approaches using the domain-based local pair natural orbital (DLPNO) approach as the "direct" method and the cluster-in-molecule (CIM) approach as the fragment based method. Both approaches are applied in conjunction with second-order many-body perturbation theory (MP2) as well as coupled-cluster theory with single, double, and perturbative triple excitations [CCSD(T)]. Second, we have investigated the possible merits of combining the two approaches by performing CIM calculations with DLPNO methods serving as the method of choice for the subsystem calculations. Our cluster-in-molecule approach is closely related to, but deviates slightly from, approaches in the literature, since we have avoided real-space cutoffs. Moreover, the distant pair correlations neglected in the previous CIM approach are treated approximately. Six very large molecules (503-2380 atoms) were studied. At both the MP2 and CCSD(T) levels of theory, the CIM and DLPNO methods show similar efficiency; however, DLPNO methods are more accurate for 3-dimensional systems. While we have found only little incentive for combining CIM with DLPNO-MP2, the situation is different for CIM-DLPNO-CCSD(T). This combination is attractive because (1) CIM offers better parallelization opportunities; (2) the methodology is less memory intensive than the genuine DLPNO-CCSD(T) method and hence allows for large calculations on more modest hardware; and (3) the methodology is applicable and efficient in the frequently met cases where the largest subsystem calculation is too large for the canonical CCSD(T) method.

  6. Geometric Verification of Dynamic Wave Arc Delivery With the Vero System Using Orthogonal X-ray Fluoroscopic Imaging.

    PubMed

    Burghelea, Manuela; Verellen, Dirk; Poels, Kenneth; Gevaert, Thierry; Depuydt, Tom; Tournel, Koen; Hung, Cecilia; Simon, Viorica; Hiraoka, Masahiro; de Ridder, Mark

    2015-07-15

    The purpose of this study was to define an independent verification method based on on-board orthogonal fluoroscopy to determine the geometric accuracy of synchronized gantry-ring (G/R) rotations during dynamic wave arc (DWA) delivery available on the Vero system. A verification method for DWA was developed to calculate O-ring-gantry (G/R) positional information from ball-bearing positions retrieved from fluoroscopic images of a cubic phantom acquired during DWA delivery. Different noncoplanar trajectories were generated in order to investigate the influence of path complexity on delivery accuracy. The G/R positions detected from the fluoroscopy images (DetPositions) were benchmarked against the G/R angulations retrieved from the control points (CP) of the DWA RT plan and the DWA log files recorded by the treatment console during DWA delivery (LogActed). The G/R rotational accuracy was quantified as the mean absolute deviation ± standard deviation. The maximum G/R absolute deviation was calculated as the maximum 3-dimensional distance between the CP and the closest DetPositions. In the CP versus DetPositions comparison, an overall mean G/R deviation of 0.13°/0.16° ± 0.16°/0.16° was obtained, with a maximum G/R deviation of 0.6°/0.2°. For the LogActed versus DetPositions evaluation, the overall mean deviation was 0.08°/0.15° ± 0.10°/0.10° with a maximum G/R of 0.3°/0.4°. The largest decoupled deviations registered for gantry and ring were 0.6° and 0.4°, respectively. No directional dependence was observed between clockwise and counterclockwise rotations. Doubling the dose resulted in twice the number of detected points around each CP and a reduction in angular deviation in all cases. An independent geometric quality assurance approach was developed for DWA delivery verification and was successfully applied on diverse trajectories. Results showed that the Vero system is capable of following complex G/R trajectories with maximum deviations during DWA below 0.6°. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. A Quantitative Evaluation of the Flipped Classroom in a Large Lecture Principles of Economics Course

    ERIC Educational Resources Information Center

    Balaban, Rita A.; Gilleskie, Donna B.; Tran, Uyen

    2016-01-01

    This research provides evidence that the flipped classroom instructional format increases student final exam performance, relative to the traditional instructional format, in a large lecture principles of economics course. The authors find that the flipped classroom directly improves performance by 0.2 to 0.7 standard deviations, depending on…

  8. One-side forward-backward asymmetry in top quark pair production at the CERN Large Hadron Collider

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Youkai; Xiao Bo; Zhu Shouhua

    2010-11-01

    Both D0 and CDF at Tevatron reported the measurements of forward-backward asymmetry in top pair production, which showed possible deviation from the standard model QCD prediction. In this paper, we explore how to examine the same higher-order QCD effects at the more powerful Large Hadron Collider.

  9. Reconstruction and analysis of a deciduous sapling using digital photographs or terrestrial-LiDAR technology.

    PubMed

    Delagrange, Sylvain; Rochon, Pascal

    2011-10-01

    To meet the increasing need for rapid and non-destructive extraction of canopy traits, two methods were used and compared with regard to their accuracy in estimating 2-D and 3-D parameters of a hybrid poplar sapling. The first method consisted of the analysis of high definition photographs in Tree Analyser (TA) software (PIAF-INRA/Kasetsart University). TA allowed the extraction of individual traits using a space carving approach. The second method utilized 3-D point clouds acquired from terrestrial light detection and ranging (T-LiDAR) scans. T-LiDAR scans were performed on trees without leaves to reconstruct the lignified structure of the sapling. From this skeleton, foliage was added using simple modelling rules extrapolated from field measurements. Validation of the estimated dimension and the accuracy of reconstruction was then achieved by comparison with an empirical data set. TA was found to be slightly less precise than T-LiDAR for estimating tree height, canopy height and mean canopy diameter, but for 2-D traits both methods were, however, fully satisfactory. TA tended to over-estimate total leaf area (error up to 50%), but better estimates were obtained by reducing the size of the voxels used for calculations. In contrast, T-LiDAR estimated total leaf area with an error of <6%. Finally, both methods led to an over-estimation of canopy volume. With respect to this trait, T-LiDAR (14.5% deviation) greatly surpassed the accuracy of TA (up to 50% deviation), even if the voxels used were reduced in size. Taking into account their magnitude of data acquisition and analysis and their accuracy in trait estimations, both methods showed contrasting potential future uses. Specifically, T-LiDAR is a particularly promising tool for investigating the development of large perennial plants, by itself or in association with plant modelling.

  10. Luminosity distance in ``Swiss cheese'' cosmology with randomized voids. II. Magnification probability distributions

    NASA Astrophysics Data System (ADS)

    Flanagan, Éanna É.; Kumar, Naresh; Wasserman, Ira; Vanderveld, R. Ali

    2012-01-01

    We study the fluctuations in luminosity distances due to gravitational lensing by large scale (≳ 35 Mpc) structures, specifically voids and sheets. We use a simplified "Swiss cheese" model consisting of a ΛCDM Friedman-Robertson-Walker background in which a number of randomly distributed nonoverlapping spherical regions are replaced by mass-compensating comoving voids, each with a uniform density interior and a thin shell of matter on the surface. We compute the distribution of magnitude shifts using a variant of the method of Holz and Wald, which includes the effect of lensing shear. The standard deviation of this distribution is ~0.027 magnitudes and the mean is ~0.003 magnitudes for voids of radius 35 Mpc, sources at redshift z_s = 1.0, with the voids chosen so that 90% of the mass is on the shell today. The standard deviation varies from 0.005 to 0.06 magnitudes as we vary the void size, source redshift, and fraction of mass on the shells today. If the shell walls are given a finite thickness of ~1 Mpc, the standard deviation is reduced to ~0.013 magnitudes. This standard deviation due to voids is a factor ~3 smaller than that due to galaxy scale structures. We summarize our results in terms of a fitting formula that is accurate to ~20%, and also build a simplified analytic model that reproduces our results to within ~30%. Our model also allows us to explore the domain of validity of weak-lensing theory for voids. We find that for 35 Mpc voids, corrections to the dispersion due to lens-lens coupling are of order ~4%, and corrections due to shear are ~3%. Finally, we estimate the bias due to source-lens clustering in our model to be negligible.

  11. Uncertainty quantification of CO₂ saturation estimated from electrical resistance tomography data at the Cranfield site

    DOE PAGES

    Yang, Xianjin; Chen, Xiao; Carrigan, Charles R.; ...

    2014-06-03

    A parametric bootstrap approach is presented for uncertainty quantification (UQ) of CO₂ saturation derived from electrical resistance tomography (ERT) data collected at the Cranfield, Mississippi (USA) carbon sequestration site. There are many sources of uncertainty in ERT-derived CO₂ saturation, but we focus on how the ERT observation errors propagate to the estimated CO₂ saturation in a nonlinear inversion process. Our UQ approach consists of three steps. We first estimated the observational errors from a large number of reciprocal ERT measurements. The second step was to invert the pre-injection baseline data; the resulting resistivity tomograph was used as the prior information for nonlinear inversion of the time-lapse data. We assigned a 3% random noise to the baseline model. Finally, we used a parametric bootstrap method to obtain bootstrap CO₂ saturation samples by deterministically solving a nonlinear inverse problem many times with resampled data and resampled baseline models. The mean and standard deviation of CO₂ saturation were then calculated from the bootstrap samples. We found that the maximum standard deviation of CO₂ saturation was around 6% with a corresponding maximum saturation of 30% for a data set collected 100 days after injection began. There was no apparent spatial correlation between the mean and standard deviation of CO₂ saturation, but the standard deviation values increased with time as the saturation increased. The uncertainty in CO₂ saturation also depends on the ERT reciprocal error threshold used to identify and remove noisy data, and on inversion constraints such as temporal roughness. Five hundred realizations requiring 3.5 h on a single 12-core node were needed for the nonlinear Monte Carlo inversion to arrive at stationary variances, while the Markov Chain Monte Carlo (MCMC) stochastic inverse approach may expend days for a global search. This indicates that UQ of 2D or 3D ERT inverse problems can be performed on a laptop or desktop PC.
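
    The sketch below reproduces the parametric-bootstrap pattern on a toy one-parameter nonlinear inversion (the forward model, noise level, and sample counts are stand-ins for the ERT problem): resample data from the estimated error model, re-run the same deterministic inversion, and report the mean and standard deviation of the estimates.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Toy stand-in for the nonlinear ERT inversion: estimate one "saturation"
    # parameter s from noisy data d = g(s) + noise, with g a nonlinear forward model.
    def g(s):
        return np.array([s**2, np.exp(s), 3.0 * s])

    s_true = 0.30
    obs_err = 0.02                       # observational error, e.g. from reciprocal data
    data = g(s_true) + rng.normal(0, obs_err, 3)

    def invert(d):
        """Deterministic least-squares inversion by brute-force 1D search."""
        grid = np.linspace(0.0, 1.0, 1001)
        misfit = ((np.array([g(s) for s in grid]) - d) ** 2).sum(axis=1)
        return grid[int(np.argmin(misfit))]

    # Parametric bootstrap: resample the data from the estimated error model
    # and re-run the same deterministic inversion many times.
    estimates = np.array([invert(data + rng.normal(0, obs_err, 3))
                          for _ in range(300)])
    print(f"saturation = {estimates.mean():.3f} +/- {estimates.std(ddof=1):.3f}")
    ```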

  12. SU-F-T-564: 3-Year Experience of Treatment Plan Quality Assurance for Vero SBRT Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Su, Z; Li, Z; Mamalui, M

    2016-06-15

    Purpose: To verify treatment plan monitor units from the iPlan treatment planning system for Vero Stereotactic Body Radiotherapy (SBRT) treatment using both software-based and (homogeneous and heterogeneous) phantom-based approaches. Methods: Dynamic conformal arcs (DCA) were used for SBRT treatment of oligometastasis patients using the Vero linear accelerator. For each plan, the Monte Carlo calculated treatment plan MU (prescribed dose to water with 1% variance) is first verified by RadCalc software with a 3% difference threshold. Beyond 3% difference, treatment plans were copied onto a homogeneous Scanditronix phantom for non-lung patients or onto a heterogeneous CIRS phantom for lung patients, and the corresponding plan dose was measured using a cc01 ion chamber. The difference between the planned and measured dose was recorded. For the past 3 years, we have treated 180 patients with 315 targets. Of these, the RadCalc calculation exceeded the 3% threshold for 99 targets; phantom-based measurements were performed for 26 plans using the Scanditronix phantom and 73 plans using the CIRS phantom. Means and standard deviations of the dose differences were obtained and presented. Results: For all patient RadCalc calculations, the mean dose difference is 0.76% with a standard deviation of 5.97%. For non-lung patient plans measured in the Scanditronix phantom, the mean dose difference is 0.54% with a standard deviation of 2.53%; for lung patient plans measured in the CIRS phantom, the mean dose difference is −0.04% with a standard deviation of 1.09%. The maximum dose difference is 3.47% for the Scanditronix phantom measurements and 3.08% for the CIRS phantom measurements. Conclusion: Limitations in the secondary MU check software lead to perceived large dose discrepancies for some lung patient SBRT treatment plans. Homogeneous and heterogeneous phantoms were used in plan quality assurance for non-lung and lung patients, respectively. Phantom-based QA showed relatively good agreement between the iPlan calculated dose and the measured dose.

  13. WE-H-BRC-05: Catastrophic Error Metrics for Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, S; Molloy, J

    Purpose: Intuitive evaluation of complex radiotherapy treatments is impractical, while data transfer anomalies create the potential for catastrophic treatment delivery errors. Contrary to prevailing wisdom, logical scrutiny can be applied to patient-specific machine settings. Such tests can be automated, applied at the point of treatment delivery, and dissociated from prior states of the treatment plan, potentially revealing errors introduced early in the process. Methods: Analytical metrics were formulated for conventional and intensity modulated RT (IMRT) treatments. These were designed to assess consistency between monitor unit settings, wedge values, prescription dose, and leaf positioning (IMRT). Institutional metric averages for 218 clinical plans were stratified over multiple anatomical sites. Treatment delivery errors were simulated using a commercial treatment planning system and metric behavior was assessed via receiver-operator-characteristic (ROC) analysis. A positive result was returned if the erred plan metric value exceeded a given number of standard deviations, e.g. 2. The finding was declared true positive if the dosimetric impact exceeded 25%. ROC curves were generated over a range of metric standard deviations. Results: Data for the conventional treatment metric indicated standard deviations of 3%, 12%, 11%, 8%, and 5% for brain, pelvis, abdomen, lung, and breast sites, respectively. Optimum error declaration thresholds yielded true positive rates (TPR) between 0.7 and 1, and false positive rates (FPR) between 0 and 0.2. The two proposed IMRT metrics possessed standard deviations of 23% and 37%. The superior metric returned TPR and FPR of 0.7 and 0.2, respectively, when both leaf position and MU errors were modelled. Isolation to only leaf position errors yielded TPR and FPR values of 0.9 and 0.1. Conclusion: Logical tests can reveal treatment delivery errors and prevent large, catastrophic errors. Analytical metrics are able to identify errors in monitor units, wedging, and leaf positions with favorable sensitivity and specificity. Supported in part by Varian.

  14. Insect thermotolerance comparing host infestation methods: Anastrepha ludens (Diptera: Tephritidae) reared in grapefruit or diet

    USDA-ARS?s Scientific Manuscript database

    Research on insect control should be conducted in a manner that mimics as closely as is feasible its commercial application in all of its practicably conceivable forms. When significant deviations from commercial application are used in research the effect of the deviations on efficacy should be eva...

  15. Nonlinear Elastic Effects on the Energy Flux Deviation of Ultrasonic Waves in GR/EP Composites

    NASA Technical Reports Server (NTRS)

    Prosser, William H.; Kriz, R. D.; Fitting, Dale W.

    1992-01-01

    In isotropic materials, the direction of the energy flux (energy per unit time per unit area) of an ultrasonic plane wave is always along the same direction as the normal to the wave front. In anisotropic materials, however, this is true only along symmetry directions. Along other directions, the energy flux of the wave deviates from the intended direction of propagation. This phenomenon is known as energy flux deviation. The direction of the energy flux is dependent on the elastic coefficients of the material. This effect has been demonstrated in many anisotropic crystalline materials; in transparent quartz crystals, Schlieren photographs have been obtained which allow visualization of the ultrasonic waves and the energy flux deviation. The energy flux deviation in graphite/epoxy (gr/ep) composite materials can be quite large because of their high anisotropy. The flux deviation angle has been calculated for unidirectional gr/ep composites as a function of both fiber orientation and fiber volume content, and experimental measurements have also been made in unidirectional composites. It has further been demonstrated that changes in composite materials which alter the elastic properties, such as moisture absorption by the matrix or fiber degradation, can be detected nondestructively by measurements of the energy flux shift. In this research, the effects of nonlinear elasticity on energy flux deviation in unidirectional gr/ep composites were studied. Because of elastic nonlinearity, the angle of the energy flux deviation was shown to be a function of applied stress. This shift in flux deviation was modeled using acoustoelastic theory and the previously measured second- and third-order elastic stiffness coefficients for T300/5208 gr/ep. Two conditions of applied uniaxial stress were considered: in the first case, the direction of applied uniaxial stress was along the fiber axis (x3), while in the second it was perpendicular to the fiber axis, along the laminate stacking direction (x1).

  16. Phylogenetic rooting using minimal ancestor deviation.

    PubMed

    Tria, Fernando Domingues Kümmel; Landan, Giddy; Dagan, Tal

    2017-06-19

    Ancestor-descendent relations play a cardinal role in evolutionary theory. Those relations are determined by rooting phylogenetic trees. Existing rooting methods are hampered by evolutionary rate heterogeneity or the unavailability of auxiliary phylogenetic information. Here we present a rooting approach, the minimal ancestor deviation (MAD) method, which accommodates heterotachy by using all pairwise topological and metric information in unrooted trees. We demonstrate the performance of the method, in comparison to existing rooting methods, by the analysis of phylogenies from eukaryotes and prokaryotes. MAD correctly recovers the known root of eukaryotes and uncovers evidence for the origin of cyanobacteria in the ocean. MAD is more robust and consistent than existing methods, provides measures of the root inference quality and is applicable to any tree with branch lengths.

  17. Wavelength selection method with standard deviation: application to pulse oximetry.

    PubMed

    Vazquez-Jaccaud, Camille; Paez, Gonzalo; Strojnik, Marija

    2011-07-01

    Near-infrared spectroscopy provides useful biological information after the radiation has penetrated the tissue within the therapeutic window. One significant shortcoming of current applications of spectroscopic techniques to a live subject is that the subject may be uncooperative and the sample undergoes significant temporal variations due to his health status which, from a radiometric point of view, introduce measurement noise. We describe a novel wavelength selection method for monitoring, based on a standard deviation map, that provides low sensitivity to noise. It may be used with spectral transillumination, transmission, or reflection signals, including those corrupted by noise and unavoidable temporal effects. We apply it to the selection of two wavelengths for the case of pulse oximetry. Using spectroscopic data, we generate a map of standard deviation that we propose as a figure of merit in the presence of the noise introduced by the living subject. Even in the presence of diverse sources of noise, we identify four wavelength domains with standard deviation minimally sensitive to temporal noise, and two wavelength domains with low sensitivity to temporal noise.
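
    A one-dimensional sketch of the standard-deviation figure of merit (the paper builds a map; the synthetic spectra and noise model here are assumptions): compute the per-wavelength standard deviation over repeated measurements and select the wavelengths least sensitive to temporal noise.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Hypothetical transillumination spectra: repeated measurements over time
    # (rows) at many wavelengths (columns), with subject-induced temporal noise
    # that is stronger in some spectral regions than in others.
    wavelengths = np.linspace(600, 1000, 201)               # nm
    base = 1.0 + 0.2 * np.sin(wavelengths / 50.0)
    temporal_noise = 0.002 + 0.02 * np.exp(-((wavelengths - 950) / 30) ** 2)
    spectra = base + rng.normal(0, 1, (100, 201)) * temporal_noise

    # The standard-deviation figure of merit: per-wavelength std over the time
    # series; low values mark wavelengths robust to temporal noise.
    std_map = spectra.std(axis=0, ddof=1)
    best = wavelengths[np.argsort(std_map)[:2]]
    print(f"two least noise-sensitive wavelengths: {np.sort(best)} nm")
    ```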

  18. A novel method to replicate the kinematics of the carpus using a six degree-of-freedom robot.

    PubMed

    Fraysse, François; Costi, John J; Stanley, Richard M; Ding, Boyin; McGuire, Duncan; Eng, Kevin; Bain, Gregory I; Thewlis, Dominic

    2014-03-21

    Understanding the kinematics of the carpus is essential to the understanding and treatment of wrist pathologies. However, many of the previous techniques presented are limited by non-functional motion or the interpolation of points from static images at different postures. We present a method that has the capability of replicating the kinematics of the wrist during activities of daily living using a unique mechanical testing system. To quantify the kinematics of the carpal bones, we used bone pin-mounted markers and optical motion capture methods. In this paper, we present a hammering motion as an example of an activity of daily living. However, the method can be applied to a wide variety of movements. Our method showed good accuracy (1.0-2.6°) of in vivo movement reproduction in our ex vivo model. Most carpal motion during wrist flexion-extension occurs at the radiocarpal level while in ulnar deviation the motion is more equally shared between radiocarpal and midcarpal joints, and in radial deviation the motion happens mainly at the midcarpal joint. For all rotations, there was more rotation of the midcarpal row relative to the lunate than relative to the scaphoid or triquetrum. For the functional motion studied (hammering), there was more midcarpal motion in wrist extension compared to pure wrist extension while radioulnar deviation patterns were similar to those observed in pure wrist radioulnar deviation. Finally, it was found that for the amplitudes studied the amount of carpal rotations was proportional to global wrist rotations. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. THE AXISYMMETRIC FREE-CONVECTION HEAT TRANSFER ALONG A VERTICAL THIN CYLINDER WITH CONSTANT SURFACE TEMPERATURE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Viskanta, R.

    1963-01-01

    Laminar free-convection flow produced by a heated, vertical, circular cylinder for which the temperature at the outer surface of the cylinder is assumed to be uniform is analyzed. The solution of the boundary-layer equations was obtained by the perturbation method of Sparrow and Gregg, which is valid only for small values of the axial distance parameter ξ, and by the integral method of Hama et al. for large values of the parameter ξ. Heat-transfer results were calculated for a range of Prandtl numbers (Pr) up to 100; the Nusselt numbers (Nu) for the cylinder were higher than those for the flat plate, and this difference increased as Pr decreased. It was also found that the perturbation method of solution of the free-convection boundary-layer equations becomes useless for small values of Pr because of the slow convergence of the series. The results obtained by the integral method were in good agreement with those calculated by the perturbation method for Pr approximately 1 and 0.1 < ξ < 1 only; they deviated considerably for smaller values of ξ. (auth)

  20. Stochastic analysis of uncertain thermal parameters for random thermal regime of frozen soil around a single freezing pipe

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Zhou, Guoqing; Wang, Jianzhou; Zhou, Lei

    2018-03-01

    The artificial ground freezing (AGF) method is widely used in civil and mining engineering, and the thermal regime of frozen soil around the freezing pipe affects the safety of design and construction. The thermal parameters can be truly random due to the heterogeneity of soil properties, which leads to randomness in the thermal regime of frozen soil around the freezing pipe. The purpose of this paper is to study the one-dimensional (1D) random thermal regime problem on the basis of a stochastic analysis model and the Monte Carlo (MC) method. Modeling the uncertain thermal parameters of frozen soil as random variables, stochastic processes, and random fields, the corresponding stochastic thermal regimes of frozen soil around a single freezing pipe are obtained and analyzed. By taking the variability of each stochastic parameter into account individually, the influence of each stochastic thermal parameter on the stochastic thermal regime is investigated. The results show that the mean temperatures of frozen soil around the single freezing pipe are the same under the three representations, while the standard deviations differ. The distributions of standard deviation differ greatly with radial coordinate, and the larger standard deviations occur mainly in the phase-change area. The temperatures computed with the random variable and stochastic process methods differ considerably from the measured data, while those computed with the random field method agree well with the measurements. Each uncertain thermal parameter has a different effect on the standard deviation of the frozen soil temperature around the single freezing pipe. These results can provide a theoretical basis for the design and construction of AGF.
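
    A minimal sketch of the random-variable case, assuming a transient line-sink solution for the temperature around a single freezing pipe (the extraction rate, conductivity statistics, diffusivity, and initial temperature below are illustrative assumptions, not the paper's parameters): Monte Carlo samples of the conductivity are pushed through the solution, and the mean and standard deviation of the temperature at one location are reported.

```python
import numpy as np
from scipy.special import exp1

# Propagate an uncertain soil conductivity through the classical transient
# line-sink solution T = T0 + q/(4 pi k) * E1(r^2 / (4 alpha t)).
rng = np.random.default_rng(1)
n_mc = 10_000
q = -60.0                              # W/m, heat extracted per unit pipe length
k = rng.normal(1.8, 0.2, n_mc)         # W/(m K), random soil conductivity
alpha, r, t, T0 = 7e-7, 0.5, 30 * 86400.0, 8.0   # m^2/s, m, s, degC

T = T0 + q / (4 * np.pi * k) * exp1(r**2 / (4 * alpha * t))
print(f"T(r=0.5 m, 30 d): mean {T.mean():.2f} degC, std {T.std():.2f} degC")
```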

  1. MO-FG-CAMPUS-TeP1-01: An Efficient Method of 3D Patient Dose Reconstruction Based On EPID Measurements for Pre-Treatment Patient Specific QA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David, R; Lee, C; Calvary Mater Newcastle, Newcastle

    Purpose: To demonstrate an efficient and clinically relevant patient specific QA method by reconstructing 3D patient dose from 2D EPID images for IMRT plans. Also to determine the usefulness of 2D QA metrics when assessing 3D patient dose deviations. Methods: Using the method developed by King et al (Med Phys 39(5), 2839–2847), EPID images of IMRT fields were acquired in air and converted to dose at 10 cm depth (SAD setup) in a flat virtual water phantom. Each EPID measured dose map was then divided by the corresponding treatment planning system (TPS) dose map calculated with an identical setup, to derive a 2D “error matrix”. For each field, the error matrix was used to adjust the doses along the respective ray lines in the original patient 3D dose. All field doses were combined to derive a reconstructed 3D patient dose for quantitative analysis. A software tool was developed to efficiently implement the entire process and was tested with a variety of IMRT plans for 2D (virtual flat phantom) and 3D (in-patient) QA analysis. Results: The method was tested on 60 IMRT plans. The mean (± standard deviation) 2D gamma (2%,2mm) pass rate (2D-GPR) was 97.4±3.0% and the mean 2D gamma index (2D-GI) was 0.35±0.06. The 3D PTV mean dose deviation was 1.8±0.8%. The analysis showed very weak correlations between both the 2D-GPR and 2D-GI when compared with PTV mean dose deviations (R2=0.3561 and 0.3632 respectively). Conclusion: Our method efficiently calculates 3D patient dose from 2D EPID images, utilising all of the advantages of an EPID-based dosimetry system. In this study, the 2D QA metrics did not predict the 3D patient dose deviation. This tool allows reporting of the 3D volumetric dose parameters thus providing more clinically relevant patient specific QA.
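
    A toy sketch of the ray-line adjustment step under an idealized parallel-beam geometry (each ray is a column through the volume; all arrays below are synthetic stand-ins for the EPID, TPS plane, and TPS patient doses): the EPID/TPS dose ratio in the phantom plane forms the error matrix, which then rescales the TPS patient dose along each ray.

```python
import numpy as np

# Synthetic per-field dose maps; the real method works in each field's own
# beam geometry rather than a z-aligned parallel beam.
ny, nx, nz = 64, 64, 40
rng = np.random.default_rng(0)
tps_dose_2d = np.ones((ny, nx))                       # TPS dose at 10 cm depth
epid_dose = tps_dose_2d * (1 + 0.02 * rng.standard_normal((ny, nx)))
tps_dose_3d = np.ones((nz, ny, nx))                   # per-field patient dose

error_matrix = np.divide(epid_dose, tps_dose_2d,
                         out=np.ones_like(epid_dose), where=tps_dose_2d > 0)
reconstructed_3d = tps_dose_3d * error_matrix[None, :, :]  # scale along each ray
print(f"max ray adjustment: {np.abs(error_matrix - 1).max():.3f}")
```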

  2. Implementation and Validation of an Impedance Eduction Technique

    NASA Technical Reports Server (NTRS)

    Watson, Willie R.; Jones, Michael G.; Gerhold, Carl H.

    2011-01-01

    Implementation of a pressure gradient method of impedance eduction in two NASA Langley flow ducts is described. The Grazing Flow Impedance Tube only supports plane-wave sources, while the Curved Duct Test Rig supports sources that contain higher-order modes. Multiple exercises are used to validate this new impedance eduction method. First, synthesized data for a hard wall insert and a conventional liner mounted in the Grazing Flow Impedance Tube are used as input to the two impedance eduction methods, the pressure gradient method and a previously validated wall pressure method. Comparisons between the two results are excellent. Next, data measured in the Grazing Flow Impedance Tube are used as input to both methods. Results from the two methods compare quite favorably for sufficiently low Mach numbers but this comparison degrades at Mach 0.5, especially when the hard wall insert is used. Finally, data measured with a hard wall insert mounted in the Curved Duct Test Rig are used as input to the pressure gradient method. Significant deviation from the known solution is observed, which is believed to be largely due to 3-D effects in this flow duct. Potential solutions to this issue are currently being explored.

  3. Variability of pesticide detections and concentrations in field replicate water samples collected for the National Water-Quality Assessment Program, 1992-97

    USGS Publications Warehouse

    Martin, Jeffrey D.

    2002-01-01

    Correlation analysis indicates that for most pesticides and concentrations, pooled estimates of relative standard deviation rather than pooled estimates of standard deviation should be used to estimate variability because pooled estimates of relative standard deviation are less affected by heteroscedasticity. The median pooled relative standard deviation was calculated for all pesticides to summarize the typical variability of pesticide data collected for the NAWQA Program. The median pooled relative standard deviation was 15 percent at concentrations less than 0.01 micrograms per liter (µg/L), 13 percent at concentrations near 0.01 µg/L, 12 percent at concentrations near 0.1 µg/L, 7.9 percent at concentrations near 1 µg/L, and 2.7 percent at concentrations greater than 5 µg/L. Pooled estimates of standard deviation or relative standard deviation presented in this report are larger than estimates based on averages, medians, smooths, or regression of the individual measurements of standard deviation or relative standard deviation from field replicates. Pooled estimates, however, are the preferred method for characterizing variability because they provide unbiased estimates of the variability of the population. Assessments of variability based on standard deviation (rather than variance) underestimate the true variability of the population. Because pooled estimates of variability are larger than estimates based on other approaches, users of estimates of variability must be cognizant of the approach used to obtain the estimate and must use caution in the comparison of estimates based on different approaches.
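
    A minimal sketch of a pooled relative standard deviation computed from replicate groups, using the standard degrees-of-freedom-weighted pooling (the concentration values are made up for illustration):

```python
import numpy as np

def pooled_rsd(replicate_sets):
    """Pool relative standard deviations across groups of field replicates."""
    num, dof = 0.0, 0
    for reps in replicate_sets:
        reps = np.asarray(reps, float)
        if len(reps) < 2 or reps.mean() == 0:
            continue                      # need at least a pair to estimate spread
        rsd = reps.std(ddof=1) / reps.mean()
        num += (len(reps) - 1) * rsd**2
        dof += len(reps) - 1
    return np.sqrt(num / dof)

# three field-replicate pairs near 0.1 ug/L
print(f"{100 * pooled_rsd([[0.11, 0.09], [0.10, 0.12], [0.08, 0.10]]):.0f}%")
```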

  4. Approximate median regression for complex survey data with skewed response.

    PubMed

    Fraser, Raphael André; Lipsitz, Stuart R; Sinha, Debajyoti; Fitzmaurice, Garrett M; Pan, Yi

    2016-12-01

    The ready availability of public-use data from various large national complex surveys has immense potential for the assessment of population characteristics using regression models. Complex surveys can be used to identify risk factors for important diseases such as cancer. Existing statistical methods based on estimating equations and/or utilizing resampling methods are often not valid with survey data due to complex survey design features, that is, stratification, multistage sampling, and weighting. In this article, we accommodate these design features in the analysis of highly skewed response variables arising from large complex surveys. Specifically, we propose a double-transform-both-sides (DTBS)-based estimating equations approach to estimate the median regression parameters of the highly skewed response; the DTBS approach applies the same Box-Cox type transformation twice to both the outcome and regression function. The usual sandwich variance estimate can be used in our approach, whereas a resampling approach would be needed for a pseudo-likelihood based on minimizing absolute deviations (MAD). Furthermore, the approach is relatively robust to the true underlying distribution, and has much smaller mean square error than a MAD approach. The method is motivated by an analysis of laboratory data on urinary iodine (UI) concentration from the National Health and Nutrition Examination Survey. © 2016, The International Biometric Society.

  5. Approximate Median Regression for Complex Survey Data with Skewed Response

    PubMed Central

    Fraser, Raphael André; Lipsitz, Stuart R.; Sinha, Debajyoti; Fitzmaurice, Garrett M.; Pan, Yi

    2016-01-01

    Summary The ready availability of public-use data from various large national complex surveys has immense potential for the assessment of population characteristics using regression models. Complex surveys can be used to identify risk factors for important diseases such as cancer. Existing statistical methods based on estimating equations and/or utilizing resampling methods are often not valid with survey data due to complex survey design features, that is, stratification, multistage sampling, and weighting. In this paper, we accommodate these design features in the analysis of highly skewed response variables arising from large complex surveys. Specifically, we propose a double-transform-both-sides (DTBS)-based estimating equations approach to estimate the median regression parameters of the highly skewed response; the DTBS approach applies the same Box-Cox type transformation twice to both the outcome and regression function. The usual sandwich variance estimate can be used in our approach, whereas a resampling approach would be needed for a pseudo-likelihood based on minimizing absolute deviations (MAD). Furthermore, the approach is relatively robust to the true underlying distribution, and has much smaller mean square error than a MAD approach. The method is motivated by an analysis of laboratory data on urinary iodine (UI) concentration from the National Health and Nutrition Examination Survey. PMID:27062562

  6. Prior robust empirical Bayes inference for large-scale data by conditioning on rank with application to microarray data

    PubMed Central

    Liao, J. G.; Mcmurry, Timothy; Berg, Arthur

    2014-01-01

    Empirical Bayes methods have been extensively used for microarray data analysis by modeling the large number of unknown parameters as random effects. Empirical Bayes allows borrowing information across genes and can automatically adjust for multiple testing and selection bias. However, the standard empirical Bayes model can perform poorly if the assumed working prior deviates from the true prior. This paper proposes a new rank-conditioned inference in which the shrinkage and confidence intervals are based on the distribution of the error conditioned on the rank of the data. Our approach is in contrast to a Bayesian posterior, which conditions on the data themselves. The new method is almost as efficient as standard Bayesian methods when the working prior is close to the true prior, and it is much more robust when the working prior is not close. In addition, it allows a more accurate (but also more complex) non-parametric estimate of the prior to be easily incorporated, resulting in improved inference. The new method’s prior robustness is demonstrated via simulation experiments. Application to a breast cancer gene expression microarray dataset is presented. Our R package rank.Shrinkage provides a ready-to-use implementation of the proposed methodology. PMID:23934072

  7. Poster - Thur Eve - 68: Evaluation and analytical comparison of different 2D and 3D treatment planning systems using dosimetry in anthropomorphic phantom.

    PubMed

    Khosravi, H R; Nodehi, Mr Golrokh; Asnaashari, Kh; Mahdavi, S R; Shirazi, A R; Gholami, S

    2012-07-01

    The aim of this study was to evaluate and analytically compare the different calculation algorithms applied in our country's radiotherapy centers, based on the methodology developed by the IAEA for treatment planning system (TPS) commissioning (IAEA TEC-DOC 1583). A thorax anthropomorphic phantom (002LFC, CIRS Inc.) was used to perform 7 tests that simulate the whole chain of external beam TPS planning. The doses were measured with ion chambers, and the deviations between measured and TPS-calculated doses were reported. This methodology, which employs the same phantom and the same setup test cases, was tested in 4 different hospitals using 5 different algorithms/inhomogeneity correction methods implemented in different TPSs. The algorithms in this study were divided into two groups: correction-based and model-based algorithms. A total of 84 clinical test case datasets for different energies and calculation algorithms were produced; the differences at inhomogeneity points of low density (lung) and high density (bone) decreased meaningfully with advanced algorithms. The number of deviations outside the agreement criteria increased with beam energy and decreased with the advancement of the TPS calculation algorithm. Large deviations were seen with some correction-based algorithms, so sophisticated algorithms would be preferred in clinical practice, especially for calculations in inhomogeneous media. Use of model-based algorithms with lateral transport calculation is recommended. Some systematic errors revealed during this study show the necessity of performing periodic audits of TPSs in radiotherapy centers. © 2012 American Association of Physicists in Medicine.

  8. Apparent diffusion coefficient histogram metrics correlate with survival in diffuse intrinsic pontine glioma: a report from the Pediatric Brain Tumor Consortium

    PubMed Central

    Poussaint, Tina Young; Vajapeyam, Sridhar; Ricci, Kelsey I.; Panigrahy, Ashok; Kocak, Mehmet; Kun, Larry E.; Boyett, James M.; Pollack, Ian F.; Fouladi, Maryam

    2016-01-01

    Background Diffuse intrinsic pontine glioma (DIPG) is associated with poor survival regardless of therapy. We used volumetric apparent diffusion coefficient (ADC) histogram metrics to determine associations with progression-free survival (PFS) and overall survival (OS) at baseline and after radiation therapy (RT). Methods Baseline and post-RT quantitative ADC histograms were generated from fluid-attenuated inversion recovery (FLAIR) images and enhancement regions of interest. Metrics assessed included number of peaks (ie, unimodal or bimodal), mean and median ADC, standard deviation, mode, skewness, and kurtosis. Results Based on FLAIR images, the majority of tumors had unimodal peaks with significantly shorter average survival. Pre-RT FLAIR mean, mode, and median values were significantly associated with decreased risk of progression; higher pre-RT ADC values had longer PFS on average. Pre-RT FLAIR skewness and standard deviation were significantly associated with increased risk of progression; higher pre-RT FLAIR skewness and standard deviation had shorter PFS. Nonenhancing tumors at baseline showed higher ADC FLAIR mean values, lower kurtosis, and higher PFS. For enhancing tumors at baseline, bimodal enhancement histograms had much worse PFS and OS than unimodal cases and significantly lower mean peak values. Enhancement in tumors only after RT led to significantly shorter PFS and OS than in patients with baseline or no baseline enhancement. Conclusions ADC histogram metrics in DIPG demonstrate significant correlations between diffusion metrics and survival, with lower diffusion values (increased cellularity), increased skewness, and enhancement associated with shorter survival, requiring future investigations in large DIPG clinical trials. PMID:26487690

  9. Robust isotropic super-resolution by maximizing a Laplace posterior for MRI volumes

    NASA Astrophysics Data System (ADS)

    Han, Xian-Hua; Iwamoto, Yutaro; Shiino, Akihiko; Chen, Yen-Wei

    2014-03-01

    Magnetic resonance imaging can only acquire volume data with finite resolution due to various factors. In particular, the resolution in one direction (such as the slice direction) is much lower than in others (such as the in-plane directions), yielding unrealistic visualizations. This study explores the reconstruction of isotropic-resolution MRI volumes from three orthogonal scans. The proposed super-resolution reconstruction is formulated as a maximum a posteriori (MAP) problem, which relies on the generative model of the acquired scans from the unknown high-resolution (HR) volume. Generally, the ensemble of deviations of the reconstructed HR volume from the available low-resolution (LR) ones in the MAP formulation is represented as a Gaussian distribution, which usually results in some noise and artifacts in the reconstructed HR volume. Therefore, this paper investigates a robust super-resolution method that formulates the deviation set as a Laplace distribution, which assumes sparsity in the deviation ensemble based on the insight that large values appear only around some unexpected regions. In addition, in order to achieve a reliable HR MRI volume, we integrate priors such as bilateral total variation (BTV) and non-local means (NLM) into the proposed MAP framework for suppressing artifacts and enriching visual detail. We validate the proposed robust SR strategy using MRI mouse data with high resolution in two directions and low resolution in one direction, imaged in three orthogonal scans: axial, coronal, and sagittal planes. Experiments verify that the proposed strategy achieves much better HR MRI volumes than the conventional MAP method, even with a very high magnification factor of 10.

  10. Analysis of gait patterns pre- and post- Single Event Multilevel Surgery in children with Cerebral Palsy by means of Offset-Wise Movement Analysis Profile and Linear Fit Method.

    PubMed

    Ancillao, Andrea; van der Krogt, Marjolein M; Buizer, Annemieke I; Witbreuk, Melinda M; Cappa, Paolo; Harlaar, Jaap

    2017-10-01

    Gait analysis is used for the assessment of walking ability of children with cerebral palsy (CP), to inform clinical decision making and to quantify changes after treatment. To simplify gait analysis interpretation and to quantify deviations from normality, some quantitative synthetic descriptors were developed over the years, such as the Movement Analysis Profile (MAP) and the Linear Fit Method (LFM), but their interpretation is not always straightforward. The aims of this work were to: (i) study gait changes, by means of synthetic descriptors, in children with CP who underwent Single Event Multilevel Surgery; (ii) compare the MAP and the LFM on these patients; (iii) design a new index that may overcome the limitations of the previous methods, i.e. the lack of information about the direction of deviation or its source. Gait analysis exams of 10 children with CP, pre- and post-surgery, were collected and MAP and LFM were computed. A new index was designed as a modified version of the MAP by separating out changes in offset (named OC-MAP). MAP documented an improvement in the gait pattern after surgery. The largest effect was observed for the knee flexion/extension angle. However, a worsening was observed as an increase in anterior pelvic tilt. An important source of gait deviation was recognized in the offset between the observed tracks and the reference. OC-MAP allowed the assessment of the offset component versus the shape component of deviation. LFM provided results similar to OC-MAP offset analysis but could not be considered reliable due to intrinsic limitations. As offset in gait features played an important role in gait deviation, OC-MAP synthetic analysis was proposed as a novel approach to a meaningful parameterisation of global deviations in gait patterns of subjects with CP and gait changes after treatment. Copyright © 2017 Elsevier B.V. All rights reserved.
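
    A minimal sketch of the offset/shape decomposition underlying OC-MAP, on a made-up gait curve (the reference trace and the injected deviations are illustrative assumptions):

```python
import numpy as np

def offset_shape_components(patient, reference):
    """Split a gait-curve deviation into a constant offset and residual shape."""
    diff = patient - reference
    offset = diff.mean()                                 # constant shift component
    shape_rms = np.sqrt(((diff - offset) ** 2).mean())   # residual shape change
    return offset, shape_rms

cycle = np.linspace(0.0, 100.0, 101)                 # % of gait cycle
ref = 30.0 * np.sin(2 * np.pi * cycle / 100.0)       # toy knee flexion trace
pat = ref + 8.0 + 4.0 * np.sin(4 * np.pi * cycle / 100.0)
off, shape = offset_shape_components(pat, ref)
print(f"offset: {off:.1f} deg, shape deviation: {shape:.1f} deg")
```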

  11. Possibilities and limitations of the kinetic plot method in supercritical fluid chromatography.

    PubMed

    De Pauw, Ruben; Desmet, Gert; Broeckhoven, Ken

    2013-08-30

    Although supercritical fluid chromatography (SFC) is becoming a technique of increasing importance in the field of analytical chromatography, methods to compare the performance of SFC columns and separations in an unbiased way are not fully developed. The present study uses mathematical models to investigate the possibilities and limitations of the kinetic plot method in SFC, as this easily allows investigation of a wide range of operating pressures, retention, and mobile phase conditions. The variable column length (L) kinetic plot method was further investigated in this work. Since the pressure history is identical for each measurement, this method gives the true kinetic performance limit in SFC. The deviations of the traditional way of measuring performance as a function of flow rate (fixed back pressure and column length) and of the isopycnic method with respect to this variable column length method were investigated under a wide range of operational conditions. It was found that, using the variable L method, extrapolations towards other pressure drops are not valid in SFC (deviation of ∼15% for extrapolation from 50 to 200 bar pressure drop). The isopycnic method provides the best prediction, but its use is limited when operating closer to critical point conditions. When an organic modifier is used, the predictions of both methods improve with respect to the variable L method (e.g., deviations decrease from 20% to 2% when 20 mol% of methanol is added). Copyright © 2013 Elsevier B.V. All rights reserved.

  12. Atomic displacements in the charge ice pyrochlore Bi2Ti2O6O' studied by neutron total scattering

    NASA Astrophysics Data System (ADS)

    Shoemaker, Daniel P.; Seshadri, Ram; Hector, Andrew L.; Llobet, Anna; Proffen, Thomas; Fennie, Craig J.

    2010-04-01

    The oxide pyrochlore Bi2Ti2O6O' is known to be associated with large displacements of Bi and O' atoms from their ideal crystallographic positions. Neutron total scattering, analyzed in both reciprocal and real space, is employed here to understand the nature of these displacements. Rietveld analysis and maximum entropy methods are used to produce an average picture of the structural nonideality. Local structure is modeled via large-box reverse Monte Carlo simulations constrained simultaneously by the Bragg profile and real-space pair distribution function. Direct visualization and statistical analyses of these models show the precise nature of the static Bi and O' displacements. Correlations between neighboring Bi displacements are analyzed using coordinates from the large-box simulations. The framework of continuous symmetry measures has been applied to distributions of O'Bi4 tetrahedra to examine deviations from ideality. Bi displacements from ideal positions appear correlated over local length scales. The results are consistent with the idea that these nonmagnetic lone-pair containing pyrochlore compounds can be regarded as highly structurally frustrated systems.

  13. Seismic velocity deviation log: An effective method for evaluating spatial distribution of reservoir pore types

    NASA Astrophysics Data System (ADS)

    Shirmohamadi, Mohamad; Kadkhodaie, Ali; Rahimpour-Bonab, Hossain; Faraji, Mohammad Ali

    2017-04-01

    Velocity deviation log (VDL) is a synthetic log used to determine pore types in reservoir rocks, based on a combination of the sonic log with neutron-density logs. The current study proposes a two-step approach to create a map of porosity and pore types by integrating the results of petrographic studies, well logs, and seismic data. In the first step, the velocity deviation log was created from the combination of the sonic log with the neutron-density log. The results allowed identifying negative, zero, and positive deviations from the created synthetic velocity log. Negative velocity deviations (below −500 m/s) indicate connected or interconnected pores and fractures, while positive deviations (above +500 m/s) are related to isolated pores. Zero deviations in the range of [−500 m/s, +500 m/s] are in good agreement with intercrystalline pores and microporosity. The results of petrographic studies were used to validate the main pore types derived from the velocity deviation log. In the next step, the velocity deviation log was estimated from seismic data using a probabilistic neural network model. For this purpose, the inverted acoustic impedance along with amplitude-based seismic attributes were related to the VDL. The methodology is illustrated with a case study from the Hendijan oilfield, northwestern Persian Gulf. The results of this study show that the integration of petrographic studies, well logs, and seismic attributes is an instrumental way of understanding the spatial distribution of the main reservoir pore types.
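
    A minimal sketch of the first step, assuming the Wyllie time-average relation to build the synthetic velocity from neutron-density porosity (the matrix and fluid velocities, the ±500 m/s cutoffs applied below, and the log samples are illustrative assumptions):

```python
import numpy as np

def velocity_deviation(v_sonic, phi_nd, v_fluid=1500.0, v_matrix=6400.0):
    """Measured sonic velocity minus a Wyllie time-average synthetic velocity."""
    v_synth = 1.0 / (phi_nd / v_fluid + (1.0 - phi_nd) / v_matrix)
    return v_sonic - v_synth

phi = np.array([0.10, 0.15, 0.20])                 # neutron-density porosity
v_meas = np.array([5200.0, 5100.0, 3900.0])        # m/s from the sonic log
vdl = velocity_deviation(v_meas, phi)
pore = np.select([vdl > 500.0, vdl < -500.0],
                 ["isolated", "connected/fracture"], default="intercrystalline")
print(list(zip(np.round(vdl), pore)))
```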

  14. A Validated Method for the Quality Control of Andrographis paniculata Preparations.

    PubMed

    Karioti, Anastasia; Timoteo, Patricia; Bergonzi, Maria Camilla; Bilia, Anna Rita

    2017-10-01

    Andrographis paniculata is a herbal drug of Asian traditional medicine largely employed for the treatment of several diseases. Recently, it has been introduced in Europe for the prophylactic and symptomatic treatment of common cold and as an ingredient of dietary supplements. The active principles are diterpenes with andrographolide as the main representative. In the present study, an analytical protocol was developed for the determination of the main constituents in the herb and preparations of A. paniculata. Three different extraction protocols (methanol extraction using a modified Soxhlet procedure, maceration under ultrasonication, and decoction) were tested. Ultrasonication achieved the highest content of analytes. HPLC conditions were optimized in terms of solvent mixtures, time course, and temperature. A reversed phase C18 column eluted with a gradient system consisting of acetonitrile and acidified water and including an isocratic step at 30 °C was used. The HPLC method was validated for linearity, limits of quantitation and detection, repeatability, precision, and accuracy. The overall method was validated for precision and accuracy over at least three different concentration levels. Relative standard deviation was less than 1.13%, whereas recovery was between 95.50% and 97.19%. The method also proved to be suitable for the determination of a large number of commercial samples and was proposed to the European Pharmacopoeia for the quality control of Andrographidis herba. Georg Thieme Verlag KG Stuttgart · New York.

  15. Open inflation in the landscape

    NASA Astrophysics Data System (ADS)

    Yamauchi, Daisuke; Linde, Andrei; Naruko, Atsushi; Sasaki, Misao; Tanaka, Takahiro

    2011-08-01

    The open inflation scenario is attracting renewed interest in the context of the string landscape. Since there are a large number of metastable de Sitter vacua in the string landscape, tunneling transitions to lower metastable vacua through bubble nucleation occur quite naturally, which leads to a natural realization of open inflation. Although the deviation of Ω0 from unity is small by the observational bound, we argue that the effect of this small deviation on the large-angle CMB anisotropies can be significant for tensor-type perturbations in the open inflation scenario. We consider the situation in which there is a large hierarchy between the energy scale of the quantum tunneling and that of the slow-roll inflation in the nucleated bubble. If the potential just after tunneling is steep enough, a rapid-roll phase appears before the slow-roll inflation. In this case the power spectrum is basically determined by the Hubble rate during the slow-roll inflation. On the other hand, if such a rapid-roll phase is absent, the power spectrum keeps the memory of the high energy density there in the large angular components. Furthermore, the amplitude of the large angular components can be enhanced by the effects of the wall fluctuation mode if the bubble wall tension is small. Therefore, although even the dominant quadrupole component is suppressed by the factor (1-Ω0)^2, one can construct models in which the deviation of Ω0 from unity is large enough to produce measurable effects. We also consider a more general class of models, where the false vacuum decay may occur due to Hawking-Moss tunneling, as well as models involving more than one scalar field. We discuss scalar perturbations in these models and point out that a large set of such models is already ruled out by observational data, unless there was a very long stage of slow-roll inflation after the tunneling. These results show that observational data allow us to test various assumptions concerning the structure of string theory potentials and the duration of the last stage of inflation.

  16. Detection of Cardiac Quiescence from B-Mode Echocardiography Using a Correlation-Based Frame-to-Frame Deviation Measure

    PubMed Central

    Mcclellan, James H.; Ravichandran, Lakshminarayan; Tridandapani, Srini

    2013-01-01

    Two novel methods for detecting cardiac quiescent phases from B-mode echocardiography using a correlation-based frame-to-frame deviation measure were developed. Accurate knowledge of cardiac quiescence is crucial to the performance of many imaging modalities, including computed tomography coronary angiography (CTCA). Synchronous electrocardiography (ECG) and echocardiography data were obtained from 10 healthy human subjects (four male, six female, 23–45 years) and the interventricular septum (IVS) was observed using the apical four-chamber echocardiographic view. The velocity of the IVS was derived from active contour tracking and verified using tissue Doppler imaging echocardiography methods. In turn, the frame-to-frame deviation methods for identifying quiescence of the IVS were verified using active contour tracking. The timing of the diastolic quiescent phase was found to exhibit both inter- and intra-subject variability, suggesting that the current method of CTCA gating based on the ECG is suboptimal and that gating based on signals derived from cardiac motion are likely more accurate in predicting quiescence for cardiac imaging. Two robust and efficient methods for identifying cardiac quiescent phases from B-mode echocardiographic data were developed and verified. The methods presented in this paper will be used to develop new CTCA gating techniques and quantify the resulting potential improvement in CTCA image quality. PMID:26609501
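
    A minimal sketch of a correlation-based frame-to-frame deviation measure on synthetic frames (the frame model and the percentile threshold for quiescence are illustrative assumptions; the paper's two methods differ in detail):

```python
import numpy as np

def frame_to_frame_deviation(frames):
    """1 - Pearson correlation of consecutive frames; low values = little motion."""
    return np.array([1.0 - np.corrcoef(a.ravel(), b.ravel())[0, 1]
                     for a, b in zip(frames[:-1], frames[1:])])

# Toy B-mode-like frame sequence: a fixed speckle pattern plus motion-scaled noise.
rng = np.random.default_rng(0)
base = rng.random((64, 64))
motion = np.abs(np.sin(np.linspace(0, 2 * np.pi, 30)))   # toy cardiac cycle
frames = [base + m * 0.3 * rng.random((64, 64)) for m in motion]

dev = frame_to_frame_deviation(frames)
quiescent = np.where(dev < np.percentile(dev, 20))[0]
print("candidate quiescent frame indices:", quiescent)
```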

  17. Automatic knee cartilage delineation using inheritable segmentation

    NASA Astrophysics Data System (ADS)

    Dries, Sebastian P. M.; Pekar, Vladimir; Bystrov, Daniel; Heese, Harald S.; Blaffert, Thomas; Bos, Clemens; van Muiswinkel, Arianne M. C.

    2008-03-01

    We present a fully automatic method for segmentation of knee joint cartilage from fat-suppressed MRI. The method first applies 3-D model-based segmentation technology, which allows the femur, patella, and tibia to be reliably segmented by iterative adaptation of the model according to image gradients. Thin plate spline interpolation is used in the next step to position deformable cartilage models for each of the three bones with reference to the segmented bone models. After initialization, the cartilage models are fine-adjusted by automatic iterative adaptation to the image data based on gray value gradients. The method has been validated on a collection of 8 (3 left, 5 right) fat-suppressed datasets and demonstrated a sensitivity of 83 ± 6% compared to manual segmentation on a per-voxel basis as the primary endpoint. Gross cartilage volume measurement yielded an average error of 9 ± 7% as the secondary endpoint. Because cartilage is a thin structure, even small deviations in distance result in large errors on a per-voxel basis, rendering the primary endpoint a hard criterion.

  18. Quantifying complexity of financial short-term time series by composite multiscale entropy measure

    NASA Astrophysics Data System (ADS)

    Niu, Hongli; Wang, Jun

    2015-05-01

    It is significant to study the complexity of financial time series since the financial market is a complex, evolving dynamic system. Multiscale entropy is a prevailing method used to quantify the complexity of a time series. Due to the reduced reliability of entropy estimation for short-term time series at large time scales, a modified method, the composite multiscale entropy (CMSE), is applied to the financial market. To qualify its effectiveness, its applications to synthetic white noise and 1/f noise with different data lengths are first reproduced in the present paper. It is then introduced for the first time for a reliability test with two Chinese stock indices. When applied to short-term return series, the CMSE method shows advantages in reducing the deviations of entropy estimation and demonstrates more stable and reliable results than the conventional MSE algorithm. Finally, the composite multiscale entropy of six important stock indices from the world financial markets is investigated, and some useful and interesting empirical results are obtained.
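
    A minimal sketch of the composite coarse-graining idea (the sample entropy implementation below is a plain textbook variant, and the random series merely stands in for a short return series): for scale s, the s coarse-grained series with different start offsets are formed and their sample entropies averaged.

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Plain SampEn(m, r) with tolerance r = r_frac * std(x)."""
    x = np.asarray(x, float)
    r, N = r_frac * x.std(), len(x)
    def matching_pairs(mm):
        t = np.array([x[i:i + mm] for i in range(N - m)])  # same count for m, m+1
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        return ((d <= r).sum() - len(t)) / 2               # i < j pairs only
    return -np.log(matching_pairs(m + 1) / matching_pairs(m))

def cmse(x, scale):
    """Average SampEn over all coarse-grained series at the given scale."""
    vals = []
    for k in range(scale):                                 # all start offsets
        n = (len(x) - k) // scale
        coarse = x[k:k + n * scale].reshape(n, scale).mean(axis=1)
        vals.append(sample_entropy(coarse))
    return float(np.mean(vals))

rng = np.random.default_rng(0)
returns = rng.standard_normal(1000)        # stand-in for a short return series
print([round(cmse(returns, s), 3) for s in (1, 2, 3)])
```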

  19. The development and validation of an UHPLC–MS/MS method for the rapid quantification of the antiretroviral agent dapivirine in human plasma

    PubMed Central

    Seserko, Lauren A; Emory, Joshua F; Hendrix, Craig W; Marzinke, Mark A

    2014-01-01

    Background Dapivirine is a non-nucleoside reverse transcriptase inhibitor designed to prevent HIV-1 viral replication and subsequent propagation. A sensitive method is required to quantify plasma concentrations to assess drug efficacy. Results Dapivirine-spiked plasma was combined with acetonitrile containing deuterated IS and was processed for analysis. The method has an analytical measuring range from 20 to 10,000 pg/ml. For the LLOQ, low, mid and high QCs, intra- and inter-assay precision (%CV) ranged from 5.58 to 13.89% and 5.23 to 13.36%, respectively, and intra- and inter-day accuracy (% deviation) ranged from -5.61 to 0.75% and -4.30 to 6.24%, respectively. Conclusion A robust and sensitive LC–MS/MS assay for the high-throughput quantification of the antiretroviral drug dapivirine in human plasma was developed and validated following bioanalytical validation guidelines. The assay meets criteria for the analysis of samples from large research trials. PMID:24256358

  20. The development and validation of an UHPLC-MS/MS method for the rapid quantification of the antiretroviral agent dapivirine in human plasma.

    PubMed

    Seserko, Lauren A; Emory, Joshua F; Hendrix, Craig W; Marzinke, Mark A

    2013-11-01

    Dapivirine is a non-nucleoside reverse transcriptase inhibitor designed to prevent HIV-1 viral replication and subsequent propagation. A sensitive method is required to quantify plasma concentrations to assess drug efficacy. Dapivirine-spiked plasma was combined with acetonitrile containing deuterated IS and was processed for analysis. The method has an analytical measuring range from 20 to 10,000 pg/ml. For the LLOQ, low, mid and high QCs, intra- and inter-assay precision (%CV) ranged from 5.58 to 13.89% and 5.23 to 13.36%, respectively, and intra- and inter-day accuracy (% deviation) ranged from -5.61 to 0.75% and -4.30 to 6.24%, respectively. A robust and sensitive LC-MS/MS assay for the high-throughput quantification of the antiretroviral drug dapivirine in human plasma was developed and validated following bioanalytical validation guidelines. The assay meets criteria for the analysis of samples from large research trials.

  1. SU-E-T-603: Analysis of Optical Tracked Head Inter-Fraction Movements Within Masks to Assess Intracranial Immobilization Techniques in Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsi, W; Zeidan, O

    2014-06-01

    Purpose: We present a quantitative methodology utilizing an optical tracking system for monitoring head inter-fraction movements within brain masks to assess the effectiveness of two intracranial immobilization techniques. Methods and Materials: A 3-point-tracking method was developed to measure the mask location for a treatment field at each fraction. The measured displacement of the mask location from its location at the first fraction is equivalent to the head movement within the mask. Head movements for each treatment field were measured over about 10 fractions per patient for seven patients; five were treated supine and two prone. The Q-fix Base-of-Skull head frame was used in supine treatments, while the CIVCO uni-frame baseplate was used in prone treatments. Displacements of the recorded couch position of each field post imaging at each fraction were extracted for those seven patients. The standard deviation (S.D.) of head movements and couch displacements was scored for statistical analysis. Results: The accuracy of the 3-point-tracking method was within 1.0 mm by phantom measurements. Patterns of head movement and couch displacement were similar for patients treated either supine or prone. In the superior-inferior direction, the mean values of the scored standard deviations over the seven patients were 1.6 mm and 3.4 mm for the head movement and the couch displacement, respectively. This indicates that the head movement, combined with a loose fixation between the mask and head frame, results in large couch displacements for each patient and also large variation between patients. However, the head movement is the main cause of the couch displacement, with similar magnitudes of around 1.0 mm in the anterior-posterior and lateral directions. Conclusions: An optical-tracking methodology that independently quantifies head movements could improve immobilization devices by correctly acting on the causes of head motion within the mask. Confidence in the quality of intracranial immobilization techniques could be gained more efficiently by eliminating the need for frequent imaging.

  2. TU-C-BRE-11: 3D EPID-Based in Vivo Dosimetry: A Major Step Forward Towards Optimal Quality and Safety in Radiation Oncology Practice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mijnheer, B; Mans, A; Olaciregui-Ruiz, I

    Purpose: To develop a 3D in vivo dosimetry method that is able to substitute for pre-treatment verification in an efficient way, and to terminate treatment delivery if the online-measured 3D dose distribution deviates too much from the predicted dose distribution. Methods: A back-projection algorithm has been further developed and implemented to enable automatic 3D in vivo dose verification of IMRT/VMAT treatments using a-Si EPIDs. New software tools were clinically introduced to allow automated image acquisition, to periodically inspect the record-and-verify database, and to automatically run the EPID dosimetry software. The comparison of the EPID-reconstructed and planned dose distributions is done offline to automatically raise alerts and to schedule actions when deviations are detected. Furthermore, a software package for online dose reconstruction was also developed. The RMS of the difference between the cumulative planned and reconstructed 3D dose distributions was used to trigger a halt of the linac. Results: The implementation of fully automated 3D EPID-based in vivo dosimetry was able to replace pre-treatment verification for more than 90% of the patient treatments. The process has been fully automated and integrated into our clinical workflow, where over 3,500 IMRT/VMAT treatments are verified each year. By optimizing the dose reconstruction algorithm and the I/O performance, the delivered 3D dose distribution is verified in less than 200 ms per portal image, which includes the comparison between the reconstructed and planned dose distributions. In this way it was possible to generate a trigger that can stop the irradiation at less than 20 cGy after introducing large delivery errors. Conclusion: The automatic offline solution facilitated the large-scale clinical implementation of 3D EPID-based in vivo dose verification of IMRT/VMAT treatments; the online approach has been successfully tested for various severe delivery errors.

  3. Active Site Detection by Spatial Conformity and Electrostatic Analysis—Unravelling a Proteolytic Function in Shrimp Alkaline Phosphatase

    PubMed Central

    Chakraborty, Sandeep; Minda, Renu; Salaye, Lipika; Bhattacharjee, Swapan K.; Rao, Basuthkar J.

    2011-01-01

    Computational methods are increasingly gaining importance as an aid in identifying active sites. Mostly these methods tend to have structural information that supplements sequence-conservation-based analyses. Development of tools that compute electrostatic potentials has further improved our ability to better characterize the active site residues in proteins. We have described a computational methodology for detecting active sites based on structural and electrostatic conformity: CataLytic Active Site Prediction (CLASP). In our pipelined model, the physical 3D signature of any particular enzymatic function as defined by its active sites is used to obtain spatially congruent matches. While previous work has revealed that catalytic residues have large pKa deviations from standard values, we show that for a given enzymatic activity, the electrostatic potential difference (PD) between analogous residue pairs in an active site taken from different proteins of the same family are similar. False positives in spatially congruent matches are further pruned by PD analysis, where cognate pairs with large deviations are rejected. We first present the results of active site prediction by CLASP for two enzymatic activities, β-lactamases and serine proteases, two of the most extensively investigated enzymes. The results of CLASP analysis on motifs extracted from the Catalytic Site Atlas (CSA) are also presented in order to demonstrate its ability to accurately classify any protein, putative or otherwise, with known structure. The source code and database are made available at www.sanchak.com/clasp/. Subsequently, we probed alkaline phosphatases (AP), one of the well-known promiscuous enzymes, for additional activities. Such a search has led us to predict a hitherto unknown function of shrimp alkaline phosphatase (SAP), where the protein acts as a protease. Finally, we present experimental evidence of the prediction by CLASP by showing that SAP indeed has protease activity in vitro. PMID:22174814

  4. QC-ART: A tool for real-time quality control assessment of mass spectrometry-based proteomics data.

    PubMed

    Stanfill, Bryan A; Nakayasu, Ernesto S; Bramer, Lisa M; Thompson, Allison M; Ansong, Charles K; Clauss, Therese; Gritsenko, Marina A; Monroe, Matthew E; Moore, Ronald J; Orton, Daniel J; Piehowski, Paul D; Schepmoes, Athena A; Smith, Richard D; Webb-Robertson, Bobbie-Jo; Metz, Thomas O

    2018-04-17

    Liquid chromatography-mass spectrometry (LC-MS)-based proteomics studies of large sample cohorts can easily require months to years to complete. Acquiring consistent, high-quality data in such large-scale studies is challenging because of normal variations in instrumentation performance over time, as well as artifacts introduced by the samples themselves, such as those due to collection, storage and processing. Existing quality control methods for proteomics data primarily focus on post-hoc analysis to remove low-quality data that would degrade downstream statistics; they are not designed to evaluate the data in near real-time, which would allow for interventions as soon as deviations in data quality are detected. In addition to flagging analyses that demonstrate outlier behavior, evaluating how the data structure changes over time can aid in understanding typical instrument performance or identify issues such as a degradation in data quality due to the need for instrument cleaning and/or re-calibration. To address this gap for proteomics, we developed Quality Control Analysis in Real-Time (QC-ART), a tool for evaluating data as they are acquired in order to dynamically flag potential issues with instrument performance or sample quality. QC-ART has similar accuracy to standard post-hoc analysis methods with the additional benefit of real-time analysis. We demonstrate the utility and performance of QC-ART in identifying deviations in data quality due to both instrument and sample issues in near real-time for LC-MS-based plasma proteomics analyses of a sample subset of The Environmental Determinants of Diabetes in the Young cohort. We also present a case where QC-ART facilitated the identification of oxidative modifications, which are often underappreciated in proteomic experiments. Published under license by The American Society for Biochemistry and Molecular Biology, Inc.

  5. Comparison of Predicted Thermoelectric Energy Conversion Efficiency by Cumulative Properties and Reduced Variables Approaches

    NASA Astrophysics Data System (ADS)

    Linker, Thomas M.; Lee, Glenn S.; Beekman, Matt

    2018-06-01

    The semi-analytical methods of thermoelectric energy conversion efficiency calculation based on the cumulative properties approach and the reduced variables approach are compared for 21 high-performance thermoelectric materials. Both approaches account for the temperature dependence of the material properties as well as the Thomson effect; thus the predicted conversion efficiencies are generally lower than those based on the conventional thermoelectric figure of merit ZT for nearly all of the materials evaluated. The two methods also predict material energy conversion efficiencies that are in very good agreement with each other, even for large temperature differences (average percent difference of 4% with a maximum observed deviation of 11%). The tradeoff between obtaining a reliable assessment of a material's potential for thermoelectric applications and the complexity of implementation of the three models, as well as the advantages of using more accurate modeling approaches in evaluating new thermoelectric materials, are highlighted.
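
    For reference, a sketch of the conventional ZT-based efficiency estimate that both semi-analytical methods are compared against, assuming a constant-property leg with an average ZT over the temperature range (the numbers are illustrative):

```python
import numpy as np

def efficiency_from_zt(zt, t_hot, t_cold):
    """Classical maximum leg efficiency for a constant average ZT:
    eta = (dT/T_h) * (sqrt(1+ZT) - 1) / (sqrt(1+ZT) + T_c/T_h)."""
    carnot = (t_hot - t_cold) / t_hot
    root = np.sqrt(1.0 + zt)
    return carnot * (root - 1.0) / (root + t_cold / t_hot)

print(f"ZT = 1, 300-800 K: eta = {efficiency_from_zt(1.0, 800.0, 300.0):.1%}")
```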

  6. Fast large-scale clustering of protein structures using Gauss integrals.

    PubMed

    Harder, Tim; Borg, Mikael; Boomsma, Wouter; Røgen, Peter; Hamelryck, Thomas

    2012-02-15

    Clustering protein structures is an important task in structural bioinformatics. De novo structure prediction, for example, often involves a clustering step for finding the best prediction. Other applications include assigning proteins to fold families and analyzing molecular dynamics trajectories. We present Pleiades, a novel approach to clustering protein structures with a rigorous mathematical underpinning. The method approximates clustering based on the root mean square deviation by first mapping structures to Gauss integral vectors, which were introduced by Røgen and co-workers, and subsequently performing K-means clustering. Compared to current methods, Pleiades dramatically improves on the time needed to perform clustering, and can cluster a significantly larger number of structures while providing state-of-the-art results. The number of low-energy structures generated in a typical folding study, which is on the order of 50,000 structures, can be clustered within seconds to minutes.
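
    A minimal sketch of the two-stage idea with scikit-learn (random vectors stand in for the Gauss integral descriptors, and the descriptor dimensionality is an assumption; computing the real descriptors from backbone coordinates is the part omitted here):

```python
import numpy as np
from sklearn.cluster import KMeans

# One fixed-size descriptor row per structure; K-means in descriptor space
# replaces expensive pairwise-RMSD clustering.
rng = np.random.default_rng(0)
descriptors = rng.standard_normal((50_000, 30))    # synthetic stand-ins

labels = KMeans(n_clusters=100, n_init=3, random_state=0).fit_predict(descriptors)
print("largest cluster size:", np.bincount(labels).max())
```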

  7. A test of inflated zeros for Poisson regression models.

    PubMed

    He, Hua; Zhang, Hui; Ye, Peng; Tang, Wan

    2017-01-01

    Excessive zeros are common in practice and may cause overdispersion and invalidate inference when fitting Poisson regression models. There is a large body of literature on zero-inflated Poisson models. However, methods for testing whether there are excessive zeros are less well developed. The Vuong test comparing a Poisson and a zero-inflated Poisson model is commonly applied in practice. However, the type I error of the test often deviates seriously from the nominal level, casting serious doubts on the validity of the test in such applications. In this paper, we develop a new approach for testing inflated zeros under the Poisson model. Unlike the Vuong test for inflated zeros, our method does not require a zero-inflated Poisson model to perform the test. Simulation studies show that, compared with the Vuong test, our approach not only better controls the type I error rate but also yields more power.
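
    The paper's test statistic is not given in the abstract; as a generic illustration of checking for excess zeros without fitting a zero-inflated model, the sketch below fits a plain Poisson and uses a parametric bootstrap of the zero count as the reference distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.poisson(2.0, 500)
y[rng.random(500) < 0.15] = 0                 # inject structural zeros

lam = y.mean()                                # Poisson MLE (intercept-only model)
obs_zeros = (y == 0).sum()
# Reference distribution of the zero count under the fitted Poisson.
boot = np.array([(rng.poisson(lam, y.size) == 0).sum() for _ in range(5000)])
print("observed zeros:", obs_zeros,
      " bootstrap p-value:", (boot >= obs_zeros).mean())
```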

  8. Importance-sampling computation of statistical properties of coupled oscillators

    NASA Astrophysics Data System (ADS)

    Gupta, Shamik; Leitão, Jorge C.; Altmann, Eduardo G.

    2017-07-01

    We introduce and implement an importance-sampling Monte Carlo algorithm to study systems of globally coupled oscillators. Our computational method efficiently obtains estimates of the tails of the distribution of various measures of dynamical trajectories corresponding to states occurring with (exponentially) small probabilities. We demonstrate the general validity of our results by applying the method to two contrasting cases: the driven-dissipative Kuramoto model, a paradigm in the study of spontaneous synchronization; and the conservative Hamiltonian mean-field model, a prototypical system of long-range interactions. We present results for the distribution of the finite-time Lyapunov exponent and a time-averaged order parameter. Among other features, our results show most notably that the distributions exhibit a vanishing standard deviation but a skewness that is increasing in magnitude with the number of oscillators, implying that nontrivial asymmetries and states yielding rare or atypical values of the observables persist even for a large number of oscillators.

  9. In-situ health monitoring of piezoelectric sensors using electromechanical impedance: A numerical perspective

    NASA Astrophysics Data System (ADS)

    Bilgunde, Prathamesh N.; Bond, Leonard J.

    2018-04-01

    The current work presents a numerical investigation to classify the in-situ health of piezoelectric sensors deployed for structural health monitoring (SHM) of large civil, aircraft, and automotive structures. The methodology proposed in this work attempts to model the inhomogeneities in the adhesive with which the sensor is typically bonded to the structure for SHM. It was found that weakening of the bond state causes a reduction in the resonance frequency of the structure, which eventually approaches the resonance characteristics of a piezoelectric material under traction-free boundary conditions. These changes in the resonance spectrum are further quantified using a root-mean-square-deviation-based damage index. The results demonstrate that the electromechanical impedance method can be used to monitor the structural integrity of the sensor bonded to the host structure. This cost-effective method can potentially reduce the misinterpretation of SHM data for critical infrastructure.
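
    A minimal sketch of a root-mean-square-deviation damage index over the real part of an impedance spectrum (the toy baseline spectrum, frequency sweep, and injected bond change are illustrative assumptions):

```python
import numpy as np

def rmsd_damage_index(z_baseline, z_current):
    """RMSD index on the real part of the electromechanical impedance."""
    num = np.sum((z_current.real - z_baseline.real) ** 2)
    return np.sqrt(num / np.sum(z_baseline.real ** 2))

f = np.linspace(30e3, 400e3, 500)                 # Hz, assumed sweep
z_healthy = 100.0 / (1.0 + 1j * f / 1e5)          # toy impedance spectrum
z_degraded = z_healthy * 1.03                     # small global shift
print(f"damage index: {rmsd_damage_index(z_healthy, z_degraded):.3f}")
```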

  10. Background derivation and image flattening: getimages

    NASA Astrophysics Data System (ADS)

    Men'shchikov, A.

    2017-11-01

    Modern high-resolution images obtained with space observatories display extremely strong intensity variations across images on all spatial scales. Source extraction in such images with methods based on global thresholding may bring unacceptably large numbers of spurious sources in bright areas while failing to detect sources in low-background or low-noise areas. It would be highly beneficial to subtract background and equalize the levels of small-scale fluctuations in the images before extracting sources or filaments. This paper describes getimages, a new method of background derivation and image flattening. It is based on median filtering with sliding windows that correspond to a range of spatial scales from the observational beam size up to a maximum structure width X_λ. The latter is the single free parameter of getimages and can be evaluated manually from the observed image I_λ. The median filtering algorithm provides a background image B_λ for structures of all widths below X_λ. The same median filtering procedure applied to an image of standard deviations D_λ, derived from the background-subtracted image S_λ, results in a flattening image F_λ. Finally, a flattened detection image I_λD = S_λ/F_λ is computed, whose standard deviations are uniform outside sources and filaments. Detecting sources in such greatly simplified images results in much cleaner extractions that are more complete and reliable. As a bonus, getimages reduces various observational and map-making artifacts and equalizes noise levels between independent tiles of mosaicked images.
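
    A very reduced sketch of the flattening scheme (the window ladder, the local standard deviation estimate, and all sizes below are assumptions; the published algorithm is considerably more careful):

```python
import numpy as np
from scipy.ndimage import median_filter

def flatten_image(img, max_width_px, start=3, step=1.5):
    """Median-filter over a ladder of window sizes, subtract the background,
    then divide by a filtered map of local scatter to equalize the noise."""
    background, size = img.copy(), start
    while size <= max_width_px:                  # ladder of median windows
        background = median_filter(background, size=int(size))
        size *= step
    s = img - background                         # background-subtracted image
    resid = np.abs(s - median_filter(s, size=5)) # crude local-scatter proxy
    local_std = median_filter(resid, size=int(max_width_px)) + 1e-12
    return s / local_std                         # flattened detection image

rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (128, 128))
img[60:68, 60:68] += 25.0                        # a compact "source"
print(flatten_image(img, max_width_px=32).std())
```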

  11. Accuracy in Dental Medicine, A New Way to Measure Trueness and Precision

    PubMed Central

    Ender, Andreas; Mehl, Albert

    2014-01-01

    Reference scanners are used in dental medicine to verify a variety of procedures. The main interest is in verifying impression methods, as they serve as a base for dental restorations. The current limitation of many reference scanners is their lack of accuracy when scanning large objects like full dental arches, or their limited ability to assess detailed tooth surfaces. A new reference scanner, based on the focus variation scanning technique, was evaluated with regard to its local and general accuracy. A specific scanning protocol was tested to scan original tooth surfaces from dental impressions. Different model materials were also verified. The results showed a high scanning accuracy of the reference scanner, with a mean deviation of 5.3 ± 1.1 µm for trueness and 1.6 ± 0.6 µm for precision in the case of full-arch scans. Current dental impression methods showed much higher deviations (trueness: 20.4 ± 2.2 µm, precision: 12.5 ± 2.5 µm) than the internal scanning accuracy of the reference scanner. Smaller objects like single tooth surfaces can be scanned with even higher accuracy, enabling the system to assess erosive and abrasive tooth surface loss. The reference scanner can be used to measure differences in many dental research fields. The different magnification levels, combined with high local and general accuracy, can be used to assess changes from single teeth or restorations up to full-arch changes. PMID:24836007

  12. M-estimator for the 3D symmetric Helmert coordinate transformation

    NASA Astrophysics Data System (ADS)

    Chang, Guobin; Xu, Tianhe; Wang, Qianxin

    2018-01-01

    The M-estimator for the 3D symmetric Helmert coordinate transformation problem is developed. The small-angle rotation assumption is abandoned; the direction cosine matrix or the quaternion is used to represent the rotation, and a 3 × 1 multiplicative error vector is defined to represent the rotation estimation error. An analytical solution can be employed to provide the initial approximation for the iteration if the outliers are not large. The iteration is carried out using the iteratively reweighted least-squares scheme. In each iteration after the first, the measurement equation is linearized using the available parameter estimates, the reweighting matrix is constructed using the residuals obtained in the previous iteration, and the parameter estimates with their variance-covariance matrix are calculated. The influence functions of a single pseudo-measurement on the least-squares estimator and on the M-estimator are derived to show the robustness theoretically. In the solution process, the parameters are rescaled in order to improve the numerical stability. Monte Carlo experiments are conducted to check the developed method, considering different cases to investigate whether the assumed stochastic model is correct. The results with simulated data slightly deviating from the true model are used to show the developed method's statistical efficiency under the assumed stochastic model, its robustness against deviations from the assumed stochastic model, and the validity of the estimated variance-covariance matrix whether or not the assumed stochastic model is correct.
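    A compact sketch of the iteratively reweighted least-squares idea for a (non-symmetric) 3D Helmert transform, using Huber weights and a weighted absolute-orientation solution per pass. This is illustrative only: the paper's symmetric formulation with a multiplicative rotation error and an estimated variance-covariance matrix is considerably more involved.

        import numpy as np

        def huber_weights(residuals, k=1.345):
            # Huber weights: 1 inside the threshold, k/|r| outside (MAD scale).
            scale = 1.4826 * np.median(np.abs(residuals)) + 1e-12
            r = residuals / scale
            return np.where(np.abs(r) <= k, 1.0, k / np.abs(r))

        def robust_helmert(src, dst, n_iter=20):
            # IRLS for dst ~ s * R @ src + t: each pass solves a weighted
            # absolute-orientation problem, then reweights by the residuals.
            w = np.ones(len(src))
            for _ in range(n_iter):
                mu_s = np.average(src, axis=0, weights=w)
                mu_d = np.average(dst, axis=0, weights=w)
                A = (w[:, None] * (dst - mu_d)).T @ (src - mu_s)
                U, S, Vt = np.linalg.svd(A)
                d = np.array([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
                R = (U * d) @ Vt                  # proper rotation, det = +1
                s = np.sum(S * d) / np.sum(w * np.sum((src - mu_s) ** 2, axis=1))
                t = mu_d - s * R @ mu_s
                res = np.linalg.norm(dst - (s * src @ R.T + t), axis=1)
                w = huber_weights(res)            # down-weight likely outliers
            return s, R, t

        rng = np.random.default_rng(2)
        src = rng.normal(size=(30, 3))
        th = 0.3
        R0 = np.array([[np.cos(th), -np.sin(th), 0.0],
                       [np.sin(th),  np.cos(th), 0.0],
                       [0.0, 0.0, 1.0]])
        dst = 1.2 * src @ R0.T + np.array([5.0, -2.0, 1.0])
        dst[0] += 10.0                            # one gross outlier
        s, R, t = robust_helmert(src, dst)
        print(round(s, 3), np.round(t, 2))        # ~1.2 and ~[5, -2, 1]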

  13. Closure of the energy balance equation over bare soil during the formation and evaporation of non-rainfall water inputs

    NASA Astrophysics Data System (ADS)

    Florentin, Anat; Agam, Nurit

    2015-04-01

    The Negev desert is characterized by an arid climate (the annual mean precipitation is 90 mm), with a sea breeze regularly carrying moisture from the Mediterranean Sea in the afternoon. Non-rainfall water inputs (NRWIs) are thus of great importance to the hydrometeorology and the ecological functioning of the region. The small magnitude of NRWIs challenges attempts to quantify these processes. The aim of this research was to test commonly used micrometeorological methods for quantifying the energy balance components during the deposition and evaporation of NRWIs. A fully equipped micrometeorological station was set up near the Blaustein Institutes for Desert Research of the Ben-Gurion University of the Negev (30° 51' 35.6" N; 34° 46' 24.8" E) during September-October 2014. Net radiation was measured with a 4-way net radiometer, and soil heat flux was quantified by the calorimetric method in three replicates. Latent heat flux was measured using an eddy-covariance (EC) system and compared to micro-lysimeter (ML) measurements; sensible heat flux was measured with the EC system and a surface layer scintillometer (SLS). Sensible heat fluxes measured by the EC and the SLS showed good agreement, and EC latent heat fluxes were in good agreement with those derived by the ML. Nevertheless, the derivation of latent heat flux from the SLS measurements through the energy balance equation showed a relatively large deviation from the directly measured latent heat flux. This deviation is likely attributable to measurement errors in the soil heat flux.
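    For concreteness, the residual-based derivation referred to here closes the surface energy balance Rn = H + LE + G by solving for the latent heat flux. The numbers below are illustrative only:

        import numpy as np

        Rn = np.array([420.0, 380.0, 150.0])  # net radiation, W m^-2
        H  = np.array([260.0, 240.0,  90.0])  # sensible heat flux (e.g. SLS)
        G  = np.array([120.0, 100.0,  40.0])  # soil heat flux (calorimetric)
        LE_residual = Rn - H - G              # latent heat flux as residual
        print(LE_residual)                    # compare with EC / ML estimates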

  14. Selecting boundary conditions in physiological strain analysis of the femur: Balanced loads, inertia relief method and follower load.

    PubMed

    Heyland, Mark; Trepczynski, Adam; Duda, Georg N; Zehn, Manfred; Schaser, Klaus-Dieter; Märdian, Sven

    2015-12-01

    The selection of boundary constraints may influence the amount and distribution of loads. The purpose of this study is to analyze the potential of inertia relief and follower load to maintain the effects of musculoskeletal loads even under large deflections in patient-specific finite element models of intact or fractured bone, compared to empirical boundary constraints which have been shown to lead to physiological displacements and surface strains. The goal is to elucidate the use of boundary conditions in strain analyses of bones. Finite element models of the intact femur and a model of a clinically relevant fracture stabilized by locking plate fixation were analyzed under normal walking loading conditions for different boundary conditions, specifically re-balanced loading, inertia relief and follower load. Peak principal cortex surface strains for the different boundary conditions are consistent (maximum deviation 13.7%), except for inertia relief without force balancing (maximum deviation 108.4%). The influence of the follower load on displacements increases with higher deflection in the fracture model (from 3% to 7% for the force-balanced model). For load-balanced models, the follower load had only a minor influence, though the effect increases strongly with higher deflection. Conventional constraints of nodes fixed in space should be carefully reconsidered, because their type and position are challenging to justify and because of their potential to introduce relevant non-physiological reaction forces. Inertia relief provides an alternative method which yields physiological strain results. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
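    The "re-balancing" step mentioned above amounts to enforcing static equilibrium of the applied load set before the solver sees it. A hypothetical helper illustrating that piece of statics (not the paper's implementation):

        import numpy as np

        def balance_loads(points, forces, ref_point):
            # Return the force and couple at ref_point that cancel the net
            # force and net moment of the applied load set, so the model is
            # in static equilibrium (illustrative helper only).
            F_net = forces.sum(axis=0)
            M_net = np.sum(np.cross(points - ref_point, forces), axis=0)
            return -F_net, -M_net

        pts = np.array([[0.0, 0.0, 0.40], [0.0, 0.0, 0.20]])           # m
        Fs  = np.array([[0.0, 200.0, -1800.0], [0.0, -150.0, 300.0]])  # N
        F_bal, M_bal = balance_loads(pts, Fs, ref_point=np.zeros(3))
        print(F_bal, M_bal)     # applying these restores equilibrium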

  15. Role of the standard deviation in the estimation of benchmark doses with continuous data.

    PubMed

    Gaylor, David W; Slikker, William

    2004-12-01

    For continuous data, risk is defined here as the proportion of animals with values above a large percentile, e.g., the 99th percentile, or below a small percentile, e.g., the 1st percentile, of the distribution of values among control animals. It is known that reducing the standard deviation of measurements through improved experimental techniques will result in less stringent (higher) doses for the lower confidence limit on the benchmark dose that is estimated to produce a specified risk of animals with abnormal levels of a biological effect. Thus, a somewhat larger (less stringent) lower confidence limit is obtained that may be used as a point of departure for low-dose risk assessment. It is shown in this article that it is important for the benchmark dose to be based primarily on the standard deviation among animals, s_a, apart from the standard deviation of measurement errors, s_m, within animals. If the benchmark dose is incorrectly based on the overall standard deviation among average values for animals, which includes measurement error variation, the benchmark dose will be overestimated and the risk will be underestimated. The bias increases as s_m increases relative to s_a. The bias is relatively small if s_m is less than one-third of s_a, a condition achieved in most experimental designs.
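    A short numerical illustration of that rule of thumb, under the simplifying assumption of one measurement per animal, so that the observed standard deviation is s_obs = sqrt(s_a^2 + s_m^2):

        import numpy as np

        s_a = 1.0
        for s_m in (s_a / 10, s_a / 3, s_a):
            s_obs = np.hypot(s_a, s_m)   # sqrt(s_a**2 + s_m**2)
            print(f"s_m = {s_m:.2f}: s_obs inflated by "
                  f"{100 * (s_obs / s_a - 1):.1f} %")
        # -> 0.5 %, 5.4 %, 41.4 %: once s_m exceeds s_a / 3, the inflated
        #    standard deviation noticeably overestimates the benchmark dose.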

  16. Anomalies in the GRBs' distribution

    NASA Astrophysics Data System (ADS)

    Bagoly, Zsolt; Horvath, Istvan; Hakkila, Jon; Toth, Viktor

    2015-08-01

    Gamma-ray bursts (GRBs) are the most luminous objects known: they outshine their host galaxies, making them ideal candidates for probing large-scale structure. Earlier, the angular distribution of the different GRB groups (long, intermediate and short) was studied in detail with different methods, and it was found that the short and intermediate groups showed deviations from full randomness at different levels (e.g. Vavrek, R., et al. 2008). However, these results were based only on the angular measurements of the BATSE experiment, without any spatial distance indicator. Currently there are more than 361 GRBs with precisely measured positions, optical afterglows and redshifts, mainly due to the observations of the Swift mission. This sample is now large enough that its homogeneity and isotropy on large scales can be checked. We have recently (Horvath, I. et al., 2014) identified a large clustering of gamma-ray bursts at redshift z ~ 2 in the general direction of the constellations of Hercules and Corona Borealis. This angular excess cannot be entirely attributed to known selection biases, making its existence due to chance unlikely. The scale on which the clustering occurs is disturbingly large, about 2-3 Gpc: the underlying distribution of matter suggested by this cluster is big enough to question standard assumptions about Universal homogeneity and isotropy.
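    One simple way to probe such a sample for isotropy (a sketch, far simpler than the statistics used in the cited studies): compare the observed pairwise angular separations with those of an isotropic Monte Carlo sample. Pairwise separations are not independent, so the p-value below is only indicative:

        import numpy as np
        from scipy.stats import ks_2samp

        def pairwise_separations(ra, dec):
            # Pairwise angular separations (radians) of points on the sphere.
            v = np.column_stack([np.cos(dec) * np.cos(ra),
                                 np.cos(dec) * np.sin(ra),
                                 np.sin(dec)])
            dots = np.clip(v @ v.T, -1.0, 1.0)
            iu = np.triu_indices(len(ra), k=1)
            return np.arccos(dots[iu])

        rng = np.random.default_rng(3)
        # Toy "observed" sample: 80 isotropic bursts plus a 20-burst cluster.
        ra = np.concatenate([rng.uniform(0, 2 * np.pi, 80),
                             rng.normal(4.0, 0.05, 20)])
        dec = np.concatenate([np.arcsin(rng.uniform(-1, 1, 80)),
                              rng.normal(0.5, 0.05, 20)])
        # Isotropic Monte Carlo comparison sample of the same size.
        ra_iso = rng.uniform(0, 2 * np.pi, 100)
        dec_iso = np.arcsin(rng.uniform(-1, 1, 100))
        print(ks_2samp(pairwise_separations(ra, dec),
                       pairwise_separations(ra_iso, dec_iso)))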

  17. Towards Behavioral Reflexion Models

    NASA Technical Reports Server (NTRS)

    Ackermann, Christopher; Lindvall, Mikael; Cleaveland, Rance

    2009-01-01

    Software architecture has become essential in the struggle to manage today's increasingly large and complex systems. Software architecture views are created to capture important system characteristics on an abstract and, thus, comprehensible level. As the system is implemented and later maintained, it often deviates from the original design specification. Such deviations can have implications for the quality of the system, such as its reliability, security, and maintainability. Software architecture compliance checking approaches, such as the reflexion model technique, have been proposed to address this issue by comparing the implementation to a model of the system's architecture design. However, architecture compliance checking approaches focus solely on structural characteristics and ignore behavioral conformance. This is especially an issue in Systems-of-Systems (SoS), which are decompositions of large systems into smaller systems for the sake of flexibility. Deviations of the implementation from its behavioral design often reduce the reliability of the entire SoS. An approach is needed that supports reasoning about behavioral conformance at the architecture level. In order to address this issue, we have developed an approach for comparing the implementation of a SoS to an architecture model of its behavioral design. The approach follows the idea of reflexion models and adapts it to support the compliance checking of behaviors. In this paper, we focus on sequencing properties, as they play an important role in many SoS, and sequencing deviations potentially have a severe impact on SoS correctness and qualities. The desired behavioral specification is defined in UML sequence diagram notation, and behaviors are extracted from the SoS implementation. The behaviors are then mapped to the model of the desired behavior and the two are compared. Finally, a reflexion model is constructed that shows the deviations between the behavioral design and the implementation. This paper discusses the approach and shows how it can be applied to investigate reliability issues in SoS.
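    In miniature, the core comparison can be thought of as an ordered-occurrence check of specified events against an extracted trace. A sketch under that simplification (the paper's mapping and reflexion model construction are considerably richer):

        def sequence_deviations(expected, observed):
            # Report each event of the specified interaction (e.g. from a
            # sequence diagram) that does not occur in the extracted trace
            # in the expected order. Illustrative helper only.
            pos = -1
            deviations = []
            for event in expected:
                try:
                    pos = observed.index(event, pos + 1)  # must follow pos
                except ValueError:
                    deviations.append(event)   # absent or out of order
            return deviations

        spec = ["open", "authenticate", "send", "close"]
        trace = ["open", "send", "authenticate", "log", "close"]
        print(sequence_deviations(spec, trace))
        # -> ['send']: it occurs, but before 'authenticate'.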

  18. Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes

    ERIC Educational Resources Information Center

    Zavorsky, Gerald S.

    2010-01-01

    Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within-subject standard deviation.…
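    A minimal worked example of that formula (2.77 ≈ 1.96·√2, the 95% bound on the absolute difference between two repeated measurements), with illustrative data:

        import numpy as np

        measurements = np.array([   # rows: subjects, columns: repeated trials
            [12.1, 12.4, 12.3],
            [15.0, 14.6, 14.8],
            [ 9.8, 10.1,  9.9],
        ])
        # Within-subject SD: root mean of the per-subject variances.
        s_w = np.sqrt(measurements.var(axis=1, ddof=1).mean())
        print(f"s_w = {s_w:.3f}, repeatability = {2.77 * s_w:.3f}")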

  19. A global probabilistic tsunami hazard assessment from earthquake sources

    USGS Publications Warehouse

    Davies, Gareth; Griffin, Jonathan; Lovholt, Finn; Glimsdal, Sylfest; Harbitz, Carl; Thio, Hong Kie; Lorito, Stefano; Basili, Roberto; Selva, Jacopo; Geist, Eric L.; Baptista, Maria Ana

    2017-01-01

    Large tsunamis occur infrequently but have the capacity to cause enormous numbers of casualties, damage to the built environment and critical infrastructure, and economic losses. A sound understanding of tsunami hazard is required to underpin management of these risks, and while tsunami hazard assessments are typically conducted at regional or local scales, globally consistent assessments are required to support international disaster risk reduction efforts, and can serve as a reference for local and regional studies. This study presents a global-scale probabilistic tsunami hazard assessment (PTHA), extending previous global-scale assessments based largely on scenario analysis. Only earthquake sources are considered, as they represent about 80% of the recorded damaging tsunami events. Globally extensive estimates of tsunami run-up height are derived at various exceedance rates, and the associated uncertainties are quantified. Epistemic uncertainties in the exceedance rates of large earthquakes often lead to large uncertainties in tsunami run-up. Deviations between modelled tsunami run-up and event observations are quantified, and found to be larger than suggested in previous studies. Accounting for these deviations in PTHA is important, as it leads to a pronounced increase in predicted tsunami run-up for a given exceedance rate.
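    The core aggregation step of a PTHA can be stated compactly: the exceedance rate of a run-up threshold is the summed occurrence rate of all scenarios whose modelled run-up exceeds it. A sketch with invented numbers (the study additionally quantifies epistemic uncertainty and model-observation deviations):

        import numpy as np

        def exceedance_rates(runups, rates, thresholds):
            # Summed annual rate of scenarios exceeding each threshold.
            runups = np.asarray(runups)
            rates = np.asarray(rates)
            return np.array([rates[runups > h].sum() for h in thresholds])

        runups = np.array([0.5, 1.2, 3.0, 8.0])      # modelled run-up, m
        rates = np.array([1e-2, 4e-3, 1e-3, 1e-4])   # events per year
        print(exceedance_rates(runups, rates, thresholds=[1.0, 2.0, 5.0]))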

  20. Inter-Labeler and Intra-Labeler Variability of Condition Severity Classification Models Using Active and Passive Learning Methods

    PubMed Central

    Nissim, Nir; Shahar, Yuval; Boland, Mary Regina; Tatonetti, Nicholas P; Elovici, Yuval; Hripcsak, George; Moskovitch, Robert

    2018-01-01

    Background and Objectives: Labeling instances by domain experts for classification is often time consuming and expensive. To reduce such labeling efforts, we had proposed the application of active learning (AL) methods, introduced our CAESAR-ALE framework for classifying the severity of clinical conditions, and shown its significant reduction of labeling efforts. The use of any of three AL methods (one well known [SVM-Margin], and two that we introduced [Exploitation and Combination_XA]) significantly reduced (by 48% to 64%) condition labeling efforts, compared to standard passive (random instance-selection) SVM learning. Furthermore, our new AL methods achieved maximal accuracy using 12% fewer labeled cases than the SVM-Margin AL method. However, because labelers have varying levels of expertise, a major issue associated with learning methods, and AL methods in particular, is how best to use the labeling provided by a committee of labelers. First, we wanted to know, based on the labelers' learning curves, whether using AL methods (versus standard passive learning methods) has an effect on the intra-labeler variability (within the learning curve of each labeler) and the inter-labeler variability (among the learning curves of different labelers). Then, we wanted to examine the effect of learning (either passively or actively) from the labels created by the majority consensus of a group of labelers.

    Methods: We used our CAESAR-ALE framework for classifying the severity of clinical conditions, with the three AL methods and the passive learning method mentioned above, to induce the classification models. We used a dataset of 516 clinical conditions and their severity labeling, represented by features aggregated from the medical records of 1.9 million patients treated at Columbia University Medical Center. We analyzed the variance of the classification performance within (intra-labeler), and especially among (inter-labeler), the classification models induced using the labels provided by seven labelers. We also compared the performance of the passive and active learning models when using the consensus label.

    Results: The AL methods produced, for the models induced from each labeler, smoother intra-labeler learning curves during the training phase, compared to the models produced when using the passive learning method. The mean standard deviation of the learning curves of the three AL methods over all labelers (mean: 0.0379; range: [0.0182 to 0.0496]) was significantly lower (p = 0.049) than the intra-labeler standard deviation when using the passive learning method (mean: 0.0484; range: [0.0275 to 0.0724]). Using the AL methods also resulted in a lower mean inter-labeler AUC standard deviation among the AUC values of the labelers' different models during the training phase, compared to the variance of the induced models' AUC values when using passive learning. The inter-labeler AUC standard deviation using the passive learning method (0.039) was almost twice as high as the inter-labeler standard deviation using our two new AL methods (0.02 and 0.019, respectively). The SVM-Margin AL method resulted in an inter-labeler standard deviation (0.029) that was almost 50% higher than that of our two AL methods. The difference in the inter-labeler standard deviation between the passive learning method and the SVM-Margin learning method was significant (p = 0.042), while the difference between the SVM-Margin and Exploitation methods was insignificant (p = 0.29), as was the difference between the Combination_XA and Exploitation methods (p = 0.67). Finally, using the consensus label led to a learning curve with a higher mean intra-labeler variance, but eventually resulted in an AUC that was at least as high as the AUC achieved using the gold standard label and that was always higher than the expected mean AUC of a randomly selected labeler, regardless of the choice of learning method (including a passive learning method). Using a paired t-test, the difference between the intra-labeler AUC standard deviation when using the consensus label, versus that value when using the other two labeling strategies, was significant only when using the passive learning method (p = 0.014), but not when using any of the three AL methods.

    Conclusions: The use of AL methods (a) reduces intra-labeler variability in the performance of the induced models during the training phase, and thus reduces the risk of halting the process at a local minimum that is significantly different in performance from the rest of the learned models; and (b) reduces inter-labeler performance variance, and thus reduces the dependence on the use of a particular labeler. In addition, the use of a consensus label, agreed upon by a rather uneven group of labelers, might be at least as good as using the gold standard labeler, who might not be available, and is certainly better than randomly selecting one of the group's individual labelers. Finally, using the AL methods when provided with the consensus label reduced the intra-labeler AUC variance during the learning phase, compared to using passive learning. PMID:28456512
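    A generic margin-based uncertainty-sampling loop in the spirit of SVM-Margin (a sketch on synthetic data, not the CAESAR-ALE implementation): each query labels the pool instance closest to the current decision boundary.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=300, n_features=10, random_state=0)
        # Seed with five labeled cases per class; the rest form the pool.
        labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
        pool = [i for i in range(len(X)) if i not in labeled]

        clf = SVC(kernel="linear")
        for _ in range(20):                               # 20 oracle queries
            clf.fit(X[labeled], y[labeled])
            margins = np.abs(clf.decision_function(X[pool]))
            labeled.append(pool.pop(int(np.argmin(margins))))  # most uncertain
        clf.fit(X[labeled], y[labeled])
        print("accuracy on the remaining pool:", clf.score(X[pool], y[pool]))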
