Sample records for large deviation approach

  1. Large deviation function for a driven underdamped particle in a periodic potential

    NASA Astrophysics Data System (ADS)

    Fischer, Lukas P.; Pietzonka, Patrick; Seifert, Udo

    2018-02-01

    Employing large deviation theory, we explore current fluctuations of underdamped Brownian motion for the paradigmatic example of a single particle in a one-dimensional periodic potential. Two different approaches to the large deviation function of the particle current are presented. First, we derive an explicit expression for the large deviation functional of the empirical phase space density, which replaces the level 2.5 functional used for overdamped dynamics. Using this approach, we obtain several bounds on the large deviation function of the particle current. We compare these to bounds for overdamped dynamics that have recently been derived, motivated by the thermodynamic uncertainty relation. Second, we provide a method to calculate the large deviation function via the cumulant generating function. We use this method to assess the tightness of the bounds in a numerical case study for a cosine potential.
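
    A brute-force way to see what the cumulant-generating-function route involves is to estimate λ(s) = lim_{T→∞} (1/T) ln⟨exp(sJT)⟩ directly from simulated trajectories. The sketch below is not the authors' method (they use level-2.5 bounds and a spectral computation); it integrates the underdamped Langevin equation in a cosine potential with illustrative parameters and forms the naive estimator, which degrades in the far tails, precisely where the bounds discussed in the paper become valuable.

    ```python
    import numpy as np

    # Hedged sketch: brute-force estimate of the current SCGF for an underdamped
    # particle in a cosine potential, lambda(s) ~ (1/T) ln< exp(s * J * T) >,
    # with J the empirical mean velocity. All parameter values are illustrative.
    rng = np.random.default_rng(0)
    gamma, kT, F, dt, T = 1.0, 0.5, 0.8, 1e-3, 50.0
    n_traj, n_steps = 5000, int(T / dt)

    x = np.zeros(n_traj)
    v = np.zeros(n_traj)
    for _ in range(n_steps):
        force = -np.sin(x) + F - gamma * v           # V(x) = -cos(x), unit mass
        v += force * dt + np.sqrt(2 * gamma * kT * dt) * rng.standard_normal(n_traj)
        x += v * dt

    J = x / T                                        # time-averaged current
    for s in (-0.5, 0.0, 0.5):
        # direct estimator; poor in the far tails, where biased sampling helps
        print(s, np.log(np.mean(np.exp(s * J * T))) / T)
    ```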

  2. LD-SPatt: large deviations statistics for patterns on Markov chains.

    PubMed

    Nuel, G

    2004-01-01

    Statistics on Markov chains are widely used to study patterns in biological sequences, and such statistics can be computed through several approaches. Approximations based on the central limit theorem (CLT), which produce Gaussian approximations, are among the most popular. Unfortunately, to find a pattern of interest these methods must deal with tail-distribution events, precisely where the CLT approximation is poor. In this paper, we propose a new approach based on large deviations theory to assess pattern statistics. We first recall theoretical results for empirical mean (level 1) as well as empirical distribution (level 2) large deviations on Markov chains. Then we present applications of these results, focusing on numerical issues. LD-SPatt is the name of the GPL software implementing these algorithms. We compare this approach to several existing ones in terms of complexity and reliability, and show that large deviations are more reliable than Gaussian approximations both in absolute values and in terms of ranking, and at least as reliable as compound Poisson approximations. We finally discuss some further possible improvements and applications of this new method.
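
    For a finite-state chain, the level-1 machinery reduces to a tilted-matrix computation: the scaled cumulant generating function is the logarithm of the Perron eigenvalue of the tilted matrix P_ij e^{s f(j)}, and the rate function follows by Legendre-Fenchel transform. A minimal sketch (an illustrative two-state chain, not the LD-SPatt implementation):

    ```python
    import numpy as np

    # Minimal sketch (not LD-SPatt): level-1 large deviations for the
    # empirical mean of f along a Markov chain with transition matrix P.
    P = np.array([[0.9, 0.1],
                  [0.2, 0.8]])          # hypothetical two-state chain
    f = np.array([0.0, 1.0])            # observable: indicator of state 1

    def scgf(s):
        """Log of the Perron eigenvalue of the tilted matrix P_ij * exp(s f(j))."""
        tilted = P * np.exp(s * f)[None, :]
        return np.log(np.max(np.linalg.eigvals(tilted).real))

    def rate(a, s_grid=np.linspace(-20, 20, 4001)):
        """Legendre-Fenchel transform I(a) = sup_s [s*a - lambda(s)]."""
        return np.max(s_grid * a - np.array([scgf(s) for s in s_grid]))

    print(rate(0.5))   # cost of spending half of the time in state 1
    ```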

  3. Finite-time and finite-size scalings in the evaluation of large-deviation functions: Numerical approach in continuous time.

    PubMed

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    2017-06-01

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown on the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.
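
    The population-dynamics idea can be stated in a few lines: propagate many copies, weight each step by an exponential bias on the observable, and read the scaled cumulant generating function off the average growth rate of the total weight. A minimal discrete-time sketch (a two-state chain with the number of jumps as the observable; the paper's continuous-time algorithm and its scaling analysis are more involved):

    ```python
    import numpy as np

    # Minimal discrete-time cloning sketch, assuming a two-state Markov chain
    # and observable K = number of jumps; psi(s) is estimated as the average
    # exponential growth rate of the population weight. Illustrative only.
    rng = np.random.default_rng(1)
    P = np.array([[0.95, 0.05],
                  [0.30, 0.70]])
    s, n_clones, n_steps = -1.0, 2000, 2000

    states = np.zeros(n_clones, dtype=int)
    log_growth = 0.0
    for _ in range(n_steps):
        new = (rng.random(n_clones) < P[states, 1]).astype(int)
        weights = np.exp(s * (new != states))        # bias on jump events
        log_growth += np.log(weights.mean())
        # resample clones proportionally to their weights (selection step)
        idx = rng.choice(n_clones, size=n_clones, p=weights / weights.sum())
        states = new[idx]

    print("psi(s) ~", log_growth / n_steps)
    ```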

  4. Dispersion in Rectangular Networks: Effective Diffusivity and Large-Deviation Rate Function

    NASA Astrophysics Data System (ADS)

    Tzella, Alexandra; Vanneste, Jacques

    2016-09-01

    The dispersion of a diffusive scalar in a fluid flowing through a network has many applications including to biological flows, porous media, water supply, and urban pollution. Motivated by this, we develop a large-deviation theory that predicts the evolution of the concentration of a scalar released in a rectangular network in the limit of large time t ≫ 1. This theory provides an approximation for the concentration that remains valid for large distances from the center of mass, specifically for distances up to O(t) and thus much beyond the O(t^{1/2}) range where a standard Gaussian approximation holds. A byproduct of the approach is a closed-form expression for the effective diffusivity tensor that governs this Gaussian approximation. Monte Carlo simulations of Brownian particles confirm the large-deviation results and demonstrate their effectiveness in describing the scalar distribution when t is only moderately large.

  5. The large deviation function for entropy production: the optimal trajectory and the role of fluctuations

    NASA Astrophysics Data System (ADS)

    Speck, Thomas; Engel, Andreas; Seifert, Udo

    2012-12-01

    We study the large deviation function for the entropy production rate in two driven one-dimensional systems: the asymmetric random walk on a discrete lattice and Brownian motion in a continuous periodic potential. We compare two approaches: using the Donsker-Varadhan theory and using the Freidlin-Wentzell theory. We show that the wings of the large deviation function are dominated by a single optimal trajectory: either in the forward direction (positive rate) or in the backward direction (negative rate). The joining of the two branches at zero entropy production implies a non-differentiability and thus the appearance of a ‘kink’. However, around zero entropy production, many trajectories contribute and thus the ‘kink’ is smeared out.
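
    For orientation, the asymmetric random walk case admits textbook closed forms (standard results, not quoted from the paper): with forward rate p and backward rate q, the scaled cumulant generating function of the current and its Legendre transform are

        λ(s) = p(e^s − 1) + q(e^{−s} − 1),    I(j) = sup_s [s j − λ(s)],

    and the Gallavotti-Cohen symmetry λ(s) = λ(−s − ln(p/q)), equivalently I(−j) = I(j) + j ln(p/q), is what forces the two branches of the entropy-production rate function to join at zero, producing the 'kink' discussed above.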

  6. Large Fluctuations for Spatial Diffusion of Cold Atoms

    NASA Astrophysics Data System (ADS)

    Aghion, Erez; Kessler, David A.; Barkai, Eli

    2017-06-01

    We use a new approach to study the large fluctuations of a heavy-tailed system, where the standard large-deviations principle does not apply. Large-deviations theory deals with tails of probability distributions and the rare events of random processes, for example, spreading packets of particles. Mathematically, it concerns the exponential falloff of the density of thin-tailed systems. Here we investigate the spatial density P_t(x) of laser-cooled atoms, where at intermediate length scales the shape is fat tailed. We focus on the rare events beyond this range, which dominate important statistical properties of the system. Through a novel friction mechanism induced by the laser fields, the density is explored with the recently proposed non-normalized infinite-covariant density approach. The small and large fluctuations give rise to a bifractal nature of the spreading packet. We derive general relations which extend our theory to a class of systems with multifractal moments.

  7. [Conservative and surgical treatment of convergence excess].

    PubMed

    Ehrt, O

    2016-07-01

    Convergence excess is a common finding, especially in pediatric strabismus. A detailed diagnostic approach has to start after full correction of any hyperopia measured in cycloplegia. It includes measurements of the manifest and latent deviation at near and distance fixation, the near deviation after relaxation of accommodation with an addition of +3 dpt, assessment of binocular function with and without +3 dpt, as well as the accommodation range. This diagnostic approach is important for the classification into three types of convergence excess, which require different therapeutic approaches: 1) hypo-accommodative convergence excess is treated with permanent bifocal glasses, 2) norm-accommodative patients should be treated with bifocals, which can be weaned over years, especially in patients with good stereopsis, and 3) non-accommodative convergence excess and patients with large distance deviations need a surgical approach. The most effective operations include those which reduce the muscle torque, e.g. bimedial Faden operations or Y-splitting of the medial rectus muscles.

  8. Towards Behavioral Reflexion Models

    NASA Technical Reports Server (NTRS)

    Ackermann, Christopher; Lindvall, Mikael; Cleaveland, Rance

    2009-01-01

    Software architecture has become essential in the struggle to manage today's increasingly large and complex systems. Software architecture views are created to capture important system characteristics on an abstract and, thus, comprehensible level. As the system is implemented and later maintained, it often deviates from the original design specification. Such deviations can have implications for the quality of the system, such as reliability, security, and maintainability. Software architecture compliance checking approaches, such as the reflexion model technique, have been proposed to address this issue by comparing the implementation to a model of the system's architecture design. However, architecture compliance checking approaches focus solely on structural characteristics and ignore behavioral conformance. This is especially an issue in Systems-of-Systems. Systems-of-Systems (SoS) are decompositions of large systems into smaller systems for the sake of flexibility. Deviations of the implementation from its behavioral design often reduce the reliability of the entire SoS. An approach is needed that supports reasoning about behavioral conformance on the architecture level. In order to address this issue, we have developed an approach for comparing the implementation of a SoS to an architecture model of its behavioral design. The approach follows the idea of reflexion models and adapts it to support the compliance checking of behaviors. In this paper, we focus on sequencing properties as they play an important role in many SoS. Sequencing deviations potentially have a severe impact on SoS correctness and qualities. The desired behavioral specification is defined in UML sequence diagram notation and behaviors are extracted from the SoS implementation. The behaviors are then mapped to the model of the desired behavior and the two are compared. Finally, a reflexion model is constructed that shows the deviations between behavioral design and implementation. This paper discusses the approach and shows how it can be applied to investigate reliability issues in SoS.

  9. A Modified Differential Coherent Bit Synchronization Algorithm for BeiDou Weak Signals with Large Frequency Deviation.

    PubMed

    Han, Zhifeng; Liu, Jianye; Li, Rongbing; Zeng, Qinghua; Wang, Yi

    2017-07-04

    BeiDou system navigation messages are modulated with a secondary NH (Neumann-Hoffman) code of 1 kbps, where frequent bit transitions limit the coherent integration time to 1 millisecond. Therefore, a bit synchronization algorithm is necessary to obtain bit edges and NH code phases. In order to realize bit synchronization for BeiDou weak signals with large frequency deviation, a bit synchronization algorithm based on differential coherent and maximum likelihood is proposed. Firstly, a differential coherent approach is used to remove the effect of frequency deviation, and the differential delay time is set to be a multiple of bit cycle to remove the influence of NH code. Secondly, the maximum likelihood function detection is used to improve the detection probability of weak signals. Finally, Monte Carlo simulations are conducted to analyze the detection performance of the proposed algorithm compared with a traditional algorithm under the CN0s of 20~40 dB-Hz and different frequency deviations. The results show that the proposed algorithm outperforms the traditional method with a frequency deviation of 50 Hz. This algorithm can remove the effect of BeiDou NH code effectively and weaken the influence of frequency deviation. To confirm the feasibility of the proposed algorithm, real data tests are conducted. The proposed algorithm is suitable for BeiDou weak signal bit synchronization with large frequency deviation.
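
    The differential-coherent step is easy to illustrate: multiplying each 1-ms correlator output by the conjugate of the output one bit period (20 ms) earlier cancels the NH code and reduces the frequency deviation to a constant phase; a bit-edge hypothesis is then chosen by maximizing window energy, a simple stand-in for the paper's maximum-likelihood detection. Everything in the sketch (signal model, amplitude, the random stand-in NH sequence) is illustrative, not the authors' implementation:

    ```python
    import numpy as np

    # Hedged sketch of differential-coherent bit synchronization.
    rng = np.random.default_rng(2)
    ms_per_bit, n_bits, f_err, amp = 20, 100, 50.0, 1.0   # 50 Hz deviation
    nh = rng.choice([-1, 1], size=ms_per_bit)             # stand-in NH code
    bits = rng.choice([-1, 1], size=n_bits)
    edge_true = 7                                         # true bit edge (ms)

    k = np.arange(n_bits * ms_per_bit)
    chips = bits.repeat(ms_per_bit) * np.tile(nh, n_bits)
    # circular shift to edge_true (wrap-around edge effects ignored here)
    sig = amp * np.roll(chips, edge_true) * np.exp(2j * np.pi * f_err * k * 1e-3)
    c = sig + (rng.standard_normal(k.size) + 1j * rng.standard_normal(k.size)) / np.sqrt(2)

    # delay of one bit period removes the NH code; frequency error becomes
    # a constant phase common to all differential products
    d = c[ms_per_bit:] * np.conj(c[:-ms_per_bit])
    metric = []
    for e in range(ms_per_bit):                           # bit-edge hypotheses
        w = d[e:e + (n_bits - 2) * ms_per_bit].reshape(-1, ms_per_bit)
        metric.append(np.abs(w.sum(axis=1)).sum())        # coherent per window
    print("estimated bit edge:", int(np.argmax(metric)), "(true:", edge_true, ")")
    ```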

  10. Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains

    NASA Astrophysics Data System (ADS)

    Cofré, Rodrigo; Maldonado, Cesar

    2018-01-01

    We consider the maximum entropy Markov chain inference approach to characterize the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. We review large deviations techniques useful in this context to describe properties of accuracy and convergence in terms of sampling size. We use these results to study the statistical fluctuation of correlations, distinguishability and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is explicitly obtained for maximum entropy models of relevance in this field.
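
    The central quantity is simple to compute once a chain is given: the information entropy production rate σ = Σ_ij π_i P_ij ln(P_ij/P_ji), which vanishes exactly when detailed balance holds. A small sketch with an illustrative three-state matrix:

    ```python
    import numpy as np

    # Entropy production rate of an irreducible Markov chain,
    # sigma = sum_ij pi_i P_ij ln(P_ij / P_ji); illustrative matrix.
    P = np.array([[0.5, 0.3, 0.2],
                  [0.1, 0.6, 0.3],
                  [0.4, 0.2, 0.4]])

    w, V = np.linalg.eig(P.T)                     # stationary distribution:
    pi = np.real(V[:, np.argmin(np.abs(w - 1))])  # left eigenvector, eigenvalue 1
    pi /= pi.sum()

    n = P.shape[0]
    sigma = sum(pi[i] * P[i, j] * np.log(P[i, j] / P[j, i])
                for i in range(n) for j in range(n)
                if P[i, j] > 0 and P[j, i] > 0)
    print("entropy production rate:", sigma)      # 0 iff detailed balance
    ```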

  11. Large deviations and mixing for dissipative PDEs with unbounded random kicks

    NASA Astrophysics Data System (ADS)

    Jakšić, V.; Nersesyan, V.; Pillet, C.-A.; Shirikyan, A.

    2018-02-01

    We study the problem of exponential mixing and large deviations for discrete-time Markov processes associated with a class of random dynamical systems. Under some dissipativity and regularisation hypotheses for the underlying deterministic dynamics and a non-degeneracy condition for the driving random force, we discuss the existence and uniqueness of a stationary measure and its exponential stability in the Kantorovich-Wasserstein metric. We next turn to the large deviations principle (LDP) and establish its validity for the occupation measures of the Markov processes in question. The proof is based on Kifer's criterion for non-compact spaces, a result on large-time asymptotics for generalised Markov semigroups, and a coupling argument. These tools combined constitute a new approach to the LDP for infinite-dimensional processes without the strong Feller property in a non-compact space. The results obtained can be applied to the two-dimensional Navier-Stokes system in a bounded domain and to the complex Ginzburg-Landau equation.

  12. A New Control Paradigm for Stochastic Differential Equations

    NASA Astrophysics Data System (ADS)

    Schmid, Matthias J. A.

    This study presents a novel comprehensive approach to the control of dynamic systems under uncertainty governed by stochastic differential equations (SDEs). Large Deviations (LD) techniques are employed to arrive at a control law for a large class of nonlinear systems minimizing sample path deviations. Thereby, a paradigm shift is suggested from point-in-time to sample path statistics on function spaces. A suitable formal control framework which leverages embedded Freidlin-Wentzell theory is proposed and described in detail. This includes the precise definition of the control objective and comprises an accurate discussion of the adaptation of the Freidlin-Wentzell theorem to the particular situation. The new control design is enabled by the transformation of an ill-posed control objective into a well-conditioned sequential optimization problem. A direct numerical solution process is presented using quadratic programming, but the emphasis is on the development of a closed-form expression reflecting the asymptotic deviation probability of a particular nominal path. This is identified as the key factor in the success of the new paradigm. An approach employing the second variation and the differential curvature of the effective action is suggested for small deviation channels leading to the Jacobi field of the rate function and the subsequently introduced Jacobi field performance measure. This closed-form solution is utilized in combination with the supplied parametrization of the objective space. For the first time, this allows for an LD based control design applicable to a large class of nonlinear systems. Thus, Minimum Large Deviations (MLD) control is effectively established in a comprehensive structured framework. The construction of the new paradigm is completed by an optimality proof for the Jacobi field performance measure, an interpretive discussion, and a suggestion for efficient implementation. The potential of the new approach is exhibited by its extension to scalar systems subject to state-dependent noise and to systems of higher order. The suggested control paradigm is further advanced when a sequential application of MLD control is considered. This technique yields a nominal path corresponding to the minimum total deviation probability on the entire time domain. It is demonstrated that this sequential optimization concept can be unified in a single objective function which is revealed to be the Jacobi field performance index on the entire domain subject to an endpoint deviation. The emerging closed-form term replaces the previously required nested optimization and, thus, results in a highly efficient application-ready control design. This effectively substantiates Minimum Path Deviation (MPD) control. The proposed control paradigm allows the specific problem of stochastic cost control to be addressed as a special case. This new technique is employed within this study for the stochastic cost problem giving rise to Cost Constrained MPD (CCMPD) as well as to Minimum Quadratic Cost Deviation (MQCD) control. An exemplary treatment of a generic scalar nonlinear system subject to quadratic costs is performed for MQCD control to demonstrate the elementary expandability of the new control paradigm. This work concludes with a numerical evaluation of both MPD and CCMPD control for three exemplary benchmark problems. Numerical issues associated with the simulation of SDEs are briefly discussed and illustrated. The numerical examples furnish proof of the successful design. 
This study is complemented by a thorough review of statistical control methods, stochastic processes, Large Deviations techniques and the Freidlin-Wentzell theory, providing a comprehensive, self-contained account. The presentation of the mathematical tools and concepts is of a unique character, specifically addressing an engineering audience.

  13. Large deviation approach to the generalized random energy model

    NASA Astrophysics Data System (ADS)

    Dorlas, T. C.; Dukes, W. M. B.

    2002-05-01

    The generalized random energy model is a generalization of the random energy model introduced by Derrida to mimic the ultrametric structure of the Parisi solution of the Sherrington-Kirkpatrick model of a spin glass. It was solved exactly in two special cases by Derrida and Gardner. A complete solution for the thermodynamics in the general case was given by Capocaccia et al. Here we use large deviation theory to analyse the model in a very straightforward way. We also show that the variational expression for the free energy can be evaluated easily using the Cauchy-Schwarz inequality.

  14. Large-deviation theory for diluted Wishart random matrices

    NASA Astrophysics Data System (ADS)

    Castillo, Isaac Pérez; Metz, Fernando L.

    2018-03-01

    Wishart random matrices with a sparse or diluted structure are ubiquitous in the processing of large datasets, with applications in physics, biology, and economy. In this work, we develop a theory for the eigenvalue fluctuations of diluted Wishart random matrices based on the replica approach of disordered systems. We derive an analytical expression for the cumulant generating function of the number of eigenvalues I_N(x) smaller than x ∈ R_+, from which all cumulants of I_N(x) and the rate function Ψ_x(k) controlling its large-deviation probability Prob[I_N(x) = kN] ≍ e^{-N Ψ_x(k)} follow. Explicit results for the mean value and the variance of I_N(x), its rate function, and its third cumulant are discussed and thoroughly compared to numerical diagonalization, showing very good agreement. The present work establishes the theoretical framework put forward in a recent letter [Phys. Rev. Lett. 117, 104101 (2016), 10.1103/PhysRevLett.117.104101] as an exact and compelling approach to deal with eigenvalue fluctuations of sparse random matrices.
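
    Direct sampling can only probe the bulk of I_N(x), not the exp(-NΨ) tails that the replica theory targets, but it is a useful sanity check. A sketch under assumed ensemble details (Bernoulli dilution with mean connectivity c; the ensemble used in the paper may differ):

    ```python
    import numpy as np

    # Illustrative bulk check: sample diluted Wishart matrices W = X X^T / M
    # with sparse X and collect the number of eigenvalues below x.
    rng = np.random.default_rng(3)
    N, M, c, x, n_samples = 50, 100, 4.0, 0.5, 2000   # c = assumed connectivity

    counts = []
    for _ in range(n_samples):
        mask = rng.random((N, M)) < c / M              # dilution
        X = mask * rng.standard_normal((N, M))
        ev = np.linalg.eigvalsh(X @ X.T / M)
        counts.append(np.sum(ev < x))

    counts = np.array(counts)
    print("mean I_N(x):", counts.mean(), " variance:", counts.var())
    ```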

  15. Lower Current Large Deviations for Zero-Range Processes on a Ring

    NASA Astrophysics Data System (ADS)

    Chleboun, Paul; Grosskinsky, Stefan; Pizzoferrato, Andrea

    2017-04-01

    We study lower large deviations for the current of totally asymmetric zero-range processes on a ring with concave current-density relation. We use an approach by Jensen and Varadhan which has previously been applied to exclusion processes, to realize current fluctuations by travelling wave density profiles corresponding to non-entropic weak solutions of the hyperbolic scaling limit of the process. We further establish a dynamic transition, where large deviations of the current below a certain value are no longer typically attained by non-entropic weak solutions, but by condensed profiles, where a non-zero fraction of all the particles accumulates on a single fixed lattice site. This leads to a general characterization of the rate function, which is illustrated by providing detailed results for four generic examples of jump rates, including constant rates, decreasing rates, unbounded sublinear rates and asymptotically linear rates. Our results on the dynamic transition are supported by numerical simulations using a cloning algorithm.

  16. Deviations from Newton's law in supersymmetric large extra dimensions

    NASA Astrophysics Data System (ADS)

    Callin, P.; Burgess, C. P.

    2006-09-01

    Deviations from Newton's inverse-square law at the micron length scale are smoking-gun signals for models containing supersymmetric large extra dimensions (SLEDs), which have been proposed as approaches for resolving the cosmological constant problem. Just like their non-supersymmetric counterparts, SLED models predict gravity to deviate from the inverse-square law because of the advent of new dimensions at sub-millimeter scales. However SLED models differ from their non-supersymmetric counterparts in three important ways: (i) the size of the extra dimensions is fixed by the observed value of the dark energy density, making it impossible to shorten the range over which new deviations from Newton's law must be seen; (ii) supersymmetry predicts there to be more fields in the extra dimensions than just gravity, implying different types of couplings to matter and the possibility of repulsive as well as attractive interactions; and (iii) the same mechanism which is purported to keep the cosmological constant naturally small also keeps the extra-dimensional moduli effectively massless, leading to deviations from general relativity in the far infrared of the scalar-tensor form. We here explore the deviations from Newton's law which are predicted over micron distances, and show the ways in which they differ and resemble those in the non-supersymmetric case.

  17. Extraction of Coastlines with Fuzzy Approach Using SENTINEL-1 SAR Image

    NASA Astrophysics Data System (ADS)

    Demir, N.; Kaynarca, M.; Oy, S.

    2016-06-01

    Coastlines are important features for water resources, sea products, energy resources, etc. Coastlines change dynamically; thus, automated methods are necessary for analysing and detecting changes along them. In this study, a Sentinel-1 C-band SAR image has been used to extract the coastline with a fuzzy logic approach. The SAR image has VH polarisation and 10 x 10 m spatial resolution, and covers a 57 sq km area in the south-east of Puerto Rico. Radiometric calibration was applied to reduce atmospheric and orbit errors, and a speckle filter was used to reduce noise. The image was then terrain-corrected using the SRTM digital surface model. Classification of SAR images is a challenging task, since SAR and optical sensors have very different properties; even between different bands of a SAR sensor the images look very different, so classification with traditional unsupervised methods is difficult. In this study, a fuzzy approach has been applied to distinguish coastal pixels from land-surface pixels. The standard deviation, mean, and median values are calculated for use as parameters in the fuzzy approach. The mean-standard-deviation (MS) Large membership function is used because large amounts of land and ocean pixels dominate the SAR image with large mean and standard deviation values. The pixel values are multiplied by 1000 to simplify the calculations. The mean is calculated as 23 and the standard deviation as 12 for the whole image. The multiplier parameters are selected as a: 0.58, b: 0.05 to maximize the land-surface membership. The result is evaluated using airborne LIDAR data (only where the LIDAR dataset is available) and, secondly, a manually digitized coastline. Laser points below 0.5 m are classified as ocean points, and the 3D alpha-shapes algorithm is used to detect coastline points from the LIDAR data. Minimum distances are then calculated between the LIDAR coastline points and the extracted coastline; the mean is 5.82 m, the standard deviation 5.83 m, and the median 4.08 m. Secondly, the extracted coastline is evaluated against lines manually digitized on the SAR image. Both lines are converted to dense points at 1 m intervals, and the closest distances are calculated between the points of the extracted and manually created coastlines; the mean is 5.23 m, the standard deviation 4.52 m, and the median 4.13 m. For both quality-assessment approaches the evaluation values are within the accuracy of the SAR data used.
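
    For concreteness, one common definition of the MS Large membership function (as used in GIS packages; the authors' implementation may differ in detail) is μ(x) = 1 − bs/(x − am + bs) for x > am and 0 otherwise, with the mean m, standard deviation s, and multipliers a, b quoted above:

    ```python
    import numpy as np

    # Hedged sketch of the MS Large membership used to separate land from
    # water. Mean/std and multipliers are the values quoted in the abstract;
    # the functional form is one common GIS definition, assumed here.
    def ms_large(x, m=23.0, s=12.0, a=0.58, b=0.05):
        x = np.asarray(x, dtype=float)
        mu = 1.0 - (b * s) / (x - a * m + b * s)
        return np.where(x > a * m, mu, 0.0)   # zero below the threshold a*m

    pixels = np.array([2.0, 10.0, 14.0, 23.0, 40.0])   # scaled backscatter
    print(ms_large(pixels))   # values near 1 indicate land-surface pixels
    ```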

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daniel, Scott F.; Linder, Eric V.; Lawrence Berkeley National Laboratory, Berkeley, California

    Deviations from general relativity, such as could be responsible for the cosmic acceleration, would influence the growth of large-scale structure and the deflection of light by that structure. We clarify the relations between several different model-independent approaches to deviations from general relativity appearing in the literature, devising a translation table. We examine current constraints on such deviations, using weak gravitational lensing data of the CFHTLS and COSMOS surveys, cosmic microwave background radiation data of WMAP5, and supernova distance data of Union2. A Markov chain Monte Carlo likelihood analysis of the parameters over various redshift ranges yields consistency with general relativity at the 95% confidence level.

  19. Quenched Large Deviations for Simple Random Walks on Percolation Clusters Including Long-Range Correlations

    NASA Astrophysics Data System (ADS)

    Berger, Noam; Mukherjee, Chiranjib; Okamura, Kazuki

    2018-03-01

    We prove a quenched large deviation principle (LDP) for a simple random walk on a supercritical percolation cluster (SRWPC) on Z^d (d ≥ 2). The models under interest include classical Bernoulli bond and site percolation as well as models that exhibit long-range correlations, like the random cluster model, the random interlacement and the vacant set of random interlacements (for d ≥ 3) and the level sets of the Gaussian free field (d ≥ 3). Inspired by the methods developed by Kosygina et al. (Commun Pure Appl Math 59:1489-1521, 2006) for proving quenched LDP for elliptic diffusions with a random drift, and by Yilmaz (Commun Pure Appl Math 62(8):1033-1075, 2009) and Rosenbluth (Quenched large deviations for multidimensional random walks in a random environment: a variational formula. Ph.D. thesis, NYU, arXiv:0804.1444v1) for similar results regarding elliptic random walks in random environment, we take the point of view of the moving particle and prove a large deviation principle for the quenched distribution of the pair empirical measures of the environment Markov chain in the non-elliptic case of SRWPC. Via a contraction principle, this reduces easily to a quenched LDP for the distribution of the mean velocity of the random walk, and both rate functions admit explicit variational formulas. The main difficulty in our setup lies in the inherent non-ellipticity as well as the lack of translation-invariance stemming from conditioning on the fact that the origin belongs to the infinite cluster. We develop a unifying approach for proving quenched large deviations for SRWPC based on exploiting coercivity properties of the relative entropies in the context of convex variational analysis, combined with input from ergodic theory and invoking geometric properties of the supercritical percolation cluster.

  1. Orientational alignment in cavity quantum electrodynamics

    NASA Astrophysics Data System (ADS)

    Keeling, Jonathan; Kirton, Peter G.

    2018-05-01

    We consider the orientational alignment of dipoles due to strong matter-light coupling for a nonvanishing density of excitations. We compare various approaches to this problem in the limit of large numbers of emitters and show that direct Monte Carlo integration, mean-field theory, and large deviation methods match exactly in this limit. All three results show that orientational alignment develops in the presence of a macroscopically occupied polariton mode and that the dipoles asymptotically approach perfect alignment in the limit of high density or low temperature.

  2. Finite-Time and -Size Scalings in the Evaluation of Large Deviation Functions. Numerical Analysis in Continuous Time

    NASA Astrophysics Data System (ADS)

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to a selection rule that favors the rare trajectories of interest. However, such algorithms are plagued by finite-simulation-time and finite-population-size effects that can render their use delicate. Using the continuous-time cloning algorithm, we analyze the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of the rare trajectories. We use these scalings in order to propose a numerical approach which allows one to extract the infinite-time and infinite-size limit of these estimators.

  3. A large deviations principle for stochastic flows of viscous fluids

    NASA Astrophysics Data System (ADS)

    Cipriano, Fernanda; Costa, Tiago

    2018-04-01

    We study the well-posedness of a stochastic differential equation on the two-dimensional torus T^2, driven by an infinite-dimensional Wiener process with drift in the Sobolev space L^2(0,T; H^1(T^2)). The solution corresponds to a stochastic Lagrangian flow in the sense of DiPerna-Lions. By taking into account that the motion of a viscous incompressible fluid on the torus can be described through a suitable stochastic differential equation of the previous type, we study the inviscid limit. By establishing a large deviations principle, we show that, as the viscosity goes to zero, the Lagrangian stochastic Navier-Stokes flow approaches the Euler deterministic Lagrangian flow with an exponential rate function.

  4. 0–0 Energies Using Hybrid Schemes: Benchmarks of TD-DFT, CIS(D), ADC(2), CC2, and BSE/GW formalisms for 80 Real-Life Compounds

    PubMed Central

    2015-01-01

    The 0–0 energies of 80 medium and large molecules have been computed with a large panel of theoretical formalisms. We have used an approach computationally tractable for large molecules, that is, the structural and vibrational parameters are obtained with TD-DFT, the solvent effects are accounted for with the PCM model, whereas the total and transition energies have been determined with TD-DFT and with five wave function approaches accounting for contributions from double excitations, namely, CIS(D), ADC(2), CC2, SCS-CC2, and SOS-CC2, as well as the Green's function based BSE/GW approach. Atomic basis sets including diffuse functions have been systematically applied, and several variations of the PCM have been evaluated. Using solvent corrections obtained with the corrected linear-response approach, we found that three schemes, namely, ADC(2), CC2, and BSE/GW, allow one to reach a mean absolute deviation smaller than 0.15 eV compared to the measurements, the two former yielding slightly better correlation with experiments than the latter. CIS(D), SCS-CC2, and SOS-CC2 provide significantly larger deviations, though the latter approach delivers highly consistent transition energies. In addition, we show that (i) ADC(2) and CC2 values are extremely close to each other but for systems absorbing at low energies; (ii) the linear-response PCM scheme tends to overestimate solvation effects; and that (iii) the average impact of the nonequilibrium correction on 0–0 energies is negligible. PMID:26574326

  5. Explicit Computations of Instantons and Large Deviations in Beta-Plane Turbulence

    NASA Astrophysics Data System (ADS)

    Laurie, J.; Bouchet, F.; Zaboronski, O.

    2012-12-01

    We use a path integral formalism and instanton theory in order to make explicit analytical predictions about large deviations and rare events in beta-plane turbulence. The path integral formalism is a concise way to get large deviation results in dynamical systems forced by random noise. In the most simple cases, it leads to the same results as the Freidlin-Wentzell theory, but it has a wider range of applicability. This approach is however usually extremely limited, due to the complexity of the theoretical problems. As a consequence it provides explicit results in a fairly limited number of models, often extremely simple ones with only a few degrees of freedom. Few exceptions exist outside the realm of equilibrium statistical physics. We will show that the barotropic model of beta-plane turbulence is one of these non-equilibrium exceptions. We describe sets of explicit solutions to the instanton equation, and precise derivations of the action functional (or large deviation rate function). The reason why such exact computations are possible is related to the existence of hidden symmetries and conservation laws for the instanton dynamics. We outline several applications of this approach. For instance, we compute explicitly the very low probability of observing flows with an energy much larger or smaller than the typical one. Moreover, we consider regimes for which the system has multiple attractors (corresponding to different numbers of alternating jets), and discuss the computation of transition probabilities between two such attractors. These extremely rare events are of the utmost importance as the dynamics undergo qualitative macroscopic changes during such transitions.

  6. Rare behavior of growth processes via umbrella sampling of trajectories

    NASA Astrophysics Data System (ADS)

    Klymko, Katherine; Geissler, Phillip L.; Garrahan, Juan P.; Whitelam, Stephen

    2018-03-01

    We compute probability distributions of trajectory observables for reversible and irreversible growth processes. These results reveal a correspondence between reversible and irreversible processes, at particular points in parameter space, in terms of their typical and atypical trajectories. Thus key features of growth processes can be insensitive to the precise form of the rate constants used to generate them, recalling the insensitivity to microscopic details of certain equilibrium behavior. We obtained these results using a sampling method, inspired by the "s-ensemble" large-deviation formalism, that amounts to umbrella sampling in trajectory space. The method is a simple variant of existing approaches, and applies to ensembles of trajectories controlled by the total number of events. It can be used to determine large-deviation rate functions for trajectory observables in or out of equilibrium.

  7. Entanglement transitions induced by large deviations

    NASA Astrophysics Data System (ADS)

    Bhosale, Udaysinh T.

    2017-12-01

    The probability of large deviations of the smallest Schmidt eigenvalue for random pure states of bipartite systems, denoted as A and B, is computed analytically using a Coulomb gas method. It is shown that this probability, for large N, goes as exp[-βN^{2}Φ(ζ)], where the parameter β is the Dyson index of the ensemble, ζ is the large deviation parameter, while the rate function Φ(ζ) is calculated exactly. Corresponding equilibrium Coulomb charge density is derived for its large deviations. Effects of the large deviations of the extreme (largest and smallest) Schmidt eigenvalues on the bipartite entanglement are studied using the von Neumann entropy. Effect of these deviations is also studied on the entanglement between subsystems 1 and 2, obtained by further partitioning the subsystem A, using the properties of the density matrix's partial transpose ρ_{12}^{Γ}. The density of states of ρ_{12}^{Γ} is found to be close to the Wigner's semicircle law with these large deviations. The entanglement properties are captured very well by a simple random matrix model for the partial transpose. The model predicts the entanglement transition across a critical large deviation parameter ζ. Log negativity is used to quantify the entanglement between subsystems 1 and 2. Analytical formulas for it are derived using the simple model. Numerical simulations are in excellent agreement with the analytical results.

  8. Entanglement transitions induced by large deviations.

    PubMed

    Bhosale, Udaysinh T

    2017-12-01

    The probability of large deviations of the smallest Schmidt eigenvalue for random pure states of bipartite systems, denoted as A and B, is computed analytically using a Coulomb gas method. It is shown that this probability, for large N, goes as exp[-βN^{2}Φ(ζ)], where the parameter β is the Dyson index of the ensemble, ζ is the large deviation parameter, while the rate function Φ(ζ) is calculated exactly. Corresponding equilibrium Coulomb charge density is derived for its large deviations. Effects of the large deviations of the extreme (largest and smallest) Schmidt eigenvalues on the bipartite entanglement are studied using the von Neumann entropy. Effect of these deviations is also studied on the entanglement between subsystems 1 and 2, obtained by further partitioning the subsystem A, using the properties of the density matrix's partial transpose ρ_{12}^{Γ}. The density of states of ρ_{12}^{Γ} is found to be close to the Wigner's semicircle law with these large deviations. The entanglement properties are captured very well by a simple random matrix model for the partial transpose. The model predicts the entanglement transition across a critical large deviation parameter ζ. Log negativity is used to quantify the entanglement between subsystems 1 and 2. Analytical formulas for it are derived using the simple model. Numerical simulations are in excellent agreement with the analytical results.
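
    A quick numerical companion (direct sampling reaches only typical fluctuations, not the exp[-βN²Φ(ζ)] tails computed in the paper): draw random pure states of an N x N bipartite system and record the smallest Schmidt eigenvalue.

    ```python
    import numpy as np

    # Sample smallest Schmidt eigenvalues of random pure states (beta = 2,
    # complex Gaussian coefficients); sizes are illustrative.
    rng = np.random.default_rng(4)
    N, n_samples = 8, 5000

    smallest = []
    for _ in range(n_samples):
        psi = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
        psi /= np.linalg.norm(psi)                  # normalized pure state
        schmidt = np.linalg.svd(psi, compute_uv=False) ** 2
        smallest.append(schmidt.min())

    print("typical smallest Schmidt eigenvalue ~", np.median(smallest))
    ```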

  9. Qualitative Fault Isolation of Hybrid Systems: A Structural Model Decomposition-Based Approach

    NASA Technical Reports Server (NTRS)

    Bregon, Anibal; Daigle, Matthew; Roychoudhury, Indranil

    2016-01-01

    Quick and robust fault diagnosis is critical to ensuring safe operation of complex engineering systems. A large number of techniques are available to provide fault diagnosis in systems with continuous dynamics. However, many systems in aerospace and industrial environments are best represented as hybrid systems that consist of discrete behavioral modes, each with its own continuous dynamics. These hybrid dynamics make the on-line fault diagnosis task computationally more complex due to the large number of possible system modes and the existence of autonomous mode transitions. This paper presents a qualitative fault isolation framework for hybrid systems based on structural model decomposition. The fault isolation is performed by analyzing the qualitative information of the residual deviations. However, in hybrid systems this process becomes complex due to the possible existence of observation delays, which can cause observed deviations to be inconsistent with the expected deviations for the current mode of the system. The great advantage of structural model decomposition is that (i) it allows one to design residuals that respond to only a subset of the faults, and (ii) every time a mode change occurs, only a subset of the residuals needs to be reconfigured, thus reducing the complexity of the reasoning process for isolation purposes. To demonstrate and test the validity of our approach, we use an electric circuit simulation as the case study.
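
    The flavor of qualitative fault isolation is easy to convey: each fault maps to a signature of expected residual deviation directions, and observed deviations prune the candidate set. The toy signatures below are invented for illustration and are not from the paper's electric-circuit case study:

    ```python
    # Toy sketch of qualitative fault isolation from residual deviations.
    SIGNATURES = {                      # fault -> {residual: expected sign}
        "R1_high":  {"r_voltage": "+", "r_current": "-"},
        "C1_leak":  {"r_voltage": "-", "r_current": "-"},
        "L1_short": {"r_voltage": "-", "r_current": "+"},
    }

    def isolate(observed):
        """Keep faults consistent with every observed residual deviation."""
        return [f for f, sig in SIGNATURES.items()
                if all(sig.get(r) == s for r, s in observed.items())]

    print(isolate({"r_voltage": "-"}))                      # two candidates
    print(isolate({"r_voltage": "-", "r_current": "+"}))    # isolates L1_short
    ```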

  10. Approaching sub-50 nanoradian measurements by reducing the saw-tooth deviation of the autocollimator in the Nano-Optic-Measuring Machine

    NASA Astrophysics Data System (ADS)

    Qian, Shinan; Geckeler, Ralf D.; Just, Andreas; Idir, Mourad; Wu, Xuehui

    2015-06-01

    Since the development of the Nano-Optic-Measuring Machine (NOM), the accuracy of measuring the profile of an optical surface has been enhanced to the 100-nrad rms level or better. However, to improve the accuracy of the NOM system to sub-50 nrad rms, the large saw-tooth deviation (269 nrad rms) of an existing electronic autocollimator, the Elcomat 3000/8, must be resolved. We carried out simulations to assess the saw-tooth-like deviation. We developed a method of setting readings that reduces the deviation to sub-50 nrad rms, suitable for testing plane mirrors. With this method, we found that all the tests conducted in a slowly rising section of the saw-tooth show a small deviation of 28.8 to <40 nrad rms. We also developed a dense-measurement method and an integer-period method to lower the saw-tooth deviation during tests of spherical mirrors. Further research is necessary for formulating a precise test for a spherical mirror. We present a series of test results from our experiments that verify the value of the improvements we made.

  11. Density Large Deviations for Multidimensional Stochastic Hyperbolic Conservation Laws

    NASA Astrophysics Data System (ADS)

    Barré, J.; Bernardin, C.; Chetrite, R.

    2018-02-01

    We investigate the density large deviation function for a multidimensional conservation law in the vanishing viscosity limit, when the probability concentrates on weak solutions of a hyperbolic conservation law. When the mobility and diffusivity matrices are proportional, i.e. an Einstein-like relation is satisfied, the problem has been solved in Bellettini and Mariani (Bull Greek Math Soc 57:31-45, 2010). When this proportionality does not hold, we compute explicitly the large deviation function for a step-like density profile, and we show that the associated optimal current has a non-trivial structure. We also derive a lower bound for the large deviation function, valid for a more general weak solution, and leave the general large deviation function upper bound as a conjecture.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaidheeswaran, Avinash; Shaffer, Franklin; Gopalan, Balaji

    Here, the statistics of fluctuating velocity components are studied in the riser of a closed-loop circulating fluidized bed with fluid catalytic cracking catalyst particles. Our analysis shows distinct similarities as well as deviations compared to existing theories and bench-scale experiments. The study confirms an anisotropic and non-Maxwellian distribution of the fluctuating velocity components. The velocity distribution functions (VDFs) corresponding to transverse fluctuations exhibit symmetry, and follow a stretched-exponential behavior up to three standard deviations. The form of the transverse VDF is largely determined by interparticle interactions. The tails become more overpopulated with an increase in particle loading. The observed deviations from the Gaussian distribution are represented using the leading-order term in the Sonine expansion, which is commonly used to approximate the VDFs in kinetic theory for granular flows. The vertical fluctuating VDFs are asymmetric and the skewness shifts as the wall is approached. In comparison to transverse fluctuations, the vertical VDF is determined by the local hydrodynamics. This constitutes an observation of particle velocity fluctuations in a large-scale system and their quantitative comparison with Maxwell-Boltzmann statistics.

  13. Second-order (2+1)-dimensional anisotropic hydrodynamics

    NASA Astrophysics Data System (ADS)

    Bazow, Dennis; Heinz, Ulrich; Strickland, Michael

    2014-11-01

    We present a complete formulation of second-order (2+1)-dimensional anisotropic hydrodynamics. The resulting framework generalizes leading-order anisotropic hydrodynamics by allowing for deviations of the one-particle distribution function from the spheroidal form assumed at leading order. We derive complete second-order equations of motion for the additional terms in the macroscopic currents generated by these deviations from their kinetic definition using a Grad-Israel-Stewart 14-moment ansatz. The result is a set of coupled partial differential equations for the momentum-space anisotropy parameter, effective temperature, the transverse components of the fluid four-velocity, and the viscous tensor components generated by deviations of the distribution from spheroidal form. We then perform a quantitative test of our approach by applying it to the case of one-dimensional boost-invariant expansion in the relaxation time approximation (RTA) in which case it is possible to numerically solve the Boltzmann equation exactly. We demonstrate that the second-order anisotropic hydrodynamics approach provides an excellent approximation to the exact (0+1)-dimensional RTA solution for both small and large values of the shear viscosity.

  14. Large deviations and portfolio optimization

    NASA Astrophysics Data System (ADS)

    Sornette, Didier

    Risk control and optimal diversification constitute a major focus in the finance and insurance industries as well as, more or less consciously, in our everyday life. We present a discussion of the characterization of risks and of the optimization of portfolios that starts from a simple illustrative model and ends with a general functional-integral formulation. A central point is that risk, usually thought of as one-dimensional in the conventional mean-variance approach, has to be addressed by the full distribution of losses. Furthermore, the time horizon of the investment is shown to play a major role. We show the importance of accounting for large fluctuations and use the theory of Cramér for large deviations in this context. We first treat a simple model with a single risky asset that exemplifies the distinction between the average return and the typical return, the role of large deviations in multiplicative processes, and the different optimal strategies for investors depending on their size. We then analyze the case of assets whose price variations are distributed according to exponential laws, a situation that is found to describe daily price variations reasonably well. Several portfolio optimization strategies are presented that aim at controlling large risks. We end by extending the standard mean-variance portfolio optimization theory, first within the quasi-Gaussian approximation and then using a general formulation for non-Gaussian correlated assets in terms of the formalism of functional integrals developed in the field theory of critical phenomena.
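
    The Cramér machinery referred to above is short to write down: for i.i.d. log-returns r with cumulant generating function Λ(s) = ln E[e^{sr}], the probability that the empirical growth rate over n periods deviates to g decays as e^{-n I(g)} with I(g) = sup_s [sg − Λ(s)]. A sketch with an illustrative two-point return distribution:

    ```python
    import numpy as np

    # Cramér's theorem for the growth rate of a multiplicative process;
    # the two-point daily log-return distribution below is illustrative.
    returns = np.array([0.05, -0.04])      # possible daily log-returns
    probs   = np.array([0.55, 0.45])

    def cgf(s):
        return np.log(np.sum(probs * np.exp(s * returns)))

    def rate(g, s_grid=np.linspace(-200, 200, 8001)):
        return np.max(s_grid * g - np.array([cgf(s) for s in s_grid]))

    typical = np.sum(probs * returns)                 # typical growth rate
    print("I at typical rate:", rate(typical))        # ~0 by construction
    print("I at zero growth:", rate(0.0))             # cost of no growth
    ```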

  15. Radar sea reflection for low-e targets

    NASA Astrophysics Data System (ADS)

    Chow, Winston C.; Groves, Gordon W.

    1998-09-01

    Modeling radar signal reflection from a wavy sea surface uses a realistic characterization of the large surface features and parameterizes the effect of the small roughness elements. Representation of the reflection coefficient at each point of the sea surface as a function of the specular deviation angle is, to our knowledge, a novel approach. The objective is to achieve enough simplification, while retaining enough fidelity, to obtain a practical multipath model. The 'specular deviation angle' as used in this investigation is defined and explained. Being a function of the sea elevations, which are stochastic in nature, this quantity is also random and has a probability density function. This density function depends on the relative geometry of the antenna and target positions and, together with the beam-broadening effect of the small surface ripples, determines the reflectivity of the sea surface at each point. The probability density function of the specular deviation angle is derived, and the distribution of the specular deviation angle as a function of position on the mean sea surface is described.

  16. Extended-range high-resolution dynamical downscaling over a continental-scale spatial domain with atmospheric and surface nudging

    NASA Astrophysics Data System (ADS)

    Husain, S. Z.; Separovic, L.; Yu, W.; Fernig, D.

    2014-12-01

    Extended-range high-resolution mesoscale simulations with limited-area atmospheric models, when applied to downscale regional analysis fields over large spatial domains, can provide valuable information for many applications including the weather-dependent renewable energy industry. Long-term simulations over a continental-scale spatial domain, however, require mechanisms to control the large-scale deviations of the high-resolution simulated fields from the coarse-resolution driving fields. As enforcement of the lateral boundary conditions is insufficient to restrict such deviations, large scales in the simulated high-resolution meteorological fields are therefore spectrally nudged toward the driving fields. Different spectral nudging approaches, including the appropriate nudging length scales as well as the vertical profiles and temporal relaxations for nudging, have been investigated to propose an optimal nudging strategy. Impacts of time-varying nudging and generation of hourly analysis estimates are explored to circumvent problems arising from the coarse temporal resolution of the regional analysis fields. Although controlling the evolution of the atmospheric large scales generally improves the outputs of high-resolution mesoscale simulations within the surface layer, the prognostically evolving surface fields can nevertheless deviate from their expected values, leading to significant inaccuracies in the predicted surface-layer meteorology. A forcing strategy based on grid nudging of the different surface fields, including surface temperature, soil moisture, and snow conditions, toward their expected values obtained from a high-resolution offline surface scheme is therefore proposed to limit any considerable deviation. Finally, wind speed and temperature at wind turbine hub height predicted by different spectrally nudged extended-range simulations are compared against observations to demonstrate possible improvements achievable using higher spatiotemporal resolution.
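
    The core of spectral nudging fits in a few lines: transform the difference between driving and model fields, keep only wavenumbers below a cutoff, and relax the model toward that large-scale part. A one-dimensional sketch (cutoff and relaxation time are illustrative, not the paper's values):

    ```python
    import numpy as np

    # Minimal 1-D spectral nudging step: only wavenumbers <= k_cut are
    # relaxed toward the coarse driving field; small scales stay free.
    def spectral_nudge(u_model, u_drive, dt, tau=3600.0, k_cut=4):
        du = np.fft.rfft(u_drive - u_model)
        du[k_cut + 1:] = 0.0                       # nudge large scales only
        return u_model + (dt / tau) * np.fft.irfft(du, n=u_model.size)

    n = 128
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    u_drive = np.sin(x)                            # coarse driving field
    u_model = np.sin(x) + 0.3 * np.sin(12 * x) + 0.1   # drifted high-res state
    u_new = spectral_nudge(u_model, u_drive, dt=60.0)  # one nudging step
    print("large-scale offset before/after:",
          abs(np.fft.rfft(u_model - u_drive)[0]) / n,
          abs(np.fft.rfft(u_new - u_drive)[0]) / n)
    ```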

  17. Inertial Manifold and Large Deviations Approach to Reduced PDE Dynamics

    NASA Astrophysics Data System (ADS)

    Cardin, Franco; Favretti, Marco; Lovison, Alberto

    2017-09-01

    In this paper a certain type of reaction-diffusion equation, similar to the Allen-Cahn equation, is the starting point for setting up a genuine thermodynamic reduction, i.e. one involving a finite number of parameters or collective variables of the initial system. We first perform a finite Lyapunov-Schmidt reduction of the cited reaction-diffusion equation when reformulated as a variational problem. In this way we gain a finite-dimensional ODE description of the initial system which preserves the gradient structure of the original one and that is exact for the static case and only approximate for the dynamic case. Our main concern is how to deal with this approximate reduced description of the initial PDE. To start with, we note that our approximate reduced ODE is similar to the approximate inertial manifold introduced by Temam and coworkers for the Navier-Stokes equations. As a second approach, we take into account the uncertainty (loss of information) introduced by the above-mentioned approximate reduction by considering the stochastic version of the ODE. We study this reduced stochastic system using classical tools from large deviations, viscosity solutions and weak KAM Hamilton-Jacobi theory. In the last part we suggest a possible use of a result of our approach in the comprehensive treatment of non-equilibrium thermodynamics given by Macroscopic Fluctuation Theory.

  18. Robustness and cognition in stabilization problem of dynamical systems based on asymptotic methods

    NASA Astrophysics Data System (ADS)

    Dubovik, S. A.; Kabanov, A. A.

    2017-01-01

    The problem of synthesizing stabilizing systems based on principles of cognitive (logical-dynamic) control for mobile objects operating under uncertain conditions is considered. This direction in control theory is based on the principles of guaranteeing robust synthesis focused on worst-case scenarios of the controlled process. The guaranteeing approach can provide functioning of the system with the required quality and reliability only under sufficiently small disturbances and in the absence of large deviations from some regular features of the controlled process. The main tool for the analysis of large deviations and the prediction of critical states here is the action functional. Once the forecast is built, the choice of anti-crisis control is a supervisory control problem that optimizes the control system in a normal mode and prevents escape of the controlled process into critical states. An essential aspect of the approach presented here is the presence of a two-level (logical-dynamic) control: the input data are used not only for generating the synthesized feedback (local robust synthesis) in advance (off-line), but also to make decisions about the current (on-line) quality of stabilization in the global sense. An example of using the presented approach for the problem of developing a ship-tilting prediction system is considered.

  19. Probability evolution method for exit location distribution

    NASA Astrophysics Data System (ADS)

    Zhu, Jinjie; Chen, Zhen; Liu, Xianbin

    2018-03-01

    The exit problem in the framework of large deviation theory has been a hot topic in the past few decades. The most probable escape path in the weak-noise limit is characterized by the Freidlin-Wentzell action functional. However, noise in real physical systems cannot be arbitrarily small, and noise of finite strength may induce nontrivial phenomena, such as noise-induced shift and noise-induced saddle-point avoidance. Traditional Monte Carlo simulation of noise-induced escape takes an exponentially long time as the noise approaches zero, with the majority of the time wasted on uninteresting wandering around the attractors. In this paper, a new method is proposed to decrease the escape simulation time by an exponentially large factor by introducing a series of interfaces and applying reinjection on them. This method can be used to calculate the exit location distribution. It is verified on two classical examples and compared with theoretical predictions. The results show that the method performs well for weak noise, while it may induce certain deviations for large noise. Finally, some possible ways to improve our method are discussed.

  20. From the Law of Large Numbers to Large Deviation Theory in Statistical Physics: An Introduction

    NASA Astrophysics Data System (ADS)

    Cecconi, Fabio; Cencini, Massimo; Puglisi, Andrea; Vergni, Davide; Vulpiani, Angelo

    This contribution aims at introducing the topics of this book. We start with a brief historical excursion on the developments from the law of large numbers to the central limit theorem and large deviations theory. The same topics are then presented using the language of probability theory. Finally, some applications of large deviations theory in physics are briefly discussed through examples taken from statistical mechanics, dynamical and disordered systems.

  1. Current Fluctuations in Stochastic Lattice Gases

    NASA Astrophysics Data System (ADS)

    Bertini, L.; de Sole, A.; Gabrielli, D.; Jona-Lasinio, G.; Landim, C.

    2005-01-01

    We study current fluctuations in lattice gases in the macroscopic limit extending the dynamic approach for density fluctuations developed in previous articles. More precisely, we establish a large deviation theory for the space-time fluctuations of the empirical current which include the previous results. We then estimate the probability of a fluctuation of the average current over a large time interval. It turns out that recent results by Bodineau and Derrida [Phys. Rev. Lett. 92, 180601 (2004)] in certain cases underestimate this probability due to the occurrence of dynamical phase transitions.

  2. High storage capacity in the Hopfield model with auto-interactions—stability analysis

    NASA Astrophysics Data System (ADS)

    Rocchi, Jacopo; Saad, David; Tantari, Daniele

    2017-11-01

    Recent studies point to the potential storage of a large number of patterns in the celebrated Hopfield associative memory model, well beyond the limits obtained previously. We investigate the properties of new fixed points to discover that they exhibit instabilities for small perturbations and are therefore of limited value as associative memories. Moreover, a large deviations approach also shows that errors introduced to the original patterns induce additional errors and increased corruption with respect to the stored patterns.

  3. Finite-key analysis for measurement-device-independent quantum key distribution.

    PubMed

    Curty, Marcos; Xu, Feihu; Cui, Wei; Lim, Charles Ci Wen; Tamaki, Kiyoshi; Lo, Hoi-Kwong

    2014-04-29

    Quantum key distribution promises unconditionally secure communications. However, as practical devices tend to deviate from their specifications, the security claims of some practical systems no longer hold. In particular, an adversary can exploit imperfect detectors to learn a large part of the secret key, even though the security proof claims otherwise. Recently, a practical approach, measurement-device-independent quantum key distribution, has been proposed to solve this problem. However, so far its security has only been fully proven under the assumption that the legitimate users of the system have unlimited resources. Here we fill this gap and provide a rigorous security proof against general attacks in the finite-key regime. This is obtained by applying large deviation theory, specifically the Chernoff bound, to perform parameter estimation. For the first time we demonstrate the feasibility of long-distance implementations of measurement-device-independent quantum key distribution within a reasonable time frame of signal transmission.
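
    For illustration, a standard multiplicative Chernoff bound (the textbook form, not the paper's full finite-key analysis; the numbers below are placeholders) shows how a target failure probability fixes the statistical deviation allowed in parameter estimation: for X a sum of n independent Bernoulli(mu) trials, P[X >= (1+delta)*n*mu] <= exp(-n*mu*delta^2/3) for 0 < delta <= 1.

    import math

    def chernoff_delta(n, mu, eps_pe):
        """Smallest upper-tail delta (0 < delta <= 1) whose failure probability is <= eps_pe."""
        delta = math.sqrt(3.0 * math.log(1.0 / eps_pe) / (n * mu))
        return min(delta, 1.0)

    # e.g. n = 1e9 signals, observed error rate mu = 0.02, failure probability 1e-10
    delta = chernoff_delta(1e9, 0.02, 1e-10)
    print(f"allowed relative deviation delta ~ {delta:.3e}")   # ~ 2e-3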

  4. Investigation of compositional segregation during unidirectional solidification of solid solution semiconducting alloys

    NASA Technical Reports Server (NTRS)

    Wang, J. C.

    1982-01-01

    Compositional segregation of solid-solution semiconducting alloys in the radial direction during unidirectional solidification was investigated by calculating the effect of a curved solid-liquid interface on the solute concentration at the interface in the solid. The formulation is similar to that given by Coriell, Boisvert, Rehm, and Sekerka, except that a more realistic cylindrical coordinate system moving with the interface is used. Analytical results were obtained for very small and very large values of beta, where beta = VR/D, V is the velocity of solidification, R the radius of the specimen, and D the diffusivity of the solute in the liquid. For both very small and very large beta, the solute concentration at the interface in the solid, C(si), approaches C(o) (the original solute concentration), i.e., the deviation is minimal. The maximum deviation of C(si) from C(o) occurs for some intermediate value of beta.

  5. Large deviations in the presence of cooperativity and slow dynamics

    NASA Astrophysics Data System (ADS)

    Whitelam, Stephen

    2018-06-01

    We study simple models of intermittency, involving switching between two states, within the dynamical large-deviation formalism. Singularities appear in the formalism when switching is cooperative or when its basic time scale diverges. In the first case the unbiased trajectory distribution undergoes a symmetry breaking, leading to a change in shape of the large-deviation rate function for a particular dynamical observable. In the second case the symmetry of the unbiased trajectory distribution remains unbroken. Comparison of these models suggests that singularities of the dynamical large-deviation formalism can signal the dynamical equivalent of an equilibrium phase transition but do not necessarily do so.

  6. Efficiency and large deviations in time-asymmetric stochastic heat engines

    DOE PAGES

    Gingrich, Todd R.; Rotskoff, Grant M.; Vaikuntanathan, Suriyanarayanan; ...

    2014-10-24

    In a stochastic heat engine driven by a cyclic non-equilibrium protocol, fluctuations in work and heat give rise to a fluctuating efficiency. Using computer simulations and tools from large deviation theory, we have examined these fluctuations in detail for a model two-state engine. We find in general that the form of efficiency probability distributions is similar to those described by Verley et al (2014 Nat. Commun. 5 4721), in particular featuring a local minimum in the long-time limit. In contrast to the time-symmetric engine protocols studied previously, however, this minimum need not occur at the value characteristic of a reversible Carnot engine. Furthermore, while the local minimum may reside at the global minimum of a large deviation rate function, it does not generally correspond to the least likely efficiency measured over finite time. Lastly, we introduce a general approximation for the finite-time efficiency distribution, $P(\eta)$, based on large deviation statistics of work and heat, that remains very accurate even when $P(\eta)$ deviates significantly from its large deviation form.

  7. Statistics of velocity fluctuations of Geldart A particles in a circulating fluidized bed riser

    DOE PAGES

    Vaidheeswaran, Avinash; Shaffer, Franklin; Gopalan, Balaji

    2017-11-21

    Here, the statistics of fluctuating velocity components are studied in the riser of a closed-loop circulating fluidized bed with fluid catalytic cracking catalyst particles. Our analysis shows distinct similarities as well as deviations compared to existing theories and bench-scale experiments. The study confirms anisotropic and non-Maxwellian distribution of fluctuating velocity components. The velocity distribution functions (VDFs) corresponding to transverse fluctuations exhibit symmetry, and follow a stretched-exponential behavior up to three standard deviations. The form of the transverse VDF is largely determined by interparticle interactions. The tails become more overpopulated with an increase in particle loading. The observed deviations from the Gaussian distribution are represented using the leading order term in the Sonine expansion, which is commonly used to approximate the VDFs in kinetic theory for granular flows. The vertical fluctuating VDFs are asymmetric and the skewness shifts as the wall is approached. In comparison to transverse fluctuations, the vertical VDF is determined by the local hydrodynamics. This is an observation of particle velocity fluctuations in a large-scale system and their quantitative comparison with the Maxwell-Boltzmann statistics.

  8. A Genetic Algorithm for Flow Shop Scheduling with Assembly Operations to Minimize Makespan

    NASA Astrophysics Data System (ADS)

    Bhongade, A. S.; Khodke, P. M.

    2014-04-01

    Manufacturing systems in which several parts are processed through machining workstations and later assembled to form final products are common. Though the scheduling of such problems is solved using heuristics, available solution approaches can handle only moderately sized problems due to the large computation time required. In this work, a scheduling approach is developed for such flow-shop manufacturing systems having machining workstations followed by assembly workstations. The initial schedule is generated using the disjunctive method, and a genetic algorithm (GA) is then applied to generate schedules for large problems. The lower bound on the makespan of such problems is estimated, and the percent deviation of the makespan from the lower bound is used as a performance measure to evaluate the schedules; on this measure, the GA is found to give near-optimal solutions. Computational experiments are conducted on problems generated using a fractional factorial orthogonal array, varying the number of parts per product, the number of products, and the number of workstations (ranging up to 1,520 operations). A statistical analysis indicated the significance of all three factors considered. It is concluded that the GA method can obtain optimal makespans.
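
    A compact sketch of this recipe under simplifying assumptions (a plain permutation flow shop without the assembly stage; generic GA operators rather than the authors'; processing times and GA parameters are illustrative):

    import random

    def makespan(perm, p):
        """Completion time of the last job on the last machine (standard flow-shop recursion)."""
        m = len(p[0])
        C = [0.0] * m
        for j in perm:
            C[0] += p[j][0]
            for k in range(1, m):
                C[k] = max(C[k], C[k - 1]) + p[j][k]
        return C[-1]

    def lower_bound(p):
        """Machine-load bound: the schedule can never beat the busiest machine's total load."""
        return max(sum(p[j][k] for j in range(len(p))) for k in range(len(p[0])))

    def ga(p, pop=40, gens=200, rng=random.Random(1)):
        n = len(p)
        popu = [rng.sample(range(n), n) for _ in range(pop)]
        for _ in range(gens):
            popu.sort(key=lambda perm: makespan(perm, p))
            elite = popu[: pop // 2]                     # truncation selection
            children = []
            while len(children) < pop - len(elite):
                a, b = rng.sample(elite, 2)
                cut = rng.randrange(1, n)                # one-point order crossover
                child = a[:cut] + [j for j in b if j not in a[:cut]]
                i, k = rng.sample(range(n), 2)           # swap mutation
                child[i], child[k] = child[k], child[i]
                children.append(child)
            popu = elite + children
        best = min(popu, key=lambda perm: makespan(perm, p))
        return best, makespan(best, p)

    rng = random.Random(7)
    p = [[rng.uniform(1, 10) for _ in range(5)] for _ in range(20)]   # 20 jobs x 5 machines
    best, ms = ga(p)
    lb = lower_bound(p)
    print(f"makespan {ms:.1f}, lower bound {lb:.1f}, deviation {100 * (ms - lb) / lb:.1f}%")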

  9. A Priori Subgrid Analysis of Temporal Mixing Layers with Evaporating Droplets

    NASA Technical Reports Server (NTRS)

    Okongo, Nora; Bellan, Josette

    1999-01-01

    Subgrid analysis of a transitional temporal mixing layer with evaporating droplets has been performed using three sets of results from a Direct Numerical Simulation (DNS) database, with Reynolds numbers (based on initial vorticity thickness) as large as 600 and with droplet mass loadings as large as 0.5. In the DNS, the gas phase is computed using a Eulerian formulation, with Lagrangian droplet tracking. The Large Eddy Simulation (LES) equations corresponding to the DNS are derived first, and the key assumptions made in deriving them are confirmed by computing the relevant terms from the DNS database. Since LES of this flow requires the computation of unfiltered gas-phase variables at droplet locations from filtered gas-phase variables at the grid points, it is proposed to model these by assuming the gas-phase variables to be the sum of the filtered variables and a correction based on the filtered standard deviation; this correction is then computed from the Subgrid Scale (SGS) standard deviation. This model predicts the unfiltered variables at droplet locations considerably better than simply interpolating the filtered variables. Three methods are investigated for modeling the SGS standard deviation: the Smagorinsky approach, the Gradient model, and the Scale-Similarity formulation. When the proportionality constant inherent in the SGS models is properly calculated, the Gradient and Scale-Similarity methods give results in excellent agreement with the DNS.

  10. Transport Coefficients from Large Deviation Functions

    NASA Astrophysics Data System (ADS)

    Gao, Chloe; Limmer, David

    2017-10-01

    We describe a method for computing transport coefficients from the direct evaluation of large deviation functions. This method is general, relying only on equilibrium fluctuations, and is statistically efficient, employing trajectory-based importance sampling. Equilibrium fluctuations of molecular currents are characterized by their large deviation functions, which are scaled cumulant generating functions analogous to the free energy. A diffusion Monte Carlo algorithm is used to evaluate the large deviation functions, from which arbitrary transport coefficients are derivable. We find significant statistical improvement over traditional Green-Kubo based calculations. The systematic and statistical errors of this method are analyzed in the context of specific transport coefficient calculations, including the shear viscosity, the interfacial friction coefficient, and the thermal conductivity.
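
    The Green-Kubo baseline referred to above, in its simplest form: a transport coefficient is the time integral of an equilibrium current autocorrelation function, e.g. D = ∫ ⟨v(0)v(t)⟩ dt for one-dimensional self-diffusion. The sketch below uses an Ornstein-Uhlenbeck velocity trace as a stand-in for molecular-dynamics data; parameters are illustrative.

    import numpy as np

    rng = np.random.default_rng(3)
    dt, n, gamma, kT = 0.01, 200_000, 1.0, 1.0
    v = np.empty(n)
    v[0] = 0.0
    for i in range(1, n):                     # Langevin (OU) velocity process
        v[i] = v[i-1] - gamma * v[i-1] * dt + np.sqrt(2 * gamma * kT * dt) * rng.normal()

    def acf(x, nmax):
        """Autocorrelation of x for lags 0 .. nmax-1."""
        x = x - x.mean()
        return np.array([np.dot(x[:len(x)-k], x[k:]) / (len(x) - k) for k in range(nmax)])

    c = acf(v, 1000)
    D = np.trapz(c, dx=dt)                    # Green-Kubo integral; kT/gamma = 1 for this model
    print(f"D ~ {D:.2f} (exact 1.0 for this model)")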

  11. Importance sampling large deviations in nonequilibrium steady states. I.

    PubMed

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T

    2018-03-28

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.
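
    A minimal sketch of the cloning (diffusion Monte Carlo) idea analyzed in the paper, for a toy two-state Markov chain whose observable counts 0-to-1 jumps: walkers carry a weight exp(s * increment) per step, the population is resampled in proportion to these weights, and the scaled cumulant generating function psi(s) is accumulated from the mean weight. All parameters are illustrative, and, as the abstract notes, realistic applications require guiding functions.

    import numpy as np

    P = np.array([[0.7, 0.3],
                  [0.4, 0.6]])                # transition probabilities
    g = np.array([[0.0, 1.0],
                  [0.0, 0.0]])                # observable increment: counts 0 -> 1 jumps

    def scgf_cloning(s, n_walkers=5000, n_steps=2000, rng=None):
        """Cloning estimate of psi(s) = lim (1/t) ln < exp(s J_t) >."""
        rng = rng or np.random.default_rng(1)
        x = np.zeros(n_walkers, dtype=int)    # walker states
        log_psi = 0.0
        for _ in range(n_steps):
            x_new = np.where(rng.random(n_walkers) < P[x, 0], 0, 1)  # propagate walkers
            w = np.exp(s * g[x, x_new])                              # exponential bias
            log_psi += np.log(w.mean())
            idx = rng.choice(n_walkers, size=n_walkers, p=w / w.sum())
            x = x_new[idx]                                           # clone / prune population
        return log_psi / n_steps

    s = 0.5
    exact = np.log(np.linalg.eigvals(P * np.exp(s * g)).real.max())  # tilted-matrix Perron root
    print(f"psi({s}): cloning ~ {scgf_cloning(s):.4f}, exact {exact:.4f}")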

  12. Importance sampling large deviations in nonequilibrium steady states. I

    NASA Astrophysics Data System (ADS)

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T.

    2018-03-01

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.

  13. iATTRACT: simultaneous global and local interface optimization for protein-protein docking refinement.

    PubMed

    Schindler, Christina E M; de Vries, Sjoerd J; Zacharias, Martin

    2015-02-01

    Protein-protein interactions are abundant in the cell but to date structural data for a large number of complexes is lacking. Computational docking methods can complement experiments by providing structural models of complexes based on structures of the individual partners. A major caveat for docking success is accounting for protein flexibility. In particular, interface residues undergo significant conformational changes upon binding. This limits the performance of docking methods that keep partner structures rigid or allow limited flexibility. A new docking refinement approach, iATTRACT, has been developed which combines simultaneous full interface flexibility and rigid body optimizations during docking energy minimization. It employs an atomistic molecular mechanics force field for intermolecular interface interactions and a structure-based force field for intramolecular contributions. The approach was systematically evaluated on a large protein-protein docking benchmark, starting from an enriched decoy set of rigidly docked protein-protein complexes deviating by up to 15 Å from the native structure at the interface. Large improvements in sampling and slight but significant improvements in scoring/discrimination of near native docking solutions were observed. Complexes with initial deviations at the interface of up to 5.5 Å were refined to significantly better agreement with the native structure. Improvements in the fraction of native contacts were especially favorable, yielding increases of up to 70%. © 2014 Wiley Periodicals, Inc.

  14. On the variability of the Priestley-Taylor coefficient over water bodies

    NASA Astrophysics Data System (ADS)

    Assouline, Shmuel; Li, Dan; Tyler, Scott; Tanny, Josef; Cohen, Shabtai; Bou-Zeid, Elie; Parlange, Marc; Katul, Gabriel G.

    2016-01-01

    Deviations in the Priestley-Taylor (PT) coefficient αPT from its accepted 1.26 value are analyzed over large lakes, reservoirs, and wetlands where stomatal or soil controls are minimal or absent. The data sets feature wide variations in water body sizes and climatic conditions. Neither surface temperature nor sensible heat flux variations alone, which proved successful in characterizing αPT variations over some crops, explain measured deviations in αPT over water. It is shown that the relative transport efficiency of turbulent heat and water vapor is key to explaining variations in αPT over water surfaces, thereby offering a new perspective over the concept of minimal advection or entrainment introduced by PT. Methods that allow the determination of αPT based on low-frequency sampling (i.e., 0.1 Hz) are then developed and tested, which are usable with standard meteorological sensors that filter some but not all turbulent fluctuations. Using approximations to the Gram determinant inequality, the relative transport efficiency is derived as a function of the correlation coefficient between temperature and water vapor concentration fluctuations (RTq). The proposed approach reasonably explains the measured deviations from the conventional αPT = 1.26 value even when RTq is determined from air temperature and water vapor concentration time series that are Gaussian-filtered and subsampled to a cutoff frequency of 0.1 Hz. Because over water bodies, RTq deviations from unity are often associated with advection and/or entrainment, linkages between αPT and RTq offer both a diagnostic approach to assess their significance and a prognostic approach to correct the 1.26 value when using routine meteorological measurements of temperature and humidity.
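
    A sketch of the low-frequency diagnostic described above, under stated assumptions (synthetic 10 Hz series stand in for temperature and humidity data; the filter-width-to-cutoff relation is approximate): Gaussian-filter both series, subsample to 0.1 Hz, then correlate.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    rng = np.random.default_rng(5)
    fs = 10.0                                    # raw sampling rate (Hz)
    n = 36000                                    # one hour of data
    common = np.cumsum(rng.normal(size=n))       # shared turbulent forcing (placeholder)
    T = common + 5.0 * rng.normal(size=n)        # temperature fluctuations + noise
    q = 0.8 * common + 5.0 * rng.normal(size=n)  # water-vapour fluctuations + noise

    def r_tq(T, q, fs, f_cut=0.1):
        sigma = fs / (2.0 * np.pi * f_cut)       # filter width ~ cutoff frequency
        step = int(fs / f_cut)                   # subsample to the cutoff rate
        Tf = gaussian_filter1d(T, sigma)[::step]
        qf = gaussian_filter1d(q, sigma)[::step]
        return np.corrcoef(Tf, qf)[0, 1]

    print(f"R_Tq (0.1 Hz filtered) ~ {r_tq(T, q, fs):.2f}")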

  15. Point-based and model-based geolocation analysis of airborne laser scanning data

    NASA Astrophysics Data System (ADS)

    Sefercik, Umut Gunes; Buyuksalih, Gurcan; Jacobsen, Karsten; Alkan, Mehmet

    2017-01-01

    Airborne laser scanning (ALS) is one of the most effective remote sensing technologies providing precise three-dimensional (3-D) dense point clouds. A large-size ALS digital surface model (DSM) covering the whole Istanbul province was analyzed by point-based and model-based comprehensive statistical approaches. Point-based analysis was performed using checkpoints on flat areas. Model-based approaches were implemented in two steps: strip-to-strip comparison of overlapping ALS DSMs individually in three subareas, and comparison of the merged ALS DSMs with terrestrial laser scanning (TLS) DSMs in four other subareas. In the model-based approach, the standard deviation of height and the normalized median absolute deviation were used as the accuracy indicators, combined with the dependency on terrain inclination. The results demonstrate that terrain roughness has a strong impact on the vertical accuracy of ALS DSMs. The relative horizontal shifts between overlapping strips, determined and partially corrected by merging the strips, and the differences between the ALS and TLS data were found not to be negligible. The analysis of the ALS DSM in relation to the TLS DSM allowed us to determine the characteristics of the DSM in detail.
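
    The two accuracy indicators named above have standard forms, sketched here: the standard deviation of height differences and the normalized median absolute deviation, NMAD = 1.4826 * median(|dh - median(dh)|), which is robust to outliers in DSM comparisons. The residuals below are placeholders.

    import numpy as np

    def height_accuracy(dh):
        """dh: array of height differences (e.g., ALS DSM minus TLS DSM)."""
        sz = np.std(dh, ddof=1)
        nmad = 1.4826 * np.median(np.abs(dh - np.median(dh)))
        return sz, nmad

    dh = np.random.default_rng(2).normal(0.0, 0.15, 10_000)   # placeholder residuals (m)
    sz, nmad = height_accuracy(dh)
    print(f"SZ = {sz:.3f} m, NMAD = {nmad:.3f} m")            # both ~0.15 m for Gaussian dh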

  16. Improving the distinguishable cluster results: spin-component scaling

    NASA Astrophysics Data System (ADS)

    Kats, Daniel

    2018-06-01

    The spin-component scaling is employed in the energy evaluation to improve the distinguishable cluster approach. SCS-DCSD reaction energies reproduce reference values with a root-mean-square deviation well below 1 kcal/mol, the interaction energies are three to five times more accurate than with DCSD, and molecular systems with a large amount of static electron correlation are still described reasonably well. SCS-DCSD represents a pragmatic approach to achieving chemical accuracy with a simple method without triples, which can also be applied to multi-configurational molecular systems.

  17. Large Deviations for Nonlocal Stochastic Neural Fields

    PubMed Central

    2014-01-01

    We study the effect of additive noise on integro-differential neural field equations. In particular, we analyze an Amari-type model driven by a Q-Wiener process, and focus on noise-induced transitions and escape. We argue that proving a sharp Kramers’ law for neural fields poses substantial difficulties, but that one may transfer techniques from stochastic partial differential equations to establish a large deviation principle (LDP). Then we demonstrate that an efficient finite-dimensional approximation of the stochastic neural field equation can be achieved using a Galerkin method and that the resulting finite-dimensional rate function for the LDP can have a multiscale structure in certain cases. These results form the starting point for an efficient practical computation of the LDP. Our approach also provides the technical basis for further rigorous study of noise-induced transitions in neural fields based on Galerkin approximations. Mathematics Subject Classification (2000): 60F10, 60H15, 65M60, 92C20. PMID:24742297

  18. Predictor symbology in computer-generated pictorial displays

    NASA Technical Reports Server (NTRS)

    Grunwald, A. J.

    1981-01-01

    The display under investigation is a tunnel display for four-dimensional commercial aircraft approaches to landing under instrument flight rules. It is investigated whether more complex predictive information, such as a three-dimensional perspective vehicle symbol predicting the future vehicle position as well as the future vehicle attitude angles, contributes to a better system response, and suitable predictor laws for the predictor motions are formulated. Methods for utilizing the predictor symbol to control the forward velocity of the aircraft in four-dimensional approaches are investigated. The simulator tests show that the complex perspective vehicle symbol yields improved damping in the lateral response compared to a flat two-dimensional predictor cross, but generally yields larger vertical deviations. Methods of using the predictor symbol to control the forward velocity of the vehicle are shown to be effective. The tunnel display with superimposed perspective vehicle symbol yields very satisfactory results and pilot acceptance in lateral control but is found to be unsatisfactory in vertical control, as a result of too-large vertical path-angle deviations.

  19. Objective Motion Cueing Criteria Investigation Based on Three Flight Tasks

    NASA Technical Reports Server (NTRS)

    Zaal, Petrus M. T.; Schroeder, Jeffery A.; Chung, William W.

    2015-01-01

    This paper intends to help establish fidelity criteria to accompany the simulator motion system diagnostic test specified by the International Civil Aviation Organization. Twelve airline transport pilots flew three tasks in the NASA Vertical Motion Simulator under four different motion conditions. The experiment used three different hexapod motion configurations, each with a different tradeoff between motion filter gain and break frequency, and one large motion configuration that utilized as much of the simulator's motion space as possible. The motion condition significantly affected: 1) pilot motion fidelity ratings, and sink rate and lateral deviation at touchdown for the approach and landing task, 2) pilot motion fidelity ratings, roll deviations, maximum pitch rate, and number of stick shaker activations in the stall task, and 3) heading deviation after an engine failure in the takeoff task. Significant differences in pilot-vehicle performance were used to define initial objective motion cueing criteria boundaries. These initial fidelity boundaries show promise but need refinement.

  20. Cumulants and large deviations of the current through non-equilibrium steady states

    NASA Astrophysics Data System (ADS)

    Bodineau, Thierry; Derrida, Bernard

    2007-06-01

    Using a generalisation of detailed balance for systems maintained out of equilibrium by contact with two reservoirs at unequal temperatures or at unequal densities, one can recover the fluctuation theorem for the large deviation function of the current. For large diffusive systems, we show how the large deviation function of the current can be computed using a simple additivity principle. The validity of this additivity principle and the occurrence of phase transitions are discussed in the framework of the macroscopic fluctuation theory. To cite this article: T. Bodineau, B. Derrida, C. R. Physique 8 (2007).

  1. Approaching the basis set limit for DFT calculations using an environment-adapted minimal basis with perturbation theory: Formulation, proof of concept, and a pilot implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mao, Yuezhi; Horn, Paul R.; Mardirossian, Narbe

    2016-07-28

    Recently developed density functionals have good accuracy for both thermochemistry (TC) and non-covalent interactions (NC) if very large atomic orbital basis sets are used. To approach the basis set limit with potentially lower computational cost, a new self-consistent field (SCF) scheme is presented that employs minimal adaptive basis (MAB) functions. The MAB functions are optimized on each atomic site by minimizing a surrogate function. High accuracy is obtained by applying a perturbative correction (PC) to the MAB calculation, similar to dual basis approaches. Compared to exact SCF results, using this MAB-SCF (PC) approach with the same large target basis set produces <0.15 kcal/mol root-mean-square deviations for most of the tested TC datasets, and <0.1 kcal/mol for most of the NC datasets. The performance of density functionals near the basis set limit can be even better reproduced. With further improvement to its implementation, MAB-SCF (PC) is a promising lower-cost substitute for conventional large-basis calculations as a method to approach the basis set limit of modern density functionals.

  2. Degrees of Freedom for Allan Deviation Estimates of Multiple Clocks

    DTIC Science & Technology

    2016-04-01

    Allan deviation will be represented by σ and standard deviation will be represented by δ. In practice, when the Allan deviation of a ... the Allan deviation of standard noise types. Once the number of degrees of freedom is known, an approximate confidence interval can be assigned by ... measurement errors from paired difference data. We extend this approach by using the Allan deviation to estimate the error in a frequency standard
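
    For reference, the non-overlapping Allan deviation has the standard form sigma_y(tau)^2 = <(ybar_{k+1} - ybar_k)^2> / 2, where the ybar_k are tau-averaged fractional-frequency samples. A sketch using synthetic white-frequency noise as a placeholder:

    import numpy as np

    def allan_deviation(y, m):
        """y: fractional-frequency samples; m: averaging factor (tau = m * tau0)."""
        n = (len(y) // m) * m
        ybar = y[:n].reshape(-1, m).mean(axis=1)        # tau-averaged frequencies
        return np.sqrt(0.5 * np.mean(np.diff(ybar) ** 2))

    y = np.random.default_rng(8).normal(0, 1e-11, 100_000)        # white frequency noise
    for m in (1, 10, 100):
        print(f"m={m:4d}: sigma_y ~ {allan_deviation(y, m):.2e}")  # falls as 1/sqrt(m)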

  3. Rogue waves and large deviations in deep sea.

    PubMed

    Dematteis, Giovanni; Grafke, Tobias; Vanden-Eijnden, Eric

    2018-01-30

    The appearance of rogue waves in deep sea is investigated by using the modified nonlinear Schrödinger (MNLS) equation in one spatial dimension with random initial conditions that are assumed to be normally distributed, with a spectrum approximating realistic conditions of a unidirectional sea state. It is shown that one can use the incomplete information contained in this spectrum as prior and supplement this information with the MNLS dynamics to reliably estimate the probability distribution of the sea surface elevation far in the tail at later times. Our results indicate that rogue waves occur when the system hits unlikely pockets of wave configurations that trigger large disturbances of the surface height. The rogue wave precursors in these pockets are wave patterns of regular height, but with a very specific shape that is identified explicitly, thereby allowing for early detection. The method proposed here combines Monte Carlo sampling with tools from large deviations theory that reduce the calculation of the most likely rogue wave precursors to an optimization problem that can be solved efficiently. This approach is transferable to other problems in which the system's governing equations contain random initial conditions and/or parameters.

  4. Experimental Validation of the Transverse Shear Behavior of a Nomex Core for Sandwich Panels

    NASA Astrophysics Data System (ADS)

    Farooqi, M. I.; Nasir, M. A.; Ali, H. M.; Ali, Y.

    2017-05-01

    This work deals with the determination of the transverse shear moduli of a Nomex® honeycomb core for sandwich panels, whose out-of-plane shear characteristics depend on these moduli. The moduli were determined experimentally, numerically, and analytically: numerical simulations were performed using a unit-cell model, and three analytical approaches were considered. Two of the analytical approaches provided reasonable predictions of the transverse shear modulus as compared with experimental results, whereas the approach based upon classical lamination theory showed large deviations from the experimental data. The numerical simulations showed a trend similar to that of the analytical models.

  5. Effect of stress on energy flux deviation of ultrasonic waves in GR/EP composites

    NASA Technical Reports Server (NTRS)

    Prosser, William H.; Kriz, R. D.; Fitting, Dale W.

    1990-01-01

    Ultrasonic waves suffer energy flux deviation in graphite/epoxy because of the large anisotropy. The angle of deviation is a function of the elastic coefficients. For nonlinear solids, these coefficients, and thus the angle of deviation, are functions of stress. Acoustoelastic theory was used to model the effect of stress on flux deviation for unidirectional T300/5208 using previously measured elastic coefficients. Computations were made for uniaxial stress along the x3 axis (fiber axis) and the x1 axis for waves propagating in the x1-x3 plane. These results predict a shift as large as three degrees for the quasi-transverse wave. The shift in energy flux offers a new nondestructive technique of evaluating stress in composites.

  6. Online Deviation Detection for Medical Processes

    PubMed Central

    Christov, Stefan C.; Avrunin, George S.; Clarke, Lori A.

    2014-01-01

    Human errors are a major concern in many medical processes. To help address this problem, we are investigating an approach for automatically detecting when performers of a medical process deviate from the acceptable ways of performing that process as specified by a detailed process model. Such deviations could represent errors and, thus, detecting and reporting deviations as they occur could help catch errors before harm is done. In this paper, we identify important issues related to the feasibility of the proposed approach and empirically evaluate the approach for two medical procedures, chemotherapy and blood transfusion. For the evaluation, we use the process models to generate sample process executions that we then seed with synthetic errors. The process models describe the coordination of activities of different process performers in normal, as well as in exceptional situations. The evaluation results suggest that the proposed approach could be applied in clinical settings to help catch errors before harm is done. PMID:25954343

  7. Large Deviations for Stochastic Models of Two-Dimensional Second Grade Fluids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhai, Jianliang, E-mail: zhaijl@ustc.edu.cn; Zhang, Tusheng, E-mail: Tusheng.Zhang@manchester.ac.uk

    2017-06-15

    In this paper, we establish a large deviation principle for stochastic models of incompressible second grade fluids. The weak convergence method introduced by Budhiraja and Dupuis (Probab Math Statist 20:39–61, 2000) plays an important role.

  8. Large deviation theory for the kinetics and energetics of turnover of enzyme catalysis in a chemiostatic flow.

    PubMed

    Das, Biswajit; Gangopadhyay, Gautam

    2018-05-07

    In the framework of large deviation theory, we have characterized the nonequilibrium turnover statistics of enzyme catalysis in a chemiostatic flow with externally controllable parameters, such as the substrate injection rate and mechanical force. For the kinetics of the process, we have shown the fluctuation theorems in terms of the symmetry of the scaled cumulant generating function (SCGF) in the transient and steady-state regimes, and a similar symmetry rule is reflected in the large deviation rate function (LDRF) as a property of the dissipation rate through the boundaries. Large deviation theory also gives the thermodynamic force of a nonequilibrium steady state, as is usually recorded experimentally by single-molecule techniques, which plays a key role in the dynamical symmetry of the SCGF and LDRF. Using some special properties of the Legendre transformation, we provide a relation between the fluctuations of fluxes and dissipation rates; among them, the fluctuation of the turnover rate is routinely estimated, but the fluctuation in the dissipation rate is yet to be characterized for small systems. Such an enzymatic reaction flow system can be a very good testing ground for systematically understanding rare events through large deviation theory, which goes beyond the fluctuation theorem and the central limit theorem.
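
    In standard notation (sign conventions vary between papers), the SCGF and LDRF mentioned above are Legendre duals, and a Gallavotti-Cohen-type symmetry of the SCGF translates into an antisymmetry of the rate function, with ε denoting the relevant affinity:

    \psi(s) = \lim_{t\to\infty} \frac{1}{t}\,\ln\left\langle e^{s J_t} \right\rangle,
    \qquad
    I(j) = \sup_{s}\left[\, s j - \psi(s) \,\right],
    \qquad
    \psi(s) = \psi(-s-\epsilon) \;\Longrightarrow\; I(j) - I(-j) = -\epsilon\, j .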

  9. Large deviation theory for the kinetics and energetics of turnover of enzyme catalysis in a chemiostatic flow

    NASA Astrophysics Data System (ADS)

    Das, Biswajit; Gangopadhyay, Gautam

    2018-05-01

    In the framework of large deviation theory, we have characterized the nonequilibrium turnover statistics of enzyme catalysis in a chemiostatic flow with externally controllable parameters, such as the substrate injection rate and mechanical force. For the kinetics of the process, we have shown the fluctuation theorems in terms of the symmetry of the scaled cumulant generating function (SCGF) in the transient and steady-state regimes, and a similar symmetry rule is reflected in the large deviation rate function (LDRF) as a property of the dissipation rate through the boundaries. Large deviation theory also gives the thermodynamic force of a nonequilibrium steady state, as is usually recorded experimentally by single-molecule techniques, which plays a key role in the dynamical symmetry of the SCGF and LDRF. Using some special properties of the Legendre transformation, we provide a relation between the fluctuations of fluxes and dissipation rates; among them, the fluctuation of the turnover rate is routinely estimated, but the fluctuation in the dissipation rate is yet to be characterized for small systems. Such an enzymatic reaction flow system can be a very good testing ground for systematically understanding rare events through large deviation theory, which goes beyond the fluctuation theorem and the central limit theorem.

  10. Uncertainty quantification of CO₂ saturation estimated from electrical resistance tomography data at the Cranfield site

    DOE PAGES

    Yang, Xianjin; Chen, Xiao; Carrigan, Charles R.; ...

    2014-06-03

    A parametric bootstrap approach is presented for uncertainty quantification (UQ) of CO₂ saturation derived from electrical resistance tomography (ERT) data collected at the Cranfield, Mississippi (USA) carbon sequestration site. There are many sources of uncertainty in ERT-derived CO₂ saturation, but we focus on how the ERT observation errors propagate to the estimated CO₂ saturation in a nonlinear inversion process. Our UQ approach consists of three steps. We first estimated the observational errors from a large number of reciprocal ERT measurements. The second step was to invert the pre-injection baseline data, and the resulting resistivity tomograph was used as the prior information for nonlinear inversion of time-lapse data. We assigned a 3% random noise to the baseline model. Finally, we used a parametric bootstrap method to obtain bootstrap CO₂ saturation samples by deterministically solving a nonlinear inverse problem many times with resampled data and resampled baseline models. Then the mean and standard deviation of CO₂ saturation were calculated from the bootstrap samples. We found that the maximum standard deviation of CO₂ saturation was around 6% with a corresponding maximum saturation of 30% for a data set collected 100 days after injection began. There was no apparent spatial correlation between the mean and standard deviation of CO₂ saturation but the standard deviation values increased with time as the saturation increased. The uncertainty in CO₂ saturation also depends on the ERT reciprocal error threshold used to identify and remove noisy data and inversion constraints such as temporal roughness. Five hundred realizations requiring 3.5 h on a single 12-core node were needed for the nonlinear Monte Carlo inversion to arrive at stationary variances, while the Markov Chain Monte Carlo (MCMC) stochastic inverse approach may take days for a global search. This indicates that UQ of 2D or 3D ERT inverse problems can be performed on a laptop or desktop PC.
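
    A sketch of the three-step bootstrap workflow described above, under stated assumptions: the forward model and the "inversion" are toy placeholders (a linear least-squares fit standing in for the nonlinear ERT inversion); only the resampling logic mirrors the text.

    import numpy as np

    rng = np.random.default_rng(4)
    x = np.linspace(0, 1, 50)
    true = 2.0 * x + 1.0
    noise_sd = 0.1 * np.abs(true) + 0.02      # step 1 stand-in: error level that would
                                              # come from reciprocal measurements
    data = true + rng.normal(0, noise_sd)

    def invert(d):
        """Placeholder 'inversion': least-squares fit standing in for ERT."""
        A = np.vstack([x, np.ones_like(x)]).T
        return A @ np.linalg.lstsq(A, d, rcond=None)[0]

    baseline = invert(data)                   # step 2: deterministic baseline inversion
    boots = np.array([invert(baseline + rng.normal(0, noise_sd))   # step 3: parametric
                      for _ in range(500)])   # bootstrap, 500 realizations as in the paper
    mean, sd = boots.mean(axis=0), boots.std(axis=0)
    print(f"max bootstrap standard deviation: {sd.max():.3f}")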

  11. Variability of pesticide detections and concentrations in field replicate water samples collected for the National Water-Quality Assessment Program, 1992-97

    USGS Publications Warehouse

    Martin, Jeffrey D.

    2002-01-01

    Correlation analysis indicates that for most pesticides and concentrations, pooled estimates of relative standard deviation rather than pooled estimates of standard deviation should be used to estimate variability because pooled estimates of relative standard deviation are less affected by heteroscedasticity. The median pooled relative standard deviation was calculated for all pesticides to summarize the typical variability for pesticide data collected for the NAWQA Program. The median pooled relative standard deviation was 15 percent at concentrations less than 0.01 micrograms per liter (µg/L), 13 percent at concentrations near 0.01 µg/L, 12 percent at concentrations near 0.1 µg/L, 7.9 percent at concentrations near 1 µg/L, and 2.7 percent at concentrations greater than 5 µg/L. Pooled estimates of standard deviation or relative standard deviation presented in this report are larger than estimates based on averages, medians, smooths, or regression of the individual measurements of standard deviation or relative standard deviation from field replicates. Pooled estimates, however, are the preferred method for characterizing variability because they provide unbiased estimates of the variability of the population. Assessments of variability based on standard deviation (rather than variance) underestimate the true variability of the population. Because pooled estimates of variability are larger than estimates based on other approaches, users of estimates of variability must be cognizant of the approach used to obtain the estimate and must use caution in the comparison of estimates based on different approaches.
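
    One common convention for pooling replicate-pair variability, sketched below (a plausible reading of the report's approach, not its exact computation): each field-duplicate pair (x1, x2) contributes the variance estimate (x1 - x2)^2 / 2, variances are averaged across pairs, and the relative version divides by the squared pair mean before pooling. The concentrations are placeholders.

    import numpy as np

    def pooled_sd(x1, x2):
        return np.sqrt(np.mean((x1 - x2) ** 2 / 2.0))

    def pooled_rsd(x1, x2):
        m = (x1 + x2) / 2.0
        return np.sqrt(np.mean((x1 - x2) ** 2 / (2.0 * m ** 2)))

    rng = np.random.default_rng(6)
    conc = rng.lognormal(-3, 1, 200)                  # placeholder concentrations (ug/L)
    x1 = conc * (1 + 0.12 * rng.normal(size=200))     # replicate 1, ~12% relative error
    x2 = conc * (1 + 0.12 * rng.normal(size=200))     # replicate 2
    print(f"pooled RSD ~ {100 * pooled_rsd(x1, x2):.0f}%")   # recovers ~12%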

  12. On the Distribution of Protein Refractive Index Increments

    PubMed Central

    Zhao, Huaying; Brown, Patrick H.; Schuck, Peter

    2011-01-01

    The protein refractive index increment, dn/dc, is an important parameter underlying the concentration determination and the biophysical characterization of proteins and protein complexes in many techniques. In this study, we examine the widely used assumption that most proteins have dn/dc values in a very narrow range, and reappraise the prediction of dn/dc of unmodified proteins based on their amino acid composition. Applying this approach in large scale to the entire set of known and predicted human proteins, we obtain, for the first time, to our knowledge, an estimate of the full distribution of protein dn/dc values. The distribution is close to Gaussian with a mean of 0.190 ml/g (for unmodified proteins at 589 nm) and a standard deviation of 0.003 ml/g. However, small proteins <10 kDa exhibit a larger spread, and almost 3000 proteins have values deviating by more than two standard deviations from the mean. Due to the widespread availability of protein sequences and the potential for outliers, the compositional prediction should be convenient and provide greater accuracy than an average consensus value for all proteins. We discuss how this approach should be particularly valuable for certain protein classes where a high dn/dc is coincidental to structural features, or may be functionally relevant such as in proteins of the eye. PMID:21539801

  13. On the distribution of protein refractive index increments.

    PubMed

    Zhao, Huaying; Brown, Patrick H; Schuck, Peter

    2011-05-04

    The protein refractive index increment, dn/dc, is an important parameter underlying the concentration determination and the biophysical characterization of proteins and protein complexes in many techniques. In this study, we examine the widely used assumption that most proteins have dn/dc values in a very narrow range, and reappraise the prediction of dn/dc of unmodified proteins based on their amino acid composition. Applying this approach in large scale to the entire set of known and predicted human proteins, we obtain, for the first time, to our knowledge, an estimate of the full distribution of protein dn/dc values. The distribution is close to Gaussian with a mean of 0.190 ml/g (for unmodified proteins at 589 nm) and a standard deviation of 0.003 ml/g. However, small proteins <10 kDa exhibit a larger spread, and almost 3000 proteins have values deviating by more than two standard deviations from the mean. Due to the widespread availability of protein sequences and the potential for outliers, the compositional prediction should be convenient and provide greater accuracy than an average consensus value for all proteins. We discuss how this approach should be particularly valuable for certain protein classes where a high dn/dc is coincidental to structural features, or may be functionally relevant such as in proteins of the eye. Copyright © 2011 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  14. Investigating the Effects of Magnetic Variations on Inertial/Magnetic Orientation Sensors

    DTIC Science & Technology

    2007-09-01

    caused by test objects, a track was constructed using nonferrous materials and set so that the orientation of an inertial/magnetic sensor module ... states; metal filing cabinet; mobile robot (unpowered, powered, and motor engaged). The MicroStrain 3DM-G sensor module is factory calibrated and ... triad of the sensor module approached a large metal filing cabinet. The deviations for this test object are the largest of any observed in the

  15. From Large Deviations to Semidistances of Transport and Mixing: Coherence Analysis for Finite Lagrangian Data

    NASA Astrophysics Data System (ADS)

    Koltai, Péter; Renger, D. R. Michiel

    2018-06-01

    One way to analyze complicated non-autonomous flows is through trying to understand their transport behavior. In a quantitative, set-oriented approach to transport and mixing, finite time coherent sets play an important role. These are time-parametrized families of sets with unlikely transport to and from their surroundings under small or vanishing random perturbations of the dynamics. Here we propose, as a measure of transport and mixing for purely advective (i.e., deterministic) flows, (semi)distances that arise under vanishing perturbations in the sense of large deviations. Analogously, for given finite Lagrangian trajectory data we derive a discrete-time-and-space semidistance that comes from the "best" approximation of the randomly perturbed process conditioned on this limited information of the deterministic flow. It can be computed as shortest path in a graph with time-dependent weights. Furthermore, we argue that coherent sets are regions of maximal farness in terms of transport and mixing, and hence they occur as extremal regions on a spanning structure of the state space under this semidistance—in fact, under any distance measure arising from the physical notion of transport. Based on this notion, we develop a tool to analyze the state space (or the finite trajectory data at hand) and identify coherent regions. We validate our approach on idealized prototypical examples and well-studied standard cases.

  16. SU-E-T-469: A Practical Approach for the Determination of Small Field Output Factors Using Published Monte Carlo Derived Correction Factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calderon, E; Siergiej, D

    2014-06-01

    Purpose: Output factor determination for small fields (less than 20 mm) presents significant challenges due to ion chamber volume averaging and diode over-response. Measured output factor values between detectors are known to have large deviations as field sizes are decreased. No set standard to resolve this difference in measurement exists. We observed differences between measured output factors of up to 14% using two different detectors. Published Monte Carlo derived correction factors were used to address this challenge and decrease the output factor deviation between detectors. Methods: Output factors for Elekta's linac-based stereotactic cone system were measured using the EDGE detector (Sun Nuclear) and the A16 ion chamber (Standard Imaging). Measurement conditions were 100 cm SSD (source to surface distance) and 1.5 cm depth. Output factors were first normalized to a 10.4 cm × 10.4 cm field size using a daisy-chaining technique to minimize the dependence of field size on detector response. An equation expressing the relation between published Monte Carlo correction factors as a function of field size for each detector was derived. The measured output factors were then multiplied by the calculated correction factors. EBT3 gafchromic film dosimetry was used to independently validate the corrected output factors. Results: Without correction, the deviation in output factors between the EDGE and A16 detectors ranged from 1.3 to 14.8%, depending on cone size. After applying the calculated correction factors, this deviation fell to 0 to 3.4%. Output factors determined with film agree within 3.5% of the corrected output factors. Conclusion: We present a practical approach to applying published Monte Carlo derived correction factors to measured small field output factors for the EDGE and A16 detectors. Using this method, we were able to decrease the percent deviation between both detectors from 14.8% to 3.4% agreement.
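
    A sketch of the correction workflow described above; the cone sizes, measured output factors, and Monte Carlo correction factors are illustrative placeholders, not the paper's values.

    import numpy as np

    cone_mm = np.array([5.0, 7.5, 10.0, 12.5, 15.0])
    of_meas = np.array([0.66, 0.76, 0.82, 0.86, 0.89])   # daisy-chained measurements (placeholder)
    k_mc    = np.array([0.95, 0.96, 0.97, 0.98, 0.99])   # published MC corrections (placeholder)

    # fit k(field size) so corrections can be interpolated to any cone diameter
    coef = np.polyfit(cone_mm, k_mc, 2)
    k_fit = np.polyval(coef, cone_mm)

    of_corr = of_meas * k_fit                            # corrected output factors
    for c, o in zip(cone_mm, of_corr):
        print(f"{c:5.1f} mm cone: OF = {o:.3f}")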

  17. Big data driven cycle time parallel prediction for production planning in wafer manufacturing

    NASA Astrophysics Data System (ADS)

    Wang, Junliang; Yang, Jungang; Zhang, Jie; Wang, Xiaoxi; Zhang, Wenjun Chris

    2018-07-01

    Cycle time forecasting (CTF) is one of the most crucial issues for production planning to maintain high delivery reliability in semiconductor wafer fabrication systems (SWFS). This paper proposes a novel data-intensive cycle time (CT) prediction system with parallel computing to rapidly forecast the CT of wafer lots from large datasets. First, a density-peak-based radial basis function network (DP-RBFN) is designed to forecast the CT from the diverse and agglomerative CT data. Second, a network learning method based on a clustering technique is proposed to determine the density peaks. Third, a parallel computing approach for network training is proposed in order to speed up the training process on large-scale CT data. Finally, an experiment on an SWFS is presented, demonstrating that the proposed CTF system can not only speed up the training process of the model but also outperform CTF methods based on the radial basis function network, the back-propagation network, and multivariate regression in terms of the mean absolute deviation and the standard deviation.

  18. First-Principles Momentum Dependent Local Ansatz Approach to the Momentum Distribution Function in Iron-Group Transition Metals

    NASA Astrophysics Data System (ADS)

    Kakehashi, Yoshiro; Chandra, Sumal

    2017-03-01

    The momentum distribution function (MDF) bands of iron-group transition metals from Sc to Cu have been investigated on the basis of the first-principles momentum dependent local ansatz wavefunction method. It is found that the MDF for d electrons shows a strong momentum dependence and a large deviation from the Fermi-Dirac distribution function along high-symmetry lines of the first Brillouin zone, while the sp electrons behave as independent electrons. In particular, the deviation in bcc Fe (fcc Ni) is shown to be enhanced by the narrow eg (t2g) bands with flat dispersion in the vicinity of the Fermi level. Mass enhancement factors (MEF) calculated from the jump on the Fermi surface are also shown to be momentum dependent. Large mass enhancements of Mn and Fe are found to be caused by spin fluctuations due to d electrons, while that for Ni is mainly caused by charge fluctuations. Calculated MEF are consistent with electronic specific heat data as well as recent angle resolved photoemission spectroscopy data.

  19. Back in the saddle: large-deviation statistics of the cosmic log-density field

    NASA Astrophysics Data System (ADS)

    Uhlemann, C.; Codis, S.; Pichon, C.; Bernardeau, F.; Reimberg, P.

    2016-08-01

    We present a first principle approach to obtain analytical predictions for spherically averaged cosmic densities in the mildly non-linear regime that go well beyond what is usually achieved by standard perturbation theory. A large deviation principle allows us to compute the leading order cumulants of average densities in concentric cells. In this symmetry, the spherical collapse model leads to cumulant generating functions that are robust for finite variances and free of critical points when logarithmic density transformations are implemented. They yield in turn accurate density probability distribution functions (PDFs) from a straightforward saddle-point approximation valid for all density values. Based on this easy-to-implement modification, explicit analytic formulas for the evaluation of the one- and two-cell PDF are provided. The theoretical predictions obtained for the PDFs are accurate to a few per cent compared to the numerical integration, regardless of the density under consideration and in excellent agreement with N-body simulations for a wide range of densities. This formalism should prove valuable for accurately probing the quasi-linear scales of low-redshift surveys for arbitrary primordial power spectra.

  20. Fluctuation theorems for discrete kinetic models of molecular motors

    NASA Astrophysics Data System (ADS)

    Faggionato, Alessandra; Silvestri, Vittoria

    2017-04-01

    Motivated by discrete kinetic models for non-cooperative molecular motors on periodic tracks, we consider random walks (not necessarily Markov) on quasi-one-dimensional (1d) lattices, obtained by gluing several copies of a fundamental graph in a linear fashion. We show that, for a suitable class of quasi-1d lattices, the large deviation rate function associated to the position of the walker satisfies a Gallavotti-Cohen symmetry for any choice of the dynamical parameters defining the stochastic walk. This class includes the linear model considered in Lacoste et al (2008 Phys. Rev. E 78 011915). We also derive fluctuation theorems for the time-integrated cycle currents and discuss how the matrix approach of Lacoste et al (2008 Phys. Rev. E 78 011915) can be extended to derive the above Gallavotti-Cohen symmetry for any Markov random walk on ℤ with periodic jump rates. Finally, we review in the present context some large deviation results of Faggionato and Silvestri (2017 Ann. Inst. Henri Poincaré 53 46-78) and give some specific examples with explicit computations.

  1. Effect of Stress on Energy Flux Deviation of Ultrasonic Waves in GR/EP Composites

    NASA Technical Reports Server (NTRS)

    Prosser, William H.; Kriz, R. D.; Fitting, Dale W.

    1990-01-01

    Ultrasonic waves suffer energy flux deviation in graphite/epoxy because of the large anisotropy. The angle of deviation is a function of the elastic coefficients. For nonlinear solids, these coefficients, and thus the angle of deviation, are functions of stress. Acoustoelastic theory was used to model the effect of stress on flux deviation for unidirectional T300/5208 using previously measured elastic coefficients. Computations were made for uniaxial stress along the x3 axis (fiber axis) and the x1 axis for waves propagating in the x1-x3 plane. These results predict a shift as large as three degrees for the quasi-transverse wave. The shift in energy flux offers a new nondestructive technique of evaluating stress in composites.

  2. The emergence of retail-based clinics in the United States: early observations.

    PubMed

    Laws, Margaret; Scott, Mary Kate

    2008-01-01

    Retail-based clinics have proliferated rapidly in the past two years, with approximately 1,000 sites in thirty-seven states representing almost three million cumulative visits. Clinic operators have evolved from a dispersed group of privately financed concerns to a concentrated, largely corporate-owned group. A major development has been the move to large-scale acceptance of insurance, deviating from the initial cash-pay model. Consumers' acceptance and the fact that the clinics appear to increase access for both the uninsured and the insured has encouraged providers and policymakers to consider this approach to basic, acute care while seeking a better understanding of these clinics.

  3. Identification and control of structures in space

    NASA Technical Reports Server (NTRS)

    Meirovitch, L.; Quinn, R. D.; Norris, M. A.

    1984-01-01

    The derivation of the equations of motion for the Spacecraft Control Laboratory Experiment (SCOLE) is reported and the equations of motion of a similar structure orbiting the earth are also derived. The structure is assumed to undergo large rigid-body maneuvers and small elastic deformations. A perturbation approach is proposed whereby the quantities defining the rigid-body maneuver are assumed to be relatively large, with the elastic deformations and deviations from the rigid-body maneuver being relatively small. The perturbation equations have the form of linear equations with time-dependent coefficients. An active control technique can then be formulated to permit maneuvering of the spacecraft and simultaneously suppressing the elastic vibration.

  4. Random matrix approach to cross correlations in financial data

    NASA Astrophysics Data System (ADS)

    Plerou, Vasiliki; Gopikrishnan, Parameswaran; Rosenow, Bernd; Amaral, Luís A.; Guhr, Thomas; Stanley, H. Eugene

    2002-06-01

    We analyze cross correlations between price fluctuations of different stocks using methods of random matrix theory (RMT). Using two large databases, we calculate cross-correlation matrices C of returns constructed from (i) 30-min returns of 1000 US stocks for the 2-yr period 1994-1995, (ii) 30-min returns of 881 US stocks for the 2-yr period 1996-1997, and (iii) 1-day returns of 422 US stocks for the 35-yr period 1962-1996. We test the statistics of the eigenvalues λi of C against a "null hypothesis": a random correlation matrix constructed from mutually uncorrelated time series. We find that a majority of the eigenvalues of C fall within the RMT bounds [λ-,λ+] for the eigenvalues of random correlation matrices. We test the eigenvalues of C within the RMT bound for universal properties of random matrices and find good agreement with the results for the Gaussian orthogonal ensemble of random matrices, implying a large degree of randomness in the measured cross-correlation coefficients. Further, we find that the distribution of eigenvector components for the eigenvectors corresponding to the eigenvalues outside the RMT bound displays systematic deviations from the RMT prediction. In addition, we find that these "deviating eigenvectors" are stable in time. We analyze the components of the deviating eigenvectors and find that the largest eigenvalue corresponds to an influence common to all stocks. Our analysis of the remaining deviating eigenvectors shows distinct groups, whose identities correspond to conventionally identified business sectors. Finally, we discuss applications to the construction of portfolios of stocks that have a stable ratio of risk to return.
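
    The random-matrix null hypothesis used above can be sketched concretely: for N uncorrelated time series of length T, the eigenvalues of the empirical correlation matrix fall (for large N and T at fixed Q = T/N) within the Marchenko-Pastur band λ± = (1 ± √(1/Q))²; the sizes below are placeholders.

    import numpy as np

    N, T = 400, 2000
    Q = T / N
    rng = np.random.default_rng(9)
    R = rng.normal(size=(N, T))                 # uncorrelated "returns"
    C = np.corrcoef(R)                          # N x N empirical correlation matrix
    evals = np.linalg.eigvalsh(C)

    lam_minus = (1 - np.sqrt(1 / Q)) ** 2
    lam_plus  = (1 + np.sqrt(1 / Q)) ** 2
    outside = np.sum((evals < lam_minus) | (evals > lam_plus))
    print(f"RMT band [{lam_minus:.2f}, {lam_plus:.2f}]; {outside} of {N} eigenvalues outside")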

  5. 78 FR 6232 - Energy Conservation Program: Test Procedures for Conventional Cooking Products With Induction...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-30

    [Flattened table excerpt: surface-unit efficiency, deviation, and interval values (%) by technology. Large A Electric Coil: 69.79 / 1.59 / 1.97; 64.52 / 0.87 / 1.08; 79.81 / 1.66 / 2.06. B Electric: 61.81 / 2.83 / 3.52.]

  6. Implicit Incompressible SPH.

    PubMed

    Ihmsen, Markus; Cornelis, Jens; Solenthaler, Barbara; Horvath, Christopher; Teschner, Matthias

    2013-07-25

    We propose a novel formulation of the projection method for Smoothed Particle Hydrodynamics (SPH). We combine a symmetric SPH pressure force and an SPH discretization of the continuity equation to obtain a discretized form of the pressure Poisson equation (PPE). In contrast to previous projection schemes, our system does consider the actual computation of the pressure force. This incorporation improves the convergence rate of the solver. Furthermore, we propose to compute the density deviation based on velocities instead of positions as this formulation improves the robustness of the time-integration scheme. We show that our novel formulation outperforms previous projection schemes and state-of-the-art SPH methods. Large time steps and small density deviations of down to 0.01% can be handled in typical scenarios. The practical relevance of the approach is illustrated by scenarios with up to 40 million SPH particles.
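
    The velocity-based density deviation mentioned above can be illustrated with the SPH continuity equation, dρ_i/dt = Σ_j m_j (v_i − v_j)·∇W_ij, which predicts the density change from current velocities rather than from moved positions. The 1D toy setup and Gaussian kernel below are illustrative choices, not the paper's solver:

        import numpy as np

        def grad_W(dx, h):
            # derivative of a 1D Gaussian kernel (illustrative kernel choice)
            w = np.exp(-(dx/h)**2) / (h*np.sqrt(np.pi))
            return -2.0*dx/h**2 * w

        x = np.linspace(0.0, 1.0, 51)           # toy particle positions
        v = 0.1*np.sin(2*np.pi*x)               # compressive velocity field
        m, h, dt, rho0 = 0.02, 0.04, 1e-3, 1.0

        # continuity: drho_i/dt = sum_j m_j (v_i - v_j) dW/dx(x_i - x_j)
        drho = np.zeros_like(x)
        for i in range(len(x)):
            dx = x[i] - x
            drho[i] = np.sum(m*(v[i] - v)*grad_W(dx, h))

        print(np.abs(drho*dt/rho0).max())       # predicted relative deviation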

  7. Implicit incompressible SPH.

    PubMed

    Ihmsen, Markus; Cornelis, Jens; Solenthaler, Barbara; Horvath, Christopher; Teschner, Matthias

    2014-03-01

    We propose a novel formulation of the projection method for Smoothed Particle Hydrodynamics (SPH). We combine a symmetric SPH pressure force and an SPH discretization of the continuity equation to obtain a discretized form of the pressure Poisson equation (PPE). In contrast to previous projection schemes, our system does consider the actual computation of the pressure force. This incorporation improves the convergence rate of the solver. Furthermore, we propose to compute the density deviation based on velocities instead of positions as this formulation improves the robustness of the time-integration scheme. We show that our novel formulation outperforms previous projection schemes and state-of-the-art SPH methods. Large time steps and small density deviations of down to 0.01 percent can be handled in typical scenarios. The practical relevance of the approach is illustrated by scenarios with up to 40 million SPH particles.

  8. Gibbs Ensembles for Nearly Compatible and Incompatible Conditional Models

    PubMed Central

    Chen, Shyh-Huei; Wang, Yuchung J.

    2010-01-01

    The Gibbs sampler has been used exclusively for compatible conditionals that converge to a unique invariant joint distribution. However, conditional models are not always compatible. In this paper, a Gibbs sampling-based approach, the Gibbs ensemble, is proposed to search for a joint distribution that deviates least from a prescribed set of conditional distributions. The algorithm is easily scalable, so it can handle large data sets of high dimensionality. Using simulated data, we show that the proposed approach provides joint distributions that are less discrepant from the incompatible conditionals than those obtained by other methods discussed in the literature. The ensemble approach is also applied to a data set regarding geno-polymorphism and response to chemotherapy in patients with metastatic colorectal cancer. PMID:21286232
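
    A minimal sketch of the underlying Gibbs sampling step, for a pair of Gaussian conditionals with hypothetical coefficients (the paper's Gibbs ensemble additionally averages over scan orders to cope with incompatible conditionals, which this sketch does not do):

        import numpy as np

        rng = np.random.default_rng(1)
        a, b = 0.5, 0.8                  # hypothetical conditional coefficients
        x, y = 0.0, 0.0
        samples = []
        for it in range(20000):
            x = rng.normal(a*y, 1.0)     # draw from f(x | y)
            y = rng.normal(b*x, 1.0)     # draw from f(y | x)
            if it >= 2000:               # discard burn-in
                samples.append((x, y))
        s = np.array(samples)
        print(s.mean(axis=0), np.corrcoef(s.T)[0, 1])

    For a*b < 1 these two conditionals are compatible (they correspond to a bivariate normal), so the chain converges to a unique joint distribution.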

  9. A General Conditional Large Deviation Principle

    DOE PAGES

    La Cour, Brian R.; Schieve, William C.

    2015-07-18

    Given a sequence of Borel probability measures on a Hausdorff space which satisfy a large deviation principle (LDP), we consider the corresponding sequence of measures formed by conditioning on a set B. If the large deviation rate function I is good and effectively continuous, and the conditioning set has the properties that (1) the closure of the interior of B equals the closure of B and (2) I(x) < ∞ for all x ∈ B̄, then the sequence of conditional measures satisfies an LDP with the good, effectively continuous rate function I_B, where I_B(x) = I(x) − inf_B I if x ∈ B̄ and I_B(x) = ∞ otherwise.
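
    As a concrete instance, assume Cramér's theorem for sample means of i.i.d. standard Gaussians, so the rate function is I(x) = x²/2. Conditioning on B = [a, ∞) with a > 0 (a set satisfying both hypotheses) gives, in the notation above:

        \[
        I_B(x) \;=\; I(x) - \inf_{B} I \;=\;
        \begin{cases}
        \dfrac{x^2}{2} - \dfrac{a^2}{2}, & x \ge a,\\[4pt]
        +\infty, & x < a.
        \end{cases}
        \]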

  10. Successful treatment of open jaw and jaw deviation dystonia with botulinum toxin using a simple intraoral approach.

    PubMed

    Moscovich, Mariana; Chen, Zhongxing Peng; Rodriguez, Ramon

    2015-03-01

    Oromandibular dystonia (OMD) is a focal dystonia that involves the mouth, jaw, and/or tongue. It can be classified as idiopathic, tardive, or secondary to other neurological disorders, and subdivided into jaw opening, jaw closing, jaw deviation, and lip pursing forms. The muscles involved in jaw opening dystonia are usually the digastrics and lateral pterygoids. It is known that the lateral pterygoids may be approached both internally and externally. The external approach is the most common; however, neurologists experienced in treating patients with botulinum toxin can perform the intraoral procedure safely and at no extra cost. We report our experience in the treatment of jaw opening and jaw deviation dystonia using the intraoral injection approach. Eight patients with a clinical diagnosis of open jaw/jaw deviation dystonia were selected from the University of Florida. All of them were injected with onabotulinum toxin A using the internal approach, and the Clinical Global Impression scale was applied. The mean age of the patients was 67 (standard deviation [SD] 10.2) years, with a disease duration of 10.2 (SD 7.7) years, and the mean distance they traveled to our institution was 448 km (278 miles). After treatment, six patients scored as very much improved on the Clinical Global Impression scale and two patients scored as much improved. Only one patient reported an adverse event, nasal speech following one of the injections, which improved after 4 weeks. Botulinum toxin injections for open jaw/jaw deviation dystonia can be performed safely with the intraoral approach without the need for special devices other than electromyography. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. TRASYS form factor matrix normalization

    NASA Technical Reports Server (NTRS)

    Tsuyuki, Glenn T.

    1992-01-01

    A method has been developed for adjusting a TRASYS enclosure form factor matrix to unity. This approach is not limited to closed geometries; in fact, it is primarily intended for use with open geometries. The purpose of this approach is to prevent overly optimistic form factors to space. In this method, nodal form factor sums are calculated to within 0.05 of unity using TRASYS, although deviations as large as 0.10 may be acceptable, and a process is then employed to distribute the difference amongst the nodes. A specific example has been analyzed with this method, and a comparison was performed with a standard approach for calculating radiation conductors. In this comparison, hot and cold case temperatures were determined. Exterior nodes exhibited temperature differences as large as 7 C and 3 C for the hot and cold cases, respectively, when compared with the standard approach, while interior nodes demonstrated temperature differences from 0 C to 5 C. These results indicate that temperature predictions can be artificially biased if the form factor computation error is lumped into the individual form factors to space.
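
    A minimal sketch of one plausible normalization scheme, scaling each row's entries proportionally so the nodal sums reach unity (the paper's exact redistribution process may differ; the 3-node enclosure below is hypothetical):

        import numpy as np

        def normalize_rows(F, target=1.0):
            """Scale each row of a form factor matrix so it sums to target,
            spreading the residual proportionally over the nonzero entries."""
            F = F.astype(float).copy()
            for i, s in enumerate(F.sum(axis=1)):
                if s > 0.0:
                    F[i, :] *= target / s
            return F

        # hypothetical 3-node open geometry; row sums deviate from unity
        F = np.array([[0.00, 0.60, 0.36],
                      [0.55, 0.00, 0.49],
                      [0.33, 0.45, 0.20]])
        print(normalize_rows(F).sum(axis=1))    # each row now sums to 1.0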

  12. Large Deviations: Advanced Probability for Undergrads

    ERIC Educational Resources Information Center

    Rolls, David A.

    2007-01-01

    In the branch of probability called "large deviations," rates of convergence (e.g. of the sample mean) are considered. The theory makes use of the moment generating function. So, particularly for sums of independent and identically distributed random variables, the theory can be made accessible to senior undergraduates after a first course in…
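
    The moment generating function route the abstract alludes to can be demonstrated numerically: Cramér's theorem gives P(S_n/n ≥ a) ≈ exp(−n I(a)) with I(a) = sup_θ [θa − log M(θ)]. The sketch below evaluates the Legendre transform on a grid for Bernoulli(1/2) variables and compares the resulting rate with a simulated tail; all parameters are illustrative:

        import numpy as np

        p, a, n = 0.5, 0.7, 100
        thetas = np.linspace(0.0, 10.0, 2001)
        log_M = np.log(1 - p + p*np.exp(thetas))   # cumulant generating function
        I = np.max(a*thetas - log_M)               # Legendre transform at x = a

        rng = np.random.default_rng(2)
        means = rng.binomial(n, p, size=1_000_000) / n
        emp_rate = -np.log(np.mean(means >= a)) / n
        print(f"I(a) = {I:.4f}, empirical rate = {emp_rate:.4f}")

    The two rates agree only up to a 1/n correction coming from the prefactor, which shrinks as n grows.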

  13. Integration of the Response Surface Methodology with the Compromise Decision Support Problem in Developing a General Robust Design Procedure

    NASA Technical Reports Server (NTRS)

    Chen, Wei; Tsui, Kwok-Leung; Allen, Janet K.; Mistree, Farrokh

    1994-01-01

    In this paper we introduce a comprehensive and rigorous robust design procedure to overcome some limitations of current approaches. A comprehensive approach is general enough to model the two major types of robust design applications: robust design that minimizes the deviation of performance caused by deviations of noise factors (uncontrollable parameters), and robust design that minimizes the deviation of performance caused by deviations of control factors (design variables). We achieve mathematical rigor by using, as a foundation, principles from the design of experiments and optimization. Specifically, we integrate the Response Surface Method (RSM) with the compromise Decision Support Problem (DSP). Our approach is especially useful for design problems where there are no closed-form solutions and system performance is computationally expensive to evaluate. The design of a solar powered irrigation system is used as an example. Our focus in this paper is on illustrating our approach rather than on the results per se.
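
    A toy version of the noise-factor case: fit a quadratic response surface to a few runs of an expensive model, then pick the design variable minimizing a weighted compromise between the predicted mean and the performance deviation transmitted from the noise factor. The model, design ranges, and weight below are all hypothetical, not the paper's irrigation system:

        import numpy as np

        rng = np.random.default_rng(3)

        def expensive_model(x, z):
            # hypothetical black box: x design variable, z noise factor
            return (x - 2.0)**2 + 0.5*x*z + z**2

        X = rng.uniform(0.0, 4.0, 40)               # small designed experiment
        Z = rng.normal(0.0, 0.3, 40)
        Y = expensive_model(X, Z)

        # response surface y ~ b0 + b1*x + b2*x^2 + b3*z + b4*x*z
        A = np.column_stack([np.ones_like(X), X, X**2, Z, X*Z])
        b0, b1, b2, b3, b4 = np.linalg.lstsq(A, Y, rcond=None)[0]

        sigma_z = 0.3                               # assumed noise deviation
        xs = np.linspace(0.0, 4.0, 401)
        mu = b0 + b1*xs + b2*xs**2                  # predicted mean (E[z] = 0)
        sd = np.abs(b3 + b4*xs)*sigma_z             # deviation from noise factor
        print(xs[np.argmin(mu + 1.0*sd)])           # compromise optimum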

  14. Real Time Search Algorithm for Observation Outliers During Monitoring Engineering Constructions

    NASA Astrophysics Data System (ADS)

    Latos, Dorota; Kolanowski, Bogdan; Pachelski, Wojciech; Sołoducha, Ryszard

    2017-12-01

    Real-time monitoring of engineering structures in the case of an emergency or disaster requires collection of a large amount of data to be processed by specific analytical techniques. A quick and accurate assessment of the state of the object is crucial for a possible rescue action. One of the more significant methods for evaluating large sets of data, whether collected during a specified interval of time or permanently, is time series analysis. This paper presents a search algorithm for those time series elements which deviate from their values expected during monitoring. Quick and proper detection of observations indicating anomalous behavior of the structure allows a variety of preventive actions to be taken. In the algorithm, the mathematical formulae used provide maximal sensitivity to detect even minimal changes in the object's behavior. The sensitivity analyses were conducted for the moving-average algorithm as well as for the Douglas-Peucker algorithm used in the generalization of linear objects in GIS. In addition to determining the size of deviations from the average, the so-called Hausdorff distance was used. The simulations carried out and the verification on laboratory survey data showed that the approach provides sufficient sensitivity for automatic real-time analysis of large amounts of data obtained from different and various sensors (total stations, leveling, cameras, radar).
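
    A minimal on-line variant of the moving-average screening described above: flag any observation deviating from the trailing window mean by more than k robust standard deviations. The window length, threshold, and injected anomaly are illustrative choices, not the paper's formulae:

        import numpy as np

        def flag_outliers(y, window=20, k=4.0):
            flags = np.zeros(len(y), dtype=bool)
            for t in range(window, len(y)):
                ref = y[t-window:t]
                center = ref.mean()
                # robust scale via the median absolute deviation
                scale = 1.4826*np.median(np.abs(ref - np.median(ref))) + 1e-12
                flags[t] = abs(y[t] - center) > k*scale
            return flags

        rng = np.random.default_rng(4)
        y = 0.001*np.arange(500) + rng.normal(0.0, 0.01, 500)  # monitored signal
        y[350] += 0.1                                          # injected anomaly
        print(np.nonzero(flag_outliers(y))[0])                 # reports index 350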

  15. Moderate deviations-based importance sampling for stochastic recursive equations

    DOE PAGES

    Dupuis, Paul; Johnson, Dane

    2017-11-17

    Subsolutions to the Hamilton–Jacobi–Bellman equation associated with a moderate deviations approximation are used to design importance sampling changes of measure for stochastic recursive equations. Analogous to what has been done for large deviations subsolution-based importance sampling, these schemes are shown to be asymptotically optimal under the moderate deviations scaling. We present various implementations and numerical results to contrast their performance, and also discuss the circumstances under which a moderate deviation scaling might be appropriate.
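
    The standard exponential-tilt construction that subsolution-based schemes refine can be shown on the simplest example: estimating P(S_n/n ≥ a) for i.i.d. N(0,1) variables by sampling under a mean-shifted law and reweighting with the likelihood ratio. This is the classical large-deviations tilt, not the paper's moderate-deviations subsolution scheme; parameters are illustrative:

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(5)
        n, a, M = 100, 0.5, 100000
        theta = a                                  # LD-optimal tilt for N(0,1)

        X = rng.normal(theta, 1.0, size=(M, n))    # sample under tilted measure
        S = X.sum(axis=1)
        w = np.exp(-theta*S + n*theta**2/2)        # likelihood ratio dP/dP_theta
        est = np.mean((S/n >= a)*w)

        exact = norm.sf(a*np.sqrt(n))              # mean of n N(0,1) is N(0,1/n)
        print(f"IS estimate {est:.3e}, exact {exact:.3e}")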

  16. Moderate deviations-based importance sampling for stochastic recursive equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dupuis, Paul; Johnson, Dane

    Subsolutions to the Hamilton–Jacobi–Bellman equation associated with a moderate deviations approximation are used to design importance sampling changes of measure for stochastic recursive equations. Analogous to what has been done for large deviations subsolution-based importance sampling, these schemes are shown to be asymptotically optimal under the moderate deviations scaling. We present various implementations and numerical results to contrast their performance, and also discuss the circumstances under which a moderate deviation scaling might be appropriate.

  17. Clinical comparison between the retromandibular approach for reduction and fixation and endoscope-assisted open reduction and internal fixation for mandibular condyle fractures.

    PubMed

    Nogami, Shinnosuke; Takahashi, Tetsu; Yamauchi, Kensuke; Miyamoto, Ikuya; Kaneuji, Takeshi; Yamamoto, Noriaki; Yoshiga, Daigo; Yamashita, Yoshihiro

    2012-11-01

    Endoscope-assisted transoral open reduction and internal fixation (EAORIF) for mandibular condyle fractures has recently become popular because it is minimally invasive, provides excellent visibility without a large incision, and reduces surgical scarring and the risk of facial nerve injury. This report describes a retrospective clinical study that compared certain clinical parameters, including postoperative function, between the retromandibular (RM) approach and EAORIF. Fifteen patients were treated by the RM approach and 15 underwent EAORIF between July 2006 and September 2011 at Kyushu Dental College, Japan. Clinical indices comprised fracture line, fracture type, number of plates used, surgical duration, amount of bleeding, and functional items, including maximum interincisal opening, mandibular deviation on the opening pathway, malocclusion, facial paresthesia, and temporomandibular joint pain and clicking. The areas treated by either approach included the lower neck and the subcondylar region. The RM approach was used for mandibular condyle fractures with dislocation of a small bone segment. Both groups used 2 plates in all cases. Surgical duration, maximum interincisal opening, mandibular deviation, occlusion, and temporomandibular joint function at 6 months after surgery were comparable between groups. The average amount of bleeding in the EAORIF group was greater than in the RM group. One patient from the RM group developed facial paresthesia that persisted for 6 months after surgery. It was concluded that surgical treatment is suitable for fractures of the lower neck and the subcondylar region. Both procedures showed good results in the functional items of this study.

  18. Large-scale structure perturbation theory without losing stream crossing

    NASA Astrophysics Data System (ADS)

    McDonald, Patrick; Vlah, Zvonimir

    2018-01-01

    We suggest an approach to perturbative calculations of large-scale clustering in the Universe that includes from the start the stream crossing (multiple velocities for mass elements at a single position) that is lost in traditional calculations. Starting from a functional integral over displacement, the perturbative series expansion is in deviations from (truncated) Zel'dovich evolution, with terms that can be computed exactly even for stream-crossed displacements. We evaluate the one-loop formulas for displacement and density power spectra numerically in 1D, finding dramatic improvement in agreement with N-body simulations compared to the Zel'dovich power spectrum (which is exact in 1D up to stream crossing). Beyond 1D, our approach could represent an improvement over previous expansions even aside from the inclusion of stream crossing, but we have not investigated this numerically. In the process we show how to achieve effective-theory-like regulation of small-scale fluctuations without free parameters.

  19. Comparison of Predicted Thermoelectric Energy Conversion Efficiency by Cumulative Properties and Reduced Variables Approaches

    NASA Astrophysics Data System (ADS)

    Linker, Thomas M.; Lee, Glenn S.; Beekman, Matt

    2018-06-01

    The semi-analytical methods of thermoelectric energy conversion efficiency calculation based on the cumulative properties approach and the reduced variables approach are compared for 21 high performance thermoelectric materials. Both approaches account for the temperature dependence of the material properties as well as the Thomson effect; thus the predicted conversion efficiencies are generally lower than those based on the conventional thermoelectric figure of merit ZT for nearly all of the materials evaluated. The two methods also predict material energy conversion efficiencies that are in very good agreement with each other, even for large temperature differences (average percent difference of 4%, with a maximum observed deviation of 11%). The tradeoff between obtaining a reliable assessment of a material's potential for thermoelectric applications and the complexity of implementing the three models, as well as the advantages of using more accurate modeling approaches in evaluating new thermoelectric materials, are highlighted.
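
    For reference, the conventional constant-property estimate that both semi-analytical methods improve upon reduces to a closed-form expression in ZT and the Carnot factor. The sketch below evaluates that textbook formula with illustrative temperatures; it is the baseline the paper compares against, not the cumulative-properties or reduced-variables method:

        import numpy as np

        def efficiency_ZT(Th, Tc, ZT):
            """Maximum conversion efficiency from the conventional figure of
            merit ZT, assuming temperature-independent material properties."""
            eta_carnot = (Th - Tc)/Th
            s = np.sqrt(1.0 + ZT)
            return eta_carnot*(s - 1.0)/(s + Tc/Th)

        print(efficiency_ZT(Th=800.0, Tc=300.0, ZT=1.5))   # about 0.19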

  20. MISFITS: evaluating the goodness of fit between a phylogenetic model and an alignment.

    PubMed

    Nguyen, Minh Anh Thi; Klaere, Steffen; von Haeseler, Arndt

    2011-01-01

    As models of sequence evolution become more and more complicated, many criteria for model selection have been proposed, and tools are available to select the best model for an alignment under a particular criterion. However, in many instances the selected model fails to explain the data adequately, as reflected by large deviations between observed pattern frequencies and the corresponding expectations. We present MISFITS, an approach to evaluate the goodness of fit (http://www.cibiv.at/software/misfits). MISFITS introduces a minimum number of "extra substitutions" on the inferred tree to provide a biologically motivated explanation of why the alignment may deviate from expectation. These extra substitutions plus the evolutionary model then fully explain the alignment. We illustrate the method on several examples and then survey the goodness of fit of the selected models to the alignments in the PANDIT database.

  1. Largely reduced grid densities in a vibrational self-consistent field treatment do not significantly impact the resulting wavenumbers.

    PubMed

    Lutz, Oliver M D; Rode, Bernd M; Bonn, Günther K; Huck, Christian W

    2014-12-17

    Especially for larger molecules relevant to the life sciences, vibrational self-consistent field (VSCF) calculations can become unmanageably demanding even when only first and second order potential coupling terms are considered. This paper investigates to what extent the grid density of the VSCF's underlying potential energy surface can be reduced without sacrificing the accuracy of the resulting wavenumbers. Including single-mode and pair contributions, a reduction to eight points per mode did not introduce a significant deviation but improved the computational efficiency by a factor of four. A mean unsigned deviation of 1.3% from experiment could be maintained for the fifteen molecules under investigation, and the approach was found to be applicable to rigid, semi-rigid, and soft vibrational problems alike. Deprotonated phosphoserine, stabilized by two intramolecular hydrogen bonds, was investigated as an exemplary application.

  2. A comparison of portfolio selection models via application on ISE 100 index data

    NASA Astrophysics Data System (ADS)

    Altun, Emrah; Tatlidil, Hüseyin

    2013-10-01

    The Markowitz model, a classical approach to the portfolio optimization problem, relies on two important assumptions: the expected returns are multivariate normally distributed and the investor is risk averse. However, this model has not been used extensively in finance. Empirical results show that it is very hard to solve large-scale portfolio optimization problems with the Mean-Variance (M-V) model. An alternative model, the Mean Absolute Deviation (MAD) model proposed by Konno and Yamazaki [7], has been used to remove most of the difficulties of the Markowitz Mean-Variance model. The MAD model does not need to assume that the rates of return are normally distributed and is based on linear programming. Another alternative portfolio model is the Mean-Lower Semi-Absolute Deviation (M-LSAD) model, proposed by Speranza [3]. We compare these models to determine which gives the most appropriate solution to investors.
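
    The MAD model's linear-programming structure can be written out directly: introduce auxiliary variables u_t bounding the absolute deviations of the portfolio return in each period, and minimize their average subject to budget and return constraints. The returns, required return, and no-short-sales bounds below are hypothetical:

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(6)
        T, n, r_min = 250, 4, 0.0005
        R = rng.normal(0.001, 0.02, size=(T, n))   # hypothetical daily returns
        mu = R.mean(axis=0)
        D = R - mu                                 # deviations from mean returns

        # variables [w_1..w_n, u_1..u_T]; minimize the average of the u_t
        c = np.concatenate([np.zeros(n), np.ones(T)/T])
        A_ub = np.vstack([
            np.hstack([ D, -np.eye(T)]),            #  (D w)_t - u_t <= 0
            np.hstack([-D, -np.eye(T)]),            # -(D w)_t - u_t <= 0
            np.hstack([-mu, np.zeros(T)])[None, :], # required expected return
        ])
        b_ub = np.concatenate([np.zeros(2*T), [-r_min]])
        A_eq = np.hstack([np.ones((1, n)), np.zeros((1, T))])  # budget: sum w = 1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)]*(n + T))
        print(res.x[:n], res.fun)                  # weights and minimized MAD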

  3. Generic Feature Selection with Short Fat Data

    PubMed Central

    Clarke, B.; Chu, J.-H.

    2014-01-01

    SUMMARY Consider a regression problem in which there are many more explanatory variables than data points, i.e., p ≫ n. Essentially, without reducing the number of variables inference is impossible. So, we group the p explanatory variables into blocks by clustering, evaluate statistics on the blocks and then regress the response on these statistics under a penalized error criterion to obtain estimates of the regression coefficients. We examine the performance of this approach for a variety of choices of n, p, classes of statistics, clustering algorithms, penalty terms, and data types. When n is not large, the discrimination over number of statistics is weak, but computations suggest regressing on approximately [n/K] statistics where K is the number of blocks formed by a clustering algorithm. Small deviations from this are observed when the blocks of variables are of very different sizes. Larger deviations are observed when the penalty term is an Lq norm with high enough q. PMID:25346546

  4. Stochastic growth of cloud droplets by collisions during settling

    NASA Astrophysics Data System (ADS)

    Madival, Deepak G.

    2018-04-01

    In the last stage of droplet growth in clouds which leads to drizzle formation, larger droplets begin to settle under gravity and collide and coalesce with smaller droplets in their path. In this article, we shall deal with the simplified problem of a large drop settling amidst a population of identical smaller droplets. We present an expression for the probability that a given large drop suffers a given number of collisions, for a general statistically homogeneous distribution of droplets. We hope that our approach will serve as a valuable tool in dealing with droplet distribution in real clouds, which has been found to deviate from the idealized Poisson distribution due to mechanisms such as inertial clustering.

  5. Numerical Large Deviation Analysis of the Eigenstate Thermalization Hypothesis

    NASA Astrophysics Data System (ADS)

    Yoshizawa, Toru; Iyoda, Eiki; Sagawa, Takahiro

    2018-05-01

    A plausible mechanism of thermalization in isolated quantum systems is based on the strong version of the eigenstate thermalization hypothesis (ETH), which states that all the energy eigenstates in the microcanonical energy shell have thermal properties. We numerically investigate the ETH by focusing on the large deviation property, which directly evaluates the ratio of athermal energy eigenstates in the energy shell. As a consequence, we have systematically confirmed that the strong ETH is indeed true even for near-integrable systems. Furthermore, we found that the finite-size scaling of the ratio of athermal eigenstates is a double exponential for nonintegrable systems. Our result illuminates the universal behavior of quantum chaos, and suggests that a large deviation analysis would serve as a powerful method to investigate thermalization in the presence of the large finite-size effect.

  6. First-Principles Momentum-Dependent Local Ansatz Wavefunction and Momentum Distribution Function Bands of Iron

    NASA Astrophysics Data System (ADS)

    Kakehashi, Yoshiro; Chandra, Sumal

    2016-04-01

    We have developed a first-principles local ansatz wavefunction approach with momentum-dependent variational parameters on the basis of the tight-binding LDA+U Hamiltonian. The theory goes beyond the first-principles Gutzwiller approach and quantitatively describes correlated electron systems. Using the theory, we find that the momentum distribution function (MDF) bands of paramagnetic bcc Fe along high-symmetry lines show a large deviation from the Fermi-Dirac function for the d electrons with eg symmetry and yield the momentum-dependent mass enhancement factors. The calculated average mass enhancement m*/m = 1.65 is consistent with low-temperature specific heat data as well as recent angle-resolved photoemission spectroscopy (ARPES) data.

  7. MO-F-CAMPUS-T-03: Data Driven Approaches for Determination of Treatment Table Tolerance Values for Record and Verification Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, N; DiCostanzo, D; Fullenkamp, M

    2015-06-15

    Purpose: To determine appropriate couch tolerance values for modern radiotherapy linac R&V systems with indexed patient setup. Methods: Treatment table tolerance values have been the most difficult to lower, due to many factors including variations in patient positioning and differences in table tops between machines. We recently installed nine linacs with similar tables and started indexing every patient in our clinic. In this study we queried our R&V database and analyzed the deviation of couch position values from the values acquired at verification simulation for all patients treated with indexed positioning. Means and standard deviations of daily setup deviations were computed in the longitudinal, lateral, and vertical directions for 343 patient plans. The mean, median, and standard error of the standard deviations across the whole patient population, and for some disease sites, were computed to determine tolerance values. Results: The plot of our couch deviation values showed a Gaussian distribution, with some small deviations corresponding to setup uncertainties on non-imaging days and to SRS/SRT/SBRT patients, as well as some large deviations which were spot checked and found to correspond to indexing errors that were overridden. Setting our tolerance values based on the median + 1 standard error resulted in tolerance values of 1 cm lateral and longitudinal, and 0.5 cm vertical, for all non-SRS/SRT/SBRT cases. Re-analyzing the data, we found that about 92% of the treated fractions would be within these tolerance values (ignoring the mis-indexed patients). We also analyzed data for disease-site-based subpopulations and found no difference in the tolerance values that needed to be used. Conclusion: With automation, auto-setup, and other workflow efficiency tools being introduced into the radiotherapy workflow, it is essential to set table tolerances that allow safe treatments but flag setup errors that need to be reassessed before treatment.
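
    The tolerance rule described in the abstract (median of the per-plan standard deviations plus one standard error) is straightforward to reproduce; the deviations below are synthetic stand-ins for the R&V query results:

        import numpy as np

        rng = np.random.default_rng(7)
        # hypothetical per-fraction lateral couch deviations (cm) for 343 plans
        plans = [rng.normal(0.0, rng.uniform(0.1, 0.4), 25) for _ in range(343)]

        per_plan_sd = np.array([p.std(ddof=1) for p in plans])
        tol = np.median(per_plan_sd) + per_plan_sd.std(ddof=1)/np.sqrt(len(plans))
        print(f"suggested lateral tolerance: {tol:.2f} cm")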

  8. Locality and nonlocality of classical restrictions of quantum spin systems with applications to quantum large deviations and entanglement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Roeck, W.; Maes, C.; Schütz, M.

    2015-02-15

    We study the projection on classical spins starting from quantum equilibria. We show Gibbsianness or quasi-locality of the resulting classical spin system for a class of gapped quantum systems at low temperatures, including quantum ground states. A consequence of Gibbsianness is the validity of a large deviation principle in the quantum system, which is known and here recovered in regimes of high temperature or for thermal states in one dimension. On the other hand, we give an example of a quantum ground state with strong nonlocality in the classical restriction, giving rise to what we call measurement-induced entanglement and still satisfying a large deviation principle.

  9. Hoeffding Type Inequalities and their Applications in Statistics and Operations Research

    NASA Astrophysics Data System (ADS)

    Daras, Tryfon

    2007-09-01

    Large deviation theory is the branch of probability theory that deals with rare events. Sometimes, these events can be described by the sum of random variables deviating from its mean by more than a "normal" amount. A precise calculation of the probabilities of such events turns out to be crucial in a variety of different contexts (e.g., in Probability Theory, Statistics, Operations Research, Statistical Physics, Financial Mathematics, etc.). Recent applications of the theory deal with random walks in random environments, interacting diffusions, heat conduction, and polymer chains [1]. In this paper we prove an inequality of exponential type, namely Theorem 2.1, which gives a large deviation upper bound for a specific sequence of r.v.s. Inequalities of this type have many applications in Combinatorics [2]. The inequality generalizes already proven results of this type in the case of symmetric probability measures. As consequences of the inequality we obtain: (a) large deviation upper bounds for exchangeable Bernoulli sequences of random variables, generalizing results proven for independent and identically distributed Bernoulli sequences of r.v.s, and (b) a general form of Bernstein's inequality. We compare the inequality with large deviation results already proven by the author and discuss its advantages. Finally, using the inequality, we solve one of the basic problems of Operations Research (the bin packing problem) in the case of exchangeable r.v.s.
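
    For variables bounded in [0, 1], the basic Hoeffding bound P(S_n/n − E[X] ≥ t) ≤ exp(−2nt²) that such exponential inequalities generalize can be checked by simulation (the sample sizes here are arbitrary):

        import numpy as np

        rng = np.random.default_rng(8)
        n, t, M = 100, 0.1, 100000
        X = rng.uniform(0.0, 1.0, size=(M, n))     # i.i.d. bounded variables
        emp = np.mean(X.mean(axis=1) - 0.5 >= t)   # empirical tail probability
        bound = np.exp(-2*n*t**2)                  # Hoeffding upper bound
        print(f"empirical {emp:.2e} <= bound {bound:.2e}")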

  10. Distribution of diameters for Erdős-Rényi random graphs.

    PubMed

    Hartmann, A K; Mézard, M

    2018-03-01

    We study the distribution of diameters d of Erdős-Rényi random graphs with average connectivity c. The diameter d is the maximum among all the shortest distances between pairs of nodes in a graph and an important quantity for all dynamic processes taking place on graphs. Here we study the distribution P(d) numerically for various values of c, in the nonpercolating and percolating regimes. Using large-deviation techniques, we are able to reach small probabilities like 10^{-100} which allow us to obtain the distribution over basically the full range of the support, for graphs up to N=1000 nodes. For values c<1, our results are in good agreement with analytical results, proving the reliability of our numerical approach. For c>1 the distribution is more complex and no complete analytical results are available. For this parameter range, P(d) exhibits an inflection point, which we found to be related to a structural change of the graphs. For all values of c, we determined the finite-size rate function Φ(d/N) and were able to extrapolate numerically to N→∞, indicating that the large-deviation principle holds.

  11. Distribution of diameters for Erdős-Rényi random graphs

    NASA Astrophysics Data System (ADS)

    Hartmann, A. K.; Mézard, M.

    2018-03-01

    We study the distribution of diameters d of Erdős-Rényi random graphs with average connectivity c. The diameter d is the maximum among all the shortest distances between pairs of nodes in a graph and an important quantity for all dynamic processes taking place on graphs. Here we study the distribution P(d) numerically for various values of c, in the nonpercolating and percolating regimes. Using large-deviation techniques, we are able to reach small probabilities like 10^{-100} which allow us to obtain the distribution over basically the full range of the support, for graphs up to N=1000 nodes. For values c<1, our results are in good agreement with analytical results, proving the reliability of our numerical approach. For c>1 the distribution is more complex and no complete analytical results are available. For this parameter range, P(d) exhibits an inflection point, which we found to be related to a structural change of the graphs. For all values of c, we determined the finite-size rate function Φ(d/N) and were able to extrapolate numerically to N→∞, indicating that the large-deviation principle holds.

  12. Generic dynamical phase transition in one-dimensional bulk-driven lattice gases with exclusion

    NASA Astrophysics Data System (ADS)

    Lazarescu, Alexandre

    2017-06-01

    Dynamical phase transitions are crucial features of the fluctuations of statistical systems, corresponding to boundaries between qualitatively different mechanisms of maintaining unlikely values of dynamical observables over long periods of time. They manifest themselves in the form of non-analyticities in the large deviation function of those observables. In this paper, we look at bulk-driven exclusion processes with open boundaries. It is known that the standard asymmetric simple exclusion process exhibits a dynamical phase transition in the large deviations of the current of particles flowing through it. That phase transition has been described thanks to specific calculation methods relying on the model being exactly solvable, but more general methods have also been used to describe the extreme large deviations of that current, far from the phase transition. We extend those methods to a large class of models based on the ASEP, where we add arbitrary spatial inhomogeneities in the rates and short-range potentials between the particles. We show that, as for the regular ASEP, the large deviation function of the current scales differently with the size of the system if one considers very high or very low currents, pointing to the existence of a dynamical phase transition between those two regimes: high current large deviations are extensive in the system size, and the typical states associated to them are Coulomb gases, which are highly correlated; low current large deviations do not depend on the system size, and the typical states associated to them are anti-shocks, consistently with a hydrodynamic behaviour. Finally, we illustrate our results numerically on a simple example, and we interpret the transition in terms of the current pushing beyond its maximal hydrodynamic value, as well as relate it to the appearance of Tracy-Widom distributions in the relaxation statistics of such models.

  13. Approximate median regression for complex survey data with skewed response.

    PubMed

    Fraser, Raphael André; Lipsitz, Stuart R; Sinha, Debajyoti; Fitzmaurice, Garrett M; Pan, Yi

    2016-12-01

    The ready availability of public-use data from various large national complex surveys has immense potential for the assessment of population characteristics using regression models. Complex surveys can be used to identify risk factors for important diseases such as cancer. Existing statistical methods based on estimating equations and/or utilizing resampling methods are often not valid with survey data due to complex survey design features, that is, stratification, multistage sampling, and weighting. In this article, we accommodate these design features in the analysis of highly skewed response variables arising from large complex surveys. Specifically, we propose a double-transform-both-sides (DTBS)-based estimating equations approach to estimate the median regression parameters of the highly skewed response; the DTBS approach applies the same Box-Cox type transformation twice to both the outcome and the regression function. The usual sandwich variance estimate can be used in our approach, whereas a resampling approach would be needed for a pseudo-likelihood based on minimizing absolute deviations (MAD). Furthermore, the approach is relatively robust to the true underlying distribution and has a much smaller mean square error than a MAD approach. The method is motivated by an analysis of laboratory data on urinary iodine (UI) concentration from the National Health and Nutrition Examination Survey. © 2016, The International Biometric Society.

  14. Approximate Median Regression for Complex Survey Data with Skewed Response

    PubMed Central

    Fraser, Raphael André; Lipsitz, Stuart R.; Sinha, Debajyoti; Fitzmaurice, Garrett M.; Pan, Yi

    2016-01-01

    Summary The ready availability of public-use data from various large national complex surveys has immense potential for the assessment of population characteristics using regression models. Complex surveys can be used to identify risk factors for important diseases such as cancer. Existing statistical methods based on estimating equations and/or utilizing resampling methods are often not valid with survey data due to complex survey design features, that is, stratification, multistage sampling, and weighting. In this paper, we accommodate these design features in the analysis of highly skewed response variables arising from large complex surveys. Specifically, we propose a double-transform-both-sides (DTBS)-based estimating equations approach to estimate the median regression parameters of the highly skewed response; the DTBS approach applies the same Box-Cox type transformation twice to both the outcome and the regression function. The usual sandwich variance estimate can be used in our approach, whereas a resampling approach would be needed for a pseudo-likelihood based on minimizing absolute deviations (MAD). Furthermore, the approach is relatively robust to the true underlying distribution and has a much smaller mean square error than a MAD approach. The method is motivated by an analysis of laboratory data on urinary iodine (UI) concentration from the National Health and Nutrition Examination Survey. PMID:27062562

  15. A Large Deviations Analysis of Certain Qualitative Properties of Parallel Tempering and Infinite Swapping Algorithms

    DOE PAGES

    Doll, J.; Dupuis, P.; Nyquist, P.

    2017-02-08

    Parallel tempering, or replica exchange, is a popular method for simulating complex systems. The idea is to run parallel simulations at different temperatures, and at a given swap rate exchange configurations between the parallel simulations. From the perspective of large deviations it is optimal to let the swap rate tend to infinity, and it is possible to construct a corresponding simulation scheme, known as infinite swapping. In this paper we propose a novel use of large deviations for empirical measures for a more detailed analysis of the infinite swapping limit in the setting of continuous time jump Markov processes. Using the large deviations rate function and associated stochastic control problems we consider a diagnostic based on temperature assignments, which can be easily computed during a simulation. We show that the convergence of this diagnostic to its a priori known limit is a necessary condition for the convergence of infinite swapping. The rate function is also used to investigate the impact of asymmetries in the underlying potential landscape, and where in the state space poor sampling is most likely to occur.
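
    The finite-swap-rate scheme being analyzed can be sketched in its simplest two-temperature form: independent Metropolis moves at each temperature plus occasional configuration swaps accepted with probability min(1, exp(Δβ ΔE)). The double-well potential and parameters below are illustrative; the infinite swapping limit studied in the paper replaces these discrete swaps with a symmetrized dynamics:

        import numpy as np

        rng = np.random.default_rng(9)
        V = lambda x: (x**2 - 1.0)**2              # double-well potential
        betas = np.array([5.0, 1.0])               # cold and hot inverse temps
        x = np.array([-1.0, 1.0])                  # one replica per temperature
        cold = []

        for it in range(50000):
            for k in range(2):                     # Metropolis move per replica
                prop = x[k] + rng.normal(0.0, 0.5)
                if rng.random() < np.exp(-betas[k]*(V(prop) - V(x[k]))):
                    x[k] = prop
            # swap attempt between the two temperatures
            if rng.random() < np.exp((betas[0] - betas[1])*(V(x[0]) - V(x[1]))):
                x = x[::-1].copy()
            cold.append(x[0])

        print(np.mean(np.array(cold) > 0))         # cold replica visits both wells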

  16. Quantum stochastic thermodynamics on harmonic networks

    DOE PAGES

    Deffner, Sebastian

    2016-01-04

    Fluctuation theorems are symmetry relations for the probability of observing an amount of entropy production in a finite-time process. In a recent paper, Pigeon et al (2016 New J. Phys. 18 013009) derived fluctuation theorems for harmonic networks by means of large deviation theory. Their novel approach is illustrated with various examples of experimentally relevant systems. As a main result, Pigeon et al provide new insight into how to consistently formulate quantum stochastic thermodynamics, and provide new and robust tools for the study of the thermodynamics of quantum harmonic networks.

  17. Numerical solutions for patterns statistics on Markov chains.

    PubMed

    Nuel, Gregory

    2006-01-01

    We propose here a review of the methods available to compute pattern statistics on text generated by a Markov source. Theoretical as well as numerical aspects are detailed for a wide range of techniques (exact, Gaussian, large deviations, binomial, and compound Poisson). The SPatt package (Statistics for Pattern, free software available at http://stat.genopole.cnrs.fr/spatt), implementing all these methods, is then used to compare all these approaches in terms of computational time and reliability in the most complete pattern statistics benchmark available at the present time.

  18. Quantum stochastic thermodynamics on harmonic networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deffner, Sebastian

    Fluctuation theorems are symmetry relations for the probability of observing an amount of entropy production in a finite-time process. In a recent paper, Pigeon et al (2016 New J. Phys. 18 013009) derived fluctuation theorems for harmonic networks by means of large deviation theory. Their novel approach is illustrated with various examples of experimentally relevant systems. As a main result, Pigeon et al provide new insight into how to consistently formulate quantum stochastic thermodynamics, and provide new and robust tools for the study of the thermodynamics of quantum harmonic networks.

  19. The Laplace method for probability measures in Banach spaces

    NASA Astrophysics Data System (ADS)

    Piterbarg, V. I.; Fatalov, V. R.

    1995-12-01

    Contents
    §1. Introduction
    Chapter I. Asymptotic analysis of continual integrals in Banach space, depending on a large parameter
    §2. The large deviation principle and logarithmic asymptotics of continual integrals
    §3. Exact asymptotics of Gaussian integrals in Banach spaces: the Laplace method
      3.1. The Laplace method for Gaussian integrals taken over the whole Hilbert space: isolated minimum points ([167], I)
      3.2. The Laplace method for Gaussian integrals in Hilbert space: the manifold of minimum points ([167], II)
      3.3. The Laplace method for Gaussian integrals in Banach space ([90], [174], [176])
      3.4. Exact asymptotics of large deviations of Gaussian norms
    §4. The Laplace method for distributions of sums of independent random elements with values in Banach space
      4.1. The case of a non-degenerate minimum point ([137], I)
      4.2. A degenerate isolated minimum point and the manifold of minimum points ([137], II)
    §5. Further examples
      5.1. The Laplace method for the local time functional of a Markov symmetric process ([217])
      5.2. The Laplace method for diffusion processes, a finite number of non-degenerate minimum points ([116])
      5.3. Asymptotics of large deviations for Brownian motion in the Hölder norm
      5.4. Non-asymptotic expansion of a strong stable law in Hilbert space ([41])
    Chapter II. The double sum method - a version of the Laplace method in the space of continuous functions
    §6. Pickands' method of double sums
      6.1. General situations
      6.2. Asymptotics of the distribution of the maximum of a Gaussian stationary process
      6.3. Asymptotics of the probability of a large excursion of a Gaussian non-stationary process
    §7. Probabilities of large deviations of trajectories of Gaussian fields
      7.1. Homogeneous fields and fields with constant dispersion
      7.2. Finitely many maximum points of dispersion
      7.3. Manifold of maximum points of dispersion
      7.4. Asymptotics of distributions of maxima of Wiener fields
    §8. Exact asymptotics of large deviations of the norm of Gaussian vectors and processes with values in the spaces L_k^p and l^2. Gaussian fields with the set of parameters in Hilbert space
      8.1. Exact asymptotics of the distribution of the l_k^p-norm of a Gaussian finite-dimensional vector with dependent coordinates, p > 1
      8.2. Exact asymptotics of probabilities of high excursions of trajectories of processes of type χ²
      8.3. Asymptotics of the probabilities of large deviations of Gaussian processes with a set of parameters in Hilbert space [74]
      8.4. Asymptotics of distributions of maxima of the norms of l^2-valued Gaussian processes
      8.5. Exact asymptotics of large deviations for the l^2-valued Ornstein-Uhlenbeck process
    Bibliography

  20. Evaluation of Large-scale Data to Detect Irregularity in Payment for Medical Services. An Extended Use of Benford's Law.

    PubMed

    Park, Junghyun A; Kim, Minki; Yoon, Seokjoon

    2016-05-17

    Sophisticated anti-fraud systems for the healthcare sector have been built based on several statistical methods. Although existing methods have been developed to detect fraud in the healthcare sector, these algorithms consume considerable time and cost, and lack a theoretical basis to handle large-scale data. Based on mathematical theory, this study proposes a new approach to using Benford's Law in that we closely examined the individual-level data to identify specific fees for in-depth analysis. We extended the mathematical theory to demonstrate the manner in which large-scale data conform to Benford's Law. Then, we empirically tested its applicability using actual large-scale healthcare data from Korea's Health Insurance Review and Assessment (HIRA) National Patient Sample (NPS). For Benford's Law, we considered the mean absolute deviation (MAD) formula to test the large-scale data. We conducted our study on 32 diseases, comprising 25 representative diseases and 7 DRG-regulated diseases. We performed an empirical test on 25 diseases, showing the applicability of Benford's Law to large-scale data in the healthcare industry. For the seven DRG-regulated diseases, we examined the individual-level data to identify specific fees to carry out an in-depth analysis. Among the eight categories of medical costs, we considered the strength of certain irregularities based on the details of each DRG-regulated disease. Using the degree of abnormality, we propose priority action to be taken by government health departments and private insurance institutions to bring unnecessary medical expenses under control. However, when we detect deviations from Benford's Law, relatively high contamination ratios are required at conventional significance levels.
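
    The first-digit MAD test at the core of this approach fits in a few lines: extract each amount's leading significant digit, compare observed frequencies with Benford's P(d) = log10(1 + 1/d), and report the mean absolute deviation. The lognormal claim amounts below are synthetic placeholders for the HIRA fee data:

        import numpy as np

        def benford_mad(x):
            """Mean absolute deviation of observed first-digit frequencies
            from Benford's Law."""
            x = np.abs(np.asarray(x, dtype=float))
            x = x[x > 0]
            first = np.floor(10**(np.log10(x) % 1)).astype(int)  # leading digit
            obs = np.bincount(first, minlength=10)[1:10] / len(first)
            expected = np.log10(1 + 1/np.arange(1, 10))
            return np.mean(np.abs(obs - expected))

        rng = np.random.default_rng(10)
        fees = rng.lognormal(mean=10.0, sigma=1.5, size=50000)  # synthetic fees
        print(benford_mad(fees))   # small MAD: data roughly follow Benford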

  1. Large-deviation probabilities for correlated Gaussian processes and intermittent dynamical systems

    NASA Astrophysics Data System (ADS)

    Massah, Mozhdeh; Nicol, Matthew; Kantz, Holger

    2018-05-01

    In its classical version, the theory of large deviations makes quantitative statements about the probability of outliers when estimating time averages, if time series data are identically independently distributed. We study large-deviation probabilities (LDPs) for time averages in short- and long-range correlated Gaussian processes and show that long-range correlations lead to subexponential decay of LDPs. A particular deterministic intermittent map can, depending on a control parameter, also generate long-range correlated time series. We illustrate numerically, in agreement with the mathematical literature, that this type of intermittency leads to a power law decay of LDPs. The power law decay holds irrespective of whether the correlation time is finite or infinite, and hence irrespective of whether the central limit theorem applies or not.

  2. Endometrioid adenocarcinoma of the uterus with a minimal deviation invasive pattern.

    PubMed

    Landry, D; Mai, K T; Senterman, M K; Perkins, D G; Yazdi, H M; Veinot, J P; Thomas, J

    2003-01-01

    Minimal deviation adenocarcinoma of endometrioid type is a rare pathological entity. We describe a variant of typical endometrioid adenocarcinoma associated with minimal deviation adenocarcinoma of endometrioid type. One 'pilot' case of minimal deviation adenocarcinoma of endometrioid type associated with typical endometrioid adenocarcinoma was encountered at our institution in 2001. A second case of same type was received in consultation. We reviewed 168 consecutive hysterectomy specimens diagnosed with 'endometrioid adenocarcinoma' specifically to identify areas of minimal deviation adenocarcinoma of endometrioid type. Immunohistochemistry was done with the following antibodies: MIB1, p53, oestrogen receptor (ER), progesterone receptor (PR), cytokeratin 7 (CK7), cytokeratin 20 (CK20), carcinoembryonic antigen (CEA), and vimentin (VIM). Four additional cases of minimal deviation adenocarcinoma of endometrioid type were identified. All six cases of minimal deviation adenocarcinoma of endometrioid type were associated with superficial endometrioid adenocarcinoma. In two cases with a large amount of minimal deviation adenocarcinoma of endometrioid type, the cervix was involved. The immunoprofile of two representative cases was ER+, PR+, CK7+, CK20-, CEA-, VIM+. MIB1 immunostaining of four cases revealed little proliferative activity of the minimal deviation adenocarcinoma of endometrioid type glandular cells (0-1%) compared with the associated 'typical' endometrioid adenocarcinoma (20-30%). The same four cases showed no p53 immunostaining in minimal deviation adenocarcinoma of endometrioid type compared with a range of positive staining in the associated endometrioid adenocarcinoma. Minimal deviation adenocarcinoma of endometrioid type more often develops as a result of differentiation from typical endometrioid adenocarcinoma than de novo. Due to its deceptively benign microscopic appearance, minimal deviation adenocarcinoma of endometrioid type may be overlooked and may lead to incorrect assessment of tumour depth and pathological stage. There was a tendency for tumour with a large amount of minimal deviation adenocarcinoma of endometrioid type to invade the cervix.

  3. Adaptive Gain-based Stable Power Smoothing of a DFIG

    DOE PAGES

    Muljadi, Eduard; Lee, Hyewon; Hwang, Min; ...

    2017-11-01

    In a power system that has a high wind penetration, the output power fluctuation of a large-scale wind turbine generator (WTG) caused by the varying wind speed increases the maximum frequency deviation, which is an important metric to assess the quality of electricity, because of the reduced system inertia. This paper proposes a stable power-smoothing scheme of a doubly-fed induction generator (DFIG) that can suppress the maximum frequency deviation, particularly for a power system with a high wind penetration. To do this, the proposed scheme employs an additional control loop relying on the system frequency deviation that operates in combination with the maximum power point tracking control loop. To improve the power-smoothing capability while guaranteeing the stable operation of a DFIG, the gain of the additional loop is modified with the rotor speed and frequency deviation. The gain is set to be high if the rotor speed and/or frequency deviation is large. Here, the simulation results based on the IEEE 14-bus system demonstrate that the proposed scheme significantly lessens the output power fluctuation of a WTG under various scenarios by modifying the gain with the rotor speed and frequency deviation, and thereby it can regulate the frequency deviation within a narrow range.

  4. Adaptive Gain-based Stable Power Smoothing of a DFIG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muljadi, Eduard; Lee, Hyewon; Hwang, Min

    In a power system that has a high wind penetration, the output power fluctuation of a large-scale wind turbine generator (WTG) caused by the varying wind speed increases the maximum frequency deviation, which is an important metric to assess the quality of electricity, because of the reduced system inertia. This paper proposes a stable power-smoothing scheme of a doubly-fed induction generator (DFIG) that can suppress the maximum frequency deviation, particularly for a power system with a high wind penetration. To do this, the proposed scheme employs an additional control loop relying on the system frequency deviation that operates in combination with the maximum power point tracking control loop. To improve the power-smoothing capability while guaranteeing the stable operation of a DFIG, the gain of the additional loop is modified with the rotor speed and frequency deviation. The gain is set to be high if the rotor speed and/or frequency deviation is large. Here, the simulation results based on the IEEE 14-bus system demonstrate that the proposed scheme significantly lessens the output power fluctuation of a WTG under various scenarios by modifying the gain with the rotor speed and frequency deviation, and thereby it can regulate the frequency deviation within a narrow range.

  5. MUSiC—An Automated Scan for Deviations between Data and Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Meyer, Arnd

    2010-02-01

    A model independent analysis approach is presented, systematically scanning the data for deviations from the standard model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of event generators. The approach is sensitive to a variety of models of new physics, including those not yet thought of.

  6. MUSiC - An Automated Scan for Deviations between Data and Monte Carlo Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, Arnd

    2010-02-10

    A model independent analysis approach is presented, systematically scanning the data for deviations from the standard model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of event generators. The approach is sensitive to a variety of models of new physics, including those not yet thought of.

  7. Assessing Explosives Safety Risks, Deviations, And Consequences

    DTIC Science & Technology

    2009-07-31

    Technical Paper 23, 31 July 2009, DDESB: Assessing Explosives Safety Risks, Deviations, And Consequences. [Recoverable fragment from the report documentation page:] "...and approaches to assist warfighters in executing their mission, conserving resources, and maximizing operational effectiveness."

  8. Power-Smoothing Scheme of a DFIG Using the Adaptive Gain Depending on the Rotor Speed and Frequency Deviation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Hyewon; Hwang, Min; Muljadi, Eduard

    In an electric power grid that has a high penetration level of wind, the power fluctuation of a large-scale wind power plant (WPP) caused by varying wind speeds deteriorates the system frequency regulation. This paper proposes a power-smoothing scheme of a doubly-fed induction generator (DFIG) that significantly mitigates the system frequency fluctuation while preventing over-deceleration of the rotor speed. The proposed scheme employs an additional control loop relying on the system frequency deviation that operates in combination with the maximum power point tracking control loop. To improve the power-smoothing capability while preventing over-deceleration of the rotor speed, the gain of the additional loop is modified with the rotor speed and frequency deviation. The gain is set to be high if the rotor speed and/or frequency deviation is large. In conclusion, the simulation results based on the IEEE 14-bus system clearly demonstrate that the proposed scheme significantly lessens the output power fluctuation of a WPP under various scenarios by modifying the gain with the rotor speed and frequency deviation, and thereby it can regulate the frequency deviation within a narrow range.

  9. Power-Smoothing Scheme of a DFIG Using the Adaptive Gain Depending on the Rotor Speed and Frequency Deviation

    DOE PAGES

    Lee, Hyewon; Hwang, Min; Muljadi, Eduard; ...

    2017-04-18

    In an electric power grid that has a high penetration level of wind, the power fluctuation of a large-scale wind power plant (WPP) caused by varying wind speeds deteriorates the system frequency regulation. This paper proposes a power-smoothing scheme of a doubly-fed induction generator (DFIG) that significantly mitigates the system frequency fluctuation while preventing over-deceleration of the rotor speed. The proposed scheme employs an additional control loop relying on the system frequency deviation that operates in combination with the maximum power point tracking control loop. To improve the power-smoothing capability while preventing over-deceleration of the rotor speed, the gain of the additional loop is modified with the rotor speed and frequency deviation. The gain is set to be high if the rotor speed and/or frequency deviation is large. In conclusion, the simulation results based on the IEEE 14-bus system clearly demonstrate that the proposed scheme significantly lessens the output power fluctuation of a WPP under various scenarios by modifying the gain with the rotor speed and frequency deviation, and thereby it can regulate the frequency deviation within a narrow range.

  10. Retrieval of Aerosol Optical Properties from Ground-Based Remote Sensing Measurements: Aerosol Asymmetry Factor and Single Scattering Albedo

    NASA Astrophysics Data System (ADS)

    Qie, L.; Li, Z.; Li, L.; Li, K.; Li, D.; Xu, H.

    2018-04-01

    The Devaux-Vermeulen-Li (DVL) method is a simple approach for retrieving aerosol optical parameters from Sun-sky radiance measurements. Building on previous work retrieving the aerosol single scattering albedo (SSA) and scattering phase function, this study modified the DVL method to derive the aerosol asymmetry factor (g). To assess the algorithm's performance under various atmospheric aerosol conditions, retrievals were carried out on AERONET observations and the results compared with official AERONET products. The comparison shows that both the DVL SSA and g correlate well with those of AERONET. The RMSD and the absolute value of the MBD between the SSAs are 0.025 and 0.015 respectively, well below the AERONET-declared SSA uncertainty of 0.03 for all wavelengths. For the asymmetry factor g, the RMSDs are smaller than 0.02 and the absolute values of the MBDs smaller than 0.01 in the 675, 870 and 1020 nm bands. Several factors potentially affecting retrieval quality (the aerosol optical depth (AOD), the solar zenith angle, the sky residual error, the sphericity proportion and the Ångström exponent) were then examined by computing the SSA and g deviations over intervals of each factor. Both the SSA and g deviations were found to decrease with AOD and solar zenith angle, and to increase with sky residual error. However, the deviations show no clear sensitivity to the sphericity proportion or the Ångström exponent, indicating that the DVL algorithm is applicable to both large non-spherical particles and spherical particles. The DVL results are suitable for evaluating the direct radiative effects of different aerosol types.

  11. Large-scale structure perturbation theory without losing stream crossing

    DOE PAGES

    McDonald, Patrick; Vlah, Zvonimir

    2018-01-10

    Here, we suggest an approach to perturbative calculations of large-scale clustering in the Universe that includes from the start the stream crossing (multiple velocities for mass elements at a single position) that is lost in traditional calculations. Starting from a functional integral over displacement, the perturbative series expansion is in deviations from (truncated) Zel’dovich evolution, with terms that can be computed exactly even for stream-crossed displacements. We evaluate the one-loop formulas for displacement and density power spectra numerically in 1D, finding dramatic improvement in agreement with N-body simulations compared to the Zel’dovich power spectrum (which is exact in 1D up to stream crossing). Beyond 1D, our approach could represent an improvement over previous expansions even aside from the inclusion of stream crossing, but we have not investigated this numerically. In the process we show how to achieve effective-theory-like regulation of small-scale fluctuations without free parameters.

  13. Modelling the dispersion and transport of reactive pollutants in a deep urban street canyon: using large-eddy simulation.

    PubMed

    Zhong, Jian; Cai, Xiao-Ming; Bloss, William James

    2015-05-01

    This study investigates the dispersion and transport of reactive pollutants in a deep urban street canyon with an aspect ratio of 2 under neutral meteorological conditions using large-eddy simulation. The spatial variation of pollutants is significant due to the existence of two unsteady vortices. The deviation of species abundance from chemical equilibrium for the upper vortex is greater than that for the lower vortex. The interplay of dynamics and chemistry is investigated using two metrics: the photostationary state defect, and the inferred ozone production rate. The latter is found to be negative at all locations within the canyon, pointing to a systematic negative offset to ozone production rates inferred by analogous approaches in environments with incomplete mixing of emissions. This study demonstrates an approach to quantify parameters for a simplified two-box model, which could support traffic management and urban planning strategies and personal exposure assessment. Copyright © 2015 Elsevier Ltd. All rights reserved.
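
    Both metrics are pointwise functions of the NO-NO2-O3 triad and can be evaluated directly on LES output. A minimal sketch, with illustrative (not paper-specific) rate coefficients:

```python
def pss_defect(j_no2, k_no_o3, no, no2, o3):
    """Photostationary-state defect: equals 1 when NO, NO2 and O3 are in
    photochemical equilibrium; deviations flag incomplete mixing.
    Concentrations in molecules cm^-3, j in s^-1, k in cm^3 s^-1."""
    return (j_no2 * no2) / (k_no_o3 * no * o3)

def inferred_o3_production(j_no2, k_no_o3, no, no2, o3):
    """Ozone production rate as inferred from the PSS imbalance; negative
    values correspond to the systematic offset noted in the abstract."""
    return j_no2 * no2 - k_no_o3 * no * o3

# Illustrative magnitudes only: j(NO2) ~ 8e-3 s^-1 and
# k(NO+O3) ~ 1.8e-14 cm^3 s^-1 near room temperature.
print(pss_defect(8e-3, 1.8e-14, 2.5e11, 5.0e11, 7.5e11))
print(inferred_o3_production(8e-3, 1.8e-14, 2.5e11, 5.0e11, 7.5e11))
```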

  14. A new multiple air beam approach for in-process form error optical measurement

    NASA Astrophysics Data System (ADS)

    Gao, Y.; Li, R.

    2018-07-01

    In-process measurement can provide feedback for the control of workpiece precision in terms of size, roughness and, in particular, mid-spatial frequency form error. Optical measurement methods are non-contact and possess the high precision required for in-process form error measurement. In precision machining, coolant is commonly used to reduce heat generation and thermal deformation on the workpiece surface. However, coolant forms an opaque barrier if optical measurement methods are used. In this paper, a new multiple air beam approach is proposed. The new approach permits the displacement of coolant arriving from any direction and with a large thickness, i.e. with a large amount of coolant. The model, the working principle, and the key features of the new approach are presented. Based on the proposed approach, a new in-process form error optical measurement system is developed, and its coolant removal capability and performance are assessed. The experimental results show that the workpiece surface y(x, z) can be measured successfully with a standard deviation of up to 0.3011 µm, i.e. a relative uncertainty (2σ) of up to 4.35%, even under a large amount of coolant (a coolant thickness of 15 mm) with the workpiece surface deeply immersed in the opaque coolant. The results also show that, in terms of coolant removal capability, air supply and air velocity, the proposed approach improves on the previous single air beam approach by factors of 3.3, 1.3 and 5.3, respectively. The results demonstrate the significant improvements brought by the new multiple air beam method together with the developed measurement system.

  15. Closed-form confidence intervals for functions of the normal mean and standard deviation.

    PubMed

    Donner, Allan; Zou, G Y

    2012-08-01

    Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.
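
    As a concrete instance, the upper Bland-Altman limit of agreement is θ = μ + 1.96σ. A sketch of the recipe as the abstract describes it: form exact confidence limits for the mean (t-based) and the standard deviation (chi-square-based) separately, then combine the recovered variance estimates (a MOVER-style combination). The notation here is mine, not the authors':

```python
import numpy as np
from scipy import stats

def mover_ci_mean_plus_c_sd(x, c=1.96, alpha=0.05):
    """Closed-form CI for theta = mu + c*sigma (e.g. an upper limit of
    agreement when c = 1.96), combining separate exact CIs for the mean
    and SD; a sketch of the approach described in the abstract."""
    x = np.asarray(x, dtype=float)
    n = x.size
    xbar, s = x.mean(), x.std(ddof=1)
    # Exact CI for the mean (t distribution).
    t = stats.t.ppf(1 - alpha / 2, n - 1)
    l_mu, u_mu = xbar - t * s / np.sqrt(n), xbar + t * s / np.sqrt(n)
    # Exact CI for the SD (chi-square distribution).
    chi2_lo = stats.chi2.ppf(alpha / 2, n - 1)
    chi2_hi = stats.chi2.ppf(1 - alpha / 2, n - 1)
    l_sd = s * np.sqrt((n - 1) / chi2_hi)
    u_sd = s * np.sqrt((n - 1) / chi2_lo)
    # Recover variance estimates from the limits and combine.
    theta = xbar + c * s
    L = theta - np.sqrt((xbar - l_mu) ** 2 + (c * (s - l_sd)) ** 2)
    U = theta + np.sqrt((u_mu - xbar) ** 2 + (c * (u_sd - s)) ** 2)
    return theta, (L, U)

rng = np.random.default_rng(1)
print(mover_ci_mean_plus_c_sd(rng.normal(0.2, 1.0, size=50)))
```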

  16. Advancing Underwater Acoustic Communication for Autonomous Distributed Networks via Sparse Channel Sensing, Coding, and Navigation Support

    DTIC Science & Technology

    2012-09-30

    Estimation Methods for Underwater OFDM. 5) Two Iterative Receivers for Distributed MIMO-OFDM with Large Doppler Deviations. 6) Asynchronous Multiuser... multi-input multi-output (MIMO) OFDM is also pursued, where it is shown that the proposed hybrid initialization enables drastically improved receiver... are investigated. 5) Two Iterative Receivers for Distributed MIMO-OFDM with Large Doppler Deviations. This work studies a distributed system with

  17. Extended-Range High-Resolution Dynamical Downscaling over a Continental-Scale Domain

    NASA Astrophysics Data System (ADS)

    Husain, S. Z.; Separovic, L.; Yu, W.; Fernig, D.

    2014-12-01

    High-resolution mesoscale simulations, when applied for downscaling meteorological fields over large spatial domains and for extended time periods, can provide valuable information for many practical application scenarios including the weather-dependent renewable energy industry. In the present study, a strategy has been proposed to dynamically downscale coarse-resolution meteorological fields from Environment Canada's regional analyses for a period of multiple years over the entire Canadian territory. The study demonstrates that a continuous mesoscale simulation over the entire domain is the most suitable approach in this regard. Large-scale deviations in the different meteorological fields pose the biggest challenge for extended-range simulations over continental scale domains, and the enforcement of the lateral boundary conditions is not sufficient to restrict such deviations. A scheme has therefore been developed to spectrally nudge the simulated high-resolution meteorological fields at the different model vertical levels towards those embedded in the coarse-resolution driving fields derived from the regional analyses. A series of experiments were carried out to determine the optimal nudging strategy including the appropriate nudging length scales, nudging vertical profile and temporal relaxation. A forcing strategy based on grid nudging of the different surface fields, including surface temperature, soil-moisture, and snow conditions, towards their expected values obtained from a high-resolution offline surface scheme was also devised to limit any considerable deviation in the evolving surface fields due to extended-range temporal integrations. The study shows that ensuring large-scale atmospheric similarities helps to deliver near-surface statistical scores for temperature, dew point temperature and horizontal wind speed that are better or comparable to the operational regional forecasts issued by Environment Canada. Furthermore, the meteorological fields resulting from the proposed downscaling strategy have significantly improved spatiotemporal variance compared to those from the operational forecasts, and any time series generated from the downscaled fields do not suffer from discontinuities due to switching between the consecutive forecasts.
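
    The core operation of spectral nudging is to relax only the large-scale part of the simulated field toward the driving analyses. A one-dimensional sketch of a single relaxation step, assuming a sharp spectral cutoff and a single relaxation time (the study tunes nudging length scales, a vertical profile and the temporal relaxation):

```python
import numpy as np

def spectral_nudge(field, driving, cutoff_km, dx_km, dt_s, tau_s):
    """One nudging step: relax only scales longer than `cutoff_km`
    toward the coarse driving field.  A 1-D sketch of the idea; the
    study applies it per vertical level over a 2-D continental domain."""
    k = np.fft.rfftfreq(field.size, d=dx_km)           # cycles per km
    mask = k <= 1.0 / cutoff_km                        # keep large scales
    diff_hat = np.fft.rfft(driving - field) * mask     # large-scale error
    correction = np.fft.irfft(diff_hat, n=field.size)
    return field + (dt_s / tau_s) * correction         # temporal relaxation

# Example: nudge scales longer than ~1000 km with a 6 h relaxation time.
x = np.linspace(0, 5000, 512, endpoint=False)
model = np.sin(2 * np.pi * x / 2500) + 0.3 * np.sin(2 * np.pi * x / 100)
driving = 1.2 * np.sin(2 * np.pi * x / 2500)
nudged = spectral_nudge(model, driving, cutoff_km=1000,
                        dx_km=x[1] - x[0], dt_s=600, tau_s=6 * 3600)
```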

  18. Multiple sensor fault diagnosis for dynamic processes.

    PubMed

    Li, Cheng-Chih; Jeng, Jyh-Cheng

    2010-10-01

    Modern industrial plants are usually large scale and contain a great number of sensors. Sensor fault diagnosis is crucial for process safety and optimal operation. This paper proposes a systematic approach to detect, isolate and identify multiple sensor faults for multivariate dynamic systems. The work first defines deviation vectors for sensor observations, and then defines and derives the basic sensor fault matrix (BSFM), consisting of the normalized basic fault vectors, by several different methods. By projecting a process deviation vector onto the space spanned by the BSFM, the resulting vector of weights on each direction is used for multiple sensor fault diagnosis. The study also proposes a novel monitoring index, derives the corresponding sensor fault detectability, and uses the weight vector to isolate and identify multiple sensor faults, discussing isolatability and identifiability. Simulation examples and a comparison with two conventional PCA-based contribution plots are presented to demonstrate the effectiveness of the proposed methodology. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
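
    The diagnosis step amounts to expressing an observed deviation vector in the basis formed by the BSFM columns. A toy least-squares sketch of that projection; deriving the BSFM from the process dynamics, the monitoring index and the detectability analysis are beyond this fragment:

```python
import numpy as np

def sensor_fault_weights(deviation, bsfm):
    """Project a process deviation vector onto the space spanned by the
    normalized basic sensor fault vectors (columns of `bsfm`) and return
    the weight on each fault direction."""
    weights, *_ = np.linalg.lstsq(bsfm, deviation, rcond=None)
    return weights

# Toy example: 4 sensors, fault directions along sensors 1 and 3
# (hypothetical BSFM; the paper derives it from the process model).
bsfm = np.eye(4)[:, [0, 2]]
deviation = np.array([2.0, 0.1, -1.5, 0.05])   # biases on sensors 1 and 3
print(sensor_fault_weights(deviation, bsfm))   # -> approx [2.0, -1.5]
```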

  19. The statistical treatment implemented to obtain the planetary protection bioburdens for the Mars Science Laboratory mission

    NASA Astrophysics Data System (ADS)

    Beaudet, Robert A.

    2013-06-01

    NASA Planetary Protection Policy requires that Category IV missions, such as those going to the surface of Mars, include detailed assessment and documentation of the bioburden on the spacecraft at launch. In prior missions to Mars, the approaches used to estimate the bioburden could easily be conservative without penalizing the project, because spacecraft elements such as the descent and landing stages had relatively small surface areas and volumes. With the advent of a large spacecraft such as the Mars Science Laboratory (MSL), it became necessary to use a modified (still conservative but more pragmatic) statistical treatment to obtain the standard deviations and the bioburden densities at about the 99.9% confidence limits. This article describes both the Gaussian and Poisson statistics that were implemented to analyze the bioburden data from the MSL spacecraft prior to launch. The standard deviations were weighted by the areas sampled with each swab or wipe. Some typical cases are given and discussed.
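
    For low swab counts, the Poisson branch gives a conservative density from the exact chi-square upper bound on a Poisson mean. A minimal sketch with hypothetical numbers; the article's Gaussian branch additionally weights standard deviations by the sampled areas:

```python
from scipy import stats

def poisson_upper_bound(count, conf=0.999):
    """Exact upper confidence bound on a Poisson mean given an observed
    colony count, via the standard chi-square relation."""
    return 0.5 * stats.chi2.ppf(conf, 2 * count + 2)

def bioburden_density(count, area_m2, conf=0.999):
    """Conservative spore density (spores per m^2) at the given
    confidence level; a sketch of the Poisson branch of the treatment,
    with hypothetical numbers."""
    return poisson_upper_bound(count, conf) / area_m2

print(bioburden_density(count=3, area_m2=0.1))
```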

  20. Finding new pathway-specific regulators by clustering method using threshold standard deviation based on DNA chip data of Streptomyces coelicolor.

    PubMed

    Yang, Yung-Hun; Kim, Ji-Nu; Song, Eunjung; Kim, Eunjung; Oh, Min-Kyu; Kim, Byung-Gee

    2008-09-01

    In order to identify the regulators involved in antibiotic production or time-specific cellular events, the messenger ribonucleic acid (mRNA) expression data of two gene clusters, the actinorhodin (ACT) and undecylprodigiosin (RED) biosynthetic genes, were clustered with known mRNA expression data of regulators from S. coelicolor using a filtering method based on standard deviation and clustering analysis. The result identified five regulators, including two well-known ones, SCO3579 (WlbA) and SCO6722 (SsgD). Using overexpression and deletion of the regulator genes, we were able to identify two regulators, SCO0608 and SCO6808, acting as repressors of antibiotic production and sporulation. This approach can easily be applied to map out new regulators related to any target gene clusters of interest showing characteristic expression patterns. The result can also provide insightful information on the selection rules among a large number of regulators.

  1. Current status of 3D EPID-based in vivo dosimetry in The Netherlands Cancer Institute

    NASA Astrophysics Data System (ADS)

    Mijnheer, B.; Olaciregui-Ruiz, I.; Rozendaal, R.; Spreeuw, H.; van Herk, M.; Mans, A.

    2015-01-01

    3D in vivo dose verification using a-Si EPIDs is performed routinely in our institution for almost all RT treatments. The EPID-based 3D dose distribution is reconstructed using a back-projection algorithm and compared with the planned dose distribution using 3D gamma evaluation. Dose-reconstruction and gamma-evaluation software runs automatically, and deviations outside the alert criteria are immediately available and investigated, in combination with inspection of cone-beam CT scans. The implementation of our 3D EPID-based in vivo dosimetry approach was able to replace pre-treatment verification for more than 90% of the patient treatments. Clinically relevant deviations could be detected for approximately 1 out of 300 patient treatments (IMRT and VMAT). Most of these errors were patient-related anatomical changes or deviations from the routine clinical procedure, and would not have been detected by pre-treatment verification. Moreover, 3D EPID-based in vivo dose verification is a fast and accurate tool to assure the safe delivery of RT treatments. It provides clinically more useful information and is less time consuming than pre-treatment verification measurements. Automated 3D in vivo dosimetry is therefore a prerequisite for large-scale implementation of patient-specific quality assurance of RT treatments.
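
    The dose comparison relies on the gamma criterion, which combines dose difference and distance-to-agreement into a single pass/fail number per point. A simplified one-dimensional sketch of the criterion (the clinical evaluation is 3-D and uses the institution's alert criteria):

```python
import numpy as np

def gamma_index_1d(x, dose_eval, dose_ref, dta_mm=3.0, dd_frac=0.03):
    """Simplified 1-D gamma evaluation (3%/3 mm by default): for each
    reference point, the minimum combined dose-difference /
    distance-to-agreement metric over all evaluated points."""
    dd = dd_frac * dose_ref.max()          # global dose criterion
    gammas = np.empty_like(dose_ref)
    for i, (xi, dri) in enumerate(zip(x, dose_ref)):
        dist2 = ((x - xi) / dta_mm) ** 2
        dose2 = ((dose_eval - dri) / dd) ** 2
        gammas[i] = np.sqrt(np.min(dist2 + dose2))
    return gammas                          # pass where gamma <= 1

x = np.linspace(0, 100, 201)               # mm
ref = np.exp(-((x - 50) / 15) ** 2)
evald = np.exp(-((x - 51) / 15) ** 2) * 1.01
print((gamma_index_1d(x, evald, ref) <= 1).mean())   # pass fraction
```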

  2. Convex hulls of random walks in higher dimensions: A large-deviation study

    NASA Astrophysics Data System (ADS)

    Schawe, Hendrik; Hartmann, Alexander K.; Majumdar, Satya N.

    2017-12-01

    The distributions of the hypervolume V and surface ∂V of convex hulls of (multiple) random walks in higher dimensions are determined numerically, reaching probabilities far smaller than P = 10^(-1000) so as to estimate large-deviation properties. For arbitrary dimensions and large walk lengths T, we suggest a scaling behavior of the distribution with the length of the walk T similar to the two-dimensional case, as well as the behavior of the distributions in the tails. We underpin both with numerical data in d = 3 and d = 4 dimensions. Further, we confirm the analytically known means of those distributions and calculate their variances for large T.
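
    Plain Monte Carlo sampling reaches only the bulk of these distributions, but it illustrates the observables; the tails down to P = 10^(-1000) require the biased Markov-chain sampling the authors employ. A sketch using SciPy's ConvexHull:

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_observables(T, d, rng):
    """Hypervolume and surface of the convex hull of a T-step Gaussian
    random walk in d dimensions (bulk sampling only)."""
    walk = np.cumsum(rng.standard_normal((T, d)), axis=0)
    hull = ConvexHull(walk)
    return hull.volume, hull.area

rng = np.random.default_rng(0)
samples = np.array([hull_observables(1000, 3, rng) for _ in range(200)])
print(samples.mean(axis=0), samples.std(axis=0))
```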

  3. Mathematical solution of multilevel fractional programming problem with fuzzy goal programming approach

    NASA Astrophysics Data System (ADS)

    Lachhwani, Kailash; Poonia, Mahaveer Prasad

    2012-08-01

    In this paper, we present a procedure for solving multilevel fractional programming problems in a large hierarchical decentralized organization using a fuzzy goal programming approach. In the proposed method, the tolerance membership functions for the fuzzily described numerator and denominator parts of the objective functions of all levels, as well as for the control vectors of the higher-level decision makers, are defined by determining the individual optimal solutions of each of the level decision makers. A possible relaxation of the higher-level decisions is considered to avoid decision deadlock due to the conflicting nature of the objective functions. Then, the fuzzy goal programming approach is used to achieve the highest degree of each of the membership goals by minimizing the negative deviational variables. We also provide a sensitivity analysis with variation of the tolerance values on the decision vectors, showing with the help of a numerical example how the solution responds to changes of the tolerance values.

  4. Work fluctuations for a Brownian particle between two thermostats

    NASA Astrophysics Data System (ADS)

    Visco, Paolo

    2006-06-01

    We explicitly determine the large deviation function of the energy flow of a Brownian particle coupled to two heat baths at different temperatures. This toy model, initially introduced by Derrida and Brunet (2005, Einstein aujourd'hui (Les Ulis: EDP Sciences)), not only allows us to sort out the influence of initial conditions on large deviation functions but also allows us to pinpoint various restrictions bearing upon the range of validity of the Fluctuation Relation.

  5. Large Deviations and Transitions Between Equilibria for Stochastic Landau-Lifshitz-Gilbert Equation

    NASA Astrophysics Data System (ADS)

    Brzeźniak, Zdzisław; Goldys, Ben; Jegaraj, Terence

    2017-11-01

    We study a stochastic Landau-Lifshitz equation on a bounded interval with finite-dimensional noise. We first show that there exists a pathwise unique solution to this equation and that this solution enjoys the maximal regularity property. Next, we prove the large deviations principle for the small-noise asymptotics of solutions using the weak convergence method. An essential ingredient of the proof is the compactness, or weak-to-strong continuity, of the solution map for a deterministic Landau-Lifshitz equation when considered as a transformation of external fields. We then apply this large deviations principle to show that small noise can cause magnetisation reversal. We also show the importance of the shape anisotropy parameter for reducing the disturbance of the solution caused by small noise. The problem is motivated by applications, from ferromagnetic nanowires to the fabrication of magnetic memories.

  6. Evaluation of bacterial motility from non-Gaussianity of finite-sample trajectories using the large deviation principle

    NASA Astrophysics Data System (ADS)

    Hanasaki, Itsuo; Kawano, Satoyuki

    2013-11-01

    Motility of bacteria is usually recognized in the trajectory data and compared with Brownian motion, but the diffusion coefficient is insufficient to evaluate it. In this paper, we propose a method based on the large deviation principle. We show that it can be used to evaluate the non-Gaussian characteristics of model Escherichia coli motions and to distinguish combinations of the mean running duration and running speed that lead to the same diffusion coefficient. Our proposed method does not require chemical stimuli to induce the chemotaxis in a specific direction, and it is applicable to various types of self-propelling motions for which no a priori information of, for example, threshold parameters for run and tumble or head/tail direction is available. We also address the issue of the finite-sample effect on the large deviation quantities, but we propose to make use of it to characterize the nature of motility.
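
    One way to make this concrete is to estimate the scaled cumulant generating function of block sums of displacement increments, which is exactly quadratic for Brownian motion, so any extra curvature signals non-Gaussian motility. A sketch of the idea on synthetic data; this illustrates the principle, not the authors' estimator:

```python
import numpy as np

def scgf_estimate(increments, k_values, block=100):
    """Finite-sample estimate of lambda(k) = (1/n) log < exp(k * S_n) >,
    with S_n the sum of n increments, computed over non-overlapping
    blocks.  Quadratic in k for Brownian motion; curvature beyond k^2
    flags non-Gaussian self-propulsion."""
    x = np.asarray(increments)
    n_blocks = x.size // block
    sums = x[:n_blocks * block].reshape(n_blocks, block).sum(axis=1)
    return np.array([np.log(np.mean(np.exp(k * sums))) / block
                     for k in k_values])

rng = np.random.default_rng(2)
k = np.linspace(-0.5, 0.5, 11)
gauss = scgf_estimate(rng.normal(0, 1, 100_000), k)
# Run-and-tumble-like increments: persistent runs of +-1 plus noise.
runs = scgf_estimate(np.repeat(rng.choice([-1, 1], 2_000), 50)
                     + 0.2 * rng.normal(size=100_000), k)
print(np.c_[k, gauss, runs])
```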

  7. Hessian matrix approach for determining error field sensitivity to coil deviations

    NASA Astrophysics Data System (ADS)

    Zhu, Caoxiang; Hudson, Stuart R.; Lazerson, Samuel A.; Song, Yuntao; Wan, Yuanxi

    2018-05-01

    The presence of error fields has been shown to degrade plasma confinement and drive instabilities. Error fields can arise from many sources, but are predominantly attributed to deviations in the coil geometry. In this paper, we introduce a Hessian matrix approach for determining error field sensitivity to coil deviations. A primary cost function used for designing stellarator coils, the surface integral of normalized normal field errors, was adopted to evaluate the deviation of the generated magnetic field from the desired magnetic field. The FOCUS code (Zhu et al 2018 Nucl. Fusion 58 016008) is utilized to provide fast and accurate calculations of the Hessian. The sensitivities of error fields to coil displacements are then determined by the eigenvalues of the Hessian matrix. A proof-of-principle example is given on a CNT-like configuration. We anticipate that this new method could provide information to avoid dominant coil misalignments and simplify coil designs for stellarators.
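
    Once the Hessian of the cost function with respect to the coil degrees of freedom is available, the sensitivity analysis is an eigen-decomposition. A generic finite-difference sketch on a toy quadratic cost (FOCUS evaluates the Hessian analytically, which is what makes the approach fast):

```python
import numpy as np

def hessian_fd(cost, x0, eps=1e-4):
    """Central finite-difference Hessian of a scalar cost with respect
    to the coil parameter vector x; a generic stand-in for the
    analytical Hessian provided by FOCUS."""
    n = x0.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            f = 0.0
            for si, sj in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
                x = x0.copy()
                x[i] += si * eps
                x[j] += sj * eps
                f += si * sj * cost(x)
            H[i, j] = f / (4 * eps ** 2)
    return H

# Toy quadratic cost standing in for the normal-field error integral.
A = np.array([[4.0, 1.0], [1.0, 0.5]])
cost = lambda x: 0.5 * x @ A @ x
eigvals, eigvecs = np.linalg.eigh(hessian_fd(cost, np.zeros(2)))
# The eigenvector with the largest eigenvalue is the most damaging
# combination of coil deviations; tolerances should be tightest along it.
print(eigvals, eigvecs[:, -1])
```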

  8. A SIMPLE METHOD FOR EVALUATING DATA FROM AN INTERLABORATORY STUDY

    EPA Science Inventory

    Large-scale laboratory-and method-performance studies involving more than about 30 laboratories may be evaluated by calculating the HORRAT ratio for each test sample (HORRAT=[experimentally found among-laboratories relative standard deviation] divided by [relative standard deviat...

  9. Decreased fetal hemoglobin over time among youth with sickle cell disease on hydroxyurea is associated with higher urgent hospital use.

    PubMed

    Green, Nancy S; Manwani, Deepa; Qureshi, Mahvish; Ireland, Karen; Sinha, Arpan; Smaldone, Arlene M

    2016-12-01

    Hydroxyurea (HU) induces a dose-dependent increase in fetal hemoglobin (HbF) in sickle cell disease (SCD). Large deviation from the historical personal best (PBest) HbF, a clinic-based version of maximum dose, may identify a subset with suboptimal HU adherence over time. Retrospective clinical data from youth ages 10-18 years prescribed HU at two centers were extracted from medical records at three time points: pre-HU initiation, PBest and a recent assessment. A decrease from PBest HbF of 20% or more at recent assessment despite stable dosing was designated as high deviation from PBest. Acute hospital use was compared between 1-year periods, pre-HU and ±6 months for PBest and recent assessment. Groups were compared using descriptive and bivariate nonparametric statistics. Seventy-five youth, mean HU duration 5.9 years, met eligibility criteria. Mean ages at HU initiation, PBest and recent assessment were 8.0, 10.9 and 13.9 years, respectively. Despite stable dosing, the average HbF of 19.5% at PBest declined overall by 31.8% at recent assessment. PBest HbF declined by 11.7% and 40.1% in two groups; the latter, comprising 70.7% of the sample, had lower pre-HU and recent HbF and higher dosing, and experienced more urgent hospital use during the year framing recent assessment than during PBest; these findings were supported by sensitivity analysis. Decline from PBest HbF is a novel approach to assess HU effectiveness, is common among youth and may represent suboptimal adherence. Larger prospective studies using additional adherence measures are needed to confirm our approach of tracking HbF deviation over time and to define an appropriate cutoff. © 2016 Wiley Periodicals, Inc.

  10. Specializing network analysis to detect anomalous insider actions

    PubMed Central

    Chen, You; Nyemba, Steve; Zhang, Wen; Malin, Bradley

    2012-01-01

    Collaborative information systems (CIS) enable users to coordinate efficiently over shared tasks in complex distributed environments. For flexibility, they provide users with broad access privileges, which, as a side-effect, leave such systems vulnerable to various attacks. Some of the more damaging malicious activities stem from internal misuse, where users are authorized to access system resources. A promising class of insider threat detection models for CIS focuses on mining access patterns from audit logs; however, current models are limited in that they assume organizations have significant resources to generate labeled cases for training classifiers or assume the user has committed a large number of actions that deviate from “normal” behavior. In lieu of these assumptions, we introduce an approach that detects when specific actions of an insider deviate from expectation in the context of collaborative behavior. Specifically, in this paper, we introduce a specialized network anomaly detection model, or SNAD, to detect such events. This approach assesses the extent to which a user influences the similarity of the group of users that access a particular record in the CIS. From a theoretical perspective, we show that the proposed model is appropriate for detecting insider actions in dynamic collaborative systems. From an empirical perspective, we perform an extensive evaluation of SNAD with the access logs of two distinct environments: the patient record access logs of a large electronic health record system (6,015 users, 130,457 patients and 1,327,500 accesses) and the editing logs of Wikipedia (2,394,385 revisors, 55,200 articles and 6,482,780 revisions). We compare our model with several competing methods and demonstrate that SNAD is significantly more effective: on average it achieves 20–30% greater area under an ROC curve. PMID:23399988

  11. Efficiency of multi-breed genomic selection for dairy cattle breeds with different sizes of reference population.

    PubMed

    Hozé, C; Fritz, S; Phocas, F; Boichard, D; Ducrocq, V; Croiseau, P

    2014-01-01

    Single-breed genomic selection (GS) based on medium single nucleotide polymorphism (SNP) density (~50,000; 50K) is now routinely implemented in several large cattle breeds. However, building large enough reference populations remains a challenge for many medium or small breeds. The high-density BovineHD BeadChip (HD chip; Illumina Inc., San Diego, CA) containing 777,609 SNP developed in 2010 is characterized by short-distance linkage disequilibrium expected to be maintained across breeds. Therefore, combining reference populations can be envisioned. A population of 1,869 influential ancestors from 3 dairy breeds (Holstein, Montbéliarde, and Normande) was genotyped with the HD chip. Using this sample, 50K genotypes were imputed within breed to high-density genotypes, leading to a large HD reference population. This population was used to develop a multi-breed genomic evaluation. The goal of this paper was to investigate the gain of multi-breed genomic evaluation for a small breed. The advantage of using a large breed (Normande in the present study) to mimic a small breed is the large potential validation population to compare alternative genomic selection approaches more reliably. In the Normande breed, 3 training sets were defined with 1,597, 404, and 198 bulls, and a unique validation set included the 394 youngest bulls. For each training set, estimated breeding values (EBV) were computed using pedigree-based BLUP, single-breed BayesC, or multi-breed BayesC for which the reference population was formed by any of the Normande training data sets and 4,989 Holstein and 1,788 Montbéliarde bulls. Phenotypes were standardized by within-breed genetic standard deviation, the proportion of polygenic variance was set to 30%, and the estimated number of SNP with a nonzero effect was about 7,000. The 2 genomic selection (GS) approaches were performed using either the 50K or HD genotypes. The correlations between EBV and observed daughter yield deviations (DYD) were computed for 6 traits and using the different prediction approaches. Compared with pedigree-based BLUP, the average gain in accuracy with GS in small populations was 0.057 for the single-breed and 0.086 for multi-breed approach. This gain was up to 0.193 and 0.209, respectively, with the large reference population. Improvement of EBV prediction due to the multi-breed evaluation was higher for animals not closely related to the reference population. In the case of a breed with a small reference population size, the increase in correlation due to multi-breed GS was 0.141 for bulls without their sire in reference population compared with 0.016 for bulls with their sire in reference population. These results demonstrate that multi-breed GS can contribute to increase genomic evaluation accuracy in small breeds. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  12. A framework for the direct evaluation of large deviations in non-Markovian processes

    NASA Astrophysics Data System (ADS)

    Cavallaro, Massimo; Harris, Rosemary J.

    2016-11-01

    We propose a general framework to simulate stochastic trajectories with arbitrarily long memory dependence and efficiently evaluate large deviation functions associated to time-extensive observables. This extends the ‘cloning’ procedure of Giardiná et al (2006 Phys. Rev. Lett. 96 120603) to non-Markovian systems. We demonstrate the validity of this method by testing non-Markovian variants of an ion-channel model and the totally asymmetric exclusion process, recovering results obtainable by other means.

  13. Efficient characterisation of large deviations using population dynamics

    NASA Astrophysics Data System (ADS)

    Brewer, Tobias; Clark, Stephen R.; Bradford, Russell; Jack, Robert L.

    2018-05-01

    We consider population dynamics as implemented by the cloning algorithm for analysis of large deviations of time-averaged quantities. We use the simple symmetric exclusion process with periodic boundary conditions as a prototypical example and investigate the convergence of the results with respect to the algorithmic parameters, focussing on the dynamical phase transition between homogeneous and inhomogeneous states, where convergence is relatively difficult to achieve. We discuss how the performance of the algorithm can be optimised, and how it can be efficiently exploited on parallel computing platforms.
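
    The reweight-and-resample loop at the heart of the cloning algorithm can be demonstrated on a much smaller system than the exclusion process: a persistent random walker whose current statistics are known exactly from the tilted transition matrix. A minimal sketch with an exact check; convergence in the population size and simulation time is precisely what the paper investigates:

```python
import numpy as np

def cloning_scgf(s, q=0.8, n_clones=2000, t_max=2000, seed=0):
    """Cloning estimate of psi(s) for the time-averaged current of a
    persistent walker that repeats its last step with probability q.
    Each clone is reweighted by exp(-s * jump); the population is then
    resampled in proportion to the weights (the selection step)."""
    rng = np.random.default_rng(seed)
    direction = rng.choice([-1, 1], size=n_clones)     # clone states
    log_psi = 0.0
    for _ in range(t_max):
        flip = rng.random(n_clones) >= q
        direction = np.where(flip, -direction, direction)
        weights = np.exp(-s * direction)
        log_psi += np.log(weights.mean())
        idx = rng.choice(n_clones, size=n_clones, p=weights / weights.sum())
        direction = direction[idx]                     # selection step
    return log_psi / t_max

# Exact check: largest eigenvalue of the tilted transition matrix.
s, q = 0.2, 0.8
T = np.array([[q * np.exp(-s), (1 - q) * np.exp(-s)],
              [(1 - q) * np.exp(s), q * np.exp(s)]])
print(cloning_scgf(s, q), np.log(np.linalg.eigvals(T).real.max()))
```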

  14. Evaluation of True Power Luminous Efficiency from Experimental Luminance Values

    NASA Astrophysics Data System (ADS)

    Tsutsui, Tetsuo; Yamamato, Kounosuke

    1999-05-01

    A method for obtaining the true external power luminous efficiency from experimentally obtained luminance in organic light-emitting diodes (LEDs) was demonstrated. Conventional two-layer organic LEDs with different electron-transport layer thicknesses were prepared. Spatial distributions of emission intensities were observed. Large deviations in both emission spectra and spatial emission patterns were observed when the electron-transport layer thickness was varied. The deviation of emission patterns from the standard Lambertian pattern was found to cause overestimations of power luminous efficiencies as large as 30%. A method for evaluating correction factors was proposed.

  15. Precision analysis for standard deviation measurements of immobile single fluorescent molecule images.

    PubMed

    DeSantis, Michael C; DeCenzo, Shawn H; Li, Je-Luen; Wang, Y M

    2010-03-29

    Standard deviation measurements of intensity profiles of stationary single fluorescent molecules are useful for studying axial localization, molecular orientation, and a fluorescence imaging system's spatial resolution. Here we report on the analysis of the precision of standard deviation measurements of intensity profiles of single fluorescent molecules imaged using an EMCCD camera. We have developed an analytical expression for the standard deviation measurement error of a single image which is a function of the total number of detected photons, the background photon noise, and the camera pixel size. The theoretical results agree well with the experimental, simulation, and numerical integration results. Using this expression, we show that single-molecule standard deviation measurements offer nanometer precision for a large range of experimental parameters.
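
    The quoted precision can be checked by simulating photon-limited images and repeating a moment-based estimate of the profile width. A one-dimensional pixelated sketch with hypothetical camera numbers (no EM-gain excess noise, which the paper's full treatment includes):

```python
import numpy as np
from scipy.stats import norm

def measured_sd(n_photons, true_sd_nm=130.0, pixel_nm=100.0,
                bg_photons=2.0, img_px=15, rng=None):
    """One simulated 1-D intensity profile of a fixed fluorophore
    (pixel-integrated Gaussian plus uniform background, Poisson noise);
    returns the second-moment estimate of the profile's SD.  Note that
    pixelation adds roughly pixel_nm**2 / 12 to the variance, which is
    not corrected here."""
    rng = rng or np.random.default_rng()
    edges = (np.arange(img_px + 1) - img_px / 2) * pixel_nm
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Expected photons per pixel from the Gaussian profile.
    frac = (norm.cdf(edges[1:], 0, true_sd_nm)
            - norm.cdf(edges[:-1], 0, true_sd_nm))
    counts = rng.poisson(n_photons * frac + bg_photons)
    signal = np.clip(counts - bg_photons, 0, None)   # background subtraction
    mu = (centers * signal).sum() / signal.sum()
    var = ((centers - mu) ** 2 * signal).sum() / signal.sum()
    return np.sqrt(var)

rng = np.random.default_rng(7)
sds = np.array([measured_sd(5000, rng=rng) for _ in range(500)])
print(sds.mean(), sds.std())    # precision of the SD measurement
```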

  16. Comparison and combination of "direct" and fragment based local correlation methods: Cluster in molecules and domain based local pair natural orbital perturbation and coupled cluster theories

    NASA Astrophysics Data System (ADS)

    Guo, Yang; Becker, Ute; Neese, Frank

    2018-03-01

    Local correlation theories have been developed in two main flavors: (1) "direct" local correlation methods apply local approximation to the canonical equations and (2) fragment based methods reconstruct the correlation energy from a series of smaller calculations on subsystems. The present work serves two purposes. First, we investigate the relative efficiencies of the two approaches using the domain-based local pair natural orbital (DLPNO) approach as the "direct" method and the cluster in molecule (CIM) approach as the fragment based approach. Both approaches are applied in conjunction with second-order many-body perturbation theory (MP2) as well as coupled-cluster theory with single-, double- and perturbative triple excitations [CCSD(T)]. Second, we have investigated the possible merits of combining the two approaches by performing CIM calculations with DLPNO methods serving as the method of choice for performing the subsystem calculations. Our cluster-in-molecule approach is closely related to but slightly deviates from approaches in the literature since we have avoided real space cutoffs. Moreover, the neglected distant pair correlations in the previous CIM approach are considered approximately. Six very large molecules (503-2380 atoms) were studied. At both MP2 and CCSD(T) levels of theory, the CIM and DLPNO methods show similar efficiency. However, DLPNO methods are more accurate for 3-dimensional systems. While we have found only little incentive for the combination of CIM with DLPNO-MP2, the situation is different for CIM-DLPNO-CCSD(T). This combination is attractive because (1) the better parallelization opportunities offered by CIM; (2) the methodology is less memory intensive than the genuine DLPNO-CCSD(T) method and, hence, allows for large calculations on more modest hardware; and (3) the methodology is applicable and efficient in the frequently met cases, where the largest subsystem calculation is too large for the canonical CCSD(T) method.

  17. Assessment of Stable Isotope Distribution in Complex Systems

    NASA Astrophysics Data System (ADS)

    He, Y.; Cao, X.; Wang, J.; Bao, H.

    2017-12-01

    Biomolecules in living organisms have the potential to approach chemical steady state and even apparent isotope equilibrium because enzymatic reactions are intrinsically reversible. If an apparent local equilibrium can be identified, enzymatic reversibility and its controlling factors may be quantified, which helps in understanding complex biochemical processes. Earlier research on isotope fractionation tends to focus on a specific process and to compare mostly two different chemical species. Using linear regression, a "thermodynamic order", which refers to correlated δ13C and 13β values, has been proposed by Galimov et al. to be present among many biomolecules. However, the concept of "thermodynamic order" they proposed and the approach they used have been questioned. Here, we propose that the deviation of a complex system from its equilibrium state can be rigorously described as a graph problem, as applied in discrete mathematics. The deviation of the isotope distribution from the equilibrium state, and apparent local isotope equilibrium among a subset of biomolecules, can be assessed using an apparent fractionation difference matrix (|Δα|). Applying the |Δα| matrix analysis to earlier published data on amino acids, we show the existence of apparent local equilibrium among different amino acids in potato and in a kind of green alga. The existence of apparent local equilibrium is in turn consistent with the notion that enzymatic reactions can be reversible even in living systems. The result also implies that previous emphasis on external carbon source intake may be misplaced when studying isotope distribution in physiology. In addition to the identification of local equilibrium among biomolecules, the difference matrix approach has the potential to explore chemical or isotope equilibrium states in extraterrestrial bodies, to distinguish living from non-living systems, and to classify living species. This approach will benefit from large amounts of systematic data and advanced pattern recognition techniques.

  18. Hessian matrix approach for determining error field sensitivity to coil deviations.

    DOE PAGES

    Zhu, Caoxiang; Hudson, Stuart R.; Lazerson, Samuel A.; ...

    2018-03-15

    The presence of error fields has been shown to degrade plasma confinement and drive instabilities. Error fields can arise from many sources, but are predominantly attributed to deviations in the coil geometry. In this paper, we introduce a Hessian matrix approach for determining error field sensitivity to coil deviations. A primary cost function used for designing stellarator coils, the surface integral of normalized normal field errors, was adopted to evaluate the deviation of the generated magnetic field from the desired magnetic field. The FOCUS code [Zhu et al., Nucl. Fusion 58(1):016008 (2018)] is utilized to provide fast and accurate calculations of the Hessian. The sensitivities of error fields to coil displacements are then determined by the eigenvalues of the Hessian matrix. A proof-of-principle example is given on a CNT-like configuration. We anticipate that this new method could provide information to avoid dominant coil misalignments and simplify coil designs for stellarators.

  20. Heterogeneous dynamics of ionic liquids: A four-point time correlation function approach

    NASA Astrophysics Data System (ADS)

    Liu, Jiannan; Willcox, Jon A. L.; Kim, Hyung J.

    2018-05-01

    Many ionic liquids show behavior similar to that of glassy systems, e.g., large and long-lasting deviations from Gaussian dynamics and clustering of "mobile" and "immobile" groups of ions. Herein a time-dependent four-point density correlation function, typically used to characterize glassy systems, is implemented for the ionic liquids choline acetate and 1-butyl-3-methylimidazolium acetate. Dynamic correlation beyond the first ionic solvation shell on the time scale of nanoseconds is found in the ionic liquids, revealing the cooperative nature of ion motions. The traditional solvent, acetonitrile, on the other hand, shows a much shorter correlation length-scale, which decays after a few picoseconds.
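
    The four-point susceptibility is computable from any ensemble of trajectories as the run-to-run variance of a self-overlap. A generic sketch (the paper evaluates it on MD trajectories of the ionic liquids, with the overlap cutoff tied to the ion size):

```python
import numpy as np

def chi4(trajectories, a=1.0):
    """Four-point dynamic susceptibility chi4(t) from trajectories of
    shape (n_runs, n_times, n_particles, 3): N times the variance,
    across runs, of the overlap Q(t) = fraction of particles that have
    moved less than `a` since t = 0.  Peaks in chi4 signal cooperative
    (heterogeneous) dynamics."""
    disp = np.linalg.norm(trajectories - trajectories[:, :1], axis=-1)
    q = (disp < a).mean(axis=-1)       # overlap per run and time
    n = trajectories.shape[2]
    return n * q.var(axis=0)           # chi4(t)

# Toy ensemble of freely diffusing particles; real input would be MD data.
rng = np.random.default_rng(5)
traj = np.cumsum(rng.normal(0, 0.1, (30, 200, 64, 3)), axis=1)
print(chi4(traj).max())
```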

  1. Probing high scale physics with top quarks at the Large Hadron Collider

    NASA Astrophysics Data System (ADS)

    Dong, Zhe

    With the Large Hadron Collider (LHC) running at the TeV scale, we expect to find deviations from the Standard Model in the experiments and to understand the origin of these deviations. Being the heaviest elementary particle observed so far, with a mass at the electroweak scale, the top quark is a powerful probe for new phenomena of high-scale physics at the LHC. We therefore concentrate on studying high-scale physics phenomena with top quark pair production or decay at the LHC. In this thesis, we study the discovery potential of string resonances decaying to the t/tbar final state and examine the possibility of observing baryon-number-violating top-quark production or decay at the LHC. We point out that string resonances for a string scale below 4 TeV can be detected via the t/tbar channel, by reconstructing the center-of-mass frame kinematics of the resonances from either the t/tbar semi-leptonic decay or recent techniques for identifying highly boosted tops. For the study of baryon-number-violating processes, using a model-independent effective approach and focusing on operators of minimal mass dimension, we find that the corresponding effective coefficients could be directly probed at the LHC already with an integrated luminosity of 1 inverse femtobarn at 7 TeV, and further constrained with 30 (100) inverse femtobarns at 7 (14) TeV.

  2. Climate change enhances interannual variability of the Nile river flow

    NASA Astrophysics Data System (ADS)

    Siam, Mohamed S.; Eltahir, Elfatih A. B.

    2017-04-01

    The human population living in the Nile basin countries is projected to double by 2050, approaching one billion. The increase in water demand associated with this burgeoning population will put significant stress on the available water resources. Potential changes in the flow of the Nile River as a result of climate change may further strain this critical situation. Here, we present empirical evidence from observations and consistent projections from climate model simulations suggesting that the standard deviation describing interannual variability of total Nile flow could increase by 50% (+/-35%) (multi-model ensemble mean +/- 1 standard deviation) in the twenty-first century compared to the twentieth century. We attribute the relatively large change in interannual variability of the Nile flow to projected increases in future occurrences of El Niño and La Niña events and to observed teleconnection between the El Niño-Southern Oscillation and Nile River flow. Adequacy of current water storage capacity and plans for additional storage capacity in the basin will need to be re-evaluated given the projected enhancement of interannual variability in the future flow of the Nile river.

  3. Deviation pattern approach for optimizing perturbative terms of QCD renormalization group invariant observables

    NASA Astrophysics Data System (ADS)

    Khellat, M. R.; Mirjalili, A.

    2017-03-01

    We first consider the idea of renormalization group-induced estimates, in the context of optimization procedures, for the Brodsky-Lepage-Mackenzie approach to generate higher-order contributions to QCD perturbative series. Secondly, we develop the deviation pattern approach (DPA), in which, through a series of comparisons between lower-order RG-induced estimates and the corresponding analytical calculations, one can modify higher-order RG-induced estimates. Finally, using the normal estimation procedure and DPA, we obtain estimates of the α_s^4 corrections for the Bjorken sum rule of polarized deep-inelastic scattering and for the non-singlet contribution to the Adler function.

  4. Reliable Viscosity Calculation from Equilibrium Molecular Dynamics Simulations: A Time Decomposition Method.

    PubMed

    Zhang, Yong; Otani, Akihito; Maginn, Edward J

    2015-08-11

    Equilibrium molecular dynamics is often used in conjunction with a Green-Kubo integral of the pressure tensor autocorrelation function to compute the shear viscosity of fluids. This approach is computationally expensive and is subject to a large amount of variability because the plateau region of the Green-Kubo integral is difficult to identify unambiguously. Here, we propose a time decomposition approach for computing the shear viscosity using the Green-Kubo formalism. Instead of one long trajectory, multiple independent trajectories are run and the Green-Kubo relation is applied to each trajectory. The averaged running integral as a function of time is fit to a double-exponential function with a weighting function derived from the standard deviation of the running integrals. Such a weighting function minimizes the uncertainty of the estimated shear viscosity and provides an objective means of estimating the viscosity. While the formal Green-Kubo integral requires an integration to infinite time, we suggest an integration cutoff time tcut, which can be determined by the relative values of the running integral and the corresponding standard deviation. This approach for computing the shear viscosity can be easily automated and used in computational screening studies where human judgment and intervention in the data analysis are impractical. The method has been applied to the calculation of the shear viscosity of a relatively low-viscosity liquid, ethanol, and relatively high-viscosity ionic liquid, 1-n-butyl-3-methylimidazolium bis(trifluoromethane-sulfonyl)imide ([BMIM][Tf2N]), over a range of temperatures. These test cases show that the method is robust and yields reproducible and reliable shear viscosity values.
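
    The procedure automates naturally: average the running Green-Kubo integrals over independent trajectories, use the across-trajectory standard deviation as fit weights, and take the long-time limit of a double-exponential fit as the viscosity. A sketch on synthetic data; the double-exponential parameterization follows the abstract's description, and the cutoff is simplified here to a fixed index:

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_viscosity(t, integrals, t_cut_index):
    """Time-decomposition estimate: fit the trajectory-averaged running
    Green-Kubo integral, weighted by its across-trajectory standard
    deviation, to a double exponential; the t -> infinity limit of the
    fit is the viscosity estimate."""
    mean = integrals.mean(axis=0)           # average over trajectories
    sigma = integrals.std(axis=0, ddof=1)   # spread -> fit weights
    sl = slice(1, t_cut_index)              # skip t = 0, stop at t_cut

    def model(t, a, alpha, tau1, tau2):
        return (a * alpha * tau1 * (1 - np.exp(-t / tau1))
                + a * (1 - alpha) * tau2 * (1 - np.exp(-t / tau2)))

    p, _ = curve_fit(model, t[sl], mean[sl], sigma=sigma[sl],
                     p0=[1.0, 0.5, 2.0, 20.0],
                     bounds=([0, 0, 1e-3, 1e-3], [np.inf, 1, np.inf, np.inf]))
    a, alpha, tau1, tau2 = p
    return a * (alpha * tau1 + (1 - alpha) * tau2)   # plateau value

# Synthetic running integrals from 20 independent "trajectories"
# (true plateau = 2.0); real input would be the pressure-tensor
# autocorrelation integrals of independent MD runs.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 50.0, 500)
plateau = 2.0 * (1 - np.exp(-t / 5.0))
integrals = plateau + np.cumsum(rng.normal(0, 0.02, (20, t.size)), axis=1)
print(fit_viscosity(t, integrals, t_cut_index=400))   # ~2.0
```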

  5. Structural Refinement of Membrane Proteins by Restrained Molecular Dynamics and Solvent Accessibility Data

    PubMed Central

    Sompornpisut, Pornthep; Roux, Benoît; Perozo, Eduardo

    2008-01-01

    We present an approach for incorporating solvent accessibility data from electron paramagnetic resonance experiments in the structural refinement of membrane proteins through restrained molecular dynamics simulations. The restraints have been parameterized from oxygen (ΠO2) and nickel-ethylenediaminediacetic acid (ΠNiEdda) collision frequencies, as indicators of lipid or aqueous exposed spin-label sites. These are enforced through interactions between a pseudoatom representation of the covalently attached Nitroxide spin-label and virtual “solvent” particles corresponding to O2 and NiEdda in the surrounding environment. Interactions were computed using an empirical potential function, where the parameters have been optimized to account for the different accessibilities of the spin-label pseudoatoms to the surrounding environment. This approach, “pseudoatom-driven solvent accessibility refinement”, was validated by refolding distorted conformations of the Streptomyces lividans potassium channel (KcsA), corresponding to a range of 2–30 Å root mean-square deviations away from the native structure. Molecular dynamics simulations based on up to 58 electron paramagnetic resonance restraints derived from spin-label mutants were able to converge toward the native structure within 1–3 Å root mean-square deviations with minimal computational cost. The use of energy-based ranking and structure similarity clustering as selection criteria helped in the convergence and identification of correctly folded structures from a large number of simulations. This approach can be applied to a variety of integral membrane protein systems, regardless of oligomeric state, and should be particularly useful in calculating conformational changes from a known reference crystal structure. PMID:18676641

  6. Loss aversion, large deviation preferences and optimal portfolio weights for some classes of return processes

    NASA Astrophysics Data System (ADS)

    Duffy, Ken; Lobunets, Olena; Suhov, Yuri

    2007-05-01

    We propose a model of a loss averse investor who aims to maximize his expected wealth under certain constraints. The constraints are that he avoids, with high probability, incurring an (suitably defined) unacceptable loss. The methodology employed comes from the theory of large deviations. We explore a number of fundamental properties of the model and illustrate its desirable features. We demonstrate its utility by analyzing assets that follow some commonly used financial return processes: Fractional Brownian Motion, Jump Diffusion, Variance Gamma and Truncated Lévy.

  7. Gait analysis in children with cerebral palsy.

    PubMed

    Armand, Stéphane; Decoulon, Geraldo; Bonnefoy-Mazure, Alice

    2016-12-01

    Cerebral palsy (CP) children present complex and heterogeneous motor disorders that cause gait deviations. Clinical gait analysis (CGA) is needed to identify, understand and support the management of gait deviations in CP. CGA assesses a large amount of quantitative data concerning patients' gait characteristics, such as video, kinematics, kinetics, electromyography and plantar pressure data. Common gait deviations in CP can be grouped into the gait patterns of spastic hemiplegia (drop foot, equinus with different knee positions) and spastic diplegia (true equinus, jump, apparent equinus and crouch) to facilitate communication. However, gait deviations in CP tend to be a continuum of deviations rather than well-delineated groups. To interpret CGA, it is necessary to link gait deviations to clinical impairments and to distinguish primary gait deviations from compensatory strategies. CGA does not tell us how to treat a CP patient, but can provide objective identification of gait deviations and further the understanding of gait deviations. Numerous treatment options are available to manage gait deviations in CP. Generally, treatments strive to limit secondary deformations, re-establish the lever arm function and preserve muscle strength. Additional roles of CGA are to better understand the effects of treatments on gait deviations. Cite this article: Armand S, Decoulon G, Bonnefoy-Mazure A. Gait analysis in children with cerebral palsy. EFORT Open Rev 2016;1:448-460. DOI: 10.1302/2058-5241.1.000052.

  8. CDI Sensitivity and Crosstrack Error on Nonprecision Approaches

    DOT National Transportation Integrated Search

    1991-01-01

    This study was conducted to determine the influence of course deviation indicator (CDI) sensitivity on pilot tracking error during nonprecision approaches. Twelve pilots flew an instrumented single-engine airplane on 144 approaches at six diffe...

  9. Chapter 17: Bioimage Informatics for Systems Pharmacology

    PubMed Central

    Li, Fuhai; Yin, Zheng; Jin, Guangxu; Zhao, Hong; Wong, Stephen T. C.

    2013-01-01

    Recent advances in automated high-resolution fluorescence microscopy and robotic handling have made the systematic and cost effective study of diverse morphological changes within a large population of cells possible under a variety of perturbations, e.g., drugs, compounds, metal catalysts, RNA interference (RNAi). Cell population-based studies deviate from conventional microscopy studies on a few cells, and could provide stronger statistical power for drawing experimental observations and conclusions. However, it is challenging to manually extract and quantify phenotypic changes from the large amounts of complex image data generated. Thus, bioimage informatics approaches are needed to rapidly and objectively quantify and analyze the image data. This paper provides an overview of the bioimage informatics challenges and approaches in image-based studies for drug and target discovery. The concepts and capabilities of image-based screening are first illustrated by a few practical examples investigating different kinds of phenotypic changes caused by drugs, compounds, or RNAi. The bioimage analysis approaches, including object detection, segmentation, and tracking, are then described. Subsequently, the quantitative features, phenotype identification, and multidimensional profile analysis for profiling the effects of drugs and targets are summarized. Moreover, a number of publicly available software packages for bioimage informatics are listed for further reference. It is expected that this review will help readers, including those without bioimage informatics expertise, understand the capabilities, approaches, and tools of bioimage informatics and apply them to advance their own studies. PMID:23633943

  10. Dissecting gene-environment interactions: A penalized robust approach accounting for hierarchical structures.

    PubMed

    Wu, Cen; Jiang, Yu; Ren, Jie; Cui, Yuehua; Ma, Shuangge

    2018-02-10

    Identification of gene-environment (G × E) interactions associated with disease phenotypes has posed a great challenge in high-throughput cancer studies. The existing marginal identification methods have suffered from not being able to accommodate the joint effects of a large number of genetic variants, while some of the joint-effect methods have been limited by failing to respect the "main effects, interactions" hierarchy, by ignoring data contamination, and by using inefficient selection techniques under complex structural sparsity. In this article, we develop an effective penalization approach to identify important G × E interactions and main effects, which can account for the hierarchical structures of the 2 types of effects. Possible data contamination is accommodated by adopting the least absolute deviation loss function. The advantage of the proposed approach over the alternatives is convincingly demonstrated in both simulation and a case study on lung cancer prognosis with gene expression measurements and clinical covariates under the accelerated failure time model. Copyright © 2017 John Wiley & Sons, Ltd.

  11. Combining control input with flight path data to evaluate pilot performance in transport aircraft.

    PubMed

    Ebbatson, Matt; Harris, Don; Huddlestone, John; Sears, Rodney

    2008-11-01

    When deriving an objective assessment of piloting performance from flight data records, it is common to employ metrics which purely evaluate errors in flight path parameters. The adequacy of pilot performance is evaluated from the flight path of the aircraft. However, in large jet transport aircraft these measures may be insensitive and require supplementing with frequency-based measures of control input parameters. Flight path and control input data were collected from pilots undertaking a jet transport aircraft conversion course during a series of symmetric and asymmetric approaches in a flight simulator. The flight path data were analyzed for deviations around the optimum flight path while flying an instrument landing approach. Manipulation of the flight controls was subject to analysis using a series of power spectral density measures. The flight path metrics showed no significant differences in performance between the symmetric and asymmetric approaches. However, control input frequency domain measures revealed that the pilots employed highly different control strategies in the pitch and yaw axes. The results demonstrate that to evaluate pilot performance fully in large aircraft, it is necessary to employ performance metrics targeted at both the outer control loop (flight path) and the inner control loop (flight control) parameters in parallel, evaluating both the product and process of a pilot's performance.

  12. An Ambulatory Method of Identifying Anterior Cruciate Ligament Reconstructed Gait Patterns

    PubMed Central

    Patterson, Matthew R.; Delahunt, Eamonn; Sweeney, Kevin T.; Caulfield, Brian

    2014-01-01

    The use of inertial sensors to characterize pathological gait has traditionally been based on the calculation of temporal and spatial gait variables from inertial sensor data. This approach has proved successful in the identification of gait deviations in populations where substantial differences from normal gait patterns exist, such as in Parkinsonian gait. However, it is not currently clear if this approach could identify more subtle gait deviations, such as those associated with musculoskeletal injury. This study investigates whether additional analysis of inertial sensor data, based on quantification of gyroscope features of interest, would provide further discriminant capability in this regard. The tested cohort consisted of a group of anterior cruciate ligament reconstructed (ACL-R) females and a group of non-injured female controls, each of whom performed ten walking trials. Gait performance was measured simultaneously using inertial sensors and an optoelectronic marker-based system. The ACL-R group displayed kinematic and kinetic deviations from the control group, but no temporal or spatial deviations. This study demonstrates that quantification of gyroscope features can successfully identify changes associated with ACL-R gait, which was not possible using spatial or temporal variables. This finding may also have a role in other clinical applications where small gait deviations exist. PMID:24451464

  13. In Terms of the Logarithmic Mean Annual Seismicity Rate and Its Standard Deviation to Present the Gutenberg-Richter Relation

    NASA Astrophysics Data System (ADS)

    Chen, K. P.; Chang, W. Y.; Tsai, Y. B.

    2016-12-01

    The main purpose of this study is to apply an innovative approach to assess the median annual seismicity rates and their dispersions for Taiwan earthquakes in different depth ranges. This approach explicitly represents the Gutenberg-Richter (G-R) relation in terms of both the logarithmic mean annual seismicity rate and its standard deviation, instead of just the arithmetic mean. We use the high-quality seismicity data obtained by the Institute of Earth Sciences (IES) and the Central Weather Bureau (CWB) in an earthquake catalog with homogenized moment magnitudes from 1975 to 2014 for our study. The selected data set is shown to be complete for Mw > 3.0. We first use it to illustrate the merits of our new approach for dampening the influence of spuriously large or small event numbers in individual years on the determination of the median annual seismicity rate and its standard deviation. We further show that the logarithmic annual seismicity rates indeed possess a well-behaved lognormal distribution. The final results are summarized as follows: log10 N = 5.75 - 0.90 Mw ± (0.245 - 0.01 Mw) for focal depth 0-300 km; log10 N = 5.78 - 0.94 Mw ± (0.195 + 0.01 Mw) for focal depth 0-35 km; log10 N = 4.72 - 0.89 Mw ± (-0.075 + 0.075 Mw) for focal depth 35-70 km; and log10 N = 4.69 - 0.88 Mw ± (-0.47 + 0.16 Mw) for focal depth 70-300 km. The above results show distinctly different values for the parameters a and b in the G-R relations for Taiwan earthquakes in different depth ranges. These analytical equations can be readily used for comprehensive probabilistic seismic hazard assessment. Furthermore, a numerical table of the corresponding median annual seismicity rates and their upper and lower bounds at the median ± one standard deviation levels, as calculated from the above analytical equations, is presented at the end. This table offers an overall glance of the estimated median annual seismicity rates and their dispersions for Taiwan earthquakes of various magnitudes and focal depths. It is interesting to point out that the seismicity rate of crustal earthquakes, which tend to contribute the most hazard, accounts for only about 74% of the overall seismicity rate in Taiwan. Accordingly, direct use of the entire earthquake catalog without differentiating by focal depth may result in substantial overestimates of potential seismic hazards.
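
    As a worked example, the quoted relations can be evaluated directly. The sketch below computes the median annual rate of events of a given Mw and its ± one-standard-deviation bounds from the shallow-depth (0-35 km) equation; the magnitudes chosen are arbitrary.

    ```python
    # Worked example: evaluating the 0-35 km relation quoted above,
    # log10 N = 5.78 - 0.94 Mw +/- (0.195 + 0.01 Mw),
    # for the median annual rate and its +/- one-standard-deviation bounds.
    import numpy as np

    a, b = 5.78, 0.94
    for Mw in (4.0, 5.0, 6.0):
        mid = a - b * Mw                  # log10 of the median annual rate
        s = 0.195 + 0.01 * Mw             # standard deviation of log10 N
        N_med, N_lo, N_hi = 10**mid, 10**(mid - s), 10**(mid + s)
        print(f"Mw {Mw}: median {N_med:.2f}/yr, range [{N_lo:.2f}, {N_hi:.2f}]")
    ```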

  14. Tomographic PIV: particles versus blobs

    NASA Astrophysics Data System (ADS)

    Champagnat, Frédéric; Cornic, Philippe; Cheminet, Adam; Leclaire, Benjamin; Le Besnerais, Guy; Plyer, Aurélien

    2014-08-01

    We present an alternative approach to tomographic particle image velocimetry (tomo-PIV) that seeks to recover nearly single voxel particles rather than blobs of extended size. The baseline of our approach is a particle-based representation of image data. An appropriate discretization of this representation yields an original linear forward model with a weight matrix built with specific samples of the system’s point spread function (PSF). Such an approach requires only a few voxels to explain the image appearance, therefore it favors much more sparsely reconstructed volumes than classic tomo-PIV. The proposed forward model is general and flexible and can be embedded in a classical multiplicative algebraic reconstruction technique (MART) or a simultaneous multiplicative algebraic reconstruction technique (SMART) inversion procedure. We show, using synthetic PIV images and by way of a large exploration of the generating conditions and a variety of performance metrics, that the model leads to better results than the classical tomo-PIV approach, in particular in the case of seeding densities greater than 0.06 particles per pixel and of PSFs characterized by a standard deviation larger than 0.8 pixels.
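
    For reference, the MART update that such a forward model can be plugged into is short enough to sketch. The code below implements the textbook multiplicative update with a generic random weight matrix; it does not build the PSF-sampled weights that are the paper's actual contribution.

    ```python
    # Sketch of the multiplicative algebraic reconstruction technique (MART).
    # W is a generic projection weight matrix here, not the PSF-sampled matrix.
    import numpy as np

    def mart(W, I, n_iter=20, mu=0.9, eps=1e-12):
        """W: (n_pixels, n_voxels) weights; I: (n_pixels,) recorded intensities."""
        E = np.ones(W.shape[1])                      # initial voxel intensities
        for _ in range(n_iter):
            for i in range(W.shape[0]):              # loop over pixel equations
                proj = W[i] @ E                      # current model prediction
                if proj > eps and I[i] > eps:
                    # multiplicative correction, damped by mu, weighted by W[i]
                    E *= (I[i] / proj) ** (mu * W[i])
        return E

    rng = np.random.default_rng(2)
    W = rng.random((50, 200)) * (rng.random((50, 200)) < 0.1)   # sparse weights
    truth = (rng.random(200) < 0.05).astype(float)              # sparse "particles"
    I = W @ truth
    print(np.linalg.norm(W @ mart(W, I) - I))                   # projection residual
    ```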

  15. Comparative analysis of the processing accuracy of high strength metal sheets by AWJ, laser and plasma

    NASA Astrophysics Data System (ADS)

    Radu, M. C.; Schnakovszky, C.; Herghelegiu, E.; Tampu, N. C.; Zichil, V.

    2016-08-01

    Experimental tests were carried out on two high-strength steel materials (Ramor 400 and Ramor 550). Dimensional accuracy was quantified by measuring the deviations of selected geometric parameters of the part (two lengths and two radii). It was found that in the case of Ramor 400 steel, at the jet inlet, the deviations of the part radii are quite small for all three analysed processes, whereas for the linear dimensions the deviations are small only in the case of laser cutting. At the jet outlet, the deviations increased slightly compared with those obtained at the jet inlet, for both materials and for all three processes. For Ramor 550 steel, at the jet inlet the deviations of the part radii are very small for AWJ and laser cutting but larger for plasma cutting. At the jet outlet, the deviations of the part radii are very small for all processes; for the linear dimensions, very small deviations were obtained only with laser processing, the other two processes leading to very large deviations.

  16. Revisiting the time until fixation of a neutral mutant in a finite population - A coalescent theory approach.

    PubMed

    Greenbaum, Gili

    2015-09-07

    Evaluation of the time scale of the fixation of neutral mutations is crucial to the theoretical understanding of the role of neutral mutations in evolution. Diffusion approximations of the Wright-Fisher model are most often used to derive analytic formulations of genetic drift, as well as for the time scales of the fixation of neutral mutations. These approximations require a set of assumptions, most notably that genetic drift is a stochastic process in a continuous allele-frequency space, an assumption appropriate for large populations. Here equivalent approximations are derived using a coalescent theory approach which relies on a different set of assumptions than the diffusion approach, and adopts a discrete allele-frequency space. Solutions for the mean and variance of the time to fixation of a neutral mutation derived from the two approaches converge for large populations but slightly differ for small populations. A Markov chain analysis of the Wright-Fisher model for small populations is used to evaluate the solutions obtained, showing that both the mean and the variance are better approximated by the coalescent approach. The coalescence approximation represents a tighter upper-bound for the mean time to fixation than the diffusion approximation, while the diffusion approximation and coalescence approximation form an upper and lower bound, respectively, for the variance. The converging solutions and the small deviations of the two approaches strongly validate the use of diffusion approximations, but suggest that coalescent theory can provide more accurate approximations for small populations. Copyright © 2015 Elsevier Ltd. All rights reserved.
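
    A minimal simulation makes the comparison concrete. The sketch below measures the conditional fixation time of a single new neutral mutant in a haploid Wright-Fisher population and compares it with the classical diffusion-limit value of about 2N generations; the population size and replicate count are illustrative choices, not the paper's settings.

    ```python
    # Sketch: direct Wright-Fisher simulation of the conditional fixation time of
    # one new neutral mutant, against the diffusion-limit value (~2N generations
    # for a haploid population of size N).
    import numpy as np

    rng = np.random.default_rng(3)
    N = 100                        # haploid population size (small on purpose)
    times = []
    while len(times) < 200:        # collect 200 fixation events
        k, t = 1, 0                # one mutant copy, generation counter
        while 0 < k < N:
            k = rng.binomial(N, k / N)   # binomial resampling of allele count
            t += 1
        if k == N:                 # condition on fixation (most runs are lost)
            times.append(t)

    print(f"simulated mean fixation time: {np.mean(times):.1f} generations")
    print(f"diffusion approximation:      {2 * N} generations")
    ```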

  17. Use of Standard Deviations as Predictors in Models Using Large-Scale International Data Sets

    ERIC Educational Resources Information Center

    Austin, Bruce; French, Brian; Adesope, Olusola; Gotch, Chad

    2017-01-01

    Measures of variability are successfully used in predictive modeling in research areas outside of education. This study examined how standard deviations can be used to address research questions not easily addressed using traditional measures such as group means based on index variables. Student survey data were obtained from the Organisation for…

  18. Simulation of reflecting surface deviations of centimeter-band parabolic space radiotelescope (SRT) with the large-size mirror

    NASA Astrophysics Data System (ADS)

    Kotik, A.; Usyukin, V.; Vinogradov, I.; Arkhipov, M.

    2017-11-01

    The realization of astrophysical research requires the development of highly sensitive centimeter-band parabolic space radiotelescopes (SRT) with large mirrors. Structurally, an SRT with a mirror larger than 10 m can be realized as a deployable rigid structure. Mesh structures of this size do not provide the reflecting-surface accuracy necessary for centimeter-band observations. Such a telescope with a 10 m diameter mirror is currently being developed in Russia within the framework of the "SPECTR-R" program. The external dimensions of the telescope exceed the size of the existing thermo-vacuum chambers used to verify the SRT reflecting-surface accuracy under the action of space environment factors. Numerical simulation therefore becomes the basis on which the adopted designs must be validated. Such modeling should be based on experimental characterization of the basic structural materials and elements of the future reflector. This article considers computational modeling of the reflecting-surface deviations of a large deployable centimeter-band space reflector during its orbital operation. The factors that determine the deviations are analyzed, both deterministic (temperature fields) and non-deterministic (telescope manufacturing and installation faults; deformations caused by the behavior of composite materials in space). A finite-element model and a set of methods are developed that allow computational modeling of the reflecting-surface deviations caused by all of these factors, taking into account the correction of deviations by the spacecraft orientation system. Modeling results for two operating modes (orientations relative to the Sun) of the SRT are presented.

  19. Thermal Texture Selection and Correction for Building Facade Inspection Based on Thermal Radiant Characteristics

    NASA Astrophysics Data System (ADS)

    Lin, D.; Jarzabek-Rychard, M.; Schneider, D.; Maas, H.-G.

    2018-05-01

    An automatic building façade thermal texture mapping approach, using uncooled thermal camera data, is proposed in this paper. First, a shutter-less radiometric thermal camera calibration method is implemented to remove the large offset deviations caused by the changing ambient environment. Then, a 3D façade model is generated from an RGB image sequence using structure-from-motion (SfM) techniques. Subsequently, for each triangle in the 3D model, the optimal texture is selected by taking into consideration the local image scale, object incident angle, and image viewing angle, as well as occlusions. Afterwards, the selected textures are corrected using thermal radiant characteristics. Finally, a Gauss filter, which outperforms the voted-texture strategy in smoothing seams, is applied, helping for instance to reduce the false alarm rate in façade thermal leakage detection. Our approach is evaluated on a building row façade located in Dresden, Germany.

  20. Do Hypervolumes Have Holes?

    PubMed

    Blonder, Benjamin

    2016-04-01

    Hypervolumes are used widely to conceptualize niches and trait distributions for both species and communities. Some hypervolumes are expected to be convex, with boundaries defined by only upper and lower limits (e.g., fundamental niches), while others are expected to be maximal, with boundaries defined by the limits of available space (e.g., potential niches). However, observed hypervolumes (e.g., realized niches) could also have holes, defined as unoccupied hyperspace representing deviations from these expectations that may indicate unconsidered ecological or evolutionary processes. Detecting holes in more than two dimensions has to date not been possible. I develop a mathematical approach, implemented in the hypervolume R package, to infer holes in large and high-dimensional data sets. As a demonstration analysis, I assess evidence for vacant niches in a Galapagos finch community on Isabela Island. These mathematical concepts and software tools for detecting holes provide approaches for addressing contemporary research questions across ecology and evolutionary biology.

  1. A test of inflated zeros for Poisson regression models.

    PubMed

    He, Hua; Zhang, Hui; Ye, Peng; Tang, Wan

    2017-01-01

    Excessive zeros are common in practice and may cause overdispersion and invalidate inference when fitting Poisson regression models. There is a large body of literature on zero-inflated Poisson models. However, methods for testing whether there are excessive zeros are less well developed. The Vuong test comparing a Poisson and a zero-inflated Poisson model is commonly applied in practice. However, the type I error of the test often deviates seriously from the nominal level, casting serious doubt on the validity of the test in such applications. In this paper, we develop a new approach for testing inflated zeros under the Poisson model. Unlike the Vuong test for inflated zeros, our method does not require a zero-inflated Poisson model to perform the test. Simulation studies show that, when compared with the Vuong test, our approach is not only better at controlling the type I error rate but also yields more power.
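
    The underlying question can be illustrated without the authors' test statistic. The sketch below uses a simple parametric bootstrap, not the method of the paper, to ask whether the observed number of zeros is excessive under a fitted Poisson model; the contamination rate and sample size are arbitrary.

    ```python
    # Illustration only (not the authors' test): a parametric-bootstrap check of
    # whether the observed zero count is excessive under a fitted Poisson model.
    import numpy as np

    rng = np.random.default_rng(4)
    y = rng.poisson(1.2, size=500)
    y[rng.random(500) < 0.15] = 0          # contaminate with extra zeros

    lam_hat = y.mean()                     # MLE of the Poisson rate
    obs_zeros = np.sum(y == 0)
    boot = [np.sum(rng.poisson(lam_hat, size=y.size) == 0) for _ in range(2000)]
    p_value = np.mean(np.array(boot) >= obs_zeros)
    print(f"observed zeros: {obs_zeros}, bootstrap p-value for excess zeros: {p_value:.3f}")
    ```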

  2. Large-deviation properties of Brownian motion with dry friction.

    PubMed

    Chen, Yaming; Just, Wolfram

    2014-10-01

    We investigate piecewise-linear stochastic models with regard to the probability distribution of functionals of the stochastic processes, a question that occurs frequently in large deviation theory. The functionals that we are looking into in detail are related to the time a stochastic process spends at a phase space point or in a phase space region, as well as to the motion with inertia. For a Langevin equation with discontinuous drift, we extend the so-called backward Fokker-Planck technique for non-negative support functionals to arbitrary support functionals, to derive explicit expressions for the moments of the functional. Explicit solutions for the moments and for the distribution of the so-called local time, the occupation time, and the displacement are derived for the Brownian motion with dry friction, including quantitative measures to characterize deviation from Gaussian behavior in the asymptotic long time limit.

  3. Determination of real machine-tool settings and minimization of real surface deviation by computerized inspection

    NASA Technical Reports Server (NTRS)

    Litvin, Faydor L.; Kuan, Chihping; Zhang, YI

    1991-01-01

    A numerical method is developed for the minimization of deviations of real tooth surfaces from the theoretical ones. The deviations are caused by errors of manufacturing, errors of installment of machine-tool settings and distortion of surfaces by heat-treatment. The deviations are determined by coordinate measurements of gear tooth surfaces. The minimization of deviations is based on the proper correction of initially applied machine-tool settings. The contents of the accomplished research project cover the following topics: (1) description of the principle of coordinate measurements of gear tooth surfaces; (2) derivation of theoretical tooth surfaces (with examples of surfaces of hypoid gears and references for spiral bevel gears); (3) determination of the reference point and the grid; (4) determination of the deviations of real tooth surfaces at the points of the grid; and (5) determination of required corrections of machine-tool settings for minimization of deviations. The procedure for minimization of deviations is based on numerical solution of an overdetermined system of n linear equations in m unknowns (m ≪ n), where n is the number of points of measurements and m is the number of parameters of applied machine-tool settings to be corrected. The developed approach is illustrated with numerical examples.
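
    The last step, correcting m settings from n measured deviations with m ≪ n, is a plain linear least-squares problem. The sketch below is a generic illustration with a random placeholder sensitivity matrix, not the gear-geometry model of the paper.

    ```python
    # Sketch: the core numerical step described above, an overdetermined linear
    # system relating m setting corrections to n >> m measured surface deviations,
    # solved in the least-squares sense. J here is random placeholder data.
    import numpy as np

    rng = np.random.default_rng(11)
    n, m = 45, 6                        # n measurement points, m settings
    J = rng.normal(size=(n, m))         # sensitivities d(deviation)/d(setting)
    true_corr = rng.normal(size=m)
    deviations = J @ true_corr + rng.normal(0, 0.01, n)   # measured deviations

    corr, residuals, rank, _ = np.linalg.lstsq(J, deviations, rcond=None)
    print("estimated corrections:", np.round(corr, 3))
    ```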

  4. Large Deviations in Weakly Interacting Boundary Driven Lattice Gases

    NASA Astrophysics Data System (ADS)

    van Wijland, Frédéric; Rácz, Zoltán

    2005-01-01

    One-dimensional, boundary-driven lattice gases with local interactions are studied in the weakly interacting limit. The density profiles and the correlation functions are calculated to first order in the interaction strength for zero-range and short-range processes differing only in the specifics of the detailed-balance dynamics. Furthermore, the effective free-energy (large-deviation function) and the integrated current distribution are also found to this order. From the former, we find that the boundary drive generates long-range correlations only for the short-range dynamics while the latter provides support to an additivity principle recently proposed by Bodineau and Derrida.

  5. Current fluctuations in periodically driven systems

    NASA Astrophysics Data System (ADS)

    Barato, Andre C.; Chetrite, Raphael

    2018-05-01

    Small nonequilibrium systems driven by an external periodic protocol can be described by Markov processes with time-periodic transition rates. In general, current fluctuations in such small systems are large and may play a crucial role. We develop a theoretical formalism to evaluate the rate of such large deviations in periodically driven systems. We show that the scaled cumulant generating function that characterizes current fluctuations is given by a maximal Floquet exponent. Comparing deterministic protocols with stochastic protocols, we show that, with respect to large deviations, systems driven by a stochastic protocol with an infinitely large number of jumps are equivalent to systems driven by deterministic protocols. Our results are illustrated with three case studies: a two-state model for a heat engine, a three-state model for a molecular pump, and a biased random walk with a time-periodic affinity.
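
    The Floquet characterization lends itself to a direct numerical check. The sketch below computes the scaled cumulant generating function of a jump current for a two-state process with time-periodic rates as (1/T) log of the dominant eigenvalue of the monodromy matrix of the tilted generator; the rates and the counted transition are illustrative assumptions, not the paper's case studies.

    ```python
    # Sketch: SCGF of a jump current in a two-state Markov process with periodic
    # rates, computed as a maximal Floquet exponent of the s-tilted generator.
    import numpy as np
    from scipy.linalg import expm

    T = 1.0                                      # period of the driving protocol

    def tilted_generator(t, s):
        w01 = 2.0 + np.sin(2 * np.pi * t / T)    # rate 0 -> 1 (time-periodic)
        w10 = 1.0 + 0.5 * np.cos(2 * np.pi * t / T)
        # Count jumps 0 -> 1: bias that off-diagonal entry by exp(s).
        return np.array([[-w01, w10],
                         [w01 * np.exp(s), -w10]])

    def scgf(s, n_steps=400):
        dt = T / n_steps
        M = np.eye(2)                            # monodromy (period propagator)
        for k in range(n_steps):
            M = expm(tilted_generator((k + 0.5) * dt, s) * dt) @ M
        return np.log(np.max(np.real(np.linalg.eigvals(M)))) / T

    print(scgf(0.0))   # ~0: normalization of probability at s = 0
    print(scgf(0.2))   # positive tilt probes atypically large currents
    ```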

  6. Fuzzy Random λ-Mean SAD Portfolio Selection Problem: An Ant Colony Optimization Approach

    NASA Astrophysics Data System (ADS)

    Thakur, Gour Sundar Mitra; Bhattacharyya, Rupak; Mitra, Swapan Kumar

    2010-10-01

    To reach an investment goal, one has to select a combination of securities among different portfolios containing large numbers of securities. The past records of each security alone do not guarantee the future return. As there are many uncertain factors which directly or indirectly influence the stock market, and there are also some newer stock markets which do not have enough historical data, experts' expectations and experience must be combined with the past records to generate an effective portfolio selection model. In this paper the return of a security is assumed to be a Fuzzy Random Variable Set (FRVS), where returns are sets of random numbers which are in turn fuzzy numbers. A new λ-Mean Semi Absolute Deviation (λ-MSAD) portfolio selection model is developed. The subjective opinions of the investors on the rate of return of each security are taken into consideration by introducing a pessimistic-optimistic parameter vector λ. The λ-MSAD model is preferred as it uses the absolute deviation of the rate of return of a portfolio instead of the variance as the measure of risk. As this model can be reduced to a Linear Programming Problem (LPP), it can be solved much faster than quadratic programming problems. Ant Colony Optimization (ACO), a paradigm for designing meta-heuristic algorithms for combinatorial optimization problems, is used for solving the portfolio selection problem. Data from the BSE are used for illustration.
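
    Stripped of the fuzzy-random layer, the mean semi-absolute-deviation core is a small linear program. The sketch below solves that crisp version with scipy's linprog on synthetic scenario returns; the λ vector and the ACO solver of the paper are deliberately omitted, and the return data and target are arbitrary.

    ```python
    # Sketch: crisp mean semi-absolute-deviation portfolio selection as an LP.
    # Synthetic scenario returns; not the fuzzy-random lambda-MSAD model itself.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(5)
    T_, n = 60, 5                                # scenarios x assets
    R = 0.01 + 0.05 * rng.normal(size=(T_, n))   # scenario returns
    mu = R.mean(axis=0)
    target = 0.005                               # required mean portfolio return

    # Variables: x (n weights), d (T_ downside deviations). Minimize mean of d.
    c = np.concatenate([np.zeros(n), np.ones(T_) / T_])
    # d_t >= (mu - r_t) @ x   <=>   (mu - r_t) @ x - d_t <= 0
    A_ub = np.hstack([mu - R, -np.eye(T_)])
    b_ub = np.zeros(T_)
    # mean return constraint: -mu @ x <= -target
    A_ub = np.vstack([A_ub, np.concatenate([-mu, np.zeros(T_)])])
    b_ub = np.append(b_ub, -target)
    A_eq = np.concatenate([np.ones(n), np.zeros(T_)])[None, :]  # weights sum to 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + T_))
    print("weights:", np.round(res.x[:n], 3), " mean semi-dev:", res.fun)
    ```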

  7. Bethe Ansatz for the Weakly Asymmetric Simple Exclusion Process and Phase Transition in the Current Distribution

    NASA Astrophysics Data System (ADS)

    Simon, Damien

    2011-03-01

    The probability distribution of the current in the asymmetric simple exclusion process is expected to undergo a phase transition in the regime of weak asymmetry of the jumping rates. This transition was first predicted by Bodineau and Derrida using a linear stability analysis of the hydrodynamical limit of the process and further arguments have been given by Mallick and Prolhac. However it has been impossible so far to study what happens after the transition. The present paper presents an analysis of the large deviation function of the current on both sides of the transition from a Bethe Ansatz approach of the weak asymmetry regime of the exclusion process.

  8. Using trading strategies to detect phase transitions in financial markets.

    PubMed

    Forró, Z; Woodard, R; Sornette, D

    2015-04-01

    We show that the log-periodic power law singularity model (LPPLS), a mathematical embodiment of positive feedbacks between agents and of their hierarchical dynamical organization, has a significant predictive power in financial markets. We find that LPPLS-based strategies significantly outperform the randomized ones and that they are robust with respect to a large selection of assets and time periods. The dynamics of prices thus markedly deviate from randomness in certain pockets of predictability that can be associated with bubble market regimes. Our hybrid approach, marrying finance with the trading strategies, and critical phenomena with LPPLS, demonstrates that targeting information related to phase transitions enables the forecast of financial bubbles and crashes punctuating the dynamics of prices.

  9. A sequential quadratic programming algorithm using an incomplete solution of the subproblem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murray, W.; Prieto, F.J.

    1993-05-01

    We analyze sequential quadratic programming (SQP) methods to solve nonlinear constrained optimization problems that are more flexible in their definition than standard SQP methods. The type of flexibility introduced is motivated by the necessity to deviate from the standard approach when solving large problems. Specifically we no longer require a minimizer of the QP subproblem to be determined or particular Lagrange multiplier estimates to be used. Our main focus is on an SQP algorithm that uses a particular augmented Lagrangian merit function. New results are derived for this algorithm under weaker conditions than previously assumed; in particular, it is not assumed that the iterates lie on a compact set.
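
    For orientation, a standard SQP solver with fully solved subproblems is available in scipy as SLSQP; the toy problem below only illustrates this class of method and does not implement the paper's incomplete-subproblem variant or its augmented Lagrangian merit function.

    ```python
    # Context: a standard SQP method in practice, scipy's SLSQP, on a toy
    # constrained problem. Not the paper's incomplete-QP-subproblem algorithm.
    import numpy as np
    from scipy.optimize import minimize

    objective = lambda x: (x[0] - 1) ** 2 + (x[1] - 2.5) ** 2
    constraints = [{"type": "ineq", "fun": lambda x: x[0] - 2 * x[1] + 2},
                   {"type": "eq",   "fun": lambda x: x[0] + x[1] - 2}]
    res = minimize(objective, x0=np.array([2.0, 0.0]), method="SLSQP",
                   constraints=constraints, bounds=[(0, None), (0, None)])
    print(res.x, res.fun)
    ```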

  10. Using trading strategies to detect phase transitions in financial markets

    NASA Astrophysics Data System (ADS)

    Forró, Z.; Woodard, R.; Sornette, D.

    2015-04-01

    We show that the log-periodic power law singularity model (LPPLS), a mathematical embodiment of positive feedbacks between agents and of their hierarchical dynamical organization, has a significant predictive power in financial markets. We find that LPPLS-based strategies significantly outperform the randomized ones and that they are robust with respect to a large selection of assets and time periods. The dynamics of prices thus markedly deviate from randomness in certain pockets of predictability that can be associated with bubble market regimes. Our hybrid approach, marrying finance with the trading strategies, and critical phenomena with LPPLS, demonstrates that targeting information related to phase transitions enables the forecast of financial bubbles and crashes punctuating the dynamics of prices.

  11. Automatic M1-SO Montage Headgear for Transcranial Direct Current Stimulation (TDCS) Suitable for Home and High-Throughput In-Clinic Applications.

    PubMed

    Knotkova, Helena; Riggs, Alexa; Berisha, Destiny; Borges, Helen; Bernstein, Henry; Patel, Vaishali; Truong, Dennis Q; Unal, Gozde; Arce, Denis; Datta, Abhishek; Bikson, Marom

    2018-05-15

    Non-invasive transcranial direct current stimulation (tDCS) over the motor cortex is broadly investigated to modulate functional outcomes such as motor function, sleep characteristics, or pain. The most common montages, which use two large electrodes (25-35 cm²) placed over the area of the motor cortex and the contralateral supraorbital region (M1-SO montages), require precise measurements, usually using the 10-20 EEG system, which is cumbersome in clinics and not suitable for application by patients at home. The objective was to develop and test novel headgear allowing reproduction of the M1-SO montage without 10-20 EEG measurements, neuronavigation, or TMS. Points C3/C4 of the 10-20 EEG system are the conventional reference for the M1 electrode. The headgear was designed using an orthogonal, fixed-angle approach for connection of the frontal and coronal headgear components. The headgear prototype was evaluated for accuracy and replicability of the M1 electrode position in 600 repeated measurements compared to manually determined C3 in 30 volunteers. Computational modeling was used to estimate brain current flow at the mean and maximum recorded electrode placement deviations from C3. The headgear includes navigational points for accurate placement and assemblies to hold electrodes in the M1-SO position without measurement by the user. Repeated measurements indicated accuracy and replicability of the electrode position: the mean [SD] deviation of the M1 electrode (size 5 × 5 cm) from C3 was 1.57 [1.51] mm, median 1 mm. Computational modeling suggests that the potential deviation from C3 does not produce a significant change in brain current flow. The novel approach to the M1-SO montage using fixed-angle headgear not requiring measurements by patients or caregivers facilitates tDCS studies in home settings and can replace cumbersome C3 measurements for clinical tDCS applications. © 2018 International Neuromodulation Society.

  12. A Kalman filter approach for the determination of celestial reference frames

    NASA Astrophysics Data System (ADS)

    Soja, Benedikt; Gross, Richard; Jacobs, Christopher; Chin, Toshio; Karbon, Maria; Nilsson, Tobias; Heinkelmann, Robert; Schuh, Harald

    2017-04-01

    The coordinate model of radio sources in International Celestial Reference Frames (ICRF), such as the ICRF2, has traditionally been a constant offset. While sufficient for a large part of radio sources considering current accuracy requirements, several sources exhibit significant temporal coordinate variations. In particular, the group of the so-called special handling sources is characterized by large fluctuations in the source positions. For these sources, and for several from the "others" category of radio sources, a coordinate model that goes beyond a constant offset would be beneficial. However, due to the sheer number of radio sources in catalogs like the ICRF2, and even more so with the upcoming ICRF3, it is difficult to find the most appropriate coordinate model for every single radio source. For this reason, we have developed a time series approach to the determination of celestial reference frames (CRF). We feed the radio source coordinates derived from single very long baseline interferometry (VLBI) sessions sequentially into a Kalman filter and smoother, retaining their full covariances. The estimation of the source coordinates is carried out with a temporal resolution identical to the input data, i.e. usually 1-4 days. The coordinates are assumed to behave like random walk processes, an assumption which has already been made successfully for the determination of terrestrial reference frames such as the JTRF2014. To be able to apply the most suitable process noise value for every single radio source, their statistical properties are analyzed by computing their Allan standard deviations (ADEV). In addition to the determination of process noise values, the ADEV allows one to assess whether the variations in certain radio source positions deviate significantly from random walk processes. Our investigations also deal with other means of source characterization, such as the structure index, in order to derive a suitable process noise model. The Kalman filter CRFs resulting from the different approaches are compared among each other, to the original radio source position time series, as well as to a traditional CRF solution, in which the constant source positions are estimated in a global least squares adjustment.
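
    The random-walk filtering step is easy to sketch in one dimension. The example below filters synthetic session-wise estimates of a single source coordinate with a scalar Kalman filter; the epochs, noise levels, and the process noise value (which the paper derives from Allan deviations) are illustrative assumptions.

    ```python
    # Sketch: one coordinate of one radio source filtered as a random walk,
    # the process model described above, with illustrative noise levels.
    import numpy as np

    rng = np.random.default_rng(6)
    n_sessions = 120
    truth = np.cumsum(rng.normal(0, 0.02, n_sessions))   # random-walk truth, mas
    obs = truth + rng.normal(0, 0.1, n_sessions)         # session estimates, mas
    r = 0.1 ** 2                 # observation variance
    q = 0.02 ** 2                # process noise per step (from ADEV in the paper)

    x, p = obs[0], r             # initial state and variance
    filtered = []
    for z in obs:
        p = p + q                            # predict: random-walk variance growth
        k = p / (p + r)                      # Kalman gain
        x = x + k * (z - x)                  # update with the session estimate
        p = (1 - k) * p
        filtered.append(x)

    print(f"rms error raw: {np.std(obs - truth):.3f}  "
          f"filtered: {np.std(np.array(filtered) - truth):.3f}")
    ```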

  13. Foreign Object Damage Identification in Turbine Engines

    NASA Technical Reports Server (NTRS)

    Strack, William; Zhang, Desheng; Turso, James; Pavlik, William; Lopez, Isaac

    2005-01-01

    This report summarizes the collective work of a five-person team from different organizations examining the problem of detecting foreign object damage (FOD) events in turbofan engines from gas path thermodynamic and bearing accelerometer sensors, and determining the severity of damage to each component (diagnosis). Several detection and diagnostic approaches were investigated and a software tool (FODID) was developed to assist researchers in detecting and diagnosing FOD events. These approaches include (1) fan efficiency deviation computed from upstream and downstream temperature/pressure measurements, (2) gas path weighted least squares estimation of component health parameter deficiencies, (3) Kalman filter estimation of component health parameters, and (4) use of structural vibration signal processing to detect both large and small FOD events. The last three of these approaches require a significant amount of computation in conjunction with a physics-based analytic model of the underlying phenomenon: the NPSS thermodynamic cycle code for approaches 1 to 3, and the DyRoBeS reduced-order rotor dynamics code for approach 4. A potential application of the FODID software tool, in addition to its detection/diagnosis role, is using its sensitivity results to help identify the best types of sensors and their optimum locations within the gas path, and similarly for bearing accelerometers.

  14. Rare Event Simulation for T-cell Activation

    NASA Astrophysics Data System (ADS)

    Lipsmeier, Florian; Baake, Ellen

    2009-02-01

    The problem of statistical recognition is considered, as it arises in immunobiology, namely, the discrimination of foreign antigens against a background of the body's own molecules. The precise mechanism of this foreign-self-distinction, though one of the major tasks of the immune system, continues to be a fundamental puzzle. Recent progress has been made by van den Berg, Rand, and Burroughs (J. Theor. Biol. 209:465-486, 2001), who modelled the probabilistic nature of the interaction between the relevant cell types, namely, T-cells and antigen-presenting cells (APCs). Here, the stochasticity is due to the random sample of antigens present on the surface of every APC, and to the random receptor type that characterises individual T-cells. It has been shown previously (van den Berg et al. in J. Theor. Biol. 209:465-486, 2001; Zint et al. in J. Math. Biol. 57:841-861, 2008) that this model, though highly idealised, is capable of reproducing important aspects of the recognition phenomenon, and of explaining them on the basis of stochastic rare events. These results were obtained with the help of a refined large deviation theorem and were thus asymptotic in nature. Simulations have, so far, been restricted to the straightforward simple sampling approach, which does not allow for sample sizes large enough to address more detailed questions. Building on the available large deviation results, we develop an importance sampling technique that allows for a convenient exploration of the relevant tail events by means of simulation. With its help, we investigate the mechanism of statistical recognition in some depth. In particular, we illustrate how a foreign antigen can stand out against the self background if it is present in sufficiently many copies, although no a priori difference between self and nonself is built into the model.
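
    The importance-sampling mechanism itself is generic and can be shown in a few lines. The sketch below uses exponential tilting to estimate a large-deviation tail probability for a sum of Gaussians and compares it with the exact value; it illustrates the technique only and omits all details of the T-cell model.

    ```python
    # Sketch: exponentially tilted importance sampling for a large-deviation tail
    # probability, P(mean of n standard normals >= a), estimated by sampling from
    # a shifted distribution and reweighting with the likelihood ratio.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(7)
    n, a, n_samples = 50, 0.5, 10000

    # Tilt each summand so the rare event becomes typical: for Gaussians the
    # optimal exponential tilt is a mean shift to a.
    X = rng.normal(loc=a, scale=1.0, size=(n_samples, n))
    S = X.sum(axis=1)
    # Likelihood ratio of N(0,1)^n against N(a,1)^n depends on the sum only:
    weights = np.exp(-a * S + n * a**2 / 2)
    hit = (S / n >= a)
    estimate = np.mean(weights * hit)

    exact = norm.sf(a * np.sqrt(n))          # since the mean is N(0, 1/n)
    print(f"IS estimate: {estimate:.3e}   exact: {exact:.3e}")
    ```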

  15. Scattering and extinction properties of overfire soot in large buoyant turbulent diffusion flames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnan, S.S.; Lin, K.C.; Faeth, G.M.

    1999-07-01

    Measurements of the scattering and extinction properties of soot at visible wavelengths (351.2-632.8 nm) were completed for soot in the overfire region of large buoyant turbulent diffusion flames burning in still air, where soot properties are independent of position and characteristic flame residence time for a particular fuel. Flames fueled with both gaseous (acetylene, ethylene, propylene and butadiene) and liquid (benzene, toluene, cyclohexane and n-heptane) hydrocarbon fuels were considered during the experiments. The measurements were used to evaluate Rayleigh-Debye-Gans/polydisperse-fractal-aggregate theory for the absorption and scattering properties of soot, finding good performance for the present test range, which included primary particle size parameters as large as 0.46; in addition, effects of fuel type over the test range were comparable to experimental uncertainties. Fractal dimensions were properly independent of wavelength and yielded a mean value of 1.79 with a standard deviation of 0.05, which is in excellent agreement with earlier work. Dimensionless extinction coefficients were relatively independent of wavelength and yielded a mean value of 8.4 with a standard deviation of 1.5. Present refractive indices did not exhibit a resonance condition, seen for graphite, as the UV was approached. Values of the refractive index function for absorption, E(m), increased as wavelength increased and were comparable to most earlier measurements for wavelengths greater than 400 nm. Values of the refractive index function for scattering, F(m), agreed with earlier measurements at wavelengths of 450-550 nm but otherwise increased with increasing wavelength more rapidly than seen before.

  16. MUSiC - A general search for deviations from monte carlo predictions in CMS

    NASA Astrophysics Data System (ADS)

    Biallass, Philipp A.; CMS Collaboration

    2009-06-01

    A model independent analysis approach in CMS is presented, systematically scanning the data for deviations from the Monte Carlo expectation. Such an analysis can contribute to the understanding of the detector and the tuning of the event generators. Furthermore, due to the minimal theoretical bias this approach is sensitive to a variety of models of new physics, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. The importance of systematic uncertainties is outlined, which are taken into account rigorously within the algorithm. Possible detector effects and generator issues, as well as models involving Supersymmetry and new heavy gauge bosons are used as an input to the search algorithm.

  17. MUSiC - A Generic Search for Deviations from Monte Carlo Predictions in CMS

    NASA Astrophysics Data System (ADS)

    Hof, Carsten

    2009-05-01

    We present a model independent analysis approach, systematically scanning the data for deviations from the Standard Model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of the event generators. Furthermore, due to the minimal theoretical bias this approach is sensitive to a variety of models of new physics, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. We outline the importance of systematic uncertainties, which are taken into account rigorously within the algorithm. Possible detector effects and generator issues, as well as models involving supersymmetry and new heavy gauge bosons have been used as an input to the search algorithm.

  18. Mapping opportunities and challenges for rewilding in Europe.

    PubMed

    Ceaușu, Silvia; Hofmann, Max; Navarro, Laetitia M; Carver, Steve; Verburg, Peter H; Pereira, Henrique M

    2015-08-01

    Farmland abandonment takes place across the world due to socio-economic and ecological drivers. In Europe agricultural and environmental policies aim to prevent abandonment and halt ecological succession. Ecological rewilding has been recently proposed as an alternative strategy. We developed a framework to assess opportunities for rewilding across different dimensions of wilderness in Europe. We mapped artificial light, human accessibility based on transport infrastructure, proportion of harvested primary productivity (i.e., ecosystem productivity appropriated by humans through agriculture or forestry), and deviation from potential natural vegetation in areas projected to be abandoned by 2040. At the continental level, the levels of artificial light were low and the deviation from potential natural vegetation was high in areas of abandonment. The relative importance of wilderness metrics differed regionally and was strongly connected to local environmental and socio-economic contexts. Large areas of projected abandonment were often located in or around Natura 2000 sites. Based on these results, we argue that management should be tailored to restore the aspects of wilderness that are lacking in each region. There are many remaining challenges regarding biodiversity in Europe, but megafauna species are already recovering. To further potentiate large-scale rewilding, Natura 2000 management would need to incorporate rewilding approaches. Our framework can be applied to assessing rewilding opportunities and challenges in other world regions, and our results could guide redirection of subsidies to manage social-ecological systems. © 2015 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of the Society for Conservation Biology.

  19. Effect of surface nano/micro-structuring on the early formation of microbial anodes with Geobacter sulfurreducens: Experimental and theoretical approaches.

    PubMed

    Champigneux, Pierre; Renault-Sentenac, Cyril; Bourrier, David; Rossi, Carole; Delia, Marie-Line; Bergel, Alain

    2018-06-01

    Smooth and nano-rough flat gold electrodes were manufactured with controlled Ra of 0.8 and 4.5 nm, respectively. Further nano-rough surfaces (Ra 4.5 nm) were patterned with arrays of micro-pillars 500 μm high. All these electrodes were implemented in pure cultures of Geobacter sulfurreducens, under a constant potential of 0.1 V/SCE and with a single addition of 10 mM acetate, to check the early formation of microbial anodes. The flat smooth electrodes produced an average current density of 0.9 A·m⁻². The flat nano-rough electrodes reached 2.5 A·m⁻² on average, but with a large experimental deviation of ±2.0 A·m⁻². This large deviation was due to the erratic colonization of the surface but, when settled on the surface, the cells displayed a current density that was directly correlated to the biofilm coverage ratio. The micro-pillars considerably improved the experimental reproducibility by offering the cells a quieter environment, facilitating biofilm development. Current densities of up to 8.5 A·m⁻² (per projected surface area) were thus reached, in spite of rate limitation due to the mass transport of the buffering species, as demonstrated by numerical modelling. Nano-roughness combined with micro-structuring increased current density by a factor close to 10 with respect to the smooth flat surface. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. Cosmological implications of a large complete quasar sample.

    PubMed

    Segal, I E; Nicoll, J F

    1998-04-28

    Objective and reproducible determinations of the probabilistic significance levels of the deviations between theoretical cosmological prediction and direct model-independent observation are made for the Large Bright Quasar Sample [Foltz, C., Chaffee, F. H., Hewett, P. C., MacAlpine, G. M., Turnshek, D. A., et al. (1987) Astron. J. 94, 1423-1460]. The Expanding Universe model as represented by the Friedman-Lemaitre cosmology with parameters q₀ = 0, Λ = 0, denoted as C1, and chronometric cosmology (no relevant adjustable parameters), denoted as C2, are the cosmologies considered. The mean and the dispersion of the apparent magnitudes and the slope of the apparent magnitude-redshift relation are the directly observed statistics predicted. The C1 predictions of these cosmology-independent quantities are deviant by as much as 11σ from direct observation; none of the C2 predictions deviate by >2σ. The C1 deviations may be reconciled with theory by the hypothesis of quasar "evolution," which, however, appears incapable of being substantiated through direct observation. The excellent quantitative agreement of the C1 deviations with those predicted by C2 without adjustable parameters for the results of analysis predicated on C1 indicates that the evolution hypothesis may well be a theoretical artifact.

  1. Moving standard deviation and moving sum of outliers as quality tools for monitoring analytical precision.

    PubMed

    Liu, Jiakai; Tan, Chin Hon; Badrick, Tony; Loh, Tze Ping

    2018-02-01

    An increase in analytical imprecision (expressed as CVa) can introduce additional variability (i.e. noise) to the patient results, which poses a challenge to the optimal management of patients. Relatively little work has been done to address the need for continuous monitoring of analytical imprecision. Through numerical simulations, we describe the use of the moving standard deviation (movSD) and a recently described moving sum of outlier (movSO) patient results as means for detecting increased analytical imprecision, and compare their performance against internal quality control (QC) and average of normals (AoN) approaches. The power to detect an increase in CVa is suboptimal under routine internal QC procedures. The AoN technique almost always had the highest average number of patient results affected before error detection (ANPed), indicating that it generally had the worst capability for detecting an increased CVa. On the other hand, the movSD and movSO approaches were able to detect an increased CVa at significantly lower ANPed, particularly for measurands that displayed a relatively small ratio of biological variation to CVa. CONCLUSION: The movSD and movSO approaches are effective in detecting an increase in CVa for high-risk measurands with small biological variation. Their performance is relatively poor when the biological variation is large. However, the clinical risk of an increase in analytical imprecision is attenuated for these measurands, as the increased analytical imprecision adds only marginally to the total variation and is less likely to impact clinical care. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
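
    The movSD idea reduces to a rolling standard deviation with a control limit. The sketch below flags a simulated jump in analytical SD; the block size, target SD, and 130% limit are arbitrary illustrative choices, not the optimized settings of the paper.

    ```python
    # Sketch: flagging an increase in analytical imprecision with a moving
    # standard deviation (movSD) of patient results. Settings are illustrative.
    import numpy as np

    rng = np.random.default_rng(8)
    baseline = rng.normal(100, 5, 1000)          # in-control patient results
    drifted = rng.normal(100, 8, 300)            # analytical SD has increased
    results = np.concatenate([baseline, drifted])

    block = 50
    movsd = np.array([results[i - block:i].std(ddof=1)
                      for i in range(block, results.size + 1)])
    limit = 5 * 1.3                              # alarm at 130% of the target SD
    alarm = np.argmax(movsd > limit) + block     # first result index raising an alarm
    print(f"first alarm at result #{alarm} (drift began at #1000)")
    ```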

  2. Simulation-based estimation of mean and standard deviation for meta-analysis via Approximate Bayesian Computation (ABC).

    PubMed

    Kwon, Deukwoo; Reis, Isildinha M

    2015-08-12

    When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC performs well. However, the Wan et al. method is best for estimating standard deviation under normal distribution. In the estimation of the mean, our ABC method is best regardless of assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics such as the posterior mean and 95 % credible interval when Bayesian analysis has been employed.
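
    The flavor of the approach can be conveyed with a bare-bones ABC rejection sampler. The sketch below recovers a mean and SD from a reported median, minimum, and maximum under a normality assumption; the priors, summaries, tolerance rule (keeping the closest draws), and reported values are all illustrative, not the paper's exact setup.

    ```python
    # Sketch of the ABC idea: recover mean and SD from reported summary
    # statistics by simulating candidate datasets and keeping the parameter
    # draws whose simulated summaries land closest to the reported ones.
    import numpy as np

    rng = np.random.default_rng(9)
    n = 80
    reported = np.array([10.2, 3.1, 18.0])   # median, min, max (hypothetical)

    n_draws = 100_000
    mus = rng.uniform(0, 20, n_draws)        # flat priors over a plausible range
    sigmas = rng.uniform(0.1, 10, n_draws)
    dist = np.empty(n_draws)
    for i in range(n_draws):
        sim = rng.normal(mus[i], sigmas[i], n)
        summ = np.array([np.median(sim), sim.min(), sim.max()])
        dist[i] = np.linalg.norm(summ - reported)

    keep = np.argsort(dist)[:200]            # keep the closest 0.2% of draws
    print(f"posterior mean ~ {mus[keep].mean():.2f}, "
          f"posterior SD ~ {sigmas[keep].mean():.2f}")
    ```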

  3. 'Birdwatching and baby-watching': Niko and Elisabeth Tinbergen's ethological approach to autism.

    PubMed

    Silverman, Chloe

    2010-06-01

    Biographers have largely dismissed Nikolaas 'Niko' Tinbergen's late research into the causes and treatment of autism, describing it as a deviation from his previous work, influenced by his personal desires. They have pointed to the incoherence of Tinbergen's assertions about best practices for treating autism, his lack of experience with children with autism, and his apparent embracing of psychogenic theories that the medical research community had largely abandoned. While these critiques have value, it is significant that Tinbergen himself saw his research as a logical extension of his seminal findings in the field of ethology, the science of animal behaviour. The reception of his theories, both positive and negative, was due less to their strengths or faults than to the fact that Tinbergen had inserted himself into a pre-existing and acrimonious debate in the autism research community. Debates about the relative role of environmental and hereditary factors in the aetiology of autism, and the implications of both for the efficacy of different treatments, had political and material significance for the success of parent organizations' lobbying efforts and financial support for research programmes. Tinbergen's approach was welcomed and even championed by a significant minority, who saw no problem with his ideas or methods.

  4. Complexation of Cd, Ni, and Zn by DOC in polluted groundwater: A comparison of approaches using resin exchange, aquifer material sorption, and computer speciation models (WHAM and MINTEQA2)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christensen, J.B.; Christensen, T.H.

    1999-11-01

    Complexation of cadmium (Cd), nickel (Ni), and zinc (Zn) by dissolved organic carbon (DOC) in leachate-polluted groundwater was measured using a resin equilibrium method and an aquifer material sorption technique. The first method is commonly used in complexation studies, while the second method better represents aquifer conditions. The two approaches gave similar results. Metal-DOC complexation was measured over a range of DOC concentrations using the resin equilibrium method, and the results were compared to simulations made by two speciation models containing default databases on metal-DOC complexes (WHAM and MINTEQA2). The WHAM model gave reasonable estimates of Cd and Ni complexation by DOC for both leachate-polluted groundwater samples. The estimated effect of complexation differed by less than 50% from the experimental values, corresponding to a deviation in the activity of the free metal ion of a factor of 2.5. The effect of DOC complexation for Zn was largely overestimated by the WHAM model, and it was found that using a binding constant of 1.7 instead of the default value of 1.3 would improve the fit between the simulations and experimental data. The MINTEQA2 model gave reasonable predictions of the complexation of Cd and Zn by DOC, whereas deviations in the estimated activity of the free Ni²⁺ ion as compared to experimental results are up to a factor of 5.

  5. Testing the applicability of artificial intelligence techniques to the subject of erythemal ultraviolet solar radiation part one: the applicability of a fuzzy rule based approach.

    PubMed

    Riad, A M; Elminir, Hamdy K; Own, Hala S; Azzam, Yosry A

    2008-02-27

    This work presents the applicability of a fuzzy logic approach to the calculation of noontime erythemal UV irradiance for the plain areas of Egypt. When different combinations of data sets were examined from the test performance point of view, it was found that 91% of the whole series was estimated within a deviation of less than ±10 mW/m², and 9% of these deviations lay within the range of ±15 mW/m² to ±25 mW/m².

  6. Qualitative Event-Based Diagnosis: Case Study on the Second International Diagnostic Competition

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Roychoudhury, Indranil

    2010-01-01

    We describe a diagnosis algorithm entered into the Second International Diagnostic Competition. We focus on the first diagnostic problem of the industrial track of the competition, in which a diagnosis algorithm must detect, isolate, and identify faults in an electrical power distribution testbed and provide corresponding recovery recommendations. The diagnosis algorithm embodies a model-based approach, centered around qualitative event-based fault isolation. Faults produce deviations in measured values from model-predicted values. The sequence of these deviations is matched to those predicted by the model in order to isolate faults. We augment this approach with model-based fault identification, which determines fault parameters and helps to further isolate faults. We describe the diagnosis approach, provide diagnosis results from running the algorithm on the provided example scenarios, and discuss the issues faced and lessons learned from implementing the approach.

  7. A Model Based Approach to Increase the Part Accuracy in Robot Based Incremental Sheet Metal Forming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meier, Horst; Laurischkat, Roman; Zhu Junhong

    One main influence on the dimensional accuracy in robot based incremental sheet metal forming results from the compliance of the involved robot structures. Compared to conventional machine tools, the low stiffness of the robot's kinematics results in a significant deviation from the planned tool path and therefore in a shape of insufficient quality. To predict and compensate these deviations offline, a model based approach has been developed, consisting of a finite element approach to simulate the sheet forming and a multi body system modeling the compliant robot structure. This paper describes the implementation and experimental verification of the multi body system model and its included compensation method.

  8. Design of a practical model-observer-based image quality assessment method for x-ray computed tomography imaging systems

    PubMed Central

    Tseng, Hsin-Wu; Fan, Jiahua; Kupinski, Matthew A.

    2016-01-01

    The use of a channelization mechanism on model observers not only makes mimicking human visual behavior possible, but also reduces the amount of image data needed to estimate the model observer parameters. The channelized Hotelling observer (CHO) and channelized scanning linear observer (CSLO) have recently been used to assess CT image quality for detection tasks and combined detection/estimation tasks, respectively. Although the use of channels substantially reduces the amount of data required to compute image quality, the number of scans required for CT imaging is still not practical for routine use. It is our desire to further reduce the number of scans required to make CHO or CSLO an image quality tool for routine and frequent system validations and evaluations. This work explores different data-reduction schemes and designs an approach that requires only a few CT scans. Three different kinds of approaches are included in this study: a conventional CHO/CSLO technique with a large sample size, a conventional CHO/CSLO technique with fewer samples, and an approach that we will show requires fewer samples to mimic conventional performance with a large sample size. The mean value and standard deviation of areas under the ROC/EROC curve were estimated using the well-validated shuffle approach. The results indicate that an 80% data reduction can be achieved without loss of accuracy. This substantial data reduction is a step toward a practical tool for routine-task-based QA/QC CT system assessment. PMID:27493982
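
    The Hotelling part of the CHO is compact enough to sketch end to end. The example below computes a CHO detectability index on synthetic noise-limited images using simple difference-of-Gaussians channels; the channel set, image model, and sample sizes are illustrative assumptions, not the estimators (shuffle, LOOL) discussed in these records.

    ```python
    # Sketch of a channelized Hotelling observer (CHO) detectability computation
    # on synthetic images, with simple difference-of-Gaussians channels.
    import numpy as np

    rng = np.random.default_rng(10)
    sz, n_img = 32, 400
    yy, xx = np.mgrid[:sz, :sz] - sz // 2
    r2 = xx**2 + yy**2
    signal = 2.0 * np.exp(-r2 / (2 * 3.0**2))                 # Gaussian signal

    # Three difference-of-Gaussians channels at increasing scales (illustrative).
    channels = np.stack([np.exp(-r2 / (2 * s**2)) - np.exp(-r2 / (2 * (2 * s) ** 2))
                         for s in (1.5, 3.0, 6.0)]).reshape(3, -1).T  # (pixels, 3)

    def channel_outputs(add_signal):
        imgs = rng.normal(0, 1, (n_img, sz * sz)) + (signal.ravel() if add_signal else 0)
        return imgs @ channels                                # (n_img, 3)

    v1, v0 = channel_outputs(True), channel_outputs(False)
    S = 0.5 * (np.cov(v1.T) + np.cov(v0.T))                   # pooled channel covariance
    dv = v1.mean(axis=0) - v0.mean(axis=0)
    d_prime = np.sqrt(dv @ np.linalg.solve(S, dv))            # CHO detectability index
    print(f"CHO d' = {d_prime:.2f}")
    ```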

  9. Design of a practical model-observer-based image quality assessment method for CT imaging systems

    NASA Astrophysics Data System (ADS)

    Tseng, Hsin-Wu; Fan, Jiahua; Cao, Guangzhi; Kupinski, Matthew A.; Sainath, Paavana

    2014-03-01

    The channelized Hotelling observer (CHO) is a powerful method for quantitative image quality evaluations of CT systems and their image reconstruction algorithms. It has recently been used to validate the dose reduction capability of iterative image-reconstruction algorithms implemented on CT imaging systems. The use of the CHO for routine and frequent system evaluations is desirable both for quality assurance evaluations as well as further system optimizations. The use of channels substantially reduces the amount of data required to achieve accurate estimates of observer performance. However, the number of scans required is still large even with the use of channels. This work explores different data reduction schemes and designs a new approach that requires only a few CT scans of a phantom. For this work, the leave-one-out likelihood (LOOL) method developed by Hoffbeck and Landgrebe is studied as an efficient method of estimating the covariance matrices needed to compute CHO performance. Three different kinds of approaches are included in the study: a conventional CHO estimation technique with a large sample size, a conventional technique with fewer samples, and the new LOOL-based approach with fewer samples. The mean value and standard deviation of the area under the ROC curve (AUC) are estimated by the shuffle method. Both simulation and real data results indicate that an 80% data reduction can be achieved without loss of accuracy. This data reduction makes the proposed approach a practical tool for routine CT system assessment.

  10. Particle-pair relative velocity measurement in high-Reynolds-number homogeneous and isotropic turbulence using 4-frame particle tracking velocimetry

    NASA Astrophysics Data System (ADS)

    Dou, Zhongwang; Ireland, Peter J.; Bragg, Andrew D.; Liang, Zach; Collins, Lance R.; Meng, Hui

    2018-02-01

    The radial relative velocity (RV) between particles suspended in turbulent flow plays a critical role in droplet collision and growth. We present a simple and accurate approach to RV measurement in isotropic turbulence—planar 4-frame particle tracking velocimetry—using routine PIV hardware. It improves particle positioning and pairing accuracy over the 2-frame holographic approach of de Jong et al. (Int J Multiphas Flow 36:324-332, 2010) without using high-speed cameras and lasers as in Saw et al. (Phys Fluids 26:111702, 2014). Homogeneous and isotropic turbulent flow (R_λ = 357) in a new, fan-driven, truncated icosahedron chamber was laden with either low-Stokes (mean St = 0.09, standard deviation 0.05) or high-Stokes aerosols (mean St = 3.46, standard deviation 0.57). For comparison, DNS was conducted under similar conditions (R_λ = 398; St = 0.10 and 3.00, respectively). Experimental RV probability density functions (PDF) and mean inward RV agree well with DNS. Mean inward RV increases with St at small particle separations, r, and decreases with St at large r, indicating the dominance of "path-history" and "inertial filtering" effects, respectively. However, at small r, the experimental mean inward RV trends higher than DNS, possibly due to the slight polydispersity of particles and the finite light sheet thickness in the experiments. To confirm this interpretation, we performed numerical experiments and found that particle polydispersity increases mean inward RV at small r, while finite laser sheet thickness also leads to an overestimate of mean inward RV at small r. This study demonstrates the feasibility of accurately measuring RV using routine hardware and verifies, for the first time, the path-history and inertial filtering effects on particle-pair RV at large particle separations experimentally.

  11. Approaches to inspecting computed tomographic and magnetic resonance studies.

    PubMed

    Lamb, Christopher R; Dale, Vicki H M

    2013-01-01

    There is a need to better understand how to optimally inspect large image datasets. The aim of the present study was to complement experimental studies of visual perception by using an online questionnaire to collect opinions of practicing veterinary radiologists about the approaches they use when inspecting clinical computed X-ray tomography (CT) and/or magnetic resonance (MR) studies, and to test associations between radiologists' approaches and their training, experience, or caseload. Questionnaires were received from 90/454 (20%) American College of Veterinary Radiology (ACVR) Diplomates and 58/156 (37%) European College of Veterinary Diagnostic Imaging (ECVDI) Diplomates, providing 139 complete responses for CT studies and 116 for MR. Questionnaire responses differed for the following variables: specialty college, years since Board Certification, CT and MR caseload, and type of practice. ACVR Diplomates more frequently inspected multiple anatomic structures in CT and MR images before moving on to the next image, and ECVDI Diplomates more frequently inspected a specific anatomic structure through a series, then went back and checked another structure. A significant number of radiologists indicated that they initially ignore the history, adopt relatively rigid search patterns with emphasis on viewing images in a predetermined order with minimal deviation, and arrange series of images to facilitate comparisons between images, such as pre- and postcontrast images. Radiologists tended to adopt similar approaches for both CT and MR studies. Findings from this study could be used as foci for teaching novices how to approach large imaging studies, and provide guidance for case-based assessment of trainees. © 2013 Veterinary Radiology & Ultrasound.

  12. Scaling Deviations for Neutrino Reactions in Asymptotically Free Field Theories

    DOE R&D Accomplishments Database

    Wilczek, F. A.; Zee, A.; Treiman, S. B.

    1974-11-01

    Several aspects of deep inelastic neutrino scattering are discussed in the framework of asymptotically free field theories. We first consider the growth behavior of the total cross sections at large energies. Because of the deviations from strict scaling which are characteristic of such theories the growth need not be linear. However, upper and lower bounds are established which rather closely bracket a linear growth. We next consider in more detail the expected pattern of scaling deviation for the structure functions and, correspondingly, for the differential cross sections. The analysis here is based on certain speculative assumptions. The focus is on qualitative effects of scaling breakdown as they may show up in the X and y distributions. The last section of the paper deals with deviations from the Callan-Gross relation.

  13. Uncertainties of Mayak urine data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Guthrie; Vostrotin, Vadim; Vvdensky, Vladimir

    2008-01-01

    For internal dose calculations for the Mayak worker epidemiological study, quantitative estimates of uncertainty of the urine measurements are necessary. Some of the data consist of measurements of 24-h urine excretion on successive days (e.g. 3 or 4 days). In a recent publication, dose calculations were done where the uncertainty of the urine measurements was estimated starting from the statistical standard deviation of these replicate measurements. This approach is straightforward and accurate when the number of replicate measurements is large; however, a Monte Carlo study showed it to be problematic for the actual number of replicate measurements (median from 3 to 4). Also, it is sometimes important to characterize the uncertainty of a single urine measurement. Therefore an alternative method has been developed. A method of parameterizing the uncertainty of Mayak urine bioassay measurements is described. The Poisson lognormal model is assumed and data from 63 cases (1099 urine measurements in all) are used to empirically determine the lognormal normalization uncertainty, given the measurement uncertainties obtained from count quantities. The natural logarithm of the geometric standard deviation of the normalization uncertainty is found to be in the range 0.31 to 0.35, including a measurement component estimated to be 0.2.

  14. An efficient approach for improving virtual machine placement in cloud computing environment

    NASA Astrophysics Data System (ADS)

    Ghobaei-Arani, Mostafa; Shamsi, Mahboubeh; Rahmanian, Ali A.

    2017-11-01

    The ever-increasing demand for cloud services requires more data centres. Power consumption in data centres is a challenging problem for cloud computing, and it has not been considered properly by data centre developer companies. Large data centres in particular struggle with power costs and greenhouse gas production. Hence, power-efficient mechanisms are necessary to mitigate these effects. Virtual machine (VM) placement can be used as an effective method to reduce the power consumption in data centres. In this paper, by grouping both virtual and physical machines and taking into account the maximum absolute deviation during VM placement, both the power consumption and the service level agreement (SLA) violation in data centres are reduced. To this end, the best-fit decreasing algorithm is utilised in the simulation to reduce the power consumption by about 5% compared to the modified best-fit decreasing algorithm, while at the same time the SLA violation is improved by 6%. Finally, learning automata are used to strike a trade-off between power consumption reduction on one side and SLA violation percentage on the other.
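
    A minimal sketch of the best-fit decreasing heuristic named above, reduced to a single CPU dimension; the paper's grouping step and maximum-absolute-deviation criterion are omitted, and all names are illustrative.

        def best_fit_decreasing(vms, hosts):
            """Place VMs (cpu demands) on hosts (cpu capacities), largest VM first.

            vms:   list of (vm_id, cpu_demand)
            hosts: dict host_id -> remaining cpu capacity
            Returns dict vm_id -> host_id (None if no host fits).
            """
            placement = {}
            for vm_id, demand in sorted(vms, key=lambda v: v[1], reverse=True):
                # Best fit: the feasible host whose remaining capacity exceeds
                # the demand by the smallest margin (tightest fit).
                feasible = [(cap - demand, h) for h, cap in hosts.items()
                            if cap >= demand]
                if not feasible:
                    placement[vm_id] = None
                    continue
                _, h = min(feasible)
                hosts[h] -= demand
                placement[vm_id] = h
            return placement

        print(best_fit_decreasing([("vm1", 4), ("vm2", 2), ("vm3", 3)],
                                  {"h1": 5, "h2": 6}))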

  15. Comparing Measures of Voice Quality From Sustained Phonation and Continuous Speech.

    PubMed

    Gerratt, Bruce R; Kreiman, Jody; Garellek, Marc

    2016-10-01

    The question of what type of utterance, a sustained vowel or continuous speech, is best for voice quality analysis has been extensively studied but with equivocal results. This study examines whether previously reported differences derive from the articulatory and prosodic factors occurring in continuous speech versus sustained phonation. Speakers with voice disorders sustained vowels and read sentences. Vowel samples were excerpted from the steadiest portion of each vowel in the sentences. In addition to sustained and excerpted vowels, a third set of stimuli was created by shortening sustained vowel productions to match the duration of vowels excerpted from continuous speech. Acoustic measures were made on the stimuli, and listeners judged the severity of vocal quality deviation. Sustained vowels and those extracted from continuous speech contain essentially the same acoustic and perceptual information about vocal quality deviation. Perceived and/or measured differences between continuous speech and sustained vowels derive largely from voice source variability across segmental and prosodic contexts and not from variations in vocal fold vibration in the quasisteady portion of the vowels. Approaches to voice quality assessment that use continuous speech samples average across utterances and may not adequately quantify the variability they are intended to assess.

  16. Waveguide apparatuses and methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spencer, James E.

    2016-05-10

    Optical fiber waveguides and related approaches are implemented to facilitate communication. As may be implemented in accordance with one or more embodiments, a waveguide has a substrate including a lattice structure having a plurality of lattice regions with a dielectric constant that is different than that of the substrate, a defect in the lattice, and one or more deviations from the lattice. The defect acts with trapped transverse modes (e.g., magnetic and/or electric modes) and facilitates wave propagation along a longitudinal direction while confining the wave transversely. The deviation(s) from the lattice produces additional modes and/or coupling effects.

  17. Integrated Planning for Telepresence with Time Delays

    NASA Technical Reports Server (NTRS)

    Johnston, Mark D.; Rabe, Kenneth J.

    2006-01-01

    Integrated planning and execution of teleoperations in space with time delays are described. The topics include: 1) The Problem; 2) Future Robot Surgery? 3) Approach Overview; 4) Robonaut; 5) Normal Planning and Execution; 6) Planner Context; 7) Implementation; 8) Use of JSHOP2; 9) Monitoring and Testing GUI; 10) Normal sequence: first the supervisor acts; 11) then the robot; 12) Robot might be late; 13) Supervisor can work ahead; 14) Deviations from Plan; 15) Robot State Change Example; 16) Accomplished goals skipped in replan; 17) Planning continuity; 18) Supervisor Deviation From Plan; 19) Intentional Deviation; and 20) Infeasible states.

  18. Design and Development of Lateral Flight Director

    NASA Technical Reports Server (NTRS)

    Kudlinski, Kim E.; Ragsdale, William A.

    1999-01-01

    The control law currently used for the flight director in the Boeing 737 simulator is inadequate when large localizer deviations occur near the middle marker. Eight different control laws are investigated. A heuristic method is used to design control laws that meet specific performance criteria. The design of each is described in detail. Several tests were performed and the results compared with those of the current flight director control law. The goal was to design a control law for the flight director that can handle large localizer deviations near the middle marker, which could be caused by winds or wake turbulence, without increasing its level of complexity.

  19. On the Geometry of Chemical Reaction Networks: Lyapunov Function and Large Deviations

    NASA Astrophysics Data System (ADS)

    Agazzi, A.; Dembo, A.; Eckmann, J.-P.

    2018-04-01

    In an earlier paper, we proved the validity of large deviations theory for the particle approximation of quite general chemical reaction networks. In this paper, we extend its scope and present a more geometric insight into the mechanism of that proof, exploiting the notion of the spherical image of the reaction polytope. This allows one to view the asymptotic behavior of the vector field describing the mass-action dynamics of chemical reactions as the result of an interaction between the faces of this polytope in different dimensions. We also illustrate some local aspects of the problem in a discussion of Wentzell-Freidlin theory, together with some examples.

  20. Large deviations of a long-time average in the Ehrenfest urn model

    NASA Astrophysics Data System (ADS)

    Meerson, Baruch; Zilber, Pini

    2018-05-01

    Since its inception in 1907, the Ehrenfest urn model (EUM) has served as a test bed of key concepts of statistical mechanics. Here we employ this model to study large deviations of a time-additive quantity. We consider two continuous-time versions of the EUM with K urns and N balls: with and without interactions between the balls in the same urn. We evaluate the probability distribution P that the average number of balls in one urn over time T takes any specified value aN, where 0 ≤ a ≤ 1. For a long observation time, T → ∞, a Donsker–Varadhan large deviation principle holds: -ln P ≃ T I(a, …), where … denote additional parameters of the model. We calculate the rate function I exactly by two different methods due to Donsker and Varadhan and compare the exact results with those obtained with a variant of the WKB approximation (after Wentzel, Kramers and Brillouin). In the absence of interactions the WKB prediction for I is exact for any N. In the presence of interactions the WKB method gives asymptotically exact results for N ≫ 1. The WKB method also uncovers the (very simple) time history of the system which dominates the contributions of different time histories to P.
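
    As a numerical companion to the Donsker–Varadhan formula, a sketch of the standard tilted-generator route for a non-interacting two-urn case: the scaled cumulant generating function λ(s) is the largest eigenvalue of the tilted Markov generator, and the rate function I(a) follows by Legendre transform. The per-ball unit jump rate, grid sizes and parameter values are arbitrary choices, not the paper's.

        import numpy as np

        N = 40                      # number of balls, two urns
        ns = np.arange(N + 1)       # n = number of balls in urn 1

        # Generator of the continuous-time EUM (each ball jumps at unit rate).
        L = np.zeros((N + 1, N + 1))
        L[ns[:-1] + 1, ns[:-1]] = N - ns[:-1]   # transition n -> n+1
        L[ns[1:] - 1, ns[1:]] = ns[1:]          # transition n -> n-1
        L[ns, ns] = -N                          # total escape rate

        def scgf(s):
            # Tilt the generator by the observable n/N (time-averaged occupation);
            # the SCGF is the largest eigenvalue of the tilted matrix.
            Ls = L + np.diag(s * ns / N)
            return np.linalg.eigvals(Ls).real.max()

        # Rate function I(a) by numerical Legendre transform of lambda(s).
        s_grid = np.linspace(-30, 30, 601)
        lam = np.array([scgf(s) for s in s_grid])
        for a in (0.3, 0.5, 0.7):
            print(a, np.max(s_grid * a - lam))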

  1. Human influences on streamflow drought characteristics in England and Wales

    NASA Astrophysics Data System (ADS)

    Tijdeman, Erik; Hannaford, Jamie; Stahl, Kerstin

    2018-02-01

    Human influences can affect streamflow drought characteristics and propagation. The question is where, when and why? To answer these questions, the impact of different human influences on streamflow droughts was assessed in England and Wales, across a broad range of climate and catchment conditions. We used a dataset consisting of catchments with near-natural flow as well as catchments for which different human influences have been indicated in the metadata (Factors Affecting Runoff) of the UK National River Flow Archive (NRFA). A screening approach was applied to the streamflow records to identify human-influenced records with drought characteristics that deviated from those found for catchments with near-natural flow. Three different deviations were considered, specifically deviations in (1) the relationship between streamflow drought duration and the base flow index, BFI (specifically, BFIHOST, the BFI predicted from the hydrological properties of soils), (2) the correlation between streamflow and precipitation and (3) the temporal occurrence of streamflow droughts compared to precipitation droughts, i.e. an increase or decrease in streamflow drought months relative to precipitation drought months over the period of record. The identified deviations were then related to the indicated human influences. Results showed that the majority of catchments for which human influences were indicated did not show streamflow drought characteristics that deviated from those expected under near-natural conditions. For the catchments that did show deviating streamflow drought characteristics, prolonged streamflow drought durations were found in some of the catchments affected by groundwater abstractions. Weaker correlations between streamflow and precipitation were found for some of the catchments with reservoirs, water transfers or groundwater augmentation schemes. An increase in streamflow drought occurrence towards the end of the record was found for some of the catchments affected by groundwater abstractions, and a decrease in streamflow drought occurrence for some of the catchments with either reservoirs or groundwater abstractions. In conclusion, the proposed screening approaches were sometimes successful in identifying streamflow records with deviating drought characteristics that are likely related to different human influences. However, a quantitative attribution of the impact of human influences on streamflow drought characteristics requires more detailed case-by-case information about the type and degree of all the different human influences. Given that, in many countries, such information is often not readily accessible, the approaches adopted here could prove useful in targeting future efforts. In England and Wales specifically, the catchments with deviating streamflow drought characteristics identified in this study could serve as the starting point for detailed case study research.

  2. Accuracy of computer-aided design models of the jaws produced using ultra-low MDCT doses and ASIR and MBIR.

    PubMed

    Al-Ekrish, Asma'a A; Alfadda, Sara A; Ameen, Wadea; Hörmann, Romed; Puelacher, Wolfgang; Widmann, Gerlig

    2018-06-16

    To compare the surface of computer-aided design (CAD) models of the maxilla produced using ultra-low MDCT doses combined with filtered backprojection (FBP), adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR) reconstruction techniques with that produced from a standard dose/FBP protocol. A cadaveric completely edentulous maxilla was imaged using a standard dose protocol (CTDIvol: 29.4 mGy) and FBP, in addition to 5 low dose test protocols (LD1-5) (CTDIvol: 4.19, 2.64, 0.99, 0.53, and 0.29 mGy) reconstructed with FBP, ASIR 50, ASIR 100, and MBIR. A CAD model from each test protocol was superimposed onto the reference model using the 'Best Fit Alignment' function. Differences between the test and reference models were analyzed as maximum and mean deviations, and root-mean-square of the deviations, and color-coded models were obtained which demonstrated the location, magnitude and direction of the deviations. Based upon the magnitude, size, and distribution of areas of deviations, CAD models from the following protocols were comparable to the reference model: FBP/LD1; ASIR 50/LD1 and LD2; ASIR 100/LD1, LD2, and LD3; MBIR/LD1. The following protocols demonstrated deviations mostly between 1-2 mm or under 1 mm but over large areas, and so their effect on surgical guide accuracy is questionable: FBP/LD2; MBIR/LD2, LD3, LD4, and LD5. The following protocols demonstrated large deviations over large areas and therefore were not comparable to the reference model: FBP/LD3, LD4, and LD5; ASIR 50/LD3, LD4, and LD5; ASIR 100/LD4, and LD5. When MDCT is used for CAD models of the jaws, dose reductions of 86% may be possible with FBP, 91% with ASIR 50, and 97% with ASIR 100. Analysis of the stability and accuracy of CAD/CAM surgical guides as directly related to the jaws is needed to confirm the results.
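
    A sketch of the surface-comparison metrics reported above (maximum, mean and root-mean-square deviation), computed here as unsigned nearest-neighbour distances between two point clouds. Real CAD comparisons use signed distances after the best-fit alignment; the arrays below are synthetic.

        import numpy as np
        from scipy.spatial import cKDTree

        def surface_deviations(test_pts, ref_pts):
            """Max, mean and RMS nearest-neighbour deviation of test vs reference."""
            d, _ = cKDTree(ref_pts).query(test_pts)  # distance to closest ref point
            return d.max(), d.mean(), np.sqrt((d ** 2).mean())

        rng = np.random.default_rng(1)
        ref = rng.normal(size=(1000, 3))                      # reference surface points
        test = ref + rng.normal(scale=0.05, size=ref.shape)   # noisy test copy
        print(surface_deviations(test, ref))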

  3. Comparison of Predictive Modeling Methods of Aircraft Landing Speed

    NASA Technical Reports Server (NTRS)

    Diallo, Ousmane H.

    2012-01-01

    Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in accurately and precisely spacing aircraft landing at congested airports. Such tools will require an accurate landing-speed prediction to increase throughput while decreasing the controller interventions necessary for avoiding separation violations. There are many practical challenges to developing an accurate landing-speed model that has acceptable prediction errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate/model final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed prediction are used to build a multi-regression model of the response surface equation (RSE) type. Data obtained from operations of a major airline for a passenger transport aircraft type at the Dallas/Fort Worth International Airport are used to predict the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed prediction error by at least 18% from the standard deviation of the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to those most likely obtainable at other major airports, the RSE model shows little improvement over the existing methods. Consequently, a neural network that relies on a nonlinear regression technique is utilized as an alternative modeling approach. For the reduced-variable cases, the standard deviation of the neural network model's errors represents over a 5% reduction compared to the RSE model errors, and at least a 10% reduction relative to the baseline predicted landing-speed error standard deviation. Overall, the constructed models predict the landing speed more accurately and precisely than the current state of the art.
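
    A sketch of the response-surface idea: a second-order polynomial regression of landing speed on candidate predictors. All variable names and data are hypothetical; the study's airline data are not public.

        import numpy as np
        from sklearn.preprocessing import PolynomialFeatures
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(2)
        # Toy predictors: e.g. landing weight, headwind, gust, temperature.
        X = rng.normal(size=(500, 4))
        y = (135 + 3 * X[:, 0] - 2 * X[:, 1] + 0.5 * X[:, 2] ** 2
             + rng.normal(scale=0.5, size=500))           # knots, synthetic

        # RSE-style model: full second-order polynomial in the predictors.
        rse = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
        rse.fit(X, y)
        err = y - rse.predict(X)
        print("std dev of landing-speed prediction error:", err.std())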

  4. Offshore fatigue design turbulence

    NASA Astrophysics Data System (ADS)

    Larsen, Gunner C.

    2001-07-01

    Fatigue damage on wind turbines is mainly caused by stochastic loading originating from turbulence. While onshore sites display large differences in terrain topology, and thereby also in turbulence conditions, offshore sites are far more homogeneous, as the majority of them are likely to be associated with shallow water areas. However, despite this fact, specific recommendations on offshore turbulence intensities, applicable for fatigue design purposes, are lacking in the present IEC code. This article presents specific guidelines for such loading. These guidelines are based on the statistical analysis of a large number of wind data originating from two Danish shallow water offshore sites. The turbulence standard deviation depends on the mean wind speed, upstream conditions, measuring height and thermal convection. Defining a population of turbulence standard deviations, at a given measuring position, uniquely by the mean wind speed, variations in upstream conditions and atmospheric stability will appear as variability of the turbulence standard deviation. Distributions of such turbulence standard deviations, conditioned on the mean wind speed, are quantified by fitting the measured data to logarithmic Gaussian distributions. By combining a simple heuristic load model with the parametrized conditional probability density functions of the turbulence standard deviations, an empirical offshore design turbulence intensity is determined. For pure stochastic loading (as associated with standstill situations), the design turbulence intensity yields a fatigue damage equal to the average fatigue damage caused by the distributed turbulence intensity. If the stochastic loading is combined with a periodic deterministic loading (as in the normal operating situation), the proposed design turbulence intensity is shown to be conservative.
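
    To make the fitting-plus-fatigue-equivalence step concrete, a sketch that fits a lognormal to turbulence standard deviations within one mean-wind-speed bin and collapses it into a single design value via an m-th-moment (Wöhler exponent) weighting. This weighting is a common fatigue-equivalence device and is an assumption here, not necessarily the paper's exact recipe.

        import numpy as np

        def design_sigma(sigma_u, m=10.0):
            """Effective turbulence std dev from a sample within one wind-speed bin.

            Fits log(sigma_u) ~ Gaussian and returns the m-th-moment equivalent
            (E[sigma^m])**(1/m), with m a material Woehler exponent.
            """
            mu, s = np.log(sigma_u).mean(), np.log(sigma_u).std()
            # For a lognormal: (E[sigma^m])**(1/m) = exp(mu + 0.5*m*s**2)
            return np.exp(mu + 0.5 * m * s ** 2)

        rng = np.random.default_rng(3)
        sample = rng.lognormal(mean=np.log(1.2), sigma=0.25, size=2000)  # m/s
        print(design_sigma(sample))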

  5. Ranking and validation of spallation models for isotopic production cross sections of heavy residua

    NASA Astrophysics Data System (ADS)

    Sharma, Sushil K.; Kamys, Bogusław; Goldenbaum, Frank; Filges, Detlef

    2017-07-01

    The production cross sections of isotopically identified residual nuclei from spallation reactions induced by 136Xe projectiles at 500A MeV on a hydrogen target were analyzed in a two-step model. The first stage of the reaction was described by the INCL4.6 model of an intranuclear cascade of nucleon-nucleon and pion-nucleon collisions, whereas the second stage was analyzed by means of four different models: ABLA07, GEM2, GEMINI++ and SMM. The quality of the data description was judged quantitatively using two statistical deviation factors: the H-factor and the M-factor. It was found that the present analysis leads to a different ranking of models as compared to that obtained from the qualitative inspection of the data reproduction. The disagreement was caused by the sensitivity of the deviation factors to large statistical errors present in some of the data. A new deviation factor, the A-factor, was proposed that is not sensitive to the statistical errors of the cross sections. The quantitative ranking of models performed using the A-factor agreed well with the qualitative analysis of the data. It was concluded that using deviation factors weighted by statistical errors may lead to erroneous conclusions in cases where the data cover a large range of values. The quality of data reproduction by the theoretical models is discussed. Some systematic deviations of the theoretical predictions from the experimental results are observed.

  6. Linear maps preserving maximal deviation and the Jordan structure of quantum systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamhalter, Jan

    2012-12-15

    In the algebraic approach to quantum theory, a quantum observable is given by an element of a Jordan algebra and a state of the system is modelled by a normalized positive functional on the underlying algebra. Maximal deviation of a quantum observable is the largest statistical deviation one can obtain in a particular state of the system. The main result of the paper shows that each linear bijective transformation between JBW algebras preserving maximal deviations is formed by a Jordan isomorphism or a minus Jordan isomorphism perturbed by a linear functional multiple of an identity. It shows that only one numerical statistical characteristic has the power to determine the Jordan algebraic structure completely. As a consequence, we obtain that only very special maps can preserve the diameter of the spectra of elements. Nonlinear maps preserving the pseudometric given by maximal deviation are also described. The results generalize hitherto known theorems on preservers of maximal deviation in the case of self-adjoint parts of von Neumann algebras proved by Molnar.

  7. SU-E-J-32: Dosimetric Evaluation Based On Pre-Treatment Cone Beam CT for Spine Stereotactic Body Radiotherapy: Does Region of Interest Focus Matter?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magnelli, A; Xia, P

    2015-06-15

    Purpose: Spine stereotactic body radiotherapy requires very conformal dose distributions and precise delivery. Prior to treatment, a kV cone-beam CT (KV-CBCT) is registered to the planning CT to provide image-guided positional corrections, which depend on the selection of the region of interest (ROI) because of imperfect patient positioning and anatomical deformation. Our objective is to determine the dosimetric impact of ROI selections. Methods: Twelve patients were selected for this study, with treatment regions varying from C-spine to T-spine. For each patient, the KV-CBCT was registered to the planning CT three times using distinct ROIs: one encompassing the entire patient, a large ROI containing large bony anatomy, and a small target-focused ROI. Each registered CBCT volume, saved as an aligned dataset, was then sent to the planning system. The treated plan was applied to each dataset and the dose was recalculated. The tumor dose coverage (percentage of target volume receiving the prescription dose), maximum point dose to 0.03 cc of the spinal cord, and dose to 10% of the spinal cord volume (V10) for each alignment were compared to the original plan. Results: The average magnitude of tumor coverage deviation was 3.9%±5.8% with the external contour, 1.5%±1.1% with the large ROI, and 1.3%±1.1% with the small ROI. Spinal cord V10 deviation from plan was 6.6%±6.6% with the external contour, 3.5%±3.1% with the large ROI, and 1.2%±1.0% with the small ROI. Spinal cord maximum point dose deviation from plan was 12.2%±13.3% with the external contour, 8.5%±8.4% with the large ROI, and 3.7%±2.8% with the small ROI. Conclusion: A small ROI focused on the target results in the smallest deviation from the planned dose to target and cord, although rotations at large distances from the targets were observed. It is recommended that image fusion during CBCT focus narrowly on the target volume to minimize dosimetric error. Improvement in patient setups may further reduce residual errors.

  8. Thin Disk Accretion in the Magnetically-Arrested State

    NASA Astrophysics Data System (ADS)

    Avara, Mark J.; McKinney, Jonathan; Reynolds, Christopher S.

    2016-01-01

    Shakura-Sunyaev thin disk theory is fundamental to black hole astrophysics. Applications of the theory are widespread and provide powerful tools for explaining observations, such as Soltan's argument using quasar power, broadened iron line measurements, continuum fitting, and recently reverberation mapping; however, a significant large-scale magnetic field causes substantial deviations from standard thin disk behavior. We have used fully 3D general relativistic MHD simulations with cooling to explore the thin (H/R ~ 0.1) magnetically arrested disk (MAD) state and quantify these deviations. This work demonstrates that accumulation of large-scale magnetic flux into the MAD state is possible, and it extends prior numerical studies of thicker disks, allowing us to measure how jet power scales with the disk state and providing a natural explanation of phenomena like jet quenching in the high-soft state of X-ray binaries. We have also simulated thin MAD disks with a misaligned black hole spin axis in order to understand further deviations from thin disk theory that may significantly affect observations.

  9. Ground state properties of 3d metals from self-consistent GW approach

    DOE PAGES

    Kutepov, Andrey L.

    2017-10-06

    The self-consistent GW approach (scGW) has been applied to calculate the ground state properties (equilibrium Wigner–Seitz radius S_WZ and bulk modulus B) of the 3d transition metals Sc, Ti, V, Fe, Co, Ni, and Cu. The approach systematically underestimates S_WZ, with an average relative deviation from the experimental data of about 1%, and it overestimates the calculated bulk modulus with a relative error of about 25%. We show that scGW is superior in accuracy to the local density approximation but less accurate than the generalized gradient approach for the materials studied. Compared to the random phase approximation, scGW is slightly less accurate, but its error for 3d metals looks more systematic. Lastly, the systematic nature of the deviation from the experimental data suggests that the next order of the perturbation theory should allow one to reduce the error.

  11. A precision medicine approach for psychiatric disease based on repeated symptom scores.

    PubMed

    Fojo, Anthony T; Musliner, Katherine L; Zandi, Peter P; Zeger, Scott L

    2017-12-01

    For psychiatric diseases, rich information exists in the serial measurement of mental health symptom scores. We present a precision medicine framework for using the trajectories of multiple symptoms to make personalized predictions about future symptoms and related psychiatric events. Our approach fits a Bayesian hierarchical model that estimates a population-average trajectory for all symptoms and individual deviations from the average trajectory, then fits a second model that uses the individual symptom trajectories to estimate the risk of experiencing an event. The fitted models are used to make clinically relevant predictions for new individuals. We demonstrate this approach on data from a study of antipsychotic therapy for schizophrenia, predicting future scores for positive, negative, and general symptoms, and the risk of treatment failure, in 522 schizophrenic patients with observations over 8 weeks. While precision medicine has focused largely on genetic and molecular data, the complementary approach we present illustrates that innovative analytic methods for existing data can extend its reach more broadly. The systematic use of repeated measurements of psychiatric symptoms offers the promise of precision medicine in the field of mental health. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
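
    A drastically simplified, non-Bayesian sketch of the two-stage idea on synthetic data: a pooled population trajectory, per-subject slope deviations from it, and a logistic model mapping those deviations to event risk. The paper's full hierarchical model is far richer; every name and number below is illustrative.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(4)
        n_subj, n_weeks = 200, 8
        t = np.arange(n_weeks)

        # Toy symptom scores: population trend plus individual slope deviations.
        dev = rng.normal(scale=0.5, size=n_subj)
        scores = (30 - 1.5 * t + dev[:, None] * t
                  + rng.normal(scale=2, size=(n_subj, n_weeks)))

        # Stage 1: population-average trajectory (pooled least squares), then
        # each subject's deviation as the slope of their residuals over time.
        pop = np.polyfit(np.tile(t, n_subj), scores.ravel(), 1)
        resid = scores - np.polyval(pop, t)
        indiv_dev = np.array([np.polyfit(t, r, 1)[0] for r in resid])

        # Stage 2: event risk (e.g. treatment failure) from individual deviations.
        event = (rng.random(n_subj) < 1 / (1 + np.exp(-2 * indiv_dev))).astype(int)
        risk_model = LogisticRegression().fit(indiv_dev[:, None], event)
        print(risk_model.predict_proba(indiv_dev[:5, None])[:, 1])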

  12. Composition Dependence of the Hydrostatic Pressure Coefficients of the Bandgap of ZnSe(1-x)Te(x) Alloys

    NASA Technical Reports Server (NTRS)

    Wu, J.; Yu, K. M.; Walukiewicz, W.; Shan, W.; Ager, J. W., III; Haller, E. E.; Miotkowski, I.; Ramdas, A. K.; Su, Ching-Hua

    2003-01-01

    Optical absorption experiments have been performed using diamond anvil cells to measure the hydrostatic pressure dependence of the fundamental bandgap of ZnSe(1-x)Te(x) alloys over the entire composition range. The first- and second-order pressure coefficients are obtained as a function of composition. Starting from the ZnSe side, the magnitude of both coefficients increases slowly until x ≈ 0.7, where the ambient-pressure bandgap reaches a minimum. For larger values of x the coefficients rapidly approach the values of ZnTe. The large deviations of the pressure coefficients from the linear interpolation between ZnSe and ZnTe are explained in terms of the band anticrossing model.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Esterlis, I.; Nosarzewski, B.; Huang, E. W.

    The superconducting (SC) and charge-density-wave (CDW) susceptibilities of the two-dimensional Holstein model are computed using determinant quantum Monte Carlo and compared with results computed using the Migdal-Eliashberg (ME) approach. We access temperatures as low as 25 times less than the Fermi energy, E_F, which are still above the SC transition. We find that the SC susceptibility at low T agrees quantitatively with the ME theory up to a dimensionless electron-phonon coupling λ_0 ≈ 0.4 but deviates dramatically for larger λ_0. We find that for large λ_0 and small phonon frequency ω_0 << E_F, CDW ordering is favored and the preferred CDW ordering vector is uncorrelated with any obvious feature of the Fermi surface.

  14. Jitter model and signal processing techniques for pulse width modulation optical recording

    NASA Technical Reports Server (NTRS)

    Liu, Max M.-K.

    1991-01-01

    A jitter model and signal processing techniques are discussed for data recovery in Pulse Width Modulation (PWM) optical recording. In PWM, information is stored by modulating the sizes of sequential marks alternating in magnetic polarization or in material structure. Jitter, defined as the deviation from the original mark size in the time domain, will result in detection errors if it is excessively large. A new approach is taken in data recovery by first using a high-speed counter clock to convert time marks to amplitude marks, and signal processing techniques are then used to minimize jitter according to the jitter model. The signal processing techniques include motor speed and intersymbol interference equalization, differential and additive detection, and differential and additive modulation.

  15. Cosmological implications of a large complete quasar sample

    PubMed Central

    Segal, I. E.; Nicoll, J. F.

    1998-01-01

    Objective and reproducible determinations of the probabilistic significance levels of the deviations between theoretical cosmological prediction and direct model-independent observation are made for the Large Bright Quasar Sample [Foltz, C., Chaffee, F. H., Hewett, P. C., MacAlpine, G. M., Turnshek, D. A., et al. (1987) Astron. J. 94, 1423–1460]. The Expanding Universe model as represented by the Friedman–Lemaitre cosmology with parameters qo = 0, Λ = 0 denoted as C1 and chronometric cosmology (no relevant adjustable parameters) denoted as C2 are the cosmologies considered. The mean and the dispersion of the apparent magnitudes and the slope of the apparent magnitude–redshift relation are the directly observed statistics predicted. The C1 predictions of these cosmology-independent quantities are deviant by as much as 11σ from direct observation; none of the C2 predictions deviate by >2σ. The C1 deviations may be reconciled with theory by the hypothesis of quasar “evolution,” which, however, appears incapable of being substantiated through direct observation. The excellent quantitative agreement of the C1 deviations with those predicted by C2 without adjustable parameters for the results of analysis predicated on C1 indicates that the evolution hypothesis may well be a theoretical artifact. PMID:9560182

  16. SU-F-T-564: 3-Year Experience of Treatment Plan Quality Assurance for Vero SBRT Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Su, Z; Li, Z; Mamalui, M

    2016-06-15

    Purpose: To verify treatment plan monitor units from the iPlan treatment planning system for Vero stereotactic body radiotherapy (SBRT) treatments, using both software-based and (homogeneous and heterogeneous) phantom-based approaches. Methods: Dynamic conformal arcs (DCA) were used for SBRT treatment of oligometastasis patients on the Vero linear accelerator. For each plan, the Monte Carlo-calculated monitor units (prescribed dose to water with 1% variance) are first verified by RadCalc software with a 3% difference threshold. For differences beyond 3%, treatment plans were copied onto a (homogeneous) Scanditronix phantom for non-lung patients or a (heterogeneous) CIRS phantom for lung patients, and the corresponding plan dose was measured using a CC01 ion chamber. The difference between the planned and measured dose was recorded. Over the past 3 years, we have treated 180 patients with 315 targets. Of these, 99 targets' treatment plan RadCalc calculations exceeded the 3% threshold, and phantom-based measurements were performed for 26 plans using the Scanditronix phantom and 73 plans using the CIRS phantom. The mean and standard deviation of the dose differences were obtained and are presented. Results: For all patient RadCalc calculations, the mean dose difference is 0.76% with a standard deviation of 5.97%. For non-lung patient plan Scanditronix phantom measurements, the mean dose difference is 0.54% with a standard deviation of 2.53%; for lung patient plan CIRS phantom measurements, the mean dose difference is −0.04% with a standard deviation of 1.09%. The maximum dose difference is 3.47% for Scanditronix phantom measurements and 3.08% for CIRS phantom measurements. Conclusion: Limitations in secondary MU check software lead to perceived large dose discrepancies for some lung patient SBRT treatment plans. Homogeneous and heterogeneous phantoms were used in plan quality assurance for non-lung and lung patients, respectively. Phantom-based QA showed relatively good agreement between the iPlan-calculated dose and the measured dose.

  17. Seismic velocity deviation log: An effective method for evaluating spatial distribution of reservoir pore types

    NASA Astrophysics Data System (ADS)

    Shirmohamadi, Mohamad; Kadkhodaie, Ali; Rahimpour-Bonab, Hossain; Faraji, Mohammad Ali

    2017-04-01

    Velocity deviation log (VDL) is a synthetic log used to determine pore types in reservoir rocks, based on a combination of the sonic log with neutron-density logs. The current study proposes a two-step approach to create a map of porosity and pore types by integrating the results of petrographic studies, well logs and seismic data. In the first step, a velocity deviation log was created from the combination of the sonic log with the neutron-density log. The results allowed identifying negative, zero and positive deviations in the created synthetic velocity log. Negative velocity deviations (below -500 m/s) indicate connected or interconnected pores and fractures, while positive deviations (above +500 m/s) are related to isolated pores. Zero deviations in the range [-500 m/s, +500 m/s] are in good agreement with intercrystalline pores and microporosity. The results of petrographic studies were used to validate the main pore type derived from the velocity deviation log. In the next step, the velocity deviation log was estimated from seismic data by using a probabilistic neural network model. For this purpose, the inverted acoustic impedance along with amplitude-based seismic attributes were related to the VDL. The methodology is illustrated by a case study from the Hendijan oilfield, northwestern Persian Gulf. The results of this study show that the integration of petrographic studies, well logs and seismic attributes is an effective way to understand the spatial distribution of the main reservoir pore types.
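
    A sketch of how such a velocity deviation log can be assembled from the logs named above, using the Wyllie time-average for the synthetic velocity and the ±500 m/s thresholds from the study; the matrix and fluid velocities, unit conventions and sample values are assumptions.

        import numpy as np

        V_MATRIX, V_FLUID = 6400.0, 1600.0   # m/s, assumed calcite matrix and brine

        def velocity_deviation_log(v_sonic, phi_nd):
            """VDL = measured sonic velocity - synthetic velocity from porosity.

            v_sonic: measured velocity from the sonic log (m/s)
            phi_nd:  porosity from the neutron-density combination (fraction)
            """
            # Wyllie time-average: 1/v = phi/v_fluid + (1 - phi)/v_matrix
            v_synth = 1.0 / (phi_nd / V_FLUID + (1.0 - phi_nd) / V_MATRIX)
            return v_sonic - v_synth

        def pore_type(vdl):
            """Classify with the +/-500 m/s thresholds used in the study."""
            return np.select([vdl < -500, vdl > 500],
                             ["connected/fracture", "isolated"],
                             default="intercrystalline/micro")

        vdl = velocity_deviation_log(np.array([4800.0, 5600.0]),
                                     np.array([0.15, 0.10]))
        print(vdl, pore_type(vdl))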

  18. Modeling of roll/pitch determination with horizon sensors - Oblate Earth

    NASA Astrophysics Data System (ADS)

    Hablani, Hari B.

    Model calculations are presented of roll/pitch determination with horizon sensors for an oblate Earth. Two arrangements of a pair of horizon sensors are considered: left and right of the velocity vector (i.e., along the pitch axis), and aft and forward (along the roll axis). Two approaches are used to obtain the roll/pitch oblateness corrections: (1) the crossing-point approach, where the two crossings of the horizon sensor's scan and the earth's horizon are determined, and (2) decomposing the angular deviation of the geocentric normal from the geodetic normal into roll and pitch components. It is shown that the two approaches yield essentially the same corrections if two sensors are used simultaneously. However, if the spacecraft is outfitted with only one sensor, the oblateness correction about one axis is far different from that predicted by the geocentric/geodetic angular deviation approach. In this case, the corrections may be calculated on the ground for the sensor location under consideration and stored in the flight computer, using the crossing-point approach.

  19. Analog track angle error displays improve simulated GPS approach performance

    DOT National Transportation Integrated Search

    1996-01-01

    Pilots flying non-precision instrument approaches traditionally rely on a course deviation indicator (CDI) analog display of cross track error (XTE) information. The new generation of GPS based area navigation (RNAV) receivers can also compute accura...

  20. Rare events in networks with internal and external noise

    NASA Astrophysics Data System (ADS)

    Hindes, J.; Schwartz, I. B.

    2017-12-01

    We study rare events in networks with both internal and external noise, and develop a general formalism for analyzing rare events that combines pair-quenched techniques and large-deviation theory. The probability distribution, shape, and time scale of rare events are considered in detail for extinction in the Susceptible-Infected-Susceptible model as an illustration. We find that when both types of noise are present, there is a crossover region as the network size is increased, where the probability exponent for large deviations no longer increases linearly with the network size. We demonstrate that the form of the crossover depends on whether the endemic state is localized near the epidemic threshold or not.

  1. Estimating daily climatologies for climate indices derived from climate model data and observations

    PubMed Central

    Mahlstein, Irina; Spirig, Christoph; Liniger, Mark A; Appenzeller, Christof

    2015-01-01

    Climate indices help to describe the past, present, and future climate. They are usually more closely related to possible impacts and are therefore more illustrative to users than simple climate means. Indices are often based on daily data series and thresholds. It is shown that percentile-based thresholds are sensitive to the method of computation, and so are the climatological daily mean and the daily standard deviation, which are used for bias corrections of daily climate model data. Sample size issues of either the observed reference period or the model data lead to uncertainties in these estimations. A large number of past ensemble seasonal forecasts, called hindcasts, is used to explore these sampling uncertainties and to compare two different approaches. Based on a perfect model approach, it is shown that a fitting approach can substantially improve the estimates of daily climatologies of percentile-based thresholds over land areas, as well as of the mean and the variability. These improvements are relevant for bias removal in long-range forecasts or predictions of climate indices based on percentile thresholds. The method also shows potential for use in climate change studies. Key points: more robust estimates of daily climate characteristics; statistical fitting approach; based on a perfect model approach. PMID:26042192
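
    A sketch contrasting the two estimators discussed here on synthetic daily data: the empirical percentile versus the percentile implied by a fitted Gaussian (the "fitting approach"). With small samples the fitted threshold is typically more stable; the distribution and sample sizes are arbitrary choices.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(5)
        truth = norm(loc=12.0, scale=4.0)     # "true" climate of one calendar day
        q = 0.90                               # percentile-based index threshold

        emp, fit = [], []
        for _ in range(2000):
            sample = truth.rvs(30, random_state=rng)  # 30 years of observations
            emp.append(np.quantile(sample, q))         # empirical percentile
            fit.append(norm(sample.mean(), sample.std()).ppf(q))  # fitted one

        print("empirical: bias %.3f sd %.3f"
              % (np.mean(emp) - truth.ppf(q), np.std(emp)))
        print("fitted:    bias %.3f sd %.3f"
              % (np.mean(fit) - truth.ppf(q), np.std(fit)))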

  2. Diagnostic accuracy of referral criteria for head circumference to detect hydrocephalus in the first year of life.

    PubMed

    van Dommelen, Paula; Deurloo, Jacqueline A; Gooskens, Rob H; Verkerk, Paul H

    2015-04-01

    Increased head circumference is often the first and main sign leading to the diagnosis of hydrocephalus. Our aim is to investigate the diagnostic accuracy of referral criteria for head circumference to detect hydrocephalus in the first year of life. A reference group with longitudinal head circumference data (n = 1938) was obtained from the Social Medical Survey of Children Attending Child Health Clinics study. The case group comprised infants with hydrocephalus treated in a tertiary pediatric hospital who had not already been detected during pregnancy (n = 125). Head circumference data were available for 43 patients. Head circumference data were standardized according to gestational age-specific references. Sensitivity and specificity of a very large head circumference (>2.5 standard deviations on the growth chart) were, respectively, 72.1% (95% confidence interval [CI]: 56.3-84.7) and 97.1% (95% CI: 96.2-97.8). These figures were, respectively, 74.4% (95% CI: 58.8-86.5) and 93.0% (95% CI: 91.8-94.1) for a large head circumference (>2.0 standard deviations), and 76.7% (95% CI: 61.4-88.2) and 96.5% (95% CI: 95.6-97.3) for a very large head circumference and/or very large (>2.5 standard deviations) progressive growth of head circumference. A very large head circumference and/or very large progressive growth of head circumference shows the best diagnostic accuracy for detecting hydrocephalus at an early stage. Gestational age-specific growth charts are recommended. Further improvements may be possible by taking into account parental head circumference. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. High-precision simulation of the height distribution for the KPZ equation

    NASA Astrophysics Data System (ADS)

    Hartmann, Alexander K.; Le Doussal, Pierre; Majumdar, Satya N.; Rosso, Alberto; Schehr, Gregory

    2018-03-01

    The one-point distribution of the height for the continuum Kardar-Parisi-Zhang (KPZ) equation is determined numerically using the mapping to the directed polymer in a random potential at high temperature. Using an importance sampling approach, the distribution is obtained over a large range of values, down to a probability density as small as 10^-1000 in the tails. Both short and long times are investigated and compared with recent analytical predictions for the large-deviation forms of the probability of rare fluctuations. At short times the agreement with the analytical expression is spectacular. We observe that the far left and right tails, with exponents 5/2 and 3/2, respectively, are preserved also in the region of long times. We present some evidence for the predicted non-trivial crossover in the left tail from the 5/2 tail exponent to the cubic tail of the Tracy-Widom distribution, although the details of the full scaling form remain beyond reach.

  4. Large-angle slewing maneuvers for flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Chun, Hon M.; Turner, James D.

    1988-01-01

    A new class of closed-form solutions for finite-time linear-quadratic optimal control problems is presented. The solutions involve Potter's solution for the differential matrix Riccati equation, which assumes the form of a steady-state plus transient term. Illustrative examples are presented which show that the new solutions are more computationally efficient than alternative solutions based on the state transition matrix. As an application of the closed-form solutions, the neighboring extremal path problem is presented for a spacecraft retargeting maneuver where a perturbed plant with off-nominal boundary conditions now follows a neighboring optimal trajectory. The perturbation feedback approach is further applied to three-dimensional slewing maneuvers of large flexible spacecraft. For this problem, the nominal solution is the optimal three-dimensional rigid body slew. The perturbation feedback then limits the deviations from this nominal solution due to the flexible body effects. The use of frequency shaping in both the nominal and perturbation feedback formulations reduces the excitation of high-frequency unmodeled modes. A modified Kalman filter is presented for estimating the plant states.

  5. Comparison of methods for determining hydraulic conductivity (Methodenvergleich zur Bestimmung der hydraulischen Durchlässigkeit)

    NASA Astrophysics Data System (ADS)

    Storz, Katharina; Steger, Hagen; Wagner, Valentin; Bayer, Peter; Blum, Philipp

    2017-06-01

    Knowing the hydraulic conductivity (K) is a precondition for understanding groundwater flow processes in the subsurface. Numerous laboratory and field methods for the determination of hydraulic conductivity exist, which can lead to significantly different results. In order to quantify the variability among these methods, the hydraulic conductivity of an industrial silica sand (Dorsilit) was examined using four different methods: (1) grain-size analysis, (2) the Kozeny-Carman approach, (3) permeameter tests and (4) flow rate experiments in large-scale tank experiments. Due to the large volume of the artificially built aquifer, the tank experiment results are assumed to be the most representative. Hydraulic conductivity values derived from permeameter tests show only minor deviation, while results of the empirically evaluated grain-size analysis are about one order of magnitude higher and show great variance. The latter was confirmed by an analysis of several methods for the determination of K-values found in the literature; thus we generally question the suitability of grain-size analyses and strongly recommend the use of permeameter tests.
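
    A sketch of the Kozeny-Carman estimate (approach 2 above) in its common form; the 1/180 prefactor, the use of an effective grain diameter such as d10, and the fluid properties are conventional assumptions rather than the paper's exact parameterization.

        def kozeny_carman_K(d_eff, porosity, rho=998.0, g=9.81, mu=1.0e-3):
            """Hydraulic conductivity K (m/s) from effective grain size d_eff (m).

            k = d_eff**2 / 180 * n**3 / (1 - n)**2   (intrinsic permeability, m^2)
            K = k * rho * g / mu                     (hydraulic conductivity, m/s)
            """
            n = porosity
            k = d_eff ** 2 / 180.0 * n ** 3 / (1.0 - n) ** 2
            return k * rho * g / mu

        # Medium quartz sand, d10 ~ 0.3 mm, porosity ~ 0.35:
        print(kozeny_carman_K(3e-4, 0.35))   # roughly 5e-4 m/s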

  6. Constructing Ozone Profile Climatologies with Self-Organizing Maps: Illustrations with CONUS Ozonesonde Data

    NASA Astrophysics Data System (ADS)

    Thompson, A. M.; Stauffer, R. M.; Young, G. S.

    2015-12-01

    Ozone (O3) trends analysis is typically performed with monthly or seasonal averages. Although this approach works well for stratospheric or total O3, uncertainties in tropospheric O3 amounts may be large due to rapid meteorological changes near the tropopause and in the lower free troposphere (LFT), where pollution has a days-to-weeks lifetime. We use self-organizing maps (SOM), a clustering technique, as an alternative for creating tropospheric climatologies from O3 soundings. In a previous study of 900 tropical ozonesondes, clusters representing >40% of profiles deviated >1-sigma from mean O3. Here SOM are based on 15 years of data from four sites in the contiguous US (CONUS; Boulder, CO; Huntsville, AL; Trinidad Head, CA; Wallops Island, VA). Ozone profiles from 2-12 km are used to evaluate the impact of tropopause variability on climatology; 2-6 km O3 profile segments are used for the LFT. Near-tropopause O3 is twice the mean O3 mixing ratio in three clusters of 2-12 km O3, representing >15% of profiles at each site. Large mid- and lower-tropospheric O3 deviations from monthly means are found in clusters of both 2-12 and 2-6 km O3. Positive offsets result from pollution and stratosphere-to-troposphere exchange. In the LFT the lowest tropospheric O3 is associated with subtropical air. Some clusters include profiles with common seasonality, but other factors, e.g., tropopause height or LFT column amount, characterize other SOM nodes. Thus, as for tropical profiles, CONUS O3 averages can be a poor choice for a climatology.
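
    A sketch of SOM clustering of profile data, assuming the third-party minisom package; the profiles array is a hypothetical stand-in for O3 mixing ratios interpolated to a common 2-12 km grid, and the map size and training length are arbitrary.

        import numpy as np
        from minisom import MiniSom   # third-party package, assumed installed

        # Hypothetical (n_profiles, n_levels) array of ozone mixing ratios
        # interpolated to a common 2-12 km altitude grid.
        rng = np.random.default_rng(6)
        profiles = rng.lognormal(mean=4.0, sigma=0.3, size=(500, 40))

        # Standardize each level so no altitude dominates the distance metric.
        z = (profiles - profiles.mean(axis=0)) / profiles.std(axis=0)

        som = MiniSom(3, 3, z.shape[1], sigma=1.0, learning_rate=0.5,
                      random_seed=0)
        som.random_weights_init(z)
        som.train_random(z, 5000)

        # Cluster membership: the winning node of each profile.
        nodes = [som.winner(p) for p in z]
        for node in sorted(set(nodes)):
            count = sum(1 for w in nodes if w == node)
            print(node, count, "profiles")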

  7. Operator product expansion in Liouville field theory and Seiberg-type transitions in log-correlated random energy models

    NASA Astrophysics Data System (ADS)

    Cao, Xiangyu; Le Doussal, Pierre; Rosso, Alberto; Santachiara, Raoul

    2018-04-01

    We study transitions in log-correlated random energy models (logREMs) that are related to the violation of a Seiberg bound in Liouville field theory (LFT): the binding transition and the termination point transition (a.k.a., pre-freezing). By means of LFT-logREM mapping, replica symmetry breaking and traveling-wave equation techniques, we unify both transitions in a two-parameter diagram, which describes the free-energy large deviations of logREMs with a deterministic background log potential, or equivalently, the joint moments of the free energy and Gibbs measure in logREMs without background potential. Under the LFT-logREM mapping, the transitions correspond to the competition of discrete and continuous terms in a four-point correlation function. Our results provide a statistical interpretation of a peculiar nonlocality of the operator product expansion in LFT. The results are rederived by a traveling-wave equation calculation, which shows that the features of LFT responsible for the transitions are reproduced in a simple model of diffusion with absorption. We examine also the problem by a replica symmetry breaking analysis. It complements the previous methods and reveals a rich large deviation structure of the free energy of logREMs with a deterministic background log potential. Many results are verified in the integrable circular logREM, by a replica-Coulomb gas integral approach. The related problem of common length (overlap) distribution is also considered. We provide a traveling-wave equation derivation of the LFT predictions announced in a precedent work.

  8. Passive PE Sampling in Support of In Situ Remediation of Contaminated Sediments

    DTIC Science & Technology

    2015-08-01

    ...sediments from 2 stations, each at 4 PCB spike levels, for four individual congeners was 22 ± 6% relative standard deviation (RSD). Also, comparison of... RSD (Table 3). However, larger congeners (e.g., congeners #153 and 180) whose approach to equilibrium is less certain, based on small fractions of...

  9. 75 FR 49491 - Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-13

    ... costs or any other approach that deviates from the incentive-based (or projected-`cost') approach... providing VRS-- should, in theory, apply equally to reliance on projected cost data in VRS rate setting...

  10. Generalized Cahn-Hilliard equation for solutions with drastically different diffusion coefficients. Application to exsolution in ternary feldspar

    NASA Astrophysics Data System (ADS)

    Petrishcheva, E.; Abart, R.

    2012-04-01

    We address mathematical modeling and computer simulation of phase decomposition in a multicomponent system. As opposed to binary alloys with one common diffusion parameter, our main concern is phase decomposition in real geological systems under the influence of strongly different interdiffusion coefficients, as is frequently encountered in mineral solid solutions with coupled diffusion on different sub-lattices. Our goal is to explain deviations from equilibrium element partitioning which are often observed in nature, e.g., in a cooled ternary feldspar. To this end we first adapt the standard Cahn-Hilliard model to the multicomponent diffusion problem and account for arbitrary diffusion coefficients. This is done by using Onsager's approach, such that the flux of each component results from the combined action of the chemical potentials of all components. In a second step the generalized Cahn-Hilliard equation is solved numerically using a finite-element approach. We introduce and investigate several decomposition scenarios that may produce systematic deviations from the equilibrium element partitioning. Both ideal solutions and ternary feldspar are considered. Typically, the slowest component is initially "frozen" and the decomposition effectively takes place only for the two "fast" components. At this stage deviations from the equilibrium element partitioning are indeed observed. These deviations may become "frozen" under conditions of cooling. The final equilibration of the system occurs on a considerably slower time scale; therefore the system may indeed remain incompletely equilibrated at the time of observation. Our approach reveals the intrinsic reasons for the specific phase separation path and rigorously describes it by direct numerical solution of the generalized Cahn-Hilliard equation.
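
    A sketch of the numerical core in a deliberately reduced setting: the plain binary Cahn-Hilliard equation in 1D with an explicit finite-difference step. The paper's multicomponent Onsager fluxes and finite-element discretization are beyond a snippet; all parameter values are arbitrary.

        import numpy as np

        # 1D binary Cahn-Hilliard: dc/dt = lap(mu), mu = c**3 - c - kappa*lap(c)
        n, dx, dt, kappa = 256, 1.0, 0.01, 1.0
        rng = np.random.default_rng(7)
        c = 0.05 * rng.standard_normal(n)        # small fluctuation around c = 0

        def lap(f):
            # Second difference with periodic boundaries.
            return (np.roll(f, 1) + np.roll(f, -1) - 2 * f) / dx ** 2

        for step in range(20000):
            mu = c ** 3 - c - kappa * lap(c)     # chemical potential
            c += dt * lap(mu)                    # explicit Euler update

        # After coarsening, the domain splits into c > 0 and c < 0 phases.
        print("phase fractions:", (c > 0).mean(), (c < 0).mean())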

  11. Parity-time symmetry breaking in magnetic systems

    DOE PAGES

    Galda, Alexey; Vinokur, Valerii M.

    2016-07-14

    The understanding of out-of-equilibrium physics, especially dynamic instabilities and dynamic phase transitions, is one of the major challenges of contemporary science, spanning the broadest wealth of research areas that range from quantum optics to living organisms. By focusing on nonequilibrium dynamics of an open dissipative spin system, we introduce a non-Hermitian Hamiltonian approach, in which non-Hermiticity reflects dissipation and deviation from equilibrium. The imaginary part of the proposed spin Hamiltonian describes the effects of Gilbert damping and applied Slonczewski spin-transfer torque. In the classical limit, our approach reproduces Landau-Lifshitz-Gilbert-Slonczewski dynamics of a large macrospin. Here, we reveal the spin-transfer-torque-driven parity-time symmetry-breaking phase transition corresponding to a transition from precessional to exponentially damped spin dynamics. Micromagnetic simulations for nanoscale ferromagnetic disks demonstrate the predicted effect. These findings can pave the way to a general quantitative description of out-of-equilibrium phase transitions driven by spontaneous parity-time symmetry breaking.

  12. Evaluation of an index of biotic integrity approach used to assess biological condition in western U.S. streams and rivers at varying spatial scales

    USGS Publications Warehouse

    Meador, M.R.; Whittier, T.R.; Goldstein, R.M.; Hughes, R.M.; Peck, D.V.

    2008-01-01

    Consistent assessments of biological condition are needed across multiple ecoregions to provide a greater understanding of the spatial extent of environmental degradation. However, consistent assessments at large geographic scales are often hampered by lack of uniformity in data collection, analyses, and interpretation. The index of biotic integrity (IBI) has been widely used in eastern and central North America, where fish assemblages are complex and largely composed of native species, but IBI development has been hindered in the western United States because of relatively low fish species richness and greater relative abundance of alien fishes. Approaches to developing IBIs rarely provide a consistent means of assessing biological condition across multiple ecoregions. We conducted an evaluation of IBIs recently proposed for three ecoregions of the western United States using an independent data set covering a large geographic scale. We standardized the regional IBIs and developed biological condition criteria, assessed the responsiveness of IBIs to basin-level land uses, and assessed their precision and concordance with basin-scale IBIs. Standardized IBI scores from 318 sites in the western United States comprising mountain, plains, and xeric ecoregions were significantly related to combined urban and agricultural land uses. Standard deviations and coefficients of variation revealed relatively low variation in IBI scores based on multiple sampling reaches at sites. A relatively high degree of corroboration with independent, locally developed IBIs indicates that the regional IBIs are robust across large geographic scales, providing precise and accurate assessments of biological condition for western U.S. streams. © 2008 by the American Fisheries Society.

  13. In-Situ monitoring and modeling of metal additive manufacturing powder bed fusion

    NASA Astrophysics Data System (ADS)

    Alldredge, Jacob; Slotwinski, John; Storck, Steven; Kim, Sam; Goldberg, Arnold; Montalbano, Timothy

    2018-04-01

    One of the major challenges in metal additive manufacturing is developing in-situ sensing and feedback control capabilities to eliminate build errors and allow qualified part creation without the need for costly and destructive external testing. Previously, many groups have focused on high-fidelity numerical modeling and true-temperature thermal imaging systems. These approaches require large computational resources or costly hardware that requires complex calibration and is difficult to integrate into commercial systems. In addition, because the state of the material and its surface properties change rapidly, obtaining true temperature is complicated and difficult. Here, we describe a different approach in which we implement a low-cost thermal imaging solution providing relative temperature measurements sufficient for detecting unwanted process variability. We match this with a faster-than-real-time qualitative model that allows the process to be modeled rapidly during the build. The aim is to combine the two, allowing anomalies to be detected in real time so that corrective action can be taken, or parts stopped immediately after the error, saving material and time. Here we describe our sensor setup, its cost, and its capabilities. We also show the ability to detect unwanted process deviations in real time, and that the output of our high-speed model agrees qualitatively with experimental results. These results lay the groundwork for our vision of an integrated feedback and control scheme that combines low-cost, easy-to-use sensors and fast modeling for process deviation monitoring.
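
    As a rough illustration of the kind of process-deviation test such a sensor stream enables, here is a minimal rolling-statistics sketch in Python; the window length, threshold k, and the function name flag_deviations are hypothetical choices, not details from the paper.

        import numpy as np

        def flag_deviations(signal, window=50, k=4.0):
            """Indices where the signal departs more than k sigma from its
            trailing rolling mean (a simple relative-temperature variability test)."""
            flags = []
            for i in range(window, len(signal)):
                ref = signal[i - window:i]
                mu, sigma = ref.mean(), ref.std()
                if sigma > 0 and abs(signal[i] - mu) > k * sigma:
                    flags.append(i)
            return flags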

  14. Particle Orbit Analysis in the Finite Beta Plasma of the Large Helical Device using Real Coordinates

    NASA Astrophysics Data System (ADS)

    Seki, Ryousuke; Matsumoto, Yutaka; Suzuki, Yasuhiro; Watanabe, Kiyomasa; Itagaki, Masafumi

    High-energy particles in a finite beta plasma of the Large Helical Device (LHD) are numerically traced in a real coordinate system. We investigate particle orbits by changing the beta value and/or the magnetic field strength. No significant difference is found in the particle orbit classifications between the vacuum magnetic field and the finite beta plasma cases. The deviation of a banana orbit from the flux surfaces strongly depends on the beta value, although the deviation of the orbit of a passing particle is independent of the beta value. In addition, the deviation of the orbit of the passing particle, rather than that of the banana-orbit particles, depends on the magnetic field strength. We also examine the effect of re-entering particles, which repeatedly pass in and out of the last closed flux surface, in the finite beta plasma of the LHD. It is found that the number of re-entering particles in the finite beta plasma is larger than that in the vacuum magnetic field. As a result, the role of re-entering particles in the finite beta plasma of the LHD is more important than that in the vacuum magnetic field, and the effect of the charge-exchange reaction on particle confinement in the finite beta plasma is large.

  15. Not a Copernican observer: biased peculiar velocity statistics in the local Universe

    NASA Astrophysics Data System (ADS)

    Hellwing, Wojciech A.; Nusser, Adi; Feix, Martin; Bilicki, Maciej

    2017-05-01

    We assess the effect of the local large-scale structure on the estimation of two-point statistics of the observed radial peculiar velocities of galaxies. A large N-body simulation is used to examine these statistics from the perspective of random observers as well as 'Local Group-like' observers conditioned to reside in an environment resembling the observed Universe within 20 Mpc. The local environment systematically distorts the shape and amplitude of velocity statistics with respect to ensemble-averaged measurements made by a Copernican (random) observer. The Virgo cluster has the most significant impact, introducing large systematic deviations in all the statistics. For a simple 'top-hat' selection function, an idealized survey extending to ~160 h^-1 Mpc or deeper is needed to completely mitigate the effects of the local environment. Using shallower catalogues leads to systematic deviations of the order of 50-200 per cent depending on the scale considered. For a flat redshift distribution similar to that of the CosmicFlows-3 survey, the deviations are even more prominent in both shape and amplitude at all separations considered (≲100 h^-1 Mpc). Conclusions based on statistics calculated without taking into account the impact of the local environment should be revisited.

  16. Large incidence angle and defocus influence cat's eye retro-reflector

    NASA Astrophysics Data System (ADS)

    Zhang, Lai-xian; Sun, Hua-yan; Zhao, Yan-zhong; Yang, Ji-guang; Zheng, Yong-hui

    2014-11-01

    A cat's eye lens retro-reflects a laser beam exactly opposite to the direction of the incident beam, the so-called cat's eye effect, which makes rapid acquisition, tracking, and pointing for free-space optical communication possible. Studying the influence of the cat's eye effect on a cat's eye retro-reflector at large incidence angles is therefore useful. This paper analyzes, using a geometrical-optics method, how the incidence angle and focal shift affect the effective receiving area, the retro-reflected beam divergence angle, the central deviation of the cat's eye retro-reflector at large incidence angles, and the cat's eye effect factor, and presents the corresponding analytic expressions. Finally, numerical simulation was performed to verify the correctness of the analysis. The results show that the effective receiving area of the cat's eye retro-reflector is mainly affected by the incidence angle when the focal shift is positive, and decreases rapidly as the incidence angle increases. The retro-reflected beam divergence and central deviation are mainly affected by the focal shift; within the effective receiving area, the central deviation is smaller than the beam divergence most of the time, which means the incident beam can be received and retro-reflected to the other terminal most of the time. The cat's eye effect factor gain is affected by both the incidence angle and the focal shift.

  17. Analysis of change orders in geotechnical engineering work at INDOT.

    DOT National Transportation Integrated Search

    2011-01-01

    Change orders represent a real and often extremely large cost to the State and to taxpayers, because contractors tend to charge very large amounts for any additional work that deviates from the work that was originally planned. Therefore, ef...

  18. Method of surface error visualization using laser 3D projection technology

    NASA Astrophysics Data System (ADS)

    Guo, Lili; Li, Lijuan; Lin, Xuezhu

    2017-10-01

    In the manufacture of large components for the aerospace, automotive, and shipping industries, important molds and stamped metal plates require precise surface forming, which usually needs to be verified and, if necessary, corrected and reprocessed. To make correction of the machined surface more convenient, this paper proposes a method based on a laser 3D projection system that uses terrain-style contour lines to display, directly on the measured surface, the deviation between the actually measured data and the theoretical mathematical (CAD) model. First, the machined surface is measured to obtain point cloud data, from which a triangular mesh is formed. Second, through coordinate transformation, the point cloud data are unified with the theoretical model and the three-dimensional deviation is calculated; according to the sign (positive or negative) and magnitude of the deviation, color deviation bands are used to denote the three-dimensional deviation. Then, three-dimensional contour lines are drawn to represent each deviation band, creating the projection files. Finally, the projection files are imported into the laser projector and the contour lines are projected at 1:1 scale onto the machined surface in the form of a laser beam; comparing the full-color 3D deviation map with the projected graph allows deviations to be located and quantitatively corrected to meet the machining precision requirements. The method clearly displays the trend of the machined-surface deviation.
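
    A compressed sketch of the banding step is shown below (Python); the arrays, band edges, and the assumption that measured points are already registered to matching CAD points with outward normals are hypothetical simplifications of the pipeline described.

        import numpy as np

        def deviation_bands(measured, cad_points, cad_normals, edges):
            """Signed deviation of each measured point along the local CAD normal,
            digitized into contour bands delimited by `edges` (e.g. in mm)."""
            d = np.einsum('ij,ij->i', measured - cad_points, cad_normals)  # signed distances
            return d, np.digitize(d, edges)                                # band index per point

        edges = np.array([-0.5, -0.2, 0.2, 0.5])   # five color bands around the tolerance window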

  19. In vivo dosimetry for external photon treatments of head and neck cancers by diodes and TLDS.

    PubMed

    Tung, C J; Wang, H C; Lo, S H; Wu, J M; Wang, C J

    2004-01-01

    In vivo dosimetry was implemented for treatments of head and neck cancers in large fields. Diode and thermoluminescence dosemeter (TLD) measurements were carried out on linear accelerators with 6 MV photon beams. ESTRO in vivo dosimetry protocols were followed in determining midline doses from measurements of entrance and exit doses. Of the fields monitored by diodes, the maximum absolute deviation of measured midline doses from planned target doses was 8%, with a mean of -1.0% and a standard deviation of 2.7%. If planned target doses were calculated using radiological water-equivalent thicknesses rather than patient geometric thicknesses, the maximum absolute deviation dropped to 4%, with a mean of 0.7% and a standard deviation of 1.8%. For in vivo dosimetry monitored by TLDs, the shift in mean dose remained small but the statistical precision was poorer.

  20. Visual space under free viewing conditions.

    PubMed

    Doumen, Michelle J A; Kappers, Astrid M L; Koenderink, Jan J

    2005-10-01

    Most research on visual space has been done under restricted viewing conditions and in reduced environments. In our experiments, observers performed an exocentric pointing task, a collinearity task, and a parallelity task in an entirely visible room. We varied the relative distances between the objects and the observer and the separation angle between the two objects. We were able to compare our data directly with data from experiments in an environment with less monocular depth information present. We expected that in a richer environment and under less restrictive viewing conditions, the settings would deviate less from the veridical settings. However, large systematic deviations from veridical settings were found for all three tasks. The structure of these deviations was task dependent, and the structure and the deviations themselves were comparable to those obtained under more restricted circumstances. Thus, the additional information was not used effectively by the observers.

  1. Complexity analysis based on generalized deviation for financial markets

    NASA Astrophysics Data System (ADS)

    Li, Chao; Shang, Pengjian

    2018-03-01

    In this paper, a new modified method, complexity analysis based on generalized deviation, is proposed as a measure to investigate the correlation between past price and future volatility for financial time series. In comparison with the former retarded volatility model, the new approach is both simple and computationally efficient. The method based on the generalized deviation function provides an exhaustive way of quantifying the rules of the financial market. The robustness of this method is verified by numerical experiments with both artificial and financial time series. Results show that the generalized deviation complexity analysis method not only identifies the volatility of financial time series, but also provides a comprehensive way of distinguishing the different characteristics of stock indices and individual stocks. Exponential functions can be used to successfully fit the volatility curves and quantify the changes of complexity for stock market data. We then study the influence of the negative domain of the deviation coefficient and the differences between volatile periods and calm periods. After the data analysis of the experimental model, we find that the generalized deviation model has definite advantages in exploring the relationship between historical returns and future volatility.

  2. Truncated Linear Statistics Associated with the Eigenvalues of Random Matrices II. Partial Sums over Proper Time Delays for Chaotic Quantum Dots

    NASA Astrophysics Data System (ADS)

    Grabsch, Aurélien; Majumdar, Satya N.; Texier, Christophe

    2017-06-01

    Invariant ensembles of random matrices are characterized by the distribution of their eigenvalues $\{\lambda_1, \ldots, \lambda_N\}$. We study the distribution of truncated linear statistics of the form $\tilde{L} = \sum_{i=1}^{p} f(\lambda_i)$ with $p < N$.
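
    A toy numerical experiment in this spirit, sketched below in Python, draws a random matrix, sorts its eigenvalues, and sums f over the p largest; the GOE-like ensemble and the choice f = |.| are illustrative stand-ins for the Laguerre ensemble of proper time delays studied in the paper.

        import numpy as np

        def truncated_statistic(N=200, p=20, f=np.abs, rng=np.random.default_rng(1)):
            A = rng.standard_normal((N, N))
            H = (A + A.T) / np.sqrt(2 * N)              # GOE-like scaling
            lam = np.sort(np.linalg.eigvalsh(H))[::-1]  # eigenvalues, largest first
            return f(lam[:p]).sum()                     # partial sum over p < N eigenvalues

        samples = [truncated_statistic() for _ in range(100)]
        print(np.mean(samples), np.std(samples))        # fluctuations of the truncated statistic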

  3. Approach to a manufacture-oriented modeling of bent tubes depending on the curvature distribution during three-roll-push-bending

    NASA Astrophysics Data System (ADS)

    Groth, Sebastian; Engel, Bernd; Frohn, Peter

    2018-05-01

    Kinematic bending processes such as three-roll-push-bending are used to manufacture freeform bent part systems. Due to the kinematic shaping, the bent parts have a characteristic infeed and outfeed area in the transition zone from the straight section into the curved area. These transition zones are currently not considered in the design process, which results in a geometric shape deviation between the CAD model and the bent part. Within this publication, a sensitivity analysis examines the influence of different parameters on the transition zone and the shape deviation. In addition, an approach is presented, which allows a manufacture-oriented modeling of the bending geometry.

  4. Influence of asymmetrical drawing radius deviation in micro deep drawing

    NASA Astrophysics Data System (ADS)

    Heinrich, L.; Kobayashi, H.; Shimizu, T.; Yang, M.; Vollertsen, F.

    2017-09-01

    Nowadays, an increasing demand for small metal parts in the electronics and automotive industries can be observed. Deep drawing is a well-suited technology for the production of such parts due to its excellent qualities for mass production. However, the downscaling of the forming process leads to new challenges in tooling and process design, such as high relative deviation of tool geometry or blank displacement compared to the macro scale. FEM simulation has been a widely used tool to investigate the influence of symmetrical process deviations, for instance a global variance of the drawing radius. This study shows a different approach that makes it possible to determine the impact of asymmetrical process deviations on micro deep drawing. In this particular case, the impact of an asymmetrical drawing radius deviation and of blank displacement on cup geometry deviation was investigated for different drawing ratios by experiments and FEM simulation. It was found that both variations result in an increasing cup height deviation. Nevertheless, with increasing drawing ratio a constant drawing radius deviation has an increasing impact, while blank displacement results in a decreasing offset of the cup geometry. This is explained by the different mechanisms that produce an uneven cup geometry: blank displacement leads to material surplus on one side of the cup, whereas an asymmetrical radius deviation generates uneven stretching of the cup wall, which is intensified at higher drawing ratios. It can be concluded that the effect of uneven radius geometry is of major importance for the production of accurately shaped micro cups and cannot be compensated by intentional blank displacement.

  5. Rapidly rotating neutron stars with a massive scalar field—structure and universal relations

    NASA Astrophysics Data System (ADS)

    Doneva, Daniela D.; Yazadjiev, Stoytcho S.

    2016-11-01

    We construct rapidly rotating neutron star models in scalar-tensor theories with a massive scalar field. The fact that the scalar field has nonzero mass leads to very interesting results, since the allowed range of values of the coupling parameters is significantly broadened. Deviations from pure general relativity can be very large for values of the parameters that are in agreement with the observations. We find that rapid rotation can magnify the differences several times compared to the static case. The universal relations between the normalized moment of inertia and quadrupole moment are also investigated for both the slowly and the rapidly rotating cases. The results show that these relations are still EOS independent to a large extent, and that the deviations from pure general relativity can be large. This places the massive scalar-tensor theories amongst the few alternative theories of gravity that can be tested via the universal I-Love-Q relations.

  6. Correction of caudal deflections of the nasal septum with a modified Goldman septoplasty technique: how we do it.

    PubMed

    Lawson, William; Westreich, Richard

    2007-10-01

    Correcting deviations of the caudal septum can be challenging because of cartilage memory, the need to provide adequate nasal tip and dorsal septal support, and the long-term effects of healing. The authors describe a minimally invasive, endonasal approach to the correction of caudal septal deviations. The procedure involves a hemitransfixion incision, unilateral flap elevation, and cartilage repositioning by limited dissection and excision.

  7. Remote measurement of cloud microphysics and its influence in predicting high impact weather events

    NASA Astrophysics Data System (ADS)

    Bipasha, Paul S.; Jinya, John

    2016-05-01

    Understanding cloud microphysical processes and precisely retrieving the parameters governing them are crucial for weather and climate prediction. Advanced remote sensing sensors and techniques offer an opportunity for monitoring micro-level developments in cloud structure. Using observations from a visible and near-infrared lidar onboard the CALIPSO satellite (part of the A-Train), the spatial variation of cloud structure has been studied over the tropical monsoon region. It is found that there is large variability in the cloud microphysical parameters, manifesting in distinct precipitation regimes. In particular, the severe storms over this region are driven by processes ranging from the synoptic to the microphysical scale. Using INSAT-3D data, cloud microphysical parameters such as effective radius (CER) and optical depth (COD) were retrieved for tropical cyclone Phailin. A general increase of CER in the top-down direction was observed, characterizing the progressively increasing number and size of precipitation hydrometeors approaching the cloud base. The distribution of CER relative to cloud top temperature for growing convective clouds has been investigated to reveal the evolution of the particles composing the clouds. The relatively high concentration of large particles in the downdraft zone is closely related to the precipitation efficiency of the system. A similar study was carried out using MODIS observations for cyclones over the Indian Ocean (2010-2013), in which we find that the mean effective radius is 24 microns with a standard deviation of 4.56, the mean optical depth is 21 with a standard deviation of 13.98, the mean cloud fraction is 0.92 with a standard deviation of 0.13, and the ice phase is dominant. Thus, remote observations of the microstructure of convective storms provide crucial information about their maintenance and the potential devastation likely to be associated with them. With synergistic observations from the A-Train and from geostationary and future imaging spectroscopic sensors, a multi-dimensional, multi-scalar exploration of cloud systems is anticipated, leading to accurate prediction of high-impact weather events.

  8. A DMAIC approach for process capability improvement of an engine crankshaft manufacturing process

    NASA Astrophysics Data System (ADS)

    Sharma, G. V. S. S.; Rao, P. Srinivasa

    2014-05-01

    The define-measure-analyze-improve-control (DMAIC) approach is a five-phase, scientific approach for reducing deviations and improving the capability levels of manufacturing processes. The present work elaborates on the DMAIC approach applied to reducing the process variations of the stub-end-hole boring operation in the manufacture of crankshafts. This statistical process control study starts with selection of the critical-to-quality (CTQ) characteristic in the define phase. The next phase constitutes the collection of dimensional measurement data for the identified CTQ characteristic. This is followed by the analyze and improve phases, where various quality control tools such as the Ishikawa diagram, physical mechanism analysis, failure modes and effects analysis, and analysis of variance are applied. Finally, process monitoring charts are deployed at the workplace for regular monitoring and control of the concerned CTQ characteristic. By adopting the DMAIC approach, the standard deviation was reduced from 0.003 to 0.002. The process potential capability index (Cp) improved from 1.29 to 2.02, and the process performance capability index (Cpk) improved from 0.32 to 1.45.
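
    The two indices follow from the textbook formulas Cp = (USL - LSL)/(6*sigma) and Cpk = min(USL - mu, mu - LSL)/(3*sigma); a minimal Python sketch is below, with hypothetical spec limits and data rather than the study's values.

        import numpy as np

        def capability(x, lsl, usl):
            """Process capability indices from sample data and spec limits."""
            mu, sigma = np.mean(x), np.std(x, ddof=1)
            cp = (usl - lsl) / (6.0 * sigma)               # potential capability
            cpk = min(usl - mu, mu - lsl) / (3.0 * sigma)  # performance capability
            return cp, cpk

        x = np.random.default_rng(0).normal(10.0, 0.002, 50)  # placeholder measurements
        print(capability(x, lsl=9.988, usl=10.012))           # Cp near 2 for sigma = 0.002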

  9. A WRF simulation of the impact of 3-D radiative transfer on surface hydrology over the Rocky Mountains and Sierra Nevada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liou, K. N.; Gu, Y.; Leung, L. R.

    2013-01-01

    We investigate 3-D mountains/snow effects on solar flux distributions and their impact on surface hydrology over the western United States, specifically the Rocky Mountains and Sierra Nevada. The Weather Research and Forecasting (WRF) model, applied at a 30 km grid resolution, is used in conjunction with a 3-D radiative transfer parameterization covering a time period from 1 November 2007 to 31 May 2008, during which abundant snowfall occurred. A comparison of the 3-D WRF simulation with the observed snow water equivalent (SWE) and precipitation from Snowpack Telemetry (SNOTEL) sites shows reasonable agreement in terms of spatial patterns and daily and seasonal variability, although the simulation generally has a positive precipitation bias. We show that 3-D mountain features have a profound impact on the diurnal and monthly variation of surface radiative and heat fluxes, and on the consequent elevation dependence of snowmelt and precipitation distributions. In particular, during the winter months, large deviations (3-D minus PP, where PP denotes the plane-parallel approach) of the monthly mean surface solar flux are found in the morning and afternoon hours due to shading effects for elevations below 2.5 km. During spring, positive deviations shift to the earlier morning. Over mountaintops higher than 3 km, positive deviations are found throughout the day, with the largest values of 40-60 W m^-2 occurring at noon during the snowmelt season of April to May. The monthly SWE deviations averaged over the entire domain show an increase at lower elevations due to reduced snowmelt, which leads to a reduction in cumulative runoff. Over higher elevation areas, positive SWE deviations are found because of increased solar radiation available at the surface. Overall, this study shows that deviations of SWE due to 3-D radiation effects range from an increase of 18% at the lowest elevation range (1.5-2 km) to a decrease of 8% at the highest elevation range (above 3 km). Since lower elevation areas occupy larger fractions of the land surface, the net effect of 3-D radiative transfer is to extend snowmelt and snowmelt-driven runoff into the warm season. Finally, because 60-90% of water resources originate from mountains worldwide, the aforementioned differences in simulated hydrology, due solely to 3-D interactions between solar radiation and mountains/snow, merit further investigation in order to understand the implications for modeling mountain water resources and these resources' vulnerability to climate change and air pollution.

  10. Data-driven modeling reveals cell behaviors controlling self-organization during Myxococcus xanthus development

    PubMed Central

    Cotter, Christopher R.; Schüttler, Heinz-Bernd; Igoshin, Oleg A.; Shimkets, Lawrence J.

    2017-01-01

    Collective cell movement is critical to the emergent properties of many multicellular systems, including microbial self-organization in biofilms, embryogenesis, wound healing, and cancer metastasis. However, even the best-studied systems lack a complete picture of how diverse physical and chemical cues act upon individual cells to ensure coordinated multicellular behavior. Known for its social developmental cycle, the bacterium Myxococcus xanthus uses coordinated movement to generate three-dimensional aggregates called fruiting bodies. Despite extensive progress in identifying genes controlling fruiting body development, cell behaviors and cell–cell communication mechanisms that mediate aggregation are largely unknown. We developed an approach to examine emergent behaviors that couples fluorescent cell tracking with data-driven models. A unique feature of this approach is the ability to identify cell behaviors affecting the observed aggregation dynamics without full knowledge of the underlying biological mechanisms. The fluorescent cell tracking revealed large deviations in the behavior of individual cells. Our modeling method indicated that decreased cell motility inside the aggregates, a biased walk toward aggregate centroids, and alignment among neighboring cells in a radial direction to the nearest aggregate are behaviors that enhance aggregation dynamics. Our modeling method also revealed that aggregation is generally robust to perturbations in these behaviors and identified possible compensatory mechanisms. The resulting approach of directly combining behavior quantification with data-driven simulations can be applied to more complex systems of collective cell movement without prior knowledge of the cellular machinery and behavioral cues. PMID:28533367

  11. Statistical Techniques For Real-time Anomaly Detection Using Spark Over Multi-source VMware Performance Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solaimani, Mohiuddin; Iftekhar, Mohammed; Khan, Latifur

    Anomaly detection refers to the identification of an irregular or unusual pattern which deviates from what is standard, normal, or expected. Such deviated patterns typically correspond to samples of interest and are assigned different labels in different domains, such as outliers, anomalies, exceptions, or malware. Detecting anomalies in fast, voluminous streams of data is a formidable challenge. This paper presents a novel, generic, real-time distributed anomaly detection framework for heterogeneous streaming data where anomalies appear as a group. We have developed a distributed statistical approach to build a model and later use it to detect anomalies. As a case study, we investigate group anomaly detection for a VMware-based cloud data center, which maintains a large number of virtual machines (VMs). We have built our framework using Apache Spark to get higher throughput and lower data processing time on streaming data. We have developed a window-based statistical anomaly detection technique to detect anomalies that appear sporadically. We then relaxed this constraint with higher accuracy by implementing a cluster-based technique to detect sporadic and continuous anomalies. We conclude that our cluster-based technique outperforms other statistical techniques with higher accuracy and lower processing time.
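
    A stripped-down, non-distributed sketch of window-based statistical detection is given below (Python); the Spark streaming machinery is omitted, and the feature choice, covariance regularization, and threshold are assumptions rather than the authors' settings.

        import numpy as np

        def fit_baseline(train_windows):
            """Model normal behavior by the mean/covariance of per-window feature means."""
            X = np.array([w.mean(axis=0) for w in train_windows])
            mu = X.mean(axis=0)
            cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularized
            return mu, np.linalg.inv(cov)

        def is_anomalous(window, mu, cov_inv, threshold=9.0):
            """Flag a window whose mean is far from the baseline (squared Mahalanobis)."""
            d = window.mean(axis=0) - mu
            return float(d @ cov_inv @ d) > threshold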

  12. Neutral biogeography and the evolution of climatic niches.

    PubMed

    Boucher, Florian C; Thuiller, Wilfried; Davies, T Jonathan; Lavergne, Sébastien

    2014-05-01

    Recent debate on whether climatic niches are conserved through time has focused on how phylogenetic niche conservatism can be measured by deviations from a Brownian motion model of evolutionary change. However, there has been no evaluation of this methodological approach. In particular, the fact that climatic niches are usually obtained from distribution data and are thus heavily influenced by biogeographic factors has largely been overlooked. Our main objective here was to test whether patterns of climatic niche evolution that are frequently observed might arise from neutral dynamics rather than from adaptive scenarios. We developed a model inspired by neutral biodiversity theory, where individuals disperse, compete, and undergo speciation independently of climate. We then sampled the climatic niches of species according to their geographic position and showed that even when species evolve independently of climate, their niches can nonetheless exhibit evolutionary patterns strongly differing from Brownian motion. Indeed, climatic niche evolution is better captured by a model of punctuated evolution with constraints due to landscape boundaries, two features that are traditionally interpreted as evidence for selective processes acting on the niche. We therefore suggest that deviation from Brownian motion alone should not be used as evidence for phylogenetic niche conservatism but that information on phenotypic traits directly linked to physiology is required to demonstrate that climatic niches have been conserved through time.

  13. Who's biased? A meta-analysis of buyer-seller differences in the pricing of lotteries.

    PubMed

    Yechiam, Eldad; Ashby, Nathaniel J S; Pachur, Thorsten

    2017-05-01

    A large body of empirical research has examined the impact of trading perspective on pricing of consumer products, with the typical finding being that selling prices exceed buying prices (i.e., the endowment effect). Using a meta-analytic approach, we examine to what extent the endowment effect also emerges in the pricing of monetary lotteries. As monetary lotteries have a clearly defined normative value, we also assess whether one trading perspective is more biased than the other. We consider several indicators of bias: absolute deviation from expected values, rank correlation with expected values, overall variance, and per-unit variance. The meta-analysis, which includes 35 articles, indicates that selling prices considerably exceed buying prices (Cohen's d = 0.58). Importantly, we also find that selling prices deviate less from the lotteries' expected values than buying prices, both in absolute and in relative terms. Selling prices also exhibit lower variance per unit. Hierarchical Bayesian modeling with cumulative prospect theory indicates that buyers have lower probability sensitivity and a more pronounced response bias. The finding that selling prices are more in line with normative standards than buying prices challenges the prominent account whereby sellers' valuations are upward biased due to loss aversion, and supports alternative theoretical accounts. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
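
    The headline effect size is a pooled-standard-deviation Cohen's d; a small Python sketch of that computation follows, with placeholder price samples in place of the meta-analytic data.

        import numpy as np

        def cohens_d(x, y):
            """Cohen's d with pooled standard deviation."""
            nx, ny = len(x), len(y)
            pooled = np.sqrt(((nx - 1) * np.var(x, ddof=1) +
                              (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
            return (np.mean(x) - np.mean(y)) / pooled

        selling = np.array([5.2, 6.1, 4.8, 5.9, 6.4])   # placeholder selling prices
        buying  = np.array([3.9, 4.6, 4.1, 5.0, 4.3])   # placeholder buying prices
        print(cohens_d(selling, buying))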

  14. Neutral biogeography and the evolution of climatic niches

    PubMed Central

    Boucher, Florian C.; Thuiller, Wilfried; Davies, T. Jonathan; Lavergne, Sébastien

    2014-01-01

    Recent debate on whether climatic niches are conserved through time has focused on how phylogenetic niche conservatism can be measured by deviations from a Brownian motion model of evolutionary change. However, there has been no evaluation of this methodological approach. In particular, the fact that climatic niches are usually obtained from distribution data and are thus heavily influenced by biogeographic factors has largely been overlooked. Our main objective here was to test whether patterns of climatic niche evolution that are frequently observed might arise from neutral dynamics rather than adaptive scenarios. We develop a model inspired by Neutral Biodiversity Theory, where individuals disperse, compete, and undergo speciation independently of climate. We then sample the climatic niches of species according to their geographic position and show that even when species evolved independently of climate, their niches can nonetheless exhibit evolutionary patterns strongly differing from Brownian motion. Indeed, climatic niche evolution is better captured by a model of punctuated evolution with constraints due to landscape boundaries, two features that are traditionally interpreted as evidence for selective processes acting on the niche. We therefore suggest that deviation from Brownian motion alone should not be used as evidence for phylogenetic niche conservatism, but that information on phenotypic traits directly linked to physiology is required to demonstrate that climatic niches have been conserved through time. PMID:24739191

  15. Nonlinear propagation of ion-acoustic waves in electron-positron-ion plasma with trapped electrons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alinejad, H.; Sobhanian, S.; Mahmoodi, J.

    2006-01-15

    A theoretical investigation has been made of ion-acoustic waves in an unmagnetized electron-positron-ion plasma. A more realistic situation is considered, in which the plasma consists of a negatively charged ion fluid, free positrons, and trapped as well as free electrons. The properties of stationary structures are studied by the reductive perturbation method, which is valid in the small but finite amplitude limit, and by the pseudopotential approach, which is valid for large amplitudes. With an appropriate modified form of the electron number density, two new equations for the ion dynamics have been found. When deviations from isothermality are finite, a modified Korteweg-de Vries equation has been found, and for the case where deviations from isothermality are small, the calculations lead to a generalized Korteweg-de Vries equation. It is shown from both the weakly and the highly nonlinear analysis that the presence of positrons may allow solitary waves to exist. It is found that the positron density changes the maximum value of the amplitude and of the Mach number M for which solitary waves can exist. The present theory is applicable to the analysis of arbitrary amplitude ion-acoustic waves associated with positrons which may occur in space plasmas.

  16. Finite-Size Scaling of a First-Order Dynamical Phase Transition: Adaptive Population Dynamics and an Effective Model

    NASA Astrophysics Data System (ADS)

    Nemoto, Takahiro; Jack, Robert L.; Lecomte, Vivien

    2017-03-01

    We analyze large deviations of the time-averaged activity in the one-dimensional Fredrickson-Andersen model, both numerically and analytically. The model exhibits a dynamical phase transition, which appears as a singularity in the large deviation function. We analyze the finite-size scaling of this phase transition numerically, by generalizing an existing cloning algorithm to include a multicanonical feedback control: this significantly improves the computational efficiency. Motivated by these numerical results, we formulate an effective theory for the model in the vicinity of the phase transition, which accounts quantitatively for the observed behavior. We discuss potential applications of the numerical method and the effective theory in a range of more general contexts.
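
    For very small systems, the quantity targeted by such cloning algorithms can be cross-checked directly: the scaled cumulant generating function (SCGF) of the activity is the dominant eigenvalue of a tilted generator. The Python sketch below does this for a toy two-state process (not the Fredrickson-Andersen model), using the convention that every jump is weighted by exp(-s).

        import numpy as np

        def scgf_activity(W, s):
            """W: continuous-time Markov generator (rows sum to zero).
            Tilt all jump terms by exp(-s); the activity SCGF is the top eigenvalue."""
            Ws = W * np.exp(-s)
            np.fill_diagonal(Ws, np.diag(W))   # escape rates (diagonal) are not tilted
            return np.linalg.eigvals(Ws).real.max()

        W = np.array([[-1.0,  1.0],
                      [ 0.5, -0.5]])           # toy two-state rates (assumed)
        print([scgf_activity(W, s) for s in (-0.5, 0.0, 0.5)])  # psi(0) = 0 by construction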

  17. On Deviations between Observed and Theoretically Estimated Values on Additivity-Law Failures

    NASA Astrophysics Data System (ADS)

    Nayatani, Yoshinobu; Sobagaki, Hiroaki

    The authors have reported in previous studies that the average observed results are about half of the corresponding predictions in experiments with large additivity-law failures. One of the reasons for these deviations is studied and clarified using the original observed data on additivity-law failures from the Nakano experiment. The observations and their analyses clarify that it is essentially difficult to obtain good agreement between the average observed results and the corresponding theoretical predictions in experiments with large additivity-law failures. This is caused by a kind of unavoidable psychological pressure on the subjects participating in the experiments. We should be satisfied with agreement in trend between them.

  18. Exact Large-Deviation Statistics for a Nonequilibrium Quantum Spin Chain

    NASA Astrophysics Data System (ADS)

    Žnidarič, Marko

    2014-01-01

    We consider a one-dimensional XX spin chain in a nonequilibrium setting with Lindblad-type boundary driving. By calculating the large-deviation rate function in the thermodynamic limit, a generalization of the free energy to a nonequilibrium setting, we obtain the complete distribution of the current, including closed expressions for lower-order cumulants. We also identify two phase-transition-like behaviors: one in the thermodynamic limit, at which the current probability distribution becomes discontinuous, and one at maximal driving, at which the range of possible current values changes discontinuously. In the thermodynamic limit the current has finite upper and lower bounds. We also explicitly confirm the nonequilibrium fluctuation relation and show that the current distribution is invariant under the mapping of the coupling strength Γ→1/Γ.

  19. Estimation of soil organic partition coefficients: from retention factors measured by soil column chromatography with water as eluent.

    PubMed

    Xu, Feng; Liang, Xinmiao; Lin, Bingcheng; Schramm, Karl-Werner; Kettrup, Antonius

    2002-08-30

    The retention factors (k) of 104 hydrophobic organic chemicals (HOCs) were measured by soil column chromatography (SCC) on columns filled with three naturally occurring reference soils and eluted with Milli-Q water. A novel method for estimating the soil organic partition coefficient (Koc) was developed based on correlations with k in soil/water systems. Strong log Koc versus log k correlations (r>0.96) were found. The estimated Koc values were in accordance with literature values, with a maximum deviation of less than 0.4 log units, and the Koc values estimated from the three soils were consistent with each other. The SCC approach is promising for fast screening of large numbers of chemicals for environmental applications.
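
    The estimation step amounts to a log-log linear calibration between retention factors and literature Koc values; a schematic Python version is below, with made-up numbers in place of the measured data.

        import numpy as np

        log_k   = np.log10([0.8, 2.5, 7.9, 25.0])   # retention factors of calibration HOCs (placeholder)
        log_koc = np.array([1.9, 2.6, 3.2, 3.9])    # corresponding literature log Koc (placeholder)

        slope, intercept = np.polyfit(log_k, log_koc, 1)
        r = np.corrcoef(log_k, log_koc)[0, 1]        # strength of the log-log correlation

        def predict_log_koc(k_new):
            """Estimate log Koc for a new chemical from its measured retention factor."""
            return slope * np.log10(k_new) + intercept

        print(r, predict_log_koc(4.2))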

  20. Research of vibration control based on current mode piezoelectric shunt damping circuit

    NASA Astrophysics Data System (ADS)

    Liu, Weiwei; Mao, Qibo

    2017-12-01

    A piezoelectric shunt damping circuit using a current-mode approach is employed to control the vibration of a cantilever beam. First, simulated inductors with large inductance values are designed for the corresponding RL series shunt circuits. Then, taking a cantilever beam as an example, the second natural frequency of the beam is targeted for control in the experiment. By adjusting the values of the equivalent inductance and equivalent resistance of the shunt circuit, the optimal damping of the shunt circuit is obtained, and the stability of the designed piezoelectric shunt damping circuit is experimentally verified. Experimental results show that the proposed current-mode piezoelectric shunt damping circuit has good vibration control performance; however, the control performance degrades if the equivalent inductance and equivalent resistance deviate from their optimal values.
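
    The need for simulated (synthetic) inductors can be seen from the basic tuning relation f = 1/(2*pi*sqrt(L*C_p)): matching a structural mode of a few hundred hertz with a nanofarad-scale piezo capacitance demands an impractically large physical inductance. The numbers in the Python sketch below are assumed for illustration.

        import math

        C_p = 47e-9     # piezo patch capacitance [F] (assumed value)
        f_mode = 210.0  # targeted second bending mode [Hz] (assumed value)

        # RL series shunt tuned so its electrical resonance matches the mode
        L = 1.0 / ((2.0 * math.pi * f_mode) ** 2 * C_p)
        print(f"required shunt inductance: {L:.1f} H")  # ~12 H: hence a simulated inductor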

  1. Atomic rate coefficients in a degenerate plasma

    NASA Astrophysics Data System (ADS)

    Aslanyan, Valentin; Tallents, Greg

    2015-11-01

    The electrons in a dense, degenerate plasma follow Fermi-Dirac statistics, which deviate significantly in this regime from the usual Maxwell-Boltzmann statistics used by many models. We present methods to calculate atomic rate coefficients for the Fermi-Dirac distribution and compare the ionization fraction of carbon calculated using both models. We have found that for densities close to solid, although the discrepancy is small under LTE conditions, using classical rate coefficients leads to a large divergence in the ionization fraction in the presence of strong photoionizing radiation. We have found that using these modified rates and the degenerate heat capacity may affect the time evolution of a plasma subject to extreme ultraviolet and x-ray radiation, such as that produced in free-electron-laser irradiation of solid targets.
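
    The origin of the discrepancy can be seen by comparing the two occupation functions directly; a short Python sketch follows, with energies in units of kT and an assumed chemical potential.

        import numpy as np

        E = np.linspace(0.0, 10.0, 6)   # electron energies in units of kT
        mu = 2.0                        # assumed (degenerate) chemical potential, kT units

        f_fd = 1.0 / (np.exp(E - mu) + 1.0)   # Fermi-Dirac occupation
        f_mb = np.exp(-(E - mu))              # Maxwell-Boltzmann limit, valid for E - mu >> kT

        for e, a, b in zip(E, f_fd, f_mb):
            print(f"E = {e:4.1f} kT   FD = {a:.3f}   MB = {b:.3f}")  # MB overshoots at low E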

  2. Search for a new resonance decaying to a W or Z boson and a Higgs boson in the [Formula: see text] final states with the ATLAS detector.

    PubMed

    Aad, G; Abbott, B; Abdallah, J; et al. (ATLAS Collaboration)
M; Ming, Y; Mir, L M; Mitani, T; Mitrevski, J; Mitsou, V A; Miucci, A; Miyagawa, P S; Mjörnmark, J U; Moa, T; Mochizuki, K; Mohapatra, S; Mohr, W; Molander, S; Moles-Valls, R; Mönig, K; Monini, C; Monk, J; Monnier, E; Montejo Berlingen, J; Monticelli, F; Monzani, S; Moore, R W; Morange, N; Moreno, D; Moreno Llácer, M; Morettini, P; Morgenstern, M; Morii, M; Morinaga, M; Morisbak, V; Moritz, S; Morley, A K; Mornacchi, G; Morris, J D; Mortensen, S S; Morton, A; Morvaj, L; Moser, H G; Mosidze, M; Moss, J; Motohashi, K; Mount, R; Mountricha, E; Mouraviev, S V; Moyse, E J W; Muanza, S; Mudd, R D; Mueller, F; Mueller, J; Mueller, K; Mueller, R S P; Mueller, T; Muenstermann, D; Mullen, P; Munwes, Y; Murillo Quijada, J A; Murray, W J; Musheghyan, H; Musto, E; Myagkov, A G; Myska, M; Nackenhorst, O; Nadal, J; Nagai, K; Nagai, R; Nagai, Y; Nagano, K; Nagarkar, A; Nagasaka, Y; Nagata, K; Nagel, M; Nagy, E; Nairz, A M; Nakahama, Y; Nakamura, K; Nakamura, T; Nakano, I; Namasivayam, H; Naranjo Garcia, R F; Narayan, R; Naumann, T; Navarro, G; Nayyar, R; Neal, H A; Nechaeva, P Yu; Neep, T J; Nef, P D; Negri, A; Negrini, M; Nektarijevic, S; Nellist, C; Nelson, A; Nemecek, S; Nemethy, P; Nepomuceno, A A; Nessi, M; Neubauer, M S; Neumann, M; Neves, R M; Nevski, P; Newman, P R; Nguyen, D H; Nickerson, R B; Nicolaidou, R; Nicquevert, B; Nielsen, J; Nikiforou, N; Nikiforov, A; Nikolaenko, V; Nikolic-Audit, I; Nikolopoulos, K; Nilsen, J K; Nilsson, P; Ninomiya, Y; Nisati, A; Nisius, R; Nobe, T; Nomachi, M; Nomidis, I; Nooney, T; Norberg, S; Nordberg, M; Novgorodova, O; Nowak, S; Nozaki, M; Nozka, L; Ntekas, K; Nunes Hanninger, G; Nunnemann, T; Nurse, E; Nuti, F; O'Brien, B J; O'grady, F; O'Neil, D C; O'Shea, V; Oakham, F G; Oberlack, H; Obermann, T; Ocariz, J; Ochi, A; Ochoa, I; Oda, S; Odaka, S; Ogren, H; Oh, A; Oh, S H; Ohm, C C; Ohman, H; Oide, H; Okamura, W; Okawa, H; Okumura, Y; Okuyama, T; Olariu, A; Olivares Pino, S A; Oliveira Damazio, D; Oliver Garcia, E; Olszewski, A; Olszowska, J; Onofre, A; Onyisi, P U E; Oram, C J; Oreglia, M J; Oren, Y; Orestano, D; Orlando, N; Oropeza Barrera, C; Orr, R S; Osculati, B; Ospanov, R; Otero Y Garzon, G; Otono, H; Ouchrif, M; Ouellette, E A; Ould-Saada, F; Ouraou, A; Oussoren, K P; Ouyang, Q; Ovcharova, A; Owen, M; Owen, R E; Ozcan, V E; Ozturk, N; Pachal, K; Pacheco Pages, A; Padilla Aranda, C; Pagáčová, M; Pagan Griso, S; Paganis, E; Pahl, C; Paige, F; Pais, P; Pajchel, K; Palacino, G; Palestini, S; Palka, M; Pallin, D; Palma, A; Pan, Y B; Panagiotopoulou, E; Pandini, C E; Panduro Vazquez, J G; Pani, P; Panitkin, S; Paolozzi, L; Papadopoulou, Th D; Papageorgiou, K; Paramonov, A; Paredes Hernandez, D; Parker, M A; Parker, K A; Parodi, F; Parsons, J A; Parzefall, U; Pasqualucci, E; Passaggio, S; Pastore, F; Pastore, Fr; Pásztor, G; Pataraia, S; Patel, N D; Pater, J R; Pauly, T; Pearce, J; Pearson, B; Pedersen, L E; Pedersen, M; Pedraza Lopez, S; Pedro, R; Peleganchuk, S V; Pelikan, D; Peng, H; Penning, B; Penwell, J; Perepelitsa, D V; Perez Codina, E; Pérez García-Estañ, M T; Perini, L; Pernegger, H; Perrella, S; Peschke, R; Peshekhonov, V D; Peters, K; Peters, R F Y; Petersen, B A; Petersen, T C; Petit, E; Petridis, A; Petridou, C; Petrolo, E; Petrucci, F; Pettersson, N E; Pezoa, R; Phillips, P W; Piacquadio, G; Pianori, E; Picazio, A; Piccaro, E; Piccinini, M; Pickering, M A; Piegaia, R; Pignotti, D T; Pilcher, J E; Pilkington, A D; Pina, J; Pinamonti, M; Pinfold, J L; Pingel, A; Pinto, B; Pires, S; Pitt, M; Pizio, C; Plazak, L; Pleier, M-A; Pleskot, V; Plotnikova, 
E; Plucinski, P; Pluth, D; Poettgen, R; Poggioli, L; Pohl, D; Polesello, G; Policicchio, A; Polifka, R; Polini, A; Pollard, C S; Polychronakos, V; Pommès, K; Pontecorvo, L; Pope, B G; Popeneciu, G A; Popovic, D S; Poppleton, A; Pospisil, S; Potamianos, K; Potrap, I N; Potter, C J; Potter, C T; Poulard, G; Poveda, J; Pozdnyakov, V; Pralavorio, P; Pranko, A; Prasad, S; Prell, S; Price, D; Price, L E; Primavera, M; Prince, S; Proissl, M; Prokofiev, K; Prokoshin, F; Protopapadaki, E; Protopopescu, S; Proudfoot, J; Przybycien, M; Ptacek, E; Puddu, D; Pueschel, E; Puldon, D; Purohit, M; Puzo, P; Qian, J; Qin, G; Qin, Y; Quadt, A; Quarrie, D R; Quayle, W B; Queitsch-Maitland, M; Quilty, D; Raddum, S; Radeka, V; Radescu, V; Radhakrishnan, S K; Radloff, P; Rados, P; Ragusa, F; Rahal, G; Rajagopalan, S; Rammensee, M; Rangel-Smith, C; Rauscher, F; Rave, S; Ravenscroft, T; Raymond, M; Read, A L; Readioff, N P; Rebuzzi, D M; Redelbach, A; Redlinger, G; Reece, R; Reeves, K; Rehnisch, L; Reisin, H; Relich, M; Rembser, C; Ren, H; Renaud, A; Rescigno, M; Resconi, S; Rezanova, O L; Reznicek, P; Rezvani, R; Richter, R; Richter, S; Richter-Was, E; Ricken, O; Ridel, M; Rieck, P; Riegel, C J; Rieger, J; Rijssenbeek, M; Rimoldi, A; Rinaldi, L; Ristić, B; Ritsch, E; Riu, I; Rizatdinova, F; Rizvi, E; Robertson, S H; Robichaud-Veronneau, A; Robinson, D; Robinson, J E M; Robson, A; Roda, C; Roe, S; Røhne, O; Rolli, S; Romaniouk, A; Romano, M; Saez, S M Romano; Romero Adam, E; Rompotis, N; Ronzani, M; Roos, L; Ros, E; Rosati, S; Rosbach, K; Rose, P; Rosendahl, P L; Rosenthal, O; Rossetti, V; Rossi, E; Rossi, L P; Rosten, R; Rotaru, M; Roth, I; Rothberg, J; Rousseau, D; Royon, C R; Rozanov, A; Rozen, Y; Ruan, X; Rubbo, F; Rubinskiy, I; Rud, V I; Rudolph, C; Rudolph, M S; Rühr, F; Ruiz-Martinez, A; Rurikova, Z; Rusakovich, N A; Ruschke, A; Russell, H L; Rutherfoord, J P; Ruthmann, N; Ryabov, Y F; Rybar, M; Rybkin, G; Ryder, N C; Saavedra, A F; Sabato, G; Sacerdoti, S; Saddique, A; Sadrozinski, H F-W; Sadykov, R; Safai Tehrani, F; Saimpert, M; Sakamoto, H; Sakurai, Y; Salamanna, G; Salamon, A; Saleem, M; Salek, D; Sales De Bruin, P H; Salihagic, D; Salnikov, A; Salt, J; Salvatore, D; Salvatore, F; Salvucci, A; Salzburger, A; Sampsonidis, D; Sanchez, A; Sánchez, J; Sanchez Martinez, V; Sandaker, H; Sandbach, R L; Sander, H G; Sanders, M P; Sandhoff, M; Sandoval, C; Sandstroem, R; Sankey, D P C; Sannino, M; Sansoni, A; Santoni, C; Santonico, R; Santos, H; Santoyo Castillo, I; Sapp, K; Sapronov, A; Saraiva, J G; Sarrazin, B; Sasaki, O; Sasaki, Y; Sato, K; Sauvage, G; Sauvan, E; Savage, G; Savard, P; Sawyer, C; Sawyer, L; Saxon, J; Sbarra, C; Sbrizzi, A; Scanlon, T; Scannicchio, D A; Scarcella, M; Scarfone, V; Schaarschmidt, J; Schacht, P; Schaefer, D; Schaefer, R; Schaeffer, J; Schaepe, S; Schaetzel, S; Schäfer, U; Schaffer, A C; Schaile, D; Schamberger, R D; Scharf, V; Schegelsky, V A; Scheirich, D; Schernau, M; Schiavi, C; Schillo, C; Schioppa, M; Schlenker, S; Schmidt, E; Schmieden, K; Schmitt, C; Schmitt, S; Schmitt, S; Schneider, B; Schnellbach, Y J; Schnoor, U; Schoeffel, L; Schoening, A; Schoenrock, B D; Schopf, E; Schorlemmer, A L S; Schott, M; Schouten, D; Schovancova, J; Schramm, S; Schreyer, M; Schroeder, C; Schuh, N; Schultens, M J; Schultz-Coulon, H-C; Schulz, H; Schumacher, M; Schumm, B A; Schune, Ph; Schwanenberger, C; Schwartzman, A; Schwarz, T A; Schwegler, Ph; Schwemling, Ph; Schwienhorst, R; Schwindling, J; Schwindt, T; Schwoerer, M; Sciacca, F G; Scifo, E; Sciolla, G; Scuri, F; Scutti, F; Searcy, J; 
Sedov, G; Sedykh, E; Seema, P; Seidel, S C; Seiden, A; Seifert, F; Seixas, J M; Sekhniaidze, G; Sekhon, K; Sekula, S J; Selbach, K E; Seliverstov, D M; Semprini-Cesari, N; Serfon, C; Serin, L; Serkin, L; Serre, T; Sessa, M; Seuster, R; Severini, H; Sfiligoj, T; Sforza, F; Sfyrla, A; Shabalina, E; Shamim, M; Shan, L Y; Shang, R; Shank, J T; Shapiro, M; Shatalov, P B; Shaw, K; Shaw, S M; Shcherbakova, A; Shehu, C Y; Sherwood, P; Shi, L; Shimizu, S; Shimmin, C O; Shimojima, M; Shiyakova, M; Shmeleva, A; Saadi, D Shoaleh; Shochet, M J; Shojaii, S; Shrestha, S; Shulga, E; Shupe, M A; Shushkevich, S; Sicho, P; Sidiropoulou, O; Sidorov, D; Sidoti, A; Siegert, F; Sijacki, Dj; Silva, J; Silver, Y; Silverstein, S B; Simak, V; Simard, O; Simic, Lj; Simion, S; Simioni, E; Simmons, B; Simon, D; Simoniello, R; Sinervo, P; Sinev, N B; Siragusa, G; Sisakyan, A N; Sivoklokov, S Yu; Sjölin, J; Sjursen, T B; Skinner, M B; Skottowe, H P; Skubic, P; Slater, M; Slavicek, T; Slawinska, M; Sliwa, K; Smakhtin, V; Smart, B H; Smestad, L; Smirnov, S Yu; Smirnov, Y; Smirnova, L N; Smirnova, O; Smith, M N K; Smizanska, M; Smolek, K; Snesarev, A A; Snidero, G; Snyder, S; Sobie, R; Socher, F; Soffer, A; Soh, D A; Solans, C A; Solar, M; Solc, J; Soldatov, E Yu; Soldevila, U; Solodkov, A A; Soloshenko, A; Solovyanov, O V; Solovyev, V; Sommer, P; Song, H Y; Soni, N; Sood, A; Sopczak, A; Sopko, B; Sopko, V; Sorin, V; Sosa, D; Sosebee, M; Sotiropoulou, C L; Soualah, R; Soueid, P; Soukharev, A M; South, D; Spagnolo, S; Spalla, M; Spanò, F; Spearman, W R; Spettel, F; Spighi, R; Spigo, G; Spiller, L A; Spousta, M; Spreitzer, T; Denis, R D St; Staerz, S; Stahlman, J; Stamen, R; Stamm, S; Stanecka, E; Stanescu, C; Stanescu-Bellu, M; Stanitzki, M M; Stapnes, S; Starchenko, E A; Stark, J; Staroba, P; Starovoitov, P; Staszewski, R; Stavina, P; Steinberg, P; Stelzer, B; Stelzer, H J; Stelzer-Chilton, O; Stenzel, H; Stern, S; Stewart, G A; Stillings, J A; Stockton, M C; Stoebe, M; Stoicea, G; Stolte, P; Stonjek, S; Stradling, A R; Straessner, A; Stramaglia, M E; Strandberg, J; Strandberg, S; Strandlie, A; Strauss, E; Strauss, M; Strizenec, P; Ströhmer, R; Strom, D M; Stroynowski, R; Strubig, A; Stucci, S A; Stugu, B; Styles, N A; Su, D; Su, J; Subramaniam, R; Succurro, A; Sugaya, Y; Suhr, C; Suk, M; Sulin, V V; Sultansoy, S; Sumida, T; Sun, S; Sun, X; Sundermann, J E; Suruliz, K; Susinno, G; Sutton, M R; Suzuki, S; Suzuki, Y; Svatos, M; Swedish, S; Swiatlowski, M; Sykora, I; Sykora, T; Ta, D; Taccini, C; Tackmann, K; Taenzer, J; Taffard, A; Tafirout, R; Taiblum, N; Takai, H; Takashima, R; Takeda, H; Takeshita, T; Takubo, Y; Talby, M; Talyshev, A A; Tam, J Y C; Tan, K G; Tanaka, J; Tanaka, R; Tanaka, S; Tannenwald, B B; Tannoury, N; Tapprogge, S; Tarem, S; Tarrade, F; Tartarelli, G F; Tas, P; Tasevsky, M; Tashiro, T; Tassi, E; Tavares Delgado, A; Tayalati, Y; Taylor, F E; Taylor, G N; Taylor, W; Teischinger, F A; Teixeira Dias Castanheira, M; Teixeira-Dias, P; Temming, K K; Ten Kate, H; Teng, P K; Teoh, J J; Tepel, F; Terada, S; Terashi, K; Terron, J; Terzo, S; Testa, M; Teuscher, R J; Therhaag, J; Theveneaux-Pelzer, T; Thomas, J P; Thomas-Wilsker, J; Thompson, E N; Thompson, P D; Thompson, R J; Thompson, A S; Thomsen, L A; Thomson, E; Thomson, M; Thun, R P; Tibbetts, M J; Torres, R E Ticse; Tikhomirov, V O; Tikhonov, Yu A; Timoshenko, S; Tiouchichine, E; Tipton, P; Tisserant, S; Todorov, T; Todorova-Nova, S; Tojo, J; Tokár, S; Tokushuku, K; Tollefson, K; Tolley, E; Tomlinson, L; Tomoto, M; Tompkins, L; Toms, K; Torrence, E; Torres, H; 
Torró Pastor, E; Toth, J; Touchard, F; Tovey, D R; Trefzger, T; Tremblet, L; Tricoli, A; Trigger, I M; Trincaz-Duvoid, S; Tripiana, M F; Trischuk, W; Trocmé, B; Troncon, C; Trottier-McDonald, M; Trovatelli, M; True, P; Truong, L; Trzebinski, M; Trzupek, A; Tsarouchas, C; Tseng, J C-L; Tsiareshka, P V; Tsionou, D; Tsipolitis, G; Tsirintanis, N; Tsiskaridze, S; Tsiskaridze, V; Tskhadadze, E G; Tsukerman, I I; Tsulaia, V; Tsuno, S; Tsybychev, D; Tudorache, A; Tudorache, V; Tuna, A N; Tupputi, S A; Turchikhin, S; Turecek, D; Turra, R; Turvey, A J; Tuts, P M; Tykhonov, A; Tylmad, M; Tyndel, M; Ueda, I; Ueno, R; Ughetto, M; Ugland, M; Uhlenbrock, M; Ukegawa, F; Unal, G; Undrus, A; Unel, G; Ungaro, F C; Unno, Y; Unverdorben, C; Urban, J; Urquijo, P; Urrejola, P; Usai, G; Usanova, A; Vacavant, L; Vacek, V; Vachon, B; Valderanis, C; Valencic, N; Valentinetti, S; Valero, A; Valery, L; Valkar, S; Valladolid Gallego, E; Vallecorsa, S; Valls Ferrer, J A; Van Den Wollenberg, W; Van Der Deijl, P C; van der Geer, R; van der Graaf, H; Van Der Leeuw, R; van Eldik, N; van Gemmeren, P; Van Nieuwkoop, J; van Vulpen, I; van Woerden, M C; Vanadia, M; Vandelli, W; Vanguri, R; Vaniachine, A; Vannucci, F; Vardanyan, G; Vari, R; Varnes, E W; Varol, T; Varouchas, D; Vartapetian, A; Varvell, K E; Vazeille, F; Vazquez Schroeder, T; Veatch, J; Veloso, F; Velz, T; Veneziano, S; Ventura, A; Ventura, D; Venturi, M; Venturi, N; Venturini, A; Vercesi, V; Verducci, M; Verkerke, W; Vermeulen, J C; Vest, A; Vetterli, M C; Viazlo, O; Vichou, I; Vickey, T; Vickey Boeriu, O E; Viehhauser, G H A; Viel, S; Vigne, R; Villa, M; Villaplana Perez, M; Vilucchi, E; Vincter, M G; Vinogradov, V B; Vivarelli, I; Vives Vaque, F; Vlachos, S; Vladoiu, D; Vlasak, M; Vogel, M; Vokac, P; Volpi, G; Volpi, M; von der Schmitt, H; von Radziewski, H; von Toerne, E; Vorobel, V; Vorobev, K; Vos, M; Voss, R; Vossebeld, J H; Vranjes, N; Vranjes Milosavljevic, M; Vrba, V; Vreeswijk, M; Vuillermet, R; Vukotic, I; Vykydal, Z; Wagner, P; Wagner, W; Wahlberg, H; Wahrmund, S; Wakabayashi, J; Walder, J; Walker, R; Walkowiak, W; Wang, C; Wang, F; Wang, H; Wang, H; Wang, J; Wang, J; Wang, K; Wang, R; Wang, S M; Wang, T; Wang, X; Wanotayaroj, C; Warburton, A; Ward, C P; Wardrope, D R; Warsinsky, M; Washbrook, A; Wasicki, C; Watkins, P M; Watson, A T; Watson, I J; Watson, M F; Watts, G; Watts, S; Waugh, B M; Webb, S; Weber, M S; Weber, S W; Webster, J S; Weidberg, A R; Weinert, B; Weingarten, J; Weiser, C; Weits, H; Wells, P S; Wenaus, T; Wengler, T; Wenig, S; Wermes, N; Werner, M; Werner, P; Wessels, M; Wetter, J; Whalen, K; Wharton, A M; White, A; White, M J; White, R; White, S; Whiteson, D; Wickens, F J; Wiedenmann, W; Wielers, M; Wienemann, P; Wiglesworth, C; Wiik-Fuchs, L A M; Wildauer, A; Wilkens, H G; Williams, H H; Williams, S; Willis, C; Willocq, S; Wilson, A; Wilson, J A; Wingerter-Seez, I; Winklmeier, F; Winter, B T; Wittgen, M; Wittkowski, J; Wollstadt, S J; Wolter, M W; Wolters, H; Wosiek, B K; Wotschack, J; Woudstra, M J; Wozniak, K W; Wu, M; Wu, M; Wu, S L; Wu, X; Wu, Y; Wyatt, T R; Wynne, B M; Xella, S; Xu, D; Xu, L; Yabsley, B; Yacoob, S; Yakabe, R; Yamada, M; Yamaguchi, Y; Yamamoto, A; Yamamoto, S; Yamanaka, T; Yamauchi, K; Yamazaki, Y; Yan, Z; Yang, H; Yang, H; Yang, Y; Yao, L; Yao, W-M; Yasu, Y; Yatsenko, E; Yau Wong, K H; Ye, J; Ye, S; Yeletskikh, I; Yen, A L; Yildirim, E; Yorita, K; Yoshida, R; Yoshihara, K; Young, C; Young, C J S; Youssef, S; Yu, D R; Yu, J; Yu, J M; Yu, J; Yuan, L; Yurkewicz, A; Yusuff, I; Zabinski, B; Zaidan, R; Zaitsev, A M; 
Zalieckas, J; Zaman, A; Zambito, S; Zanello, L; Zanzi, D; Zeitnitz, C; Zeman, M; Zemla, A; Zengel, K; Zenin, O; Ženiš, T; Zerwas, D; Zhang, D; Zhang, F; Zhang, J; Zhang, L; Zhang, R; Zhang, X; Zhang, Z; Zhao, X; Zhao, Y; Zhao, Z; Zhemchugov, A; Zhong, J; Zhou, B; Zhou, C; Zhou, L; Zhou, L; Zhou, N; Zhu, C G; Zhu, H; Zhu, J; Zhu, Y; Zhuang, X; Zhukov, K; Zibell, A; Zieminska, D; Zimine, N I; Zimmermann, C; Zimmermann, S; Zinonos, Z; Zinser, M; Ziolkowski, M; Živković, L; Zobernig, G; Zoccoli, A; Zur Nedden, M; Zurzolo, G; Zwalinski, L

    A search for a new resonance decaying to a W or Z boson and a Higgs boson in the [Formula: see text] final states is performed using 20.3 fb⁻¹ of pp collision data recorded at √s = 8 TeV with the ATLAS detector at the Large Hadron Collider. The search is conducted by examining the WH/ZH invariant mass distribution for a localized excess. No significant deviation from the Standard Model background prediction is observed. The results are interpreted in terms of constraints on the Minimal Walking Technicolor model and on a simplified approach based on a phenomenological Lagrangian of Heavy Vector Triplets.

  3. Breakdown of the Migdal-Eliashberg theory: A determinant quantum Monte Carlo study

    DOE PAGES

    Esterlis, I.; Nosarzewski, B.; Huang, E. W.; ...

    2018-04-02

    The superconducting (SC) and charge-density-wave (CDW) susceptibilities of the two-dimensional Holstein model are computed using determinant quantum Monte Carlo, and compared with results computed using the Migdal-Eliashberg (ME) approach. We access temperatures as low as 25 times less than the Fermi energy, EF, which are still above the SC transition. We find that the SC susceptibility at low T agrees quantitatively with the ME theory up to a dimensionless electron-phonon coupling λ0 ≈ 0.4 but deviates dramatically for larger λ0. We find that for large λ0 and small phonon frequency ω0 ≪ EF, CDW ordering is favored and the preferred CDW ordering vector is uncorrelated with any obvious feature of the Fermi surface.

  4. Breakdown of the Migdal-Eliashberg theory: A determinant quantum Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Esterlis, I.; Nosarzewski, B.; Huang, E. W.; Moritz, B.; Devereaux, T. P.; Scalapino, D. J.; Kivelson, S. A.

    2018-04-01

    The superconducting (SC) and charge-density-wave (CDW) susceptibilities of the two-dimensional Holstein model are computed using determinant quantum Monte Carlo, and compared with results computed using the Migdal-Eliashberg (ME) approach. We access temperatures as low as 25 times less than the Fermi energy, EF, which are still above the SC transition. We find that the SC susceptibility at low T agrees quantitatively with the ME theory up to a dimensionless electron-phonon coupling λ0≈0.4 but deviates dramatically for larger λ0. We find that for large λ0 and small phonon frequency ω0≪EF CDW ordering is favored and the preferred CDW ordering vector is uncorrelated with any obvious feature of the Fermi surface.

  5. Some limit theorems for ratios of order statistics from uniform random variables.

    PubMed

    Xu, Shou-Fang; Miao, Yu

    2017-01-01

    In this paper, we study the ratios of order statistics based on samples drawn from the uniform distribution and establish some limit properties, such as the almost sure central limit theorem, the large deviation principle, the Marcinkiewicz-Zygmund law of large numbers, and complete convergence.
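
    The limiting behaviour of such ratios is easy to probe numerically. Below is a minimal Monte Carlo sketch; the specific ratio U_(k)/U_(n) and the sample sizes are illustrative assumptions of ours, not necessarily the statistics studied in the paper.

```python
# Monte Carlo sketch for a ratio of uniform order statistics.
# The ratio U_(k)/U_(n) and all sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, k, trials = 1000, 10, 5000

ratios = np.empty(trials)
for t in range(trials):
    u = np.sort(rng.uniform(size=n))
    ratios[t] = u[k - 1] / u[n - 1]          # U_(k) / U_(n)

# For fixed k, n*U_(k) converges in distribution to Gamma(k, 1), and
# U_(n) -> 1, so the empirical mean of n*ratio should hover near k.
print("mean of n * U_(k)/U_(n):", n * ratios.mean())
```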

  6. Deviation from intention to treat analysis in randomised trials and treatment effect estimates: meta-epidemiological study.

    PubMed

    Abraha, Iosief; Cherubini, Antonio; Cozzolino, Francesco; De Florio, Rita; Luchetta, Maria Laura; Rimland, Joseph M; Folletti, Ilenia; Marchesi, Mauro; Germani, Antonella; Orso, Massimiliano; Eusebi, Paolo; Montedori, Alessandro

    2015-05-27

    To examine whether deviation from the standard intention to treat analysis has an influence on treatment effect estimates of randomised trials. Meta-epidemiological study. Medline, via PubMed, searched between 2006 and 2010; 43 systematic reviews of interventions and 310 randomised trials were included. From each year searched, random selection of 5% of intervention reviews with a meta-analysis that included at least one trial that deviated from the standard intention to treat approach. Basic characteristics of the systematic reviews and randomised trials were extracted. Information on the reporting of intention to treat analysis, outcome data, risk of bias items, post-randomisation exclusions, and funding was extracted from each trial. Trials were classified as: ITT (reporting the standard intention to treat approach), mITT (reporting a deviation from the standard approach), and no ITT (reporting no approach). Within each meta-analysis, treatment effects were compared between mITT and ITT trials, and between mITT and no ITT trials. The ratio of odds ratios was calculated (value <1 indicated larger treatment effects in mITT trials than in other trial categories). 50 meta-analyses and 322 comparisons of randomised trials (from 84 ITT trials, 118 mITT trials, and 108 no ITT trials; 12 trials contributed twice to the analysis) were examined. Compared with ITT trials, mITT trials showed a larger intervention effect (pooled ratio of odds ratios 0.83 (95% confidence interval 0.71 to 0.96), P=0.01; between-meta-analyses variance τ²=0.13). Adjustments for sample size, type of centre, funding, items of risk of bias, post-randomisation exclusions, and variance of log odds ratio yielded consistent results (0.80 (0.69 to 0.94), P=0.005; τ²=0.08). After exclusion of five influential studies, results remained consistent (0.85 (0.75 to 0.98); τ²=0.08). The comparison between mITT trials and no ITT trials showed no statistical difference between the two groups (adjusted ratio of odds ratios 0.92 (0.70 to 1.23); τ²=0.57). Trials that deviated from the intention to treat analysis showed larger intervention effects than trials that reported the standard approach. Where an intention to treat analysis is impossible to perform, authors should clearly report who is included in the analysis and attempt to perform multiple imputations. © Abraha et al 2015.
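
    The comparison statistic of this study, the ratio of odds ratios, can be illustrated with a toy computation. The sketch below uses invented 2x2 counts for a hypothetical mITT/ITT trial pair, not data from the review; it only shows how a ROR below 1 signals a larger apparent effect in the mITT trial.

```python
# Toy computation of the "ratio of odds ratios" (ROR). All counts are
# invented for illustration and are not data from the study.
import math

def odds_ratio(events_t, total_t, events_c, total_c):
    """Odds ratio of a binary outcome, treatment vs. control arm."""
    a, b = events_t, total_t - events_t
    c, d = events_c, total_c - events_c
    return (a * d) / (b * c)

or_mitt = odds_ratio(30, 100, 45, 100)   # hypothetical mITT trial
or_itt = odds_ratio(38, 100, 45, 100)    # hypothetical ITT trial

ror = or_mitt / or_itt                   # <1: larger effect in the mITT trial
print(f"ROR = {ror:.2f}, log-ROR = {math.log(ror):.3f}")
```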

  7. Computing 3-D wavefields in mantle circulations models to test hypotheses on the origin of lower mantle heterogeneity under Africa directly against seismic observations

    NASA Astrophysics Data System (ADS)

    Schuberth, Bernhard; Zaroli, Christophe; Nolet, Guust

    2015-04-01

    Of particular interest for the tectonic evolution of the Atlantic region is the influence of lower mantle structure under Africa on flow in the upper mantle beneath the ocean basin. Along with its Pacific counterpart, the large African anomaly in the lowermost mantle with strongly reduced seismic velocities has received considerable attention in seismological and geodynamic studies. Several seismological observations are typically taken as an indication that these two anomalies are being caused by large-scale compositional variations and that they are piles of material with higher density than normal mantle rock. This would imply negative buoyancy in the lowermost mantle under Africa, which has important implications for the flow at shallower depth and inferences on the processes that led to the formation of the Atlantic Ocean basin. However, a large number of recent studies argue for a strong thermal gradient across the core-mantle boundary that might provide an alternative explanation for the lower mantle anomaly through the resulting large lateral temperature variations. Recently, we developed a new joint forward modeling approach to test such geodynamic hypotheses directly against the seismic observations: Seismic heterogeneity is predicted by converting the temperature field of a high-resolution 3-D mantle circulation model into seismic velocities using thermodynamic models of mantle mineralogy. 3-D global wave propagation in the synthetic elastic structures is then simulated using a spectral element method. Being based on forward modelling only, this approach allows us to generate synthetic wavefields and seismograms independently of seismic observations. The statistics of observed long-period body wave traveltime variations show a markedly different behaviour for P- and S-waves: the standard deviation of P-wave delay times stays almost constant with ray turning depth, while that of the S-wave delay times increases strongly throughout the mantle. In an earlier study, we showed that synthetic traveltime variations computed for an isochemical mantle circulation model with strong core heating can reproduce these different trends. This was taken as a strong indication that seismic heterogeneity in the lower mantle is likely dominated by thermal variations on large length-scales (i.e., relevant for long-period body waves). We will discuss the robustness of this earlier conclusion by exploring the uncertainties in the mineralogical models used to convert temperatures to seismic velocities. In particular, we investigate the influence of anelasticity on the standard deviation of our synthetic traveltime variations. Owing to the differences in seismic frequency content between laboratory measurements (MHz to GHz) and the Earth (mHz to Hz), the seismic velocities given in the mineralogical model need to be adjusted; that is, corrected for dispersion due to anelastic effects.

  8. [Strabismus surgery in Graves' disease--dose-effect relationships and functional results].

    PubMed

    Schittkowski, M; Fichter, N; Guthoff, R

    2004-11-01

    Strabismus in thyroid ophthalmopathy is based on a loss of the contractility and distensibility of the external ocular muscles. Different therapeutic approaches are available, such as recession after pre- or intraoperative measurement, adjustable sutures, antagonist resection, or contralateral synergist faden operation. 26 patients with strabismus in thyroid ophthalmopathy were operated on between 2000 and 2003. All patients were examined preoperatively, then 1 day and 3-6 months (maximum 36 months) postoperatively. Before proceeding with surgery, we waited at least 6 months after stabilization of ocular alignment and normalization of thyroid chemistry. Preoperative vertical deviation was 10-44 PD (mean 22); 3 months postoperatively it was 2-10 PD (mean 1.5). Recession of the fibrotic muscle leads to reproducible results: 3.98 +/- 0.52 PD of vertical deviation per mm of recession for the inferior rectus. In the case of a large preoperative deviation, the correction achieved may not appear sufficient in the first few days or weeks; a second operation should not be carried out before 3 months. 7 patients were operated on twice, and 1 patient needed three operations. 4 patients (preop. 0) achieved no double vision at all; 15 patients (preop. 1) had no double vision in the primary and reading positions; 3 patients (preop. 0) had no double vision with a maximum of 5 PD; 1 patient (preop. 7) had double vision in the primary or reading position even with prisms; and 2 patients (preop. 17) had double vision in every position. We advocate that recession of the restricted inferior or internal rectus muscle is precise, safe and effective in patients with thyroid ophthalmopathy. The recessed muscle should be fixed directly at the sclera to avoid late over-correction through a slipped muscle. The success rate in terms of binocular single vision was 76% and 88% with prisms added.
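
    The reported dose-effect relationship lends itself to a back-of-the-envelope planning rule: divide the preoperative deviation by the 3.98 PD/mm figure. The sketch below is an illustration of that arithmetic only, not a clinical planning tool.

```python
# Back-of-the-envelope use of the reported dose-effect relationship
# (3.98 PD of vertical correction per mm of inferior rectus recession).
# Purely arithmetic illustration, not a clinical tool.
DOSE_EFFECT_PD_PER_MM = 3.98

def recession_mm(vertical_deviation_pd):
    """Recession (mm) expected to neutralize a vertical deviation (PD)."""
    return vertical_deviation_pd / DOSE_EFFECT_PD_PER_MM

for pd_dev in (10, 22, 44):   # range and mean reported in the abstract
    print(f"{pd_dev:2d} PD -> {recession_mm(pd_dev):.1f} mm recession")
```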

  9. Baryon-antibaryon dynamics in relativistic heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Seifert, E.; Cassing, W.

    2018-04-01

    The dynamics of baryon-antibaryon annihilation and reproduction (BB̄ ↔ 3M) is studied within the Parton-Hadron-String Dynamics (PHSD) transport approach for Pb+Pb and Au+Au collisions as a function of centrality from lower Super Proton Synchrotron (SPS) up to Large Hadron Collider (LHC) energies on the basis of the quark rearrangement model. At Relativistic Heavy-Ion Collider (RHIC) energies we find a small net reduction of baryon-antibaryon (BB̄) pairs while for the LHC energy of √s_NN = 2.76 TeV a small net enhancement is found relative to calculations without annihilation (and reproduction) channels. Accordingly, the sizable difference between data and statistical calculations in Pb+Pb collisions at √s_NN = 2.76 TeV for proton and antiproton yields [ALICE Collaboration, B. Abelev et al., Phys. Rev. C 88, 044910 (2013), 10.1103/PhysRevC.88.044910], where a deviation of 2.7σ was claimed by the ALICE Collaboration, should not be attributed to a net antiproton annihilation. This is in line with the observation that no substantial deviation between the data and statistical hadronization model (SHM) calculations is seen for antihyperons, since according to the PHSD analysis the antihyperons should be modified by the same amount as antiprotons. As the PHSD results for particle ratios are in line with the ALICE data (within error bars) this might point towards a deviation from statistical equilibrium in the hadronization (at least for protons and antiprotons). Furthermore, we find that the BB̄ ↔ 3M reactions are more effective at lower SPS energies where a net suppression for antiprotons and antihyperons up to a factor of 2-2.5 can be extracted from the PHSD calculations for central Au+Au collisions.

  10. Radiotherapy quality assurance report from children's oncology group AHOD0031

    PubMed Central

    Dharmarajan, Kavita V.; Friedman, Debra L.; FitzGerald, T.J.; McCarten, Kathleen M.; Constine, Louis S.; Chen, Lu; Kessel, Sandy K.; Iandoli, Matt; Laurie, Fran; Schwartz, Cindy L.; Wolden, Suzanne L.

    2016-01-01

    Purpose A phase III trial assessing response-based therapy in intermediate-risk Hodgkin lymphoma mandated real-time central review of involved field radiotherapy (IFRT) and imaging records by a centralized review center, the Quality Assurance Review Center (QARC), to maximize protocol compliance. We report the impact of centralized radiotherapy review upon protocol compliance. Methods Review of simulation films, port films, and dosimetry records was required pre-treatment and after treatment completion. Records were reviewed by study-affiliated or review center-affiliated radiation oncologists. A 6–10% deviation from protocol-specified dose was scored as "minor"; >10% was "major". A volume deviation was scored as "minor" if margins were less than specified, or "major" if fields transected disease-bearing areas. Interventional review and final compliance review scores were assigned to each radiotherapy case and compared. Results Of 1712 patients enrolled, 1173 underwent IFRT at 256 institutions in 7 countries. An interventional review was performed in 88% and a final review in 98%. Overall, minor and major deviations were found in 12% and 6%, respectively. Among the cases for which ≥ 1 pre-IFRT modification was requested by QARC and subsequently made by the treating institution, 100% were made compliant on final review. In contrast, among the cases for which ≥ 1 modification was requested but not made by the treating institution, 10% were deemed compliant on final review. Conclusion In a large trial with complex treatment pathways and heterogeneous radiotherapy fields, central review was performed in a large percentage of cases pre-IFRT and identified frequent potential deviations in a timely manner. When suggested modifications were performed by the institutions, deviations were almost eliminated. PMID:25670539
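
    The dose-deviation scoring rule quoted above maps directly onto a small classifier. A sketch under the stated thresholds (6-10% minor, >10% major); the function name and the example doses are ours.

```python
# Sketch of the dose-deviation scoring rule: a 6-10% deviation from the
# protocol-specified dose is "minor", >10% is "major". Example doses (Gy)
# are invented for illustration.
def score_dose_deviation(delivered, specified):
    deviation_pct = abs(delivered - specified) / specified * 100.0
    if deviation_pct > 10.0:
        return "major"
    if deviation_pct >= 6.0:
        return "minor"
    return "compliant"

print(score_dose_deviation(21.0, 21.0))   # compliant
print(score_dose_deviation(22.6, 21.0))   # minor (~7.6%)
print(score_dose_deviation(24.0, 21.0))   # major (~14.3%)
```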

  11. A more accurate method using MOVES (Motor Vehicle Emission Simulator) to estimate emission burden for regional-level analysis.

    PubMed

    Liu, Xiaobo

    2015-07-01

    The U.S. Environmental Protection Agency's (EPA) Motor Vehicle Emission Simulator (MOVES) is required by the EPA to replace MOBILE6 as the official on-road emission model. Supplied with annual vehicle miles traveled (VMT) by Highway Performance Monitoring System (HPMS) vehicle class, MOVES allocates VMT from HPMS classes to MOVES source (vehicle) types and calculates the emission burden by MOVES source type. However, the calculated running emission burden by MOVES source type may deviate from the actual emission burden because of the MOVES source population, specifically the population fraction of each MOVES source type within an HPMS vehicle class. The deviation is also the result of using the universal set of parameters, i.e., the relative mileage accumulation rate (relativeMAR), packaged in the MOVES default database. This paper presents a novel approach that adjusts the relativeMAR to eliminate the impact of MOVES source population on running exhaust emissions while keeping start and evaporative emissions unchanged, for both MOVES2010b and MOVES2014. Results from MOVES runs using this approach indicated significant improvements in VMT distribution and emission burden estimation for each MOVES source type. The deviation of VMT by MOVES source type is reduced from 12% to less than 0.05% for MOVES2010b and from 50% to less than 0.2% for MOVES2014, except for MOVES source type 53, which still shows about 30% deviation. The improved VMT distribution effectively eliminates the emission burden deviation for each MOVES source type. For MOVES2010b, the deviation of emission burdens decreases from -12% for particulate matter less than 2.5 μm (PM2.5) and -9% for carbon monoxide (CO) to less than 0.002%. For MOVES2014, it drops from 80% for CO and 97% for PM2.5 to 0.006%. The approach achieves this more accurate estimate of the total emission burdens by redistributing VMT from HPMS classes to MOVES source types on the basis of a comprehensive traffic study with local link-by-link VMT broken down into MOVES source types.
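
    The heart of the approach is reallocating HPMS VMT to MOVES source types using locally observed fractions instead of the default population-weighted split. A schematic sketch with invented numbers; real fractions would come from the link-by-link traffic study the abstract mentions.

```python
# Schematic of redistributing HPMS vehicle miles traveled (VMT) to MOVES
# source types using locally observed fractions. All numbers are invented.
hpms_vmt = {"passenger": 1.2e9, "single_unit_trucks": 1.5e8}

local_fractions = {   # hypothetical local shares of each HPMS class
    "passenger": {"21 passenger car": 0.70, "31 passenger truck": 0.30},
    "single_unit_trucks": {"52 short-haul truck": 0.60, "53 long-haul truck": 0.40},
}

source_type_vmt = {}
for hpms_class, vmt in hpms_vmt.items():
    for source_type, frac in local_fractions[hpms_class].items():
        source_type_vmt[source_type] = vmt * frac

for st, vmt in source_type_vmt.items():
    print(f"{st:20s} {vmt:.3e} miles")
```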

  12. Possibilities of inversion of satellite third-order gravitational tensor onto gravity anomalies: a case study for central Europe

    NASA Astrophysics Data System (ADS)

    Pitoňák, Martin; Šprlák, Michal; Tenzer, Robert

    2017-05-01

    We investigate the numerical performance of four different schemes applied to a regional recovery of gravity anomalies from the third-order gravitational tensor components (assumed to be observable in the future) synthesized at the satellite altitude of 200 km above the mean sphere. The first approach is based on applying a regional inversion without modelling the far-zone contribution or long-wavelength support. In the second approach we separate the integral formulas into two parts, that is, the effects of the third-order disturbing tensor data within the near and far zones. Whereas the far-zone contribution is evaluated by using an existing global geopotential model (GGM) with spectral weights given by truncation error coefficients, the near-zone contribution is solved by applying a regional inversion. We then extend this approach with a smoothing procedure, in which we remove the gravitational contributions of the topographic-isostatic and atmospheric masses. Finally, we apply the remove-compute-restore (r-c-r) scheme in order to reduce the far-zone contribution by subtracting the reference (long-wavelength) gravity field, which is computed for maximum degree 80. We apply these four numerical schemes to a regional recovery of the gravity anomalies from individual components of the third-order gravitational tensor as well as from their combinations, while applying two different levels of white noise. We validated our results with respect to gravity anomalies evaluated at the mean sphere from EGM2008 up to degree 250. Not surprisingly, a better fit in terms of standard deviation (STD) was attained with the lower noise level. The worst results were obtained with the classical approach: the STD values of our solution from Tzzz are 1.705 mGal (noise standard deviation 0.01 × 10⁻¹⁵ m⁻¹ s⁻²) and 2.005 mGal (noise standard deviation 0.05 × 10⁻¹⁵ m⁻¹ s⁻²). The best results come from the r-c-r scheme up to degree 80, for which the STD fit of the gravity anomalies from Tzzz with respect to the EGM2008 counterpart is 0.510 mGal (noise standard deviation 0.01 × 10⁻¹⁵ m⁻¹ s⁻²) and 1.190 mGal (noise standard deviation 0.05 × 10⁻¹⁵ m⁻¹ s⁻²).
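
    The remove-compute-restore logic reduces to three lines of control flow. In the sketch below the GGM synthesis and the regional inversion are stubs we introduce for illustration; only the remove/compute/restore structure is meant to be taken literally.

```python
# Control-flow sketch of the remove-compute-restore (r-c-r) scheme.
# The helper functions are placeholder stubs, not real implementations.
import numpy as np

def reference_tensor(lmax=80):      # long-wavelength Tzzz from a GGM (stub)
    return np.zeros((90, 90))

def reference_anomaly(lmax=80):     # long-wavelength gravity anomaly (stub)
    return np.zeros((90, 90))

def regional_inversion(residual):   # regional inversion of residual data (stub)
    return np.zeros_like(residual)

observed_tzzz = np.random.default_rng(1).normal(0.0, 1e-17, (90, 90))

residual_tzzz = observed_tzzz - reference_tensor(lmax=80)        # "remove"
residual_anomaly = regional_inversion(residual_tzzz)             # "compute"
gravity_anomaly = residual_anomaly + reference_anomaly(lmax=80)  # "restore"
print("recovered anomaly grid:", gravity_anomaly.shape)
```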

  13. Hurricane track forecast cones from fluctuations

    PubMed Central

    Meuel, T.; Prado, G.; Seychelles, F.; Bessafi, M.; Kellay, H.

    2012-01-01

    Trajectories of tropical cyclones may show large deviations from predicted tracks, leading to uncertainty as to their landfall location, for example. Prediction schemes usually render this uncertainty by showing track forecast cones representing the most probable region for the location of a cyclone during a period of time. By using the statistical properties of these deviations, we propose a simple method to predict possible corridors for the future trajectory of a cyclone. Examples of this scheme are implemented for hurricane Ike and hurricane Jimena. The corridors include the future trajectory up to at least 50 h before landfall. The cones proposed here shed new light on known track forecast cones as they link them directly to the statistics of these deviations. PMID:22701776
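
    The corridor construction can be mimicked with synthetic data: estimate the standard deviation of historical cross-track deviations as a function of lead time, then widen the predicted track by a multiple of it. The distributions and the 2-sigma width below are our assumptions, not the paper's calibration.

```python
# Toy deviation-based corridor: widen a predicted track by 2 sigma of the
# historical cross-track deviations at each lead time. All data synthetic.
import numpy as np

rng = np.random.default_rng(2)
lead_hours = np.arange(0, 72, 6)

# Hypothetical archive of cross-track deviations (km) from many past storms.
past_dev = rng.normal(0.0, 10.0 + 2.0 * lead_hours, size=(200, lead_hours.size))
sigma = past_dev.std(axis=0)              # deviation growth with lead time

predicted = np.zeros(lead_hours.size)     # cross-track of the predicted track
lo, hi = predicted - 2.0 * sigma, predicted + 2.0 * sigma

for t, a, b in zip(lead_hours, lo, hi):
    print(f"+{t:2d} h: corridor = [{a:7.1f}, {b:7.1f}] km")
```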

  14. Analysis of DGPS/INS and MLS/INS final approach navigation errors and control performance data

    NASA Technical Reports Server (NTRS)

    Hueschen, Richard M.; Spitzer, Cary R.

    1992-01-01

    Flight tests were conducted jointly by NASA Langley Research Center and Honeywell, Inc., on a B-737 research aircraft to record a database for evaluating the performance of a differential GPS (DGPS)/inertial navigation system (INS) which used GPS Coarse/Acquisition (C/A) code receivers. Estimates from the DGPS/INS and a Microwave Landing System (MLS)/INS, along with various aircraft parameter data, were recorded in real time aboard the aircraft while flying along the final approach path to landing. This paper presents the mean and standard deviation of the DGPS/INS and MLS/INS navigation position errors computed relative to the laser tracker system, and of the difference between the DGPS/INS and MLS/INS velocity estimates. RMS errors are presented for the DGPS/INS and MLS/INS guidance errors (localizer and glideslope). The mean navigation position errors and the standard deviation of the x position coordinate errors of the DGPS/INS and MLS/INS systems were found to be of similar magnitude, while the standard deviations of the y and z position coordinate errors were significantly larger for DGPS/INS compared to MLS/INS.
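
    The reported quantities reduce to means, standard deviations, and RMS values over the recorded error time series. The sketch below applies those statistics to synthetic stand-ins for the flight data; the error magnitudes are invented.

```python
# Mean/std of position errors and RMS guidance error, computed on synthetic
# stand-in time series (invented magnitudes, metres and degrees).
import numpy as np

rng = np.random.default_rng(3)
xyz_err = rng.normal([1.0, 2.0, -1.5], [2.0, 8.0, 6.0], size=(600, 3))
glideslope_err = rng.normal(0.0, 0.3, size=600)

print("mean xyz error [m]:", np.round(xyz_err.mean(axis=0), 2))
print("std  xyz error [m]:", np.round(xyz_err.std(axis=0, ddof=1), 2))
print(f"RMS glideslope error [deg]: {np.sqrt(np.mean(glideslope_err**2)):.3f}")
```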

  15. Approaches for springback reduction when forming ultra high-strength sheet metals

    NASA Astrophysics Data System (ADS)

    Radonjic, R.; Liewald, M.

    2016-11-01

    Nowadays, the automotive industry is constantly challenged by increasing environmental regulations and the continuous tightening of standards with regard to passengers' safety (NCAP, Part 1). In order to fulfil these requirements, the use of ultra high-strength steels in research and industrial applications is of high interest. When forming such materials, the main problem is the large amount of springback which occurs after the release of the part. This paper shows the applicability of several approaches for reducing springback when forming a hat channel shaped component. A novel approach for springback reduction based on forming with an alternating blank draw-in is presented as well. In this investigation an ultra high-strength steel of grade DP 980 was used. The part was measured at significant cross-sections in order to provide a qualitative comparison between the reference geometry and the part's released shape. The obtained results were analysed and used to quantify the success of the particular approaches for springback reduction. Taking a curved hat channel shaped component as an example, the results showed that part shape deviations can be reduced significantly when using DP 980 as the workpiece material.

  16. Dense image registration through MRFs and efficient linear programming.

    PubMed

    Glocker, Ben; Komodakis, Nikos; Tziritas, Georgios; Navab, Nassir; Paragios, Nikos

    2008-12-01

    In this paper, we introduce a novel and efficient approach to dense image registration, which does not require a derivative of the employed cost function. In such a context, the registration problem is formulated using a discrete Markov random field objective function. First, towards dimensionality reduction on the variables, we assume that the dense deformation field can be expressed using a small number of control points (registration grid) and an interpolation strategy. Then, the registration cost is expressed using a discrete sum over image costs (using an arbitrary similarity measure) projected on the control points, and a smoothness term that penalizes local deviations on the deformation field according to a neighborhood system on the grid. Towards a discrete approach, the search space is quantized, resulting in a fully discrete model. In order to account for large deformations and produce results on a high resolution level, a multi-scale incremental approach is considered where the optimal solution is iteratively updated. This is done through successive morphings of the source towards the target image. Efficient linear programming using the primal-dual principles is considered to recover the lowest potential of the cost function. Very promising results using synthetic data with known deformations and real data demonstrate the potential of our approach.
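
    The discrete objective described above, unary image-similarity costs on a control-point grid plus a pairwise smoothness penalty on neighboring displacement labels, can be written down compactly. The sketch below only evaluates that energy for a given labeling; the primal-dual linear-programming optimizer is not reproduced, and the costs are random stand-ins.

```python
# Minimal MRF registration energy: unary (image similarity) cost per control
# point plus a pairwise smoothness penalty between grid neighbors.
import numpy as np

labels = np.array([(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)])
grid_shape = (4, 4)                     # control-point grid
rng = np.random.default_rng(5)
unary = rng.uniform(0, 1, grid_shape + (len(labels),))  # stand-in image costs

def mrf_energy(labeling, lam=0.5):
    e = sum(unary[i, j, labeling[i, j]]
            for i in range(grid_shape[0]) for j in range(grid_shape[1]))
    for i in range(grid_shape[0]):      # smoothness over a 4-neighborhood
        for j in range(grid_shape[1]):
            for di, dj in ((1, 0), (0, 1)):
                ni, nj = i + di, j + dj
                if ni < grid_shape[0] and nj < grid_shape[1]:
                    d = labels[labeling[i, j]] - labels[labeling[ni, nj]]
                    e += lam * np.abs(d).sum()
    return e

zero_label = np.flatnonzero((labels == 0).all(axis=1))[0]
identity = np.full(grid_shape, zero_label)
print("energy of identity deformation:", mrf_energy(identity))
```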

  17. Correction of Bowtie-Filter Normalization and Crescent Artifacts for a Clinical CBCT System.

    PubMed

    Zhang, Hong; Kong, Vic; Huang, Ke; Jin, Jian-Yue

    2017-02-01

    To present our experiences in understanding and minimizing bowtie-filter crescent artifacts and bowtie-filter normalization artifacts in a clinical cone beam computed tomography system. Bowtie-filter position and profile variations during gantry rotation were studied. Two previously proposed strategies (A and B) were applied to the clinical cone beam computed tomography system to correct bowtie-filter crescent artifacts. Physical calibration and analytical approaches were used to minimize the norm phantom misalignment and to correct for bowtie-filter normalization artifacts. A combined procedure to reduce bowtie-filter crescent artifacts and bowtie-filter normalization artifacts was proposed, tested on a norm phantom, a CatPhan, and a patient, and evaluated using the standard deviation of Hounsfield units (HU) along a sampling line. The bowtie-filter exhibited not only a translational shift but also an amplitude variation in its projection profile during gantry rotation. Strategy B was slightly better than strategy A in minimizing bowtie-filter crescent artifacts, possibly because it corrected the amplitude variation, suggesting that the amplitude variation plays a role in bowtie-filter crescent artifacts. The physical calibration largely reduced the misalignment-induced bowtie-filter normalization artifacts, and the analytical approach further reduced bowtie-filter normalization artifacts. The combined procedure minimized both bowtie-filter crescent artifacts and bowtie-filter normalization artifacts, with the HU standard deviation being 63.2, 45.0, 35.0, and 18.8 HU when correcting for neither artifact, crescent artifacts only, normalization artifacts only, and both artifact types, respectively. The combined procedure also demonstrated reduction of bowtie-filter crescent artifacts and bowtie-filter normalization artifacts in a CatPhan and a patient. We have developed a step-by-step procedure that can be directly used in clinical cone beam computed tomography systems to minimize both bowtie-filter crescent artifacts and bowtie-filter normalization artifacts.
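
    The evaluation metric used above is simple to state in code: the standard deviation of Hounsfield units along a sampling line of the reconstructed image. A sketch on a synthetic stand-in image; in a uniform phantom region, a lower value indicates weaker artifacts.

```python
# HU standard deviation along a sampling line; the image is a synthetic
# stand-in for a reconstructed CBCT slice, with invented noise level.
import numpy as np

rng = np.random.default_rng(7)
image = rng.normal(0.0, 20.0, size=(256, 256))   # fake uniform-region slice (HU)

row = 128                                        # sampling line location
line = image[row, :]
print(f"HU standard deviation along line: {line.std(ddof=1):.1f} HU")
```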

  18. 13C tracer experiments and metabolite balancing for metabolic flux analysis: comparing two approaches

    PubMed

    Schmidt; Marx; de Graaf AA; Wiechert; Sahm; Nielsen; Villadsen

    1998-04-05

    Conventional metabolic flux analysis uses the information gained from determination of measurable fluxes and a steady-state assumption for intracellular metabolites to calculate the metabolic fluxes in a given metabolic network. The determination of intracellular fluxes depends heavily on the correctness of the assumed stoichiometry, including the presence of all reactions with a noticeable impact on the model metabolite balances. Determination of fluxes in complex metabolic networks often requires the inclusion of NADH and NADPH balances, which are subject to controversial debate. Transhydrogenation reactions that transfer reduction equivalents from NADH to NADPH or vice versa usually cannot be included in the stoichiometric model, because they result in singularities in the stoichiometric matrix. However, it is the NADPH balance that, to a large extent, determines the calculated flux through the pentose phosphate pathway. Hence, wrong assumptions on the presence or activity of transhydrogenation reactions will result in wrong estimations of the intracellular flux distribution. Using 13C tracer experiments and NMR analysis, flux analysis can be performed on the basis of only well established stoichiometric equations and measurements of the labeling state of intracellular metabolites. Neither NADH/NADPH balancing nor assumptions on energy yields need to be included to determine the intracellular fluxes. Because metabolite balancing methods and the use of 13C labeling measurements are two different approaches to the determination of intracellular fluxes, both methods can be used to verify each other or to discuss the origin and significance of deviations in the results. Flux analysis based entirely on metabolite balancing, and flux analysis including labeling information, have been performed independently for a wild-type strain of Aspergillus oryzae producing alpha-amylase. Two different nitrogen sources, NH4+ and NO3-, have been used to investigate the influence of the NADPH requirements on the intracellular flux distribution. The two different approaches to the calculation of fluxes are compared and deviations in the results are discussed. Copyright 1998 John Wiley & Sons, Inc.
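
    Metabolite balancing amounts to solving the steady-state system S·v = 0 together with the measured fluxes. A least-squares sketch on an invented three-metabolite toy network (far simpler than a real central-metabolism model, and with no NADH/NADPH balances):

```python
# Toy metabolite balancing: solve S v = 0 plus a measured-flux equation by
# least squares. The 3-metabolite network is invented for illustration.
import numpy as np

# Columns: v1 uptake, v2 glycolysis-like, v3 PPP-like, v4 combining step
S = np.array([
    [1, -1, -1,  0],   # metabolite A: made by v1, consumed by v2 and v3
    [0,  1,  0, -1],   # metabolite B: made by v2, consumed by v4
    [0,  0,  1, -1],   # metabolite C: made by v3, consumed by v4
])

v1_measured = 10.0
# Stack the steady-state balances with the measurement equation v1 = 10.
A = np.vstack([S, [1, 0, 0, 0]])
b = np.concatenate([np.zeros(3), [v1_measured]])

v, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated fluxes:", np.round(v, 3))   # expect v2 = v3 = v4 = 5
```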

  19. A model of curved saccade trajectories: spike rate adaptation in the brainstem as the cause of deviation away.

    PubMed

    Kruijne, Wouter; Van der Stigchel, Stefan; Meeter, Martijn

    2014-03-01

    The trajectory of saccades to a target is often affected whenever there is a distractor in the visual field. Distractors can cause a saccade to deviate towards their location or away from it. The oculomotor mechanisms that produce deviation towards distractors have been thoroughly explored in behavioral, neurophysiological and computational studies. The mechanisms underlying deviation away, on the other hand, remain unclear. Behavioral findings suggest a mechanism of spatially focused, top-down inhibition in a saccade map, and deviation away has become a tool to investigate such inhibition. However, this inhibition hypothesis has little neuroanatomical or neurophysiological support, and recent findings go against it. Here, we propose that deviation away results from an unbalanced saccade drive from the brainstem, caused by spike rate adaptation in brainstem long-lead burst neurons. Adaptation to stimulation in the direction of the distractor results in an unbalanced drive away from it. An existing model of the saccade system was extended with this theory. The resulting model simulates a wide range of findings on saccade trajectories, including findings that have classically been interpreted to support inhibition views. Furthermore, the model replicated the effect of saccade latency on deviation away, but predicted this effect would be absent with large (400 ms) distractor-target onset asynchrony. This prediction was confirmed in an experiment, which demonstrates that the theory both explains classical findings on saccade trajectories and predicts new findings. Copyright © 2014 Elsevier Inc. All rights reserved.
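
    The proposed mechanism can be caricatured in a few lines: a channel stimulated by the distractor accumulates adaptation, so when the saccade command later drives both channels, the adapted side contributes less and the net drive points away from the distractor. Time constants, gains, and the two-channel reduction below are our invented simplifications, not the authors' model.

```python
# Caricature of spike-rate adaptation producing an unbalanced saccade drive.
# All parameters are invented; this is a two-channel toy, not the paper's model.
import numpy as np

dt, tau_a, strength = 0.001, 0.15, 0.7      # s, adaptation constant, gain
t = np.arange(0.0, 0.4, dt)
stim_toward = (t < 0.2).astype(float)       # distractor drives "toward" channel
stim_both = (t >= 0.2).astype(float)        # saccade command drives both channels

a, net = 0.0, []
for s_d, s_b in zip(stim_toward, stim_both):
    drive_toward = (s_d + s_b) * (1.0 - strength * a)  # adapted channel
    drive_away = s_b                                   # unadapted channel
    a += dt / tau_a * (drive_toward - a)               # adaptation dynamics
    net.append(drive_away - drive_toward)

print(f"net drive away from distractor at saccade end: {net[-1]:+.2f}")
```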

  20. Effect of visuospatial neglect on spatial navigation and heading after stroke.

    PubMed

    Aravind, Gayatri; Lamontagne, Anouk

    2017-06-09

    Visuospatial neglect (VSN) impairs the control of locomotor heading in post-stroke individuals, which may affect their ability to safely avoid moving objects while walking. We aimed to compare VSN+ and VSN- stroke individuals in terms of changes in heading and head orientation in space while avoiding obstacles approaching from different directions and reorienting toward the final target. Stroke participants with VSN (VSN+) and without VSN (VSN-) walked in a virtual environment avoiding obstacles that approached contralesionally, head-on or ipsilesionally. Measures of obstacle avoidance (onset-of-heading change, maximum mediolateral deviation) and target alignment (heading and head-rotation errors with respect to target) were compared across groups and obstacle directions. In total, 26 participants with right-hemisphere stroke participated (13 VSN+ and 13 VSN-; 24 males; mean age 60.3 years, range 48 to 72 years). A larger proportion of VSN+ (75%) than VSN- (38%) participants collided with contralesional and head-on obstacles. For VSN- participants, deviating to the same side as the obstacle was a safe strategy to avoid diagonal obstacles, whereas deviating to the opposite side led to occasional collisions. VSN+ participants deviated ipsilesionally, displaying same-side and opposite-side strategies for ipsilesional and contralesional obstacles, respectively. Overall, VSN+ participants showed greater distances at onset-of-heading change, smaller maximum mediolateral deviation and larger errors in target alignment as compared with VSN- participants. The ipsilesional bias arising from VSN influences the modulation of heading in response to obstacles and, along with the adoption of the "riskier" strategies, contributes to the higher number of colliders and poor goal-directed walking abilities in stroke survivors with VSN. Future research should focus on developing assessment and training tools for complex locomotor tasks such as obstacle avoidance in this population. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  1. Continuous-time random-walk approach to supercooled liquids: Self-part of the van Hove function and related quantities.

    PubMed

    Helfferich, J; Brisch, J; Meyer, H; Benzerara, O; Ziebert, F; Farago, J; Baschnagel, J

    2018-06-01

    From equilibrium molecular dynamics (MD) simulations of a bead-spring model for short-chain glass-forming polymer melts we calculate several quantities characterizing the single-monomer dynamics near the (extrapolated) critical temperature Tc of mode-coupling theory: the mean-square displacement g0(t), the non-Gaussian parameter α2(t) and the self-part of the van Hove function Gs(r,t), which measures the distribution of monomer displacements r in time t. We also determine these quantities from a continuous-time random walk (CTRW) approach. The CTRW is defined in terms of various probability distributions which we know from previous analysis. Utilizing these distributions the CTRW can be solved numerically and compared to the MD data with no adjustable parameter. The MD results reveal the heterogeneous and non-Gaussian single-particle dynamics of the supercooled melt near Tc. In the time window of the early α relaxation, α2(t) is large and Gs(r,t) is broad, reflecting the coexistence of monomer displacements that are much smaller ("slow particles") and much larger ("fast particles") than the average at time t, i.e. than √g0(t). For large r the tail of Gs(r,t) is compatible with an exponential decay, as found for many glassy systems. The CTRW can reproduce the spatiotemporal dependence of Gs(r,t) at a qualitative to semiquantitative level. However, it is not quantitatively accurate in the studied temperature regime, although the agreement with the MD data improves upon cooling. In the early α regime we also analyze the MD results for Gs(r,t) via the space-time factorization theorem predicted by ideal mode-coupling theory. While we find the factorization to be well satisfied for small r, both above and below Tc, deviations occur for larger r comprising the tail of Gs(r,t). The CTRW analysis suggests that single-particle "hops" are a contributing factor for these deviations.
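
    A generic CTRW of the kind described can be simulated directly: draw waiting times and jump vectors, accumulate displacements, and estimate g0(t) and α2(t) from the ensemble. The exponential and Gaussian distributions below are simple stand-ins, not the distributions extracted from the MD analysis.

```python
# Generic 3D continuous-time random walk: estimate the mean-square
# displacement g0(t) and the non-Gaussian parameter alpha2(t).
# Waiting-time and jump distributions are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(6)
n_walkers, n_jumps = 20000, 60
t_grid = np.array([0.5, 1.0, 2.0, 5.0, 10.0])

wait = rng.exponential(1.0, (n_walkers, n_jumps))       # waiting times
jump = rng.normal(0.0, 1.0, (n_walkers, n_jumps, 3))    # 3D jump vectors
t_jump = np.cumsum(wait, axis=1)

for t in t_grid:
    done = t_jump <= t                                  # jumps completed by t
    r = (jump * done[:, :, None]).sum(axis=1)           # displacement at t
    r2 = (r ** 2).sum(axis=1)
    g0 = r2.mean()
    alpha2 = 3.0 * (r2 ** 2).mean() / (5.0 * g0 ** 2) - 1.0  # 3D definition
    print(f"t={t:5.1f}  g0={g0:7.3f}  alpha2={alpha2:+.3f}")
```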

  2. Subgrid-scale stresses and scalar fluxes constructed by the multi-scale turnover Lagrangian map

    NASA Astrophysics Data System (ADS)

    AL-Bairmani, Sukaina; Li, Yi; Rosales, Carlos; Xie, Zheng-tong

    2017-04-01

    The multi-scale turnover Lagrangian map (MTLM) [C. Rosales and C. Meneveau, "Anomalous scaling and intermittency in three-dimensional synthetic turbulence," Phys. Rev. E 78, 016313 (2008)] uses nested multi-scale Lagrangian advection of fluid particles to distort a Gaussian velocity field and, as a result, generate non-Gaussian synthetic velocity fields. Passive scalar fields can be generated with the procedure when the fluid particles carry a scalar property [C. Rosales, "Synthetic three-dimensional turbulent passive scalar fields via the minimal Lagrangian map," Phys. Fluids 23, 075106 (2011)]. The synthetic fields have been shown to possess highly realistic statistics characterizing small scale intermittency, geometrical structures, and vortex dynamics. In this paper, we present a study of the synthetic fields using the filtering approach. This approach, which has not been pursued so far, provides insight into the potential applications of the synthetic fields in large eddy simulations and subgrid-scale (SGS) modelling. The MTLM method is first generalized to model scalar fields produced by an imposed linear mean profile. We then calculate the subgrid-scale stress, SGS scalar flux, SGS scalar variance, as well as related quantities from the synthetic fields. Comparison with direct numerical simulations (DNSs) shows that the synthetic fields reproduce the probability distributions of the SGS energy and scalar dissipation rather well. Related geometrical statistics also display close agreement with DNS results. The synthetic fields slightly under-estimate the mean SGS energy dissipation and slightly over-predict the mean SGS scalar variance dissipation. In general, the synthetic fields tend to slightly under-estimate the probability of large fluctuations for most quantities we have examined. Small scale anisotropy in the scalar field originating from the imposed mean gradient is captured. The sensitivity of the synthetic fields to the input spectra is assessed by using truncated spectra or model spectra as the input. Analyses show that most of the SGS statistics agree well with those from MTLM fields with DNS spectra as the input. For the mean SGS energy dissipation, some significant deviation is observed. However, it is shown that the deviation can be parametrized by the input energy spectrum, which demonstrates the robustness of the MTLM procedure.
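
    The filtering approach itself is compact: given a (synthetic or DNS) velocity field, the SGS stress is the difference between the filtered product and the product of filtered fields, tau_ij = bar(u_i u_j) - bar(u_i) bar(u_j). A minimal 2D sketch with a spectral Gaussian filter; the velocity field and filter width below are placeholders, not MTLM output:

```python
import numpy as np

def gaussian_filter_fft(f, delta):
    """Low-pass a periodic 2D field with a Gaussian filter of width delta
    (domain assumed to be [0, 2*pi)^2)."""
    n = f.shape[0]
    k = np.fft.fftfreq(n, d=2 * np.pi / n) * 2 * np.pi   # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    G = np.exp(-(kx**2 + ky**2) * delta**2 / 24.0)       # classic LES Gaussian kernel
    return np.real(np.fft.ifft2(np.fft.fft2(f) * G))

# synthetic periodic velocity field standing in for an MTLM or DNS field
n = 128
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(X) * np.cos(Y) + 0.1 * np.sin(3 * Y)
v = -np.cos(X) * np.sin(Y) + 0.1 * np.cos(3 * X)

delta = 2 * np.pi / 16                                   # filter width
bar = lambda f: gaussian_filter_fft(f, delta)
# SGS stress tau_ij = bar(u_i u_j) - bar(u_i) bar(u_j)
tau_xx = bar(u * u) - bar(u) * bar(u)
tau_xy = bar(u * v) - bar(u) * bar(v)
tau_yy = bar(v * v) - bar(v) * bar(v)
print("mean tau_xy:", tau_xy.mean())
```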

  3. Tracing kinematic (mis)alignments in CALIFA merging galaxies. Stellar and ionized gas kinematic orientations at every merger stage

    NASA Astrophysics Data System (ADS)

    Barrera-Ballesteros, J. K.; García-Lorenzo, B.; Falcón-Barroso, J.; van de Ven, G.; Lyubenova, M.; Wild, V.; Méndez-Abreu, J.; Sánchez, S. F.; Marquez, I.; Masegosa, J.; Monreal-Ibero, A.; Ziegler, B.; del Olmo, A.; Verdes-Montenegro, L.; García-Benito, R.; Husemann, B.; Mast, D.; Kehrig, C.; Iglesias-Paramo, J.; Marino, R. A.; Aguerri, J. A. L.; Walcher, C. J.; Vílchez, J. M.; Bomans, D. J.; Cortijo-Ferrero, C.; González Delgado, R. M.; Bland-Hawthorn, J.; McIntosh, D. H.; Bekeraitė, S.

    2015-10-01

    We present spatially resolved stellar and/or ionized gas kinematic properties for a sample of 103 interacting galaxies, tracing all merger stages: close companions, pairs with morphological signatures of interaction, and coalesced merger remnants. In order to distinguish kinematic properties caused by a merger event from those driven by internal processes, we compare our galaxies with a control sample of 80 non-interacting galaxies. We measure, for both the stellar and the ionized gas components, the major (projected) kinematic position angles (PAkin, approaching and receding) directly from the velocity distributions with no assumptions on the internal motions. This method also allows us to derive the deviations of the kinematic PAs from a straight line (δPAkin). We find that around half of the interacting objects show morpho-kinematic PA misalignments that cannot be found in the control sample. In particular, we observe those misalignments in galaxies with morphological signatures of interaction. On the other hand, the level of alignment between the approaching and receding sides for both samples is similar, with most of the galaxies displaying small misalignments. Radial deviations of the kinematic PA orientation from a straight line in the stellar component measured by δPAkin are large for both samples. However, for a large fraction of interacting galaxies (48%) the ionized gas δPAkin is larger than the typical values derived from isolated galaxies, indicating that this parameter is a good indicator for tracing the impact of interactions and mergers on the internal motions of galaxies. By comparing the stellar and ionized gas kinematic PAs, we find that 42% (28/66) of the interacting galaxies have misalignments larger than 16°, compared to 10% of the control sample. Our results show the impact of interactions on the motions of stellar and ionized gas as well as the wide variety of their spatially resolved kinematic distributions. This study also provides a local Universe benchmark for kinematic studies in merging galaxies at high redshift. Appendices are available in electronic form at http://www.aanda.org

  4. Permeability and Strength Measurements on Sintered, Porous, Hollow Turbine Blades Made by the American Electro Metal Corporation under Office of Naval Research Contract N-ONR-295 (01)

    NASA Technical Reports Server (NTRS)

    Richards, Hadley T.; Livingood, N.B.

    1954-01-01

    An experimental investigation was made to determine the permeability and strength characteristics of a number of sintered, porous, hollow turbine rotor blades and to determine the effectiveness of the blade fabrication method in controlling permeability. The test blades were fabricated by the American Electro Metal Corporation under a contract with the Office of Naval Research, Department of the Navy, and were submitted to the NACA for testing. Of the 22 test blades submitted, ten were sintered but not coined, five were sintered and coined, and seven were sintered and not coined but contained perforated reinforcements integral with the blade shells. Representative samples of each group of blades were tested. Large variations in permeability in both chordwise and spanwise directions were found. Local deviations as large as 155 to -85 percent from prescribed values were found in chordwise permeability. Only one blade, an uncoined one, had a chordwise permeability variation which reasonably approached that specified. Even for this blade, local deviations exceeded 10 percent. Spanwise permeability, specified to be held constant, varied as much as 50 percent from root to tip for both an uncoined and a coined blade. Previous NACA analyses have shown that in order to maintain proper control of blade wall temperatures, permeability variations must not exceed plus or minus 10 percent. Satisfactory control of permeability in either the chordwise or the spanwise direction was not achieved in the blades tested. Spin tests made at room temperature for six blades revealed the highest material rupture strength to be 8926 pounds per square inch. This value is about one third the strength required for rotor blades in present-day turbojet engines. The lowest value of blade strength was 1436 pounds per square inch.

  5. Validation of Cross Sections with Criticality Experiment and Reaction Rates: the Neptunium Case

    NASA Astrophysics Data System (ADS)

    Leong, L. S.; Tassan-Got, L.; Audouin, L.; Berthier, B.; Le Naour, C.; Stéphan, C.; Paradela, C.; Tarrío, D.; Duran, I.

    2014-04-01

    The 237Np neutron-induced fission cross section has been recently measured in a large energy range (from eV to GeV) at the n_TOF facility at CERN. When compared to previous measurements the n_TOF fission cross section appears to be higher by 5-7% beyond the fission threshold. To check the relevance of the n_TOF data, we considered a criticality experiment performed at Los Alamos with a 6 kg sphere of 237Np, surrounded by uranium highly enriched in 235U so as to approach criticality with fast neutrons. The multiplication factor keff of the calculation is in better agreement with the experiment when we replace the ENDF/B-VII.0 evaluation of the 237Np fission cross section by the n_TOF data. We also explored the hypothesis of deficiencies of the inelastic cross section in 235U which has been invoked by some authors to explain the deviation of 750 pcm. The large modification needed to reduce the deviation seems to be incompatible with existing inelastic cross section measurements. Also we show that the νbar of 237Np can hardly be incriminated because of the high accuracy of the existing data. Fission rate ratios or averaged fission cross sections measured in several fast neutron fields seem to give contradictory results on the validation of the 237Np cross section but at least one of the benchmark experiments, where the active deposits have been well calibrated for the number of atoms, favors the n_TOF data set. These outcomes support the hypothesis of a higher fission cross section of 237Np.

  6. Criticality experiments and benchmarks for cross section evaluation: the neptunium case

    NASA Astrophysics Data System (ADS)

    Leong, L. S.; Tassan-Got, L.; Audouin, L.; Paradela, C.; Wilson, J. N.; Tarrio, D.; Berthier, B.; Duran, I.; Le Naour, C.; Stéphan, C.

    2013-03-01

    The 237Np neutron-induced fission cross section has been recently measured in a large energy range (from eV to GeV) at the n_TOF facility at CERN. When compared to previous measurements the n_TOF fission cross section appears to be higher by 5-7% beyond the fission threshold. To check the relevance of the n_TOF data, we consider a criticality experiment performed at Los Alamos with a 6 kg sphere of 237Np, surrounded by enriched uranium 235U so as to approach criticality with fast neutrons. The multiplication factor keff of the calculation is in better agreement with the experiment (the deviation of 750 pcm is reduced to 250 pcm) when we replace the ENDF/B-VII.0 evaluation of the 237Np fission cross section by the n_TOF data. We also explore the hypothesis of deficiencies of the inelastic cross section in 235U which has been invoked by some authors to explain the deviation of 750 pcm. The large distortion of the inelastic cross section required in such a calculation is incompatible with existing measurements. Also we show that the ν̄ of 237Np can hardly be incriminated because of the high accuracy of the existing data. Fission rate ratios or averaged fission cross sections measured in several fast neutron fields seem to give contradictory results on the validation of the 237Np cross section but at least one of the benchmark experiments, where the active deposits have been well calibrated for the number of atoms, favors the n_TOF data set. These outcomes support the hypothesis of a higher fission cross section of 237Np.

  7. Markov state models from short non-equilibrium simulations—Analysis and correction of estimation bias

    NASA Astrophysics Data System (ADS)

    Nüske, Feliks; Wu, Hao; Prinz, Jan-Hendrik; Wehmeyer, Christoph; Clementi, Cecilia; Noé, Frank

    2017-03-01

    Many state-of-the-art methods for the thermodynamic and kinetic characterization of large and complex biomolecular systems by simulation rely on ensemble approaches, where data from large numbers of relatively short trajectories are integrated. In this context, Markov state models (MSMs) are extremely popular because they can be used to compute stationary quantities and long-time kinetics from ensembles of short simulations, provided that these short simulations are in "local equilibrium" within the MSM states. However, in the 15 years since the inception of MSMs, it has been controversially discussed, but not yet answered, how deviations from local equilibrium can be detected, whether these deviations induce a practical bias in MSM estimation, and how to correct for them. In this paper, we address these issues: We systematically analyze the estimation of MSMs from short non-equilibrium simulations, and we provide an expression for the error between unbiased transition probabilities and the expected estimate from many short simulations. We show that the unbiased MSM estimate can be obtained even from relatively short non-equilibrium simulations in the limit of long lag times and good discretization. Further, we exploit observable operator model (OOM) theory to derive an unbiased estimator for the MSM transition matrix that corrects for the effect of starting out of equilibrium, even when short lag times are used. Finally, we show how the OOM framework can be used to estimate the exact eigenvalues or relaxation time scales of the system without estimating an MSM transition matrix, which allows us to practically assess the discretization quality of the MSM. Applications to model systems and molecular dynamics simulation data of alanine dipeptide are included for illustration. The improved MSM estimator is implemented in PyEMMA as of version 2.3.
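
    For context, the estimator whose non-equilibrium bias the paper analyzes is the standard count-and-normalize MSM built from many short discretized trajectories. A minimal sketch (the toy chain and all parameters are assumptions; the paper's OOM correction is not reproduced here):

```python
import numpy as np

def estimate_msm(dtrajs, n_states, lag):
    """Naive maximum-likelihood MSM: count transitions at the given lag
    over an ensemble of short discrete trajectories, then row-normalize.
    This is the estimator whose out-of-equilibrium bias is at issue."""
    C = np.zeros((n_states, n_states))
    for dtraj in dtrajs:
        for t in range(len(dtraj) - lag):
            C[dtraj[t], dtraj[t + lag]] += 1.0
    rows = C.sum(axis=1, keepdims=True)
    return np.divide(C, rows, out=np.zeros_like(C), where=rows > 0)

# toy example: short trajectories from a 3-state chain, all started in
# state 0, i.e. far from the stationary distribution (non-equilibrium)
rng = np.random.default_rng(1)
T_true = np.array([[0.90, 0.10, 0.00],
                   [0.05, 0.90, 0.05],
                   [0.00, 0.10, 0.90]])
dtrajs = []
for _ in range(500):
    s, traj = 0, [0]
    for _ in range(50):
        s = rng.choice(3, p=T_true[s])
        traj.append(s)
    dtrajs.append(traj)

T_hat = estimate_msm(dtrajs, 3, lag=5)
# implied relaxation timescales t_i = -lag / ln(lambda_i)
evals = np.sort(np.abs(np.linalg.eigvals(T_hat)))[::-1]
print("timescales:", -5 / np.log(evals[1:]))
```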

  8. Resolution of the COBE Earth sensor anomaly

    NASA Technical Reports Server (NTRS)

    Sedler, J.

    1993-01-01

    Since its launch on November 18, 1989, the Earth sensors on the Cosmic Background Explorer (COBE) have shown much greater noise than expected. The problem was traced to an error in Earth horizon acquisition-of-signal (AOS) times. Due to this error, the AOS timing correction was ignored, causing Earth sensor split-to-index (SI) angles to be incorrectly time-tagged to minor frame synchronization times. Resulting Earth sensor residuals, based on gyro-propagated fine attitude solutions, were as large as plus or minus 0.45 deg (much greater than the plus or minus 0.10 deg of the scanner specifications (Reference 1)). Also, discontinuities in single-frame coarse attitude pitch and roll angles (as large as 0.80 and 0.30 deg, respectively) were noted several times during each orbit. However, over the course of the mission, each Earth sensor was observed to independently and unexpectedly reset and then reactivate into a new configuration. Although the telemetered AOS timing corrections are still in error, a procedure has been developed to approximate and apply these corrections. This paper describes the approach, analysis, and results of approximating and applying AOS timing adjustments to correct Earth scanner data. Furthermore, due to the continuing degradation of COBE's gyroscopes, gyro-propagated fine attitude solutions may soon become unavailable, requiring an alternative method for attitude determination. By correcting Earth scanner AOS telemetry, as described in this paper, more accurate single-frame attitude solutions are obtained. All aforementioned pitch and roll discontinuities are removed. When proper AOS corrections are applied, the standard deviation of pitch residuals between coarse attitude and gyro-propagated fine attitude solutions decreases by a factor of 3. Also, the overall standard deviation of SI residuals from fine attitude solutions decreases by a factor of 4 (meeting sensor specifications) when AOS corrections are applied.

  9. Estimation of Coast-Wide Population Trends of Marbled Murrelets in Canada Using a Bayesian Hierarchical Model

    PubMed Central

    Schroeder, Bernard K.; Lindsay, David J.; Faust, Deborah A.

    2015-01-01

    Species at risk with secretive breeding behaviours, low densities, and wide geographic range pose a significant challenge to conservation actions because population trends are difficult to detect. Such is the case with the Marbled Murrelet (Brachyramphus marmoratus), a seabird listed as ‘Threatened’ by the Species at Risk Act in Canada largely due to the loss of its old growth forest nesting habitat. We report the first estimates of population trend of Marbled Murrelets in Canada derived from a monitoring program that uses marine radar to detect birds as they enter forest watersheds during 923 dawn surveys at 58 radar monitoring stations within the six Marbled Murrelet Conservation Regions on coastal British Columbia, Canada, 1996–2013. Temporal trends in radar counts were analyzed with a hierarchical Bayesian multivariate modeling approach that controlled for variation in tilt of the radar unit and day of year, included year-specific deviations from the overall trend (‘year effects’), and allowed for trends to be estimated at three spatial scales. A negative overall trend of -1.6%/yr (95% credibility interval: -3.2%, 0.01%) indicated moderate evidence for a coast-wide decline, although trends varied strongly among the six conservation regions. Negative annual trends were detected in East Vancouver Island (-9%/yr) and South Mainland Coast (-3%/yr) Conservation Regions. Over a quarter of the year effects were significantly different from zero, and the estimated standard deviation in common-shared year effects between sites within each region was about 50% per year. This large common-shared interannual variation in counts may have been caused by regional movements of birds related to changes in marine conditions that affect the availability of prey. PMID:26258803

  10. Recursive utility in a Markov environment with stochastic growth

    PubMed Central

    Hansen, Lars Peter; Scheinkman, José A.

    2012-01-01

    Recursive utility models that feature investor concerns about the intertemporal composition of risk are used extensively in applied research in macroeconomics and asset pricing. These models represent preferences as the solution to a nonlinear forward-looking difference equation with a terminal condition. In this paper we study infinite-horizon specifications of this difference equation in the context of a Markov environment. We establish a connection between the solution to this equation and to an arguably simpler Perron–Frobenius eigenvalue equation of the type that occurs in the study of large deviations for Markov processes. By exploiting this connection, we establish existence and uniqueness results. Moreover, we explore a substantive link between large deviation bounds for tail events for stochastic consumption growth and preferences induced by recursive utility. PMID:22778428
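
    Schematically, the Perron-Frobenius problem referred to above takes the following form (a sketch in one common notation; the paper's exact operator and normalization may differ):

```latex
% Find the principal eigenvalue \lambda > 0 and positive eigenfunction e(\cdot) of
% the one-period valuation operator driven by the Markov state X_t:
\mathbb{E}\!\left[\, M_{t,t+1}\; e(X_{t+1}) \,\middle|\, X_t = x \right] = \lambda\, e(x),
% where M_{t,t+1} is a positive multiplicative functional built from stochastic
% consumption growth; the same eigenvalue problem governs large deviation rates
% for multiplicative averages of the Markov process.
```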

  11. Shapes of strong shock fronts in an inhomogeneous solar wind

    NASA Technical Reports Server (NTRS)

    Heinemann, M. A.; Siscoe, G. L.

    1974-01-01

    The shapes expected for solar-flare-produced strong shock fronts in the solar wind have been calculated, large-scale variations in the ambient medium being taken into account. It has been shown that for reasonable ambient solar wind conditions the mean and the standard deviation of the east-west shock normal angle are in agreement with experimental observations including shocks of all strengths. The results further suggest that near a high-speed stream it is difficult to distinguish between corotating shocks and flare-associated shocks on the basis of the shock normal alone. Although the calculated shapes are outside the range of validity of the linear approximation, these results indicate that the variations in the ambient solar wind may account for large deviations of shock normals from the radial direction.

  12. Excitation laser energy dependence of surface-enhanced fluorescence showing plasmon-induced ultrafast electronic dynamics in dye molecules

    NASA Astrophysics Data System (ADS)

    Itoh, Tamitake; Yamamoto, Yuko S.; Tamaru, Hiroharu; Biju, Vasudevanpillai; Murase, Norio; Ozaki, Yukihiro

    2013-06-01

    We find unique properties accompanying surface-enhanced fluorescence (SEF) from dye molecules adsorbed on Ag nanoparticle aggregates, which also generate surface-enhanced Raman scattering. The properties appear in the excitation-laser-energy dependence of SEF after plasmonic spectral modulation of SEF is excluded. They are: large blue shifts of the fluorescence spectra, deviations of the anti-Stokes to Stokes SEF intensity ratios from those of normal fluorescence, super-broadening of the Stokes spectra, and a return to the original fluorescence under lower-energy excitation. We show that these properties are induced by electromagnetic enhancement of radiative decay rates to the point where they exceed the vibrational relaxation rates within an electronic excited state, which suggests that molecular electronic dynamics in strong plasmonic fields can deviate strongly from those in free space.

  13. Large deviation analysis of a simple information engine

    NASA Astrophysics Data System (ADS)

    Maitland, Michael; Grosskinsky, Stefan; Harris, Rosemary J.

    2015-11-01

    Information thermodynamics provides a framework for studying the effect of feedback loops on entropy production. It has enabled the understanding of novel thermodynamic systems such as the information engine, which can be seen as a modern version of "Maxwell's Dæmon," whereby a feedback controller processes information gained by measurements in order to extract work. Here, we analyze a simple model of such an engine that uses feedback control based on measurements to obtain negative entropy production. We focus on the distribution and fluctuations of the information obtained by the feedback controller. Significantly, our model allows an analytic treatment for a two-state system with exact calculation of the large deviation rate function. These results suggest an approximate technique for larger systems, which is corroborated by simulation data.
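
    The computational core of such an analysis can be made concrete. For a two-state continuous-time Markov process, the scaled cumulant generating function (SCGF) of a counting observable is the dominant eigenvalue of the tilted generator, and the rate function follows by Legendre transform. The sketch below counts the total number of jumps (the dynamical activity) with illustrative rates; it is the generic recipe, not the paper's feedback model:

```python
import numpy as np

# Two-state continuous-time Markov process with jump rates a (0->1) and b (1->0).
a, b = 1.0, 0.5

def scgf(s):
    """Dominant eigenvalue of the s-tilted generator (every jump counted,
    so both off-diagonal rates are multiplied by exp(s)); this is the
    scaled cumulant generating function of the dynamical activity."""
    W = np.array([[-a, b * np.exp(s)],
                  [a * np.exp(s), -b]])
    return np.max(np.linalg.eigvals(W).real)

# Rate function by numerical Legendre transform: I(k) = sup_s [ s*k - scgf(s) ]
s_grid = np.linspace(-3, 3, 601)
theta = np.array([scgf(s) for s in s_grid])
k_grid = np.linspace(0.05, 2.5, 50)        # activity per unit time
I = np.array([np.max(s_grid * k - theta) for k in k_grid])
k_mean = 2 * a * b / (a + b)               # mean activity; I should vanish here
print(f"I(k) minimum near k = {k_grid[np.argmin(I)]:.2f} (expected {k_mean:.2f})")
```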

  14. Recursive utility in a Markov environment with stochastic growth.

    PubMed

    Hansen, Lars Peter; Scheinkman, José A

    2012-07-24

    Recursive utility models that feature investor concerns about the intertemporal composition of risk are used extensively in applied research in macroeconomics and asset pricing. These models represent preferences as the solution to a nonlinear forward-looking difference equation with a terminal condition. In this paper we study infinite-horizon specifications of this difference equation in the context of a Markov environment. We establish a connection between the solution to this equation and to an arguably simpler Perron-Frobenius eigenvalue equation of the type that occurs in the study of large deviations for Markov processes. By exploiting this connection, we establish existence and uniqueness results. Moreover, we explore a substantive link between large deviation bounds for tail events for stochastic consumption growth and preferences induced by recursive utility.

  15. An individual-based approach to SIR epidemics in contact networks.

    PubMed

    Youssef, Mina; Scoglio, Caterina

    2011-08-21

    Many approaches have recently been proposed to model the spread of epidemics on networks. For instance, the Susceptible/Infected/Recovered (SIR) compartmental model has successfully been applied to different types of diseases that spread out among humans and animals. When this model is applied on a contact network, the centrality characteristics of the network play an important role in the spreading process. However, current approaches only consider an aggregate representation of the network structure, which can result in inaccurate analysis. In this paper, we propose a new individual-based SIR approach, which considers the whole description of the network structure. The individual-based approach is built on a continuous-time Markov chain, and it is capable of evaluating the state probability for every individual in the network. Through mathematical analysis, we rigorously confirm the existence of an epidemic threshold below which an epidemic does not propagate in the network. We also show that the epidemic threshold is inversely proportional to the maximum eigenvalue of the network. Additionally, we study the role of the whole spectrum of the network, and determine the relationship between the maximum number of infected individuals and the set of eigenvalues and eigenvectors. To validate our approach, we analytically study the deviation with respect to the continuous-time Markov chain model, and we show that the new approach is accurate for a large range of infection strengths. Furthermore, we compare the new approach with the well-known heterogeneous mean field approach in the literature. Ultimately, we support our theoretical results through extensive numerical evaluations and Monte Carlo simulations.
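
    The spectral threshold stated above is easy to check numerically: near the disease-free state, infection probabilities grow or decay according to the sign of the largest eigenvalue of beta*A - delta*I. A sketch on a random contact network (the graph model and rates are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Random (Erdos-Renyi) contact network
n, p = 200, 0.05
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T                                  # symmetric, zero diagonal

# Individual-based mean-field threshold: the effective infection rate
# tau = beta/delta must exceed 1/lambda_max(A) for an epidemic to take off.
lam_max = np.max(np.linalg.eigvalsh(A))
tau_c = 1.0 / lam_max
print(f"largest adjacency eigenvalue = {lam_max:.2f}, threshold tau_c = {tau_c:.4f}")

# Linearized early-time growth check: near the disease-free state the
# infection probabilities obey dx/dt ~ (beta*A - delta*I) x.
beta, delta = 1.5 * tau_c, 1.0               # slightly above threshold
J = beta * A - delta * np.eye(n)
growth = np.max(np.linalg.eigvalsh(J))
print(f"leading growth rate above threshold: {growth:.3f} (>0 means spreading)")
```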

  16. How Recent History Affects Perception: The Normative Approach and Its Heuristic Approximation

    PubMed Central

    Raviv, Ofri; Ahissar, Merav; Loewenstein, Yonatan

    2012-01-01

    There is accumulating evidence that prior knowledge about expectations plays an important role in perception. The Bayesian framework is the standard computational approach to explain how prior knowledge about the distribution of expected stimuli is incorporated with noisy observations in order to improve performance. However, it is unclear what information about the prior distribution is acquired by the perceptual system over short periods of time and how this information is utilized in the process of perceptual decision making. Here we address this question using a simple two-tone discrimination task. We find that the “contraction bias”, in which small magnitudes are overestimated and large magnitudes are underestimated, dominates the pattern of responses of human participants. This contraction bias is consistent with the Bayesian hypothesis in which the true prior information is available to the decision-maker. However, a trial-by-trial analysis of the pattern of responses reveals that the contribution of most recent trials to performance is overweighted compared with the predictions of a standard Bayesian model. Moreover, we study participants' performance in a-typical distributions of stimuli and demonstrate substantial deviations from the ideal Bayesian detector, suggesting that the brain utilizes a heuristic approximation of the Bayesian inference. We propose a biologically plausible model, in which decision in the two-tone discrimination task is based on a comparison between the second tone and an exponentially-decaying average of the first tone and past tones. We show that this model accounts for both the contraction bias and the deviations from the ideal Bayesian detector hypothesis. These findings demonstrate the power of Bayesian-like heuristics in the brain, as well as their limitations in their failure to fully adapt to novel environments. PMID:23133343
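
    The proposed heuristic is simple enough to state in a few lines of code. The sketch below compares the second tone against an exponentially decaying average of current and past first tones (the weight eta and the stimulus statistics are assumptions) and reproduces the qualitative contraction-bias pattern:

```python
import numpy as np

rng = np.random.default_rng(3)

def heuristic_two_tone(f1_seq, f2_seq, eta=0.4):
    """Decide 'is the second tone higher?' by comparing f2 with a running,
    exponentially decaying average of current and past first tones.
    eta is the (assumed) weight given to the current trial's first tone."""
    memory, responses = None, []
    for f1, f2 in zip(f1_seq, f2_seq):
        memory = f1 if memory is None else eta * f1 + (1 - eta) * memory
        responses.append(f2 > memory)
    return np.array(responses)

# toy experiment in log-frequency units: first tones drawn around a common
# mean, second tones close to the first -> memory contracts toward the mean
n_trials = 10_000
f1 = rng.normal(0.0, 1.0, n_trials)
f2 = f1 + rng.normal(0.0, 0.5, n_trials)
resp = heuristic_two_tone(f1, f2)
low, high = f1 < -1.0, f1 > 1.0
print(f"P(report 'second higher') given low f1: {resp[low].mean():.2f}, "
      f"given high f1: {resp[high].mean():.2f}  (true rate is 0.50 for both)")
```

    Because the remembered first tone is pulled toward the running mean, low first tones are effectively overestimated and high ones underestimated, which is exactly the contraction-bias signature described above.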

  17. Multi-temporal thermal analyses for submarine groundwater discharge (SGD) detection over large spatial scales in the Mediterranean

    NASA Astrophysics Data System (ADS)

    Hennig, Hanna; Mallast, Ulf; Merz, Ralf

    2015-04-01

    Submarine groundwater discharge (SGD) sites act as important pathways for nutrients and contaminants that deteriorate marine ecosystems. In the Mediterranean it is estimated that 75% of the freshwater input is contributed from karst aquifers. Thermal remote sensing can be used for a pre-screening of potential SGD sites in order to optimize field surveys. Although different platforms (ground-, air- and spaceborne) may serve for thermal remote sensing, the most cost-effective are spaceborne platforms (satellites), which likewise cover the largest spatial scale (>100 km per image). Therefore an automated and objective approach using thermal satellite images from Landsat 7 and Landsat 8 was applied to localize potential SGD sites on a large spatial scale. The method of Mallast et al. (2014), based on descriptive statistical parameters, specifically the range and the standard deviation, was adapted to the Mediterranean Sea. Since that method was developed for the Dead Sea, where satellite images with cloud cover are rare and no sea-level change occurs through tidal cycles, it was essential to adapt it to a region where tidal cycles occur and cloud cover is more frequent. These adaptations include: (1) an automatic and adaptive coastline detection; (2) including and processing cloud-covered scenes to enlarge the data basis; (3) implementing tidal data in order to analyze low-tide images, as SGD is enhanced during these phases; and (4) testing the applicability to Landsat 8 images, which will provide data in the future once Landsat 7 stops working. As previously shown, the range method gives more accurate results than the standard deviation. However, its result depends exclusively on two scenes (minimum and maximum) and is largely influenced by outliers. To counteract this drawback we developed a new approach. Since sea surface temperature (SST) is assumed to be stabilized by groundwater at SGD sites, the slope of a bootstrapped linear model fitted to the sorted SST values of each pixel should be less steep than the slope of the surrounding area, resulting in less influence from outliers and an equal weighting of all integrated scenes. Both methods could be used to detect SGD sites in the Mediterranean regardless of the discharge characteristics (diffuse or focused); exceptions are sites with deep emergence. Better results were obtained in bays compared with more exposed sites. Since the range of the SST is mostly determined by the maximum and minimum scenes, the slope approach, which uses all scenes, can be seen as the more representative method. References: Mallast, U., Gloaguen, R., Friesen, J., Rödiger, T., Geyer, S., Merz, R., Siebert, C., 2014. How to identify groundwater-caused thermal anomalies in lakes based on multi-temporal satellite data in semi-arid regions. Hydrol. Earth Syst. Sci. 18 (7), 2773-2787.
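
    Per pixel, the slope approach reduces to fitting a line to the sorted SST time series and flagging pixels whose slope is anomalously low relative to their surroundings. A sketch (the bootstrap resampling of scenes is omitted for brevity; array shapes and the 2-sigma cutoff are assumptions):

```python
import numpy as np

def sorted_sst_slope(stack):
    """Per-pixel slope of a straight line fitted to the sorted SST time
    series. stack has shape (n_scenes, ny, nx); NaNs (clouds) sort to the
    end and are dropped before fitting."""
    n, ny, nx = stack.shape
    slope = np.full((ny, nx), np.nan)
    for i in range(ny):
        for j in range(nx):
            vals = np.sort(stack[:, i, j])
            vals = vals[~np.isnan(vals)]
            if vals.size >= 3:
                slope[i, j] = np.polyfit(np.arange(vals.size), vals, 1)[0]
    return slope

# toy stack: the open sea varies strongly across scenes, while one
# groundwater-stabilized pixel stays nearly constant
rng = np.random.default_rng(4)
stack = 20 + 5 * rng.random((12, 20, 20))
stack[:, 10, 10] = 18 + 0.3 * rng.random(12)
slope = sorted_sst_slope(stack)
anomaly = slope < (np.nanmean(slope) - 2 * np.nanstd(slope))
print("candidate SGD pixels:", np.argwhere(anomaly))
```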

  18. Automated control of robotic camera tacheometers for measurements of industrial large scale objects

    NASA Astrophysics Data System (ADS)

    Heimonen, Teuvo; Leinonen, Jukka; Sipola, Jani

    2013-04-01

    The modern robotic tacheometers equipped with digital cameras (also called imaging total stations) and capable of measuring reflectorless targets offer new possibilities to gather 3d data. In this paper an automated approach for the tacheometer measurements needed in the dimensional control of industrial large scale objects is proposed. There are two new contributions in the approach: the automated extraction of the vital points (i.e. the points to be measured) and the automated fine aiming of the tacheometer. The proposed approach proceeds through the following steps: First the coordinates of the vital points are automatically extracted from the computer aided design (CAD) data. The extracted design coordinates are then used to aim the tacheometer at the designed location of the points, one after another. However, due to the deviations between the designed and the actual location of the points, the aiming needs to be adjusted. An automated dynamic image-based look-and-move type servoing architecture is proposed for this task. After successful fine aiming, the actual coordinates of the point in question can be automatically measured by using the measuring functionalities of the tacheometer. The approach was validated experimentally and found to be feasible. On average, 97% of the points actually measured in four different shipbuilding measurement cases were indeed proposed to be vital points by the automated extraction algorithm. The accuracy of the results obtained with the automatic control method of the tacheometer was comparable to that of the results obtained with manual control, and the reliability of the image processing step of the method was found to be high in the laboratory experiments.

  19. On the influence of airfoil deviations on the aerodynamic performance of wind turbine rotors

    NASA Astrophysics Data System (ADS)

    Winstroth, J.; Seume, J. R.

    2016-09-01

    The manufacture of large wind turbine rotor blades is a difficult task that still involves a certain degree of manual labor. Due to the complexity, airfoil deviations between the design airfoils and the manufactured blade are certain to arise. Presently, the understanding of the impact of manufacturing uncertainties on the aerodynamic performance is still incomplete. The present work analyzes the influence of a series of airfoil deviations likely to occur during manufacturing by means of Computational Fluid Dynamics and the aeroelastic code FAST. The average power production of the NREL 5MW wind turbine is used to evaluate the different airfoil deviations. Analyzed deviations include: mold tilt towards the leading and trailing edge, thick bond lines, thick bond lines with cantilever correction, backward facing steps and airfoil waviness. The most severe influences are observed for mold tilt towards the leading edge and for thick bond lines. By applying the cantilever correction, the influence of thick bond lines is almost compensated. The effect of airfoil waviness depends strongly on the amplitude and on the location along the surface of the airfoil. Increased influence is observed for backward facing steps once they are high enough to trigger boundary layer transition close to the leading edge.

  20. Models of Lift and Drag Coefficients of Stalled and Unstalled Airfoils in Wind Turbines and Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Spera, David A.

    2008-01-01

    Equations are developed with which to calculate lift and drag coefficients along the spans of torsionally-stiff rotating airfoils of the type used in wind turbine rotors and wind tunnel fans, at angles of attack in both the unstalled and stalled aerodynamic regimes. Explicit adjustments are made for the effects of aspect ratio (length to chord width) and airfoil thickness ratio. Calculated lift and drag parameters are compared to measured parameters for 55 airfoil data sets including 585 test points. Mean deviation was found to be -0.4 percent and standard deviation was 4.8 percent. When the proposed equations were applied to the calculation of power from a stall-controlled wind turbine tested in a NASA wind tunnel, mean deviation from 54 data points was -1.3 percent and standard deviation was 4.0 percent. Pressure-rise calculations for a large wind tunnel fan deviated by 2.7 percent (mean) and 4.4 percent (standard). The assumption that a single set of lift and drag coefficient equations can represent the stalled aerodynamic behavior of a wide variety of airfoils was found to be satisfactory.

  1. A Practical and Automated Approach to Large Area Forest Disturbance Mapping with Remote Sensing

    PubMed Central

    Ozdogan, Mutlu

    2014-01-01

    In this paper, I describe a set of procedures that automate forest disturbance mapping using a pair of Landsat images. The approach is built on the traditional pair-wise change detection method, but is designed to extract training data without user interaction and uses a robust classification algorithm capable of handling incorrectly labeled training data. The steps in this procedure include: i) creating masks for water, non-forested areas, clouds, and cloud shadows; ii) identifying training pixels whose value is above or below a threshold defined by the number of standard deviations from the mean value of the histograms generated from local windows in the short-wave infrared (SWIR) difference image; iii) filtering the original training data through a number of classification algorithms using an n-fold cross validation to eliminate mislabeled training samples; and finally, iv) mapping forest disturbance using a supervised classification algorithm. When applied to 17 Landsat footprints across the U.S. at five-year intervals between 1985 and 2010, the proposed approach produced forest disturbance maps with 80 to 95% overall accuracy, comparable to those obtained from traditional approaches to forest change detection. The primary sources of mis-classification errors included inaccurate identification of forests (errors of commission), issues related to the land/water mask, and clouds and cloud shadows missed during image screening. The approach requires images from the peak growing season, at least for the deciduous forest sites, and cannot readily distinguish forest harvest from natural disturbances or other types of land cover change. The accuracy of detecting forest disturbance diminishes with the number of years between the images that make up the image pair. Nevertheless, the relatively high accuracies, little or no user input needed for processing, speed of map production, and simplicity of the approach make the new method especially practical for forest cover change analysis over very large regions. PMID:24717283
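
    Step ii) above carries most of the automation. A compact sketch of that step (the window size, the k = 2 cutoff, and the sign convention linking bright SWIR-difference values to disturbance are assumptions for illustration):

```python
import numpy as np

def training_pixels(swir_diff, win=64, k=2.0):
    """Within each local window, label pixels more than k standard
    deviations above the window mean of the SWIR difference image as
    candidate disturbance training pixels, and pixels more than k below
    as candidate stable-forest training pixels."""
    ny, nx = swir_diff.shape
    labels = np.zeros((ny, nx), dtype=np.int8)   # 0 unlabeled, 1 disturbed, -1 stable
    for y0 in range(0, ny, win):
        for x0 in range(0, nx, win):
            w = swir_diff[y0:y0 + win, x0:x0 + win]
            mu, sd = np.nanmean(w), np.nanstd(w)
            labels[y0:y0 + win, x0:x0 + win][w > mu + k * sd] = 1
            labels[y0:y0 + win, x0:x0 + win][w < mu - k * sd] = -1
    return labels

# toy example: a disturbance patch brightens the SWIR difference image
rng = np.random.default_rng(5)
diff = rng.normal(0, 1, (256, 256))
diff[100:120, 50:80] += 6.0
lab = training_pixels(diff)
print("disturbed training pixels found:", int((lab == 1).sum()))
```

    In the full procedure these automatically labeled pixels would then pass through the cross-validated filtering of step iii) before training the final classifier.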

  2. A practical and automated approach to large area forest disturbance mapping with remote sensing.

    PubMed

    Ozdogan, Mutlu

    2014-01-01

    In this paper, I describe a set of procedures that automate forest disturbance mapping using a pair of Landsat images. The approach is built on the traditional pair-wise change detection method, but is designed to extract training data without user interaction and uses a robust classification algorithm capable of handling incorrectly labeled training data. The steps in this procedure include: i) creating masks for water, non-forested areas, clouds, and cloud shadows; ii) identifying training pixels whose value is above or below a threshold defined by the number of standard deviations from the mean value of the histograms generated from local windows in the short-wave infrared (SWIR) difference image; iii) filtering the original training data through a number of classification algorithms using an n-fold cross validation to eliminate mislabeled training samples; and finally, iv) mapping forest disturbance using a supervised classification algorithm. When applied to 17 Landsat footprints across the U.S. at five-year intervals between 1985 and 2010, the proposed approach produced forest disturbance maps with 80 to 95% overall accuracy, comparable to those obtained from traditional approaches to forest change detection. The primary sources of mis-classification errors included inaccurate identification of forests (errors of commission), issues related to the land/water mask, and clouds and cloud shadows missed during image screening. The approach requires images from the peak growing season, at least for the deciduous forest sites, and cannot readily distinguish forest harvest from natural disturbances or other types of land cover change. The accuracy of detecting forest disturbance diminishes with the number of years between the images that make up the image pair. Nevertheless, the relatively high accuracies, little or no user input needed for processing, speed of map production, and simplicity of the approach make the new method especially practical for forest cover change analysis over very large regions.

  3. Efficiency of thin magnetically arrested discs around black holes

    NASA Astrophysics Data System (ADS)

    Avara, Mark J.; McKinney, Jonathan C.; Reynolds, Christopher S.

    2016-10-01

    The radiative and jet efficiencies of thin magnetized accretion discs around black holes (BHs) are affected by BH spin and the presence of a magnetic field that, when strong, could lead to large deviations from Novikov-Thorne (NT) thin disc theory. To seek the maximum deviations, we perform general relativistic magnetohydrodynamic simulations of radiatively efficient thin (half-height H to radius R of H/R ≈ 0.10) discs around moderately rotating BHs with a/M = 0.5. First, our simulations, each evolved for more than 70 000 rg/c (gravitational radius rg and speed of light c), show that large-scale magnetic field readily accretes inward even through our thin disc and builds up to the magnetically arrested disc (MAD) state. Secondly, our simulations of thin MADs show the disc achieves a radiative efficiency of ηr ≈ 15 per cent (after estimating photon capture), which is about twice the NT value of ηr ≈ 8 per cent for a/M = 0.5 and gives the same luminosity as an NT disc with a/M ≈ 0.9. Compared to prior simulations with ≲10 per cent deviations, our result of an ≈80 per cent deviation sets a new benchmark. Building on prior work, we are now able to complete an important scaling law which suggests that observed jet quenching in the high-soft state in BH X-ray binaries is consistent with an ever-present MAD state with a weak yet sustained jet.

  4. Evaluating the accuracy and large inaccuracy of two continuous glucose monitoring systems.

    PubMed

    Leelarathna, Lalantha; Nodale, Marianna; Allen, Janet M; Elleri, Daniela; Kumareswaran, Kavita; Haidar, Ahmad; Caldwell, Karen; Wilinska, Malgorzata E; Acerini, Carlo L; Evans, Mark L; Murphy, Helen R; Dunger, David B; Hovorka, Roman

    2013-02-01

    This study evaluated the accuracy and large inaccuracy of the Freestyle Navigator (FSN) (Abbott Diabetes Care, Alameda, CA) and Dexcom SEVEN PLUS (DSP) (Dexcom, Inc., San Diego, CA) continuous glucose monitoring (CGM) systems during closed-loop studies. Paired CGM and plasma glucose values (7,182 data pairs) were collected, every 15-60 min, from 32 adults (36.2±9.3 years) and 20 adolescents (15.3±1.5 years) with type 1 diabetes who participated in closed-loop studies. Levels 1, 2, and 3 of large sensor error with increasing severity were defined according to absolute relative deviation greater than or equal to ±40%, ±50%, and ±60% at a reference glucose level of ≥6 mmol/L or absolute deviation greater than or equal to ±2.4 mmol/L,±3.0 mmol/L, and ±3.6 mmol/L at a reference glucose level of <6 mmol/L. Median absolute relative deviation was 9.9% for FSN and 12.6% for DSP. Proportions of data points in Zones A and B of Clarke error grid analysis were similar (96.4% for FSN vs. 97.8% for DSP). Large sensor over-reading, which increases risk of insulin over-delivery and hypoglycemia, occurred two- to threefold more frequently with DSP than FSN (once every 2.5, 4.6, and 10.7 days of FSN use vs. 1.2, 2.0, and 3.7 days of DSP use for Level 1-3 errors, respectively). At levels 2 and 3, large sensor errors lasting 1 h or longer were absent with FSN but persisted with DSP. FSN and DSP differ substantially in the frequency and duration of large inaccuracy despite only modest differences in conventional measures of numerical and clinical accuracy. Further evaluations are required to confirm that FSN is more suitable for integration into closed-loop delivery systems.
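
    The accuracy metrics used in the study are straightforward to compute from paired CGM/reference values. A sketch with toy values in mmol/L (the study's pairing and exclusion rules are not reproduced):

```python
import numpy as np

def large_error_levels(cgm, ref):
    """Median absolute relative deviation (MARD) plus the three large-error
    levels defined in the study: relative thresholds (40/50/60%) when the
    reference glucose is >= 6 mmol/L, absolute thresholds (2.4/3.0/3.6
    mmol/L) when it is below 6 mmol/L."""
    cgm, ref = np.asarray(cgm, float), np.asarray(ref, float)
    ard = np.abs(cgm - ref) / ref * 100.0      # absolute relative deviation, %
    ad = np.abs(cgm - ref)                     # absolute deviation, mmol/L
    hi = ref >= 6.0
    levels = {}
    for lvl, (rel, absol) in enumerate([(40, 2.4), (50, 3.0), (60, 3.6)], start=1):
        bad = np.where(hi, ard >= rel, ad >= absol)
        levels[lvl] = bad.mean()               # fraction of pairs at this level
    return np.median(ard), levels

cgm = [5.1, 9.8, 3.2, 12.0, 6.5]
ref = [5.0, 7.0, 5.9, 11.5, 6.4]
mard, levels = large_error_levels(cgm, ref)
print(f"MARD = {mard:.1f}%, level fractions = {levels}")
```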

  5. Ku-band radar threshold analysis

    NASA Technical Reports Server (NTRS)

    Weber, C. L.; Polydoros, A.

    1979-01-01

    The statistics of the CFAR threshold for the Ku-band radar were determined. Exact analytical results were developed for both the mean and the standard deviation in the designated search mode. The mean value is compared to the results of a previously reported simulation. The analytical results are more optimistic than the simulation results, for which no explanation is offered. The normalized standard deviation is shown to be very sensitive to the signal-to-noise ratio and very insensitive to the noise correlation present in the range gates of the designated search mode. The substantial variation in the CFAR threshold is dominant at large values of SNR, where the normalized standard deviation is greater than 0.3. Whether or not this significantly affects the resulting probability of detection is a matter which deserves additional attention.
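
    For orientation, a CFAR threshold of the cell-averaging type is formed by scaling the average noise power in reference range gates around the cell under test; the mean and spread of this threshold are exactly the quantities analyzed above. The sketch below is the generic textbook scheme with assumed parameters, not the Ku-band radar's actual design:

```python
import numpy as np

rng = np.random.default_rng(6)

def ca_cfar_threshold(power, n_ref=16, n_guard=2, scale=4.0):
    """Cell-averaging CFAR: for each cell, average the power in n_ref
    reference gates on each side (skipping n_guard guard cells) and
    multiply by a scale factor set by the desired false-alarm rate."""
    n = len(power)
    thr = np.full(n, np.nan)
    for i in range(n_ref + n_guard, n - n_ref - n_guard):
        left = power[i - n_guard - n_ref : i - n_guard]
        right = power[i + n_guard + 1 : i + n_guard + 1 + n_ref]
        thr[i] = scale * np.mean(np.concatenate([left, right]))
    return thr

# exponential (square-law detected) noise plus one strong target
power = rng.exponential(1.0, 512)
power[256] += 30.0
thr = ca_cfar_threshold(power)
print("detections at gates:", np.flatnonzero(power > thr))
# empirical spread of the threshold itself, cf. the analysis above:
print(f"threshold mean = {np.nanmean(thr):.2f}, std = {np.nanstd(thr):.2f}")
```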

  6. Heterogeneity-induced large deviations in activity and (in some cases) entropy production

    NASA Astrophysics Data System (ADS)

    Gingrich, Todd R.; Vaikuntanathan, Suriyanarayanan; Geissler, Phillip L.

    2014-10-01

    We solve a simple model that supports a dynamic phase transition and show conditions for the existence of the transition. Using methods of large deviation theory we analytically compute the probability distribution for activity and entropy production rates of the trajectories on a large ring with a single heterogeneous link. The corresponding joint rate function demonstrates two dynamical phases—one localized and the other delocalized, but the marginal rate functions do not always exhibit the underlying transition. Symmetries in dynamic order parameters influence the observation of a transition, such that distributions for certain dynamic order parameters need not reveal an underlying dynamical bistability. Solution of our model system furthermore yields the form of the effective Markov transition matrices that generate dynamics in which the two dynamical phases are at coexistence. We discuss the implications of the transition for the response of bacterial cells to antibiotic treatment, arguing that even simple models of a cell cycle lacking an explicit bistability in configuration space will exhibit a bistability of dynamical phases.

  7. Rapidly rotating neutron stars with a massive scalar field—structure and universal relations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doneva, Daniela D.; Yazadjiev, Stoytcho S., E-mail: daniela.doneva@uni-tuebingen.de, E-mail: yazad@phys.uni-sofia.bg

    We construct rapidly rotating neutron star models in scalar-tensor theories with a massive scalar field. The fact that the scalar field has nonzero mass leads to very interesting results since the allowed range of values of the coupling parameters is significantly broadened. Deviations from pure general relativity can be very large for values of the parameters that are in agreement with the observations. We found that the rapid rotation can magnify the differences several times compared to the static case. The universal relations between the normalized moment of inertia and quadrupole moment are also investigated both for the slowly and rapidly rotating cases. The results show that these relations are still EOS independent to a large extent and that the deviations from pure general relativity can be large. This places the massive scalar-tensor theories amongst the few alternative theories of gravity that can be tested via the universal I-Love-Q relations.

  8. WKB theory of large deviations in stochastic populations

    NASA Astrophysics Data System (ADS)

    Assaf, Michael; Meerson, Baruch

    2017-06-01

    Stochasticity can play an important role in the dynamics of biologically relevant populations. These span a broad range of scales: from intra-cellular populations of molecules to population of cells and then to groups of plants, animals and people. Large deviations in stochastic population dynamics—such as those determining population extinction, fixation or switching between different states—are presently in a focus of attention of statistical physicists. We review recent progress in applying different variants of dissipative WKB approximation (after Wentzel, Kramers and Brillouin) to this class of problems. The WKB approximation allows one to evaluate the mean time and/or probability of population extinction, fixation and switches resulting from either intrinsic (demographic) noise, or a combination of the demographic noise and environmental variations, deterministic or random. We mostly cover well-mixed populations, single and multiple, but also briefly consider populations on heterogeneous networks and spatial populations. The spatial setting also allows one to study large fluctuations of the speed of biological invasions. Finally, we briefly discuss possible directions of future work.
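
    The central step of the reviewed technique fits in a few lines. Schematically, for a single well-mixed population of typical size N (generic notation, not tied to any particular model in the review):

```latex
% WKB (eikonal) ansatz for a Markov population process with large typical size N
P(n,t) \asymp e^{-N S(x,t)}, \qquad x = n/N .
% To leading order in 1/N the master equation reduces to a Hamilton--Jacobi equation
\partial_t S + H\!\left(x, \partial_x S\right) = 0,
\qquad
H(x,p) = \sum_r W_r(x)\left(e^{p r} - 1\right),
% with W_r(x) the rescaled rate of a reaction step changing n by r; mean extinction
% or switching times then scale as \tau \sim e^{N \Delta S}, with \Delta S the
% action of the optimal (instanton) path.
```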

  9. Multifractal Analysis for Nutritional Assessment

    PubMed Central

    Park, Youngja; Lee, Kichun; Ziegler, Thomas R.; Martin, Greg S.; Hebbar, Gautam; Vidakovic, Brani; Jones, Dean P.

    2013-01-01

    The concept of multifractality is currently used to describe self-similar and complex scaling properties observed in numerous biological signals. Fractals are geometric objects or dynamic variations which exhibit some degree of similarity (irregularity) to the original object in a wide range of scales. This approach determines irregularity of biologic signal as an indicator of adaptability, the capability to respond to unpredictable stress, and health. In the present work, we propose the application of multifractal analysis of wavelet-transformed proton nuclear magnetic resonance (1H NMR) spectra of plasma to determine nutritional insufficiency. For validation of this method on 1H NMR signal of human plasma, standard deviation from classical statistical approach and Hurst exponent (H), left slope and partition function from multifractal analysis were extracted from 1H NMR spectra to test whether multifractal indices could discriminate healthy subjects from unhealthy, intensive care unit patients. After validation, the multifractal approach was applied to spectra of plasma from a modified crossover study of sulfur amino acid insufficiency and tested for associations with blood lipids. The results showed that standard deviation and H, but not left slope, were significantly different for sulfur amino acid sufficiency and insufficiency. Quadratic discriminant analysis of H, left slope and the partition function showed 78% overall classification accuracy according to sulfur amino acid status. Triglycerides and apolipoprotein C3 were significantly correlated with a multifractal model containing H, left slope, and standard deviation, and cholesterol and high-sensitivity C-reactive protein were significantly correlated to H. In conclusion, multifractal analysis of 1H NMR spectra provides a new approach to characterize nutritional status. PMID:23990878
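
    As a concrete illustration of one ingredient above, the Hurst exponent H of a self-similar signal can be read off from how the variance of wavelet detail coefficients grows across dyadic levels, Var(d_j) ∝ 2^(j(2H+1)). The sketch below uses a hand-rolled Haar transform and a Brownian test signal rather than NMR spectra:

```python
import numpy as np

def haar_detail_variances(signal, n_levels=8):
    """Variance of Haar wavelet detail coefficients per dyadic level.
    For self-similar signals the log2-variance is approximately linear
    in the level with slope 2H + 1, giving a Hurst estimate."""
    s = np.asarray(signal, float)
    variances = []
    for _ in range(n_levels):
        d = (s[0::2] - s[1::2]) / np.sqrt(2.0)   # Haar detail coefficients
        s = (s[0::2] + s[1::2]) / np.sqrt(2.0)   # Haar approximation
        variances.append(np.var(d))
    return np.array(variances)

# toy self-similar signal: cumulative sum of white noise (Brownian, H = 0.5)
rng = np.random.default_rng(7)
x = np.cumsum(rng.normal(0, 1, 2**14))
v = haar_detail_variances(x)
levels = np.arange(1, len(v) + 1)
slope = np.polyfit(levels[2:], np.log2(v[2:]), 1)[0]   # skip finest levels
print(f"estimated Hurst exponent H = {(slope - 1.0) / 2.0:.2f} (theory: 0.5)")
```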

  10. Deviation characteristics of specular reflectivity of micro-rough surface from Fresnel's equation

    NASA Astrophysics Data System (ADS)

    Zhang, W. J.; Qiu, J.; Liu, L. H.

    2015-07-01

    Specular reflectivity is an important radiative property in thermal engineering applications and reflection-based optical constant determinations, yet it is influenced by the surface micro-roughness that cannot be completely removed during the polishing process. In this work, we examined the deviation characteristics of the specular reflectivity of micro-rough surfaces from that predicted by the Fresnel equation under the assumption of a smooth surface. The effects of incident angle and relative roughness were numerically investigated for both 1D and 2D randomly micro-rough surfaces using full wave analysis, under the condition that the relative roughness is smaller than 0.05. For transverse magnetic (TM) wave incidence, it is observed that the deviation of specular reflectivity rises dramatically as the incident angle approaches the pseudo-Brewster angle, which violates the prediction based on the Rayleigh criterion. For transverse electric (TE) wave incidence, in contrast, the deviation of the specular reflectivity is much smaller and decreases monotonically with increasing incident angle, which agrees with the prediction from the Rayleigh criterion. Generally, the deviation of specular reflectivity for both TM and TE increases with the relative roughness, as commonly expected.
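
    The smooth-surface baseline from which these deviations are measured is given by the Fresnel equations. A sketch for an air-to-medium interface (the complex refractive index is illustrative):

```python
import numpy as np

def fresnel_reflectivity(n_complex, theta_deg):
    """Specular reflectivity of an ideally smooth surface (air -> medium)
    from the Fresnel equations, for TE (s) and TM (p) polarization."""
    th = np.deg2rad(theta_deg)
    cos_i = np.cos(th)
    sin_t = np.sin(th) / n_complex            # Snell's law, complex index
    cos_t = np.sqrt(1.0 - sin_t**2 + 0j)
    r_te = (cos_i - n_complex * cos_t) / (cos_i + n_complex * cos_t)
    r_tm = (n_complex * cos_i - cos_t) / (n_complex * cos_i + cos_t)
    return np.abs(r_te)**2, np.abs(r_tm)**2

n = 1.5 + 0.01j                               # illustrative optical constant
angles = np.array([0.0, 30.0, 56.3, 80.0])    # 56.3 deg ~ Brewster angle for n=1.5
R_te, R_tm = fresnel_reflectivity(n, angles)
for a, rs, rp in zip(angles, R_te, R_tm):
    print(f"{a:5.1f} deg: R_TE = {rs:.4f}, R_TM = {rp:.4f}")
```

    The near-zero TM reflectivity around the pseudo-Brewster angle is what makes the relative deviation there so sensitive to roughness, as described above.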

  11. Radial gradient and radial deviation radiomic features from pre-surgical CT scans are associated with survival among lung adenocarcinoma patients.

    PubMed

    Tunali, Ilke; Stringfield, Olya; Guvenis, Albert; Wang, Hua; Liu, Ying; Balagurunathan, Yoganand; Lambin, Philippe; Gillies, Robert J; Schabath, Matthew B

    2017-11-10

    The goal of this study was to extract features from radial deviation and radial gradient maps which were derived from thoracic CT scans of patients diagnosed with lung adenocarcinoma and assess whether these features are associated with overall survival. We used two independent cohorts from different institutions for training (n= 61) and test (n= 47) and focused our analyses on features that were non-redundant and highly reproducible. To reduce the number of features and covariates into a single parsimonious model, a backward elimination approach was applied. Out of 48 features that were extracted, 31 were eliminated because they were not reproducible or were redundant. We considered 17 features for statistical analysis and identified a final model containing the two most highly informative features that were associated with lung cancer survival. One of the two features, radial deviation outside-border separation standard deviation, was replicated in a test cohort exhibiting a statistically significant association with lung cancer survival (multivariable hazard ratio = 0.40; 95% confidence interval 0.17-0.97). Additionally, we explored the biological underpinnings of these features and found radial gradient and radial deviation image features were significantly associated with semantic radiological features.
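
    One plausible construction of such maps, sketched below for illustration: project the image gradient onto the unit vector pointing radially away from the tumor centroid, and take the angle between the two as the radial deviation. The paper's exact feature definitions follow its references and may differ:

```python
import numpy as np

def radial_maps(image, center):
    """Radial gradient and radial deviation maps (one common construction):
    the radial gradient is the projection of the intensity gradient onto
    the outward radial direction from `center`; the radial deviation is
    the angle between the gradient and that radial direction."""
    gy, gx = np.gradient(image.astype(float))
    yy, xx = np.indices(image.shape)
    ry, rx = yy - center[0], xx - center[1]
    rnorm = np.hypot(ry, rx) + 1e-12
    radial_gradient = gy * (ry / rnorm) + gx * (rx / rnorm)
    gnorm = np.hypot(gy, gx) + 1e-12
    radial_deviation = np.arccos(np.clip(radial_gradient / gnorm, -1.0, 1.0))
    return radial_gradient, radial_deviation

# toy "nodule": a bright Gaussian blob whose gradients point radially inward
y, x = np.indices((64, 64))
img = np.exp(-((y - 32)**2 + (x - 32)**2) / 100.0)
rg, rd = radial_maps(img, (32, 32))
print(f"mean radial gradient: {rg.mean():.4f}, mean deviation: {rd.mean():.2f} rad")
```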

  12. Elastic geobarometry: uncertainties arising from the geometry of the host-inclusion system

    NASA Astrophysics Data System (ADS)

    Mazzucchelli, Mattia L.; Burnley, Pamela; Angel, Ross J.; Chiara Domeneghetti, M.; Nestola, Fabrizio; Alvaro, Matteo

    2017-04-01

    Ultra-high-pressure metamorphic (UHPM) rocks are the only rocks that can provide insights into the detailed processes of deep and ultra-deep subduction. The application of conventional geobarometry to these rocks can be extremely challenging. Elastic geobarometry is an alternative and complementary method independent of chemistry and chemical equilibria. Minerals trapped as inclusions within other host minerals develop residual pressure (Pinc) on exhumation as a result of the differences between the thermo-elastic properties of the host and the inclusion. If correctly interpreted, measurement of the Pinc allows for a good estimate of the entrapment pressure. The solution for isotropic non-linear elasticity has been recently incorporated into the classic host-inclusion model [1; 2] and is now available in the EoSFit7c software [3]. However, this solution assumes a simple geometry for the host inclusion system with a small spherical inclusion located at the center of an infinite host. To verify the results of the analytical solution and to extend the analysis beyond the existing geometrical assumptions we performed numerical calculations using Finite Element Modelling (FEM). This approach has allowed us to evaluate the deviation from the pressure calculated with the isotropic solution if applied to real host-inclusion systems where the geometry is far from ideal, for example when the inclusion is not small, not at the center of the host and not spherical. In order to determine the effects of shape alone, we performed calculations with isotropic elasticity. Our results show that the deviations from the analytical solution arising from the geometry of the system are smaller than 1% if a spherical inclusion has a radius smaller than 1/4 of that of the host and is located at more than two inclusion radii from the external surface of the host. Deviations produced by changes in the shape of the inclusions include two contributions. First, the effect of edges and corners is small and introduces deviations of less than 2%. Second, the aspect ratio of the inclusion gives rise to large deviations in Pinc with shifts in the calculated pressures of more than 10% for platy inclusions (i.e. aspect ratio 1:5:5). The exact effect on Pinc is a complex function of both the values of the bulk and shear moduli of both host and inclusion, and the contrast in these values. For a soft quartz-like inclusion, the influence of the aspect ratio and of the presence of edges and corners becomes greater as the host is made softer and approaches the bulk modulus of the inclusion, provided a contrast in shear moduli remains. These deviations from the analytical solution induced by the shape are smaller than 1% only when inclusions are approximately spherical (i.e. ellipsoids with aspect ratios of less than 1:2:2) and the host is significantly stiffer than the inclusion. This work is supported by MIUR-SIR grant "MILE DEEp" (RBSI140351) to M. Alvaro, and ERC starting grant 307322 to F. Nestola. References: [1] Angel, R.J et al. (2014a) Am Mineral,99, 2146-2149 [2] Angel R.J et al. (2015) J. Metamorph. Geol.33, 801-813. [3] Angel RJ et al. (2014b) Z Kristallogr,229, 405-419.

  13. Modeling the viscosity of polydisperse suspensions: Improvements in prediction of limiting behavior

    NASA Astrophysics Data System (ADS)

    Mwasame, Paul M.; Wagner, Norman J.; Beris, Antony N.

    2016-06-01

    The present study develops a fully consistent extension of the approach pioneered by Farris ["Prediction of the viscosity of multimodal suspensions from unimodal viscosity data," Trans. Soc. Rheol. 12, 281-301 (1968)] to describe the viscosity of polydisperse suspensions significantly improving upon our previous model [P. M. Mwasame, N. J. Wagner, and A. N. Beris, "Modeling the effects of polydispersity on the viscosity of noncolloidal hard sphere suspensions," J. Rheol. 60, 225-240 (2016)]. The new model captures the Farris limit of large size differences between consecutive particle size classes in a suspension. Moreover, the new model includes a further generalization that enables its application to real, complex suspensions that deviate from ideal non-colloidal suspension behavior. The capability of the new model to predict the viscosity of complex suspensions is illustrated by comparison against experimental data.
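
    The Farris construction that the new model extends is itself very simple: for widely separated size classes, relative viscosities multiply class by class, each evaluated with a unimodal correlation. A sketch using Krieger-Dougherty as the (assumed) unimodal input:

```python
def relative_viscosity_farris(phis, phi_max=0.64, intrinsic=2.5):
    """Farris-type estimate for a suspension with widely separated size
    classes: multiply unimodal contributions, each modeled here with the
    Krieger-Dougherty correlation. Each entry of phis is the volume
    fraction of one class relative to the suspension of all smaller
    classes plus fluid (the Farris bookkeeping)."""
    eta = 1.0
    for phi in phis:
        eta *= (1.0 - phi / phi_max) ** (-intrinsic * phi_max)
    return eta

# bimodal example: 30% fines (relative to fluid+fines), then 30% coarse
print(f"relative viscosity: {relative_viscosity_farris([0.30, 0.30]):.2f}")
```

    The new model's contribution is precisely to interpolate away from this widely-separated limit toward comparable particle sizes and non-ideal behavior, which the multiplicative rule alone cannot capture.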

  14. Quantifying social development in autism.

    PubMed

    Volkmar, F R; Carter, A; Sparrow, S S; Cicchetti, D V

    1993-05-01

    This study was concerned with the development of quantitative measures of social development in autism. Multiple regression equations predicting social, communicative, and daily living skills on the Vineland Adaptive Behavior Scales were derived from a large, normative sample and applied to groups of autistic and nonautistic, developmentally disordered children. Predictive models included either mental or chronological age and other relevant variables. Social skills in the autistic group were more than two standard deviations below those predicted by their mental age; an index derived from the ratio of actual to predicted social skills correctly classified 94% of the autistic and 92% of the nonautistic, developmentally disordered cases. The findings are consistent with the idea that social disturbance is central in the definition of autism. The approach used in this study has potential advantages for providing more precise measures of social development in autism.

  15. Search for a new resonance decaying to a W or Z boson and a Higgs boson in the ℓℓ/ℓν/νν + bb¯ final states with the ATLAS detector

    DOE PAGES

    Aad, G.

    2015-06-16

    A search for a new resonance decaying to a W or Z boson and a Higgs boson in the ℓℓ/ℓν/νν+bb¯ final states is performed using 20.3 fb⁻¹ of pp collision data recorded at √s = 8 TeV with the ATLAS detector at the Large Hadron Collider. The search is conducted by examining the WH/ZH invariant mass distribution for a localized excess. No significant deviation from the Standard Model background prediction is observed. The results are interpreted in terms of constraints on the Minimal Walking Technicolor model and on a simplified approach based on a phenomenological Lagrangian of Heavy Vector Triplets.

  16. Space based observations: A state of the art solution for spatial monitoring tropical forested watershed productivity at regional scale in developing countries

    NASA Astrophysics Data System (ADS)

    Mahmud, M. R.

    2014-02-01

    This paper presents a simplified, operational approach to mapping the water yield of a tropical watershed using space-based multi-sensor remote sensing data. Two critical hydrological variables, namely rainfall and evapotranspiration, are estimated from satellite measurements and used to drive the well-known Thornthwaite & Mather water balance model. The satellite rainfall and ET estimates were able to represent the actual values on the ground with acceptable accuracy under most conditions. The satellite-derived water yield showed good agreement with the measured streamflow. High bias may result from (i) the behavior of satellite rainfall estimates during heavy storms and (ii) the large uncertainties and standard deviation of the MODIS temperature data product. The output of this study helps improve regional-scale hydrological assessment in Peninsular Malaysia.
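
    The Thornthwaite & Mather model is, at heart, monthly soil-moisture bookkeeping: surplus (water yield) appears once precipitation exceeds potential ET and the soil store is full. A minimal sketch follows; it uses a simple capped linear store in place of the exponential soil-moisture depletion curve of the full model, which is a simplifying assumption:

```python
def monthly_water_yield(precip, pet, soil_capacity=100.0):
    """Simplified Thornthwaite & Mather-style monthly water balance.
    precip, pet : sequences of monthly rainfall and potential ET (mm),
                  e.g. satellite estimates; soil_capacity in mm.
    Returns the monthly water yield (moisture surplus) in mm."""
    storage = soil_capacity                      # assume a saturated start
    surplus = []
    for p, e in zip(precip, pet):
        if p >= e:                               # wet month
            recharge = min(p - e, soil_capacity - storage)
            storage += recharge
            surplus.append(p - e - recharge)     # excess becomes yield
        else:                                    # dry month: draw on storage
            storage = max(storage - (e - p), 0.0)
            surplus.append(0.0)
    return surplus

print(monthly_water_yield([250, 180, 60, 20], [100, 120, 140, 150]))
# -> [150.0, 60.0, 0.0, 0.0]
```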

  17. Inefficiency in Latin-American market indices

    NASA Astrophysics Data System (ADS)

    Zunino, L.; Tabak, B. M.; Pérez, D. G.; Garavaglia, M.; Rosso, O. A.

    2007-11-01

    We explore the deviations from efficiency in the returns and volatilities of Latin-American market indices. Two different approaches are considered. First, the dynamics of the Hurst exponent are obtained via a wavelet rolling-sample approach, quantifying the degree of long memory exhibited by the stock market indices under analysis. Second, the Tsallis q entropic index is measured in order to take into account deviations from the Gaussian hypothesis. Different dynamic rankings of inefficiency are obtained, each reflecting a different source of inefficiency. Comparing with the results obtained for a developed country (US), we confirm a similar degree of long-range dependence for our emerging markets. Moreover, we show that the inefficiency in the Latin-American countries comes principally from the non-Gaussian form of the probability distributions.
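
    The abstract tracks the Hurst exponent with a wavelet rolling-sample estimator; as a simpler illustrative stand-in (our substitution, not the paper's method), the classical rescaled-range (R/S) estimator conveys the same idea — H near 0.5 indicates a memoryless market, H above 0.5 persistent long memory:

```python
import numpy as np

def hurst_rs(returns, min_chunk=8):
    """Classical rescaled-range (R/S) estimate of the Hurst exponent."""
    returns = np.asarray(returns, dtype=float)
    n = len(returns)
    sizes, rs_vals = [], []
    size = min_chunk
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            chunk = returns[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())
            r = dev.max() - dev.min()        # range of cumulative deviations
            s = chunk.std(ddof=1)
            if s > 0:
                rs.append(r / s)
        sizes.append(size)
        rs_vals.append(np.mean(rs))
        size *= 2
    # Slope of log(R/S) versus log(window size) estimates H.
    H, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return H

rng = np.random.default_rng(0)
print(hurst_rs(rng.standard_normal(4096)))   # close to 0.5 for white noise
```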

  18. Estimation of the measurement uncertainty in magnetic resonance velocimetry based on statistical models

    NASA Astrophysics Data System (ADS)

    Bruschewski, Martin; Freudenhammer, Daniel; Buchenberg, Waltraud B.; Schiffer, Heinz-Peter; Grundmann, Sven

    2016-05-01

    Velocity measurements with magnetic resonance velocimetry offer outstanding possibilities for experimental fluid mechanics. The purpose of this study was to provide practical guidelines for the estimation of the measurement uncertainty in such experiments. Based on various test cases, it is shown that the uncertainty estimate can vary substantially depending on how it is obtained. The conventional approach of estimating the uncertainty from the noise in the artifact-free background can lead to erroneous results: a deviation of up to -75% is observed in the presented experiments, and a similarly high deviation is demonstrated with data from other studies. As a more accurate approach, the uncertainty is estimated directly from the image region containing the flow sample. Two possible estimation methods are presented.

  19. Electron transfer statistics and thermal fluctuations in molecular junctions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goswami, Himangshu Prabal; Harbola, Upendra

    2015-02-28

    We derive analytical expressions for the probability distribution function (PDF) of electron transport in a simple model of a quantum junction in the presence of thermal fluctuations. Our approach is based on large deviation theory combined with the generating function method. For a large number of transferred electrons, the PDF is found to decay exponentially in the tails, with different rates due to the applied bias. This asymmetry in the PDF is related to the fluctuation theorem. Statistics of fluctuations are analyzed in terms of the Fano factor. Thermal fluctuations play a quantitative role in determining the statistics of electron transfer: they tend to suppress the average current while enhancing the fluctuations in particle transfer. This gives rise to both bunching and antibunching phenomena, as determined by the Fano factor. The thermal fluctuations and shot noise compete with each other and determine the net (effective) statistics of particle transfer. An exact analytical expression is obtained for the delay time distribution. The optimal values of the delay time between successive electron transfers can be lowered below the corresponding shot-noise values by tuning the thermal effects.
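
    The Fano factor invoked above is simply the variance-to-mean ratio of the transferred-charge counting distribution: F < 1 (sub-Poissonian) signals antibunching, F > 1 bunching. A minimal sketch on synthetic Poisson counts (an illustration, not the junction model of the paper):

```python
import numpy as np

def fano_factor(counts):
    """Fano factor F = Var(n) / <n> of electron-transfer counts."""
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

rng = np.random.default_rng(1)
poisson_counts = rng.poisson(lam=50, size=10_000)
print(fano_factor(poisson_counts))   # ~1.0 for a Poisson process
```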

  20. Time-Series Forecast Modeling on High-Bandwidth Network Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, Wucherl; Sim, Alex

    With the increasing number of geographically distributed scientific collaborations and the growing sizes of scientific data, it has become challenging for users to achieve the best possible network performance on a shared network. In this paper, we develop a model to forecast expected bandwidth utilization on high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and the scheduling of data movements on high-bandwidth networks to accommodate the ever-increasing data volume of large-scale scientific data applications. A univariate time-series forecast model is developed with Seasonal decomposition of Time series by Loess (STL) and the AutoRegressive Integrated Moving Average (ARIMA) on Simple Network Management Protocol (SNMP) path utilization measurement data. Compared with a traditional approach such as the Box-Jenkins methodology for training the ARIMA model, our forecast model reduces computation time by up to 92.6%. It also shows resilience against abrupt changes in network usage. Finally, our forecast model performs a large number of multi-step forecasts, and the forecast errors are within the mean absolute deviation (MAD) of the monitored measurements.
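
    The STL-then-ARIMA pipeline can be sketched in a few lines with statsmodels: deseasonalize, fit ARIMA to the seasonally adjusted series, then add the seasonal cycle back onto the multi-step forecast. The period, ARIMA order, and synthetic data below are illustrative assumptions, not the paper's settings:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL
from statsmodels.tsa.arima.model import ARIMA

def stl_arima_forecast(series, period=24, order=(1, 0, 1), steps=12):
    """Deseasonalize with STL, fit ARIMA on the adjusted series, and
    re-apply the last observed seasonal cycle over the horizon
    (assumes the horizon starts at a cycle boundary, as it does here)."""
    stl_fit = STL(series, period=period).fit()
    adjusted = series - stl_fit.seasonal
    fc = ARIMA(adjusted, order=order).fit().forecast(steps=steps)
    cycle = stl_fit.seasonal.iloc[-period:].to_numpy()
    return fc + np.resize(cycle, steps)

def mad(x):
    """Mean absolute deviation, the error yardstick quoted above."""
    x = np.asarray(x, dtype=float)
    return np.abs(x - x.mean()).mean()

# Synthetic hourly-style utilization: trend + daily cycle + noise.
t = np.arange(24 * 30)
series = pd.Series(10 + 0.01 * t + 3 * np.sin(2 * np.pi * t / 24)
                   + np.random.default_rng(2).normal(0, 0.5, t.size))
print(stl_arima_forecast(series).head())
print(mad(series))
```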

  1. Does size matter? Statistical limits of paleomagnetic field reconstruction from small rock specimens

    NASA Astrophysics Data System (ADS)

    Berndt, Thomas; Muxworthy, Adrian R.; Fabian, Karl

    2016-01-01

    As samples of ever-decreasing size are studied paleomagnetically, care has to be taken that the underlying assumptions of statistical thermodynamics (Maxwell-Boltzmann statistics) are met. Here we determine how many grains, and how large a magnetic moment, a sample needs in order to accurately record an ambient field. It is found that for samples with a thermoremanent magnetic moment larger than 10⁻¹¹ Am², the assumption of a sufficiently large number of grains is usually satisfied. Standard 25 mm diameter paleomagnetic samples usually contain enough magnetic grains that statistical errors are negligible, but "single silicate crystal" studies on, for example, zircon, plagioclase, and olivine crystals are approaching the limits of what is physically possible, leading to statistical errors in both the angular deviation and the paleointensity that are comparable to other sources of error. The reliability of nanopaleomagnetic imaging techniques capable of resolving individual grains (used, for example, to study the cloudy zone in meteorites), however, is questionable due to the limited area of the material covered.

  2. Volume weighting the measure of the universe from classical slow-roll expansion

    NASA Astrophysics Data System (ADS)

    Sloan, David; Silk, Joseph

    2016-05-01

    One of the most frustrating issues in early universe cosmology centers on how to reconcile the vast choice of universes in string theory, and in its most plausible high-energy sibling, eternal inflation, which jointly generate the string landscape, with the fine-tuned and hence relatively small number of universes that have undergone a large expansion and can accommodate observers and, in particular, galaxies. We show that such observations are highly favored for any system whereby physical parameters are distributed at a high energy scale, due to the conservation of the Liouville measure and the gauge nature of volume, asymptotically approaching a period of large isotropic expansion characterized by w = -1. Our interpretation predicts that all observational probes for deviations from w = -1 in the foreseeable future are doomed to failure. The purpose of this paper is not to introduce a new measure for the multiverse, but rather to show how what is perhaps the most natural and well-known measure, volume weighting, arises as a consequence of the conservation of the Liouville measure on phase space during the classical slow-roll expansion.

  3. Fluctuating observation time ensembles in the thermodynamics of trajectories

    NASA Astrophysics Data System (ADS)

    Budini, Adrián A.; Turner, Robert M.; Garrahan, Juan P.

    2014-03-01

    The dynamics of stochastic systems, both classical and quantum, can be studied by analysing the statistical properties of dynamical trajectories. The properties of ensembles of such trajectories for long, but fixed, times are described by large-deviation (LD) rate functions. These LD functions play the role of dynamical free energies: they are cumulant generating functions for time-integrated observables, and their analytic structure encodes dynamical phase behaviour. This ‘thermodynamics of trajectories’ approach is to trajectories and dynamics what the equilibrium ensemble method of statistical mechanics is to configurations and statics. Here we show that, just like in the static case, there are a variety of alternative ensembles of trajectories, each defined by their global constraints, with that of trajectories of fixed total time being just one of these. We show how the LD functions that describe an ensemble of trajectories where some time-extensive quantity is constant (and large) but where total observation time fluctuates can be mapped to those of the fixed-time ensemble. We discuss how the correspondence between generalized ensembles can be exploited in path sampling schemes for generating rare dynamical trajectories.
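
    The "dynamical free energies" referred to here can be written compactly. For a time-integrated observable A_t, the scaled cumulant generating function and its Legendre transform, the LD rate function, take the standard form below (one common sign convention from the trajectory-ensemble literature, assuming the conditions of the Gärtner-Ellis theorem hold):

```latex
\theta(s) \;=\; \lim_{t\to\infty}\frac{1}{t}\,\ln\bigl\langle e^{-s A_t}\bigr\rangle,
\qquad
I(a) \;=\; \sup_{s}\bigl[-s\,a \;-\; \theta(s)\bigr],
\qquad
P(A_t \simeq a\,t)\;\asymp\; e^{-t\,I(a)} .
```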

  4. Predicting plasmonic coupling with Mie-Gans theory in silver nanoparticle arrays

    NASA Astrophysics Data System (ADS)

    Ranjan, M.

    2013-09-01

    Plasmonic coupling is observed in self-aligned arrays of silver nanoparticles grown on ripple-patterned substrates. Large differences between the plasmon resonance wavelengths measured and those calculated using Mie-Gans theory indicate that strong plasmonic coupling exists in the nanoparticle arrays. Although plasmonic coupling exists both along and across the arrays, it is found to be much stronger along the arrays due to the shorter interparticle gap and particle elongation. This effect is responsible for the observed optical anisotropy in such arrays. The measured red-shift even in the transverse plasmon resonance mode with increasing nanoparticle aspect ratio deviates from the prediction of Mie-Gans theory, which essentially means that plasmonic coupling dominates over shape anisotropy. Plasmon resonance tuning is demonstrated by varying the plasmonic coupling systematically with nanoparticle aspect ratio and ripple wavelength: the plasmon resonance red-shifts with increasing aspect ratio along the ripples and blue-shifts with increasing ripple wavelength across the ripples. Therefore, the reported bottom-up approach for fabricating large-area coupled nanoparticle arrays can be used for various field-enhancement-based plasmonic applications.

  5. Time-Series Forecast Modeling on High-Bandwidth Network Measurements

    DOE PAGES

    Yoo, Wucherl; Sim, Alex

    2016-06-24

    With the increasing number of geographically distributed scientific collaborations and the growing sizes of scientific data, it has become challenging for users to achieve the best possible network performance on a shared network. In this paper, we develop a model to forecast expected bandwidth utilization on high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and the scheduling of data movements on high-bandwidth networks to accommodate the ever-increasing data volume of large-scale scientific data applications. A univariate time-series forecast model is developed with Seasonal decomposition of Time series by Loess (STL) and the AutoRegressive Integrated Moving Average (ARIMA) on Simple Network Management Protocol (SNMP) path utilization measurement data. Compared with a traditional approach such as the Box-Jenkins methodology for training the ARIMA model, our forecast model reduces computation time by up to 92.6%. It also shows resilience against abrupt changes in network usage. Finally, our forecast model performs a large number of multi-step forecasts, and the forecast errors are within the mean absolute deviation (MAD) of the monitored measurements.

  6. Supernova Cosmology Without Spectroscopy

    NASA Astrophysics Data System (ADS)

    Johnson, Elizabeth; Scolnic, Daniel; Kessler, Rick; Rykoff, Eli; Rozo, Eduardo

    2018-01-01

    Present and future supernova (SN) surveys face several challenges: acquiring redshifts of either the SN or its host galaxy, classifying a SN without a spectrum, and unknown relations between SN luminosity and host galaxy type. We present here a new approach that addresses these challenges. From the large sample of SNe discovered and measured by the Dark Energy Survey (DES), we cull the sample to only supernovae located in luminous red galaxies (LRGs). For these galaxies, photometric redshift estimates are expected to be accurate to a standard deviation of 0.02×(1+z). In addition, only Type Ia supernovae are expected to occur in these galaxies, providing a pure SN Ia sample. Furthermore, we can combine this high-redshift sample with a low-redshift sample of SNe located in LRGs, producing a sample that is less sensitive to host galaxy relations because the host galaxy demographic is consistent across the redshift range. We find that the current DES sample has ~250 SNe in LRGs, comparable in size to current SN Ia samples used to measure cosmological parameters. We present our method for producing a photometric-only Hubble diagram and measuring cosmological parameters. Finally, we discuss the systematic uncertainties of this approach and forecast constraints from this method for LSST, which should have a sample roughly 200 times as large.

  7. A Monte Carlo Approach to Unidimensionality Testing in Polytomous Rasch Models

    ERIC Educational Resources Information Center

    Christensen, Karl Bang; Kreiner, Svend

    2007-01-01

    Many statistical tests are designed to test the different assumptions of the Rasch model, but only a few are directed at detecting multidimensionality. The Martin-Löf test is an attractive approach, the disadvantage being that its null distribution deviates strongly from the asymptotic chi-square distribution for most realistic sample sizes. A Monte…

  8. [Quantitative measures for assessing the functional state of the human body during diagnostic procedure].

    PubMed

    Artemenko, M V

    2008-01-01

    Two approaches to the calculation of quantitative measures for assessing the functional state level of the human body are considered. These approaches are based on pattern recognition and fuzzy set theories and are used to construct diagnostic decision rules. The first approach uses data on the deviation of measured parameters from those of healthy persons; the second analyzes the degree of deviation of measured parameters from approximants characterizing the correlations between the parameters. A method for the synthesis of decision rules and the results of blood count-based research for a number of diseases (hemophilia, thrombocytopathy, hypertension, arrhythmia, hepatic cirrhosis, trichophytia) are presented. A change in the functional link between the cholesterol content in blood and the relative rate of variation of the AST and ALT enzymes in blood, from directly proportional (healthy state) to inversely proportional (hepatic cirrhosis), is discussed. It is shown that analysis of correlation changes in the measured parameters of the human body state during the diagnostic process is more effective for decision support systems than state space analysis.

  9. Symmetry analysis of talus bone: A Geometric morphometric approach.

    PubMed

    Islam, K; Dobbe, A; Komeili, A; Duke, K; El-Rich, M; Dhillon, S; Adeeb, S; Jomha, N M

    2014-01-01

    The main objective of this study was to use a geometric morphometric approach to quantify the left-right symmetry of talus bones. Analysis was carried out using CT scan images of 11 pairs of intact tali. Two important geometric parameters, volume and surface area, were quantified for the left and right talus bones. The geometric shape variations between the right and left tali were also measured using deviation analysis, and the locations of asymmetry in the geometric shapes were identified. Numerical results showed that talus bones are bilaterally symmetrical in nature: the difference between the surface areas of the left and right talus bones was less than 7.5%, and the difference in volume was likewise less than 7.5%. Results of the three-dimensional (3D) deviation analyses demonstrated that the mean deviation between left and right talus bones was in the range of -0.74 mm to 0.62 mm. It was observed that in eight of 11 subjects the deviation in symmetry occurred in regions that are clinically less important during talus surgery. We conclude that the left and right talus bones of intact human ankle joints show a strong degree of symmetry. The results of this study may have significance for talus surgery and for investigating traumatic talus injury, where the geometric shape of the contralateral talus can be used as a control. Cite this article: Bone Joint Res 2014;3:139-45.

  10. A Generalized Polynomial Chaos-Based Approach to Analyze the Impacts of Process Deviations on MEMS Beams.

    PubMed

    Gao, Lili; Zhou, Zai-Fa; Huang, Qing-An

    2017-11-08

    A microstructure beam is one of the fundamental elements in MEMS devices such as cantilever sensors, RF/optical switches, varactors, and resonators. It is still difficult to precisely predict the performance of MEMS beams with currently available simulators because of inevitable process deviations. Feasible numerical methods are therefore required to improve the yield and profitability of MEMS devices. In this work, process deviations are treated as stochastic variables, and a newly developed numerical method, generalized polynomial chaos (GPC), is applied to the simulation of the MEMS beam. A doubly-clamped polybeam has been utilized to verify the accuracy of GPC against our Monte Carlo (MC) approaches. Performance predictions have been made on the residual stress by obtaining its distributions in GaAs Monolithic Microwave Integrated Circuit (MMIC)-based MEMS beams. The results show that the errors of the GPC approximations are within 1% of the MC simulations. An appropriate choice of 4th-order GPC expansion with orthogonal terms also succeeds in greatly reducing the MC simulation effort. The mean value of the residual stress concluded from experimental tests differs by about 1.1% from that of the 4th-order GPC method, and the 4th-order GPC approximation attains the mean test value of the residual stress with a probability of about 54.3%. The corresponding yield exceeds 90% within two standard deviations of the mean.
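
    For a single standard-normal process deviation, the GPC machinery amounts to projecting the response onto orthogonal Hermite polynomials and reading the mean and variance off the coefficients. A minimal one-dimensional sketch follows; the toy response function and quadrature settings are illustrative assumptions, not the beam model of the paper:

```python
import numpy as np
from math import factorial, sqrt, pi
from numpy.polynomial import hermite_e as He

def gpc_coefficients(f, order=4, quad_pts=32):
    """Project f(X), X ~ N(0,1), onto probabilists' Hermite polynomials:
    f(X) ~ sum_k c_k He_k(X), with c_k = E[f(X) He_k(X)] / k!
    (orthogonality: E[He_j He_k] = k! delta_jk). Coefficients are
    computed by Gauss quadrature for the weight exp(-x^2/2)."""
    x, w = He.hermegauss(quad_pts)
    w = w / sqrt(2.0 * pi)               # normalize to the N(0,1) density
    fx = f(x)
    return np.array([np.sum(w * fx * He.hermeval(x, [0.0] * k + [1.0]))
                     / factorial(k) for k in range(order + 1)])

# Toy "beam response": a smooth nonlinear function of one process deviation.
response = lambda x: np.exp(0.3 * x) + 0.1 * x ** 2
c = gpc_coefficients(response, order=4)

mean_gpc = c[0]                                         # E[f] = c_0
var_gpc = sum(factorial(k) * c[k] ** 2 for k in range(1, c.size))

x_mc = np.random.default_rng(3).standard_normal(200_000)   # MC reference
print(mean_gpc, response(x_mc).mean())   # means should agree closely
print(var_gpc, response(x_mc).var())     # variances likewise
```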

  11. A Generalized Polynomial Chaos-Based Approach to Analyze the Impacts of Process Deviations on MEMS Beams

    PubMed Central

    Gao, Lili

    2017-01-01

    A microstructure beam is one of the fundamental elements in MEMS devices such as cantilever sensors, RF/optical switches, varactors, and resonators. It is still difficult to precisely predict the performance of MEMS beams with currently available simulators because of inevitable process deviations. Feasible numerical methods are therefore required to improve the yield and profitability of MEMS devices. In this work, process deviations are treated as stochastic variables, and a newly developed numerical method, generalized polynomial chaos (GPC), is applied to the simulation of the MEMS beam. A doubly-clamped polybeam has been utilized to verify the accuracy of GPC against our Monte Carlo (MC) approaches. Performance predictions have been made on the residual stress by obtaining its distributions in GaAs Monolithic Microwave Integrated Circuit (MMIC)-based MEMS beams. The results show that the errors of the GPC approximations are within 1% of the MC simulations. An appropriate choice of 4th-order GPC expansion with orthogonal terms also succeeds in greatly reducing the MC simulation effort. The mean value of the residual stress concluded from experimental tests differs by about 1.1% from that of the 4th-order GPC method, and the 4th-order GPC approximation attains the mean test value of the residual stress with a probability of about 54.3%. The corresponding yield exceeds 90% within two standard deviations of the mean. PMID:29117096

  12. Towards a complete physically based forecast model for underwater noise related to impact pile driving.

    PubMed

    Fricke, Moritz B; Rolfes, Raimund

    2015-03-01

    An approach for the prediction of underwater noise caused by impact pile driving is described and validated against in situ measurements. The model is divided into three sub-models. The first sub-model, based on the finite element method, describes the vibration of the pile and the resulting acoustic radiation into the surrounding water and soil column. The mechanical excitation of the pile by the piling hammer is estimated by the second sub-model using an analytical approach that takes the large vertical dimension of the ram into account. The third sub-model is based on the split-step Padé solution of the parabolic equation and targets the long-range propagation up to 20 km. In order to assume realistic environmental properties for the validation, a geoacoustic model was derived from spatially averaged geological information about the investigation area. Although it can be concluded from the validation that the model and the underlying assumptions are appropriate, there are some deviations between modeled and measured results. Possible explanations for the observed errors are discussed.

  13. Extreme events in a vortex gas simulation of a turbulent half-jet

    NASA Astrophysics Data System (ADS)

    Suryanarayanan, Saikishan; Pathikonda, Gokul; Narasimha, Roddam

    2012-11-01

    Extensive simulations [arXiv:1008.2876v1 [physics.flu-dyn], BAPS.2010.DFD.LE.4] have shown that the temporally evolving vortex gas mixing layer has three regimes, including one with a universal spreading rate. The present study explores the development of spatially evolving mixing layers using a vortex gas model based on Basu et al. (1995, Appl. Math. Modelling). The effects of the velocity ratio (r) are analyzed via the most extensive simulations of this kind to date, involving up to 10000 vortices and averaging over up to 1000 convective times. While the temporal limit is approached as r approaches unity, striking features such as extreme events involving coherent structures, bending, deviation of the convection velocity from the mean velocity, spatial feedback, and greater sensitivity to downstream and free-stream boundary conditions are observed in the half-jet (r = 0) limit. A detailed statistical analysis reveals possible causes for the large scatter across experiments, as opposed to the commonly adopted explanation of asymptotic dependence on initial conditions. Supported in part by contract no. Intel/RN/4288.

  14. Improving long time behavior of Poisson bracket mapping equation: A non-Hamiltonian approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Hyun Woo; Rhee, Young Min, E-mail: ymrhee@postech.ac.kr

    2014-05-14

    Understanding nonadiabatic dynamics in complex systems is a challenging subject. A series of semiclassical approaches have been proposed to tackle the problem in various settings. The Poisson bracket mapping equation (PBME) utilizes a partial Wigner transform and a mapping representation for its formulation and has been developed to describe nonadiabatic processes in an efficient manner. Operationally, it is expressed as a set of Hamilton's equations of motion, similar to more conventional classical molecular dynamics. However, this original Hamiltonian PBME sometimes suffers from large deviations in accuracy, especially in the long-time limit. Here, we propose a non-Hamiltonian variant of PBME to improve its behavior especially in that limit. As a benchmark, we simulate spin-boson and photosynthetic model systems and find that it consistently outperforms the original PBME and its Ehrenfest-style variant. We explain the source of this improvement by decomposing the components of the mapping Hamiltonian and by assessing the energy flow between the system and the bath. We discuss the strengths and weaknesses of our scheme with a view to future prospects.

  15. Performance of an Optimally Tuned Range-Separated Hybrid Functional for 0-0 Electronic Excitation Energies.

    PubMed

    Jacquemin, Denis; Moore, Barry; Planchat, Aurélien; Adamo, Carlo; Autschbach, Jochen

    2014-04-08

    Using a set of 40 conjugated molecules, we assess the performance of an "optimally tuned" range-separated hybrid functional in reproducing experimental 0-0 energies. The selected protocol accounts for the impact of solvation using a corrected linear-response continuum approach and for vibrational corrections through calculations of the zero-point energies of both ground and excited states, and provides basis-set-converged data thanks to the systematic use of diffuse-containing atomic basis sets at all computational steps. It turns out that an optimally tuned long-range corrected hybrid form of the Perdew-Burke-Ernzerhof functional, LC-PBE*, delivers both the smallest mean absolute error (0.20 eV) and standard deviation (0.15 eV) of all tested approaches, while the obtained correlation (0.93) is large but remains slightly smaller than its M06-2X counterpart (0.95). In addition, the efficiency of two other recently developed exchange-correlation functionals, namely SOGGA11-X and ωB97X-D, has been determined in order to allow more complete comparisons with previously published data.
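
    The three yardsticks quoted above (mean absolute error, standard deviation of the errors, and correlation) are easy to reproduce for any benchmark set. A minimal sketch with toy numbers (the values below are illustrative, not the paper's data):

```python
import numpy as np

def benchmark_stats(computed, experimental):
    """MAE, standard deviation of the signed errors, and Pearson R."""
    computed = np.asarray(computed, dtype=float)
    experimental = np.asarray(experimental, dtype=float)
    err = computed - experimental
    mae = np.abs(err).mean()
    sd = err.std(ddof=1)
    r = np.corrcoef(computed, experimental)[0, 1]
    return mae, sd, r

computed = [2.10, 2.85, 3.40, 2.62]       # toy 0-0 energies (eV)
experimental = [2.05, 2.95, 3.30, 2.70]
print(benchmark_stats(computed, experimental))
```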

  16. Inherent Structure versus Geometric Metric for State Space Discretization

    PubMed Central

    Liu, Hanzhong; Li, Minghai; Fan, Jue; Huo, Shuanghong

    2016-01-01

    Inherent structure (IS) and geometry-based clustering methods are commonly used for analyzing molecular dynamics trajectories. ISs are obtained by minimizing the sampled conformations into local minima on the potential/effective energy surface; conformations that are minimized into the same energy basin belong to one cluster. We investigate how applying these two methods of trajectory decomposition influences our understanding of the thermodynamics and kinetics of alanine tetrapeptide. We find that at the microcluster level, the IS approach and the root-mean-square deviation (RMSD)-based clustering method give totally different results. Depending on the local features of the energy landscape, conformations with close RMSDs can be minimized into different minima, while conformations with large RMSDs can be minimized into the same basin. However, the relaxation timescales calculated from the transition matrices built on the microclusters are similar. The discrepancy at the microcluster level leads to different macroclusters. Although the dynamic models established through both clustering methods are validated as approximately Markovian, the IS approach seems to give a more meaningful state space discretization at the macrocluster level. PMID:26915811
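
    The relaxation timescales compared above follow directly from the eigenvalues of a row-stochastic transition matrix: t_i = -lag / ln(lambda_i) for the eigenvalues below the stationary eigenvalue 1. A minimal sketch (the toy 3-state matrix is our illustration):

```python
import numpy as np

def implied_timescales(T, lag=1.0):
    """Relaxation timescales from a row-stochastic transition matrix T
    estimated at lag time `lag`. Uses eigenvalue magnitudes, which is
    exact for reversible chains (an assumption of this sketch)."""
    evals = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
    return -lag / np.log(evals[1:])     # skip the stationary eigenvalue 1

T = np.array([[0.90, 0.08, 0.02],       # toy 3-state transition matrix
              [0.10, 0.85, 0.05],
              [0.02, 0.08, 0.90]])
print(implied_timescales(T))
```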

  17. A Quantitative Evaluation of the Flipped Classroom in a Large Lecture Principles of Economics Course

    ERIC Educational Resources Information Center

    Balaban, Rita A.; Gilleskie, Donna B.; Tran, Uyen

    2016-01-01

    This research provides evidence that the flipped classroom instructional format increases student final exam performance, relative to the traditional instructional format, in a large lecture principles of economics course. The authors find that the flipped classroom directly improves performance by 0.2 to 0.7 standard deviations, depending on…

  18. One-side forward-backward asymmetry in top quark pair production at the CERN Large Hadron Collider

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Youkai; Xiao Bo; Zhu Shouhua

    2010-11-01

    Both D0 and CDF at the Tevatron have reported measurements of the forward-backward asymmetry in top pair production, which show a possible deviation from the standard model QCD prediction. In this paper, we explore how to examine the same higher-order QCD effects at the more powerful Large Hadron Collider.
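
    For reference, one common definition of the asymmetry measured at the Tevatron, in terms of the top-antitop rapidity difference Δy (our gloss; the abstract does not spell out the definition), is:

```latex
A_{FB} \;=\; \frac{N(\Delta y > 0) \;-\; N(\Delta y < 0)}
                  {N(\Delta y > 0) \;+\; N(\Delta y < 0)},
\qquad \Delta y \;=\; y_{t} - y_{\bar t} .
```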

  19. Nonlinear Elastic Effects on the Energy Flux Deviation of Ultrasonic Waves in GR/EP Composites

    NASA Technical Reports Server (NTRS)

    Prosser, William H.; Kriz, R. D.; Fitting, Dale W.

    1992-01-01

    In isotropic materials, the direction of the energy flux (energy per unit time per unit area) of an ultrasonic plane wave is always along the normal to the wave front. In anisotropic materials, however, this is true only along symmetry directions. Along other directions, the energy flux of the wave deviates from the intended direction of propagation, a phenomenon known as energy flux deviation. The direction of the energy flux depends on the elastic coefficients of the material. This effect has been demonstrated in many anisotropic crystalline materials; in transparent quartz crystals, Schlieren photographs have been obtained that allow visualization of the ultrasonic waves and the energy flux deviation. The energy flux deviation in graphite/epoxy (gr/ep) composite materials can be quite large because of their high anisotropy. The flux deviation angle has been calculated for unidirectional gr/ep composites as a function of both fiber orientation and fiber volume content, and experimental measurements have also been made in unidirectional composites. It has further been demonstrated that changes in composite materials which alter the elastic properties, such as moisture absorption by the matrix or fiber degradation, can be detected nondestructively by measurements of the energy flux shift. In this research, the effects of nonlinear elasticity on energy flux deviation in unidirectional gr/ep composites were studied. Because of elastic nonlinearity, the angle of the energy flux deviation was shown to be a function of applied stress. This shift in flux deviation was modeled using acoustoelastic theory and the previously measured second- and third-order elastic stiffness coefficients for T300/5208 gr/ep. Two conditions of applied uniaxial stress were considered: in the first case, the direction of applied uniaxial stress was along the fiber axis (x3), while in the second it was perpendicular to the fiber axis, along the laminate stacking direction (x1).

  20. Detecting long-duration cloud contamination in hyper-temporal NDVI imagery

    NASA Astrophysics Data System (ADS)

    Ali, Amjad; de Bie, C. A. J. M.; Skidmore, A. K.

    2013-10-01

    Cloud contamination degrades the quality of hyper-temporal NDVI imagery and its subsequent interpretation. Short-duration cloud impacts are easily removed by using quality flags and an upper-envelope filter, but long-duration cloud contamination of NDVI imagery remains. In this paper, an approach that goes beyond the use of quality flags and upper-envelope filtering is tested to detect when and where long-duration clouds are responsible for unreliable NDVI readings, so that a user can flag those data as missing. The study is based on the MODIS Terra and combined Terra-Aqua 16-day NDVI products for southern Ghana, where persistent cloud cover occurs throughout the year. The combined product can be assumed to have less cloud contamination, since it is based on two images per day. Short-duration cloud effects were removed from the two products using the adaptive Savitzky-Golay filter. For each 'cleaned' product, an unsupervised classified map was then prepared using the ISODATA algorithm, and, by class, plots were prepared to depict changes over time in the means and standard deviations of NDVI values. Comparing plots of similar classes, long-duration cloud contamination appeared as a decline in mean NDVI below the lower limit of the 95% confidence interval with a coinciding increase in standard deviation above the upper limit of the 95% confidence interval. Regression analysis was carried out per NDVI class in two randomly selected groups in order to statistically test the standard deviation values related to long-duration cloud contamination. A decline in seasonal (growing season) NDVI values below the lower limit of the 95% confidence interval, with a concurrent increase in standard deviation values above the upper limit, was noted in 34 NDVI classes. The regression analysis showed that differences in NDVI class values between the Terra and Terra-Aqua imagery were significantly correlated (p < 0.05) with the corresponding standard deviation values of the Terra imagery for all NDVI classes of the two selected groups. The method successfully detects long-duration cloud contamination that results in unreliable NDVI values. The approach offers scientists interested in time-series analysis a method of masking, by area (class), the periods when pre-cleaned NDVI values remain affected by clouds. It requires no additional data but involves unsupervised classification of the imagery to evaluate class-specific mean NDVI and standard deviation values over time.
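
    The joint signature described above (class mean dropping below its lower 95% confidence limit while the class standard deviation rises above its upper limit) can be sketched as a simple per-class flagging rule. The normal-approximation confidence limits below are our reading of the thresholding, not the paper's exact procedure:

```python
import numpy as np

def flag_long_duration_clouds(ndvi_class):
    """Flag composite periods in one NDVI class whose class mean falls
    below the lower 95% confidence limit while the class standard
    deviation exceeds the upper 95% limit.

    ndvi_class : 2-D array, pixels-in-class x composite periods.
    Returns a boolean mask over the composite periods."""
    mean_t = ndvi_class.mean(axis=0)            # per-period class mean
    std_t = ndvi_class.std(axis=0, ddof=1)      # per-period class spread
    mean_lower = mean_t.mean() - 1.96 * mean_t.std(ddof=1)
    std_upper = std_t.mean() + 1.96 * std_t.std(ddof=1)
    return (mean_t < mean_lower) & (std_t > std_upper)
```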

  1. Beyond δ: Tailoring marked statistics to reveal modified gravity

    NASA Astrophysics Data System (ADS)

    Valogiannis, Georgios; Bean, Rachel

    2018-01-01

    Models which attempt to explain the accelerated expansion of the universe through large-scale modifications of General Relativity (GR) must satisfy the stringent experimental constraints on GR in the solar system. Viable candidates invoke a "screening" mechanism that dynamically suppresses deviations in high-density environments, making their overall detection challenging even for ambitious future large-scale structure surveys. We present methods to efficiently simulate the non-linear properties of such theories, and consider how a series of statistics that reweight the density field to accentuate deviations from GR can be applied to enhance the overall signal-to-noise ratio in differentiating the models from GR. Our results demonstrate that the cosmic density field can yield additional, invaluable cosmological information, beyond the simple density power spectrum, that will enable surveys to more confidently discriminate between modified gravity models and ΛCDM.

  2. Linking clinical measurements and kinematic gait patterns of toe-walking using fuzzy decision trees.

    PubMed

    Armand, Stéphane; Watelain, Eric; Roux, Emmanuel; Mercier, Moïse; Lepoutre, François-Xavier

    2007-03-01

    Toe-walking is one of the most prevalent gait deviations and has been linked to many diseases. Three major ankle kinematic patterns have been identified in toe-walkers, but the relationships between the causes of toe-walking and these patterns remain unknown. This study aims to identify these relationships. Clearly, such knowledge would increase our understanding of this gait deviation, and could help clinicians plan treatment. The large quantity of data provided by gait analysis often makes interpretation a difficult task. Artificial intelligence techniques were used in this study to facilitate interpretation as well as to decrease subjective interpretation. Of the 716 limbs evaluated, 240 showed signs of toe-walking and met inclusion criteria. The ankle kinematic pattern of the evaluated limbs during gait was assigned to one of three toe-walking pattern groups to build the training data set. Toe-walker clinical measurements (range of movement, muscle spasticity and muscle strength) were coded in fuzzy modalities, and fuzzy decision trees were induced to create intelligible rules allowing toe-walkers to be assigned to one of the three groups. A stratified 10-fold cross validation situated the classification accuracy at 81%. Twelve rules depicting the causes of toe-walking were selected, discussed and characterized using kinematic, kinetic and EMG charts. This study proposes an original approach to linking the possible causes of toe-walking with gait patterns.

  3. Two-Component Structure of the Radio Source 0014+813 from VLBI Observations within the CONT14 Program

    NASA Astrophysics Data System (ADS)

    Titov, O. A.; Lopez, Yu. R.

    2018-03-01

    We consider a method of reconstructing the structure delay of extended radio sources without constructing their radio images. The residuals derived after the adjustment of geodetic VLBI observations are used for this purpose. We show that the simplest model of a radio source consisting of two point components can be represented by four parameters (the angular separation of the components, the mutual orientation relative to the poleward direction, the flux-density ratio, and the spectral index difference) that are determined for each baseline of a multi-baseline VLBI network. The efficiency of this approach is demonstrated by estimating the coordinates of the radio source 0014+813 observed during the two-week CONT14 program organized by the International VLBI Service (IVS) in May 2014. Large systematic deviations have been detected in the residuals of the observations for the radio source 0014+813. The averaged characteristics of the radio structure of 0014+813 at a frequency of 8.4 GHz can be calculated from these deviations. Our modeling using four parameters has confirmed that the source consists of two components at an angular separation of 0.5 mas in the north-south direction. Using the structure delay when adjusting the CONT14 observations leads to a correction of the average declination estimate for the radio source 0014+813 by 0.070 mas.

  4. Estimation of Radiative Efficiency of Chemicals with Potentially Significant Global Warming Potential

    EPA Pesticide Factsheets

    The set of chemical substances in commerce that may have significant global warming potential (GWP) is not well defined. Although there are currently over 200 chemicals with high GWP reported by the Intergovernmental Panel on Climate Change, World Meteorological Organization, or Environmental Protection Agency, there may be hundreds of additional chemicals that also have significant GWP. Evaluation of various approaches to estimate radiative efficiency (RE) and atmospheric lifetime will help to refine GWP estimates for compounds where no measured IR spectrum is available. This study compares values of RE calculated using computational chemistry techniques for 235 chemical compounds against the best available values. It is important to assess the reliability of the underlying computational methods for computing RE in order to understand the sources of deviations from the best available values. Computed vibrational frequency data are used to estimate RE values using several Pinnock-type models, and the values derived using these models are found to be in reasonable agreement with reported RE values (though significant improvement is obtained through scaling). The effect of varying the computational method and basis set used to calculate the frequency data is also discussed. It is found that the vibrational intensities have a strong dependence on basis set and are largely responsible for differences in computed values of RE in this study. Deviations of

  5. Integrating geological uncertainty in long-term open pit mine production planning by ant colony optimization

    NASA Astrophysics Data System (ADS)

    Gilani, Seyed-Omid; Sattarvand, Javad

    2016-02-01

    Meeting production targets in terms of ore quantity and quality is critical for a successful mining operation. In-situ grade uncertainty causes both deviations from production targets and general financial deficits. A new stochastic optimization algorithm based on the ant colony optimization (ACO) approach is developed herein to integrate geological uncertainty, described through a series of simulated orebodies. Two different strategies were developed, based on a single predefined probability value (Prob) and on multiple probability values (Prob_nt), respectively, in order to improve the initial solutions created by the deterministic ACO procedure. Application to the Sungun copper mine in northwest Iran demonstrates the ability of the stochastic approach to create a single schedule, control the risk of deviating from production targets over time, and increase the project value. A comparison between the two strategies and the traditional approach illustrates that the multiple-probability strategy produces better schedules, whereas the single predefined probability is more practical in projects requiring a high degree of flexibility.

  6. Topology Trivialization and Large Deviations for the Minimum in the Simplest Random Optimization

    NASA Astrophysics Data System (ADS)

    Fyodorov, Yan V.; Le Doussal, Pierre

    2014-01-01

    Finding the global minimum of a cost function given by the sum of a quadratic and a linear form in N real variables over the (N-1)-dimensional sphere is one of the simplest, yet paradigmatic, problems in Optimization Theory, known as the "trust region subproblem" or "constrained least squares problem". When both terms in the cost function are random, this amounts to studying the ground state energy of the simplest spherical spin glass in a random magnetic field. We first identify and study two distinct large-N scaling regimes in which the linear term (magnetic field) leads to a gradual topology trivialization, i.e. a reduction in the total number N_tot of critical (stationary) points in the cost function landscape. In the first regime, N_tot remains of the order of N and the cost function (energy) generically has two almost degenerate minima with Tracy-Widom (TW) statistics. In the second regime, the number of critical points is of order unity, with a finite probability for a single minimum. In that case, the mean total number of extrema (minima and maxima) of the cost function is given by the Laplace transform of the TW density, and the distribution of the global minimum energy is expected to take a universal scaling form generalizing the TW law. Though the full form of that distribution is not yet known to us, one of its far tails can be inferred from large deviation theory for the global minimum. In the rest of the paper, we show how to use the replica method to obtain the probability density of the minimum energy in the large-deviation approximation, by finding both the rate function and the leading pre-exponential factor.
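
    In symbols, the cost function described in the first sentence can be written as below (the sign conventions are our assumption; the abstract states only "quadratic plus linear form on the sphere"):

```latex
H(\mathbf{x}) \;=\; \tfrac{1}{2}\,\mathbf{x}^{\mathsf T} J\,\mathbf{x}
               \;-\; \mathbf{h}\cdot\mathbf{x},
\qquad \mathbf{x}\in\mathbb{R}^{N},\quad |\mathbf{x}|^{2}=N,
```

    with a random coupling matrix J and a random field h; the strength of the field h controls the degree of topology trivialization.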

  7. Using an In-House Approach to Computer-Assisted Design and Computer-Aided Manufacturing Reconstruction of the Maxilla.

    PubMed

    Numajiri, Toshiaki; Morita, Daiki; Nakamura, Hiroko; Tsujiko, Shoko; Yamochi, Ryo; Sowa, Yoshihiro; Toyoda, Kenichiro; Tsujikawa, Takahiro; Arai, Akihito; Yasuda, Makoto; Hirano, Shigeru

    2018-06-01

    Computer-assisted design (CAD) and computer-aided manufacturing (CAM) techniques are in widespread use for maxillofacial reconstruction. However, CAD/CAM surgical guides are commercially available only in limited areas. To use this technology where commercial guides are not available, the authors developed a CAD/CAM technique in which all processes are performed by the surgeon (an in-house approach). The authors describe their experience and the characteristics of their in-house CAD/CAM reconstruction of the maxilla. This was a retrospective study of maxillary reconstruction with a free osteocutaneous flap. Free CAD software was used for virtual surgery and to design the cutting guides (maxilla and fibula), which were printed on a 3-dimensional printer. After the model surgery and pre-bending of the titanium plates, the actual reconstructions were performed. The authors compared the clinical information, preoperative plan, and postoperative reconstruction data. A reconstruction was judged accurate if more than 80% of the reconstructed points were within a deviation of 2 mm. Although on-site adjustment was necessary in particular cases, all 4 reconstructions were judged accurate. In total, 3 days were needed before surgery for planning, printing, and pre-bending of the plates. The average ischemic time was 134 minutes (flap suturing and bone fixation, 70 minutes; vascular anastomoses, 64 minutes). The mean deviation after reconstruction was 0.44 mm (standard deviation, 0.97); the proportions of points within given deviations were 67.8% for 1 mm, 93.8% for 2 mm, and 98.6% for 3 mm. The disadvantages of the routine use of CAD/CAM reconstruction are intraoperative changes in defect size and local tissue scarring. Good accuracy was obtained for CAD/CAM-guided reconstructions based on an in-house approach, with the theoretical advantage of computer simulation contributing to that accuracy. An in-house approach could be an option for maxillary reconstruction.

  8. Reverse engineering of machine-tool settings with modified roll for spiral bevel pinions

    NASA Astrophysics Data System (ADS)

    Liu, Guanglei; Chang, Kai; Liu, Zeliang

    2013-05-01

    Although a great deal of research has been dedicated to the synthesis of spiral bevel gears, little of it relates to reverse engineering. An approach is proposed to recover the machine-tool settings of the pinion of a spiral bevel gear drive on the basis of blank and tooth surface data obtained with a coordinate measuring machine (CMM). Real tooth contact analysis (RTCA) is performed to preliminarily ascertain the contact pattern, the motion curve, and the position of the mean contact point. The tangent to the contact path and the motion curve are then interpolated in the least-squares sense to extract initial values of the bias angle and the higher-order coefficients (HOC) of the modified roll motion. A trial tooth surface is generated from machine-tool settings derived by local synthesis from the initial meshing performances and the modified roll motion. An optimization objective is formed that equals the tooth surface deviation between the real and trial tooth surfaces; the design variables are the parameters describing the meshing performances at the mean contact point in addition to the HOC. When the objective is optimized within an arbitrarily given convergence tolerance, the machine-tool settings together with the HOC are obtained. The proposed approach is verified on a spiral bevel pinion used in the accessory gearbox of an aviation engine. In the example, the trial tooth surfaces approach the real tooth surface on the whole. The results show that the convergent tooth surface deviation is on average less than 0.5 μm for the concave side and less than 1.3 μm for the convex side. The largest tooth surface deviation is 6.7 μm, located at the corner of the grid on the convex side; the nodes with relatively large tooth surface deviations are all located at the boundary of the grid. The approach thus determines the machine-tool settings of a spiral bevel pinion by reverse engineering, without prior knowledge of the theoretical tooth surfaces or the corresponding machine-tool settings.

  9. Longitudinal and Cross-Sectional Analyses of Visual Field Progression in Participants of the Ocular Hypertension Treatment Study (OHTS)

    PubMed Central

    Chauhan, Balwantray C; Keltner, John L; Cello, Kim E; Johnson, Chris A; Anderson, Douglas R; Gordon, Mae O; Kass, Michael A

    2014-01-01

    Purpose: Visual field progression can be determined by evaluating the visual field through serial examinations (longitudinal analysis) or by a change in classification derived from comparison with age-matched normal data in single examinations (cross-sectional analysis). We determined the agreement between these two approaches in data from the Ocular Hypertension Treatment Study (OHTS). Methods: Visual field data from 3088 eyes of 1570 OHTS participants (median follow-up 7 years, 15 tests with static automated perimetry) were analysed. Longitudinal analyses were performed with change probability based on total and pattern deviation, and cross-sectional analyses with the Glaucoma Hemifield Test, Corrected Pattern Standard Deviation, and Mean Deviation. The rates of Mean Deviation and General Height change were compared to estimate the degree of diffuse loss in emerging glaucoma. Results: The agreement on progression between longitudinal and cross-sectional analyses ranged from 50% to 61% and remained nearly constant across a wide range of criteria. In contrast, the agreement on absence of progression ranged from 97% to 99.7%, being highest for the stricter criteria. Analyses of pattern deviation were more conservative than those of total deviation, with a 3 to 5 times lower incidence of progression. Most participants developing field loss had both diffuse and focal change. Conclusions: Despite considerable overall agreement, 40% to 50% of eyes identified as having progressed by either longitudinal or cross-sectional analysis were identified by only one of the analyses. Because diffuse change is part of early glaucomatous damage, pattern deviation analyses may underestimate progression in patients with ocular hypertension. PMID:21149774

  10. Open inflation in the landscape

    NASA Astrophysics Data System (ADS)

    Yamauchi, Daisuke; Linde, Andrei; Naruko, Atsushi; Sasaki, Misao; Tanaka, Takahiro

    2011-08-01

    The open inflation scenario is attracting renewed interest in the context of the string landscape. Since there are a large number of metastable de Sitter vacua in the string landscape, tunneling transitions to lower metastable vacua through bubble nucleation occur quite naturally, leading to a natural realization of open inflation. Although the deviation of Ω0 from unity is observationally bounded to be small, we argue that the effect of this small deviation on the large-angle CMB anisotropies can be significant for tensor-type perturbations in the open inflation scenario. We consider the situation in which there is a large hierarchy between the energy scale of the quantum tunneling and that of the slow-roll inflation in the nucleated bubble. If the potential just after tunneling is steep enough, a rapid-roll phase appears before the slow-roll inflation; in this case the power spectrum is basically determined by the Hubble rate during the slow-roll inflation. If such a rapid-roll phase is absent, the power spectrum instead keeps the memory of the high energy density there in the large angular components. Furthermore, the amplitude of the large angular components can be enhanced by the wall fluctuation mode if the bubble wall tension is small. Therefore, although even the dominant quadrupole component is suppressed by the factor (1-Ω0)², one can construct models in which the deviation of Ω0 from unity is large enough to produce measurable effects. We also consider a more general class of models, where the false vacuum decay may occur through Hawking-Moss tunneling, as well as models involving more than one scalar field. We discuss scalar perturbations in these models and point out that a large set of such models is already ruled out by observational data, unless there was a very long stage of slow-roll inflation after the tunneling. These results show that observational data allow us to test various assumptions concerning the structure of string theory potentials and the duration of the last stage of inflation.

  11. Integrating resource selection into spatial capture-recapture models for large carnivores

    USGS Publications Warehouse

    Proffitt, Kelly M.; Goldberg, Joshua; Hebblewhite, Mark; Russell, Robin E.; Jimenez, Ben; Robinson, Hugh S.; Pilgrim, Kristine; Schwartz, Michael K.

    2015-01-01

    Wildlife managers need reliable methods to estimate large carnivore densities and population trends; yet large carnivores are elusive, difficult to detect, and occur at low densities, making traditional approaches intractable. Recent advances in spatial capture-recapture (SCR) models have provided new approaches for monitoring trends in wildlife abundance, and these methods are particularly applicable to large carnivores. We applied SCR models in a Bayesian framework to estimate mountain lion densities in the Bitterroot Mountains of west-central Montana. We incorporated an existing resource selection function (RSF) as a density covariate to account for heterogeneity in habitat use across the study area and included data collected from harvested lions. We identified individuals through DNA samples collected by (1) biopsy darting mountain lions detected in systematic surveys of the study area, (2) opportunistically collecting hair and scat samples, and (3) sampling all harvested mountain lions. We included 80 DNA samples collected from 62 individuals in the analysis. Including information on predicted habitat use as a covariate on the distribution of activity centers reduced the median estimated density by 44%, the standard deviation by 7%, and the width of the 95% credible intervals by 10% compared with standard SCR models. Within the two management units of interest, we estimated median mountain lion densities of 4.5 mountain lions/100 km² (95% CI = 2.9, 7.7) and 5.2 mountain lions/100 km² (95% CI = 3.4, 9.1). Including harvested individuals (dead recovery) did not create a significant bias in the detection process by introducing individuals that could not be detected after removal; however, the dead-recovery component of the model did have a substantial effect on results by increasing sample size. The ability to account for heterogeneity in habitat use provides a useful extension to SCR models and will enhance the ability of wildlife managers to reliably and economically estimate the density of wildlife populations, particularly large carnivores.
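
    A standard SCR formulation consistent with the abstract (the symbols and the half-normal detection form are our gloss, not spelled out in the record) models the encounter rate of an individual with activity center s at a detector location x, and lets the RSF enter as a log-linear covariate on the density of activity centers:

```latex
\lambda(\mathbf{x}\mid\mathbf{s}) \;=\; \lambda_{0}\,
      \exp\!\left(-\frac{\lVert \mathbf{x}-\mathbf{s}\rVert^{2}}{2\sigma^{2}}\right),
\qquad
\log \mu(\mathbf{s}) \;=\; \alpha_{0} \;+\; \alpha_{1}\,\mathrm{RSF}(\mathbf{s}) .
```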

  12. Robust portfolio selection based on asymmetric measures of variability of stock returns

    NASA Astrophysics Data System (ADS)

    Chen, Wei; Tan, Shaohua

    2009-10-01

    This paper addresses a new uncertainty set, the interval random uncertainty set, for robust optimization. The form of the interval random uncertainty set makes it suitable for capturing the downside and upside deviations of real-world data. These deviation measures capture distributional asymmetry and lead to better optimization results. We also apply our interval random chance-constrained programming to robust mean-variance portfolio selection under interval random uncertainty sets in the elements of the mean vector and covariance matrix. Numerical experiments with real market data indicate that our approach results in better portfolio performance.
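
    As an illustration of the asymmetric variability measures this record builds on, the sketch below computes downside and upside semideviations of a return series separately around the mean; the function name and sample data are hypothetical, not taken from the paper.

      import numpy as np

      def semideviations(returns):
          """Downside and upside semideviations around the mean return."""
          r = np.asarray(returns, dtype=float)
          mu = r.mean()
          downside = np.sqrt(np.mean(np.minimum(r - mu, 0.0) ** 2))
          upside = np.sqrt(np.mean(np.maximum(r - mu, 0.0) ** 2))
          return downside, upside

      # Example: weekly returns of a single stock (illustrative numbers).
      d, u = semideviations([0.012, -0.034, 0.008, 0.021, -0.051, 0.015, 0.002])
      print(f"downside={d:.4f}, upside={u:.4f}")  # asymmetry shows up as d != u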

  13. Classification of California streams using combined deductive and inductive approaches: Setting the foundation for analysis of hydrologic alteration

    USGS Publications Warehouse

    Pyne, Matthew I.; Carlisle, Daren M.; Konrad, Christopher P.; Stein, Eric D.

    2017-01-01

    Regional classification of streams is an early step in the Ecological Limits of Hydrologic Alteration framework. Many stream classifications are based on an inductive approach using hydrologic data from minimally disturbed basins, but this approach may underrepresent streams from heavily disturbed basins or sparsely gaged arid regions. An alternative is a deductive approach, using watershed climate, land use, and geomorphology to classify streams, but this approach may miss important hydrological characteristics of streams. We classified all stream reaches in California using both approaches. First, we used Bayesian and hierarchical clustering to classify reaches according to watershed characteristics. Streams were clustered into seven classes according to elevation, sedimentary rock, and winter precipitation. Permutation-based analysis of variance and random forest analyses were used to determine which hydrologic variables best separate streams into their respective classes. Stream typology (i.e., the class that a stream reach is assigned to) is shaped mainly by patterns of high and mean flow behavior within the stream's landscape context. Additionally, random forest was used to determine which hydrologic variables best separate minimally disturbed reference streams from non-reference streams in each of the seven classes. In contrast to stream typology, deviation from reference conditions is more difficult to detect and is largely defined by changes in low-flow variables, average daily flow, and duration of flow. Our combined deductive/inductive approach allows us to estimate flow under minimally disturbed conditions based on the deductive analysis and compare it to measured flow based on the inductive analysis in order to estimate hydrologic change.

  14. Assessing the Impact of Student Counseling Service Centres at Tertiary Education Institutions: How Should It Be Approached?

    ERIC Educational Resources Information Center

    Morrison, J. M.; Brand, H. J.; Cilliers, C. D.

    2006-01-01

    This article conceptually addresses the issue of assessing the impact of student counselling and development services in higher education institutions. It deviates from recent approaches which primarily examine the impact of selected interventions on specific indicators. In this article the question is asked whether the capacity to deliver the…

  15. Characterizing Accuracy and Precision of Glucose Sensors and Meters

    PubMed Central

    2014-01-01

    There is a need for a method to describe the precision and accuracy of glucose measurement as a smooth continuous function of glucose level rather than as a step function for a few discrete ranges of glucose. We propose and illustrate a method to generate a "Glucose Precision Profile" showing absolute relative deviation (ARD) and/or %CV versus glucose level to better characterize measurement errors at any glucose level. We examine the relationship between glucose measured by test and comparator methods using linear regression. We examine bias by plotting deviation = (test – comparator method) versus glucose level. We compute the deviation, absolute deviation (AD), ARD, and standard deviation (SD) for each data pair. We utilize curve-smoothing procedures to minimize the effects of random sampling variability and to facilitate identification and display of the underlying relationships between ARD or %CV and glucose level. AD, ARD, SD, and %CV display smooth continuous relationships versus glucose level. Estimates of MARD and %CV are subject to relatively large errors in the hypoglycemic range, due in part to a markedly nonlinear relationship with glucose level and in part to the limited number of observations in the hypoglycemic range. The curvilinear relationships of ARD and %CV versus glucose level are helpful when characterizing and comparing the precision and accuracy of glucose sensors and meters. PMID:25037194
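
    A minimal sketch of the per-pair metrics in this record (deviation, AD, ARD) with a stand-in smoothing step: the paired readings are invented, and the rolling median merely substitutes for whatever curve-smoothing procedure the authors actually used.

      import numpy as np
      import pandas as pd

      # Paired readings: test meter vs. comparator method (illustrative, mg/dL).
      df = pd.DataFrame({"test": [52, 75, 98, 142, 200, 251, 310],
                         "ref":  [58, 71, 101, 139, 210, 248, 300]})

      df["deviation"] = df["test"] - df["ref"]
      df["AD"] = df["deviation"].abs()
      df["ARD"] = 100 * df["AD"] / df["ref"]      # absolute relative deviation, %

      # Crude smoothing of ARD versus glucose level (rolling median stand-in).
      df = df.sort_values("ref")
      df["ARD_smooth"] = df["ARD"].rolling(3, center=True, min_periods=1).median()
      print(df[["ref", "ARD", "ARD_smooth"]])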

  16. Role of the standard deviation in the estimation of benchmark doses with continuous data.

    PubMed

    Gaylor, David W; Slikker, William

    2004-12-01

    For continuous data, risk is defined here as the proportion of animals with values above a large percentile (e.g., the 99th) or below a small percentile (e.g., the 1st) of the distribution of values among control animals. It is known that reducing the standard deviation of measurements through improved experimental techniques will result in less stringent (higher) doses for the lower confidence limit on the benchmark dose that is estimated to produce a specified risk of animals with abnormal levels for a biological effect. Thus, a somewhat larger (less stringent) lower confidence limit is obtained that may be used as a point of departure for low-dose risk assessment. It is shown in this article that it is important for the benchmark dose to be based primarily on the standard deviation among animals, s(a), apart from the standard deviation of measurement errors, s(m), within animals. If the benchmark dose is incorrectly based on the overall standard deviation among average values for animals, which includes measurement error variation, the benchmark dose will be overestimated and the risk will be underestimated. The bias increases as s(m) increases relative to s(a). The bias is relatively small if s(m) is less than one-third of s(a), a condition achieved in most experimental designs.
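
    The size of this bias follows from the variance decomposition s(total)² = s(a)² + s(m)² for independent measurement error; the short calculation below, an illustration rather than the article's analysis, shows why s(m) < s(a)/3 keeps the inflation of the observed standard deviation to about 5%.

      import numpy as np

      s_a = 1.0                               # between-animal SD (arbitrary units)
      for s_m in (s_a / 10, s_a / 3, s_a):    # within-animal measurement-error SD
          s_total = np.sqrt(s_a**2 + s_m**2)  # SD observed when error is ignored
          print(f"s_m = {s_m:.2f}: observed SD = {s_total:.3f} "
                f"({100 * (s_total / s_a - 1):.1f}% above s_a)")
      # At s_m = s_a/3 the observed SD exceeds s_a by only ~5.4%, consistent
      # with the article's rule of thumb that the bias is then relatively small.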

  17. Pair natural orbital and canonical coupled cluster reaction enthalpies involving light to heavy alkali and alkaline earth metals: the importance of sub-valence correlation.

    PubMed

    Minenkov, Yury; Bistoni, Giovanni; Riplinger, Christoph; Auer, Alexander A; Neese, Frank; Cavallo, Luigi

    2017-04-05

    In this work, we tested canonical and domain based pair natural orbital coupled cluster methods (CCSD(T) and DLPNO-CCSD(T), respectively) for a set of 32 ligand exchange and association/dissociation reaction enthalpies involving ionic complexes of Li, Be, Na, Mg, Ca, Sr, Ba and Pb(II). Two strategies were investigated: in the former, only valence electrons were included in the correlation treatment, giving rise to the computationally very efficient FC (frozen core) approach; in the latter, all non-ECP electrons were included in the correlation treatment, giving rise to the AE (all electron) approach. Apart from reactions involving Li and Be, the FC approach resulted in non-homogeneous performance. The FC approach leads to very small errors (<2 kcal mol⁻¹) for some reactions of Na, Mg, Ca, Sr, Ba and Pb, while for a few reactions of Ca and Ba deviations up to 40 kcal mol⁻¹ have been obtained. The large errors are due both to artificial mixing of the core (sub-valence) orbitals of the metals with the valence orbitals of oxygen and halogens in the molecular orbitals treated as core, and to neglect of core-core and core-valence correlation effects. These large errors are reduced to a few kcal mol⁻¹ if the AE approach is used or the sub-valence orbitals of the metals are included in the correlation treatment. On the technical side, the CCSD(T) and DLPNO-CCSD(T) results differ by a fraction of a kcal mol⁻¹, indicating that the latter method is the method of choice when CPU efficiency is essential. For completely black-box applications, as required in catalysis or thermochemical calculations, we recommend the DLPNO-CCSD(T) method with all electrons that are not covered by effective core potentials included in the correlation treatment, together with correlation-consistent polarized core valence basis sets of cc-pwCVQZ(-PP) quality.

  18. A global probabilistic tsunami hazard assessment from earthquake sources

    USGS Publications Warehouse

    Davies, Gareth; Griffin, Jonathan; Lovholt, Finn; Glimsdal, Sylfest; Harbitz, Carl; Thio, Hong Kie; Lorito, Stefano; Basili, Roberto; Selva, Jacopo; Geist, Eric L.; Baptista, Maria Ana

    2017-01-01

    Large tsunamis occur infrequently but have the capacity to cause enormous numbers of casualties, damage to the built environment and critical infrastructure, and economic losses. A sound understanding of tsunami hazard is required to underpin management of these risks, and while tsunami hazard assessments are typically conducted at regional or local scales, globally consistent assessments are required to support international disaster risk reduction efforts, and can serve as a reference for local and regional studies. This study presents a global-scale probabilistic tsunami hazard assessment (PTHA), extending previous global-scale assessments based largely on scenario analysis. Only earthquake sources are considered, as they represent about 80% of the recorded damaging tsunami events. Globally extensive estimates of tsunami run-up height are derived at various exceedance rates, and the associated uncertainties are quantified. Epistemic uncertainties in the exceedance rates of large earthquakes often lead to large uncertainties in tsunami run-up. Deviations between modelled tsunami run-up and event observations are quantified, and found to be larger than suggested in previous studies. Accounting for these deviations in PTHA is important, as it leads to a pronounced increase in predicted tsunami run-up for a given exceedance rate.

  19. Development and flight test of a helicopter compact, portable, precision landing system concept

    NASA Technical Reports Server (NTRS)

    Clary, G. R.; Bull, J. S.; Davis, T. J.; Chisholm, J. P.

    1984-01-01

    An airborne, radar-based, precision approach concept is being developed and flight tested as part of NASA's Rotorcraft All-Weather Operations Research Program. A transponder-based beacon landing system (BLS), applying state-of-the-art X-band radar technology and digital processing techniques, was built and is being flight tested to demonstrate the concept's feasibility. The BLS airborne hardware consists of an add-on microprocessor, installed in conjunction with the aircraft weather/mapping radar, which analyzes the radar beacon receiver returns and determines range, localizer deviation, and glide-slope deviation. The ground station is an inexpensive, portable unit which can be quickly deployed at a landing site. Results from the flight test program show that the BLS concept has significant potential for providing rotorcraft with low-cost, precision instrument approach capability in remote areas.

  20. The right hemisphere in esthetic perception.

    PubMed

    Bromberger, Bianca; Sternschein, Rebecca; Widick, Page; Smith, William; Chatterjee, Anjan

    2011-01-01

    Little is known about the neuropsychology of art perception and evaluation. Most neuropsychological approaches to art have focused on art production and have been anecdotal and qualitative. The field is in desperate need of quantitative methods if it is to advance. Here, we combine a quantitative approach to the assessment of art with modern voxel-lesion-symptom-mapping methods to determine brain-behavior relationships in art perception. We hypothesized that the perception of different attributes of art is likely to be disrupted by damage to different regions of the brain. Twenty participants with right hemisphere damage were given the Assessment of Art Attributes, which is designed to quantify judgments of descriptive attributes of visual art. Each participant rated 24 paintings on 6 conceptual attributes (depictive accuracy, abstractness, emotion, symbolism, realism, and animacy) and 6 perceptual attributes (depth, color temperature, color saturation, balance, stroke, and simplicity), as well as their interest in and preference for these paintings. Deviation scores were obtained for each brain-damaged participant for each attribute, based on correlations with group average ratings from 30 age-matched healthy participants. Right hemisphere damage affected participants' judgments of abstractness, accuracy, and stroke quality. Damage to areas within different parts of the frontal, parietal, and lateral temporal cortices produced deviations in judgments in four of six conceptual attributes (abstractness, symbolism, realism, and animacy). Of the formal attributes, only depth was affected, by inferior prefrontal damage. No areas of brain damage were associated with deviations in interestingness or preference judgments. The perception of conceptual and formal attributes in artwork may in part dissociate from each other and from evaluative judgments. More generally, this approach demonstrates the feasibility of quantitative approaches to the neuropsychology of art.

  1. Determining the best population-level alcohol consumption model and its impact on estimates of alcohol-attributable harms

    PubMed Central

    2012-01-01

    Background: The goals of our study are to determine the most appropriate model for alcohol consumption as an exposure for burden of disease, to analyze the effect of the chosen alcohol consumption distribution on the estimation of alcohol Population-Attributable Fractions (PAFs), and to characterize the chosen alcohol consumption distribution by exploring whether there is a global relationship within the distribution. Methods: To identify the best model, the Log-Normal, Gamma, and Weibull prevalence distributions were examined using data from 41 surveys from Gender, Alcohol and Culture: An International Study (GENACIS) and from the European Comparative Alcohol Study. To assess the effect of these distributions on the estimated alcohol PAFs, we calculated the alcohol PAF for diabetes, breast cancer, and pancreatitis using the three above-named distributions and using the more traditional approach based on categories. The relationship between the mean and the standard deviation of the Gamma distribution was estimated using data from 851 datasets for 66 countries from GENACIS and from the STEPwise approach to Surveillance from the World Health Organization. Results: The Log-Normal distribution provided a poor fit for the survey data, with the Gamma and Weibull distributions providing better fits. Additionally, our analyses showed that there were no marked differences in the alcohol PAF estimates based on the Gamma or Weibull distributions compared to PAFs based on categorical alcohol consumption estimates. The standard deviation of the alcohol distribution was highly dependent on the mean, with a unit increase in mean consumption associated with an increase in the standard deviation of 1.258 (95% CI: 1.223 to 1.293) (R² = 0.9207) for women and of 1.171 (95% CI: 1.144 to 1.197) (R² = 0.9474) for men. Conclusions: Although the Gamma distribution and the Weibull distribution provided similar results, the Gamma distribution is recommended to model alcohol consumption from population surveys due to its fit, flexibility, and the ease with which it can be modified. The results showed that a large degree of the variance of the standard deviation of the alcohol consumption Gamma distribution was explained by the mean alcohol consumption, allowing alcohol consumption to be modeled through a Gamma distribution using only average consumption. PMID:22490226
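
    A minimal sketch of the concluding point, parameterizing a Gamma consumption distribution from the mean alone via the reported linear mean-SD relation; the function and its zero-intercept assumption are illustrative, not the paper's fitted model.

      from scipy import stats

      def gamma_from_mean(mean, slope):
          """Gamma distribution with SD predicted linearly from the mean
          (slope 1.258 for women, 1.171 for men, per the abstract; a zero
          intercept is assumed here for simplicity)."""
          sd = slope * mean
          shape = (mean / sd) ** 2
          scale = sd ** 2 / mean
          return stats.gamma(a=shape, scale=scale)

      dist = gamma_from_mean(mean=12.0, slope=1.171)   # e.g., grams/day, men
      print(dist.mean(), dist.std())                   # 12.0 and ~14.05
      print(1 - dist.cdf(60.0))                        # fraction drinking >60 g/day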

  2. Transfer of myelin-specific cells deviated in vitro towards IL-4 production ameliorates ongoing experimental allergic neuritis

    PubMed Central

    Ekerfelt, C; Dahle, C; Weissert, R; Kvarnström, M; Olsson, T; Ernerudh, J

    2001-01-01

    A causal role of IL-4 (Th2) production for recovery in experimental allergic neuritis (EAN) was indicated by experiments where Th1-like autoreactive cell populations, taken from the induction phase of the disease, were deviated to extensive secretion of IL-4 in a selective fashion, by ex vivo stimulation with autoantigen in the presence of IL-4. The deviated cells were adoptively transferred to EAN rats at a time just prior to the onset of clinical signs. This treatment ameliorated EAN compared with sham treatment. This therapeutic approach, with generation of autoreactive IL-4-secreting cells ex vivo followed by subsequent adoptive transfer, may become a new selective treatment of organ-specific autoimmune diseases since, in contrast to previous attempts, it is done in a physiological and technically easy way. PMID:11168007

  3. Using principal component analysis for selecting network behavioral anomaly metrics

    NASA Astrophysics Data System (ADS)

    Gregorio-de Souza, Ian; Berk, Vincent; Barsamian, Alex

    2010-04-01

    This work addresses new approaches to behavioral analysis of networks and hosts for the purposes of security monitoring and anomaly detection. Most commonly used approaches simply implement anomaly detectors for one, or a few, simple metrics, and those metrics can exhibit unacceptable false alarm rates. For instance, the anomaly score of network communication is defined as the reciprocal of the likelihood that a given host uses a particular protocol (or destination); this definition may result in an unrealistically high threshold for alerting to avoid being flooded by false positives. We demonstrate that selecting and adapting the metrics and thresholds on a host-by-host or protocol-by-protocol basis can be done by established multivariate analyses such as PCA. We show how to determine one or more metrics, for each network host, that capture the highest available amount of information regarding the baseline behavior and show relevant deviations reliably. We describe the methodology used to pick from a large selection of available metrics, and illustrate a method for comparing the resulting classifiers. Using our approach, we are able to reduce the resources required to properly identify misbehaving hosts, protocols, or networks, by dedicating system resources to only those metrics that actually matter in detecting network deviations.
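
    The sketch below illustrates the PCA step described here: rank candidate per-host metrics by how much baseline variance they carry and keep the most informative ones. The metric matrix is synthetic and the column meanings are hypothetical.

      import numpy as np
      from sklearn.decomposition import PCA

      # Rows = observation windows, columns = candidate per-host metrics
      # (e.g., packets/s, distinct destinations, protocol mix); values synthetic.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 8))
      X[:, 2] += 3 * X[:, 0]          # correlated, partly redundant metrics

      pca = PCA().fit(X)
      # Loadings of the leading components indicate which metrics carry most
      # of the baseline behavioral variance and are worth monitoring per host.
      loadings = np.abs(pca.components_[:2]).sum(axis=0)
      ranked = np.argsort(loadings)[::-1]
      print("metric ranking:", ranked)
      print("variance explained:", pca.explained_variance_ratio_[:2])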

  4. The Schiff angular bremsstrahlung distribution from composite media

    NASA Astrophysics Data System (ADS)

    Taylor, M. L.; Dalton, B.; Franich, R. D.

    2012-12-01

    The Schiff differential for the angular distribution of bremsstrahlung is widely employed, but calculations involving composite materials (i.e. compounds and mixtures) are often undertaken in a somewhat ad hoc fashion. In this work, we suggest an alternative approach to power-law estimates of the effective atomic number utilising Seltzer and Berger's combined approach in order to generate single-valued effective atomic numbers applicable over a large energy range (in the worst case deviation from constancy of about 2% between 10 keV and 1 GeV). Differences with power-law estimates of Z for composites are potentially significant, particularly for low-Z media such as biological or surrogate materials as relevant within the context of medical physics. As an example, soft tissue differs by >70% and cortical bone differs by >85%, while for high-Z composites such as a tungsten-rhenium alloy the difference is of the order of 1%. Use of the normalised Schiff formula for shape only does not exhibit strong Z dependence. Consequently, in such contexts the differences are negligible - the power-law approach overestimates the magnitude by 1.05% in the case of water and underestimates it by <0.1% for the high-Z alloys. The differences in the distribution are most pronounced for small angles and where the bremsstrahlung quanta are low energy.

  5. Consumers in a Sustainable Food Supply Chain (COSUS): Understanding Consumer Behavior to Encourage Food Waste Reduction

    PubMed Central

    Rohm, Harald; Oostindjer, Marije; Aschemann-Witzel, Jessica; Symmank, Claudia; L. Almli, Valérie; de Hooge, Ilona E.; Normann, Anne; Karantininis, Kostas

    2017-01-01

    Consumers are directly and indirectly responsible for a significant fraction of food waste which, for a large part, could be avoided if they were willing to accept food that is suboptimal, i.e., food that deviates in sensory characteristics (odd shape, discoloration), or that has a best-before date which is approaching or has passed, but that is still perfectly fine to eat. The choice to accept or discard suboptimal food is taken either before or after purchase (hence, in the retail store or in the household). The aim of the European research project COSUS (Consumers in a sustainable food supply chain) was to increase consumer acceptance of suboptimal food, before and after purchase, by implementing targeted strategies that are based on consumer insights, and that are feasible for and acceptable by the food sector. To reach this aim, different methodological approaches were applied to analyze this issue, to experiment with different aspects, and to test the resulting interventions. Each of these approaches was undertaken by competent consortium partners from Denmark, Germany, Norway, Sweden and The Netherlands. The project finally provides validated strategies to promote the distribution and consumption of suboptimal foods, thereby improving resource efficiency in the food chain and contributing to a more sustainable food supply. PMID:29186883

  6. Consumers in a Sustainable Food Supply Chain (COSUS): Understanding Consumer Behavior to Encourage Food Waste Reduction.

    PubMed

    Rohm, Harald; Oostindjer, Marije; Aschemann-Witzel, Jessica; Symmank, Claudia; L Almli, Valérie; de Hooge, Ilona E; Normann, Anne; Karantininis, Kostas

    2017-11-27

    Consumers are directly and indirectly responsible for a significant fraction of food waste which, for a large part, could be avoided if they were willing to accept food that is suboptimal, i.e., food that deviates in sensory characteristics (odd shape, discoloration), or that has a best-before date which is approaching or has passed, but that is still perfectly fine to eat. The choice to accept or discard suboptimal food is taken either before or after purchase (hence, in the retail store or in the household). The aim of the European research project COSUS (Consumers in a sustainable food supply chain) was to increase consumer acceptance of suboptimal food, before and after purchase, by implementing targeted strategies that are based on consumer insights, and that are feasible for and acceptable by the food sector. To reach this aim, different methodological approaches were applied to analyze this issue, to experiment with different aspects, and to test the resulting interventions. Each of these approaches was undertaken by competent consortium partners from Denmark, Germany, Norway, Sweden and The Netherlands. The project finally provides validated strategies to promote the distribution and consumption of suboptimal foods, thereby improving resource efficiency in the food chain and contributing to a more sustainable food supply.

  7. Emotions and emotional approach and avoidance strategies in fibromyalgia.

    PubMed

    van Middendorp, Henriët; Lumley, Mark A; Jacobs, Johannes W G; van Doornen, Lorenz J P; Bijlsma, Johannes W J; Geenen, Rinie

    2008-02-01

    Disturbances in emotional functioning may contribute to psychological and physical symptoms in patients with fibromyalgia. This study examined emotions and emotion-regulation strategies in women with fibromyalgia and in controls, and how these variables relate to symptoms of fibromyalgia. We compared 403 women with fibromyalgia to 196 control women using self-report questionnaires. Negative emotions and the use of emotional-avoidance strategies were elevated, and positive emotions were reduced, in fibromyalgia patients; the alexithymia scale "difficulty identifying feelings" showed a large deviation from normal. Emotional-approach measures were not deviant. In the fibromyalgia sample, emotional-avoidance strategies were highly correlated with more mental distress and were modestly correlated with more pain and fatigue, while emotional-approach strategies were only minimally related to better functioning. We tested two interaction models. The intense experiencing of emotions was related to more pain only in patients who lack the ability to process or describe emotions. Although fibromyalgia patients showed deficits in the experiencing of positive affect, positive affect did not buffer the association between pain and negative affect. This study demonstrates increased negative emotions and decreased positive emotions, as well as increased emotional-avoidance strategies, in women with fibromyalgia. Research should test whether interventions that reduce emotional avoidance lead to health improvements in women with fibromyalgia.

  8. An efficient, versatile and scalable pattern growth approach to mine frequent patterns in unaligned protein sequences.

    PubMed

    Ye, Kai; Kosters, Walter A; Ijzerman, Adriaan P

    2007-03-15

    Pattern discovery in protein sequences is often based on multiple sequence alignments (MSA). The procedure can be computationally intensive and often requires manual adjustment, which may be particularly difficult for a set of deviating sequences. In contrast, two algorithms, PRATT2 (http://www.ebi.ac.uk/pratt/) and TEIRESIAS (http://cbcsrv.watson.ibm.com/), are used to directly identify frequent patterns from unaligned biological sequences without an attempt to align them. Here we propose a new algorithm with more efficiency and more functionality than both PRATT2 and TEIRESIAS, and discuss some of its applications to G protein-coupled receptors, a protein family of important drug targets. In this study, we designed and implemented six algorithms to mine three different pattern types from either one or two datasets using a pattern growth approach. We compared our approach to PRATT2 and TEIRESIAS in efficiency, completeness and the diversity of pattern types. Compared to PRATT2, our approach is faster, capable of processing large datasets and able to identify the so-called type III patterns. Our approach is comparable to TEIRESIAS in the discovery of the so-called type I patterns but has additional functionality, such as mining the so-called type II and type III patterns and finding discriminating patterns between two datasets. The source code for the pattern growth algorithms and their pseudo-code are available at http://www.liacs.nl/home/kosters/pg/.

  9. Early Improper Motion Detection in Golf Swings Using Wearable Motion Sensors: The First Approach

    PubMed Central

    Stančin, Sara; Tomažič, Sašo

    2013-01-01

    This paper presents an analysis of a golf swing to detect improper motion in the early phase of the swing. Led by the desire to achieve a consistent shot outcome, a particular golfer would (in multiple trials) prefer to perform completely identical golf swings. In reality, some deviations from the desired motion are always present due to the comprehensive nature of the swing motion. Swing motion deviations that are not detrimental to performance are acceptable. This analysis is conducted using a golfer's leading arm kinematic data, which are obtained from a golfer wearing a motion sensor that is comprised of gyroscopes and accelerometers. Applying the principal component analysis (PCA) to the reference observations of properly performed swings, the PCA components of acceptable swing motion deviations are established. Using these components, the motion deviations in the observations of other swings are examined. Any unacceptable deviations that are detected indicate an improper swing motion. Arbitrarily long observations of an individual player's swing sequences can be included in the analysis. The results obtained for the considered example show an improper swing motion in the early phase of the swing, i.e., the first part of the backswing. An early detection method for improper swing motions that is conducted on an individual basis provides assistance for performance improvement. PMID:23752563

  10. Early improper motion detection in golf swings using wearable motion sensors: the first approach.

    PubMed

    Stančin, Sara; Tomažič, Sašo

    2013-06-10

    This paper presents an analysis of a golf swing to detect improper motion in the early phase of the swing. Led by the desire to achieve a consistent shot outcome, a particular golfer would (in multiple trials) prefer to perform completely identical golf swings. In reality, some deviations from the desired motion are always present due to the comprehensive nature of the swing motion. Swing motion deviations that are not detrimental to performance are acceptable. This analysis is conducted using a golfer's leading arm kinematic data, which are obtained from a golfer wearing a motion sensor that is comprised of gyroscopes and accelerometers. Applying the principal component analysis (PCA) to the reference observations of properly performed swings, the PCA components of acceptable swing motion deviations are established. Using these components, the motion deviations in the observations of other swings are examined. Any unacceptable deviations that are detected indicate an improper swing motion. Arbitrarily long observations of an individual player's swing sequences can be included in the analysis. The results obtained for the considered example show an improper swing motion in the early phase of the swing, i.e., the first part of the backswing. An early detection method for improper swing motions that is conducted on an individual basis provides assistance for performance improvement.
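
    As an illustration of the PCA scheme both versions of this record describe, the sketch below fits components on reference swings and flags a new swing whose residual outside that "acceptable deviation" subspace is unusually large; the data, component count and 95th-percentile threshold are all stand-ins.

      import numpy as np
      from sklearn.decomposition import PCA

      # Each row: one swing's leading-arm kinematic samples, flattened
      # (gyroscope/accelerometer traces); data here are synthetic placeholders.
      rng = np.random.default_rng(1)
      reference = rng.normal(size=(40, 120))          # 40 properly performed swings
      pca = PCA(n_components=5).fit(reference)

      def residual(swing):
          """Reconstruction error outside the acceptable-deviation subspace."""
          approx = pca.inverse_transform(pca.transform(swing[None, :]))
          return float(np.linalg.norm(swing - approx))

      threshold = np.percentile([residual(s) for s in reference], 95)
      new_swing = rng.normal(size=120) + 0.8          # shifted -> improper motion
      print(residual(new_swing) > threshold)          # True flags an improper swing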

  11. Radial gradient and radial deviation radiomic features from pre-surgical CT scans are associated with survival among lung adenocarcinoma patients

    PubMed Central

    Tunali, Ilke; Stringfield, Olya; Guvenis, Albert; Wang, Hua; Liu, Ying; Balagurunathan, Yoganand; Lambin, Philippe; Gillies, Robert J.; Schabath, Matthew B.

    2017-01-01

    The goal of this study was to extract features from radial deviation and radial gradient maps which were derived from thoracic CT scans of patients diagnosed with lung adenocarcinoma and assess whether these features are associated with overall survival. We used two independent cohorts from different institutions for training (n = 61) and test (n = 47) and focused our analyses on features that were non-redundant and highly reproducible. To reduce the number of features and covariates into a single parsimonious model, a backward elimination approach was applied. Out of 48 features that were extracted, 31 were eliminated because they were not reproducible or were redundant. We considered 17 features for statistical analysis and identified a final model containing the two most highly informative features that were associated with lung cancer survival. One of the two features, radial deviation outside-border separation standard deviation, was replicated in a test cohort, exhibiting a statistically significant association with lung cancer survival (multivariable hazard ratio = 0.40; 95% confidence interval 0.17-0.97). Additionally, we explored the biological underpinnings of these features and found that radial gradient and radial deviation image features were significantly associated with semantic radiological features. PMID:29221183

  12. An auxiliary frequency tracking system for general purpose lock-in amplifiers

    NASA Astrophysics Data System (ADS)

    Xie, Kai; Chen, Liuhao; Huang, Anfeng; Zhao, Kai; Zhang, Hanlu

    2018-04-01

    Lock-in amplifiers (LIAs) are designed to measure weak signals submerged by noise. This is achieved with a signal modulator to avoid low-frequency noise and a narrow-band filter to suppress out-of-band noise. In asynchronous measurement, even a slight frequency deviation between the modulator and the reference may lead to measurement error because the filter’s passband is not flat. Because many commercial LIAs are unable to track frequency deviations, in this paper we propose an auxiliary frequency tracking system. We analyze the measurement error caused by the frequency deviation and propose both a tracking method and an auto-tracking system. This approach requires only three basic parameters, which can be obtained from any general purpose LIA via its communications interface, to calculate the frequency deviation from the phase difference. The proposed auxiliary tracking system is designed as a peripheral connected to the LIA’s serial port, removing the need for an additional power supply. The test results verified the effectiveness of the proposed system; the modified commercial LIA (model SR-850) was able to track the frequency deviation and continuous drift. For step frequency deviations, a steady tracking error of less than 0.001% was achieved within three adjustments, and the worst tracking accuracy was still better than 0.1% for a continuous frequency drift. The tracking system can be used to expand the application scope of commercial LIAs, especially for remote measurements in which the modulation clock and the local reference are separated.
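
    The relation underlying this tracking method is that a constant frequency deviation produces a linearly drifting measured phase, so Δf can be recovered from two phase readings taken a known interval apart. The sketch below is a hypothetical reconstruction of that step, not the authors' firmware.

      def estimate_freq_deviation(phase1_deg, phase2_deg, dt):
          """Frequency deviation (Hz) from two LIA phase readings (degrees)
          taken dt seconds apart: a linear phase drift dphi implies
          df = dphi / (360 * dt). Phase is wrapped to the nearest cycle."""
          dphi = (phase2_deg - phase1_deg + 180.0) % 360.0 - 180.0
          return dphi / (360.0 * dt)

      # Example: phase advances 3.6 degrees over 10 s -> 1 mHz deviation.
      print(estimate_freq_deviation(10.0, 13.6, 10.0))   # 0.001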

  13. The impact of individual materials parameters on color temperature reproducibility among phosphor converted LED sources

    NASA Astrophysics Data System (ADS)

    Schweitzer, Susanne; Nemitz, Wolfgang; Sommer, Christian; Hartmann, Paul; Fulmek, Paul; Nicolics, Johann; Pachler, Peter; Hoschopf, Hans; Schrank, Franz; Langer, Gregor; Wenzl, Franz P.

    2014-09-01

    For a systematic approach to improving the white light quality of phosphor converted light-emitting diodes (LEDs) for general lighting applications, it is imperative to get the individual sources of error for color temperature reproducibility under control. In this regard, it is essential to understand how the compositional, optical and materials properties of the color conversion element (CCE), which typically consists of phosphor particles embedded in a transparent matrix material, affect the constancy of a desired color temperature of a white LED source. In this contribution we use an LED assembly consisting of an LED die mounted on a printed circuit board (PCB) by chip-on-board technology and a CCE with a glob-top configuration as a model system, and discuss the impact of potential sources of color temperature deviation among individual devices. Parameters that are investigated include imprecisions in the amount of material deposited, deviations from the target value for the phosphor concentration in the matrix material, deviations from the target value for the particle sizes of the phosphor material, deviations from the target values for the refractive indexes of the phosphor and matrix material, as well as deviations from the reflectivity of the substrate surface. From these studies, some general conclusions can be drawn as to which of these parameters have the largest impact on color deviation and have to be controlled most precisely in a fabrication process with regard to color temperature reproducibility among individual white LED sources.

  14. Note onset deviations as musical piece signatures.

    PubMed

    Serrà, Joan; Özaslan, Tan Hakan; Arcos, Josep Lluis

    2013-01-01

    A competent interpretation of a musical composition presents several non-explicit departures from the written score. Timing variations are perhaps the most important ones: they are fundamental for expressive performance and a key ingredient for conferring a human-like quality to machine-based music renditions. However, the nature of such variations is still an open research question, with diverse theories that indicate a multi-dimensional phenomenon. In the present study, we consider event-shift timing variations and show that sequences of note onset deviations are robust and reliable predictors of the musical piece being played, irrespective of the performer. In fact, our results suggest that only a few consecutive onset deviations are already enough to identify a musical composition with statistically significant accuracy. We consider a mid-size collection of commercial recordings of classical guitar pieces and follow a quantitative approach based on the combination of standard statistical tools and machine learning techniques with the semi-automatic estimation of onset deviations. Besides the reported results, we believe that the considered materials and the methodology followed widen the testing ground for studying musical timing and could open new perspectives in related research fields.

  15. Teaching learning algorithm based optimization of kerf deviations in pulsed Nd:YAG laser cutting of Kevlar-29 composite laminates

    NASA Astrophysics Data System (ADS)

    Gautam, Girish Dutt; Pandey, Arun Kumar

    2018-03-01

    Kevlar is the most popular aramid fiber and is commonly used in technologically advanced industries for various applications. But the precise cutting of Kevlar composite laminates is a difficult task. Conventional cutting methods suffer from various defects such as delamination, burr formation, and fiber pullout with poor surface quality, and the mechanical performance of the laminates is greatly affected by these defects. Laser beam machining may be an alternative to conventional cutting processes due to its non-contact nature and requirement of low specific energy with a higher production rate. But this process also faces some problems, which may be minimized by operating the machine at optimum parameter levels. This research paper examines the effective utilization of an Nd:YAG laser cutting system on difficult-to-cut Kevlar-29 composite laminates. The objective of the proposed work is to find the optimum process parameter settings for obtaining the minimum kerf deviations on both sides. The experiments have been conducted on Kevlar-29 composite laminates of thickness 1.25 mm using a Box-Behnken design with two center points. The experimental data have been used for optimization with the proposed methodology. For the optimization, a teaching-learning-based algorithm has been employed to obtain the minimum kerf deviation at the bottom and top sides. A self-coded MATLAB program has been developed using the proposed methodology, and this program has been used for the optimization. Finally, confirmation tests have been performed to compare the experimental and optimum results obtained by the proposed methodology. The comparison shows that the machining performance in the laser beam cutting process has been remarkably improved through the proposed approach. Finally, the influence of different laser cutting parameters such as lamp current, pulse frequency, pulse width, compressed air pressure and cutting speed on the top and bottom kerf deviations during Nd:YAG laser cutting of Kevlar-29 laminates is discussed.
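
    For readers unfamiliar with the optimizer named here, the sketch below is a bare-bones teaching-learning-based optimization (TLBO) loop in Python (the record's own implementation was in MATLAB); the quadratic test objective merely stands in for the kerf-deviation model fitted from the experiments.

      import numpy as np

      def tlbo(f, bounds, pop=20, iters=100, seed=0):
          """Minimal TLBO sketch: a teacher phase pulls the class toward the
          best solution, a learner phase lets members learn pairwise."""
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds).T
          X = rng.uniform(lo, hi, size=(pop, len(lo)))
          F = np.apply_along_axis(f, 1, X)
          for _ in range(iters):
              teacher = X[F.argmin()]
              TF = rng.integers(1, 3)                  # teaching factor, 1 or 2
              Xt = np.clip(X + rng.random(X.shape) * (teacher - TF * X.mean(0)),
                           lo, hi)
              Ft = np.apply_along_axis(f, 1, Xt)
              better = Ft < F
              X[better], F[better] = Xt[better], Ft[better]
              for i in range(pop):                     # learner phase
                  j = rng.integers(pop)
                  step = (X[i] - X[j]) if F[i] < F[j] else (X[j] - X[i])
                  xi = np.clip(X[i] + rng.random(len(lo)) * step, lo, hi)
                  fi = f(xi)
                  if fi < F[i]:
                      X[i], F[i] = xi, fi
          return X[F.argmin()], F.min()

      best, val = tlbo(lambda x: np.sum(x**2), bounds=[(-5, 5)] * 3)
      print(best, val)   # converges toward the origin on this toy objective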

  16. Got power? A systematic review of sample size adequacy in health professions education research.

    PubMed

    Cook, David A; Hatala, Rose

    2015-03-01

    Many education research studies employ small samples, which in turn lowers statistical power. We re-analyzed the results of a meta-analysis of simulation-based education to determine study power across a range of effect sizes, and the smallest effect that could be plausibly excluded. We systematically searched multiple databases through May 2011, and included all studies evaluating simulation-based education for health professionals in comparison with no intervention or another simulation intervention. Reviewers working in duplicate abstracted information to calculate standardized mean differences (SMDs). We included 897 original research studies. Among the 627 no-intervention-comparison studies the median sample size was 25. Only two studies (0.3%) had ≥80% power to detect a small difference (SMD > 0.2 standard deviations) and 136 (22%) had power to detect a large difference (SMD > 0.8). 110 no-intervention-comparison studies failed to find a statistically significant difference, but none excluded a small difference and only 47 (43%) excluded a large difference. Among the 297 studies comparing alternate simulation approaches the median sample size was 30. Only one study (0.3%) had ≥80% power to detect a small difference and 79 (27%) had power to detect a large difference. Of the 128 studies that did not detect a statistically significant effect, 4 (3%) excluded a small difference and 91 (71%) excluded a large difference. In conclusion, most education research studies are powered only to detect effects of large magnitude. For most studies that do not reach statistical significance, the possibility of large and important differences still exists.
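
    To make the power claim concrete, the snippet below computes two-sample t-test power at the review's median sample size, read here (as an assumption) as 25 participants per arm, for small, medium and large SMDs.

      from statsmodels.stats.power import TTestIndPower

      analysis = TTestIndPower()
      for smd in (0.2, 0.5, 0.8):
          power = analysis.power(effect_size=smd, nobs1=25, alpha=0.05,
                                 ratio=1.0, alternative='two-sided')
          print(f"SMD = {smd}: power = {power:.2f} with 25 per group")
      # Small effects (SMD = 0.2) are detected with power ~0.10; only large
      # effects (SMD = 0.8) approach ~0.79, echoing the review's conclusion.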

  17. Large short-term deviations from dipolar field during the Levantine Iron Age Geomagnetic Anomaly ca. 1050-700 BCE

    NASA Astrophysics Data System (ADS)

    Shaar, R.; Tauxe, L.; Ebert, Y.

    2017-12-01

    Continuous decadal-resolution paleomagnetic data from archaeological and sedimentary sources in the Levant revealed the existence of a local high-field anomaly spanning the first 350 years of the first millennium BCE. This so-called "Levantine Iron Age geomagnetic Anomaly" (LIAA) was characterized by a high average geomagnetic field (virtual axial dipole moments, VADM > 140 ZAm², nearly twice today's field), short decadal-scale geomagnetic spikes (VADM of 160-185 ZAm²), fast directional and intensity variations, and substantial deviation (20°-25°) from the dipole field direction. Similar high field values in the time frame of the LIAA have been observed north and northeast of the Levant: in Eastern Anatolia, Turkmenistan, and Georgia. West of the Levant, in the Balkans, field values at the same time are moderate to low. The overall data suggest that the LIAA is a manifestation of a local positive geomagnetic field anomaly similar in magnitude and scale to the presently active negative South Atlantic Anomaly. In this presentation we review the archaeomagnetic and sedimentary evidence supporting the local anomaly hypothesis and compare these observations with today's IGRF field. We analyze the global data during the first two millennia BCE, which suggest some unexpectedly large deviations from a simple dipolar geomagnetic structure.

  18. Vocal singing by prelingually-deafened children with cochlear implants.

    PubMed

    Xu, Li; Zhou, Ning; Chen, Xiuwu; Li, Yongxin; Schultz, Heather M; Zhao, Xiaoyan; Han, Demin

    2009-09-01

    The coarse pitch information in cochlear implants might hinder the development of singing in prelingually-deafened pediatric users. In the present study, seven prelingually-deafened children with cochlear implants (5.4-12.3 years old) sang one song that was the most familiar to him or her. The control group consisted of 14 normal-hearing children (4.1-8.0 years old). The fundamental frequencies (F0) of each note in the recorded songs were extracted. The following five metrics were computed based on the reference music scores: (1) F0 contour direction of the adjacent notes, (2) F0 compression ratio of the entire song, (3) mean deviation of the normalized F0 across the notes, (4) mean deviation of the pitch intervals, and (5) standard deviation of the note duration differences. Children with cochlear implants showed significantly poorer performance in the pitch-based assessments than the normal-hearing children. No significant differences were seen between the two groups in the rhythm-based measure. Prelingually-deafened children with cochlear implants have significant deficits in singing due to their inability to manipulate pitch in the correct directions and to produce accurate pitch height. Future studies with a large sample size are warranted in order to account for the large variability in singing performance.

  19. Spectral Relative Standard Deviation: A Practical Benchmark in Metabolomics

    EPA Science Inventory

    Metabolomics datasets, by definition, comprise of measurements of large numbers of metabolites. Both technical (analytical) and biological factors will induce variation within these measurements that is not consistent across all metabolites. Consequently, criteria are required to...

  20. Vacuum stability and naturalness in type-II seesaw

    DOE PAGES

    Haba, Naoyuki; Ishida, Hiroyuki; Okada, Nobuchika; ...

    2016-06-16

    Here, we study the vacuum stability and perturbativity conditions in the minimal type-II seesaw model. These conditions give characteristic constraints on the model parameters. The model contains an SU(2)_L triplet scalar field, which could cause a large Higgs mass correction. From the naturalness point of view, heavy Higgs masses should be lower than 350 GeV, which may be testable with the LHC Run-II results. Due to the effects of the triplet scalar field, the branching ratios of the Higgs decays (h → γγ, Zγ) deviate from the standard model, and a large parameter region is excluded by the recent ATLAS and CMS combined analysis of h → γγ. Our result for the signal strength for h → γγ is R_γγ ≲ 1.1, but its deviation is too small to observe at the LHC experiment.

  1. Large deviations in the random sieve

    NASA Astrophysics Data System (ADS)

    Grimmett, Geoffrey

    1997-05-01

    The proportion ρ_k of gaps of length k between square-free numbers is shown to satisfy log ρ_k = -(1 + o(1))(6/π²) k log k as k → ∞. Such asymptotics are consistent with Erdős's challenge to prove that the gap following the square-free number t is smaller than c log t / log log t, for all t and some constant c satisfying c > π²/12. The results of this paper are achieved by studying the probabilities of large deviations in a certain 'random sieve', for which the proportions ρ_k have representations as probabilities. The asymptotic form of ρ_k may be obtained in situations of greater generality, when the squared primes are replaced by an arbitrary sequence (s_r) of relatively prime integers satisfying Σ_r 1/s_r < ∞, subject to two further conditions of regularity on this sequence.

  2. Simple programmable voltage reference for low frequency noise measurements

    NASA Astrophysics Data System (ADS)

    Ivanov, V. E.; Chye, En Un

    2018-05-01

    The paper presents a circuit design for a low-noise voltage reference based on an electric double-layer capacitor, a microcontroller and a general purpose DAC. A large capacitance value (1 F and more) makes it possible to create a low-pass filter with a large time constant, effectively reducing low-frequency noise beyond its bandwidth. By choosing the optimum value of the resistor in the RC filter, one can achieve the best trade-off between the transient time, the deviation of the output voltage from the set point and the minimum noise cut-off frequency. As experiments have shown, the spectral density of the voltage at a frequency of 1 kHz does not exceed 1.2 nV/√Hz, and the maximum deviation of the output voltage from the predetermined value does not exceed 1.4%, depending on the holding time of the previous value. Subsequently, this error is reduced to a constant value and can be compensated.
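
    The trade-off described here follows directly from the filter time constant; the short calculation below, with illustrative resistor values, shows how a 1 F double-layer capacitor couples the noise cut-off frequency to the settling time.

      import numpy as np

      C = 1.0                                  # F, electric double-layer capacitor
      for R in (10.0, 100.0, 1000.0):          # ohm, candidate filter resistors
          tau = R * C                          # s, transient time constant
          fc = 1.0 / (2 * np.pi * tau)         # Hz, noise cut-off frequency
          print(f"R = {R:7.1f} ohm: tau = {tau:8.1f} s, fc = {fc:.2e} Hz")
      # A larger R pushes the noise cut-off lower but lengthens the transient,
      # the balance the authors strike when choosing the filter resistor.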

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aad, G.; Abbott, B.; Abdallah, J.

    The results of a search for gluinos in final states with an isolated electron or muon, multiple jets and large missing transverse momentum using proton–proton collision data at a centre-of-mass energy of √s = 13 TeV are presented. The dataset used was recorded in 2015 by the ATLAS experiment at the Large Hadron Collider and corresponds to an integrated luminosity of 3.2 fb⁻¹. Six signal selections are defined that best exploit the signal characteristics. The data agree with the Standard Model background expectation in all six signal selections, and the largest deviation is a 2.1 standard deviation excess. The results are interpreted in a simplified model where pair-produced gluinos decay via the lightest chargino to the lightest neutralino. In this model, gluinos are excluded up to masses of approximately 1.6 TeV depending on the mass spectrum of the simplified model, thus surpassing the limits of previous searches.

  4. Geometric phase for a two-level system in photonic band gap crystal

    NASA Astrophysics Data System (ADS)

    Berrada, K.

    2018-05-01

    In this work, we investigate the geometric phase (GP) for a qubit system coupled to its own anisotropic and isotropic photonic band gap (PBG) crystal environment, without the Born or Markovian approximation. The qubit frequency affects the GP of the qubit directly through the effect of the PBG environment. The results show that the deviation of the GP depends on the detuning parameter, and this deviation will be large for relatively large detunings of the atom frequency inside the gap with respect to the photonic band edge. For detunings outside the gap, the GP of the qubit changes abruptly to zero, exhibiting a collapse of the GP. Moreover, we find that the GP in the isotropic PBG photonic crystal is more robust than that in the anisotropic PBG under the same conditions. Finally, we explore the relationship between the variation of the GP and the population in terms of the physical parameters.

  5. Large-visual-angle microstructure inspired from quantitative design of Morpho butterflies' lamellae deviation using the FDTD/PSO method.

    PubMed

    Wang, Wanlin; Zhang, Wang; Chen, Weixin; Gu, Jiajun; Liu, Qinglei; Deng, Tao; Zhang, Di

    2013-01-15

    The wide angular range of the treelike structure in Morpho butterfly scales was investigated by finite-difference time-domain (FDTD)/particle-swarm-optimization (PSO) analysis. Using the FDTD method, different parameters of the Morpho butterflies' treelike structure were studied and their contributions to the angular dependence were analyzed. A wide angular range was then realized with the PSO method by quantitatively designing the lamellae deviation (Δy), a crucial parameter for the angular range. The field map of the wide-range reflection over a large area is given to confirm the wide angular range. The tristimulus values and corresponding color coordinates for various viewing directions were calculated to confirm the blue color at different observation angles. The wide angular range realized by the FDTD/PSO method will assist in understanding the scientific principles involved and in designing artificial optical materials.
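
    As a sketch of the optimization half of this approach, the bare-bones particle swarm optimizer below minimizes a toy one-dimensional objective standing in for the FDTD-computed figure of merit as a function of the lamellae deviation; all parameter values are illustrative.

      import numpy as np

      def pso(f, bounds, pop=15, iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
          """Bare-bones particle swarm optimizer over box bounds."""
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds).T
          x = rng.uniform(lo, hi, (pop, len(lo)))
          v = np.zeros_like(x)
          pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
          g = pbest[pval.argmin()].copy()
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              val = np.apply_along_axis(f, 1, x)
              improved = val < pval
              pbest[improved], pval[improved] = x[improved], val[improved]
              g = pbest[pval.argmin()].copy()
          return g, pval.min()

      # Toy stand-in objective: best "lamellae deviation" at 0.3 (arbitrary units).
      best, val = pso(lambda d: (d[0] - 0.3) ** 2, bounds=[(0.0, 1.0)])
      print(best, val)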

  6. Application of FLEET Velocimetry in the NASA Langley 0.3-meter Transonic Cryogenic Tunnel

    NASA Technical Reports Server (NTRS)

    Burns, Ross A.; Danehy, Paul M.; Halls, Benjamin R.; Jiang, Naibo

    2015-01-01

    Femtosecond laser electronic excitation and tagging (FLEET) velocimetry is demonstrated in a large-scale transonic cryogenic wind tunnel. Test conditions include total pressures, total temperatures, and Mach numbers ranging from 15 to 58 psia, 200 to 295 K, and 0.2 to 0.75, respectively. Freestream velocity measurements exhibit accuracies within 1 percent and precisions better than 1 m/s. The measured velocities adhere closely to isentropic flow theory over the domain of temperatures and pressures that were tested. Additional velocity measurements are made within the tunnel boundary layer; virtual trajectories traced out by the FLEET signal are indicative of the characteristic turbulent behavior in this region of the flow, where the unsteadiness increases demonstrably as the wall is approached. Mean velocities taken within the boundary layer are in agreement with theoretical velocity profiles, though the fluctuating velocities exhibit a greater deviation from theoretical predictions.

  7. Fast large-scale clustering of protein structures using Gauss integrals.

    PubMed

    Harder, Tim; Borg, Mikael; Boomsma, Wouter; Røgen, Peter; Hamelryck, Thomas

    2012-02-15

    Clustering protein structures is an important task in structural bioinformatics. De novo structure prediction, for example, often involves a clustering step for finding the best prediction. Other applications include assigning proteins to fold families and analyzing molecular dynamics trajectories. We present Pleiades, a novel approach to clustering protein structures with a rigorous mathematical underpinning. The method approximates clustering based on the root mean square deviation by first mapping structures to Gauss integral vectors, which were introduced by Røgen and co-workers, and subsequently performing K-means clustering. Compared to current methods, Pleiades dramatically improves on the time needed to perform clustering, and can cluster a significantly larger number of structures, while providing state-of-the-art results. The number of low-energy structures generated in a typical folding study, which is on the order of 50,000 structures, can be clustered within seconds to minutes.
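
    The shape of the Pleiades pipeline, mapping each structure to a fixed-length descriptor and then running K-means, can be sketched as below; note that the placeholder featurizer is emphatically not the Gauss integral computation, which encodes writhe-like pairwise crossing integrals over the backbone.

      import numpy as np
      from sklearn.cluster import KMeans

      def descriptor(coords):
          """Placeholder for the Gauss integral vector of a backbone trace;
          here only crude geometric summaries are used for illustration."""
          d = np.diff(coords, axis=0)
          return np.array([np.linalg.norm(d, axis=1).mean(),
                           coords.std(axis=0).mean(),
                           np.linalg.norm(coords[0] - coords[-1])])

      rng = np.random.default_rng(2)
      # Synthetic "structures": 200 random walks of 50 residues in 3-D.
      structures = [np.cumsum(rng.normal(size=(50, 3)), axis=0) for _ in range(200)]
      X = np.array([descriptor(s) for s in structures])
      labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
      print(np.bincount(labels))   # cluster sizes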

  8. Infrared Stark and Zeeman spectroscopy of OH–CO: The entrance channel complex along the OH + CO → trans-HOCO reaction pathway

    DOE PAGES

    Brice, Joseph T.; Liang, Tao; Raston, Paul L.; ...

    2016-09-27

    Here, sequential capture of OH and CO by superfluid helium droplets leads exclusively to the formation of the linear, entrance-channel complex, OH-CO. This species is characterized by infrared laser Stark and Zeeman spectroscopy via measurements of the fundamental OH stretching vibration. Experimental dipole moments are in disagreement with ab initio calculations at the equilibrium geometry, indicating large-amplitude motion on the ground state potential energy surface. Vibrational averaging along the hydroxyl bending coordinate recovers 80% of the observed deviation from the equilibrium dipole moment. Inhomogeneous line broadening in the zero-field spectrum is modeled with an effective Hamiltonian approach that aims to account for the anisotropic molecule-helium interaction potential that arises as the OH-CO complex is displaced from the center of the droplet.

  9. Solid State Chemistry of Clathrate Phases: Crystal Structure, Chemical Bonding and Preparation Routes

    NASA Astrophysics Data System (ADS)

    Baitinger, Michael; Böhme, Bodo; Ormeci, Alim; Grin, Yuri

    Clathrates represent a family of inorganic materials called cage compounds. The key feature of their crystal structures is a three-dimensional (host) framework bearing large cavities (cages) with 20-28 vertices. These polyhedral cages bear, as a rule, guest species. Depending on the formal charge of the framework, clathrates are grouped into anionic, cationic and neutral ones. While the bonding in the framework is of (polar) covalent nature, the guest-host interaction can be ionic, covalent or even van der Waals, depending on the chemical composition of the clathrate. The chemical composition and structural features of the cationic clathrates can be described by the enhanced Zintl concept, whereas the composition of the anionic clathrates often deviates from the Zintl counts, indicating additional atomic interactions in comparison with the ionic-covalent Zintl model. These interactions can be visualized and studied by applying modern quantum chemical approaches such as electron localizability.

  10. A microphysical parameterization of aqSOA and sulfate formation in clouds

    NASA Astrophysics Data System (ADS)

    McVay, Renee; Ervens, Barbara

    2017-07-01

    Sulfate and secondary organic aerosol (cloud aqSOA) can be chemically formed in cloud water. Model implementation of these processes represents a computational burden due to the large number of microphysical and chemical parameters. Chemical mechanisms have been condensed by reducing the number of chemical parameters. Here an alternative is presented to reduce the number of microphysical parameters (number of cloud droplet size classes). In-cloud mass formation is surface and volume dependent due to surface-limited oxidant uptake and/or size-dependent pH. Box and parcel model simulations show that using the effective cloud droplet diameter (proportional to total volume-to-surface ratio) reproduces sulfate and aqSOA formation rates within ≤30% as compared to full droplet distributions; other single diameters lead to much greater deviations. This single-class approach reduces computing time significantly and can be included in models when total liquid water content and effective diameter are available.
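
    The single-diameter reduction described here uses the effective diameter, d_eff = Σ nᵢdᵢ³ / Σ nᵢdᵢ², which is six times the total volume-to-surface ratio of the droplet population; the toy size distribution below is illustrative.

      import numpy as np

      # Droplet size classes: diameters (micrometers) and number concentrations
      # (illustrative values per cm^3).
      d = np.array([6.0, 10.0, 14.0, 18.0, 25.0])
      n = np.array([120.0, 300.0, 240.0, 90.0, 15.0])

      d_eff = (n * d**3).sum() / (n * d**2).sum()   # = 6 * total volume / surface
      print(f"effective diameter = {d_eff:.1f} um")
      # The parameterization then evaluates in-cloud sulfate/aqSOA chemistry for
      # this single diameter instead of looping over every size class.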

  11. Investigation of aircraft landing in variable wind fields

    NASA Technical Reports Server (NTRS)

    Frost, W.; Reddy, K. R.

    1978-01-01

    A digital simulation study is reported of the effects of gusts and wind shear on the approach and landing of aircraft. The gusts and wind shear are primarily those associated with wind fields created by surface wind passing around bluff geometries characteristic of buildings. Also, flight through a simple model of a thunderstorm is investigated. A two-dimensional model of aircraft motion was represented by a set of nonlinear equations which accounted for both spatial and temporal variations of winds. The landings of aircraft with the characteristics of a DC-8 and a DHC-6 were digitally simulated under different wind conditions with fixed and automatic controls. The resulting deviations in touchdown points and the controls that are required to maintain the desired flight path are presented. The presence of large bluff objects, such as buildings, in the flight path is shown to have a considerable effect on aircraft landings.

  12. Trajectory phase transitions and dynamical Lee-Yang zeros of the Glauber-Ising chain.

    PubMed

    Hickey, James M; Flindt, Christian; Garrahan, Juan P

    2013-07-01

    We examine the generating function of the time-integrated energy for the one-dimensional Glauber-Ising model. At long times, the generating function takes on a large-deviation form and the associated cumulant generating function has singularities corresponding to continuous trajectory (or "space-time") phase transitions between paramagnetic trajectories and ferromagnetically or antiferromagnetically ordered trajectories. In the thermodynamic limit, the singularities make up a whole curve of critical points in the complex plane of the counting field. We evaluate analytically the generating function by mapping the generator of the biased dynamics to a non-Hermitian Hamiltonian of an associated quantum spin chain. We relate the trajectory phase transitions to the high-order cumulants of the time-integrated energy which we use to extract the dynamical Lee-Yang zeros of the generating function. This approach offers the possibility to detect continuous trajectory phase transitions from the finite-time behavior of measurable quantities.
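    For short chains the object described here can be checked numerically: the cumulant generating function of a time-integrated observable is the largest eigenvalue of the generator "tilted" by that observable. The brute-force sketch below does this for a small periodic Glauber-Ising chain; the rate convention and the sign of the tilt are standard choices and are assumptions here, as the paper instead works analytically through the non-Hermitian quantum spin chain mapping.

```python
# Brute-force SCGF of the time-integrated energy for a small Glauber-Ising
# chain: build the tilted Markov generator over all 2^N configurations and
# take its largest eigenvalue.
import numpy as np
from itertools import product

def scgf(N=8, beta=1.0, s=0.1, J=1.0):
    gamma = np.tanh(2 * beta * J)            # standard Glauber rate parameter
    configs = [np.array(c) for c in product([-1, 1], repeat=N)]
    index = {tuple(c): i for i, c in enumerate(configs)}
    W = np.zeros((2**N, 2**N))
    for i, c in enumerate(configs):
        E = -J * np.sum(c * np.roll(c, 1))   # periodic-chain energy
        for k in range(N):
            rate = 0.5 * (1.0 - 0.5 * gamma * c[k] * (c[k-1] + c[(k+1) % N]))
            flipped = c.copy(); flipped[k] *= -1
            W[index[tuple(flipped)], i] += rate   # off-diagonal jump rate
            W[i, i] -= rate                       # escape rate on the diagonal
        W[i, i] -= s * E    # tilt: biasing the time-integrated energy
    return np.max(np.linalg.eigvals(W).real)     # theta(s), the SCGF

print(scgf())
```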

  13. Static test induced loads verification beyond elastic limit

    NASA Technical Reports Server (NTRS)

    Verderaime, V.; Harrington, F.

    1996-01-01

    Increasing demands for reliable and least-cost high-performance aerostructures are pressing design analyses, materials, and manufacturing processes to new and narrowly experienced performance and verification technologies. This study assessed the adequacy of current experimental verification of the traditional binding ultimate safety factor which covers rare events in which no statistical design data exist. Because large high-performance structures are inherently very flexible, boundary rotations and deflections under externally applied loads approaching fracture may distort their transmission and unknowingly accept submarginal structures or prematurely fracturing reliable ones. A technique was developed, using measured strains from back-to-back surface mounted gauges, to analyze, define, and monitor induced moments and plane forces through progressive material changes from total-elastic to total-inelastic zones within the structural element cross section. Deviations from specified test loads are identified by the consecutively changing ratios of moment-to-axial load.

  15. Phase transitions in trajectories of a superconducting single-electron transistor coupled to a resonator.

    PubMed

    Genway, Sam; Garrahan, Juan P; Lesanovsky, Igor; Armour, Andrew D

    2012-05-01

    Recent progress in the study of dynamical phase transitions has been made with a large-deviation approach to study trajectories of stochastic jumps using a thermodynamic formalism. We study this method applied to an open quantum system consisting of a superconducting single-electron transistor, near the Josephson quasiparticle resonance, coupled to a resonator. We find that the dynamical behavior shown in rare trajectories can be rich even when the mean dynamical activity is small, and thus the formalism gives insights into the form of fluctuations. The structure of the dynamical phase diagram found from the quantum-jump trajectories of the resonator is studied, and we see that sharp transitions in the dynamical activity may be related to the appearance and disappearance of bistabilities in the state of the resonator as system parameters are changed. We also demonstrate that for a fast resonator, the trajectories of quasiparticles are similar to the resonator trajectories.

  16. Integrating hidden Markov model and PRAAT: a toolbox for robust automatic speech transcription

    NASA Astrophysics Data System (ADS)

    Kabir, A.; Barker, J.; Giurgiu, M.

    2010-09-01

    An automatic time-aligned phone transcription toolbox for English speech corpora has been developed. The toolbox is particularly useful for generating robust automatic transcriptions and can produce phone-level transcriptions using speaker-independent as well as speaker-dependent models without manual intervention. The system is based on the standard Hidden Markov Model (HMM) approach and was successfully tested on a large audiovisual speech corpus, the GRID corpus. One of the most powerful features of the toolbox is the increased flexibility in speech processing: the speech community can import the automatic transcription generated by the HMM Toolkit (HTK) into a popular transcription software, PRAAT, and vice versa. The toolbox has been evaluated through statistical analysis on GRID data, which shows that the automatic transcription deviates by an average of 20 ms with respect to manual transcription.
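    The HTK-to-PRAAT direction of such a bridge can be sketched as follows. This is not the toolbox's code: it assumes standard HTK label files (one "start end phone" line per segment, times in 100 ns units) and writes a minimal single-tier TextGrid in PRAAT's long text format; file names are examples.

```python
# Convert an HTK label file into a minimal single-tier PRAAT TextGrid.
def htk_lab_to_textgrid(lab_path, textgrid_path):
    segs = []
    with open(lab_path) as f:
        for line in f:
            start, end, phone = line.split()[:3]
            segs.append((int(start) * 1e-7, int(end) * 1e-7, phone))  # 100 ns -> s
    xmax = segs[-1][1]
    with open(textgrid_path, "w") as g:
        g.write('File type = "ooTextFile"\nObject class = "TextGrid"\n\n')
        g.write(f"xmin = 0\nxmax = {xmax}\ntiers? <exists>\nsize = 1\n")
        g.write("item []:\n    item [1]:\n")
        g.write('        class = "IntervalTier"\n        name = "phones"\n')
        g.write(f"        xmin = 0\n        xmax = {xmax}\n")
        g.write(f"        intervals: size = {len(segs)}\n")
        for i, (x0, x1, phone) in enumerate(segs, 1):
            g.write(f"        intervals [{i}]:\n")
            g.write(f"            xmin = {x0}\n            xmax = {x1}\n")
            g.write(f'            text = "{phone}"\n')

# htk_lab_to_textgrid("utt1.lab", "utt1.TextGrid")
```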

  17. Analysis of condensation on a horizontal cylinder with unknown wall temperature and comparison with the Nusselt model of film condensation

    NASA Technical Reports Server (NTRS)

    Bahrami, Parviz A.

    1996-01-01

    Theoretical analysis and numerical computations are performed to set forth a new model of film condensation on a horizontal cylinder. The model is more general than the well-known Nusselt model of film condensation and is designed to encompass all essential features of the Nusselt model. It is shown that a single parameter, constructed explicitly and without specification of the cylinder wall temperature, determines the degree of departure from the Nusselt model, which assumes a known and uniform wall temperature. It is also shown that the Nusselt model is recovered for very small, as well as very large, values of this parameter. In both limiting cases the cylinder wall temperature assumes a uniform distribution and the Nusselt model is approached. The maximum deviations between the two models are rather small for cases which are representative of cylinder dimensions, materials and conditions encountered in practice.

  18. A unifying theory for top-heavy ecosystem structure in the ocean.

    PubMed

    Woodson, C Brock; Schramski, John R; Joye, Samantha B

    2018-01-02

    Size generally dictates metabolic requirements, trophic level, and consequently, ecosystem structure, where inefficient energy transfer leads to bottom-heavy ecosystem structure and biomass decreases as individual size (or trophic level) increases. However, many animals deviate from simple size-based predictions by either adopting generalist predatory behavior, or feeding lower in the trophic web than predicted from their size. Here we show that generalist predatory behavior and lower trophic feeding at large body size increase overall biomass and shift ecosystems from a bottom-heavy pyramid to a top-heavy hourglass shape, with the most biomass accounted for by the largest animals. These effects could be especially dramatic in the ocean, where primary producers are the smallest components of the ecosystem. This approach makes it possible to explore and predict, in the past and in the future, the structure of ocean ecosystems without biomass extraction and other impacts.

  19. Curvature-driven bubbles or droplets on the spiral surface

    NASA Astrophysics Data System (ADS)

    Li, Shanpeng; Liu, Jianlin; Hou, Jian

    2016-11-01

    Directional motion of droplets or bubbles can often be observed in nature and our daily life, and this phenomenon holds great potential in many engineering areas. The study shows that droplets or bubbles can be driven to migrate perpetually on some special substrates, such as the Archimedean spiral, the logarithmic spiral and a cantilever sheet in large deflection. It is found that a bubble approaches or deviates from the position of highest curvature of the substrate when it is on the concave or convex side, respectively. This fact is helpful in explaining the water-repelling capability of Nepenthes alata. Based on the force and energy analysis, the mechanism of the bubble migration is well addressed. These findings pave a new way to accurately manipulate droplet or bubble movement, which brings inspiration to the design of microfluidic and water harvesting devices, as well as oil displacement and ore filtration.

  20. Molecular dynamics simulation of premelting and melting phase transitions in stoichiometric uranium dioxide

    NASA Astrophysics Data System (ADS)

    Yakub, Eugene; Ronchi, Claudio; Staicu, Dragos

    2007-09-01

    Results of molecular dynamics (MD) simulation of UO2 in a wide temperature range are presented and discussed. A new approach to the calibration of a partly ionic Busing-Ida-type model is proposed. A potential parameter set is obtained that reproduces the experimental density of solid UO2 in a wide range of temperatures. A conventional simulation of high-temperature stoichiometric UO2 on large MD cells, based on a novel fast method of computation of Coulomb forces, reveals characteristic features of a premelting λ transition at a temperature near that observed experimentally (Tλ = 2670 K). A strong deviation from the Arrhenius behavior of the oxygen self-diffusion coefficient was found in the vicinity of the transition point. Predictions for liquid UO2, based on the same potential parameter set, are in good agreement with existing experimental data and theoretical calculations.
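    A Busing-Ida-type pair potential of the kind being calibrated combines a Coulomb term with effective (partial) charges, Born-Mayer repulsion, and a dispersion term. The sketch below uses placeholder parameter values, not the calibrated UO2 set from the paper:

```python
# Partly ionic Busing-Ida-type pair potential (energies in eV, distances in
# Angstrom). Parameter values in the example call are placeholders.
import numpy as np

KE = 14.399645  # e^2 / (4*pi*eps0) in eV*Angstrom

def busing_ida(r, zi, zj, ai, aj, bi, bj, ci, cj, f0=0.0433):
    coulomb = KE * zi * zj / r                                  # partial charges
    repulsion = f0 * (bi + bj) * np.exp((ai + aj - r) / (bi + bj))  # Born-Mayer
    dispersion = -ci * cj / r**6                                # van der Waals
    return coulomb + repulsion + dispersion

# e.g. a cation-anion pair with ~60% ionicity (illustrative parameters only):
print(busing_ida(2.37, zi=+2.4, zj=-1.2, ai=1.30, aj=1.93,
                 bi=0.16, bj=0.17, ci=0.0, cj=20.0))
```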

  1. Efficient extraction strategies of tea (Camellia sinensis) biomolecules.

    PubMed

    Banerjee, Satarupa; Chatterjee, Jyotirmoy

    2015-06-01

    Tea is a popular daily beverage worldwide. Modulation and modification of its basic components, such as catechins, alkaloids, proteins and carbohydrates, during fermentation or extraction changes the organoleptic, gustatory and medicinal properties of tea. These processes can increase or decrease the yield of desired components. Considering the varied impacts of parameters in tea production, storage and processing that affect the yield, extraction of tea biomolecules at optimized conditions is challenging. Implementation of technological advancements in green chemistry approaches can minimize this deviation while retaining maximum qualitative properties in an environmentally friendly way. Existing extraction processes for tea, with their optimization parameters, are discussed in this paper, including their prospects and limitations. This exhaustive review of various extraction parameters, the decaffeination process of tea, and large-scale cost-effective isolation of tea components with the aid of modern technology can assist people in choosing extraction conditions for tea according to necessity.

  2. Network bandwidth utilization forecast model on high bandwidth networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, Wucherl; Sim, Alex

    With the increasing number of geographically distributed scientific collaborations and the scale of the data size growth, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and scheduling of data movements on high-bandwidth networks to accommodate the ever increasing data volume of large-scale scientific data applications. A univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with a traditional approach such as the Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt network usage changes. The accuracy of the forecast model is within the standard deviation of the monitored measurements.
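    A minimal sketch of the STL + ARIMA scheme, assuming 5-minute SNMP samples with a daily period of 288 points (illustrative choices, not taken from the paper): deseasonalize with STL, model the remainder with ARIMA, and add the last observed seasonal cycle back onto the forecast.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL
from statsmodels.tsa.arima.model import ARIMA

def forecast_utilization(series: pd.Series, period: int = 288, steps: int = 288):
    stl = STL(series, period=period).fit()          # split off the daily cycle
    deseasonalized = series - stl.seasonal
    model = ARIMA(deseasonalized, order=(1, 1, 1)).fit()
    trend_forecast = model.forecast(steps=steps)
    # reuse the last observed seasonal cycle over the forecast horizon
    seasonal_cycle = stl.seasonal[-period:].to_numpy()
    reps = int(np.ceil(steps / period))
    seasonal_forecast = np.tile(seasonal_cycle, reps)[:steps]
    return trend_forecast + seasonal_forecast
```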

  4. In-situ health monitoring of piezoelectric sensors using electromechanical impedance: A numerical perspective

    NASA Astrophysics Data System (ADS)

    Bilgunde, Prathamesh N.; Bond, Leonard J.

    2018-04-01

    The current work presents a numerical investigation to assess the in-situ health of piezoelectric sensors deployed for structural health monitoring (SHM) of large civil, aircraft and automotive structures. The methodology proposed in this work models the inhomogeneities in the adhesive with which the sensor is typically bonded to the structure for SHM. It was found that weakening of the bond state causes a reduction in the resonance frequency of the structure, which eventually approaches the resonance characteristics of a piezoelectric material under traction-free boundary conditions. These changes in the resonance spectrum are further quantified using a root mean square deviation-based damage index. Results demonstrate that the electromechanical impedance method can be used to monitor the structural integrity of the sensor bonded to the host structure. This cost-effective method can potentially reduce misinterpretation of SHM data for critical infrastructures.
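    A root mean square deviation-based damage index of the kind mentioned compares the measured impedance signature of the bonded sensor against a pristine baseline on the same frequency grid; a minimal sketch (typically applied to the real part of the impedance):

```python
import numpy as np

def rmsd_damage_index(z_baseline, z_current):
    """RMSD index (%) between two impedance signatures sampled on the same
    frequency grid; larger values indicate greater bond degradation."""
    z0 = np.asarray(z_baseline, dtype=float)
    z1 = np.asarray(z_current, dtype=float)
    return 100.0 * np.sqrt(np.sum((z1 - z0) ** 2) / np.sum(z0 ** 2))
```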

  5. Lunar brightness temperature from Microwave Radiometers data of Chang'E-1 and Chang'E-2

    NASA Astrophysics Data System (ADS)

    Feng, J.-Q.; Su, Y.; Zheng, L.; Liu, J.-J.

    2011-10-01

    Both Chinese lunar orbiters, Chang'E-1 and Chang'E-2, carried Microwave Radiometers (MRM) to obtain the brightness temperature of the Moon. Based on the different characteristics of these two MRMs, modified brightness temperature algorithms and specific ground calibration parameters were proposed, and the corresponding lunar global brightness temperature maps were produced. In order to analyze the data distributions of these maps, a normalization method was applied to the data series. The second channel data with large deviations were rectified, and the reasons for the deviations were analyzed.

  6. Computation of rare transitions in the barotropic quasi-geostrophic equations

    NASA Astrophysics Data System (ADS)

    Laurie, Jason; Bouchet, Freddy

    2015-01-01

    We investigate the theoretical and numerical computation of rare transitions in simple geophysical turbulent models. We consider the barotropic quasi-geostrophic and two-dimensional Navier-Stokes equations in regimes where bistability between two coexisting large-scale attractors exists. By means of large deviations and instanton theory, with the use of an Onsager-Machlup path integral formalism for the transition probability, we show how one can directly compute the most probable transition path between two coexisting attractors, analytically in an equilibrium (Langevin) framework and numerically otherwise. We adapt a class of numerical optimization algorithms known as minimum action methods to simple geophysical turbulent models. We show that by numerically minimizing an appropriate action functional in a large deviation limit, one can predict the most likely transition path for a rare transition between two states. By considering examples where theoretical predictions can be made, we show that the minimum action method successfully predicts the most likely transition path. Finally, we discuss the application and extension of such numerical optimization schemes to the computation of rare transitions observed in direct numerical simulations and experiments and to other, more complex, turbulent systems.
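    The core of a minimum action method fits in a few lines for a toy system. The sketch below (an illustration on the double well b(x) = x - x³, not the geophysical models of the paper) discretizes a path between the two attractors and minimizes the Freidlin-Wentzell action numerically:

```python
import numpy as np
from scipy.optimize import minimize

def b(x):                        # deterministic drift with attractors at +/-1
    return x - x**3

def action(interior, x0=-1.0, x1=1.0, T=20.0, n=200):
    path = np.concatenate(([x0], interior, [x1]))
    dt = T / (n - 1)
    xdot = np.diff(path) / dt
    xmid = 0.5 * (path[1:] + path[:-1])
    # Freidlin-Wentzell action: S = (1/4) * integral of (xdot - b(x))^2 dt
    return 0.25 * np.sum((xdot - b(xmid)) ** 2) * dt

n = 200
guess = np.linspace(-1.0, 1.0, n)[1:-1]        # straight-line initial path
res = minimize(action, guess, method="L-BFGS-B")
print("minimized action:", res.fun)            # cost of the rare transition
```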

  7. Impact of buildings on surface solar radiation over urban Beijing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Bin; Liou, Kuo-Nan; Gu, Yu

    The rugged surface of an urban area due to varying buildings can interact with solar beams and affect both the magnitude and spatiotemporal distribution of surface solar fluxes. Here we systematically examine the impact of buildings on downward surface solar fluxes over urban Beijing by using a 3-D radiation parameterization that accounts for 3-D building structures vs. the conventional plane-parallel scheme. We find that the resulting downward surface solar flux deviations between the 3-D and the plane-parallel schemes are generally ±1–10 W m⁻² at 800 m grid resolution and within ±1 W m⁻² at 4 km resolution. Pairs of positive–negative flux deviations on different sides of buildings are resolved at 800 m resolution, while they offset each other at 4 km resolution. Flux deviations from the unobstructed horizontal surface at 4 km resolution are positive around noon but negative in the early morning and late afternoon. The corresponding deviations at 800 m resolution, in contrast, show diurnal variations that are strongly dependent on the location of the grids relative to the buildings. Both the magnitude and spatiotemporal variations of flux deviations are largely dominated by the direct flux. Furthermore, we find that flux deviations can potentially be an order of magnitude larger by using a finer grid resolution. Atmospheric aerosols can reduce the magnitude of downward surface solar flux deviations by 10–65%, while the surface albedo generally has a rather moderate impact on flux deviations. The results imply that the effect of buildings on downward surface solar fluxes may not be critically significant in mesoscale atmospheric models with a grid resolution of 4 km or coarser. However, the effect can play a crucial role in meso-urban atmospheric models as well as microscale urban dispersion models with resolutions of 1 m to 1 km.

  8. Severity of Illness Scores May Misclassify Critically Ill Obese Patients.

    PubMed

    Deliberato, Rodrigo Octávio; Ko, Stephanie; Komorowski, Matthieu; Armengol de La Hoz, M A; Frushicheva, Maria P; Raffa, Jesse D; Johnson, Alistair E W; Celi, Leo Anthony; Stone, David J

    2018-03-01

    Severity of illness scores rest on the assumption that patients have normal physiologic values at baseline and that patients with similar severity of illness scores have the same degree of deviation from their usual state. Prior studies have reported differences in baseline physiology, including laboratory markers, between obese and normal weight individuals, but these differences have not been analyzed in the ICU. We compared deviation from baseline of pertinent ICU laboratory test results between obese and normal weight patients, adjusted for the severity of illness. Retrospective cohort study in a large ICU database. Tertiary teaching hospital. Obese and normal weight patients who had laboratory results documented between 3 days and 1 year prior to hospital admission. None. Seven hundred sixty-nine normal weight patients were compared with 1,258 obese patients. After adjusting for the severity of illness score, age, comorbidity index, baseline laboratory result, and ICU type, the following deviations were found to be statistically significant: WBC 0.80 (95% CI, 0.27-1.33) × 10⁹/L; p = 0.003; log (blood urea nitrogen) 0.01 (95% CI, 0.00-0.02); p = 0.014; log (creatinine) 0.03 (95% CI, 0.02-0.05), p < 0.001; with all deviations higher in obese patients. A logistic regression analysis suggested that after adjusting for age and severity of illness at least one of these deviations had a statistically significant effect on hospital mortality (p = 0.009). Among patients with the same severity of illness score, we detected clinically small but significant deviations in WBC, creatinine, and blood urea nitrogen from baseline in obese compared with normal weight patients. These small deviations are likely to be increasingly important as bigger data are analyzed in increasingly precise ways. Recognition of the extent to which all critically ill patients may deviate from their own baseline may improve the objectivity, precision, and generalizability of ICU mortality prediction and severity adjustment models.

  9. Methodological Issues in Meta-Analyzing Standard Deviations: Comment on Bond and DePaulo (2008)

    ERIC Educational Resources Information Center

    Pigott, Therese D.; Wu, Meng-Jia

    2008-01-01

    In this comment on C. F. Bond and B. M. DePaulo, the authors raise methodological concerns about the approach used to analyze the data. The authors suggest further refinement of the procedures used, and they compare the approach taken by Bond and DePaulo with standard methods for meta-analysis. (Contains 1 table and 2 figures.)

  10. Diode‐based transmission detector for IMRT delivery monitoring: a validation study

    PubMed Central

    Li, Taoran; Wu, Q. Jackie; Matzen, Thomas; Yin, Fang‐Fang

    2016-01-01

    The purpose of this work was to evaluate the potential of a new transmission detector for real‐time quality assurance of dynamic‐MLC‐based radiotherapy. The accuracy of detecting dose variation and static/dynamic MLC position deviations was measured, as well as the impact of the device on the radiation field (surface dose, transmission). Measured dose variations agreed with the known variations within 0.3%. The measurement of static and dynamic MLC position deviations matched the known deviations with high accuracy (0.7–1.2 mm). The absorption of the device was minimal (∼ 1%). The increased surface dose was small (1%–9%) but, when added to existing collimator scatter effects, could become significant at large field sizes (≥30×30 cm²). Overall the accuracy and speed of the device show good potential for real‐time quality assurance. PACS number(s): 87.55.Qr PMID:27685115

  11. Determination of the optimal level for combining area and yield estimates

    NASA Technical Reports Server (NTRS)

    Bauer, M. E. (Principal Investigator); Hixson, M. M.; Jobusch, C. D.

    1981-01-01

    Several levels of obtaining both area and yield estimates of corn and soybeans in Iowa were considered: county, refined strata, refined/split strata, crop reporting district, and state. Using the CCEA model form and smoothed weather data, regression coefficients at each level were derived to compute yield and its variance. Variances were also computed at the stratum level. The variance of the yield estimates was largest at the state and smallest at the county level for both crops. The refined strata had somewhat larger variances than those associated with the refined/split strata and CRD. For production estimates, the difference in standard deviations among levels was not large for corn, but for soybeans the standard deviation at the state level was more than 50% greater than for the other levels. The refined strata had the smallest standard deviations. The county level was not considered in the evaluation of production estimates due to the lack of county area variances.

  12. Effects of vegetation canopy structure on remotely sensed canopy temperatures. [inferring plant water stress and yield

    NASA Technical Reports Server (NTRS)

    Kimes, D. S.

    1979-01-01

    The effects of vegetation canopy structure on thermal infrared sensor response must be understood before vegetation surface temperatures of canopies with low percent ground cover can be accurately inferred. The response of a sensor is a function of vegetation geometric structure, the vertical surface temperature distribution of the canopy components, and sensor view angle. Large deviations between the nadir sensor effective radiant temperature (ERT) and vegetation ERT for a soybean canopy were observed throughout the growing season. The nadir sensor ERT of a soybean canopy with 35 percent ground cover deviated from the vegetation ERT by as much as 11 C during the mid-day. These deviations were quantitatively explained as a function of canopy structure and soil temperature. Remote sensing techniques which determine the vegetation canopy temperature(s) from the sensor response need to be studied.

  13. Uncertainty of large-area estimates of indicators of forest structural gamma diversity: A study based on national forest inventory data

    Treesearch

    Susanne Winter; Andreas Böck; Ronald E. McRoberts

    2012-01-01

    Tree diameter and height are commonly measured forest structural variables, and indicators based on them are candidates for assessing forest diversity. We conducted our study on the uncertainty of estimates for mostly large geographic scales for four indicators of forest structural gamma diversity: mean tree diameter, mean tree height, and standard deviations of tree...

  14. Global Behavior in Large Scale Systems

    DTIC Science & Technology

    2013-12-05

    Abstract: This research attained two main achievements: 1) … microscopic random interactions among the agents. Introduction: In this research we considered two main problems: 1) large deviation error performance in …

  15. Sub-Scale Analysis of New Large Aircraft Pool Fire-Suppression

    DTIC Science & Technology

    2016-01-01

    Discrete ordinates radiation and a single-step Khan and Greeves soot model provided radiation and soot interaction. Agent spray dynamics were … Notable differences observed showed a modeled increase in the mockup surface heat-up rate as well as a modeled decreased rate of soot production … Large deviation between sensors was due to sensor alignment challenges and asymmetric fuel surface ignition.

  16. Measuring Diameters Of Large Vessels

    NASA Technical Reports Server (NTRS)

    Currie, James R.; Kissel, Ralph R.; Oliver, Charles E.; Smith, Earnest C.; Redmon, John W., Sr.; Wallace, Charles C.; Swanson, Charles P.

    1990-01-01

    Computerized apparatus produces accurate results quickly. Apparatus measures diameter of tank or other large cylindrical vessel, without prior knowledge of exact location of cylindrical axis. Produces plot of inner circumference, estimate of true center of vessel, data on radius, diameter of best-fit circle, and negative and positive deviations of radius from circle at closely spaced points on circumference. Eliminates need for time-consuming and error-prone manual measurements.
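    The best-fit-circle step can be illustrated with the algebraic (Kasa) least-squares fit, which yields center and radius directly from measured circumference points without prior knowledge of the axis location; this is a generic sketch, not the flight-center implementation:

```python
import numpy as np

def fit_circle(x, y):
    """Least-squares (Kasa) circle through points (x, y): returns (xc, yc, R).
    Solves 2*xc*x + 2*yc*y + c = x^2 + y^2, with R^2 = c + xc^2 + yc^2."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (xc, yc, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    R = np.sqrt(c + xc**2 + yc**2)
    return xc, yc, R

def radial_deviations(x, y):
    # signed deviation of each measured point from the best-fit circle
    xc, yc, R = fit_circle(x, y)
    return np.hypot(x - xc, y - yc) - R
```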

  17. Data assimilation in the low noise regime

    NASA Astrophysics Data System (ADS)

    Weare, J.; Vanden-Eijnden, E.

    2012-12-01

    On-line data assimilation techniques such as ensemble Kalman filters and particle filters tend to lose accuracy dramatically when presented with an unlikely observation. Such an observation may be caused by an unusually large measurement error or may reflect a rare fluctuation in the dynamics of the system. Over a long enough span of time it becomes likely that one or several of these events will occur. In some cases they are signatures of the most interesting features of the underlying system, and their prediction becomes the primary focus of the data assimilation procedure. The Kuroshio, or Black Current, that runs along the eastern coast of Japan is an example of just such a system. It undergoes infrequent but dramatic changes of state between a small meander, during which the current remains close to the coast of Japan, and a large meander, during which the current bulges away from the coast. Because of the important role that the Kuroshio plays in distributing heat and salinity in the surrounding region, prediction of these transitions is of acute interest. Here we focus on a regime in which both the stochastic forcing on the system and the observational noise are small. In this setting large deviation theory can be used to understand why standard filtering methods fail and to guide the design of more effective data assimilation techniques. Motivated by our large deviations analysis, we propose several data assimilation strategies capable of efficiently handling rare events such as the transitions of the Kuroshio. These techniques are tested on a model of the Kuroshio and shown to perform much better than standard filtering methods: with observation sequences taken directly from the model's transition events from the small meander to the large meander, and with algorithm parameters chosen so that computational costs are comparable, the two new algorithms motivated by the large deviations analysis maintain accuracy (and smaller scale resolution) throughout the transition, while a standard particle filter and an ensemble Kalman filter fail to track it accurately.

  18. An integrated phenomic approach to multivariate allelic association

    PubMed Central

    Medland, Sarah Elizabeth; Neale, Michael Churton

    2010-01-01

    The increased feasibility of genome-wide association has resulted in association becoming the primary method used to localize genetic variants that cause phenotypic variation. Much attention has been focused on the vast multiple testing problems arising from analyzing large numbers of single nucleotide polymorphisms. However, the inflation of experiment-wise type I error rates through testing numerous phenotypes has received less attention. Multivariate analyses can be used to detect both pleiotropic effects that influence a latent common factor, and monotropic effects that operate at variable-specific levels, whilst controlling for non-independence between phenotypes. In this study, we present a maximum likelihood approach which combines both latent and variable-specific tests and which may be used with either individual or family data. Simulation results indicate that in the presence of factor-level association, the combined multivariate (CMV) analysis approach performs well with a minimal loss of power as compared with a univariate analysis of a factor or sum score (SS). As the deviation between the pattern of allelic effects and the factor loadings increases, the power of univariate analyses of both factor and SSs decreases dramatically, whereas the power of the CMV approach is maintained. We show the utility of the approach by examining the association between dopamine receptor D2 TaqIA and the initiation of marijuana, tranquilizers and stimulants in data from the Add Health Study. Perl scripts that take ped and dat files as input and produce Mx scripts and data for running the CMV approach can be downloaded from www.vipbg.vcu.edu/~sarahme/WriteMx. PMID:19707246

  19. Probability distributions of linear statistics in chaotic cavities and associated phase transitions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vivo, Pierpaolo; Majumdar, Satya N.; Bohigas, Oriol

    2010-03-01

    We establish large deviation formulas for linear statistics on the N transmission eigenvalues (T_i) of a chaotic cavity, in the framework of random matrix theory. Given any linear statistic of interest A = Σ_{i=1..N} a(T_i), the probability distribution P_A(A, N) of A generically satisfies the large deviation formula lim_{N→∞} [−2 log P_A(Nx, N) / (βN²)] = Ψ_A(x), where Ψ_A(x) is a rate function that we compute explicitly in many cases (conductance, shot noise, and moments) and β corresponds to the different symmetry classes. Using these large deviation expressions, it is possible to recover easily known results and to produce new formulas, such as a closed-form expression for v(n) = lim_{N→∞} var(T_n) (where T_n = Σ_i T_i^n) for arbitrary integer n. The universal limit v* = lim_{n→∞} v(n) = 1/(2πβ) is also computed exactly. The distributions display a central Gaussian region flanked on both sides by non-Gaussian tails. At the junction of the two regimes, weakly nonanalytical points appear, a direct consequence of phase transitions in an associated Coulomb gas problem. Numerical checks are also provided, which are in full agreement with our asymptotic results in both real and Laplace space even for moderately small N. Part of the results have been announced by Vivo et al. [Phys. Rev. Lett. 101, 216809 (2008)].

  20. Effects of Aftershock Declustering in Risk Modeling: Case Study of a Subduction Sequence in Mexico

    NASA Astrophysics Data System (ADS)

    Kane, D. L.; Nyst, M.

    2014-12-01

    Earthquake hazard and risk models often assume that earthquake rates can be represented by a stationary Poisson process, and that aftershocks observed in historical seismicity catalogs represent a deviation from stationarity that must be corrected before earthquake rates are estimated. Algorithms for classifying individual earthquakes as independent mainshocks or as aftershocks vary widely, and analysis of a single catalog can produce considerably different earthquake rates depending on the declustering method implemented. As these rates are propagated through hazard and risk models, the modeled results will vary due to the assumptions implied by these choices. In particular, the removal of large aftershocks following a mainshock may lead to an underestimation of the rate of damaging earthquakes and potential damage due to a large aftershock may be excluded from the model. We present a case study based on the 1907–1911 sequence of nine 6.9 ≤ Mw ≤ 7.9 earthquakes along the Cocos–North American plate subduction boundary in Mexico in order to illustrate the variability in risk under various declustering approaches. Previous studies have suggested that subduction zone earthquakes in Mexico tend to occur in clusters, and this particular sequence includes events that would be labeled as aftershocks in some declustering approaches yet are large enough to produce significant damage. We model the ground motion for each event, determine damage ratios using modern exposure data, and then compare the variability in the modeled damage from using the full catalog or one of several declustered catalogs containing only "independent" events. We also consider the effects of progressive damage caused by each subsequent event and how this might increase or decrease the total losses expected from this sequence.

  1. SU-E-T-272: Direct Verification of a Treatment Planning System Megavoltage Linac Beam Photon Spectra Models, and Analysis of the Effects On Patient Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leheta, D; Shvydka, D; Parsai, E

    2015-06-15

    Purpose: For photon dose calculation, the Philips Pinnacle Treatment Planning System (TPS) uses a collapsed cone convolution algorithm, which relies on the energy spectrum of the beam in computing the scatter component. The spectrum is modeled based on the Linac's standard commissioning data and typically is not independently verified. We explored a methodology of using transmission measurements in combination with regularization data processing to unfold Linac spectra. The measured spectra were compared to those modeled by the TPS, and the effect on patient plans was evaluated. Methods: Transmission measurements were conducted in narrow-beam geometry using a standard Farmer ionization chamber. Two attenuating materials and two build-up caps, having different atomic numbers, served to enhance discrimination between absorption of the low- and high-energy portions of the spectra, thus improving the accuracy of the results. The data were analyzed using a regularization technique implemented through spreadsheet-based calculations. Results: The unfolded spectra were found to deviate from the TPS beam models. The effect of such deviations on treatment planning was evaluated for patient plans through dose distribution calculations with either TPS-modeled or measured energy spectra. The differences were reviewed through comparison of isodose distributions, and quantified based on maximum dose values for critical structures. While in most cases no drastic differences in the calculated doses were observed, plans with deviations of 4 to 8% in the maximum dose values for critical structures were discovered. The anatomical sites with large scatter contributions are the most vulnerable to inaccuracies in the modeled spectrum. Conclusion: An independent check of the TPS model spectrum is highly desirable and should be included as part of commissioning of a new Linac. The effect is particularly important for dose calculations in high heterogeneity regions. The developed approach makes acquisition of megavoltage Linac beam spectra achievable in a typical radiation oncology clinic.
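    Spectrum unfolding from transmission data is, schematically, a regularized linear inversion. The sketch below (not the authors' spreadsheet implementation) models transmission as a sum of monoenergetic exponentials and solves a smoothed non-negative least-squares problem; the attenuation coefficients are placeholders for tabulated values:

```python
import numpy as np
from scipy.optimize import nnls

def unfold_spectrum(thicknesses, transmission, mu, lam=1e-2):
    """thicknesses: (m,) attenuator thicknesses; transmission: (m,) measured
    relative signals; mu: (n,) attenuation coefficients for n energy bins
    (in a real analysis, tabulated values, e.g. from NIST)."""
    A = np.exp(-np.outer(thicknesses, mu))        # forward model, shape (m, n)
    n = len(mu)
    L = np.diff(np.eye(n), n=2, axis=0)           # second-difference smoother
    A_aug = np.vstack([A, np.sqrt(lam) * L])
    b_aug = np.concatenate([transmission, np.zeros(L.shape[0])])
    phi, _ = nnls(A_aug, b_aug)                   # non-negative fluence weights
    return phi / phi.sum()                        # normalized spectrum
```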

  2. Gambling as a teaching aid in the introductory physics laboratory

    NASA Astrophysics Data System (ADS)

    Horodynski-Matsushigue, L. B.; Pascholati, P. R.; Vanin, V. R.; Dias, J. F.; Yoneama, M.-L.; Siqueira, P. T. D.; Amaku, M.; Duarte, J. L. M.

    1998-07-01

    Dice throwing is used to illustrate relevant concepts of the statistical theory of uncertainties, in particular the meaning of a limiting distribution, the standard deviation, and the standard deviation of the mean. It is an important part of a sequence of specially programmed laboratory activities developed for freshmen at the Institute of Physics of the University of São Paulo. It is shown how this activity is employed within a constructive teaching approach, which aims at a growing understanding of measuring processes and of the fundamentals of correct statistical handling of experimental data.
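    The activity translates directly into a short simulation, a sketch of which is given below: the spread of single throws stays near √(35/12) ≈ 1.71, while the spread of n-throw means shrinks as 1/√n.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 10, 10000
throws = rng.integers(1, 7, size=(trials, n))   # uniform limiting distribution

single_sd = throws.ravel().std()                # ~ sqrt(35/12) = 1.708
mean_sd = throws.mean(axis=1).std()             # standard deviation of the mean
print(single_sd, mean_sd, single_sd / np.sqrt(n))   # last two nearly equal
```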

  3. Electron acceleration by an obliquely propagating electromagnetic wave in the regime of validity of the Fokker-Planck-Kolmogorov approach

    NASA Technical Reports Server (NTRS)

    Hizanidis, Kyriakos; Vlahos, L.; Polymilis, C.

    1989-01-01

    The relativistic motion of an ensemble of electrons in an intense monochromatic electromagnetic wave propagating obliquely in a uniform external magnetic field is studied. The problem is formulated from the viewpoint of Hamiltonian theory and the Fokker-Planck-Kolmogorov approach analyzed by Hizanidis (1989), leading to a one-dimensional diffusive acceleration along paths of constant zeroth-order generalized Hamiltonian. For values of the wave amplitude and the propagation angle inside the analytically predicted stochastic region, the numerical results suggest that the diffusion process proceeds in stages. In the first stage, the electrons are accelerated to relatively high energies by sampling the first few overlapping resonances one by one. During that stage, the ensemble-average square deviations of the variables involved scale quadratically with time. During the second stage, they scale linearly with time. For much longer times, deviation from linear scaling slowly sets in.

  4. Application of an improved spectral decomposition method to examine earthquake source scaling in Southern California

    NASA Astrophysics Data System (ADS)

    Trugman, Daniel T.; Shearer, Peter M.

    2017-04-01

    Earthquake source spectra contain fundamental information about the dynamics of earthquake rupture. However, the inherent tradeoffs in separating source and path effects, when combined with limitations in recorded signal bandwidth, make it challenging to obtain reliable source spectral estimates for large earthquake data sets. We present here a stable and statistically robust spectral decomposition method that iteratively partitions the observed waveform spectra into source, receiver, and path terms. Unlike previous methods of its kind, our new approach provides formal uncertainty estimates and does not assume self-similar scaling in earthquake source properties. Its computational efficiency allows us to examine large data sets (tens of thousands of earthquakes) that would be impractical to analyze using standard empirical Green's function-based approaches. We apply the spectral decomposition technique to P wave spectra from five areas of active contemporary seismicity in Southern California: the Yuha Desert, the San Jacinto Fault, and the Big Bear, Landers, and Hector Mine regions of the Mojave Desert. We show that the source spectra are generally consistent with an increase in median Brune-type stress drop with seismic moment but that this observed deviation from self-similar scaling is both model dependent and varies in strength from region to region. We also present evidence for significant variations in median stress drop and stress drop variability on regional and local length scales. These results both contribute to our current understanding of earthquake source physics and have practical implications for the next generation of ground motion prediction assessments.
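    The iterative partitioning can be sketched in simplified form: alternately estimate event (source) and station (receiver) terms from log-amplitude spectra, leaving out the travel-time-dependent path term and the formal uncertainty machinery of the actual method.

```python
import numpy as np

def decompose(log_spectra, n_iter=20):
    """log_spectra: (n_events, n_stations, n_freq) observed log spectra, with
    NaN for missing event-station pairs; returns per-event source terms and
    per-station receiver terms."""
    n_ev, n_sta, n_freq = log_spectra.shape
    source = np.zeros((n_ev, n_freq))
    receiver = np.zeros((n_sta, n_freq))
    for _ in range(n_iter):
        source = np.nanmean(log_spectra - receiver[None, :, :], axis=1)
        receiver = np.nanmean(log_spectra - source[:, None, :], axis=0)
    # resolve the additive trade-off: fold the network average into the sources
    offset = receiver.mean(axis=0)
    return source + offset, receiver - offset
```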

  5. Paraboloid magnetospheric magnetic field model and the status of the model as an ISO standard

    NASA Astrophysics Data System (ADS)

    Alexeev, I.

    A reliable representation of the magnetic field is crucial in the framework of radiation belt modelling, especially for disturbed conditions. The empirical model developed by Tsyganenko (T96) is constructed by minimizing the rms deviation from the large magnetospheric data base. The applicability of the T96 model is limited mainly to quiet conditions in the solar wind along the Earth orbit. But contrary to the internal planetary field, the external magnetospheric magnetic field sources are much more time-dependent. This is the reason why the method of paraboloid magnetospheric model construction is based on a more accurate and physically consistent approach, in which each source of the magnetic field has its own relaxation timescale and a driving function based on an individual best-fit combination of the solar wind and IMF parameters. Such an approach is based on a priori information about the structure of the global magnetospheric current systems. Each current system is included as a separate block (module) in the magnetospheric model. As shown by spacecraft magnetometer data, there are three current systems which are the main contributors to the external magnetospheric magnetic field: magnetopause currents, the ring current, and the tail current sheet. The paraboloid model is based on an analytical solution of the Laplace equation for each of these large-scale current systems in the magnetosphere with a …

  6. Adiabatic sweep pulses for earth's field NMR with a surface coil

    NASA Astrophysics Data System (ADS)

    Conradi, Mark S.; Altobelli, Stephen A.; Sowko, Nicholas J.; Conradi, Susan H.; Fukushima, Eiichi

    2018-03-01

    Adiabatic NMR sweep pulses are described for inversion and excitation in very low magnetic fields B0 and with broad distribution of excitation field amplitude B1. Two aspects distinguish the low field case: (1) when B1 is comparable to or greater than B0, the rotating field approximation fails and (2) inversion sweeps cannot extend to values well below the Larmor frequency because they would approach or pass through zero frequency. Three approaches to inversion are described. The first is a conventional tangent frequency sweep down to the Larmor frequency, a 180° phase shift, and a sweep back up to the starting frequency. The other two are combined frequency and amplitude sweeps covering a narrower frequency range; one is a symmetric sweep from above to below the Larmor frequency and the other uses a smooth decrease of B1 immediately before and after the 180° phase shift. These two AM/FM sweeps show excellent inversion efficiencies over a wide range of B1, a factor of 30 or more. We also demonstrate an excitation sweep that works well in the presence of the same wide range of B1. We show that the primary effect of the counter-rotating field (i.e., at low B0) is that the magnetization suffers large, periodic deviations from where it would be at large B0. Thus, successful sweep pulses must avoid any sharp features in the amplitude, phase, or frequency.

  7. A method for age-matched OCT angiography deviation mapping in the assessment of disease- related changes to the radial peripapillary capillaries.

    PubMed

    Pinhas, Alexander; Linderman, Rachel; Mo, Shelley; Krawitz, Brian D; Geyman, Lawrence S; Carroll, Joseph; Rosen, Richard B; Chui, Toco Y

    2018-01-01

    To present a method for age-matched deviation mapping in the assessment of disease-related changes to the radial peripapillary capillaries (RPCs). We reviewed 4.5x4.5mm en face peripapillary OCT-A scans of 133 healthy control eyes (133 subjects, mean 41.5 yrs, range 11-82 yrs) and 4 eyes with distinct retinal pathologies, obtained using spectral-domain optical coherence tomography angiography. Statistical analysis was performed to evaluate the impact of age on RPC perfusion densities. RPC density group mean and standard deviation maps were generated for each decade of life. Deviation maps were created for the diseased eyes based on these maps. Large peripapillary vessel (LPV; noncapillary vessel) perfusion density was also studied for impact of age. Average healthy RPC density was 42.5±1.47%. ANOVA and pairwise Tukey-Kramer tests showed that RPC density in the ≥60yr group was significantly lower compared to RPC density in all younger decades of life (p<0.01). Average healthy LPV density was 21.5±3.07%. Linear regression models indicated that LPV density decreased with age, however ANOVA and pairwise Tukey-Kramer tests did not reach statistical significance. Deviation mapping enabled us to quantitatively and visually elucidate the significance of RPC density changes in disease. It is important to consider changes that occur with aging when analyzing RPC and LPV density changes in disease. RPC density, coupled with age-matched deviation mapping techniques, represents a potentially clinically useful method in detecting changes to peripapillary perfusion in disease.
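    The deviation-mapping step itself reduces to a z-score against the normative maps for the matching decade of life; a minimal sketch with an illustrative two-standard-deviation cutoff:

```python
import numpy as np

def deviation_map(patient_density, normative_mean, normative_sd, z_cut=2.0):
    """All inputs are 2-D maps on the same grid: the patient's RPC perfusion
    density and the age-matched normative mean and SD maps. Returns the
    z-score map and a mask of locations deviating beyond z_cut SDs."""
    z = (patient_density - normative_mean) / normative_sd
    return z, np.abs(z) > z_cut
```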

  8. Does standard deviation matter? Using "standard deviation" to quantify security of multistage testing.

    PubMed

    Wang, Chun; Zheng, Yi; Chang, Hua-Hua

    2014-01-01

    With the advent of web-based technology, online testing is becoming a mainstream mode in large-scale educational assessments. Most online tests are administered continuously in a testing window, which may pose test security problems because examinees who take the test earlier may share information with those who take the test later. Researchers have proposed various statistical indices to assess the test security, and one most often used index is the average test-overlap rate, which was further generalized to the item pooling index (Chang & Zhang, 2002, 2003). These indices, however, are all defined as means (that is, the expected proportion of common items among examinees), and they were originally proposed for computerized adaptive testing (CAT). Recently, multistage testing (MST) has become a popular alternative to CAT. The unique features of MST make it important to report not only the mean, but also the standard deviation (SD) of the test overlap rate, as we advocate in this paper. The standard deviation of the test overlap rate adds important information to the test security profile, because for the same mean, a large SD reflects that certain groups of examinees share more common items than other groups. In this study, we analytically derived the lower bounds of the SD under MST, with the results under CAT as a benchmark. It is shown that when the mean overlap rate is the same between MST and CAT, the SD of test overlap tends to be larger in MST. A simulation study was conducted to provide empirical evidence. We also compared the security of MST under the single-pool versus the multiple-pool designs; both analytical and simulation studies show that the non-overlapping multiple-pool design will slightly increase the security risk.
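    The quantity at issue is easy to compute from simulated administrations. Given a binary examinee-by-item exposure matrix, the sketch below reports both the mean and the SD of the pairwise overlap rates:

```python
import numpy as np

def overlap_stats(exposure, test_length):
    """exposure: (n_examinees, n_items) 0/1 matrix; the overlap between a
    pair of examinees is (shared items) / (test length)."""
    shared = exposure @ exposure.T                  # pairwise shared-item counts
    iu = np.triu_indices(exposure.shape[0], k=1)    # distinct pairs only
    rates = shared[iu] / test_length
    return rates.mean(), rates.std()

# Two designs with the same mean overlap can differ greatly in SD if some
# examinee groups share whole modules, which is the MST risk described above.
```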

  9. Motor equivalence during multi-finger accurate force production

    PubMed Central

    Mattos, Daniela; Schöner, Gregor; Zatsiorsky, Vladimir M.; Latash, Mark L.

    2014-01-01

    We explored the stability of a multi-finger cyclical accurate force production action by analysis of responses to small perturbations applied to one of the fingers and by inter-cycle analysis of variance. Healthy subjects performed two versions of the cyclical task, with and without an explicit target. The “inverse piano” apparatus was used to lift/lower a finger by 1 cm over 0.5 s; the subjects were always instructed to perform the task as accurately as they could at all times. Deviations in the spaces of finger forces and modes (hypothetical commands to individual fingers) were quantified in directions that did not change the total force (motor equivalent) and in directions that changed the total force (non-motor equivalent). Motor equivalent deviations started immediately with the perturbation and increased progressively with time. After a sequence of lifting-lowering perturbations leading back to the initial conditions, motor equivalent deviations were dominating. These phenomena were less pronounced for the analysis performed with respect to the total moment of force about an axis parallel to the forearm/hand. Analysis of inter-cycle variance showed consistently higher variance in a subspace that did not change the total force as compared to the variance that affected total force. We interpret the results as reflections of task-specific stability of the redundant multi-finger system. Large motor equivalent deviations suggest that reactions of the neuromotor system to a perturbation involve large changes of neural commands that do not affect salient performance variables, even during actions with the purpose to correct those salient variables. Consistency of the analyses of motor equivalence and variance provides additional support for the idea of task-specific stability ensured at a neural level. PMID:25344311
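    For total force F = Σf over four fingers, the decomposition used here is a projection: the component of a force deviation along the uniform direction changes F (non-motor equivalent), and the orthogonal remainder leaves F unchanged (motor equivalent). A minimal sketch:

```python
import numpy as np

def decompose(deviation):
    """deviation: length-4 change in finger forces from the unperturbed cycle.
    Returns (motor_equivalent, non_motor_equivalent) components."""
    j = np.ones(4) / np.linalg.norm(np.ones(4))   # normalized gradient of F
    non_me = np.dot(deviation, j) * j             # changes total force
    me = deviation - non_me                       # leaves total force intact
    return me, non_me

me, non_me = decompose(np.array([0.5, -0.3, 0.1, -0.2]))
print(np.sum(me))   # ~0: the motor-equivalent component does not change F
```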

  10. Two large earthquakes in western Switzerland in the sixteenth century: 1524 in Ardon (VS) and 1584 in Aigle (VD)

    NASA Astrophysics Data System (ADS)

    Schwarz-Zanetti, Gabriela; Fäh, Donat; Gache, Sylvain; Kästli, Philipp; Loizeau, Jeanluc; Masciadri, Virgilio; Zenhäusern, Gregor

    2018-03-01

    The Valais is the most seismically active region of Switzerland. Strong damaging events occurred in 1755, 1855, and 1946. Based on historical documents, we discuss two known damaging events of the sixteenth century: the 1524 Ardon and the 1584 Aigle earthquakes. For the 1524 event, a document describes damage in Ardon, Plan-Conthey, and Savièse, and a stone tablet at the new bell tower of the Ardon church confirms the reconstruction of the bell tower after the earthquake. Additionally, significant construction activity in the Upper Valais churches during the second quarter of the sixteenth century is discussed, which however cannot be clearly related to this event. The assessed moment magnitude Mw of the 1524 event is 5.8, with an error of about 0.5 units corresponding to one standard deviation. The epicenter is at 46.27 N, 7.27 E, with a high uncertainty of about 50 km corresponding to one standard deviation. The assessed moment magnitude Mw of the 1584 main shock is 5.9, with an error of about 0.25 units corresponding to one standard deviation. The epicenter is at 46.33 N and 6.97 E, with an uncertainty of about 25 km corresponding to one standard deviation. Exceptional movements in Lake Geneva wreaked havoc along the shore of the Rhone delta. The large dimension of the induced damage can be explained by an extensive subaquatic slide with a resultant tsunami and seiche in Lake Geneva. The strongest of the aftershocks occurred on March 14 with magnitude 5.4 and triggered a destructive landslide covering the villages of Corbeyrier and Yvorne, VD.

  11. Middle school transition and body weight outcomes: Evidence from Arkansas Public Schoolchildren.

    PubMed

    Zeng, Di; Thomsen, Michael R; Nayga, Rodolfo M; Rouse, Heather L

    2016-05-01

    There is evidence that middle school transition adversely affects educational and psychological outcomes of pre-teen children, but little is known about the impacts of middle school transition on other aspects of health. In this article, we estimate the impact of middle school transition on the body mass index (BMI) of public schoolchildren in Arkansas, United States. Using an instrumental variable approach, we find that middle school transition in grade 6 led to a moderate decrease of 0.04 standard deviations in BMI z-scores for all students. Analysis by subsample indicated that this result was driven by boys (0.06-0.07 standard deviations) and especially by non-minority boys (0.09 standard deviations). We speculate that the changing levels of physical activities associated with middle school transition provide the most reasonable explanation for this result. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Deviation Value for Conventional X-ray in Hospitals in South Sulawesi Province from 2014 to 2016

    NASA Astrophysics Data System (ADS)

    Bachtiar, Ilham; Abdullah, Bualkar; Tahir, Dahlan

    2018-03-01

    This paper describes the conventional X-ray machine parameters tested in the region of South Sulawesi from 2014 to 2016. The objective of this research is to determine the deviation of each parameter of conventional X-ray machines. The testing parameters were analyzed using quantitative methods with a participatory observational approach. Data collection was performed by testing the output of conventional X-ray machines using a non-invasive X-ray multimeter. The test parameters include tube voltage (kV) accuracy, radiation output linearity, reproducibility, and radiation beam quality (half-value layer, HVL). The results of the analysis show that the four conventional X-ray test parameters have varying deviation spans: the tube voltage (kV) accuracy has an average value of 4.12%, the average radiation output linearity is 4.47%, the average reproducibility is 0.62%, and the average radiation beam quality (HVL) is 3.00 mm.

  13. MUSiC - Model-independent search for deviations from Standard Model predictions in CMS

    NASA Astrophysics Data System (ADS)

    Pieta, Holger

    2010-02-01

    We present an approach for a model-independent search in CMS. By systematically scanning the data for deviations from the standard model Monte Carlo expectations, such an analysis can help to understand the detector and tune event generators. By minimizing the theoretical bias, the analysis is furthermore sensitive to a wide range of models for new physics, including the countless models not yet thought of. After sorting the events into classes defined by their particle content (leptons, photons, jets, and missing transverse energy), a minimally prejudiced scan is performed on a number of distributions. Advanced statistical methods are used to determine the significance of the deviating regions, rigorously taking systematic uncertainties into account. A number of benchmark scenarios, including common models of new physics and possible detector effects, have been used to gauge the power of such a method.

  14. SU-F-J-29: Dosimetric Effect of Image Registration ROI Size and Focus in Automated CBCT Registration for Spine SBRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magnelli, A; Smith, A; Chao, S

    2016-06-15

    Purpose: Spinal stereotactic body radiotherapy (SBRT) involves highly conformal dose distributions and steep dose gradients due to the proximity of the spinal cord to the treatment volume. To achieve the planning goals while limiting the spinal cord dose, patients are set up using kV cone-beam CT (kV-CBCT) with six-degree-of-freedom corrections. The kV-CBCT registration with the reference CT depends on a user-selected region of interest (ROI). The objective of this work is to determine the dosimetric impact of ROI selection. Methods: Twenty patients were selected for this study. For each patient, the kV-CBCT was registered to the reference CT using three ROIs: 1) the external body, 2) a large anatomic region, and 3) a small region focused on the target volume. Following each registration, the aligned CBCTs and contours were input to the treatment planning system for dose evaluation. The minimum dose, the dose to 99% and 90% of the tumor volume (D99%, D90%), the dose to 0.03 cc of the spinal cord, and the spinal cord subvolume receiving 10 Gy (V10Gy) were compared to the planned values. Results: The average deviations in the tumor minimum dose were 2.68%±1.7%, 4.6%±4.0%, and 14.82%±9.9% for the small, large, and external ROIs, respectively. The average deviations in tumor D99% were 1.15%±0.7%, 3.18%±1.7%, and 10.0%±6.6%, respectively. The average deviations in tumor D90% were 1.00%±0.96%, 1.14%±1.05%, and 3.19%±4.77%, respectively. The average deviations in the maximum dose to the spinal cord were 2.80%±2.56%, 7.58%±8.28%, and 13.35%±13.14%, respectively. The average deviations in the spinal cord V10Gy were 1.69%±0.88%, 1.98%±2.79%, and 2.71%±5.63%. Conclusion: When using automated registration algorithms for CBCT-reference alignment, a small target-focused ROI results in the least dosimetric deviation from the plan. It is recommended to focus narrowly on the target volume to keep the spinal cord dose below tolerance.
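
    As an illustration of how point metrics such as D99% are read off a dose distribution (a sketch on assumed toy data, not the study's software):

        import numpy as np

        def dose_at_volume(doses, volume_pct):
            # D_x%: minimum dose received by the hottest x% of the volume,
            # i.e., the (100 - x)th percentile of the voxel doses.
            return np.percentile(doses, 100.0 - volume_pct)

        def percent_deviation(measured, planned):
            return 100.0 * abs(measured - planned) / planned

        # Toy voxel doses (Gy) for the plan and a CBCT-based recomputation.
        planned = np.random.default_rng(1).normal(16.0, 0.5, 10_000)
        recomputed = planned - 0.3  # e.g., the effect of a misaligned ROI
        print(percent_deviation(dose_at_volume(recomputed, 99),
                                dose_at_volume(planned, 99)))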

  15. Motion-robust intensity-modulated proton therapy for distal esophageal cancer.

    PubMed

    Yu, Jen; Zhang, Xiaodong; Liao, Li; Li, Heng; Zhu, Ronald; Park, Peter C; Sahoo, Narayan; Gillin, Michael; Li, Yupeng; Chang, Joe Y; Komaki, Ritsuko; Lin, Steven H

    2016-03-01

    This study aimed to develop methods for the evaluation and mitigation of the dosimetric impact of respiratory and diaphragmatic motion during free breathing in the treatment of distal esophageal cancers using intensity-modulated proton therapy (IMPT). This was a retrospective study of 11 patients with distal esophageal cancer. For each patient, four-dimensional computed tomography (4D CT) data were acquired, and a nominal dose was calculated on the average phase of the 4D CT. The changes in water equivalent thickness (ΔWET) required to cover the treatment volume from the peak of inspiration to the valley of expiration were calculated for a full range of beam angle rotation. Two IMPT plans were calculated: one at beam angles corresponding to small ΔWET and one at beam angles corresponding to large ΔWET. Four patients were selected for the calculation of 4D-robustness-optimized IMPT plans due to the large motion-induced dose errors generated in conventional IMPT. To quantitatively evaluate motion-induced dose deviation, the authors calculated the lowest dose received by 95% (D95) of the internal clinical target volume for the nominal dose, the D95 calculated on the maximum inhale and exhale phases of the 4D CT (DCT0 and DCT50), the 4D composite dose, and the 4D dynamic dose for a single fraction. The dose deviation increased with the average ΔWET of the implemented beams, ΔWETave. When ΔWETave was less than 5 mm, the dose error was less than 1 cobalt gray equivalent based on DCT0 and DCT50. The dose deviation determined on the basis of DCT0 and DCT50 was proportionally larger than that determined on the basis of the 4D composite dose. The 4D-robustness-optimized IMPT plans notably reduced the overall dose deviation over multiple fractions and the dose deviation caused by the interplay effect within a single fraction. In IMPT for distal esophageal cancer, ΔWET analysis can be used to select the beam angles that are least affected by respiratory and diaphragmatic motion. To further reduce dose deviation, 4D-robustness optimization can be implemented for IMPT planning. Calculation of DCT0 and DCT50 is a conservative method to estimate the motion-induced dose errors.
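
    A minimal sketch of the beam-angle selection step, assuming WET values per angle and spot have already been computed for the inhale and exhale phases (all names and numbers hypothetical):

        import numpy as np

        def select_beam_angles(wet_inhale, wet_exhale, n_beams=2):
            # wet_* have shape (n_angles, n_spots); pick the angles with the
            # smallest mean |ΔWET| between the two breathing phases.
            delta_wet = np.abs(wet_inhale - wet_exhale).mean(axis=1)
            return np.argsort(delta_wet)[:n_beams], delta_wet

        rng = np.random.default_rng(2)
        angles = np.arange(0, 360, 5)
        wet_in = rng.uniform(150, 250, (len(angles), 100))
        wet_ex = wet_in + rng.normal(0, 4, wet_in.shape)
        best, dwet = select_beam_angles(wet_in, wet_ex)
        print(angles[best], dwet[best])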

  16. Geometric Verification of Dynamic Wave Arc Delivery With the Vero System Using Orthogonal X-ray Fluoroscopic Imaging.

    PubMed

    Burghelea, Manuela; Verellen, Dirk; Poels, Kenneth; Gevaert, Thierry; Depuydt, Tom; Tournel, Koen; Hung, Cecilia; Simon, Viorica; Hiraoka, Masahiro; de Ridder, Mark

    2015-07-15

    The purpose of this study was to define an independent verification method, based on on-board orthogonal fluoroscopy, to determine the geometric accuracy of synchronized gantry-ring (G/R) rotations during dynamic wave arc (DWA) delivery available on the Vero system. A verification method for DWA was developed to calculate O-ring/gantry (G/R) positional information from ball-bearing positions retrieved from fluoroscopic images of a cubic phantom acquired during DWA delivery. Different noncoplanar trajectories were generated in order to investigate the influence of path complexity on delivery accuracy. The G/R positions detected from the fluoroscopy images (DetPositions) were benchmarked against the G/R angulations retrieved from the control points (CP) of the DWA RT plan and from the DWA log files recorded by the treatment console during DWA delivery (LogActed). The G/R rotational accuracy was quantified as the mean absolute deviation ± standard deviation. The maximum G/R absolute deviation was calculated as the maximum 3-dimensional distance between each CP and the closest DetPositions. In the CP versus DetPositions comparison, an overall mean G/R deviation of 0.13°/0.16° ± 0.16°/0.16° was obtained, with a maximum G/R deviation of 0.6°/0.2°. For the LogActed versus DetPositions evaluation, the overall mean deviation was 0.08°/0.15° ± 0.10°/0.10°, with a maximum G/R deviation of 0.3°/0.4°. The largest decoupled deviations registered for the gantry and ring were 0.6° and 0.4°, respectively. No directional dependence was observed between clockwise and counterclockwise rotations. Doubling the dose resulted in twice the number of detected points around each CP and a reduction of the angular deviation in all cases. An independent geometric quality assurance approach was developed for DWA delivery verification and was successfully applied to diverse trajectories. The results show that the Vero system is capable of following complex G/R trajectories, with maximum deviations during DWA below 0.6°. Copyright © 2015 Elsevier Inc. All rights reserved.
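
    The deviation metric described (closest detected position per planned control point) can be sketched as follows; this is an illustrative reimplementation on toy angles, not the study's analysis code:

        import numpy as np

        def gr_deviation(cp, detected):
            # cp, detected: arrays of shape (n, 2) with (gantry, ring) angles
            # in degrees. For each control point, find the distance to the
            # closest detected position; report the mean and the maximum.
            d = np.linalg.norm(cp[:, None, :] - detected[None, :, :], axis=2)
            nearest = d.min(axis=1)
            return nearest.mean(), nearest.max()

        cp = np.array([[10.0, 5.0], [20.0, 10.0], [30.0, 15.0]])
        detected = cp + np.random.default_rng(3).normal(0, 0.15, cp.shape)
        print(gr_deviation(cp, detected))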

  17. On Statistical Approaches for Demonstrating Analytical Similarity in the Presence of Correlation.

    PubMed

    Yang, Harry; Novick, Steven; Burdick, Richard K

    Analytical similarity is the foundation for the demonstration of biosimilarity between a proposed product and a reference product. For this assessment, the U.S. Food and Drug Administration (FDA) currently recommends a tiered system in which quality attributes are categorized into three tiers commensurate with their risk, and approaches of varying statistical rigor are subsequently used for the three tiers of quality attributes. Key to the analyses of Tier 1 and Tier 2 quality attributes is the establishment of an equivalence acceptance criterion and a quality range. For particular licensure applications, the FDA has provided advice on statistical methods for the demonstration of analytical similarity. For example, for Tier 1 assessment, an equivalence test can be used based on an equivalence margin of 1.5σ_R, where σ_R is the reference product variability estimated by the sample standard deviation S_R from a sample of reference lots. The quality range for demonstrating Tier 2 analytical similarity is of the form X̄_R ± K × σ_R, where the constant K is appropriately justified. To demonstrate Tier 2 analytical similarity, a large percentage (e.g., 90%) of the test product lots must fall in the quality range. In this paper, through both theoretical derivations and simulations, we show that when the reference drug product lots are correlated, the sample standard deviation S_R underestimates the true reference product variability σ_R. As a result, substituting S_R for σ_R in the Tier 1 equivalence acceptance criterion and the Tier 2 quality range inappropriately reduces the statistical power and the ability to declare analytical similarity. Also explored is the impact of correlation among drug product lots on the Type I error rate and power. Three methods based on generalized pivotal quantities are introduced, and their performance is compared against a two one-sided tests (TOST) approach. Finally, strategies to mitigate the risk of correlation among the reference product lots are discussed. A biosimilar is a generic version of the original biological drug product. A key component of biosimilar development is the demonstration of analytical similarity between the biosimilar and the reference product. Such demonstration relies on the application of statistical methods to establish a similarity margin and an appropriate test for equivalence between the two products. This paper discusses statistical issues with the demonstration of analytical similarity and provides alternative approaches to potentially mitigate these problems. © PDA, Inc. 2016.
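
    As a numerical illustration of the margin and quality range defined above (with S_R standing in for σ_R, which is exactly the substitution whose bias the paper analyzes; all lot values are toy data):

        import numpy as np

        def tier1_margin(reference_lots, k=1.5):
            # Tier 1 equivalence margin 1.5*sigma_R, with sigma_R estimated
            # by the sample standard deviation S_R.
            return k * np.std(reference_lots, ddof=1)

        def tier2_quality_range(reference_lots, K=3.0):
            # Quality range X̄_R ± K*S_R; K must be justified case by case.
            m, s = np.mean(reference_lots), np.std(reference_lots, ddof=1)
            return m - K * s, m + K * s

        ref = np.array([99.8, 101.2, 100.4, 98.9, 100.9, 99.5])
        test = np.array([100.1, 99.2, 101.0, 100.6])
        lo, hi = tier2_quality_range(ref)
        print(tier1_margin(ref), np.mean((test >= lo) & (test <= hi)))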

  18. Robust regression for large-scale neuroimaging studies.

    PubMed

    Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand

    2015-05-01

    Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g., >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms, using an imaging genetics study with 392 subjects as an example. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally, we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies. Copyright © 2015 Elsevier Inc. All rights reserved.
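
    A minimal sketch of robust regression in the spirit described, using statsmodels' M-estimation on toy data (a generic illustration, not the authors' pipeline):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(4)
        n = 500
        x = rng.normal(size=n)                      # e.g., a behavioral covariate
        y = 0.3 * x + rng.standard_t(df=3, size=n)  # heavy-tailed residuals
        X = sm.add_constant(x)

        ols = sm.OLS(y, X).fit()
        rlm = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()  # Huber M-estimator
        print("OLS slope:", ols.params[1], "robust slope:", rlm.params[1])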

  19. Optimization of hybrid power system composed of SMES and flywheel MG for large pulsed load

    NASA Astrophysics Data System (ADS)

    Niiyama, K.; Yagai, T.; Tsuda, M.; Hamajima, T.

    2008-09-01

    A superconducting magnetic energy storage (SMES) system has advantages such as rapid large-power response and high storage efficiency, which are superior to other energy storage systems. A flywheel motor generator (FWMG) has large-scale capacity and high reliability, and hence is broadly utilized for large pulsed loads, although it has comparatively low storage efficiency due to its high mechanical loss compared with SMES. A fusion power plant such as the International Thermonuclear Experimental Reactor (ITER) imposes a large, long pulsed load, which causes a frequency deviation in a utility power system. In order to keep the frequency within an allowable deviation, we propose a hybrid power system for the pulsed load, which combines the SMES and the FWMG with the utility power system. We evaluate the installation cost and frequency control performance of three power systems combined with energy storage devices: (i) SMES with the utility power, (ii) FWMG with the utility power, and (iii) both SMES and FWMG with the utility power. The first power system has excellent frequency control performance, but its installation cost is high. The second system has inferior frequency control performance, but its installation cost is the lowest. The third system has good frequency control performance, and its installation cost is kept lower than that of the first power system by adjusting the ratio between the SMES and the FWMG.

  20. Center of pressure based segment inertial parameters validation

    PubMed Central

    Rezzoug, Nasser; Gorce, Philippe; Isableu, Brice; Venture, Gentiane

    2017-01-01

    By proposing efficient methods for estimating Body Segment Inertial Parameters (BSIP) and validating them with a force plate, it is possible to improve the inverse dynamics computations that are necessary in multiple research areas. A variety of studies have been conducted to improve BSIP estimation, but to our knowledge a real validation has never been completely successful. In this paper, we propose a validation method using both kinematic and kinetic parameters (contact forces) gathered from an optical motion capture system and a force plate, respectively. To compare BSIPs, we used the measured contact forces (force plate) as the ground truth and reconstructed the displacements of the Center of Pressure (COP) using inverse dynamics from two different estimation techniques. Only minor differences were seen when comparing the estimated segment masses. Their influence on the COP computation, however, is large, and the results show clearly distinguishable patterns of COP movement. Improving BSIP estimation techniques is crucial, as deviations in the estimates can result in large errors. This method could be used as a tool to validate BSIP estimation techniques. An advantage of this approach is that it facilitates the comparison between BSIP estimation methods and, more specifically, shows the accuracy of those parameters. PMID:28662090
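
    The COP reconstruction from force plate data relies on standard expressions of this kind (signs and origins vary between plate models; this sketch assumes the plate origin lies on its surface):

        import numpy as np

        def center_of_pressure(forces, moments):
            # forces, moments: arrays of shape (n_samples, 3) as (x, y, z).
            fz = forces[:, 2]
            cop_x = -moments[:, 1] / fz
            cop_y = moments[:, 0] / fz
            return np.column_stack([cop_x, cop_y])

        def rms_cop_error(cop_measured, cop_model):
            # RMS distance between measured and model-reconstructed COP paths.
            return np.sqrt(np.mean(np.sum((cop_measured - cop_model) ** 2,
                                          axis=1)))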

  1. A unique approach to demonstrating that apical bud temperature specifically determines leaf initiation rate in the dicot Cucumis sativus.

    PubMed

    Savvides, Andreas; Dieleman, Janneke A; van Ieperen, Wim; Marcelis, Leo F M

    2016-04-01

    Leaf initiation rate is largely determined by the apical bud temperature, even when the apical bud temperature deviates strongly from the temperature of other plant organs. We have long known that the rate of leaf initiation (LIR) is highly sensitive to temperature, but previous studies in dicots have not rigorously demonstrated that apical bud temperature controls LIR independently of the temperature of other plant organs. Many models assume that apical bud and leaf temperature are the same. In some environments, the temperature of the apical bud, where leaf initiation occurs, may differ by several degrees Celsius from the temperature of other plant organs. In a 28-day study, we maintained temperature differences between the apical bud and the rest of the individual Cucumis sativus plants from -7 to +8 °C by enclosing the apical buds in transparent, temperature-controlled, flow-through spheres. Our results demonstrate that LIR was completely determined by apical bud temperature, independently of the temperature of other plant organs. These results emphasize the need to measure or model apical bud temperatures in dicots to improve the prediction of crop development rates in simulation models.

  2. Combinatorial approach toward high-throughput analysis of direct methanol fuel cells.

    PubMed

    Jiang, Rongzhong; Rong, Charles; Chu, Deryn

    2005-01-01

    A 40-member array of direct methanol fuel cells (with stationary fuel and convective air supplies) was generated by electrically connecting the fuel cells in series. High-throughput analysis of these fuel cells was realized by fast screening of the voltages between the two terminals of a fuel cell at constant-current discharge. A large number of voltage-current curves (200) were obtained by screening the voltages through multiple small-current steps. A Gaussian distribution was used to statistically analyze the large number of experimental data. The standard deviation (sigma) of the voltages of these fuel cells increased linearly with discharge current. The voltage-current curves at various fuel concentrations were simulated with an empirical equation of voltage versus current and a linear equation of sigma versus current. The simulated voltage-current curves fitted the experimental data well. With increasing methanol concentration from 0.5 to 4.0 M, the Tafel slope of the voltage-current curves (at sigma = 0.0) changed from 28 to 91 mV/decade, the cell resistance from 2.91 to 0.18 Ω, and the power output from 3 to 18 mW/cm².
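
    The statistical treatment described (voltages Gaussian at each current, with sigma growing linearly with current) can be mimicked with toy numbers as follows; the functional forms and constants here are illustrative only:

        import numpy as np

        rng = np.random.default_rng(5)
        current = np.linspace(0.01, 0.2, 50)            # assumed units: A/cm^2
        mean_v = 0.7 - 0.06 * np.log10(current / 0.01)  # Tafel-like mean curve
        sigma = 0.02 + 0.5 * current                    # sigma linear in current
        cells = mean_v + sigma * rng.normal(size=(40, current.size))
        print(cells.std(axis=0)[[0, -1]])               # spread grows with current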

  3. Performance evaluation of an importance sampling technique in a Jackson network

    NASA Astrophysics Data System (ADS)

    Mahdipour, Ebrahim; Rahmani, Amir Masoud; Setayeshi, Saeed

    2014-03-01

    Importance sampling is a technique that is commonly used to speed up Monte Carlo simulation of rare events. However, little is known regarding the design of efficient importance sampling algorithms in the context of queueing networks. The standard approach, which simulates the system using an a priori fixed change of measure suggested by large deviation analysis, has been shown to fail in even the simplest network settings. Estimating probabilities associated with rare events has been a topic of great importance in queueing theory, and in applied probability at large. In this article, we analyse the performance of an importance sampling estimator for a rare event probability in a Jackson network. We apply strict deadlines to a two-node Jackson network with feedback whose arrival and service rates are modulated by an exogenous finite-state Markov process. We have estimated the probability of network blocking for various sets of parameters, as well as the probability of customers missing their deadlines for different loads and deadlines. Finally, we show that the probability of total population overflow may be affected by the various deadline values, service rates, and arrival rates.
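
    The "a priori fixed change of measure suggested by large deviation analysis" mentioned above is, in its simplest form, an exponential tilt. A toy version for a sum of exponentials (illustrative only, not the Jackson-network estimator itself):

        import numpy as np

        def tail_prob_is(n=20, lam=1.0, a=35.0, theta=0.4, n_sim=100_000):
            # Estimate P(S_n > a), with S_n a sum of n Exp(lam) variables, by
            # sampling from the tilted density Exp(lam - theta) and
            # reweighting with the likelihood ratio (requires theta < lam).
            rng = np.random.default_rng(6)
            x = rng.exponential(1.0 / (lam - theta), size=(n_sim, n))
            s = x.sum(axis=1)
            lr = (lam / (lam - theta)) ** n * np.exp(-theta * s)
            return np.mean(lr * (s > a))

        print(tail_prob_is())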

  4. Highly Sensitive, Uniform, and Reproducible Surface-Enhanced Raman Spectroscopy Substrate with Nanometer-Scale Quasi-periodic Nanostructures.

    PubMed

    Jin, Yuanhao; Wang, Yingcheng; Chen, Mo; Xiao, Xiaoyang; Zhang, Tianfu; Wang, Jiaping; Jiang, Kaili; Fan, Shoushan; Li, Qunqing

    2017-09-20

    We introduce a simple and cost-effective approach for the fabrication of effective surface-enhanced Raman spectroscopy (SERS) substrates. It is shown that the as-fabricated substrates exhibit excellent SERS effects for various probe molecules with high sensitivity, that is, picomolar-level detection, and also good reliability. With a SERS enhancement factor beyond 10^8 and excellent reproducibility (deviation less than 5%) of signal intensity, the fabrication of the SERS substrate is realized on a four-inch wafer and proven to be effective in pesticide residue detection. The SERS substrate is realized first through the fabrication of quasi-periodic nanostructured silicon with dimension features of tens of nanometers, using superaligned carbon nanotube networks as an etching mask, after which a large number of hot spots with nanometer gaps are formed through the deposition of a gold film. With rigorous nanostructure design, the electromagnetic field distribution of the nanostructures is optimized for enhanced performance. With the advantage of cost-effective large-area preparation, it is believed that the as-fabricated SERS substrate could be used in a wide variety of practical applications where detection of trace amounts is necessary.

  5. Off-design Performance Analysis of Multi-Stage Transonic Axial Compressors

    NASA Astrophysics Data System (ADS)

    Du, W. H.; Wu, H.; Zhang, L.

    Because of the complex flow fields and component interactions in modern gas turbine engines, extensive experiments are required to validate their performance and stability. The experimental process can become expensive and complex. Modeling and simulation of gas turbine engines are a way to reduce experimental costs, provide fidelity, and enhance the quality of essential experiments. The flow field of a transonic compressor contains all of the flow aspects that are difficult to predict: boundary layer transition and separation, shock-boundary layer interactions, and large flow unsteadiness. Accurate off-design performance prediction for transonic axial compressors is especially difficult, due in large part to three-dimensional blade design and the resulting flow field. Although recent advancements in computer capacity have brought computational fluid dynamics to the forefront of turbomachinery design and analysis, the grid and turbulence model still limit Reynolds-averaged Navier-Stokes (RANS) approximations in the multi-stage transonic axial compressor flow field. Streamline curvature methods remain the dominant numerical approach and an important analysis and design tool for turbomachinery, and it is generally accepted that streamline curvature solution techniques will provide satisfactory flow prediction as long as the losses, deviation, and blockage are accurately predicted.

  6. Development of a coding form for approach control/pilot voice communications.

    DOT National Transportation Integrated Search

    1995-05-01

    The Aviation Topics Speech Acts Taxonomy (ATSAT) is a tool for categorizing pilot/controller communications according to their purpose and for classifying communication errors. Air traffic controller communications that deviate from FAA Air Traffic C...

  7. Effects of expected-value information and display format on recognition of aircraft subsystem abnormalities

    NASA Technical Reports Server (NTRS)

    Palmer, Michael T.; Abbott, Kathy H.

    1994-01-01

    This study identifies improved methods to present system parameter information for detecting abnormal conditions and to identify system status. Two workstation experiments were conducted. The first experiment determined if including expected-value-range information in traditional parameter display formats affected subject performance. The second experiment determined if using a nontraditional parameter display format, which presented relative deviation from expected value, was better than traditional formats with expected-value ranges included. The inclusion of expected-value-range information onto traditional parameter formats was found to have essentially no effect. However, subjective results indicated support for including this information. The nontraditional column deviation parameter display format resulted in significantly fewer errors compared with traditional formats with expected-value-ranges included. In addition, error rates for the column deviation parameter display format remained stable as the scenario complexity increased, whereas error rates for the traditional parameter display formats with expected-value ranges increased. Subjective results also indicated that the subjects preferred this new format and thought that their performance was better with it. The column deviation parameter display format is recommended for display applications that require rapid recognition of out-of-tolerance conditions, especially for a large number of parameters.

  8. Fluid-driven fracture propagation in heterogeneous media: Probability distributions of fracture trajectories

    NASA Astrophysics Data System (ADS)

    Santillán, David; Mosquera, Juan-Carlos; Cueto-Felgueroso, Luis

    2017-11-01

    Hydraulic fracture trajectories in rocks and other materials are highly affected by spatial heterogeneity in their mechanical properties. Understanding the complexity and structure of fluid-driven fractures and their deviation from the predictions of homogenized theories is a practical problem in engineering and geoscience. We conduct a Monte Carlo simulation study to characterize the influence of heterogeneous mechanical properties on the trajectories of hydraulic fractures propagating in elastic media. We generate a large number of random fields of mechanical properties and simulate pressure-driven fracture propagation using a phase-field model. We model the mechanical response of the material as that of an elastic isotropic material with heterogeneous Young modulus and Griffith energy release rate, assuming that fractures propagate in the toughness-dominated regime. Our study shows that the variance and the spatial covariance of the mechanical properties are controlling factors in the tortuousness of the fracture paths. We characterize the deviation of fracture paths from the homogeneous case statistically and conclude that the maximum deviation grows linearly with the distance from the injection point. Additionally, fracture path deviations seem to be normally distributed, suggesting that fracture propagation in the toughness-dominated regime may be described as a random walk.
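
    A toy random-walk model of the kind invoked in the conclusion can be simulated directly; this sketch only illustrates how deviation statistics might be gathered, not the phase-field computation itself:

        import numpy as np

        rng = np.random.default_rng(7)
        n_paths, n_steps = 2000, 500
        steps = rng.normal(0.0, 1.0, (n_paths, n_steps))
        paths = np.cumsum(steps, axis=1)   # lateral deviation vs. distance
        # Distribution of deviations at a fixed distance from the injection
        # point, and the average maximum deviation along the path.
        print(paths[:, 250].mean(), paths[:, 250].std())
        print(np.abs(paths).max(axis=1).mean())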

  9. Kinematic gait deficits at the trunk and pelvis: characteristic features in children with hereditary spastic paraplegia.

    PubMed

    Adair, Brooke; Rodda, Jillian; McGinley, Jennifer L; Graham, H Kerr; Morris, Meg E

    2016-08-01

    To examine the kinematic gait deviations at the trunk and pelvis of children with hereditary spastic paraplegia (HSP). This exploratory observational study quantified gait kinematics for the trunk and pelvis from 11 children with HSP (7 males, 4 females) using the Gait Profile Score and Gait Variable Scores (GVS), and compared the kinematics to data from children with typical development using a Mann-Whitney U test. Children with HSP (median age 11y 4mo, interquartile range 4y) demonstrated large deviations in the GVS for the trunk and pelvis in the sagittal and coronal planes when compared to the gait patterns of children with typical development (p=0.010-0.020). Specific deviations included increased range of movement for the trunk in the coronal plane and increased excursion of the trunk and pelvis in the sagittal plane. In the transverse plane, children with HSP demonstrated later peaks in posterior pelvic rotation. The kinematic gait deviations identified in this study raise questions about the contribution of muscle weakness in HSP. Further research is warranted to determine contributing factors for gait dysfunction in HSP, especially the relative influence of spasticity and weakness. © 2016 Mac Keith Press.

  10. Fluid-driven fracture propagation in heterogeneous media: Probability distributions of fracture trajectories.

    PubMed

    Santillán, David; Mosquera, Juan-Carlos; Cueto-Felgueroso, Luis

    2017-11-01

    Hydraulic fracture trajectories in rocks and other materials are highly affected by spatial heterogeneity in their mechanical properties. Understanding the complexity and structure of fluid-driven fractures and their deviation from the predictions of homogenized theories is a practical problem in engineering and geoscience. We conduct a Monte Carlo simulation study to characterize the influence of heterogeneous mechanical properties on the trajectories of hydraulic fractures propagating in elastic media. We generate a large number of random fields of mechanical properties and simulate pressure-driven fracture propagation using a phase-field model. We model the mechanical response of the material as that of an elastic isotropic material with heterogeneous Young modulus and Griffith energy release rate, assuming that fractures propagate in the toughness-dominated regime. Our study shows that the variance and the spatial covariance of the mechanical properties are controlling factors in the tortuousness of the fracture paths. We characterize the deviation of fracture paths from the homogeneous case statistically and conclude that the maximum deviation grows linearly with the distance from the injection point. Additionally, fracture path deviations seem to be normally distributed, suggesting that fracture propagation in the toughness-dominated regime may be described as a random walk.

  11. Psychometric analysis of the Generalized Anxiety Disorder scale (GAD-7) in primary care using modern item response theory.

    PubMed

    Jordan, Pascal; Shedden-Mora, Meike C; Löwe, Bernd

    2017-01-01

    The Generalized Anxiety Disorder scale (GAD-7) is one of the most frequently used diagnostic self-report scales for screening, diagnosis and severity assessment of anxiety disorder. Its psychometric properties from the view of the Item Response Theory paradigm have rarely been investigated. We aimed to close this gap by analyzing the GAD-7 within a large sample of primary care patients with respect to its psychometric properties and its implications for scoring using Item Response Theory. Robust, nonparametric statistics were used to check the unidimensionality of the GAD-7. A graded response model was fitted using a Bayesian approach. The model fit was evaluated using posterior predictive p-values, item information functions were derived, and optimal predictions of anxiety were calculated. The sample included N = 3404 primary care patients (60% female; mean age, 52.2 years; standard deviation, 19.2). The analysis indicated no deviations of the GAD-7 scale from unidimensionality and a decent fit of a graded response model. The commonly suggested ultra-brief measure consisting of the first two items, the GAD-2, was supported by item information analysis. The first four items discriminated better than the last three items with respect to latent anxiety. The information provided by the first four items should be weighted more heavily. Moreover, estimates corresponding to low to moderate levels of anxiety show greater variability. The psychometric validity of the GAD-2 was supported by our analysis.
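
    As a simplified illustration of item information (shown here for a two-parameter logistic item; the graded response model used in the paper generalizes this to ordered response categories, and the parameter values below are hypothetical):

        import numpy as np

        def item_information_2pl(theta, a, b):
            # Fisher information of a 2PL item with discrimination a and
            # difficulty b, evaluated at latent trait values theta.
            p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
            return a ** 2 * p * (1.0 - p)

        theta = np.linspace(-3, 3, 7)
        # A more discriminating item carries more information, as reported
        # for the first four GAD-7 items relative to the last three.
        print(item_information_2pl(theta, 2.0, 0.0).round(3))
        print(item_information_2pl(theta, 1.2, 0.5).round(3))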

  12. Psychometric analysis of the Generalized Anxiety Disorder scale (GAD-7) in primary care using modern item response theory

    PubMed Central

    Shedden-Mora, Meike C.; Löwe, Bernd

    2017-01-01

    Objective The Generalized Anxiety Disorder scale (GAD-7) is one of the most frequently used diagnostic self-report scales for screening, diagnosis and severity assessment of anxiety disorder. Its psychometric properties from the view of the Item Response Theory paradigm have rarely been investigated. We aimed to close this gap by analyzing the GAD-7 within a large sample of primary care patients with respect to its psychometric properties and its implications for scoring using Item Response Theory. Methods Robust, nonparametric statistics were used to check the unidimensionality of the GAD-7. A graded response model was fitted using a Bayesian approach. The model fit was evaluated using posterior predictive p-values, item information functions were derived, and optimal predictions of anxiety were calculated. Results The sample included N = 3404 primary care patients (60% female; mean age, 52.2 years; standard deviation, 19.2). The analysis indicated no deviations of the GAD-7 scale from unidimensionality and a decent fit of a graded response model. The commonly suggested ultra-brief measure consisting of the first two items, the GAD-2, was supported by item information analysis. The first four items discriminated better than the last three items with respect to latent anxiety. Conclusion The information provided by the first four items should be weighted more heavily. Moreover, estimates corresponding to low to moderate levels of anxiety show greater variability. The psychometric validity of the GAD-2 was supported by our analysis. PMID:28771530

  13. Embedded Vision Sensor Network for Planogram Maintenance in Retail Environments.

    PubMed

    Frontoni, Emanuele; Mancini, Adriano; Zingaretti, Primo

    2015-08-27

    A planogram is a detailed visual map that establishes the position of the products in a retail store. It is designed to supply the best location of a product for suppliers, to support an innovative merchandising approach, to increase sales and profits, and to better manage the shelves. Deviating from the planogram defeats the purpose of any of these goals, and maintaining the integrity of the planogram becomes a fundamental aspect of retail operations. We propose an embedded system, mainly based on a smart camera, able to detect and investigate the most important parameters in a retail store by identifying the differences with respect to an "approved" planogram. We propose a new solution that concentrates all the surveys and useful measures on a limited number of devices in communication with each other. These devices are simple, low cost, and ready for immediate installation, providing an affordable and scalable solution to the problem of planogram maintenance. Moreover, over an Internet of Things (IoT) cloud-based architecture, the system supplies much additional data not concerning the planogram, e.g., out-of-shelf events, promptly notified through SMS and/or mail. The application of this project allows the realization of highly integrated systems that are economical, complete, and easy to use for a large number of users. Experimental results have proven that the system can efficiently calculate the deviation from a normal situation by comparing the base planogram image with the grabbed images.
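
    A crude stand-in for the image comparison step (the embedded system is far more involved; this sketch only shows the idea of thresholding the difference against the approved planogram image):

        import numpy as np

        def planogram_deviation(base, current, threshold=30):
            # base, current: 2-D uint8 grayscale images of the same shelf.
            # Returns the fraction of pixels deviating beyond the threshold.
            diff = np.abs(base.astype(np.int16) - current.astype(np.int16))
            return float((diff > threshold).mean())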

  14. Embedded Vision Sensor Network for Planogram Maintenance in Retail Environments

    PubMed Central

    Frontoni, Emanuele; Mancini, Adriano; Zingaretti, Primo

    2015-01-01

    A planogram is a detailed visual map that establishes the position of the products in a retail store. It is designed to supply the best location of a product for suppliers, to support an innovative merchandising approach, to increase sales and profits, and to better manage the shelves. Deviating from the planogram defeats the purpose of any of these goals, and maintaining the integrity of the planogram becomes a fundamental aspect of retail operations. We propose an embedded system, mainly based on a smart camera, able to detect and investigate the most important parameters in a retail store by identifying the differences with respect to an “approved” planogram. We propose a new solution that concentrates all the surveys and useful measures on a limited number of devices in communication with each other. These devices are simple, low cost, and ready for immediate installation, providing an affordable and scalable solution to the problem of planogram maintenance. Moreover, over an Internet of Things (IoT) cloud-based architecture, the system supplies much additional data not concerning the planogram, e.g., out-of-shelf events, promptly notified through SMS and/or mail. The application of this project allows the realization of highly integrated systems that are economical, complete, and easy to use for a large number of users. Experimental results have proven that the system can efficiently calculate the deviation from a normal situation by comparing the base planogram image with the grabbed images. PMID:26343659

  15. Graphite Screen-Printed Electrodes Applied for the Accurate and Reagentless Sensing of pH.

    PubMed

    Galdino, Flávia E; Smith, Jamie P; Kwamou, Sophie I; Kampouris, Dimitrios K; Iniesta, Jesus; Smith, Graham C; Bonacin, Juliano A; Banks, Craig E

    2015-12-01

    A reagentless pH sensor based upon disposable and economical graphite screen-printed electrodes (GSPEs) is demonstrated for the first time. The voltammetric pH sensor utilizes GSPEs that are chemically pretreated to form surface-immobilized oxygenated species which, when their redox behavior is monitored, give a Nernstian response over a large pH range (1-13). An excellent experimental correlation is observed between the voltammetric potential and pH over the entire pH range of 1-13, providing a simple approach with which to monitor solution pH. Such a linear response over this dynamic pH range is not usually expected; rather, deviation from linearity is typically encountered at alkaline pH values, and its absence has previously been attributed to a change in the pKa value of surface-immobilized groups from that of solution-phase species. This non-deviation, which is observed here in the case of our facilely produced reagentless pH sensor and also reported in the literature for pH-sensitive compounds immobilized upon carbon electrodes/surfaces, where a linear response is observed over the entire pH range, is given an alternative explanation for the first time. The performance of the GSPE pH sensor is also directly compared with a glass pH probe and applied to the measurement of pH in "real" unbuffered samples, where an excellent correlation between the two protocols is observed, validating the proposed GSPE pH sensor.
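
    The Nernstian relationship underlying the sensor is a straight line of potential against pH; a sketch with a hypothetical calibration intercept and the ideal 25 °C slope:

        def ph_from_potential(e_mv, e0_mv=400.0, slope_mv_per_ph=-59.2):
            # e0_mv is a hypothetical calibration intercept; the slope is
            # the ideal Nernstian value at 25 °C.
            return (e_mv - e0_mv) / slope_mv_per_ph

        print(ph_from_potential(163.6))  # ≈ pH 4 for this toy calibration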

  16. [3D-imaging and analysis for plastic surgery by smartphone and tablet: an alternative to professional systems?].

    PubMed

    Koban, K C; Leitsch, S; Holzbach, T; Volkmer, E; Metz, P M; Giunta, R E

    2014-04-01

    A new approach of using photographs from smartphones for three-dimensional (3D) imaging was introduced alongside the standard high-quality 3D camera systems. In this work, we investigated different capture preferences and compared the accuracy of this 3D reconstruction method with manual tape measurement and an established commercial 3D camera system. The facial region of a plastic mannequin head was labelled with 21 landmarks. A 3D reference model was captured with the Vectra 3D Imaging System®. In addition, 3D imaging was executed with the Autodesk 123d Catch® application using sets of 16, 12, 9, 6, and 3 pictures from an Apple® iPhone 4s® and an iPad® 3rd generation. The accuracy of the 3D reconstruction was measured in two steps. First, 42 distance measurements from manual tape measurement and the two digital systems were compared. Second, the surface-to-surface deviation of different aesthetic units from the Vectra® reference model to the Catch®-generated models was analysed. For each 3D system, the capturing and processing time was measured. The measurements showed no significant (p>0.05) difference between manual tape measurement and the digital distances from either the Catch® application or Vectra®. The surface-to-surface deviation relative to the Vectra® reference model showed sufficient results for the 3D reconstruction of Catch® with the 16-, 12-, and 9-picture sets. Use of 6 and 3 pictures resulted in large deviations. Lateral aesthetic units showed higher deviations than central units. Catch® needed five times longer to capture and compute 3D models (on average 10 min vs. 2 min). The models computed by Autodesk 123d Catch® suggest good accuracy of the 3D reconstruction for a standard mannequin model, in comparison with manual tape measurement and the surface-to-surface analysis against a 3D reference model. However, the prolonged capture time with multiple pictures is prone to errors. Further studies are needed to investigate its application and quality in capturing volunteer models. Soon, mobile applications may offer an alternative for plastic surgeons to today's cost-intensive, stationary 3D camera systems. © Georg Thieme Verlag KG Stuttgart · New York.

  17. Analysis of measurement deviations for the patient-specific quality assurance using intensity-modulated spot-scanning particle beams

    NASA Astrophysics Data System (ADS)

    Li, Yongqiang; Hsi, Wen C.

    2017-04-01

    To analyze measurement deviations in patient-specific quality assurance (QA) for intensity-modulated spot-scanning particle beams, a commercial radiation dosimeter using 24 pinpoint ionization chambers was utilized. Before the clinical trial, validations of the radiation dosimeter and the treatment planning system were conducted. During the clinical trial, 165 measurements were performed on 36 enrolled patients. Two or three particle beam fields were used for each patient. Measurements were typically performed with the dosimeter placed at special regions of the dose distribution along depth and lateral profiles. In order to investigate the dosimeter accuracy, repeated measurements with uniform dose irradiations were also carried out. A two-step approach was proposed to analyze the 24 sampling points over a 3D treatment volume. The mean value and the standard deviation of each measurement did not exceed 5% for all measurements performed on patients with various diseases. According to the defined intervention thresholds on the mean deviation and the distance-to-agreement concept with a Gamma index analysis using criteria of 3.0% and 2 mm, a decision could be made regarding whether the dose distribution was acceptable for the patient. Based on the measurement results, deviation analysis was carried out. In this study, the dosimeter was used for dose verification and provided a safety guard to assure precise dose delivery of highly modulated particle therapy. Patient-specific QA will be investigated in future clinical operations.
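
    The Gamma index criterion of 3.0%/2 mm mentioned above combines a dose-difference term and a distance-to-agreement term; a minimal 1-D global version (illustrative only, not the clinical implementation):

        import numpy as np

        def gamma_index_1d(x_eval, d_eval, x_ref, d_ref, dd=0.03, dta=2.0):
            # dd: dose criterion as a fraction of the maximum reference dose;
            # dta: distance criterion in mm. A point passes if gamma <= 1.
            dd_abs = dd * d_ref.max()
            dist = (x_eval[:, None] - x_ref[None, :]) / dta
            dose = (d_eval[:, None] - d_ref[None, :]) / dd_abs
            return np.sqrt(dist ** 2 + dose ** 2).min(axis=1)

        x = np.linspace(0, 50, 201)                    # positions in mm
        ref = np.exp(-((x - 25) / 10) ** 2)            # toy dose profile
        meas = 1.02 * np.exp(-((x - 25.5) / 10) ** 2)  # shifted and scaled
        print((gamma_index_1d(x, meas, x, ref) <= 1).mean())  # pass rate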

  18. Odds per Adjusted Standard Deviation: Comparing Strengths of Associations for Risk Factors Measured on Different Scales and Across Diseases and Populations

    PubMed Central

    Hopper, John L.

    2015-01-01

    How can the "strengths" of risk factors, in the sense of how well they discriminate cases from controls, be compared when they are measured on different scales such as continuous, binary, and integer? Given that risk estimates take into account other fitted and design-related factors (and that is how risk gradients are interpreted), so should the presentation of risk gradients. Therefore, for each risk factor X0, I propose using appropriate regression techniques to derive from appropriate population data the best-fitting relationship between the mean of X0 and all the other covariates fitted in the model or adjusted for by design (X1, X2, … , Xn). The odds per adjusted standard deviation (OPERA) presents the risk association for X0 in terms of the change in risk per s = standard deviation of X0 adjusted for X1, X2, … , Xn, rather than the unadjusted standard deviation of X0 itself. If the increased risk is relative risk (RR)-fold over A adjusted standard deviations, then OPERA = exp[ln(RR)/A] = RR^(1/A). This unifying approach is illustrated by considering breast cancer and published risk estimates. OPERA estimates are by definition independent and can be used to compare the predictive strengths of risk factors across diseases and populations. PMID:26520360
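
    A worked example of the OPERA identity above: a factor conferring a 4-fold risk over A = 4 adjusted standard deviations has the same per-SD strength as one conferring a 2-fold risk over A = 2 adjusted standard deviations:

        import math

        def opera(rr, a):
            # Risk per one adjusted standard deviation, given an RR-fold
            # increase across A adjusted standard deviations.
            return math.exp(math.log(rr) / a)

        print(opera(4.0, 4.0), opera(2.0, 2.0))  # both ≈ 1.414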

  19. Design and Evaluation of a Dynamic Programming Flight Routing Algorithm Using the Convective Weather Avoidance Model

    NASA Technical Reports Server (NTRS)

    Ng, Hok K.; Grabbe, Shon; Mukherjee, Avijit

    2010-01-01

    The optimization of traffic flows in congested airspace with varying convective weather is a challenging problem. One approach is to generate shortest routes between origins and destinations while meeting airspace capacity constraints in the presence of uncertainties, such as weather and airspace demand. This study focuses on the development of an optimal flight path search algorithm that optimizes national airspace system throughput and efficiency in the presence of uncertainties. The algorithm is based on dynamic programming and utilizes the predicted probability that an aircraft will deviate around convective weather. It is shown that the running time of the algorithm increases linearly with the total number of links between all stages. The optimal routes minimize a combination of fuel cost and the expected cost of route deviation due to convective weather. They are considered as alternatives to the set of coded departure routes predefined by the FAA to reroute pre-departure flights around weather or air traffic constraints. A formula that calculates the predicted probability of deviation from a given flight path is also derived. The predicted probability of deviation is calculated for all path candidates, and the routes with the best probability are selected as optimal. The predicted probability of deviation serves as a computable measure of reliability in pre-departure rerouting. The algorithm can also be extended to automatically adjust its design parameters to satisfy the desired level of reliability.
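
    A minimal trellis version of such a dynamic program, with each link cost combining fuel and the expected cost of deviation (all structures and numbers hypothetical; the run time is linear in the total number of links, as stated above):

        import numpy as np

        def dp_min_cost_route(link_cost, p_deviate, penalty=1.0):
            # link_cost[s][i, j]: fuel cost from node i at stage s to node j
            # at stage s+1; p_deviate[s][i, j]: predicted probability of a
            # weather deviation on that link.
            cost_to_go = np.zeros(link_cost[-1].shape[1])
            policy = []
            for s in range(len(link_cost) - 1, -1, -1):
                total = link_cost[s] + penalty * p_deviate[s] + cost_to_go
                policy.append(total.argmin(axis=1))
                cost_to_go = total.min(axis=1)
            policy.reverse()
            return cost_to_go, policy  # optimal cost per start node + choices

        rng = np.random.default_rng(8)
        lc = [rng.uniform(1, 2, (3, 3)) for _ in range(4)]
        pd = [rng.uniform(0, 0.3, (3, 3)) for _ in range(4)]
        print(dp_min_cost_route(lc, pd)[0])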

  20. A program for automatically predicting supramolecular aggregates and its application to urea and porphin [A programme for the automated geometry prediction of supra-molecular aggregates and its application to the examples of urea and porphin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sachse, Torsten; Martinez, Todd J.; Dietzek, Benjamin

    Not only the molecular structure but also the presence or absence of aggregates determines many properties of organic materials. Theoretical investigation of such aggregates requires the prediction of a suitable set of diverse structures. Here, we present the open-source program EnergyScan for the unbiased prediction of geometrically diverse sets of small aggregates. Its bottom-up approach is complementary to existing ones by performing a detailed scan of an aggregate's potential energy surface, from which diverse local energy minima are selected. We cross-validate this approach by predicting both literature-known and heretofore unreported geometries of the urea dimer. We also predict a diverse set of dimers of the less intensely studied case of porphin, which we investigate further using quantum chemistry. For several dimers, we find strong deviations from a reference absorption spectrum, which we explain using computed transition densities. Furthermore, this proof of principle clearly shows that EnergyScan successfully predicts aggregates exhibiting large structural and spectral diversity.

  1. A program for automatically predicting supramolecular aggregates and its application to urea and porphin [A programme for the automated geometry prediction of supra-molecular aggregates and its application to the examples of urea and porphin

    DOE PAGES

    Sachse, Torsten; Martinez, Todd J.; Dietzek, Benjamin; ...

    2018-01-03

    Not only the molecular structure but also the presence or absence of aggregates determines many properties of organic materials. Theoretical investigation of such aggregates requires the prediction of a suitable set of diverse structures. Here, we present the open-source program EnergyScan for the unbiased prediction of geometrically diverse sets of small aggregates. Its bottom-up approach is complementary to existing ones by performing a detailed scan of an aggregate's potential energy surface, from which diverse local energy minima are selected. We cross-validate this approach by predicting both literature-known and heretofore unreported geometries of the urea dimer. We also predict a diverse set of dimers of the less intensely studied case of porphin, which we investigate further using quantum chemistry. For several dimers, we find strong deviations from a reference absorption spectrum, which we explain using computed transition densities. Furthermore, this proof of principle clearly shows that EnergyScan successfully predicts aggregates exhibiting large structural and spectral diversity.
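
    The bottom-up idea (scan, locate local minima, keep a diverse subset) can be caricatured in one dimension; the real program works on aggregate geometries, not a scalar coordinate, and the potential below is a toy function:

        import numpy as np
        from scipy.signal import argrelmin

        x = np.linspace(0, 2 * np.pi, 400)            # stand-in scan coordinate
        energy = np.sin(3 * x) + 0.3 * np.cos(7 * x)  # toy potential energy
        idx = argrelmin(energy)[0]                    # local minima on the scan
        diverse, min_sep = [], 0.5
        for i in idx[np.argsort(energy[idx])]:        # most stable minima first
            if all(abs(x[i] - d) > min_sep for d in diverse):
                diverse.append(x[i])
        print(np.round(diverse, 2))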

  2. Inherent structure versus geometric metric for state space discretization.

    PubMed

    Liu, Hanzhong; Li, Minghai; Fan, Jue; Huo, Shuanghong

    2016-05-30

    Inherent structure (IS) and geometry-based clustering methods are commonly used for analyzing molecular dynamics trajectories. ISs are obtained by minimizing the sampled conformations into local minima on the potential/effective energy surface. The conformations that are minimized into the same energy basin belong to one cluster. We investigate the influence of applying these two methods of trajectory decomposition on our understanding of the thermodynamics and kinetics of alanine tetrapeptide. We find that at the microcluster level, the IS approach and the root-mean-square deviation (RMSD)-based clustering method give totally different results. Depending on the local features of the energy landscape, conformations with close RMSDs can be minimized into different minima, while conformations with large RMSDs can be minimized into the same basin. However, the relaxation timescales calculated from the transition matrices built on the microclusters are similar. The discrepancy at the microcluster level leads to different macroclusters. Although the dynamic models established through both clustering methods are validated as approximately Markovian, the IS approach seems to give a more meaningful state space discretization at the macrocluster level in terms of conformational features and kinetics. © 2016 Wiley Periodicals, Inc.
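
    For reference, the geometric metric in question is the coordinate RMSD between conformations; a bare-bones version (no Kabsch superposition, so inputs are assumed pre-aligned):

        import numpy as np

        def rmsd(a, b):
            # a, b: arrays of shape (n_atoms, 3), assumed already aligned.
            return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))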

  3. Neural network approach to quantum-chemistry data: accurate prediction of density functional theory energies.

    PubMed

    Balabin, Roman M; Lomakina, Ekaterina I

    2009-08-21

    An artificial neural network (ANN) approach has been applied to estimate density functional theory (DFT) energies with a large basis set using lower-level energy values and molecular descriptors. A total of 208 different molecules were used for the ANN training, cross-validation, and testing, applying the BLYP, B3LYP, and BMK density functionals. Hartree-Fock results are reported for comparison. Furthermore, constitutional molecular descriptors (CD) and quantum-chemical molecular descriptors (QD) were used for building the calibration model. A neural network structure optimization, leading to four to five hidden neurons, was also carried out. The use of several low-level energy values was found to greatly reduce the prediction error. The expected error (mean absolute deviation) for the ANN approximation to DFT energies was 0.6 ± 0.2 kcal/mol. In addition, a comparison of the different density functionals with the basis sets and a comparison with multiple linear regression results are also provided. The CDs were found to overcome limitations of the QDs. Furthermore, an effective ANN model for DFT/6-311G(3df,3pd) and DFT/6-311G(2df,2pd) energy estimation was developed, and benchmark results were provided.
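
    A toy stand-in for the calibration model (low-level energy plus descriptors in, high-level energy out, with the 4-5 hidden neurons mentioned above), using scikit-learn rather than whatever framework the authors used, on synthetic data:

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(9)
        n = 208
        low = rng.normal(-100, 10, n)      # synthetic low-level energies
        desc = rng.normal(size=(n, 3))     # synthetic molecular descriptors
        high = low + 0.5 * desc[:, 0] + rng.normal(0, 0.3, n)

        X = np.column_stack([low, desc])
        X_tr, X_te, y_tr, y_te = train_test_split(X, high, random_state=0)
        ann = MLPRegressor(hidden_layer_sizes=(5,), max_iter=5000,
                           random_state=0).fit(X_tr, y_tr)
        print(np.mean(np.abs(ann.predict(X_te) - y_te)))  # mean abs. deviation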

  4. Epidemiology, clinical presentation and diagnosis of non-functioning pituitary adenomas.

    PubMed

    Ntali, Georgia; Wass, John A

    2018-04-01

    Non-functioning pituitary adenomas (NFPAs) are benign pituitary neoplasms that do not cause a hormonal hypersecretory syndrome. An improved understanding of their epidemiology, clinical presentation and diagnosis is needed. A literature review was performed using Pubmed to identify research reports and clinical case series on NFPAs. They account for 14-54% of pituitary adenomas and have a prevalence of 7-41.3/100,000 population. Their standardized incidence rate is 0.65-2.34/100,000, and the peak occurrence is from the fourth to the eighth decade. The clinical spectrum of NFPAs varies from completely asymptomatic to causing significant hypothalamic/pituitary dysfunction and visual field compromise due to their large size. Most patients present with symptoms of mass effect, such as headaches, visual field defects, ophthalmoplegia, and hypopituitarism, but also with hyperprolactinaemia due to pituitary stalk deviation and, less frequently, pituitary apoplexy. Non-functioning pituitary incidentalomas are found on brain imaging performed for an unrelated reason. The diagnostic approach includes magnetic resonance imaging of the sellar region, laboratory evaluations, screening for hormone hypersecretion and for hypopituitarism, and a visual field examination if the lesion abuts the optic nerves or chiasm. This article reviews the epidemiology, clinical behaviour and diagnostic approach of non-functioning pituitary adenomas.

  5. Shell model description of heavy nuclei and abnormal collective motions

    NASA Astrophysics Data System (ADS)

    Qi, Chong

    2018-05-01

    In this contribution I present systematic calculations of the spectroscopy and electromagnetic transition properties of intermediate-mass and heavy nuclei around 100Sn and 208Pb. We employed the large-scale configuration interaction shell model approach with realistic interactions. These regions contain the longest isotopic chains that can be studied by the nuclear shell model. I will show that the yrast spectra of Te isotopes show a vibrational-like, equally spaced pattern, whereas the few known E2 transitions show rotational-like behaviour. These kinds of abnormal collective behaviours cannot be reproduced by standard collective models and provide an excellent setting in which to study the competition between single-particle and various collective degrees of freedom. Moreover, the calculated B(E2) values for neutron-deficient and heavier Te isotopes show contrasting behaviours along the yrast line, which may be related to the enhanced neutron-proton correlation when approaching N=50. The deviations between theory and experiment concerning the energies and E2 transition properties of low-lying 0+ and 2+ excited states and isomeric states in those nuclei may provide a constraint on our understanding of nuclear deformation and intruder configurations in that region.

  6. Delayed presentation of congenital diaphragmatic hernia manifesting as combined-type acute gastric volvulus: a case report and review of the literature.

    PubMed

    Anaya-Ayala, Javier E; Naik-Mathuria, Bindi; Olutoye, Oluyinka O

    2008-03-01

    Acute gastric volvulus associated with congenital diaphragmatic hernia is an unusual surgical emergency. We describe the case of an 11-year-old girl who presented with a 4-day history of abdominal pain, nonproductive retching, cough, and shortness of breath. A chest radiograph revealed a large air-fluid level in the left hemithorax and the presence of intestinal loops with marked mediastinal deviation. Nasogastric decompression was unsuccessful. Via a thoracoscopic approach, the large fluid-filled stomach was percutaneously decompressed but could not be reduced. Through a left subcostal incision, a left-sided diaphragmatic defect of about 4 x 5 cm was encountered. A large portion of the small intestine, the ascending and transverse colon, the strangulated but viable stomach, and a large spleen had herniated through the defect. The contents were reduced, revealing a combined gastric volvulus. Once the diaphragmatic defect was repaired primarily, there was insufficient space in the abdominal cavity to contain all the viscera reduced from the chest. Therefore, we placed an AlloDerm patch on the fascia and closed with a wound V.A.C. (Kinetic Concepts Inc, San Antonio, TX). Two weeks later, the wound was definitively closed; she recovered uneventfully and was discharged home 3 days later. To our knowledge, only 26 previous cases of acute gastric volvulus complicating a congenital diaphragmatic hernia in children have been reported in the literature. Our patient represents the 27th case and the first with a combined-type acute gastric volvulus.

  7. Precision insertion of percutaneous sacroiliac screws using a novel augmented reality-based navigation system: a pilot study.

    PubMed

    Wang, Huixiang; Wang, Fang; Leong, Anthony Peng Yew; Xu, Lu; Chen, Xiaojun; Wang, Qiugen

    2016-09-01

    Augmented reality (AR) enables superimposition of virtual images onto the real world. The aim of this study is to present a novel AR-based navigation system for sacroiliac screw insertion and to evaluate its feasibility and accuracy in cadaveric experiments. Six cadavers with intact pelvises were employed in our study. They were CT scanned, and the pelvis and vessels were segmented into 3D models. The ideal trajectory of the sacroiliac screw was planned and represented visually as a cylinder. For the intervention, a head-mounted display created a real-time AR environment by superimposing the virtual 3D models onto the surgeon's field of view. The screws were drilled into the pelvis as guided by the trajectory represented by the cylinder. Following the intervention, a repeat CT scan was performed to evaluate the accuracy of the system by assessing the screw positions and the deviations between the planned trajectories and the inserted screws. Post-operative CT images showed that all 12 screws were correctly placed with no perforation. The mean deviation between the planned trajectories and the inserted screws was 2.7 ± 1.2 mm at the bony entry point and 3.7 ± 1.1 mm at the screw tip, and the mean angular deviation between the two trajectories was 2.9° ± 1.1°. The mean deviation at the nerve root tunnel region in the sagittal plane was 3.6 ± 1.0 mm. This study suggests an intuitive approach to guiding screw placement by way of AR-based navigation, which proved feasible and accurate. It may serve as a valuable tool for assisting percutaneous sacroiliac screw insertion in live surgery.
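
    The deviation metrics reported here (entry-point offset, tip offset, and angular deviation between planned and achieved trajectories) can be computed directly from CT-derived coordinates. The sketch below shows one way to do so; the coordinates are hypothetical and this is not the authors' evaluation code.

      import numpy as np

      def trajectory_deviations(plan_entry, plan_tip, act_entry, act_tip):
          """Entry/tip offsets (mm) and angle (deg) between two screw trajectories."""
          d_entry = np.linalg.norm(act_entry - plan_entry)
          d_tip = np.linalg.norm(act_tip - plan_tip)
          v1 = plan_tip - plan_entry          # planned screw axis
          v2 = act_tip - act_entry            # achieved screw axis
          cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
          angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
          return d_entry, d_tip, angle

      # hypothetical CT coordinates in mm
      planned = (np.array([0.0, 0.0, 0.0]), np.array([70.0, 0.0, 0.0]))
      actual = (np.array([2.1, 1.5, 0.4]), np.array([71.0, 3.2, 1.1]))
      print(trajectory_deviations(*planned, *actual))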

  8. Valuing fire planning alternatives in forest restoration: using derived demand to integrate economics with ecological restoration.

    PubMed

    Rideout, Douglas B; Ziesler, Pamela S; Kernohan, Nicole J

    2014-08-01

    Assessing the value of fire planning alternatives is challenging because fire affects a wide array of ecosystem, market, and social values. Wildland fire management is increasingly used to address forest restoration, yet pragmatic approaches to assessing the value of fire management have not been developed. Earlier approaches to assessing the value of forest management relied on connecting site valuation with management variables. While sound, such analysis is too narrow to account for a broad range of ecosystem services. The fire regime condition class (FRCC) metric was developed from ecosystem management philosophy, but it is entirely biophysical; its lack of economic information cripples its utility for supporting decision-making. We present a means of defining and assessing the deviation of a landscape from its desired fire management condition by re-framing the fire management problem as one of derived demand. This valued deviation establishes a performance metric for wildland fire management. Using a case study, we display the deviation across a landscape and sum the deviations to produce a summary metric, as sketched below. This summary metric is used to assess the value of alternative fire management strategies for moving the fire management condition toward its desired state. It enables us to identify which sites are most valuable to restore, even when they are in the same fire regime condition class. The case study site exemplifies how a wide range of disparate values, such as watershed, wildlife, property and timber, can be incorporated into a single landscape assessment. The analysis presented here leverages previous research on environmental capital value and non-market valuation by integrating ecosystem management, restoration, and microeconomics. Copyright © 2014 Elsevier Ltd. All rights reserved.
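
    Operationally, the derived-demand framing amounts to scoring each site's departure from its desired fire-management condition in value terms and summing over the landscape. The following is a minimal sketch under assumed data structures; the site names, condition scores, and value weights are hypothetical, not the authors' model.

      # Minimal sketch: a landscape of sites, each with a desired and a current
      # fire-management condition score and a per-site value weight (derived demand).
      sites = [
          {"name": "watershed", "desired": 1.0, "current": 0.4, "value": 3.0},
          {"name": "wildlife",  "desired": 1.0, "current": 0.7, "value": 2.0},
          {"name": "timber",    "desired": 1.0, "current": 0.9, "value": 1.0},
      ]

      def valued_deviation(site):
          # value-weighted departure from the desired condition
          return site["value"] * abs(site["desired"] - site["current"])

      summary = sum(valued_deviation(s) for s in sites)
      print(summary)  # lower means the landscape is closer to its desired condition

    Comparing this summary metric across management alternatives indicates which strategy moves the landscape most toward its desired state, and the per-site terms identify where restoration is most valuable.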

  9. Large Deviations for Stationary Probabilities of a Family of Continuous Time Markov Chains via Aubry-Mather Theory

    NASA Astrophysics Data System (ADS)

    Lopes, Artur O.; Neumann, Adriana

    2015-05-01

    In the present paper, we consider a family of continuous time symmetric random walks indexed by n. For each n, the corresponding random walk takes values in a finite set of states S_n, which is a subset of the unit circle S^1. The infinitesimal generator of such a chain is denoted by L_n. The stationary probability for this process converges to the uniform distribution on the circle as n grows. Here we want to study other natural measures, obtained via a limit in n, that are concentrated on some points of S^1. We disturb this process by a potential and study, for each n, the perturbed stationary measures of the new process in the same limit. More precisely, we consider a fixed potential V defined on S^1 and denote by V_n the restriction of V to S_n. Then, we define a non-stochastic semigroup generated by the matrix L_n + V_n, where L_n is the infinitesimal generator of the unperturbed chain. From the continuous time Perron's Theorem one can normalize this semigroup, obtaining another, stochastic, semigroup which generates a continuous time Markov chain taking values on S_n. This new chain is called the continuous time Gibbs state associated to the potential V; see (Lopes et al. in J Stat Phys 152:894-933, 2013). The stationary probability vector for this Markov chain is denoted by π_n. We assume that the maximum of V is attained at a unique point of S^1, from which it follows that π_n concentrates on that point in the limit. Thus, our main goal here is to analyze the large deviation principle for the family π_n as n grows. The deviation function I, which is defined on S^1, is obtained from a procedure based on fixed points of the Lax-Oleinik operator and Aubry-Mather theory. In order to obtain the associated Lax-Oleinik operator we use Varadhan's Lemma for the process. For a careful analysis of the problem we present full details of the proof of the Large Deviation Principle, in the Skorohod space, for this family of Markov chains. Finally, we compute the entropy of the invariant probabilities on the Skorohod space associated with the Markov chains we analyze.
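
    The Perron normalization described above can be illustrated numerically. The sketch below is my own toy construction, not the paper's exact setup: it assumes unit nearest-neighbour rates, takes the perturbed matrix as simply L_n + diag(V_n) (the paper's scaling may differ), and recovers the stationary law of the normalized (Doob h-transformed) chain from the left and right Perron eigenvectors.

      import numpy as np
      from scipy.linalg import eig

      def gibbs_stationary(n, V):
          # generator of the symmetric nearest-neighbour walk on n circle sites
          L = np.zeros((n, n))
          for i in range(n):
              L[i, (i + 1) % n] = 1.0
              L[i, (i - 1) % n] = 1.0
              L[i, i] = -2.0
          A = L + np.diag([V(i / n) for i in range(n)])   # perturbed matrix
          w, vl, vr = eig(A, left=True)
          k = np.argmax(w.real)            # Perron eigenvalue (the normalization)
          phi = vr[:, k].real              # right Perron eigenvector
          psi = vl[:, k].real              # left Perron eigenvector
          pi = psi * phi                   # stationary law of the h-transform chain
          return pi / pi.sum()

      V = lambda x: np.cos(2 * np.pi * x)   # potential with a unique maximum at x = 0
      for n in (50, 200, 800):
          pi = gibbs_stationary(n, V)
          print(n, pi.argmax() / n)          # mass concentrates near the maximiser of V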

  10. Common inputs in subthreshold membrane potential: The role of quiescent states in neuronal activity

    NASA Astrophysics Data System (ADS)

    Montangie, Lisandro; Montani, Fernando

    2018-06-01

    Experiments in certain regions of the cerebral cortex suggest that the spiking activity of neuronal populations is regulated by common non-Gaussian inputs across neurons. We model these deviations from random-walk processes by feeding q-Gaussian distributions into simple threshold neurons, and investigate the scaling properties in large neural populations. We show that deviations from Gaussian statistics provide a natural framework to regulate population statistics such as sparsity, entropy, and specific heat. This type of description allows us to provide an adequate strategy to explain information encoding in the case of low neuronal activity and its possible implications for information transmission.
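
    To make the modelling idea concrete, the sketch below draws a shared q-Gaussian input (generated via its Student-t representation, valid for 1 < q < 3 with ν = (3 − q)/(q − 1) degrees of freedom) plus independent noise, and feeds it to simple threshold neurons. All parameter values are illustrative assumptions, not figures from the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      def q_gaussian(q, size, rng):
          """Sample a standard q-Gaussian (1 < q < 3) via its Student-t
          representation: t-distribution with nu = (3 - q) / (q - 1) dof."""
          nu = (3.0 - q) / (q - 1.0)
          return rng.standard_t(nu, size=size)

      n_neurons, n_trials, theta = 100, 10000, 1.0   # threshold theta (illustrative)
      common = q_gaussian(q=1.5, size=n_trials, rng=rng)     # shared non-Gaussian input
      private = rng.normal(size=(n_trials, n_neurons))       # independent noise
      spikes = (0.5 * common[:, None] + private) > theta     # simple threshold neurons

      counts = spikes.sum(axis=1)              # population spike count per trial
      print("sparsity:", spikes.mean())        # fraction of active neurons
      print("count variance:", counts.var())   # inflated by the heavy-tailed common input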

  11. A study of core Thomson scattering measurements in ITER using a multi-laser approach

    NASA Astrophysics Data System (ADS)

    Kurskiev, G. S.; Sdvizhenskii, P. A.; Bassan, M.; Andrew, P.; Bazhenov, A. N.; Bukreev, I. M.; Chernakov, P. V.; Kochergin, M. M.; Kukushkin, A. B.; Kukushkin, A. S.; Mukhin, E. E.; Razdobarin, A. G.; Samsonov, D. S.; Semenov, V. V.; Tolstyakov, S. Yu.; Kajita, S.; Masyukevich, S. V.

    2015-05-01

    The electron component is the main channel for anomalous power loss and the main indicator of transient processes in the tokamak plasma. The electron temperature and density profiles largely determine the operational mode of the machine. This imposes demanding requirements on the precision and on the spatial and temporal resolution of the Thomson scattering (TS) measurements. Measuring such high electron temperatures with good accuracy in a large fusion device such as ITER using TS encounters a number of physical problems. First, the TS spectrum at 40 keV has a significant blue shift, while, due to the transmission functions of the fibres and to the darkening they can suffer under strong neutron irradiation, the operational wavelength range is bounded on the blue side. High temperature measurements therefore become impossible with the 1064 nm probing wavelength, since the TS signal within the boundaries of the operational window depends only weakly on Te. The second problem is connected with the TS calibration. The TS system for a large fusion machine like ITER will have a set of optical components inaccessible for maintenance, and their spectral characteristics may change with time. Since the present concept of the TS system for ITER relies on the classical approach of measuring the shape of the scattered spectra using wide spectral channels, the diagnostic will be very sensitive to changes in the optical transmission. The third complication is connected with the deviation of the electron velocity distribution function from a Maxwellian, which can occur under strong ECRH/ECCD and may additionally hamper the measurements. This paper analyses the advantages of implementing a ‘multi-laser approach’ in the current design of the core TS system. This approach assumes simultaneous plasma probing with different wavelengths, which allows the measurement accuracy to be improved significantly and the spectral calibration of the TS system to be performed. A comparative analysis of the conservative and advanced approaches is given.

  12. Refining mass formulas for astrophysical applications: A Bayesian neural network approach

    NASA Astrophysics Data System (ADS)

    Utama, R.; Piekarewicz, J.

    2017-10-01

    Background: Exotic nuclei, particularly those near the drip lines, are at the core of one of the fundamental questions driving nuclear structure and astrophysics today: What are the limits of nuclear binding? Exotic nuclei play a critical role in both informing theoretical models as well as in our understanding of the origin of the heavy elements. Purpose: Our aim is to refine existing mass models through the training of an artificial neural network that will mitigate the large model discrepancies far away from stability. Methods: The basic paradigm of our two-pronged approach is an existing mass model that captures as much as possible of the underlying physics followed by the implementation of a Bayesian neural network (BNN) refinement to account for the missing physics. Bayesian inference is employed to determine the parameters of the neural network so that model predictions may be accompanied by theoretical uncertainties. Results: Despite the undeniable quality of the mass models adopted in this work, we observe a significant improvement (of about 40%) after the BNN refinement is implemented. Indeed, in the specific case of the Duflo-Zuker mass formula, we find that the rms deviation relative to experiment is reduced from σrms=0.503 MeV to σrms=0.286 MeV. These newly refined mass tables are used to map the neutron drip lines (or rather "drip bands") and to study a few critical r -process nuclei. Conclusions: The BNN approach is highly successful in refining the predictions of existing mass models. In particular, the large discrepancy displayed by the original "bare" models in regions where experimental data are unavailable is considerably quenched after the BNN refinement. This lends credence to our approach and has motivated us to publish refined mass tables that we trust will be helpful for future astrophysical applications.
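
    The two-pronged "physics model plus learned residual" paradigm described above can be illustrated with a toy. In the sketch below, mass_formula is a crude liquid-drop stand-in (not the Duflo-Zuker formula), the "experimental" masses are synthetic, and an ensemble of small MLPs stands in for the Bayesian neural network, with the ensemble spread as a rough proxy for the posterior uncertainty that proper Bayesian inference provides.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      # Hypothetical stand-in for a physics-based mass model (crude liquid-drop
      # coefficients); the paper refines Duflo-Zuker and similar formulas.
      def mass_formula(Z, N):
          A = Z + N
          return 15.8 * A - 18.3 * A**0.66 - 0.71 * Z**2 / A**0.33 - 23.2 * (N - Z)**2 / A

      # Toy "experimental" data: the formula plus a smooth unknown correction.
      rng = np.random.default_rng(0)
      Z = rng.integers(20, 100, size=400)
      N = rng.integers(20, 150, size=400)
      truth = mass_formula(Z, N) + 5.0 * np.sin(Z / 9.0) * np.cos(N / 11.0)

      X = np.column_stack([Z, N])
      residual = truth - mass_formula(Z, N)   # the missing physics the net must learn

      # A small ensemble of MLPs; the member-to-member spread crudely mimics
      # the predictive uncertainty of a Bayesian neural network.
      ensemble = [MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                               random_state=k).fit(X, residual) for k in range(10)]

      preds = np.stack([m.predict(X) for m in ensemble])
      refined = mass_formula(Z, N) + preds.mean(axis=0)
      sigma = preds.std(axis=0)               # "error bar" on the refinement
      print("rms before:", np.sqrt(np.mean((truth - mass_formula(Z, N))**2)))
      print("rms after :", np.sqrt(np.mean((truth - refined)**2)))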

  13. Prediction of Rare Transitions in Planetary Atmosphere Dynamics Between Attractors with Different Number of Zonal Jets

    NASA Astrophysics Data System (ADS)

    Bouchet, F.; Laurie, J.; Zaboronski, O.

    2012-12-01

    We describe transitions between attractors with one, two or more zonal jets in models of turbulent atmosphere dynamics. Those transitions are extremely rare, occurring over time scales of centuries or millennia. They are extremely hard to observe in direct numerical simulations, because they require, on the one hand, very good resolution in order to simulate the turbulence accurately and, on the other hand, simulations performed over an extremely long time; these conditions are usually not met together in any realistic model. However, many examples of transitions between turbulent attractors in geophysical flows are known to exist (paths of the Kuroshio, Earth's magnetic field reversals, atmospheric flows, and so on), and their study through conventional numerical computation is inaccessible. We present an alternative approach, based on instanton theory and large deviations. Instanton theory provides a way to compute (both numerically and theoretically) extremely rare transitions between turbulent attractors. This tool, developed in field theory and justified in some cases through large deviation theory in mathematics, can be applied to models of turbulent atmosphere dynamics. It provides both new theoretical insights and new types of numerical algorithms. Those algorithms can predict transition histories and transition rates using numerical simulations run over only hundreds of typical model dynamical times, which is several orders of magnitude shorter than the typical transition time. We illustrate the power of these tools in the framework of quasi-geostrophic models. We show regimes where two or more attractors coexist; those attractors correspond to turbulent flows dominated by one or more zonal jets similar to midlatitude atmospheric jets. Among the trajectories connecting two non-equilibrium attractors, we determine the most probable ones. Moreover, we also determine the transition times, which are several orders of magnitude longer than a typical time determined from the jet structure. We discuss the medium-term generalization of those results to models with more complexity, like primitive equations or GCMs.

  14. TU-C-BRE-11: 3D EPID-Based in Vivo Dosimetry: A Major Step Forward Towards Optimal Quality and Safety in Radiation Oncology Practice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mijnheer, B; Mans, A; Olaciregui-Ruiz, I

    Purpose: To develop a 3D in vivo dosimetry method that is able to substitute for pre-treatment verification in an efficient way, and to terminate treatment delivery if the online measured 3D dose distribution deviates too much from the predicted dose distribution. Methods: A back-projection algorithm has been further developed and implemented to enable automatic 3D in vivo dose verification of IMRT/VMAT treatments using a-Si EPIDs. New software tools were clinically introduced to allow automated image acquisition, to periodically inspect the record-and-verify database, and to automatically run the EPID dosimetry software. The comparison of the EPID-reconstructed and planned dose distributions is done offline to automatically raise alerts and to schedule actions when deviations are detected. Furthermore, a software package for online dose reconstruction was also developed. The RMS of the difference between the cumulative planned and reconstructed 3D dose distributions was used to trigger a halt of the linac. Results: The implementation of fully automated 3D EPID-based in vivo dosimetry was able to replace pre-treatment verification for more than 90% of the patient treatments. The process has been fully automated and integrated in our clinical workflow, where over 3,500 IMRT/VMAT treatments are verified each year. By optimizing the dose reconstruction algorithm and the I/O performance, the delivered 3D dose distribution is verified in less than 200 ms per portal image, which includes the comparison between the reconstructed and planned dose distributions. In this way it was possible to generate a trigger that can stop the irradiation at less than 20 cGy after the introduction of large delivery errors. Conclusion: The automatic offline solution facilitated the large-scale clinical implementation of 3D EPID-based in vivo dose verification of IMRT/VMAT treatments; the online approach has been successfully tested for various severe delivery errors.
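
    The RMS-based halt criterion can be expressed in a few lines. The sketch below compares two cumulative 3D dose grids and raises a halt flag when the RMS difference exceeds a tolerance; the grid shapes and the tolerance value are illustrative assumptions, not the clinical parameters.

      import numpy as np

      def rms_difference(planned, reconstructed):
          """RMS of the voxel-wise difference between two 3D dose grids (Gy)."""
          diff = reconstructed - planned
          return float(np.sqrt(np.mean(diff**2)))

      def should_halt(planned_cum, reconstructed_cum, tol_gy=0.05):
          # tol_gy is an illustrative tolerance, not the clinical threshold
          return rms_difference(planned_cum, reconstructed_cum) > tol_gy

      # toy example: cumulative dose grids after some number of portal images
      planned = np.full((64, 64, 40), 1.00)                      # planned dose, Gy
      measured = planned + np.random.normal(0, 0.01, planned.shape)
      print(should_halt(planned, measured))                       # False: small deviations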

  15. Geochemical fingerprinting and source discrimination in soils at the continental scale

    NASA Astrophysics Data System (ADS)

    Negrel, Philippe; Sadeghi, Martiya; Ladenberger, Anna; Birke, Manfred; Reimann, Clemens

    2014-05-01

    Agricultural soil (Ap-horizon, 0-20 cm) samples were collected from a large part of Europe (33 countries, 5.6 million km2) at an average density of 1 sample site per 2500 km2. The resulting 2108 soil samples were air dried, sieved to <2 mm, milled and analysed for their major and trace element concentrations by wavelength dispersive X-ray fluorescence spectrometry (WD-XRF). The main goal of this study is to provide a continental-scale view of element mobility and source rocks, either by reference to crustal evolution or through normalized patterns of element mobility during weathering processes. The survey area includes several sedimentary basins with different geological histories, developed in different climate zones and landscapes and with different land use. In order to normalize the chemical composition of the soils, mean values and standard deviations of the selected elements were checked against values for the upper continental crust (UCC). Some elements turned out to be enriched relative to the UCC (Al, P, Zr, Pb), whereas others, like Mg, Na and Sr, were depleted with regard to the variation represented by the standard deviation. UCC-normalized patterns were then examined for the selected elements. The mean values of Rb, K, Y, Ti, Al, Si, Zr, Ce and Fe are very close to the UCC model, even if the standard deviations suggest slight enrichment or depletion; Zr shows the best fit with the UCC model using both the mean value and the standard deviation. Lead and Cr are enriched in European soils when compared to the UCC, but their standard deviations show very large variations, particularly towards very low values, which can be interpreted as a lithological effect. Element variability has been explored by looking at the variations of indicator elements. Soil data were converted into Al-normalized enrichment factors (see the sketch below), and Na was applied as normalizing element for studying provenance, taking into account the main lithologies of the UCC. This latter normalization highlighted variations related to the soluble and insoluble behaviour of some elements (K and Rb versus Ti, Al, Si, V, Y, Zr, Ba and La, respectively), their reactivity (Fe, Mn, Zn), and their association with carbonates (Ca and Sr) and with phosphates (P and Ce). The maps of normalized composition revealed some problems with the use of classical element ratios due to genetic differences in the composition of the parent material, reflected, for example, in large differences in titanium content in bedrock and soil throughout Europe.
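
    The Al-normalized enrichment factor used above follows the standard form EF = (X/Al)_sample / (X/Al)_UCC, with EF > 1 indicating enrichment relative to the crustal reference. The snippet below is a minimal sketch using approximate literature UCC values and one hypothetical sample; the reference numbers should be checked against a compilation such as Rudnick & Gao before real use.

      import numpy as np

      # Approximate UCC reference concentrations in wt% (illustrative values).
      UCC = {"Al": 8.15, "K": 2.32, "Ti": 0.38, "Pb": 0.0017}

      def enrichment_factor(sample, element, reference="Al", ucc=UCC):
          """Al-normalized enrichment factor: EF = (X/Al)_sample / (X/Al)_UCC."""
          return (sample[element] / sample[reference]) / (ucc[element] / ucc[reference])

      soil = {"Al": 7.0, "K": 1.8, "Ti": 0.45, "Pb": 0.0032}   # hypothetical sample
      for el in ("K", "Ti", "Pb"):
          print(el, round(enrichment_factor(soil, el), 2))      # >1 enriched, <1 depleted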

  16. Accuracy of computer-guided surgery for dental implant placement in fully edentulous patients: A systematic review

    PubMed Central

    Marlière, Daniel Amaral Alves; Demétrio, Maurício Silva; Picinini, Leonardo Santos; De Oliveira, Rodrigo Guerra; Chaves Netto, Henrique Duque De Miranda

    2018-01-01

    To assess clinical studies of the accuracy, i.e. the agreement between the virtual planning of computer-guided surgery and the actual outcomes of dental implant placement, in totally edentulous alveolar ridges. A PubMed search was performed to identify clinical studies published between 2011 and 2016, using the following combination of keywords: “Accuracy AND Computer-Assisted Surgery AND Dental Implants.” Study designs were identified using the terms: Case Reports, Clinical Study, Randomized Controlled Trial, Systematic Reviews, Meta-Analysis, humans. The level of agreement between the authors in the study selection process was substantial (k = 0.767), and agreement on study eligibility was excellent (k = 0.863). Seven articles were included in this review. They describe the use of bone- and mucosa-supported guides, demonstrating angular, cervical and apical deviations ranging (minimum to maximum means) from 1.85° to 8.4°, 0.17 mm to 2.17 mm, and 0.77 mm to 2.86 mm, respectively. Angular deviations were most inaccurate in the maxilla, and for cervical and apical deviations accuracy was likewise predominantly lower in the maxilla. Despite the similar deviation measurement approaches described, the clinical relevance of this study is to warn the surgeon that safety margins must be maintained in clinical situations. PMID:29657542

  17. Note Onset Deviations as Musical Piece Signatures

    PubMed Central

    Serrà, Joan; Özaslan, Tan Hakan; Arcos, Josep Lluis

    2013-01-01

    A competent interpretation of a musical composition presents several non-explicit departures from the written score. Timing variations are perhaps the most important ones: they are fundamental for expressive performance and a key ingredient for conferring a human-like quality to machine-based music renditions. However, the nature of such variations is still an open research question, with diverse theories that indicate a multi-dimensional phenomenon. In the present study, we consider event-shift timing variations and show that sequences of note onset deviations are robust and reliable predictors of the musical piece being played, irrespective of the performer. In fact, our results suggest that only a few consecutive onset deviations are already enough to identify a musical composition with statistically significant accuracy. We consider a mid-size collection of commercial recordings of classical guitar pieces and follow a quantitative approach based on the combination of standard statistical tools and machine learning techniques with the semi-automatic estimation of onset deviations. Besides the reported results, we believe that the considered materials and the methodology followed widen the testing ground for studying musical timing and could open new perspectives in related research fields. PMID:23935971
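
    To make the identification idea concrete, the sketch below matches a short onset-deviation sequence against per-piece signature sequences by nearest-window Euclidean distance. The data are synthetic and the matcher is a deliberate simplification of the statistical and machine-learning pipeline used in the study.

      import numpy as np

      # Toy corpus: per-piece "signature" sequences of note onset deviations
      # (seconds relative to the score grid). Purely synthetic for illustration.
      rng = np.random.default_rng(1)
      pieces = {name: rng.normal(0, 0.03, size=200) for name in ("A", "B", "C")}

      def identify(query, corpus):
          """Return the piece whose signature contains the window closest
          (in Euclidean distance) to the query deviation sequence."""
          best, best_d = None, np.inf
          m = len(query)
          for name, sig in corpus.items():
              windows = np.lib.stride_tricks.sliding_window_view(sig, m)
              d = np.min(np.linalg.norm(windows - query, axis=1))
              if d < best_d:
                  best, best_d = name, d
          return best

      # A new "performance" of piece B: a few of its deviations plus performer noise.
      q = pieces["B"][50:58] + rng.normal(0, 0.005, 8)
      print(identify(q, pieces))   # expected: "B"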

  18. Active shape models unleashed

    NASA Astrophysics Data System (ADS)

    Kirschner, Matthias; Wesarg, Stefan

    2011-03-01

    Active Shape Models (ASMs) are a popular family of segmentation algorithms which combine local appearance models for boundary detection with a statistical shape model (SSM). They are especially popular in medical imaging due to their ability to segment anatomical structures quickly and accurately, even in large and noisy 3D images. A well-known limitation of ASMs is that the shape constraints are over-restrictive, because the segmentations are bounded by the Principal Component Analysis (PCA) subspace learned from the training data. To overcome this limitation, we propose a new energy minimization approach which combines an external image energy with an internal shape model energy. Our shape energy uses the Distance From Feature Space (DFFS) concept to allow deviations from the PCA subspace in a theoretically sound and computationally fast way. In contrast to previous approaches, our model does not rely on post-processing with constrained free-form deformation or additional complex local energy models. In addition to the energy minimization approach, we propose a new method for liver detection, a new method for initializing an SSM, and an improved k-Nearest Neighbour (kNN) classifier for boundary detection. Our ASM is evaluated with leave-one-out tests on a data set of 34 tomographic CT scans of the liver and is compared to an ASM with standard shape constraints. The quantitative results of our experiments show that we achieve higher segmentation accuracy with our energy minimization approach than with standard shape constraints.
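
    The shape-energy idea — penalize, but do not forbid, departures from the PCA subspace — can be sketched as follows. The Mahalanobis-plus-DFFS form and the weight alpha are illustrative assumptions, not the paper's exact energy.

      import numpy as np

      def dffs(x, mu, P):
          """Squared distance of shape vector x from the PCA subspace (mean mu,
          orthonormal basis P)."""
          b = P.T @ (x - mu)               # coefficients in the subspace
          recon = mu + P @ b               # closest shape inside the subspace
          return float(np.sum((x - recon)**2))

      def shape_energy(x, mu, P, lam, alpha=1.0):
          # in-subspace Mahalanobis term plus the DFFS penalty for leaving it;
          # alpha is an illustrative weight, not a value from the paper
          b = P.T @ (x - mu)
          return float(np.sum(b**2 / lam)) + alpha * dffs(x, mu, P)

      rng = np.random.default_rng(0)
      mu = np.zeros(30)
      P, _ = np.linalg.qr(rng.normal(size=(30, 5)))   # orthonormal 5-mode basis
      lam = np.linspace(3.0, 0.5, 5)                  # per-mode variances
      x = mu + P @ rng.normal(size=5) + 0.05 * rng.normal(size=30)
      print(shape_energy(x, mu, P, lam))

    Because the DFFS term grows smoothly rather than clipping coefficients to the subspace, minimizing such an energy lets the segmentation leave the learned shape space when the image evidence demands it.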

  19. Simulator evaluation of manually flown curved instrument approaches. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Sager, D.

    1973-01-01

    Pilot performance in flying horizontally curved instrument approaches was analyzed by having nine test subjects fly curved approaches in a fixed-base simulator. Approaches were flown without an autopilot and without a flight director. Evaluations were based on deviation measurements made at a number of points along the curved approach path and on subject questionnaires. Results indicate that pilots can fly curved approaches, though less accurately than straight-in approaches; that a moderate wind does not affect curve-flying performance; and that there is no performance difference between 60 deg and 90 deg turns. A trade-off analysis of curved path parameters and a paper analysis of wind compensation were also performed.

  20. Experimental measurement of the orbital paths of particles sedimenting within a rotating viscous fluid as influenced by gravity

    NASA Technical Reports Server (NTRS)

    Wolf, David A.; Schwarz, Ray P.

    1992-01-01

    Measurements were taken of the path of a simulated typical tissue segment, or 'particle', within a rotating fluid as a function of gravitational strength, fluid rotation rate, particle sedimentation rate, and particle initial position. Parameters were examined within the useful range for tissue culture in the NASA rotating wall culture vessels. The particle moves along a nearly circular path through the fluid (as observed from the rotating reference frame of the fluid) at the same speed as its linear terminal sedimentation speed in the external gravitational field. This gravitationally induced motion causes an increasing deviation of the particle from its original position within the fluid for a decreased rotational rate, for a more rapidly sedimenting particle, and for an increased gravitational strength. Under low gravity conditions (less than 0.1 G), the particle's motion through the fluid and its deviation from its original position become negligible. Under unit gravity conditions, large distortions (greater than 0.25 inch) occur even for particles with slow sedimentation rates (less than 1.0 cm/sec). The particle's motion is nearly independent of its initial position. Comparison with mathematically predicted particle paths shows that a significant error in the predicted path occurs for large particle deviations; this results from a geometric approximation and numerically accumulating error in the mathematical technique.
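
    The circular relative motion described above follows from advection by solid-body rotation plus a constant settling velocity: in the fluid's rotating frame the particle traces a near-circle of radius v_s/ω, so the deviation shrinks with faster rotation and slower sedimentation. The sketch below is my own toy forward-Euler integration under illustrative parameters, not the paper's model.

      import numpy as np

      # Particle advected by solid-body rotation while sedimenting at speed v_s.
      omega = 2.0 * np.pi * 10 / 60      # vessel rotation, 10 rpm -> rad/s
      v_s = 0.1                          # sedimentation speed, cm/s
      dt, steps = 1e-3, 60000            # 60 s of simulated time

      x = np.array([1.0, 0.0])           # initial position in the lab frame, cm
      for _ in range(steps):
          u = omega * np.array([-x[1], x[0]])        # solid-body fluid velocity
          x = x + dt * (u + np.array([0.0, -v_s]))   # advection + settling (Euler)

      # In the rotating frame the path is a near-circle of radius v_s/omega, so the
      # particle's deviation from its fluid element stays bounded by ~2 * v_s/omega.
      print("expected orbit radius v_s/omega =", v_s / omega, "cm")
      print("final lab-frame position:", x)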
